Scripts and Steps for an Automated Hadoop Installation


I recently had to install Hadoop on a dozen or so machines. For tedious, repetitive work like this, typing the commands one by one is something no programmer can stand, so I set out to write a script that installs Hadoop automatically.

Task: run a script on any one of the dozen-plus machines, and Hadoop gets installed on all of them.

Preconditions: every machine has the same username and password, and every machine already has ssh configured and accepts remote logins.

The approach:

  1. Read the configuration file to get each node's IP address and desired hostname, then update the local Hadoop configuration files accordingly.

  2. Read the configuration file again and copy all the files to the installation path on each node (scp can copy files to a remote machine).

  3. Read the configuration file once more and ssh into each node to do some setup work, including configuring the Hadoop and JDK environment variables and generating an ssh key.

  4. ssh into the master node, collect the public key generated on every node (including the master itself), append them all to the master's own authorized_keys, and then distribute that authorized_keys file back to every node.

 

With that, Hadoop is fully configured.

  As an aside, a word about ssh-keygen. ssh-keygen is the ssh command for generating keys, and here its purpose is passwordless login. It generates a key pair: one public key and one private key. For example, machine A generates pubKeyA and priKeyA. If pubKeyA is then appended to machine B's authorized_keys, machine A (which holds priKeyA) can log in to machine B without a password.
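The mechanism can be tried locally. A minimal sketch, assuming ssh-keygen is available and using a throwaway directory in place of a real remote machine's ~/.ssh:

```shell
# Generate a key pair for "machine A" with an empty passphrase (-N "")
# into a scratch directory instead of the real ~/.ssh.
tmp=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$tmp/id_rsa" -q

# "Machine B" authorizes A by appending A's PUBLIC key to its
# authorized_keys; A keeps the private key and can now log in to B.
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys_of_B"

grep -c "ssh-rsa" "$tmp/authorized_keys_of_B"   # -> 1
```

On a real cluster the append step is what `ssh-copy-id` automates; the private key never leaves machine A.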

 

The main difficulties in the steps above were the following.

1. For step 1, the main difficulty was reading a configuration file from a shell script. Since I had never written shell before, the loop, if-statement, and string-handling syntax also cost me quite a bit of time.

# This loop reads the node information from the hosts configuration file.
# hosts has the following format:
# 192.168.1.100 master
# 192.168.1.101 slave1
# 192.168.1.102 slave2
# ...
while read line
do
    echo $line
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        echo $name
        if [[ $name == maste* ]]; then
            echo "$name" >> ../hadoop-1.2.1/conf/masters
        elif [[ $name == slave* ]]; then
            echo "$name" >> ../hadoop-1.2.1/conf/slaves
        fi
    fi
done < hosts
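The cut-based field splitting used above can be exercised on a single (hypothetical) sample line in isolation:

```shell
line="192.168.1.100 master"
# -d" " splits on spaces; -f1 and -f2 pick the first and second field
ip=$(echo "$line" | cut -d" " -f1)
name=$(echo "$line" | cut -d" " -f2)
echo "$ip"    # -> 192.168.1.100
echo "$name"  # -> master
```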

2. For step 2, the nodes could not yet ssh to one another without a password, so scp kept prompting for one, and automating that password entry turned out to be a thorny problem. After some searching I found a tool called expect.

 The expect tool has its own syntax and commands, much like bash. Its syntax is based on the scripting language TCL (which I had never heard of either); see man expect for the details. The main commands to know are spawn, expect, and exp_continue.

#!/usr/bin/expect
# expect defines functions like this:
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
if {$argc != 3} { usage }
# pick up the script's command-line arguments
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000
# have expect's spawn command run scp on our behalf
spawn scp -r ../../hadoop ${username}@${hostip}:~
# match the expected output: if it asks for a password, send it.
# Note: the opening brace after a pattern must NOT be moved to its own
# line -- write it Java style, with a space before it. I got this wrong
# at first, and the script behaved strangely without reporting an error.
expect {
    "*assword:" {
        # after sending the password, wait for the spawned command to end
        send "$password\n"
        expect eof
    }
    # if no password is needed, likewise just wait for the command to end
    eof
}

 

Steps 3 and 4 posed no real challenge and were finished quickly.

All of the code is below.

  1. setHadoopOnce.sh — the entry point of the whole process.

 1 #!/bin/bash
 2 #set the cluster password and login user here
 3 pw=123456
 4 loginName=hadoop
 5 master=master
 6 slave=slave
 7 slaveNum=1
 8 #set timeout 100000 (expect syntax; has no effect in bash)
 9 > ../hadoop-1.2.1/conf/masters
10 > ../hadoop-1.2.1/conf/slaves
11 #update local config files
12 while read line
13 do
14     echo $line
15     ip=`echo $line | cut -d" " -f1`
16     name=`echo $line | cut -d" " -f2`
17     if [ ! -z $ip ]; then
18         echo $name
19         if [[ $name == maste* ]]; then
20         echo "$name" >> ../hadoop-1.2.1/conf/masters
21         elif [[ $name == slave* ]]; then
22         echo "$name" >> ../hadoop-1.2.1/conf/slaves
23         fi
24     fi
25 done < hosts
26 #upload files to all nodes
27 while read line
28 do
29     ip=`echo $line | cut -d" " -f1`
30     name=`echo $line | cut -d" " -f2`
31     if [ ! -z $ip ]; then
32         expect copyDataToAll.exp $ip $loginName $pw
33         expect setForAll.exp $ip $loginName $pw
34     fi
35 done < hosts
36 
37 while read line
38 do
39     ip=`echo $line | cut -d" " -f1`
40     name=`echo $line | cut -d" " -f2`
41     if [ ! -z $ip ]; then
42         if [[ $name == maste* ]]; then
43             expect setForMaster.exp $ip $loginName $pw
44         fi
45     fi
46 done < hosts
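To run it, the hosts file needs to sit next to the scripts. A minimal setup sketch, with a directory layout inferred from the relative paths in the scripts (the layout and sample IPs are assumptions, not taken from the original post):

```shell
# Assumed layout:
#   hadoop/hadoop-1.2.1/   the unpacked Hadoop release
#   hadoop/setup/          these scripts plus the hosts file
mkdir -p hadoop/setup && cd hadoop/setup

# one "ip name" pair per line; names must begin with master/slave
# so the pattern matches in setHadoopOnce.sh can route them
cat > hosts <<'EOF'
192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2
EOF

# then, from this directory:
#   bash setHadoopOnce.sh
cut -d" " -f2 hosts | paste -sd" " -   # -> master slave1 slave2
```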

  2. copyDataToAll.exp — called at line 32 of setHadoopOnce.sh to copy the files to every node.

#!/usr/bin/expect
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
if {$argc != 3} { usage }
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000
spawn scp -r ../../hadoop ${username}@${hostip}:~
expect {
    "*assword:" {
        send "$password\n"
        expect eof
    }
    eof
}

  3. setForAll.exp — does the per-node setup work; called at line 33 of setHadoopOnce.sh.

#!/usr/bin/expect
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
proc connect {pwd} {
    expect {
        "*(yes/no)?" {
            send "yes\n"
            expect "*assword:" {
                send "$pwd\n"
                expect {
                    "*Last login:*" {
                        return 0
                    }
                }
            }
        }
        "*assword:" {
            send "$pwd\n"
            expect {
                "*Last login:*" {
                    return 0
                }
            }
        }
        "*Last login:*" {
            return 0
        }
    }
    return 1
}
if {$argc != 3} { usage }
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000

spawn ssh ${username}@${hostip}
if {[connect $password]} {
    exit 1
}
#set host
send "sudo bash ~/hadoop/setup/addHosts.sh\r"
expect "*assword*"
send "$password\r"
expect "*ddhostsucces*"
sleep 1

send "ssh-agent bash ~/hadoop/setup/sshGen.sh\n"
expect {
    "*(yes/no)?" {
        send "yes\n"
        exp_continue
    }
    "*verwrite (y/n)?" {
        send "n\n"
        exp_continue
    } 
    "*nter file in which to save the key*" {
        send "\n"
        exp_continue
    }
    "*nter passphrase*" {
        send "\n"
        exp_continue
    }
    "*nter same passphrase again*" {
        send "\n"
        exp_continue
    }
    "*our public key has been saved*" {
        exp_continue
    }
    "*etsshGenSucces*" {
        sleep 1
    }
}

send "bash ~/hadoop/setup/setEnvironment.sh\n"
expect "*etEnvironmentSucces*"
sleep 1

send "exit\n"
expect eof

  3.1 addHosts.sh — called from setForAll.exp to set up each node's /etc/hosts file.

#!/bin/bash

hadoopRoot=~/hadoop
hadoopPath=$hadoopRoot/hadoop-1.2.1
setupPath=$hadoopRoot/setup
localip="`ifconfig |head -n 2|tail -n1 |cut -f2 -d: |cut -f1 -d" " `"
hostline="`grep "$localip$" $hadoopRoot/setup/hosts`"
#drop this machine's own line from the list before appending it
sed -i "/$hostline/d" $hadoopRoot/setup/hosts
#cp /etc/hosts /etc/hosts.hadoop.bak
for delip in `cut -d" " -f1 $hadoopRoot/setup/hosts`
do
    #delete any stale /etc/hosts entry for this ip
    delipline="`grep -n "$delip[[:space:]]" /etc/hosts |cut -f1 -d:`"
    if [ -n "$delipline" ]; then
        sed -i "${delipline}d" /etc/hosts
        sleep 1s
    fi
done
cat $hadoopRoot/setup/hosts >> /etc/hosts
rm -f "$setupPath"/sed*
echo "addhostsuccess"
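The dedup idea in addHosts.sh — delete any stale /etc/hosts line for each cluster IP, then append the fresh list — can be sketched against temp files (stand-ins for /etc/hosts and setup/hosts; all names and IPs here are made up):

```shell
tmp=$(mktemp -d)
# stand-in for /etc/hosts, with one stale cluster entry
printf '127.0.0.1 localhost\n192.168.1.101 oldname\n' > "$tmp/etc_hosts"
# stand-in for setup/hosts
printf '192.168.1.100 master\n192.168.1.101 slave1\n' > "$tmp/hosts"

while read -r ip _; do
    # find and delete any existing line for this ip, as addHosts.sh
    # does with grep -n piped into cut, then sed
    num=$(grep -n "^$ip[[:space:]]" "$tmp/etc_hosts" | cut -f1 -d:)
    if [ -n "$num" ]; then sed -i "${num}d" "$tmp/etc_hosts"; fi
done < "$tmp/hosts"

cat "$tmp/hosts" >> "$tmp/etc_hosts"
grep -c "192.168.1.101" "$tmp/etc_hosts"   # -> 1 (only the fresh entry)
```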

  3.2 sshGen.sh — called from setForAll.exp to generate the ssh key.

#!/bin/bash
sshPath=~/.ssh
setupPath=~/hadoop/setup
rm -f "$sshPath"/authorized_keys
sleep 1
ssh-keygen -t rsa
cat "$sshPath"/id_rsa.pub >> "$sshPath"/authorized_keys
ssh-add
echo "setsshGenSuccess"

  3.3 setEnvironment.sh — called from setForAll.exp to set the environment variables.

#!/bin/bash
hadoopRoot=~/hadoop
hadoopPath=$hadoopRoot/hadoop-1.2.1
setupPath=$hadoopRoot/setup
JAVA_VERSION=`java -version 2>&1 | awk '/java version/ {print $3}'|sed 's/"//g'|awk '{if ($1>=1.6) print "ok"}'`

if [ "$JAVA_VERSION"x != "okx" ]; then
    cat "$setupPath"/jdkenv >> ~/.bashrc
    sleep 1
    source ~/.bashrc
    sleep 1
fi

Hadoop_Version=`hadoop version|awk '/Hadoop/ {print $2}'|awk '{if ($1>=1.0) print "ok"}'`

if [ "$Hadoop_Version"x != "okx" ]; then
    cat "$setupPath"/hadoopenv >> ~/.bashrc
    sleep 1
    source ~/.bashrc
    sleep 1
fi

echo "setEnvironmentSuccess"
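The version test in setEnvironment.sh can be checked against a canned string, no JVM required (the sample version string is an assumption for illustration):

```shell
# same pipeline as setEnvironment.sh, fed a fixed "java -version" line
ver='java version "1.7.0_45"'
ok=$(echo "$ver" | awk '/java version/ {print $3}' | sed 's/"//g' \
    | awk '{if ($1>=1.6) print "ok"}')
echo "$ok"   # -> ok
```

One caveat: since "1.7.0_45" is not a pure number, awk falls back to a string comparison here, which happens to work for 1.x versions below 1.10 but would misorder e.g. "1.10" against 1.6.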

4. setForMaster.exp — sshes into the master and runs setForMaster.sh remotely to set up passwordless login.

#!/usr/bin/expect
proc usage {} {
    puts stderr "usage: $::argv0 ip usrname password"
    exit 1
}
proc connect {pwd} {
    expect {
        "*(yes/no)?" {
            send "yes\n"
            expect "*assword:" {
                send "$pwd\n"
                expect {
                    "*Last login:*" {
                        return 0
                    }
                }
            }
        }
        "*assword:" {
            send "$pwd\n"
            expect {
                "*Last login:*" {
                    return 0
                }
            }
        }
        "*Last login:*" {
            return 0
        }
    }
    return 1
}

if {$argc != 3} { usage }
set hostip [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set timeout 100000
spawn ssh ${username}@${hostip}
if {[connect $password]} {
    exit 1
}

send "ssh-agent bash ~/hadoop/setup/setForMaster.sh\n"
expect {
    "*etForMasterSucces*" {
        sleep 1
        send "exit\n"
    }
    "*assword*" {
        send "$password\n"
        exp_continue
    }
    "*(yes/no)?" {
        send "yes\n"
        exp_continue
    }
}

  4.1 setForMaster.sh — pulls every slave's authorized_keys to the master, merges them, and pushes the merged file back.

#!/bin/bash
while read line
do
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        if [[ $name == slave* ]]; then
            scp $ip:~/.ssh/authorized_keys ~/tmpkey
            cat ~/tmpkey >> ~/.ssh/authorized_keys
        fi
    fi
done < ~/hadoop/setup/hosts

sleep 1

rm -f ~/tmpkey
while read line
do
    ip=`echo $line | cut -d" " -f1`
    name=`echo $line | cut -d" " -f2`
    if [ ! -z $ip ]; then
        if [[ $name == slave* ]]; then
            scp ~/.ssh/authorized_keys $ip:~/.ssh/authorized_keys
        fi
    fi
done < ~/hadoop/setup/hosts

echo "setForMasterSuccess"
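The scp round-trip in setForMaster.sh boils down to merge-then-redistribute. A local sketch, with directories standing in for the master and one slave (the key strings are fabricated placeholders):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/master" "$tmp/slave1"
echo "ssh-rsa AAAAmasterkey hadoop@master" > "$tmp/master/authorized_keys"
echo "ssh-rsa AAAAslave1key hadoop@slave1" > "$tmp/slave1/authorized_keys"

# pull the slave's key and append it to the master's file
# (stands in for: scp $ip:~/.ssh/authorized_keys ~/tmpkey; cat ~/tmpkey >> ...)
cat "$tmp/slave1/authorized_keys" >> "$tmp/master/authorized_keys"
# push the merged file back out (stands in for the second scp loop)
cp "$tmp/master/authorized_keys" "$tmp/slave1/authorized_keys"

wc -l < "$tmp/master/authorized_keys"   # -> 2
```

After this, every node holds every node's public key, so any node can ssh to any other without a password.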

 

Packaged installer download: http://pan.baidu.com/s/1dDj6LHJ


Category: hadoop

posted on 2014-01-11 15:39 by HackerVirus