1 Prepare 4 Linux PCs and make sure the machines can ping each other /*VMware Bridged networking*/
(1) Edit /etc/hosts on each machine as follows:
49.123.90.186 redhnamenode
49.123.90.181 redhdatanode1
49.123.90.182 redhdatanode2
49.123.90.184 redhdatanode3
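Once /etc/hosts is in place, verify connectivity from any node, e.g.:
ping -c 3 redhdatanode1 /*repeat for the other hosts*/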
(2) Stop the firewall /*requires root*/
service iptables stop
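On RHEL/CentOS with SysV init this only lasts until reboot; to keep iptables off permanently as well:
chkconfig iptables off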
(3) Create the same user, hadoop-user, on all machines
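As root, for example:
useradd hadoop-user
passwd hadoop-user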
(4) Install the JDK under /home
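For a tarball install /*the archive name below is an example; the extracted directory matches the JAVA_HOME set in section 3*/:
tar -xzf jdk-7u25-linux-x64.tar.gz -C /home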
2 SSH configuration /*as hadoop-user*/
(1) Create the .ssh directory on every redhdatanode
mkdir /home/hadoop-user/.ssh
(2) On redhnamenode, generate a key pair and copy the public key to each datanode
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop-user@49.123.90.181:/home/hadoop-user/.ssh/
scp ~/.ssh/authorized_keys hadoop-user@49.123.90.182:/home/hadoop-user/.ssh/
scp ~/.ssh/authorized_keys hadoop-user@49.123.90.184:/home/hadoop-user/.ssh/
(3) Fix the permissions of .ssh and authorized_keys on all machines
chmod 700 /home/hadoop-user/.ssh
chmod 600 /home/hadoop-user/.ssh/authorized_keys
(4) Test passwordless SSH from redhnamenode to each redhdatanode
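For example, from redhnamenode /*the first connection prompts once to accept the host key, but should never ask for a password*/:
ssh redhdatanode1 hostname
ssh redhdatanode2 hostname
ssh redhdatanode3 hostname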
3 Deploy Hadoop /*as hadoop-user*/
Configure on redhnamenode first:
(1) Extract hadoop-1.1.2.tar.gz into /home/hadoop-user, then set JAVA_HOME in conf/hadoop-env.sh:
JAVA_HOME=/home/jdk1.7.0_25
export JAVA_HOME
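For the extraction step above, a typical command is:
tar -xzf hadoop-1.1.2.tar.gz -C /home/hadoop-user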
(2) Edit core-site.xml, hdfs-site.xml and mapred-site.xml /*see Hadoop in Action (《Hadoop实战》); minimal examples below*/
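A minimal sketch of the three files, assuming HDFS on port 9000 and the JobTracker on port 9001 (conventional Hadoop 1.x choices; adjust to your environment):
core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://redhnamenode:9000</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>redhnamenode:9001</value>
  </property>
</configuration>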
(3) In conf/masters, remove localhost and add redhnamenode.
In conf/slaves, remove localhost and add redhdatanode1, redhdatanode2 and redhdatanode3 (resulting files shown below).
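After editing, the two files contain exactly:
conf/masters:
redhnamenode
conf/slaves:
redhdatanode1
redhdatanode2
redhdatanode3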
(4) Copy the configured installation to every datanode:
scp -r hadoop-1.1.2 hadoop-user@redhdatanode1:/home/hadoop-user
scp -r hadoop-1.1.2 hadoop-user@redhdatanode2:/home/hadoop-user
scp -r hadoop-1.1.2 hadoop-user@redhdatanode3:/home/hadoop-user
4 Run and test /*on redhnamenode*/
cd /home/hadoop-user/hadoop-1.1.2/bin
./hadoop namenode -format /*format HDFS; only needed once, before the first start*/
./start-all.sh /*start the HDFS and MapReduce daemons on all nodes*/
jps /*list the running Java daemons*/
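If startup succeeded, jps on redhnamenode typically shows NameNode, SecondaryNameNode and JobTracker, while jps on each redhdatanode shows DataNode and TaskTracker.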
./hadoop dfsadmin -report /*show cluster capacity and live datanodes*/
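As an optional smoke test, write a file into HDFS and list it back:
./hadoop fs -mkdir /test
./hadoop fs -put /etc/hosts /test
./hadoop fs -ls /test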