How to Deploy a Hadoop Cluster
Assume we have three servers, with roles assigned as follows:
10.96.21.120 master
10.96.21.121 slave1
10.96.21.119 slave2
Next we deploy the Hadoop cluster according to this layout.
1: Install the JDK
Download and unpack it, then add it to the environment.
vi /etc/profile
JAVA_HOME=/usr/java/jdk1.6.0_29
CLASS_PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar:$CLASS_PATH
PATH=$JAVA_HOME/bin:$PATH
if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then
    INPUTRC=/etc/inputrc
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC
export CLASS_PATH JAVA_HOME
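For the new variables to take effect in the current shell (a small step not spelled out above), reload the profile:
source /etc/profile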
Check whether the installation succeeded.
java -version
javac
2: Install SSH
Command:
yum -y install openssh-server openssh-clients
Enable and start the sshd service
chkconfig sshd on
service sshd start
Open the port in the firewall
/sbin/iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
service iptables save
You can also restrict port 22 to accept connections only from a specific IP range:
/sbin/iptables -A INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT
service iptables save
The configuration file is /etc/ssh/sshd_config
English reference: http://www.cyberciti.biz/faq/how-to-installing-and-using-ssh-client-server-in-linux/
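Note that the later steps connect over SSH on port 60022. If your sshd still listens on the default port 22, a minimal sketch of switching it (an assumption to match those steps; adjust to your own setup):
vi /etc/ssh/sshd_config   (set: Port 60022)
service sshd restart
/sbin/iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 60022 -j ACCEPT
service iptables save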
3: Configure hosts (on all three machines)
vi /etc/hosts
10.96.21.120 master
10.96.21.121 slave1
10.96.21.119 slave2
4: Create the hadoop account and set up passwordless login to localhost (on all three machines)
Create the user
useradd hadoop
Set an empty password
passwd -d hadoop
Switch to the account
su - hadoop
Generate a public/private key pair
ssh-keygen -t rsa
Go to the folder containing the public key
cd ~/.ssh
Append the public key to the list of authorized keys
If authorized_keys already exists:
cat id_rsa.pub >> authorized_keys
Otherwise:
cp id_rsa.pub authorized_keys
Test passwordless login to localhost
ssh -p 22 localhost
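If ssh still prompts for a password, the most common cause is overly permissive directory or file permissions; a sketch of the usual fix (run as the hadoop user):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys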
5: Set up passwordless login from master to the slaves
Go to the hadoop user's .ssh folder (on master)
su - hadoop
cd ~/.ssh
Copy the public key to slave1 and slave2 (on master)
scp -P 60022 id_rsa.pub root@10.96.21.121:/home/hadoop/.ssh/10.96.21.120
scp -P 60022 id_rsa.pub root@10.96.21.119:/home/hadoop/.ssh/10.96.21.120
Append master's public key to the authorized keys on slave1 and slave2 (on slave1 and slave2)
su - hadoop
cd ~/.ssh
cat 10.96.21.120 >> authorized_keys
Start the SSH agent (on master)
eval `ssh-agent`
Add id_rsa to ssh-agent (load the private key into the agent) (on master)
ssh-add id_rsa
Verify
ssh -p 60022 slave1
ssh -p 60022 slave2
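As a variation on the check above, running a single remote command should print the slave's hostname and return without asking for a password:
ssh -p 60022 slave1 hostname
ssh -p 60022 slave2 hostname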
6: Configure Hadoop
Configure hadoop-env.sh (on all three machines)
vi hadoop-env.sh
export JAVA_HOME=/soft/jdk1.7.0_21
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_SSH_OPTS="-p 60022"
Configure core-site.xml (on all three machines)
vi core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system. A URI whose scheme and authority
  determine the FileSystem implementation. The uri's scheme determines the config
  property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's
  authority is used to determine the host, port, etc. for a filesystem.</description>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop</value>
</property>
Configure hdfs-site.xml (on all three machines)
vi hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication. The actual number of replications can be
  specified when the file is created. The default is used if replication is not
  specified in create time.</description>
</property>
Configure mapred-site.xml (on all three machines)
vi mapred-site.xml
<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs at. If "local",
  then jobs are run in-process as a single map and reduce task.</description>
</property>
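Note that in each of these three files the <property> blocks must sit inside the file's <configuration> element. For illustration, a minimal core-site.xml (descriptions omitted) would look like:
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop</value>
  </property>
</configuration>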
Configure masters (on master)
vi masters
master
Configure slaves (on master)
vi slaves
slave1
slave2
Set permissions on the directories Hadoop uses
mkdir /app/hadoop
chmod 777 /app/hadoop
chmod 777 /soft/hadoop   (the Hadoop installation directory; logs are written there by default)
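A tighter alternative to chmod 777, assuming every daemon runs as the hadoop user, is to hand ownership to that user instead:
chown -R hadoop:hadoop /app/hadoop /soft/hadoop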
7: Start the cluster
Format the NameNode (on master)
bin/hadoop namenode -format
Start HDFS (on master)
bin/start-dfs.sh
Start MapReduce (on master)
bin/start-mapred.sh
Verify with jps
On master:
26463 Jps
24660 NameNode
25417 JobTracker
24842 SecondaryNameNode
On the slaves:
23823 TaskTracker
4636 DataNode
23964 Jps
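For reference, the matching shutdown scripts are run on master the same way as the start scripts:
bin/stop-mapred.sh
bin/stop-dfs.sh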
Commands used when running MapReduce jobs
bin/hadoop fs -rmr output [delete a folder]
bin/hadoop fs -mkdir input [create a folder]
bin/hadoop fs -put /soft/hadoop/file.txt input
bin/hadoop fs -get /user/hadoop/output/part-r-00000
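As a concrete end-to-end check, the wordcount example that ships with Hadoop can be run against the input folder populated above (the exact jar name depends on your Hadoop version; hadoop-examples-1.0.4.jar is assumed here):
bin/hadoop jar hadoop-examples-1.0.4.jar wordcount input output
bin/hadoop fs -cat output/part-r-00000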
If a job depends on third-party libraries (for example, for writing results to Redis), the jars must be placed in the lib folder on every server.
Make sure the hostname of each machine matches the names used in /etc/hosts.
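To check the current hostname, and to persist it on a CentOS/RHEL-style system (an assumption based on the yum/chkconfig commands used above):
hostname
vi /etc/sysconfig/network   (set: HOSTNAME=master, slave1 or slave2)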
SSH only needs to work from master to the slaves; the slaves do not need to reach each other.
Turn off the firewall (or open the required ports as shown above).
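On a CentOS/RHEL-style system (same assumption as above), the firewall can be turned off like this:
service iptables stop
chkconfig iptables off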
Error: name node is in safe mode
Fix: bin/hadoop dfsadmin -safemode leave
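To check the current safe-mode state before forcing it off (same dfsadmin tool):
bin/hadoop dfsadmin -safemode get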
The HDFS web UI is at http://10.96.21.120:50070/dfshealth.jsp