Hadoop Cluster Setup and MapReduce Development in Practice (Part 1)
Hadoop cluster configuration steps:
Part 1: Environment Preparation
Full reference: http://www.aboutyun.com/forum.php?mod=viewthread&tid=7684&highlight=hadoop2.2%2B%2B%B8%DF%BF%C9%BF%BF
Hardware:
Three CentOS machines.
Edit /etc/hosts on each machine as follows:
127.0.0.1 localhost
192.168.0.50 m50 sp.m50.com
192.168.0.51 s51 sp.s51.com
192.168.0.52 s52 sp.s52.com
s51 is the master node; m50 and s52 are the slave nodes.
1. Download Hadoop from: http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
2. Install the JDK and configure JAVA_HOME and the related environment variables;
3. Configure passwordless SSH login
Reference: http://blog.csdn.net/atco/article/details/44566723
The rough steps are as follows:
3.1 On the master (s51), run:
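A minimal sketch, assuming the root account on all three machines and that ssh-copy-id is available:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa   # generate an RSA key pair with an empty passphrase
ssh-copy-id root@m50                       # append the public key to each node's authorized_keys
ssh-copy-id root@s51                       # include the master itself so local logins also work
ssh-copy-id root@s52
Afterwards, ssh root@m50 (and likewise for s51 and s52) from the master should log in without a password prompt.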
Part 2: Install Hadoop
Extract the installation archive to the /opt/ directory:
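For example, assuming hadoop-2.7.3.tar.gz was downloaded to the current directory:
sudo tar -zxvf hadoop-2.7.3.tar.gz -C /opt/   # unpacks into /opt/hadoop-2.7.3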
The configuration files listed below (all under /opt/hadoop-2.7.3/etc/hadoop) need to be modified. Any of them that does not exist by default can be created by copying the corresponding .template file, as shown next.
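For example, mapred-site.xml is not shipped by default and can be created from its template:
cd /opt/hadoop-2.7.3/etc/hadoop
cp mapred-site.xml.template mapred-site.xml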
Configuration file 4: core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://s51:8020</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.aboutyun.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.aboutyun.groups</name>
    <value>*</value>
  </property>
</configuration>
Configuration file 5: hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>s51:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
Configuration file 6: mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>s51:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>s51:19888</value>
  </property>
</configuration>
Configuration file 7: yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>s51:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>s51:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>s51:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>s51:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>s51:8088</value>
  </property>
</configuration>
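One more file in the same directory matters for startup: the slaves file, which start-all.sh reads to decide where to launch the DataNodes and NodeManagers. With the topology from Part 1 (m50 and s52 as slaves), it would contain:
m50
s52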
With the configuration above done, we are roughly 90% finished; all that remains is copying. We can copy the whole Hadoop tree to the slaves with the following commands:
Copy the Hadoop installation directory:
sudo scp -r /opt/hadoop-2.7.3 root@s52:~/
sudo scp -r /opt/hadoop-2.7.3 root@m50:~/
Copy the Hadoop data directory (the tmp/dfs paths configured above):
sudo scp -r /opt/hadoop root@s52:~/
sudo scp -r /opt/hadoop root@m50:~/
Copy them to the slaves' root home directory first, then move them to /opt on each slave, as shown below.
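One way to do the second hop from the master, assuming the passwordless root SSH set up earlier:
ssh root@s52 'mv ~/hadoop-2.7.3 ~/hadoop /opt/'   # move both directories into /opt on s52
ssh root@m50 'mv ~/hadoop-2.7.3 ~/hadoop /opt/'   # same on m50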
Configure the environment variables (on every node, e.g. in /etc/profile):
export HADOOP_HOME=/opt/hadoop-2.7.3
export HADOOP_PREFIX=/opt/hadoop-2.7.3
export JAVA_HOME=/usr/lib/jvm/jre-openjdk
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_PREFIX/bin
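To apply them to the current shell without logging out (assuming they were added to /etc/profile):
source /etc/profile
echo $HADOOP_HOME   # should print /opt/hadoop-2.7.3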
Part 3: Start Hadoop
3.1 Format the NameNode (run once, on the master s51):
hdfs namenode -format
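If formatting succeeds, the output should contain a line similar to the following (the path comes from dfs.namenode.name.dir above):
INFO common.Storage: Storage directory /opt/hadoop/dfs/name has been successfully formatted.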
3.2 Start the cluster. The startup scripts live in the sbin directory:
cd /opt/hadoop-2.7.3/sbin
./start-all.sh
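To verify that everything came up, run jps on each node; with the configuration above, the expected daemons are roughly:
jps   # on s51: NameNode, SecondaryNameNode, ResourceManager
jps   # on m50 and s52: DataNode, NodeManager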
At this point the Hadoop cluster is fully configured!
The YARN web UI is now reachable at http://s51:8088/ (the yarn.resourcemanager.webapp.address configured above).
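As a final sanity check that both DataNodes registered with the NameNode (assuming $HADOOP_HOME/bin is on the PATH):
hdfs dfsadmin -report   # should report 2 live datanodes (m50 and s52)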