Hadoop2

Environment for this article:

OS:CentOS 6.6

JDK:1.7.0_79

Hadoop:2.7.0

User:xavier

[Note]

Open the specific ports Hadoop needs in the firewall:

Edit /etc/sysconfig/iptables:

#Xavier Setting for Hadoop2
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8020 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8045 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8046 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8047 -j ACCEPT
#8088: YARN ResourceManager web UI, visited in the steps below
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8088 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8480 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8481 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8485 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8788 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 10020 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 10033 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 19888 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50020 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50030 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50060 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50070 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50075 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50090 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50091 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50100 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50105 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50470 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50475 -j ACCEPT
#Xavier Setting End

 

service iptables restart
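
To confirm the rules are active after the restart, a quick spot check against one of the ports above (50070 is the HDFS NameNode web UI):

iptables -L INPUT -n | grep 50070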

[/Note]

 

I. Pseudo-Distributed Hadoop Configuration

[Note]

Hadoop lives under /home/xavier/

Create the tmp, dfs/name, and dfs/data directories under the Hadoop directory (see the commands after this note)

[/Note]
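
A minimal sketch of creating those directories, matching the paths used in the configs below:

mkdir -p /home/xavier/Hadoop2/tmp
mkdir -p /home/xavier/Hadoop2/dfs/name
mkdir -p /home/xavier/Hadoop2/dfs/data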

 

1. Set the Hadoop environment variables (e.g. in ~/.bashrc):

#Set Hadoop Environment
export HADOOP_HOME="/home/xavier/Hadoop2"
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
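
Assuming the lines above were added to ~/.bashrc, reload the profile and confirm the Hadoop binaries are on the PATH:

source ~/.bashrc
hadoop version    #should report Hadoop 2.7.0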

2. Edit etc/hadoop/hadoop-env.sh:

#Set Java Environment
export JAVA_HOME="/usr/java/jdk1.7.0_79"

3. Edit etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///home/xavier/Hadoop2/tmp</value>
    </property>
</configuration>

4. Edit etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/xavier/Hadoop2/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/xavier/Hadoop2/dfs/data</value>
    </property>
</configuration>

5. In etc/hadoop/, copy the MapReduce template: cp mapred-site.xml.template mapred-site.xml

6. Edit etc/hadoop/mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

7. Edit etc/hadoop/yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

8. Format the NameNode:

./hdfs namenode -format
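
On success, the format output should contain a storage line similar to the following (exact wording varies slightly across versions):

INFO common.Storage: Storage directory /home/xavier/Hadoop2/dfs/name has been successfully formatted.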

9. Start DFS and YARN:

./start-dfs.sh

./start-yarn.sh
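
If startup succeeded, jps (shipped with the JDK) should list all five Hadoop daemons:

jps
#Expected output (PIDs will differ):
#NameNode
#DataNode
#SecondaryNameNode
#ResourceManager
#NodeManager
#Jps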

10. Open the web UIs in a browser (give the daemons a minute to come up):

http://localhost:8088/

http://localhost:50070/

If both pages load and you can see the Hadoop elephant, the setup should be working!

 

II. Fully Distributed Hadoop Configuration

[Note]

Hadoop lives under /home/xavier/

Create the tmp, dfs/name, and dfs/data directories under the Hadoop directory

Machines: two machines running CentOS 6.6 with identical environments (same user, same password, same Hadoop build, same directory layout)

Specifically:

Laptop IP: 10.199.155.86, hostname: master
Desktop IP: 10.199.154.135, hostname: slave

[/Note]
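
The original jumps straight to configuration, but a fully distributed start also assumes the two hosts can resolve each other by name and that master can SSH to both machines without a password. A minimal sketch of those prerequisites, using standard OpenSSH tooling and the default key paths:

#On both machines, append to /etc/hosts:
10.199.155.86   master
10.199.154.135  slave

#On master, as user xavier, set up passwordless SSH to itself and to slave:
ssh-keygen -t rsa
ssh-copy-id xavier@master
ssh-copy-id xavier@slave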

1. Set the Hadoop environment variables (e.g. in ~/.bashrc):

#Set Hadoop2 Environment
export HADOOP_HOME="/home/xavier/Hadoop2M"
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"

2. Edit etc/hadoop/hadoop-env.sh:

export JAVA_HOME="/usr/java/jdk1.7.0_79"

3. Edit etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///home/xavier/Hadoop2M/tmp</value>
    </property>
</configuration>

4. Edit etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/xavier/Hadoop2M/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/xavier/Hadoop2M/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

5. In etc/hadoop/, copy the MapReduce template: cp mapred-site.xml.template mapred-site.xml

6. Edit etc/hadoop/mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

7. Edit etc/hadoop/yarn-site.xml:

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
</configuration>

8. Edit etc/hadoop/yarn-env.sh:

export JAVA_HOME="/usr/java/jdk1.7.0_79"
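
The start scripts also read etc/hadoop/slaves, which lists the hosts where DataNodes and NodeManagers are launched. The original does not show this file, so the contents below are an assumption based on the hostnames in the note (add master as well if it should also store data):

#etc/hadoop/slaves -- one worker hostname per line (assumed contents)
slave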

9. Format the NameNode (on master):

./hdfs namenode -format

10. Start DFS and YARN (from master):

./start-dfs.sh

./start-yarn.sh
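
Assuming etc/hadoop/slaves lists only slave, jps on each machine should show roughly this split of daemons:

#On master:
jps    #NameNode, SecondaryNameNode, ResourceManager
#On slave:
jps    #DataNode, NodeManager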

11. Open the web UIs in a browser (give the daemons a minute to come up):

http://master:8088/

http://master:50070/

If both pages load and you can see the Hadoop elephant, the setup should be working!

 

posted @ 2015-07-06 11:13 XavierJZhang