Step 1: Required system information: a 64-bit CentOS system
[root@systdt etc]# uname -a
Linux systdt 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@systdt etc]# cat /etc/issue
CentOS release 6.4 (Final)
Download: hadoop-2.6.0.tar.gz
URL: http://apache.fayea.com/hadoop/common/hadoop-2.6.0/
JDK: jdk1.7.0_71
Addresses of the three virtual machines: 10.2.10.27, 10.2.10.53, 192.168.83.204
Note: be sure to use a 64-bit Linux operating system. If you insist on a 32-bit OS, you must download hadoop-2.6.0-src.tar.gz and recompile it on the 32-bit system before it can be used, otherwise you will get errors; the build takes roughly an hour.
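For reference, the rebuild itself is roughly the following (a sketch only, assuming Maven, a JDK, and protobuf 2.5.0 are installed as described in BUILDING.txt inside the Hadoop source):

# Unpack the source and build a native distribution (takes about an hour)
tar -zxvf hadoop-2.6.0-src.tar.gz
cd hadoop-2.6.0-src
mvn package -Pdist,native -DskipTests -Dtar
# The rebuilt tarball is produced under hadoop-dist/target/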
Step 2: Set up passwordless SSH login among the three virtual machines
Generate a public/private key pair:
[root@systdt /]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
98:3c:31:5c:23:21:73:a0:a0:1f:c6:d3:c3:dc:58:32 root@gifer
The key's randomart image is:
+--[ RSA 2048]----+
|. E.=.o          |
|.o = @ o .       |
|. * * =          |
| o o o =         |
|  . = S          |
|   .             |
|                 |
|                 |
|                 |
+-----------------+
Seeing the randomart output means the key pair was generated successfully. Under /root/.ssh/ there are now two new files:
Private key file: id_rsa
Public key file: id_rsa.pub
Append the contents of the public key file id_rsa.pub from all three virtual machines (each machine's own included) into the authorized_keys file on every machine, so that each authorized_keys contains all three keys.
One of the machines is shown as an example below:
[root@systdt2 ~]# cd ~/.ssh/
[root@systdt2 .ssh]# ll
total 16
-rw------- 1 root root 1578 Dec 26 19:53 authorized_keys
-rw------- 1 root root 1675 Dec 26 17:40 id_rsa
-rw-r--r-- 1 root root  394 Dec 26 17:40 id_rsa.pub
-rw-r--r-- 1 root root 2359 Dec 26 19:42 known_hosts
[root@systdt2 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0cumFYUuxCq+sjSP2yBrk0W+Um/39rA1R4tGExfMyajsCrjUAZUZU6X3MWwCs+j17GQ0Ptj7erfY2bOi5VOM5BCtjdK6h2yWacGV77DkUYwip+mFB42ra9Z6zcaEzVGU52/R3timVQlNtQbL4w7UaOLynGJmhhJ+1KueAXUsNpBCbqEEqin7X3KHaa89LHaJT6hd9szoO6nsfSI2WEGgSyYfdPT63/LMsCRrCmvBKTYbFXWE0z18x8+2zMmc8QqbyGmmO24172DAtr13NEbRxT35PaNx0rvgFI/okq9Jy4TXY9c1HUzQFOARrc8G0JVmZsbfgKtHuuGBBZqsFnGeeQ== root@systdt
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAmp21AqbuCUhjtI+iKhOtW7D/cJENROMNNs9myNtdeTdbrTCvzhuDMpTWmvnDxN/wzI9O/MOO6064zlb7kvtRUdq66ARc3Fj/0yS2BTODRZlHY+DrcjUJyP9Eex4rZ+h/PNj5+8YQeba6y1myT3lXQJQEGqw3LgNcQkIvn8YeiB82GeuISgb0X8rNQGUHbIFUCNKbj1JLSGotJJhiuboIGbW6blCfDAB5HO1FxJUpTtdgoLyQgaHxwAEPfgG1Z+Rr/Tmb8mWlEmsZ2tUURsQbREOmHTQbkgEqEVoES10e1YFNKvg0otGaeFvrZshC/Or8gWASOlCO5Dyzc+qAcQ39rw== root@systdt2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvV89vN/BFcSBquFa49JQ7t6j0t3xKnyiudlrUckngKmRkhuH6cjjjOrrTqcYld0tWGcSuPWpHC3hyFw4wCUAV1FC/+bQ3rkQxvyGySNgPK7f3vD2XQiSwMjyBeT15Ucxkl6oxVSg8DofHQn0GGhNJb/Hzx2fmN5SHBqUaPvCR9MKdSAiV69X6ep5QRi+E04aPnONrOW0YOrzAQGdGkJktYohoGJwiFCDCrK06GVqHCJrnylgrLA4KFLs4pu94EBEB5vK0f4Fod5MHddjDEYdrYExBQo6qDVKDRl7r1kXfRHzIaZoTOWN83WBc1kROLuxPnytUVEBsvUjn5lML1pCxQ== root@apptools01
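A minimal way to build up these authorized_keys files, using the hosts from this walkthrough (ssh-copy-id ships with standard OpenSSH; the manual variant below it is equivalent):

# On each of the three machines, push the local public key to all three hosts:
ssh-copy-id root@10.2.10.27
ssh-copy-id root@10.2.10.53
ssh-copy-id root@192.168.83.204
# Or append by hand and fix permissions (sshd rejects keys with loose permissions):
cat ~/.ssh/id_rsa.pub | ssh root@10.2.10.53 'cat >> ~/.ssh/authorized_keys'
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys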
Test that the machines can SSH to each other without a password. If they can, this step is done; getting this right is important:
[root@systdt etc]# ssh 10.2.10.27
[root@systdt ~]# ssh 10.2.10.53
[root@systdt2 ~]# ssh 192.168.83.204
Last login: Fri Dec 26 19:42:21 2014 from 10.2.10.27
-bash-4.1#
-bash-4.1#
Step 3: First configure a single machine, Master.Hadoop (10.2.10.27 acts as the master);
1) Extract hadoop-2.6.0
2) Configure environment variables: vi /etc/profile and append the following lines at the end:
export JAVA_HOME=/opt/www/jdk1.7.0_71
export HADOOP_HOME=/opt/www/hadoop-2.6.0
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:
Make them take effect:
[root@systdt www]# source /etc/profile
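A quick sanity check that the variables are active (hadoop version only works once hadoop-2.6.0 has been extracted to /opt/www):

[root@systdt www]# echo $JAVA_HOME
[root@systdt www]# java -version
[root@systdt www]# hadoop version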
3) vi /etc/hosts and append the following at the end:
[root@systdt www]# cat /etc/hosts
10.2.10.27 Master.Hadoop
10.2.10.53 Salve1.Hadoop
192.168.83.204 Salve2.Hadoop
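It is worth confirming that the names resolve before continuing (hostnames exactly as defined above):

[root@systdt www]# ping -c 1 Master.Hadoop
[root@systdt www]# ping -c 1 Salve1.Hadoop
[root@systdt www]# ping -c 1 Salve2.Hadoop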
4) cd /opt/www/hadoop-2.6.0/etc/hadoop
[root@systdt hadoop]# ll
total 128
-rw-r--r-- 1 root root  3589 Dec 26 19:07 capacity-scheduler.xml
-rw-r--r-- 1 root root  1335 Dec 26 19:07 configuration.xsl
-rw-r--r-- 1 root root   318 Dec 26 19:07 container-executor.cfg
-rw-r--r-- 1 root root  1108 Dec 26 19:07 core-site.xml
-rw-r--r-- 1 root root  3670 Dec 26 19:07 hadoop-env.cmd
-rw-r--r-- 1 root root  3481 Dec 26 19:07 hadoop-env.sh
-rw-r--r-- 1 root root  1774 Dec 26 19:07 hadoop-metrics2.properties
-rw-r--r-- 1 root root  2490 Dec 26 19:07 hadoop-metrics.properties
-rw-r--r-- 1 root root  9201 Dec 26 19:07 hadoop-policy.xml
-rw-r--r-- 1 root root  1400 Dec 26 19:07 hdfs-site.xml
-rw-r--r-- 1 root root  1449 Dec 26 19:07 httpfs-env.sh
-rw-r--r-- 1 root root  1657 Dec 26 19:07 httpfs-log4j.properties
-rw-r--r-- 1 root root    21 Dec 26 19:07 httpfs-signature.secret
-rw-r--r-- 1 root root   620 Dec 26 19:07 httpfs-site.xml
-rw-r--r-- 1 root root 11118 Dec 26 19:07 log4j.properties
-rw-r--r-- 1 root root   938 Dec 26 19:07 mapred-env.cmd
-rw-r--r-- 1 root root  1383 Dec 26 19:07 mapred-env.sh
-rw-r--r-- 1 root root  4113 Dec 26 19:07 mapred-queues.xml.template
-rw-r--r-- 1 root root  1395 Dec 26 19:07 mapred-site.xml
-rw-r--r-- 1 root root  1159 Dec 26 19:07 mapred-site.xml.template
-rw-r--r-- 1 root root    28 Dec 26 20:11 slaves
-rw-r--r-- 1 root root  2316 Dec 26 19:07 ssl-client.xml.example
-rw-r--r-- 1 root root  2268 Dec 26 19:07 ssl-server.xml.example
-rw-r--r-- 1 root root  2237 Dec 26 19:07 yarn-env.cmd
-rw-r--r-- 1 root root  4605 Dec 26 19:07 yarn-env.sh
-rw-r--r-- 1 root root  1570 Dec 26 19:07 yarn-site.xml
5) [root@systdt hadoop]# vi core-site.xml
[root@systdt hadoop]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Master.Hadoop:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>
6) vi hadoop-env.sh and yarn-env.sh, adding the following environment variable near the top (do not skip this):
export JAVA_HOME=/opt/www/jdk1.7.0_71
7) vi hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///usr/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///usr/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Master.Hadoop:50090</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
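The local directories referenced in core-site.xml and hdfs-site.xml above do not exist yet, so it is safest to create them up front on every node (paths taken from the two configs):

[root@systdt hadoop]# mkdir -p /usr/hadoop/tmp /usr/hadoop/dfs/name /usr/hadoop/dfs/data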
8) vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>Master.Hadoop:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>Master.Hadoop:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>Master.Hadoop:19888</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://Master.Hadoop:9001</value>
    </property>
</configuration>
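Note: a stock hadoop-2.6.0 ships only mapred-site.xml.template (visible in the directory listing above); if mapred-site.xml does not exist yet, create it from the template first:

[root@systdt hadoop]# cp mapred-site.xml.template mapred-site.xml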
9) vi yarn-site.xml
<?xml version="1.0"?>
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>Master.Hadoop</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>Master.Hadoop:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>Master.Hadoop:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>Master.Hadoop:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>Master.Hadoop:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>Master.Hadoop:8088</value>
    </property>
</configuration>
10) Run the "hadoop dfsadmin -report" command:
[root@systdt hadoop]# hadoop dfsadmin -report
At this point the single-machine server is configured; you can already start it up on its own at this stage and take a look at the result.
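One caveat before the first start: HDFS will not come up until the NameNode has been formatted. A minimal first-start sequence for this single machine (standard Hadoop 2.x commands; hdfs dfsadmin -report is the non-deprecated form of the report command above):

[root@systdt hadoop]# hdfs namenode -format
[root@systdt hadoop]# start-dfs.sh
[root@systdt hadoop]# hdfs dfsadmin -report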
Step 4: Configure the Hadoop cluster;
1) First modify 10.2.10.27 (i.e. Master.Hadoop):
[root@systdt hadoop]# pwd
/opt/www/hadoop-2.6.0/etc/hadoop
[root@systdt hadoop]# vi slaves
Salve1.Hadoop
Salve2.Hadoop
2) On 10.2.10.27, copy the entire hadoop-2.6.0 directory to the other two machines:
[root@systdt www]# scp -r /opt/www/hadoop-2.6.0 10.2.10.53:/opt/www/
[root@systdt www]# scp -r /opt/www/hadoop-2.6.0 192.168.83.204:/opt/www/
3) Configure the environment variables on the other two machines: repeat sub-steps 2, 3, and 10 of the Master.Hadoop configuration in Step 3.
4) Turn off the firewall on all three machines:
[root@systdt www]# service iptables stop
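service iptables stop only lasts until the next reboot; on CentOS 6 you can also disable the service permanently (in a real deployment you would instead open just the Hadoop ports):

[root@systdt www]# chkconfig iptables off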
At this point all three machines of the Hadoop cluster are configured.
Step 5: Start the whole cluster and verify it:
1) Log in to 10.2.10.27 (i.e. the Master.Hadoop machine):
[root@systdt sbin]# pwd
/opt/www/hadoop-2.6.0/sbin
[root@systdt sbin]# start-all.sh
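start-all.sh still works in Hadoop 2.x but is deprecated; the equivalent explicit sequence, should you prefer it, is:

[root@systdt sbin]# start-dfs.sh
[root@systdt sbin]# start-yarn.sh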
2) Verify:
# On the Master:
[root@systdt sbin]# jps
22984 NameNode
23861 Jps
23239 ResourceManager
19766 NodeManager
# On Salve1 and Salve2:
[root@systdt2 .ssh]# jps
609 Jps
30062 DataNode
2024 Bootstrap
30174 NodeManager
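Beyond jps, a quick end-to-end smoke test is to run the example job bundled with the distribution (jar path as shipped in hadoop-2.6.0; "pi 2 10" means 2 map tasks with 10 samples each):

[root@systdt sbin]# cd /opt/www/hadoop-2.6.0
[root@systdt hadoop-2.6.0]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10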
Step 6: You can now access the cluster through a browser:
http://10.2.10.27:8088/cluster/nodes
http://10.2.10.27:50070/dfshealth.html#tab-overview
Finally: what each parameter and each configuration file means is something you can look up gradually later. Get the cluster running first to get a feel for how it works, then study the details at your own pace.