Hadoop Single-Node Cluster Setup

0、Prepare host 192.168.230.13 as the machine for the Hadoop single-node setup

1、Upload the archive/installation files to the designated directory

[root@node4 ~]# cd /opt/software
[root@node4 software]# ll
total 723384
-rw-r--r-- 1 root root 311430119 Feb 29 09:07 hadoop-2.5.0.tar.gz
-rw-r--r-- 1 root root 147197492 Feb 29 10:09 hadoop-2.5.2.tar.gz
-rw-r--r-- 1 root root 142376665 Feb 29 09:07 jdk-7u67-linux-x64.tar.gz
-rw-r--r-- 1 root root 138082565 Feb 29 09:06 jdk-7u79-linux-x64.rpm
-rw-r--r-- 1 root root   1653240 Feb 29 08:14 tengine-2.1.0.tar.gz

1.1、Extract Hadoop into the target directory

[root@node4 software]# tar -zxf hadoop-2.5.2.tar.gz -C /opt/modules
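
Note: if /opt/modules does not already exist, the tar command above will fail to change into it; create it first (a small precaution, assuming a fresh host where the directory may be missing):

[root@node4 software]# mkdir -p /opt/modules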

1.2、Check whether a JDK is already installed on the host

[root@node4 hadoop-2.5.2]# rpm -qa | grep jdk 
jdk-1.7.0_79-fcs.x86_64

1.3、Remove the pre-installed JDK

[root@node4 hadoop-2.5.2]#  yum -y remove jdk-1.7.0_79-fcs.x86_64

1.4、Install the JDK from the RPM package

[root@node4 software]# rpm -ivh jdk-7u79-linux-x64.rpm

1.5、Verify that the JDK was installed successfully

[root@node4 software]# javac -version
javac 1.7.0_79

1.6、When installed from the RPM package, the JDK is placed under /usr/java/ by default

[root@node4 software]# cd /usr/java/jdk1.7.0_79

1.7、Check the current working directory

[root@node4 jdk1.7.0_79]# pwd
/usr/java/jdk1.7.0_79

2、Configure hadoop-env.sh in /opt/modules/hadoop-2.5.2/etc/hadoop/

2.1、Set the JDK and Hadoop paths

export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_PREFIX=/opt/modules/hadoop-2.5.2
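
One way to apply these two settings without opening an editor is to append them to hadoop-env.sh (a minimal sketch; hadoop-env.sh already contains an export JAVA_HOME line, which you can equally well edit in place instead):

[root@node4 software]# echo 'export JAVA_HOME=/usr/java/jdk1.7.0_79' >> /opt/modules/hadoop-2.5.2/etc/hadoop/hadoop-env.sh
[root@node4 software]# echo 'export HADOOP_PREFIX=/opt/modules/hadoop-2.5.2' >> /opt/modules/hadoop-2.5.2/etc/hadoop/hadoop-env.sh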

2.2、Go into /opt/modules/hadoop-2.5.2 and copy the XML files under etc/hadoop/ into a newly created input folder inside hadoop-2.5.2

[root@node4 software]#  cd /opt/modules/hadoop-2.5.2
[root@node4 hadoop-2.5.2]# mkdir input
[root@node4 hadoop-2.5.2]#  cp /opt/modules/hadoop-2.5.2/etc/hadoop/*.xml /opt/modules/hadoop-2.5.2/input

View the detailed usage information for the hadoop command:
[root@node4 hadoop-2.5.2]# bin/hadoop

3、Run the MapReduce grep example to verify that MapReduce works (it counts occurrences of the regex 'dfs[a-z.]+' in the input files)

[root@node4 hadoop-2.5.2]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'
[root@node4 hadoop-2.5.2]# ls output
part-r-00000  _SUCCESS
[root@node4 hadoop-2.5.2]# cat output/part-r-00000
1	dfsadmin
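
Note: MapReduce refuses to write into an output directory that already exists, so to rerun the example delete the old output first (a minimal sketch):

[root@node4 hadoop-2.5.2]# rm -rf output
[root@node4 hadoop-2.5.2]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'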

4、Configure the core-site.xml and hdfs-site.xml files under /opt/modules/hadoop-2.5.2/etc/hadoop

4.1、Configure /opt/modules/hadoop-2.5.2/etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

4.2、Configure /opt/modules/hadoop-2.5.2/etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
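
To sanity-check that this configuration is being picked up, the configured default filesystem can be queried (a quick check, assuming the hdfs getconf subcommand behaves as in stock Hadoop 2.x; it should echo back the value set in core-site.xml):

[root@node4 hadoop-2.5.2]# bin/hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000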

5、Format the NameNode

[root@node4 hadoop-2.5.2]# ./bin/hdfs namenode -format

6、Start HDFS

[root@node4 hadoop-2.5.2]# sbin/start-dfs.sh
This starts the NameNode, the DataNode, and the SecondaryNameNode.
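
Note: start-dfs.sh launches each daemon over ssh to localhost; if it prompts for a password, set up passwordless ssh and run it again (a sketch using the default key paths):

[root@node4 hadoop-2.5.2]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[root@node4 hadoop-2.5.2]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@node4 hadoop-2.5.2]# chmod 0600 ~/.ssh/authorized_keys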

6.1、Check the running processes

[root@node4 hadoop-2.5.2]# jps
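
If everything started, the three HDFS daemons should be listed; typical output looks like this (the PIDs are illustrative and will differ):

2786 NameNode
2895 DataNode
3058 SecondaryNameNode
3187 Jps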

7、Open the following address in a browser to view the Hadoop web UI:

http://192.168.230.13:50070/
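
As a final check that HDFS itself is usable, a couple of simple filesystem operations can be run (a sketch; /user/root is just an example path):

[root@node4 hadoop-2.5.2]# bin/hdfs dfs -mkdir -p /user/root
[root@node4 hadoop-2.5.2]# bin/hdfs dfs -ls /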
