1. Hadoop Pseudo-Distributed Deployment

Setting up pseudo-distributed mode:

 

1. Environment preparation

(1) Hostname (as root)

    # vi /etc/sysconfig/network
    HOSTNAME=hadoop1    (do not use an underscore in the hostname)
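
To apply the new hostname right away, without waiting for the reboot in step (6), something like the following should work (as root; hadoop1 is the same name used in /etc/hosts below):
    # hostname hadoop1
    # hostname    (should print hadoop1)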

 

(2) Create a regular user, conglitrs (the rest of this course uses this user)
    # useradd conglitrs
    # passwd conglitrs

 

(3) Set a static IP address (servers use fixed IPs)
    On a virtual machine, either use the text-mode setup tool:
    # setup
    or edit the interface file directly:

    # vi /etc/sysconfig/network-scripts/ifcfg-eth0
        BOOTPROTO=none
        IPADDR=192.168.17.128
        NETMASK=255.255.255.0
        GATEWAY=192.168.17.2
        DNS1=202.96.209.5
        DNS2=8.8.8.8
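
To apply and verify the new settings right away (a minimal check, assuming CentOS 6 and the eth0 interface configured above):
    # service network restart
    # ifconfig eth0    (the address should now be 192.168.17.128)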

 

(4) Disable the firewall and SELinux

    # service iptables stop
    # chkconfig iptables off

    # vi /etc/sysconfig/selinux
    SELINUX=disabled      (change the default "enforcing" to "disabled")
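
To verify both changes without waiting for a reboot (standard CentOS 6 commands; setenforce only applies while SELinux is still enabled):
    # service iptables status    (should report that the firewall is not running)
    # setenforce 0               (switch SELinux to permissive for the current session)
    # getenforce                 (Permissive now, Disabled after the reboot)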

 

(5) Add a hosts entry
 
   # vi /etc/hosts   
    192.168.xxx.xxx          hadoop1
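
A quick check that the new name resolves from this machine:
    # ping -c 1 hadoop1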

 

(6) Reboot the server
  
  # reboot   

 

 

2. Directories

    # mkdir /opt/softwares    (as root)
    # mkdir /opt/modules      (as root)
    # chown -R conglitrs:conglitrs /opt/    (as root)

 

 

3. Software packages

    Upload the installation packages with Xmanager: hadoop-2.5.0 and the JDK.
 

4. Install the JDK

$ tar zxf jdk-7u67-linux-x64.tar.gz -C /opt/modules/    (as the conglitrs user)

# vi /etc/profile    (as root)
## JAVA HOME
export JAVA_HOME=/opt/modules/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin

$ source /etc/profile    (as conglitrs)

Remove the pre-installed OpenJDK packages (as root):
# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.50.1.11.5.el6_3.x86_64
# rpm -e --nodeps tzdata-java-2012j-1.el6.noarch
# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64
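
To confirm that the Oracle JDK is now the one in use (as conglitrs, in a fresh login shell; the version line below is what 7u67 normally reports):
$ java -version
java version "1.7.0_67"
$ echo $JAVA_HOME
/opt/modules/jdk1.7.0_67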

 

 
 

5. Install Hadoop

$ tar zxvf hadoop-2.5.0.tar.gz -C /opt/modules/    (as conglitrs)
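
A quick sanity check that the unpacked distribution runs (the first line of output should name the version):
$ cd /opt/modules/hadoop-2.5.0
$ bin/hadoop version
Hadoop 2.5.0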

 

 
 

6. Edit the configuration files with Notepad++

All of the files below live under /opt/modules/hadoop-2.5.0/etc/hadoop:

 

 
hadoop-env.sh:
export JAVA_HOME=/opt/modules/jdk1.7.0_67
 

 

yarn-env.sh:
export JAVA_HOME=/opt/modules/jdk1.7.0_67

 

 
mapred-env.sh
export JAVA_HOME=/opt/modules/jdk1.7.0_67

 

 
core-site.xml:

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:8020</value>
    </property>

 

 
hdfs-site.xml:
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
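
To confirm that the XML edits are being picked up, the getconf tool can echo a key back (run from /opt/modules/hadoop-2.5.0):
$ bin/hdfs getconf -confKey fs.defaultFS
hdfs://hadoop1:8020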

 

 

7. Start HDFS

HDFS: format (first time only), then start. Run the following from /opt/modules/hadoop-2.5.0:

$ bin/hdfs namenode -format
$ sbin/hadoop-daemon.sh start namenode
$ sbin/hadoop-daemon.sh start datanode
$ jps    (list the running Java processes)
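
If both daemons started cleanly, jps should show something like the following (the PIDs are only illustrative):
$ jps
3124 NameNode
3210 DataNode
3302 Jps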

 

 

8. Test uploading a file

$ bin/hdfs dfs -mkdir /input
$ bin/hdfs dfs -put /etc/yum.conf /input
$ bin/hdfs dfs -ls /input
$ bin/hdfs dfs -cat /input/yum.conf
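
As an extra check, the file can be copied back out of HDFS and compared with the local original (the /tmp path is just an example):
$ bin/hdfs dfs -get /input/yum.conf /tmp/yum.conf.from-hdfs
$ diff /etc/yum.conf /tmp/yum.conf.from-hdfs    (no output means the files match)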

 

 

9. Configure the YARN-related settings

/opt/modules/hadoop-2.5.0/etc/hadoop
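
Note: Hadoop 2.5.0 ships mapred-site.xml only as mapred-site.xml.template; if mapred-site.xml does not exist yet, create it from the template first (run from /opt/modules/hadoop-2.5.0):
$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml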

 

mapred-site.xml:

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

 

yarn-site.xml:

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

 

 

10. Start the YARN services

$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
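
jps should now also list the two YARN daemons alongside the HDFS ones (PIDs are only illustrative):
$ jps
3124 NameNode
3210 DataNode
4018 ResourceManager
4105 NodeManager
4197 Jps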

 

 
 

11. Browse the web UIs

http://192.168.xxx.xxx:50070/    (HDFS NameNode web UI)
http://192.168.xxx.xxx:8088/     (YARN ResourceManager web UI)

 

 

12. A simple MapReduce test

 
$ vi sort.txt
hadoop  mapreduce
map    reduce
hadoop  reduce
hadoop  yarn
 
$ bin/hdfs dfs -put sort.txt /input
$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /input/sort.txt /output
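
Once the job finishes, the word counts land in /output; with the default single reducer the result is normally in part-r-00000:
$ bin/hdfs dfs -ls /output
$ bin/hdfs dfs -cat /output/part-r-00000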
 

 

Summary (configuration files modified)

core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
