Configuring Single-Node Hadoop on CentOS 7

System image: CentOS-7-x86_64-Everything-1708

Java environment: jdk-8u181-linux-x64.tar.gz

Hadoop version: hadoop-2.7.6.tar.gz

1. Install the JDK and configure its environment variables (the path matters; mine is /usr/local/java/jdk1.8.0_181)

        The JDK installation itself is not covered here: download the tarball and extract it to whatever path you like.

        The environment variables are as follows:

    export JAVA_HOME=/usr/local/java/jdk1.8.0_181
    export JRE_HOME=${JAVA_HOME}/jre
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
    export PATH=${JAVA_HOME}/bin:$PATH
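To confirm the profile edits took effect, reload them with `source /etc/profile` and check that the java binary resolves. A minimal guarded sketch (it only reports, so it is safe to run anywhere):

```shell
# Guarded check that the JDK exports above are active in the current shell.
if command -v java >/dev/null 2>&1; then
    JAVA_STATUS=$(java -version 2>&1 | head -n 1)   # java -version prints to stderr
else
    JAVA_STATUS="java not on PATH; re-check JAVA_HOME and re-run: source /etc/profile"
fi
echo "$JAVA_STATUS"
```

With the JDK from this guide installed, the reported version should be 1.8.0_181.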

2. Download Hadoop

http://mirror.bit.edu.cn/apache/hadoop/common/

3. Extract to /opt

tar -zxvf hadoop-2.7.6.tar.gz -C /opt/

4. Configure the Hadoop environment variables:
 

    vim /etc/profile

    export HADOOP_HOME=/opt/hadoop-2.7.6
    export PATH=$PATH:$HADOOP_HOME/bin

    source /etc/profile

5. Configure Hadoop

  5.1 Configure hadoop-env.sh

    vim /opt/hadoop-2.7.6/etc/hadoop/hadoop-env.sh

    export JAVA_HOME=/usr/local/java/jdk1.8.0_181

  5.2 Configure core-site.xml

    vim /opt/hadoop-2.7.6/etc/hadoop/core-site.xml

    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///opt/hadoop-2.7.6/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.139.129:8888</value>
      </property>
    </configuration>

Note that hadoop.tmp.dir points at a tmp subdirectory rather than the install root, matching the dfs.namenode.name.dir and dfs.datanode.data.dir paths configured in hdfs-site.xml below.

  5.3 Configure hdfs-site.xml

    vim /opt/hadoop-2.7.6/etc/hadoop/hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/hadoop-2.7.6/tmp/dfs/name</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/hadoop-2.7.6/tmp/dfs/data</value>
      </property>
    </configuration>

6. Configure passwordless SSH login

    ssh-keygen -t rsa

    cd ~/.ssh

    cat id_rsa.pub >> authorized_keys

    # If passwordless login still fails, run:
    chmod 710 authorized_keys
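The same key setup can be scripted non-interactively. A sketch that works in a scratch directory so it cannot clobber an existing key pair (the temporary path is illustrative; on the real machine the files live under ~/.ssh):

```shell
# Non-interactive variant of the steps above, kept inside a scratch
# directory so an existing ~/.ssh key pair is never overwritten.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q   # -N "": empty passphrase
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 700 "$KEYDIR"                 # sshd's StrictModes rejects loose permissions
chmod 600 "$KEYDIR/authorized_keys"
```

Afterwards `ssh localhost` should log in without prompting for a password.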

7. Start

  7.1 Format HDFS

hdfs namenode -format

  7.2 Start HDFS

    cd /opt/hadoop-2.7.6
    ./sbin/start-dfs.sh
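Whether the daemons actually came up can be checked with jps, which ships with the JDK; on a healthy single node it should list NameNode, DataNode and SecondaryNameNode. A guarded sketch:

```shell
# List the running Hadoop JVMs; degrades to a hint when jps is absent.
if command -v jps >/dev/null 2>&1; then
    DAEMONS=$(jps)
else
    DAEMONS="jps not found; is \${JAVA_HOME}/bin on PATH?"
fi
echo "$DAEMONS"
```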

  7.3 Test: http://192.168.139.129:50070

     

  If the page does not load, disable the Linux firewall or open the required ports.
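Opening just the needed ports is safer than disabling the firewall outright. A sketch for firewalld, the CentOS 7 default (it needs root; port 8888 matches the fs.defaultFS value in core-site.xml, 50070 is the HDFS web UI, and 8088 is the YARN web UI used in section 8):

```shell
# Open the Hadoop ports in firewalld instead of stopping it entirely.
if command -v firewall-cmd >/dev/null 2>&1; then
    firewall-cmd --permanent --add-port=50070/tcp \
        && firewall-cmd --permanent --add-port=8888/tcp \
        && firewall-cmd --permanent --add-port=8088/tcp \
        && firewall-cmd --reload \
        && FW_STATUS="ports opened" \
        || FW_STATUS="firewall-cmd failed; it needs root and a running firewalld"
else
    FW_STATUS="firewall-cmd not found; nothing to do"
fi
echo "$FW_STATUS"
```

To stop the firewall instead, `systemctl stop firewalld` does the job (add `systemctl disable firewalld` to keep it off across reboots).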

8. Configure YARN

8.1 Configure mapred-site.xml

    cd /opt/hadoop-2.7.6/etc/hadoop/
    cp mapred-site.xml.template mapred-site.xml
    vim mapred-site.xml

    <configuration>
      <!-- Tell the MapReduce framework to use YARN -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

8.2 Configure yarn-site.xml

    vim yarn-site.xml

    <configuration>
      <!-- Reducers fetch map output via the mapreduce_shuffle service -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>

8.3 Start YARN

    cd /opt/hadoop-2.7.6
    ./sbin/start-yarn.sh

8.4 Test: http://192.168.139.129:8088
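Beyond the web UIs, an end-to-end smoke test is to run the pi example that ships with the distribution; it exercises HDFS, YARN and MapReduce together. A guarded sketch (the jar path matches the hadoop-2.7.6 layout used throughout this guide):

```shell
# Submit the bundled pi example to YARN; skipped when hadoop is missing.
EXAMPLES_JAR=/opt/hadoop-2.7.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar
if command -v hadoop >/dev/null 2>&1 && [ -f "$EXAMPLES_JAR" ]; then
    hadoop jar "$EXAMPLES_JAR" pi 2 5 \
        && PI_STATUS="job succeeded" \
        || PI_STATUS="job failed; check the ResourceManager UI on port 8088"
else
    PI_STATUS="skipped: hadoop or the examples jar is missing"
fi
echo "$PI_STATUS"
```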

posted @ 2019-04-09 14:27 by 代码让自己变强