Hadoop Setup Notes (1)

Environment: macOS / Linux

Hadoop version: 3.1.1

Setup type: non-HA

 

Prerequisites:

1. JDK 8 or above

2. ssh (passwordless login to the local machine)

3. Download the Hadoop distribution package (see the sketch after this list)
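
For reference, a rough sketch of checking these prerequisites from a Linux/macOS shell. The mirror URL is an assumption; any Apache mirror works, and the tarball can of course be downloaded through a browser instead:

# 1. JDK 8 or above must be on the PATH
java -version

# 2. ssh to the local machine must work without a password
#    (otherwise set it up with ssh-keygen and authorized_keys)
ssh localhost exit

# 3. download the Hadoop 3.1.1 tarball (mirror URL is an assumption)
curl -O https://archive.apache.org/dist/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz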

 

Configuration files (only minimal settings are shown here):

1. core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/hadoop-3.1.1/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://HxaMac:9000</value>
    </property>

</configuration>
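
hadoop.tmp.dir above points at a directory that has to exist and be writable by the account running the daemons; a minimal sketch, assuming they run as a local user named hadoop (as the HDFS_*_USER variables further down suggest):

sudo mkdir -p /opt/hadoop-3.1.1/tmp
sudo chown -R hadoop /opt/hadoop-3.1.1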

2. hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>0.0.0.0:50070</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/Users/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/Users/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
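
The NameNode and DataNode directories configured above should also exist before the first format; a sketch using the same paths, run as the hadoop user:

mkdir -p /Users/hadoop/hdfs/name /Users/hadoop/hdfs/data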

3. yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>HxaMac:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>HxaMac:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>0.0.0.0:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>HxaMac:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>HxaMac:18141</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>864000</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>-1</value>
    </property>
</configuration>
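
With yarn.log-aggregation-enable set to true, container logs are collected into HDFS once an application finishes and can be read back through the YARN CLI; for example (the application id below is only a placeholder):

# list finished applications, then fetch the aggregated logs of one of them
yarn application -list -appStates FINISHED
yarn logs -applicationId application_1551312000000_0001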

4. mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>
            /opt/hadoop-3.1.1/etc/hadoop,
            /opt/hadoop-3.1.1/share/hadoop/common/*,
            /opt/hadoop-3.1.1/share/hadoop/common/lib/*,
            /opt/hadoop-3.1.1/share/hadoop/hdfs/*,
            /opt/hadoop-3.1.1/share/hadoop/hdfs/lib/*,
            /opt/hadoop-3.1.1/share/hadoop/mapreduce/*,
            /opt/hadoop-3.1.1/share/hadoop/mapreduce/lib/*,
            /opt/hadoop-3.1.1/share/hadoop/yarn/*,
            /opt/hadoop-3.1.1/share/hadoop/yarn/lib/*
        </value>
    </property>
    <property>
        <name>mapreduce.jobhistory.max-age-ms</name>
        <value>5184000000</value>
    </property>
</configuration>
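
The mapreduce.application.classpath value above simply enumerates the jar directories shipped under /opt/hadoop-3.1.1/share; a quick sanity check is to compare it with the classpath the hadoop command itself resolves:

# prints the classpath derived from HADOOP_HOME; the directories configured
# above should all be covered by this output
hadoop classpath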

5. hadoop-env.sh

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home

export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
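
On macOS the JDK path does not have to be hardcoded; a small sketch, assuming JDK 8 is installed as a regular JavaVirtualMachines package:

# macOS helper that prints the JAVA_HOME of the requested version
/usr/libexec/java_home -v 1.8
# it can be used directly in hadoop-env.sh:
# export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)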

6. yarn-env.sh

Usually left unchanged.

7. workers

HxaMac

8. yarn-worker

HxaMac
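
These files simply list one worker hostname per line; for this single-node setup they contain only the local hostname, e.g. (path assumes the /opt install used throughout):

echo HxaMac > /opt/hadoop-3.1.1/etc/hadoop/workers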

 

Steps:

1. Finish the JDK 8 and ssh preparation

2. Unpack Hadoop, typically under /opt

3. Set the environment variables, e.g. in /etc/bash.bashrc:

JAVA_HOME=jdk_dir

CLASSPATH=$JAVA_HOME/lib/
PATH=$JAVA_HOME/bin:$PATH

export PATH JAVA_HOME CLASSPATH

alias hput='hadoop fs -put'
alias hget='hadoop fs -get'
alias hls='hadoop fs -ls'
alias hrm='hadoop fs -rm -r'
alias hcat='hadoop fs -cat'

HADOOP_INSTALL=/opt/hadoop-3.1.1
PATH=$HADOOP_INSTALL/bin:$PATH
PATH=$HADOOP_INSTALL/sbin:$PATH

export HADOOP_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME

export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export HDFS_DATANODE_USER=hadoop
export HDFS_DATANODE_SECURE_USER=hadoop
export HDFS_SECONDARYNAMENODE_USER=hadoop
export HDFS_NAMENODE_USER=hadoop
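
After editing the file, reload it and confirm that the Hadoop binaries and variables resolve; a minimal check, assuming the variables went into /etc/bash.bashrc as above:

source /etc/bash.bashrc
echo $HADOOP_HOME        # should print /opt/hadoop-3.1.1
hadoop version           # should report Hadoop 3.1.1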

4. hdfs namenode -format

5. start-dfs.sh

Check http://hxamac:50070/
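
A quick way to confirm HDFS actually came up: the HDFS daemons should show in jps, and the web UI configured in hdfs-site.xml should answer on port 50070. A sketch:

# NameNode, DataNode and SecondaryNameNode should all be listed
jps
# 200 means the NameNode web UI is reachable
curl -s -o /dev/null -w "%{http_code}\n" http://hxamac:50070/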

6. start-yarn.sh

Check http://hxamac:8088
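
Likewise for YARN: ResourceManager and NodeManager should appear in jps, and the web UI should answer on port 8088:

jps
curl -s -o /dev/null -w "%{http_code}\n" http://hxamac:8088/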

7. Test HDFS: put a small file

hadoop fs -mkdir -p /user/hxa/
hadoop fs -put test.txt /user/hxa/
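
To verify the upload, list and read the file back (the hls/hcat aliases from step 3 work as well):

hadoop fs -ls /user/hxa/
hadoop fs -cat /user/hxa/test.txt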

8. Run a test MapReduce job

hadoop jar hadoop-mapreduce-examples-3.1.1.jar pi 10 10
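
The examples jar ships under share/hadoop/mapreduce, so the job can also be launched by its full path; a sketch assuming the /opt install location used throughout (the job should end by printing an estimated value of Pi):

hadoop jar /opt/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 10 10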