
Integrating YARN HA (ResourceManager high availability) with Hadoop 3.1

1. Role assignment
The ResourceManagers are planned for node03 (rm1) and node04 (rm2), with the ZooKeeper quorum on node02, node03 and node04, matching the hostnames used in the configuration below.

2. Configuration

cd /opt/hadoop-3.1.1/etc/hadoop
Edit the following configuration files:

2.1 Configure mapred-site.xml

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
 
    <!-- MapReduce JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node01:10020</value>
    </property>
 
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node01:19888</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/user</value>
    </property>
 
    <property>
      <name>mapreduce.application.classpath</name>
      <value>
          /opt/hadoop-3.1.1/etc/hadoop,
          /opt/hadoop-3.1.1/share/hadoop/common/*,
          /opt/hadoop-3.1.1/share/hadoop/common/lib/*,
          /opt/hadoop-3.1.1/share/hadoop/hdfs/*,
          /opt/hadoop-3.1.1/share/hadoop/hdfs/lib/*,
          /opt/hadoop-3.1.1/share/hadoop/mapreduce/*,
          /opt/hadoop-3.1.1/share/hadoop/mapreduce/lib/*,
          /opt/hadoop-3.1.1/share/hadoop/yarn/*,
          /opt/hadoop-3.1.1/share/hadoop/yarn/lib/*
      </value>
    </property>
</configuration>
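Before moving on, it can be worth verifying that every directory listed in mapreduce.application.classpath actually exists on each node; a missing jar directory typically surfaces later as class-not-found errors in the MapReduce containers. A minimal sketch, assuming passwordless SSH from node01 and the same /opt/hadoop-3.1.1 layout on every host:

# Check that the share/hadoop subdirectories referenced in the classpath exist on every node
for host in node01 node02 node03 node04; do
  for dir in common hdfs mapreduce yarn; do
    ssh "$host" "test -d /opt/hadoop-3.1.1/share/hadoop/$dir" \
      || echo "missing share/hadoop/$dir on $host"
  done
done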

 

 

2.2 Configure yarn-site.xml

<configuration>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Logical id of the RM cluster -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster1</value>
    </property>
    <!-- Logical ids of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- rm1 runs on node03, rm2 runs on node04 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node04</value>
    </property>
    <!-- Web UI addresses of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>node03:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>node04:8088</value>
    </property>
    <!-- ZooKeeper quorum used for leader election and RM state storage -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node02:2181,node03:2181,node04:2181</value>
    </property>
    <!-- Shuffle service required by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Disable virtual-memory checking so containers are not killed for exceeding vmem limits -->
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
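The yarn.resourcemanager.zk-address quorum is what the ResourceManagers use for leader election, so it helps to confirm ZooKeeper is reachable before starting YARN. A minimal sketch using ZooKeeper's ruok four-letter command (assumes nc is available; on ZooKeeper 3.5+ the command must be allowed via 4lw.commands.whitelist):

for host in node02 node03 node04; do
  printf '%s: ' "$host"
  echo ruok | nc "$host" 2181   # a healthy ZooKeeper server answers "imok"
  echo
done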

 

3. Distribute the configuration to the other nodes

[root@node01 hadoop]# for i in {node02,node03,node04};do scp mapred-site.xml yarn-site.xml $i:`pwd`;done
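A quick checksum comparison afterwards confirms that every node received identical copies. This is only a sanity check and again assumes passwordless SSH from node01:

md5sum mapred-site.xml yarn-site.xml        # local copies on node01
for i in node02 node03 node04; do
  ssh "$i" "cd /opt/hadoop-3.1.1/etc/hadoop && md5sum mapred-site.xml yarn-site.xml"
done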
 

4. Errors when starting YARN

[root@node01 hadoop]# start-yarn.sh
Starting resourcemanagers on [ node03 node04]
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.
 
Fix:
[root@node01 hadoop]# vi /opt/hadoop-3.1.1/sbin/start-yarn.sh
[root@node01 hadoop]# vi /opt/hadoop-3.1.1/sbin/stop-yarn.sh
Edit both the start and stop scripts, adding the following user variables at the top of each:
YARN_RESOURCEMANAGER_USER=root 
HADOOP_SECURE_DN_USER=yarn 
YARN_NODEMANAGER_USER=root
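Patching the sbin scripts works, but the same variables can also be exported once in hadoop-env.sh, which applies to both start-yarn.sh and stop-yarn.sh and survives later edits to the scripts. A sketch of that alternative (the path assumes the install layout used above):

# /opt/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root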
With the variables in place, YARN starts successfully:
[root@node01 hadoop]# start-yarn.sh
Starting resourcemanagers on [ node03 node04]
Last login: Tue Dec 25 17:57:03 CST 2018 on pts/0
Starting nodemanagers
Last login: Tue Dec 25 18:13:32 CST 2018 on pts/0

 

5. Check the process roles on each node

(Process listings for node02, node03 and node04 were shown here.)
The processes running on each node match the planned role assignment.
 

6. Check the ResourceManager service states

[root@node01 mapreduce]# yarn rmadmin -getServiceState rm1   # rm1 is node03
standby
[root@node01 mapreduce]# yarn rmadmin -getServiceState rm2   # rm2 is node04
active
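The same check can be scripted to query both ResourceManagers in one pass; the ids rm1 and rm2 come from yarn.resourcemanager.ha.rm-ids above. If the build supports it, yarn rmadmin -getAllServiceState prints all states in a single call.

for rm in rm1 rm2; do
  printf '%s: ' "$rm"
  yarn rmadmin -getServiceState "$rm"
done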

 

7. Check via the web UI

Enter node03:8088 or node04:8088 in the address bar; the standby ResourceManager redirects to the active one, which at this point was node03:8088, since node03 was the currently active ResourceManager.
Click About to see that its HA state is active.
Entering http://node04:8088/cluster/cluster shows the HA state as standby.
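The HA state can also be read programmatically from the ResourceManager REST API: the cluster info endpoint returns an haState field. A quick check with curl (note that the standby RM may answer with a redirect notice pointing at the active RM instead of a JSON body, which itself tells you which node is standby):

curl -s http://node03:8088/ws/v1/cluster/info | grep -o '"haState":"[A-Z]*"'
curl -s http://node04:8088/ws/v1/cluster/info | grep -o '"haState":"[A-Z]*"'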
 

8. Test the bundled wordcount example

cd /opt/hadoop-3.1.1/share/hadoop/mapreduce
hdfs dfs -mkdir /input
[root@node01 mapreduce]# hdfs dfs -put /opt/hadoop-3.1.1/LICENSE.txt /input

 

 
Running wordcount throws an error:
hadoop jar hadoop-mapreduce-examples-3.1.1.jar wordcount /input /output
 
ile invoking ClientNamenodeProtocolTranslatorPB.mkdirs over node01/172.16.76.241:8020. Retrying immediately.
2018-12-25 23:11:55,999 INFO retry.RetryInvocationHandler: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hadoop-yarn/staging/root/.staging/job_1545750634926_0001. Name node is in safe mode.
The NameNode is still in safe mode, so leave safe mode manually:
hdfs dfsadmin -safemode leave
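Forcing the NameNode out of safe mode is acceptable on a test cluster; in general the NameNode leaves safe mode on its own once enough block reports have arrived after a restart, so it is often enough to check the state or simply wait:

hdfs dfsadmin -safemode get    # shows whether safe mode is ON or OFF
hdfs dfsadmin -safemode wait   # blocks until the NameNode leaves safe mode on its own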
 
Re-running the wordcount program now succeeds:
[root@node01 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.1.1.jar wordcount /input/LICENSE.txt /output
2018-12-26 23:38:27,113 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2018-12-26 23:38:27,544 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1545838679858_0001
2018-12-26 23:38:27,796 INFO input.FileInputFormat: Total input files to process : 1
2018-12-26 23:38:27,886 INFO mapreduce.JobSubmitter: number of splits:1
2018-12-26 23:38:27,922 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
2018-12-26 23:38:27,924 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-12-26 23:38:28,187 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1545838679858_0001
2018-12-26 23:38:28,188 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-12-26 23:38:28,379 INFO conf.Configuration: resource-types.xml not found
2018-12-26 23:38:28,379 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-12-26 23:38:28,865 INFO impl.YarnClientImpl: Submitted application application_1545838679858_0001
2018-12-26 23:38:28,927 INFO mapreduce.Job: The url to track the job: http://node04:8088/proxy/application_1545838679858_0001/
2018-12-26 23:38:28,928 INFO mapreduce.Job: Running job: job_1545838679858_0001
2018-12-26 23:38:47,218 INFO mapreduce.Job: Job job_1545838679858_0001 running in uber mode : false
2018-12-26 23:38:47,219 INFO mapreduce.Job:  map 0% reduce 0%
2018-12-26 23:39:15,648 INFO mapreduce.Job:  map 100% reduce 0%
2018-12-26 23:39:26,716 INFO mapreduce.Job:  map 100% reduce 100%
2018-12-26 23:39:52,890 INFO mapreduce.Job: Job job_1545838679858_0001 completed successfully
2018-12-26 23:39:53,014 INFO mapreduce.Job: Counters: 53
    File System Counters
        FILE: Number of bytes read=46271
        FILE: Number of bytes written=527741
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=147243
        HDFS: Number of bytes written=34795
        HDFS: Number of read operations=8
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=26690
        Total time spent by all reduces in occupied slots (ms)=8610
        Total time spent by all map tasks (ms)=26690
        Total time spent by all reduce tasks (ms)=8610
        Total vcore-milliseconds taken by all map tasks=26690
        Total vcore-milliseconds taken by all reduce tasks=8610
        Total megabyte-milliseconds taken by all map tasks=27330560
        Total megabyte-milliseconds taken by all reduce tasks=8816640
    Map-Reduce Framework
        Map input records=2746
        Map output records=21463
        Map output bytes=228869
        Map output materialized bytes=46271
        Input split bytes=99
        Combine input records=21463
        Combine output records=2965
        Reduce input groups=2965
        Reduce shuffle bytes=46271
        Reduce input records=2965
        Reduce output records=2965
        Spilled Records=5930
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=230
        CPU time spent (ms)=2200
        Physical memory (bytes) snapshot=327757824
        Virtual memory (bytes) snapshot=5473439744
        Total committed heap usage (bytes)=143884288
        Peak Map Physical memory (bytes)=210673664
        Peak Map Virtual memory (bytes)=2733326336
        Peak Reduce Physical memory (bytes)=117084160
        Peak Reduce Virtual memory (bytes)=2740113408
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=147144
    File Output Format Counters
        Bytes Written=34795

 

Check the generated output files

[root@node01 mapreduce]# hdfs dfs -ls /output
Found 2 items
-rw-r--r--   2 root supergroup          0 2018-12-26 23:39 /output/_SUCCESS
-rw-r--r--   2 root supergroup      34795 2018-12-26 23:39 /output/part-r-00000

 

# The part-r prefix indicates output written by a reduce task
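To inspect the actual counts, read the reducer output directly from HDFS:

hdfs dfs -cat /output/part-r-00000 | head -n 10   # first ten (word, count) lines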