After hdfs namenode -format, first check the node's memory consumption to decide whether it can support starting YARN.

yarn-default.xml, Hadoop 3.0.0: http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

yarn.scheduler.minimum-allocation-mb (default 1024): The minimum allocation for every container request at the RM in MBs. Memory requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have less memory than this value will be shut down by the resource manager.
yarn.scheduler.maximum-allocation-mb (default 8192): The maximum allocation for every container request at the RM in MBs. Memory requests higher than this will throw an InvalidResourceRequestException.

yarn-default.xml, Hadoop 2.7.5: http://hadoop.apache.org/docs/r2.7.5/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

yarn.scheduler.minimum-allocation-mb (default 1024): The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this will throw a InvalidResourceRequestException.
yarn.scheduler.maximum-allocation-mb (default 8192): The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this will throw a InvalidResourceRequestException.

The key change in 3.0.0 is the added sentence: "Additionally, a node manager that is configured to have less memory than this value will be shut down by the resource manager." In 2.7.5 an undersized request only threw InvalidResourceRequestException; in 3.0.0 a NodeManager configured with less memory than the scheduler minimum is shut down outright, which is why memory has to be checked before starting YARN on these small nodes (see the sketch below).
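A minimal sketch of how the related properties could be set; the values are chosen for a ~1.8 GB node rather than taken from this cluster's real configuration, the yarn.nodemanager.resource.memory-mb property and the scratch-file path are illustrative, and the fragment would still have to be merged by hand into the <configuration> element of /usr/local/hadoop/etc/hadoop/yarn-site.xml on every node:

# Sketch only: write the assumed memory-related properties to a scratch file,
# then merge them into yarn-site.xml on every node by hand.
cat > /tmp/yarn-memory-fragment.xml <<'EOF'
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>    <!-- below the 1024 default, for a ~1.8 GB node -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value>   <!-- keep this >= the scheduler minimum, or the 3.0.0 RM shuts the NM down -->
</property>
EOF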

 

 

【1】shutdown -r
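The post only shows the bare command; if the reboot is meant for all three hosts (an assumption), a sketch using the same ssh pattern as the cleanup script in step 【2】 would be:

# Assumed helper: reboot every node of the small cluster over ssh
# (run it from outside the cluster, or reboot the local node last).
for h in bigdata-server-01 bigdata-server-02 bigdata-server-03; do
    ssh "$h" 'shutdown -r now'
done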

【2】cd /usr/local/hadoop; ./root_rm_logs_mydn-nn_roottmp.sh;

[root@bigdata-server-02 hadoop]# cat root_rm_logs_mydn-nn_roottmp.sh
# Wipe the DataNode/NameNode data dirs, /tmp, and the Hadoop logs on all three nodes.
ssh bigdata-server-01 'cd /usr/local/hadoop;rm -rf {mydatanode,mynamenode}/*;rm -rf /tmp/*;rm -rf logs/*';
ssh bigdata-server-02 'cd /usr/local/hadoop;rm -rf {mydatanode,mynamenode}/*;rm -rf /tmp/*;rm -rf logs/*';
ssh bigdata-server-03 'cd /usr/local/hadoop;rm -rf {mydatanode,mynamenode}/*;rm -rf /tmp/*;rm -rf logs/*';
【3】./bin/hdfs namenode -format;
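A quick sanity check, assuming the mynamenode directory wiped in step 【2】 is the configured dfs.namenode.name.dir (an assumption based on its name): after a successful format it should contain fresh metadata, including a VERSION file with a new clusterID.

# Assumed name dir: /usr/local/hadoop/mynamenode (the dir emptied in step 【2】).
ls /usr/local/hadoop/mynamenode/current/
grep clusterID /usr/local/hadoop/mynamenode/current/VERSION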

After this step, check memory consumption to decide whether there is enough memory to start YARN.

【4】free -m;

[root@bigdata-server-02 hadoop]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1839         782         667           0         389         906
Swap:             0           0           0
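A minimal sketch of the decision itself, assuming the default 1024 MB for yarn.scheduler.minimum-allocation-mb and the /usr/local/hadoop install used above; strictly, the 3.0.0 shutdown rule compares the NodeManager's configured memory (yarn.nodemanager.resource.memory-mb) with the minimum, so free RAM is only a rough proxy here:

# Rough check: only start YARN if the host still has at least the scheduler's
# minimum container allocation (1024 MB by default) available.
MIN_ALLOC_MB=1024                                # assumed yarn.scheduler.minimum-allocation-mb
avail_mb=$(free -m | awk '/^Mem:/ {print $7}')   # "available" column of free -m
if [ "$avail_mb" -ge "$MIN_ALLOC_MB" ]; then
    /usr/local/hadoop/sbin/start-yarn.sh
else
    echo "only ${avail_mb} MB available, not starting YARN" >&2
fi

With the free -m output above (906 MB available), this check would refuse to start YARN under the default minimum, which is exactly the situation the post is warning about.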

 

posted @ 2017-12-25 22:18  papering