hadoop1 & hadoop2 fair-scheduler configuration and usage
hadoop1
- Configure mapred-site.xml and add the following:
<property>
<name>mapred.jobtracker.taskScheduler</name>
<value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
<name>mapred.fairscheduler.allocation.file</name>
<value>/etc/hadoop/conf/pools.xml</value>
</property>
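In Hadoop 1 the fair scheduler ships as a contrib module, so the scheduler jar also has to be on the JobTracker classpath before this setting takes effect. A minimal sketch, assuming a plain Hadoop 1.x tarball layout (the exact directory and jar name depend on your version and distribution):
# Copy the contrib fair scheduler jar where the JobTracker loads it, then restart the JobTracker.
cp $HADOOP_HOME/contrib/fairscheduler/hadoop-fairscheduler-*.jar $HADOOP_HOME/lib/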
- Configure pools.xml (the allocation file) and add the following; the queue definitions sit inside an <allocations> root element:
<?xml version="1.0"?>
<allocations>
<queue name="default">
<minResources>1024 mb,1vcores</minResources>
<maxResources>61440 mb,20vcores</maxResources>
<maxRunningApps>10</maxRunningApps>
<weight>2.0</weight>
<schedulingPolicy>fair</schedulingPolicy>
</queue>
<queue name="hadoop">
<minResources>1024 mb,10vcores</minResources>
<maxResources>3072000 mb,960vcores</maxResources>
<maxRunningApps>60</maxRunningApps>
<weight>5.0</weight>
<schedulingPolicy>fair</schedulingPolicy>
<aclSubmitApps>hadoop,yarn,spark</aclSubmitApps>
</queue>
<queue name="spark">
<minResources>1024 mb,10vcores</minResources>
<maxResources>61440 mb,20vcores</maxResources>
<maxRunningApps>10</maxRunningApps>
<weight>4.0</weight>
<schedulingPolicy>fair</schedulingPolicy>
<aclSubmitApps>yarn,spark</aclSubmitApps>
</queue>
<userMaxAppsDefault>20</userMaxAppsDefault>
</allocations>
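Once the JobTracker is running with this configuration, the pools and their current usage can be checked on the fair scheduler admin page of the JobTracker web UI. A sketch, assuming a hypothetical host jt-host and the default JobTracker web port 50030:
# The fair scheduler exposes its status page under /scheduler on the JobTracker UI.
curl http://jt-host:50030/scheduler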
- To specify the queue when submitting a job:
-Dmapred.job.queue.name=hadoop
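For example, a MapReduce job can be sent to the hadoop queue as follows. A sketch, assuming the examples jar bundled with Hadoop 1 and illustrative HDFS paths; note that the stock Hadoop 1 fair scheduler takes the pool name from user.name unless mapred.fairscheduler.poolnameproperty is pointed at mapred.job.queue.name:
# Only the -D flag matters here; the jar name and input/output paths are illustrative.
hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount \
  -Dmapred.job.queue.name=hadoop \
  /user/hadoop/input /user/hadoop/output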
hadoop2
- Configure yarn-site.xml and add the following:
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
<name>yarn.scheduler.fair.allocation.file</name>
<value>/home/cluster/conf/hadoop/fair-scheduler.xml</value>
</property>
<property>
<name>yarn.scheduler.fair.user-as-default-queue</name>
<!-- Defaults to true, which sends jobs submitted without an explicit queue to a queue
     named after the submitting user; set it to false explicitly if you do not want
     per-user queues. -->
<value>false</value>
</property>
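yarn-site.xml is only read when the ResourceManager starts, so the ResourceManager has to be restarted for these properties to take effect (the allocation file itself is re-read periodically while the ResourceManager runs). A sketch, assuming a tarball install managed with the bundled daemon scripts:
# Restart the ResourceManager so it loads the FairScheduler class and the new settings.
$HADOOP_HOME/sbin/yarn-daemon.sh stop resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager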
- Configure fair-scheduler.xml and add the following; as with pools.xml above, everything sits inside an <allocations> root element:
<?xml version="1.0"?>
<allocations>
<queue name="default">
<minResources>1024 mb,1vcores</minResources>
<maxResources>61440 mb,20vcores</maxResources>
<maxRunningApps>10</maxRunningApps>
<weight>2.0</weight>
<schedulingPolicy>fair</schedulingPolicy>
</queue>
<queue name="hadoop">
<minResources>1024 mb,10vcores</minResources>
<maxResources>3072000 mb,960vcores</maxResources>
<maxRunningApps>60</maxRunningApps>
<weight>5.0</weight>
<schedulingPolicy>fair</schedulingPolicy>
<aclSubmitApps>hadoop,yarn,spark</aclSubmitApps>
</queue>
<queue name="spark">
<minResources>1024 mb,10vcores</minResources>
<maxResources>61440 mb,20vcores</maxResources>
<maxRunningApps>10</maxRunningApps>
<weight>4.0</weight>
<schedulingPolicy>fair</schedulingPolicy>
<aclSubmitApps>yarn,spark</aclSubmitApps>
</queue>
<userMaxAppsDefault>20</userMaxAppsDefault>
</allocations>
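After the ResourceManager picks up the allocation file, the resulting queue tree (root.default, root.hadoop, root.spark) can be verified on the scheduler page of the ResourceManager web UI or through its REST API. A sketch, assuming a hypothetical host rm-host and the default web port 8088:
# Dumps the scheduler state, including the fair scheduler queues and their resources.
curl http://rm-host:8088/ws/v1/cluster/scheduler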
- To specify the queue when submitting a job:
-Dmapreduce.job.queuename=root.hadoop
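For example, with the examples jar that ships with Hadoop 2 (jar path and HDFS paths are illustrative; only the -D flag matters here):
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount \
  -Dmapreduce.job.queuename=root.hadoop \
  /user/hadoop/input /user/hadoop/output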
spark
- To specify the queue when submitting a Spark job:
--queue=root.spark
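For example, with spark-submit on YARN. A sketch, assuming the SparkPi example bundled with Spark; the examples jar location depends on the Spark version:
# --queue is the relevant flag; master/deploy mode and jar path are illustrative.
spark-submit --master yarn \
  --class org.apache.spark.examples.SparkPi \
  --queue root.spark \
  $SPARK_HOME/lib/spark-examples-*.jar 100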