Speculative Execution in Hadoop

Source: http://blog.csdn.net/macyang/article/details/7880671

 

Speculative execution works like this: once all of a job's tasks have started running, the JobTracker tracks the average progress of all tasks. If some task is running slower than the overall average, for example because its task node has a weaker hardware configuration or a high CPU load (there can be many reasons), the JobTracker launches a new, duplicate task, so a single task has multiple attempts running at the same time. Whichever attempt finishes first wins, and the other one is killed. This is why, on the JobTracker page, you often see a job finish successfully while some of its tasks show up as killed. In addition, by the nature of a MapReduce job, executing the same task multiple times produces the same result, so as long as one attempt of a task succeeds, the job succeeds; the killed attempts have no effect on the job's result.
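Conceptually, the "whichever attempt finishes first wins, the rest are killed" behaviour resembles java.util.concurrent's invokeAny. The sketch below is not Hadoop code, just a plain-Java analogy assuming two identical attempts that compute the same deterministic result:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SpeculativeAttemptDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<String> attempt = () -> {
            // Both attempts compute the same deterministic output,
            // so it does not matter which one finishes first.
            Thread.sleep((long) (Math.random() * 1000));
            return "task output";
        };
        // Run the original attempt plus one speculative duplicate;
        // invokeAny returns the first result and cancels the other attempt.
        String result = pool.invokeAny(List.of(attempt, attempt));
        System.out.println(result);
        pool.shutdown();
    }
}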


Configuration parameters:

 

mapred.map.tasks.speculative.execution=true

mapred.reduce.tasks.speculative.execution=true

These two properties control speculative execution. If you have never touched them, that is fine; they both default to true.
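The same properties can also be set per job in code. A minimal sketch, assuming the classic MRv1 JobConf API that these property names belong to:

import org.apache.hadoop.mapred.JobConf;

public class SpeculationConfig {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Same properties as above, set programmatically for one job.
        conf.setBoolean("mapred.map.tasks.speculative.execution", true);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", true);
        // JobConf also has convenience setters for the same switches:
        // conf.setMapSpeculativeExecution(true);
        // conf.setReduceSpeculativeExecution(true);
    }
}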

 

Hadoop decides whether to kill a task based on its task progress score:

 

For a map task, the progress score is the fraction of input data read.

For a reduce task, the execution is divided into three phases, each of which accounts for 1/3 of the score:
• The copy phase, when the task fetches map outputs.
• The sort phase, when map outputs are sorted by key.
• The reduce phase, when a user-defined function is applied to the list of map outputs with each key.
In each phase, the score is the fraction of data processed.
For example,
• a task halfway through the copy phase has a progress score of 1/2 × 1/3 = 1/6
• a task halfway through the reduce phase has a progress score of 1/3 + 1/3 + (1/2 × 1/3) = 5/6
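To make the arithmetic concrete, here is a small sketch (not Hadoop source code) of the progress-score formula described above:

public class ProgressScore {
    // For a map task: the fraction of input data read.
    static double mapScore(double fractionOfInputRead) {
        return fractionOfInputRead;
    }

    // For a reduce task: completedPhases is 0 (in copy), 1 (in sort), or 2 (in reduce);
    // each completed phase contributes 1/3, plus 1/3 of the current phase's fraction.
    static double reduceScore(int completedPhases, double fractionInCurrentPhase) {
        return completedPhases * (1.0 / 3.0) + fractionInCurrentPhase * (1.0 / 3.0);
    }

    public static void main(String[] args) {
        System.out.println(reduceScore(0, 0.5)); // halfway through copy   -> 1/6
        System.out.println(reduceScore(2, 0.5)); // halfway through reduce -> 5/6
    }
}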

Hadoop looks at the average progress score of each category of tasks (maps and reduces) to define a threshold for speculative execution. When a task’s progress score is less than the average for its category by a threshold, and the task has run for a certain amount of time, it is considered slow. The scheduler also ensures that at most one speculative copy of each task is running at a time. When running multiple jobs, Hadoop uses a FIFO discipline where the earliest submitted job is asked for a task to run, then the second, etc. There is also a priority system for putting jobs into higher-priority queues.
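A rough sketch of that decision rule (again, not Hadoop's actual scheduler code; the threshold and minimum run time below are illustrative assumptions, not the constants Hadoop uses):

public class SpeculationRule {
    static final double THRESHOLD = 0.2;        // assumed gap below the category average
    static final long MIN_RUN_TIME_MS = 60_000; // assumed minimum run time before speculating

    static boolean shouldSpeculate(double taskScore,
                                   double categoryAverageScore,
                                   long runTimeMs,
                                   boolean speculativeCopyAlreadyRunning) {
        return runTimeMs >= MIN_RUN_TIME_MS
                && taskScore < categoryAverageScore - THRESHOLD
                && !speculativeCopyAlreadyRunning; // at most one speculative copy per task
    }
}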

(Source: http://adhoop.wordpress.com/2012/02/24/speculative-execution-in-hadoop/)
