Solving the top K problem with Hadoop MapReduce
The top K solutions I found online leave a few points unexplained. Since we want the top K, the solution should state explicitly that the number of reduce tasks is one; otherwise it is hard to see why the result is really the global top K.
As a side note, a map task runs as its own process: with N map tasks you get N intermediate files, and with M reduce tasks you get M final output files. With that in mind the reasoning is straightforward: the top K we want are the top K records over the whole data set, so no matter how many mappers there are, there must be a single reducer at the end to merge all of the mapper outputs and emit the global top K.
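To make that explicit in code, the driver can pin the reducer count directly. A minimal sketch (the job object is the one built in run() in the full listing below; the posted driver actually just relies on the default of a single reduce task, so this line only makes the intent visible):

    // Force a single reduce task so one reducer sees every mapper's local top K
    job.setNumReduceTasks(1);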
The approach and the code are given below:
1. Mappers
Use the default number of mappers: each input split is handled by one mapper.
In each map task we find the top K records of that input split. A TreeMap is used to hold the current top K because it is cheap to update: the TreeMap keeps its entries sorted by key, so after inserting a new record we can evict the entry with the smallest key (firstKey()) whenever the size exceeds K, which makes it behave like a size-K min-heap. In map() we try to update the TreeMap with every record, and when the split is exhausted we are left with its local top K. One thing to watch out for: in a typical mapper we call context.write() (or output.collect()) once per input record, but not here. Here we only write after the whole input split has been processed, so the context.write() calls go into cleanup(), the method the framework calls once after a map task has consumed all of its input.
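To see the eviction pattern in isolation, here is a tiny self-contained sketch (the weights and K = 3 are made-up values, just for illustration):

    import java.util.TreeMap;

    // Standalone illustration of the "keep only the K largest" pattern used in the mapper
    public class TopKDemo {
        public static void main(String[] args) {
            final int K = 3;
            int[] weights = {17, 3, 42, 8, 99, 25, 1};
            TreeMap<Integer, String> topK = new TreeMap<Integer, String>();
            for (int w : weights) {
                topK.put(w, "record-" + w);        // TreeMap keeps keys sorted ascending
                if (topK.size() > K) {
                    topK.remove(topK.firstKey());  // evict the current smallest key
                }
            }
            // Prints the 3 largest weights in ascending order: [25, 42, 99]
            System.out.println(topK.keySet());
        }
    }

One caveat of keying the TreeMap by the weight itself: two records with equal weight collide and the later one overwrites the earlier, so strictly speaking this keeps the top K distinct weights. A TreeMap<Integer, List<Text>> or a priority queue of records would preserve ties.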
2. Reducers
As explained above, there is only one reducer; it aggregates the output of all the mappers and again selects the top K from it, which gives us the global result. Note that we are using NullWritable here. The reason for this is we want all of the outputs from all of the mappers to be grouped into a single key in the reducer.
package seven.ili.patent;

/**
 * Created with IntelliJ IDEA.
 * User: Isaac Li
 * Date: 12/4/12
 * Time: 5:48 PM
 */

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import java.io.IOException;
import java.util.TreeMap;

// Use MapReduce to find the K largest records in a massive data set
public class Top_k_new extends Configured implements Tool {

    public static class MapClass extends Mapper<LongWritable, Text, NullWritable, Text> {
        public static final int K = 100;
        // Sorted by weight; the smallest key is evicted once the size exceeds K
        private TreeMap<Integer, Text> fatcats = new TreeMap<Integer, Text>();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] str = value.toString().split(",", -2);
            int temp = Integer.parseInt(str[8]);      // field 8 is the weight we rank by
            // Copy the Text: Hadoop reuses the same value object for every record
            fatcats.put(temp, new Text(value));
            if (fatcats.size() > K)
                fatcats.remove(fatcats.firstKey());   // drop the current smallest
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            // Emit the local top K only after the whole split has been processed
            for (Text text : fatcats.values()) {
                context.write(NullWritable.get(), text);
            }
        }
    }

    public static class Reduce extends Reducer<NullWritable, Text, NullWritable, Text> {
        public static final int K = 100;
        private TreeMap<Integer, Text> fatcats = new TreeMap<Integer, Text>();

        public void reduce(NullWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text val : values) {
                // The values are the original comma-separated lines, so parse
                // the same weight field the mapper used
                String[] v = val.toString().split(",", -2);
                int weight = Integer.parseInt(v[8]);
                fatcats.put(weight, new Text(val));   // copy: the iterator reuses val
                if (fatcats.size() > K)
                    fatcats.remove(fatcats.firstKey());
            }
            // Global top K, in ascending order of weight
            for (Text text : fatcats.values())
                context.write(NullWritable.get(), text);
        }
    }

    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        Job job = new Job(conf, "TopKNum");
        job.setJarByClass(Top_k_new.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MapClass.class);
        // job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new Top_k_new(), args);
        System.exit(res);
    }
}
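To run the job, the class can be packaged into a jar and launched with the standard hadoop jar command. A sketch (the jar name and the HDFS paths are placeholders; args[0] is the input, args[1] the output directory):

    hadoop jar top-k.jar seven.ili.patent.Top_k_new /input/patents /output/topk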
Posted on 2012-12-04 18:17 by brainworm