WordCount Analysis

An analysis of the Map and Reduce phases in the WordCount program.

 
The Map phase:
(1) Splitting the input. The test files are small, so each file becomes a single split, and each split is then broken into <key, value> pairs line by line (the key is the line's byte offset within the file, the value is the line itself). This step is performed entirely by the framework; a worked example follows:
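For concreteness (the original screenshots are no longer available), suppose a hypothetical input file contains these four lines:

Hello World
Hello Hadoop
Bye World
Bye Hadoop

The framework would then produce one pair per line, keyed by byte offset:

<0, "Hello World">
<12, "Hello Hadoop">
<25, "Bye World">
<35, "Bye Hadoop">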
(2) The split <key, value> pairs are handed to the user-defined map method, which processes them and emits new <key, value> pairs. This step is carried out by the user's map function; each line corresponds to one map invocation, four in total:
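Continuing the hypothetical example, the four map invocations would emit:

<Hello, 1>, <World, 1>
<Hello, 1>, <Hadoop, 1>
<Bye, 1>, <World, 1>
<Bye, 1>, <Hadoop, 1>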
(3) After collecting the <key, value> pairs emitted by map, the framework sorts them by key and, when a combiner is configured, performs the Combine step, summing the values of identical keys locally. This sorting and merging is handled by the framework:
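For the same example, sorting and combining would yield:

<Bye, 2>
<Hadoop, 2>
<Hello, 2>
<World, 2>

One caveat: the Combine step only runs if a combiner is registered on the job. The program below does not register one, so this summing would actually happen in the reducer. Because the reducer's logic is also a valid combiner here (summation is associative, and the input and output types match), enabling it would be a single extra line in main:

job.setCombinerClass(MyReducer.class);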
 
The Reduce phase:
The Reducer first sorts the data received from the Mappers, then hands it to the user-defined reduce method, which produces the final <key, value> pairs; here the work is completed by four reduce invocations, one per distinct word:
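For the hypothetical example, the final output would be:

Bye	2
Hadoop	2
Hello	2
World	2

The complete program: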
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;


public class WordCount {

	/**
	 * A minimal Hadoop MapReduce WordCount example.
	 * @author nwpulisz
	 * @date 2016.3.29
	 */
	// HDFS input/output locations; adjust the namenode host/port to your cluster.
	static final String INPUT_PATH="hdfs://192.168.255.132:9000/INPUT";
	static final String OUTPUT_PATH="hdfs://192.168.255.132:9000/OUTPUT";
	
	public static void main(String[] args) throws Throwable {
		Configuration conf = new Configuration();
		Path outputPath = new Path(OUTPUT_PATH);
		Job job = new Job(conf, "WordCount");
		job.setJarByClass(WordCount.class); // lets Hadoop locate the jar containing this class

		// If the output path already exists, delete it first; otherwise the job would fail.
		FileSystem fileSystem = FileSystem.get(new URI(OUTPUT_PATH), conf);
		if (fileSystem.exists(outputPath)) {
			fileSystem.delete(outputPath, true);
		}

		FileInputFormat.setInputPaths(job, INPUT_PATH);
		FileOutputFormat.setOutputPath(job, outputPath);
		
		job.setMapperClass(MyMapper.class);
		job.setReducerClass(MyReducer.class);
		// The job's final output types: <word, count>.
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(LongWritable.class);
		job.waitForCompletion(true);
	}
	
	static class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

		@Override
		protected void map(LongWritable key, Text value,
				Context context) throws IOException, InterruptedException {
			// Break the line into words on any run of non-word characters.
			String[] splits = value.toString().split("\\W+");
			for (String word : splits) {
				if (!word.isEmpty()) { // split() can yield an empty leading token
					context.write(new Text(word), new LongWritable(1));
				}
			}
		}
	}
	
	static class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

		@Override
		protected void reduce(Text key, Iterable<LongWritable> values,
				Context context) throws IOException, InterruptedException {
			// Sum all the 1s emitted for this word.
			long times = 0L;
			for (LongWritable value : values) {
				times += value.get();
			}
			context.write(key, new LongWritable(times));
		}
	}
	
}
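One note on MyMapper above: it allocates a new Text and a new LongWritable for every word it emits. A common Hadoop idiom (shown here as a sketch, not part of the original program) reuses Writable instances across calls to cut down on object churn; this is safe because context.write serializes the pair immediately:

	static class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
		// Reused across map() calls instead of being re-allocated per word.
		private final Text word = new Text();
		private final LongWritable one = new LongWritable(1);

		@Override
		protected void map(LongWritable key, Text value, Context context)
				throws IOException, InterruptedException {
			for (String s : value.toString().split("\\W+")) {
				if (!s.isEmpty()) {
					word.set(s);
					context.write(word, one);
				}
			}
		}
	}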


Input: (screenshot omitted)

Output: (screenshot omitted)
The permission error:
The first run failed with a permission error: java.io.IOException: Failed to set permissions... (a well-known issue when submitting Hadoop 1.x jobs from Windows).
The fix:
1. Locate FileUtil.java in the Hadoop source; on my machine the path is E:\hadoop-1.1.2\hadoop-1.1.2\src\core\org\apache\hadoop\fs;
2. Find the checkReturnValue method and comment out its body (see the sketch after this list);
3. Recompile the file (the relevant Hadoop jars must be on the classpath), which produces two .class files;
4. Overwrite the two original classes in hadoop-core-1.1.2.jar\org\apache\hadoop\fs with the newly generated ones.
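Since the original screenshot is gone, here is roughly what the patched method looks like. The signature matches FileUtil in Hadoop 1.1.2; treat the commented-out body as an approximation:

private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission
                                     ) throws IOException {
    // Body disabled so that a failed chmod on Windows no longer aborts
    // the job with "Failed to set permissions of path: ...".
    /*
    if (!rv) {
        throw new IOException("Failed to set permissions of path: " + p +
                              " to " +
                              String.format("%04o", permission.toShort()));
    }
    */
}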





posted @ 2016-03-31 11:13 nwpulisz