Hadoop 1.0 Eclipse Plugin: Job Submission

1. Environment

     JDK: 1.6.0_10-rc2

     Hadoop: hadoop-1.0.0.tar.gz

     Eclipse version: 3.4.0

     Hadoop Eclipse plugin: hadoop-eclipse-plugin-1.0.0.jar (download link)

     OS: Windows 7 32-bit Ultimate

 2. Eclipse plugin configuration

    2.1  Copy "hadoop-eclipse-plugin-1.0.0.jar" into Eclipse's "plugins" directory (eclipse/plugins), then restart Eclipse for the plugin to take effect.

    2.2  From Eclipse's Window menu open "Preferences", go to the "Hadoop Map/Reduce" option, and select the Hadoop installation root directory.


     2.3  Configure a Hadoop Location

          Before configuring the Hadoop Location, make sure Hadoop is up and running.

          In Eclipse, switch to the "Map/Reduce Locations" view, right-click inside it, and choose "New Hadoop Location".

         * Map/Reduce Master corresponds to the mapred-site.xml configuration file (the mapred.job.tracker host and port)
         * DFS Master corresponds to the core-site.xml configuration (the fs.default.name host and port); see the config sketch after this list
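
       For reference, here is a minimal sketch of the two configuration files these fields map to, assuming a pseudo-distributed setup on localhost with the example ports commonly used in Hadoop 1.0 tutorials; adjust host names and ports to your cluster.

       <!-- core-site.xml: the DFS Master host/port must match fs.default.name -->
       <configuration>
         <property>
           <name>fs.default.name</name>
           <value>hdfs://localhost:9000</value>
         </property>
       </configuration>

       <!-- mapred-site.xml: the Map/Reduce Master host/port must match mapred.job.tracker -->
       <configuration>
         <property>
           <name>mapred.job.tracker</name>
           <value>localhost:9001</value>
         </property>
       </configuration>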

       After creating the location, switch to the Java EE perspective and refresh "DFS Locations" on the right; the DFS file tree will appear.


        You can right-click a node to create and delete directories as a quick test, as in the sketch below.
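
        If you prefer to verify the connection from code rather than the context menu, here is a minimal smoke-test sketch using the same FileSystem API as the example below; the hdfs://localhost:9000 URI and the /user/admin/test path are assumptions, so adjust them to your setup.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsSmokeTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed HDFS URI; it must match fs.default.name in core-site.xml
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
    Path dir = new Path("/user/admin/test"); // hypothetical test directory
    System.out.println("mkdirs: " + fs.mkdirs(dir));      // create the directory
    System.out.println("exists: " + fs.exists(dir));      // confirm it is there
    System.out.println("delete: " + fs.delete(dir, true)); // recursive delete
    fs.close();
  }
}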

3. Running the WordCount example

  Create a new Map/Reduce Project.


  After the project is created, add the WordCount class under the corresponding package (the source is in the hadoop\src\examples\org\apache\hadoop\examples directory of the distribution).

 WordCount.java

package org.apache.hadoop.examples;

import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

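  // Mapper: splits each input line into tokens and emits (word, 1) for each one.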
  public static class TokenizerMapper 
       extends Mapper<Object, Text, Text, IntWritable>{
    
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
      
    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }
  
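  // Reducer (also used as the combiner): sums the counts emitted for each word.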
  public static class IntSumReducer 
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();
    public void reduce(Text key, Iterable<IntWritable> values, 
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

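  // Driver: parses the command line, submits the job, then prints the result file.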
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    // Delete the output directory if it already exists, otherwise the job
    // fails with "output directory already exists".
    FileSystem fileSystem = FileSystem.get(URI.create(otherArgs[1]), conf);
    fileSystem.delete(new Path(otherArgs[1]), true);
    
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    boolean flag = job.waitForCompletion(true);
    // Dump the first reducer's output file to the console.
    FSDataInputStream fDataInputStream = null;
    try {
      fDataInputStream = fileSystem.open(new Path(otherArgs[1], "part-r-00000"));
      String line = null;
      // readLine() is deprecated but sufficient for this simple dump
      while ((line = fDataInputStream.readLine()) != null) {
        System.out.println(line);
      }
    } catch (Exception e) {
      e.printStackTrace();
    } finally {
      IOUtils.closeStream(fDataInputStream);
      IOUtils.closeStream(fileSystem);
    }
    System.exit( flag ? 0 : 1);
  }
}

  Running the example

     1. Right-click WordCount.java-->Run As-->Run Configurations
     2. In the Run Configurations dialog, select Java Application, right-click-->New; this creates a new launch configuration named WordCount
     3. Configure the arguments: on the Arguments tab, set Program arguments to
        /user/admin/input /user/admin/output
     4. The run may throw a java.lang.OutOfMemoryError: Java heap space; in that case set VM arguments (below Program arguments) to
        -Xms512m -Xmx512m -XX:PermSize=96m
     5. Right-click-->Run on Hadoop, then refresh DFS Locations on the right to see the results
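
     After a successful run, the output sits in /user/admin/output/part-r-00000 as tab-separated "word<TAB>count" lines, and the program above also prints that file to the Eclipse console. Assuming the paths used above, you can double-check from the command line with bin/hadoop fs -cat /user/admin/output/part-r-00000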


posted on 2012-05-24 13:01 by YangJin