Experiment 5

Basic MapReduce Programming Practice

 

1. Objectives

(1) Master basic MapReduce programming methods through hands-on practice;

(2) Learn to solve common data-processing problems with MapReduce, including data deduplication, data sorting, and data mining.

2. Platform

(1) Operating system: Linux (Ubuntu 16.04 or Ubuntu 18.04 recommended)

(2) Hadoop version: 3.1.3

3. Procedure

(I) Merging files and removing duplicates

For two input files, A and B, write a MapReduce program that merges them and removes duplicate content, producing a new output file C. Samples of the input and output files are given below for reference.

Sample of input file A:

 

20170101     x

20170102     y

20170103     x

20170104     y

20170105     z

20170106     x

 

Sample of input file B:

20170101      y

20170102      y

20170103      x

20170104      z

20170105      y

 

Sample of output file C obtained by merging input files A and B:

20170101      x

20170101      y

20170102      y

20170103      x

20170104      y

20170104      z

20170105      y

20170105      z

20170106      x

 

 

Approach: the mapper emits each whole input line as the key with an empty value; the shuffle stage groups identical lines under one key, so the reducer only has to write each key once to remove duplicates.

Code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class MergeAndDeduplicate {

    public static class MergeMapper extends Mapper<Object, Text, Text, Text> {
        private Text line = new Text();
        private Text empty = new Text("");

        // Emit the whole input line as the key; identical lines from file A and
        // file B are grouped together by the shuffle stage.
        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            line.set(value);
            context.write(line, empty);
        }
    }

    public static class DeduplicateReducer extends Reducer<Text, Text, Text, Text> {
        private Text empty = new Text("");

        // Each distinct line arrives as exactly one key, so writing the key once
        // removes all duplicates; no extra bookkeeping (e.g. a HashSet) is needed.
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            context.write(key, empty);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "merge and deduplicate");
        job.setJarByClass(MergeAndDeduplicate.class);
        job.setMapperClass(MergeMapper.class);
        job.setReducerClass(DeduplicateReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
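One optional refinement, shown as a sketch rather than a required part of the assignment: because this reduce step only writes each key once, the same class can also serve as a combiner, so duplicates are already dropped on the map side before data is shuffled across the network. The line below would go into main() before the job is submitted:

// Optional: also deduplicate on the map side to shrink shuffle traffic.
job.setCombinerClass(DeduplicateReducer.class);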

 

 

(II) Sorting the contents of the input files

There are several input files, each containing one integer per line. Write a MapReduce program that reads the integers from all files, sorts them in ascending order, and writes them to a new output file in the format of two integers per line: the first is the rank of the second integer in the sorted order, and the second is one of the original integers. Samples of the input and output files are given below for reference.

Sample of input file 1:

33

37

12

40

 

Sample of input file 2:

4

16

39

5

 

Sample of input file 3:

1

45

25

 

Output file obtained from input files 1, 2, and 3:

1 1

2 4

3 5

4 12

5 16

6 25

7 33

8 37

9 39

10 40

11 45

 

 

 

Approach: the mapper emits each integer as an IntWritable key, and the shuffle stage delivers the keys to the single reducer already sorted in ascending order. The reducer therefore only needs a running rank counter; looping over the values keeps duplicate integers in the output.

Code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class SortIntegers {

    public static class SortMapper extends Mapper<Object, Text, IntWritable, IntWritable> {
        private IntWritable number = new IntWritable();
        private IntWritable one = new IntWritable(1);

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString().trim();
            if (!line.isEmpty()) {              // ignore blank lines
                number.set(Integer.parseInt(line));
                context.write(number, one);
            }
        }
    }

    public static class SortReducer extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {
        private IntWritable rank = new IntWritable(1);

        // The shuffle stage delivers keys in ascending order, so the reducer only
        // assigns consecutive ranks; looping over the values preserves duplicates.
        @Override
        public void reduce(IntWritable key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            for (IntWritable ignored : values) {
                context.write(rank, key);
                rank.set(rank.get() + 1);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "sort integers");
        job.setJarByClass(SortIntegers.class);
        job.setMapperClass(SortMapper.class);
        job.setReducerClass(SortReducer.class);
        job.setNumReduceTasks(1);               // one reducer => globally sorted output
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
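If the data were too large for a single reducer, a custom Partitioner could split the key space into ordered ranges, so that each reducer's part file is sorted and the part files concatenate into a globally sorted sequence. A minimal sketch, assuming (hypothetically) that all input integers fall in [0, 10000):

// Sketch only: the bound 10000 is an assumed property of the data, not part of
// the assignment. Reducer i then receives a contiguous slice of the key space.
public static class RangePartitioner extends org.apache.hadoop.mapreduce.Partitioner<IntWritable, IntWritable> {
    @Override
    public int getPartition(IntWritable key, IntWritable value, int numPartitions) {
        int sliceWidth = 10000 / numPartitions;
        return Math.min(key.get() / sliceWidth, numPartitions - 1);
    }
}

Note that the rank counter would then restart at 1 in every reducer, so the ranks would need per-partition offsets in a follow-up pass; for data of this size, the single-reducer version above is the simpler choice.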

 

(III) Mining information from a given table

Given the child-parent table below, mine the parent-child relationships in it and produce a table of grandchild-grandparent relationships.

The input file is as follows:

child          parent

Steven        Lucy

Steven        Jack

Jone         Lucy

Jone         Jack

Lucy         Mary

Lucy         Frank

Jack         Alice

Jack         Jesse

David       Alice

David       Jesse

Philip       David

Philip       Alma

Mark       David

Mark       Alma

 

The output file is as follows:

grandchild       grandparent

Steven          Alice

Steven          Jesse

Jone            Alice

Jone            Jesse

Steven          Mary

Steven          Frank

Jone            Mary

Jone            Frank

Philip           Alice

Philip           Jesse

Mark           Alice

Mark           Jesse

 

Approach: this is a single-table self-join. For each (child, parent) record, the mapper emits two tagged copies: one keyed on the parent, tagging the child as a potential grandchild, and one keyed on the child, tagging the parent as a potential grandparent. Both tags for the same person then meet in one reduce group, where the cross product of the grandchild list and the grandparent list yields the grandchild-grandparent pairs.

Code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class GrandchildGrandparent {

    public static class RelationshipMapper extends Mapper<Object, Text, Text, Text> {

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String[] parts = value.toString().trim().split("\\s+");
            // Skip the header line and malformed records.
            if (parts.length != 2 || parts[0].equals("child")) {
                return;
            }
            String child = parts[0];
            String parent = parts[1];
            // Key on the middle person of a potential grandchild-grandparent chain:
            // from the parent's point of view, 'child' is a potential grandchild ...
            context.write(new Text(parent), new Text("child:" + child));
            // ... and from the child's point of view, 'parent' is a potential grandparent.
            context.write(new Text(child), new Text("parent:" + parent));
        }
    }

    public static class RelationshipReducer extends Reducer<Text, Text, Text, Text> {
        private boolean headerWritten = false;

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            if (!headerWritten) {
                context.write(new Text("grandchild"), new Text("grandparent"));
                headerWritten = true;
            }
            // Split the tagged records: the key is the middle generation, so its
            // children are grandchildren and its parents are grandparents.
            List<String> grandchildren = new ArrayList<>();
            List<String> grandparents = new ArrayList<>();
            for (Text value : values) {
                String record = value.toString();
                if (record.startsWith("child:")) {
                    grandchildren.add(record.substring("child:".length()));
                } else {
                    grandparents.add(record.substring("parent:".length()));
                }
            }
            // Every pair connected through this person belongs in the output.
            for (String grandchild : grandchildren) {
                for (String grandparent : grandparents) {
                    context.write(new Text(grandchild), new Text(grandparent));
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "grandchild grandparent");
        job.setJarByClass(GrandchildGrandparent.class);
        job.setMapperClass(RelationshipMapper.class);
        job.setReducerClass(RelationshipReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
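Design note: keying both tagged records on the middle person is what makes a single MapReduce pass sufficient, since the reducer sees, for each person, everyone one generation below and one generation above at the same time. One caveat: if the same grandchild-grandparent pair were connected through two different middle people, it would be emitted twice; the deduplication technique from Part (I) could then be applied as a second job. (The sample data above contains no such case.)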
