Reading Custom Writable Type Values from a SequenceFile

1) Hadoop lets programmers create custom data types. A type used as a key must implement WritableComparable, because keys take part in sorting; a type used only as a value just needs to implement Writable. The following defines a DoubleArrayWritable that extends ArrayWritable:

package matrix;

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.DoubleWritable;

// A Writable array of DoubleWritable values; values only need Writable,
// so extending ArrayWritable with a fixed element type is enough.
public class DoubleArrayWritable extends ArrayWritable {
    public DoubleArrayWritable() {
        super(DoubleWritable.class);
    }

    // Helper: unwrap a DoubleWritable[] back into a plain double[].
    public double[] convert2double(DoubleWritable[] w) {
        double[] value = new double[w.length];
        for (int i = 0; i < value.length; i++) {
            value[i] = w[i].get();
        }
        return value;
    }
}

2) The following code reads the file transB.txt, converts each line into a DoubleArrayWritable, and stores the records in a SequenceFile.

package convert;

import java.io.File;
import java.io.IOException;
import java.net.URI;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.LineIterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;

import matrix.DoubleArrayWritable;

public class SequenceFileWriteDemo {
    public static void main(String[] args) throws IOException {
        String uri = "/home/hadoop/srcData/bDoubleArraySeq";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        Path path = new Path(uri);

        IntWritable key = new IntWritable();                    // key: 1-based line number
        DoubleArrayWritable value = new DoubleArrayWritable();  // value: one row of doubles
        SequenceFile.Writer writer = null;
        try {
            writer = SequenceFile.createWriter(fs, conf, path, key.getClass(),
                    value.getClass());

            final LineIterator it2 = FileUtils.lineIterator(
                    new File("/home/hadoop/srcData/transB.txt"), "UTF-8");
            try {
                int i = 0;
                String[] strings;
                DoubleWritable[] doubleWritables;
                while (it2.hasNext()) {
                    ++i;
                    final String line = it2.nextLine();
                    key.set(i);
                    // Each line of transB.txt is a tab-separated row of doubles.
                    strings = line.split("\t");
                    doubleWritables = new DoubleWritable[strings.length];
                    for (int j = 0; j < doubleWritables.length; j++) {
                        doubleWritables[j] = new DoubleWritable(Double.parseDouble(strings[j]));
                    }
                    value.set(doubleWritables);
                    writer.append(key, value);
                }
            } finally {
                it2.close();
            }
        } finally {
            IOUtils.closeStream(writer);
        }
        System.out.println("ok");
    }
}

3) Upload the Seq file, then view its contents with the following command:

hadoop fs -text /lz/data/transBSeq
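
For the upload step itself, something like the following should work (the local path is the file the writer above produced; the HDFS destination matches the path queried above):

hadoop fs -put /home/hadoop/srcData/bDoubleArraySeq /lz/data/transBSeq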

The command fails with:

 java.lang.RuntimeException: java.io.IOException: WritableName can't load class:matrix.DoubleArrayWritable

4) The cause is that the newly defined double-array type belongs to a third-party package that Hadoop cannot load on its own. Package the DoubleArrayWritable source above into a jar, then configure that jar's path in the hadoop-env.sh file on the master node, adding the location of the third-party classes there; multiple jars are separated by colons (:):

export HADOOP_CLASSPATH=/home/hadoop/DoubleArrayWritable.jar
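
To produce that jar in the first place, a minimal sketch (assuming the class is compiled with the Hadoop jars on the classpath, which `hadoop classpath` prints):

javac -classpath $(hadoop classpath) matrix/DoubleArrayWritable.java
jar cf DoubleArrayWritable.jar matrix/DoubleArrayWritable.class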

As shown below, both the custom DoubleArrayWritable and IntArrayWritable are added to the HADOOP_CLASSPATH. When setting Hadoop environment values the double quotes may be omitted, and multiple jars are separated by colons:
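
A sketch of what that line could look like (the IntArrayWritable.jar path is illustrative, assuming it is packaged the same way as DoubleArrayWritable.jar):

export HADOOP_CLASSPATH=/home/hadoop/DoubleArrayWritable.jar:/home/hadoop/IntArrayWritable.jar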

5) After that, hadoop fs -text /lz/data/transBSeq displays the file's contents, as shown below:

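Each record prints as the key followed by the value object's default rendering, for example:

1	convert.DoubleArrayWritable@264b62a2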

As you can see, the in-memory object really has been persisted to the file (serialization is precisely that: durable storage of a "live" in-memory object). But why can't we see the actual contents (I clearly stored a double array)? The convert.DoubleArrayWritable@264b62a2 shown is what the deserialized value object prints by default: hadoop fs -text deserializes each record and calls toString() on it, and since DoubleArrayWritable does not override toString(), Java falls back to the class name plus hash code. Deserialize the records yourself and unwrap the array, and you can see your double array.
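
A minimal sketch of that read-back step, assuming the same paths as above and the old-style SequenceFile.Reader API that matches the writer code:

package convert;

import java.io.IOException;
import java.net.URI;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;

import matrix.DoubleArrayWritable;

public class SequenceFileReadDemo {
    public static void main(String[] args) throws IOException {
        String uri = "/home/hadoop/srcData/bDoubleArraySeq"; // same path the writer used
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        Path path = new Path(uri);

        SequenceFile.Reader reader = null;
        try {
            reader = new SequenceFile.Reader(fs, path, conf);
            IntWritable key = new IntWritable();
            DoubleArrayWritable value = new DoubleArrayWritable();
            while (reader.next(key, value)) {
                // toArray() returns an array of the element type (DoubleWritable[]),
                // so it can be unwrapped with the convert2double helper from step 1.
                double[] row = value.convert2double((DoubleWritable[]) value.toArray());
                System.out.println(key.get() + "\t" + Arrays.toString(row));
            }
        } finally {
            IOUtils.closeStream(reader);
        }
    }
}

Alternatively, overriding toString() in DoubleArrayWritable would make hadoop fs -text print the numbers directly.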

Reference:

http://www.eveningdrum.com/2014/05/04/hadoop%E4%BD%BF%E7%94%A8%E7%AC%AC%E4%B8%89%E6%96%B9%E4%BE%9D%E8%B5%96jar%E5%8C%85/

6) SequenceFile is known as a container for small files; it exists mainly to address the small-files problem. A small file is usually far smaller than an HDFS block, so at split time each small file becomes a split of its own, and each split is handled by one mapper. With a large number of small files, a MapReduce job therefore launches a large number of mappers, each of which finishes quickly, so scheduling overhead dominates. The SequenceFile format mitigates this: it stores binary key-value pairs, and if the key is designed as the file name and the value as the file contents, a large batch of small files can be merged into one large file, making SequenceFile a container for small files in the literal sense. A sketch of this packing step follows. (http://www.aliog.com/19501.html, http://dongxicheng.org/mapreduce/hdfs-small-files-solution/)
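
A minimal sketch of that idea, with hypothetical local input and HDFS output paths: each small file in a directory is appended as one record, file name as the Text key and raw bytes as the BytesWritable value.

package convert;

import java.io.File;
import java.io.IOException;
import java.net.URI;

import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilesToSequenceFile {
    public static void main(String[] args) throws IOException {
        String uri = "/home/hadoop/srcData/smallFilesSeq";           // hypothetical output path
        File inputDir = new File("/home/hadoop/srcData/smallFiles"); // hypothetical input dir

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        Path path = new Path(uri);

        SequenceFile.Writer writer = null;
        try {
            writer = SequenceFile.createWriter(fs, conf, path, Text.class, BytesWritable.class);
            for (File f : inputDir.listFiles()) {
                byte[] bytes = FileUtils.readFileToByteArray(f);
                // One record per small file: key = file name, value = file contents.
                writer.append(new Text(f.getName()), new BytesWritable(bytes));
            }
        } finally {
            IOUtils.closeStream(writer);
        }
    }
}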

