[Hadoop Code Notes] MapReduce Shuffle: The Map Output Process
1. Overview. Shuffle is a core process of MapReduce, so rather than folding it into the earlier posts on job submission, it is described in detail on its own. The official flow diagram is as follows:
This post only tries to show, from the code, how the map side saves its output for the reducers to fetch later. Whatever logic runs inside the map method of a map task, the output must eventually be written to disk. If the job has no reduce phase, the output goes directly to HDFS; if there is a reduce phase, each map's output is first buffered in memory before being written to disk. Every map task has a circular in-memory buffer (100 MB by default) holding its output; whenever the buffer approaches full, a separate thread spills the buffered data to disk as a spill file. When the whole map task finishes, all the spill files it produced are merged into a single partitioned and sorted output file, which then waits for the reduce tasks to pull data from it.
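The buffer and spill behavior described above is driven by a handful of job parameters (the pre-0.21 mapred names used throughout this post). A minimal sketch of tuning them, just to show where the knobs live:

```java
import org.apache.hadoop.mapred.JobConf;

public class SpillTuning {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // Total size of the in-memory output buffer, in MB (default 100).
    job.setInt("io.sort.mb", 200);
    // Fraction of the buffer that may fill before a spill starts (default 0.8).
    job.setFloat("io.sort.spill.percent", 0.8f);
    // Fraction of io.sort.mb reserved for record accounting (default 0.05).
    job.setFloat("io.sort.record.percent", 0.05f);
    // Number of spill files required before the combiner runs during the merge.
    job.setInt("min.num.spills.for.combine", 3);
  }
}
```

Each of these parameters shows up again in the MapOutputBuffer constructor below.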
2. Process Flow
- When the child process reaches runNewMapper, it sets output to a NewOutputCollector, which is responsible for the map's output.
- At the end of the map method, whatever the map logic was, the output generally goes through TaskInputOutputContext's write method, which in turn calls the write method of the configured output, i.e. NewOutputCollector.
- NewOutputCollector is really just a wrapper around MapOutputBuffer; its write method delegates to MapOutputBuffer's collect method.
- MapOutputBuffer's collect method serializes the key and value and stores them in a circular buffer; when the buffer fills up, it calls startSpill to set the condition variables so that a separate SpillThread can process the buffered data.
- The SpillThread's run method calls sortAndSpill, which sorts the buffered data and writes it out as a spill file.
- When the map has produced all its output, output's close method is called.
- close calls flush, which handles whatever remains in the buffer, and finally calls mergeParts to merge the spill files produced along the way into one file.
Figure: code flow of the map output phase of the MapReduce shuffle
3. Code Walkthrough
1. MapTask's runNewMapper method. Note the following code: when the job has only maps and no reduces, the org.apache.hadoop.mapreduce.RecordWriter object output is the writer defined by the OutputFormat, i.e. it writes directly to the job's output. If there are reduces, output is a NewOutputCollector.
```java
if (job.getNumReduceTasks() == 0) {
  output = outputFormat.getRecordWriter(taskContext);
} else {
  output = new NewOutputCollector(taskContext, job, umbilical, reporter);
}
mapperContext = contextConstructor.newInstance(mapper, job, getTaskID(),
    input, output, committer, reporter, split);
input.initialize(split, mapperContext);
mapper.run(mapperContext);
```
Like every other RecordWriter, NewOutputCollector extends the abstract RecordWriter class. Besides a close method for releasing resources, the one essential method the abstract class defines is void write(K key, V value): write a single key/value pair.
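To make the contract concrete, here is a minimal, hypothetical RecordWriter that just prints each pair; NewOutputCollector implements the same two methods with buffering logic instead:

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// A do-nothing RecordWriter illustrating the abstract contract:
// write(key, value) consumes one output pair; close() releases resources.
public class PrintingRecordWriter extends RecordWriter<Text, IntWritable> {
  @Override
  public void write(Text key, IntWritable value)
      throws IOException, InterruptedException {
    System.out.println(key + "\t" + value);
  }

  @Override
  public void close(TaskAttemptContext context)
      throws IOException, InterruptedException {
    // nothing to release in this sketch
  }
}
```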
2. Mapper's run method: runs the map method once for each input key/value pair.
```java
public void run(Context context) throws IOException, InterruptedException {
  setup(context);
  while (context.nextKeyValue()) {
    map(context.getCurrentKey(), context.getCurrentValue(), context);
  }
  cleanup(context);
}
```
3. Mapper's map method. The default implementation writes the key and value through unchanged:
```java
protected void map(KEYIN key, VALUEIN value,
                   Context context) throws IOException, InterruptedException {
  context.write((KEYOUT) key, (VALUEOUT) value);
}
```
Real jobs do whatever processing they need here. In the classic wordcount, for example, each line is split into words and each word is counted as one (the value is always one = new IntWritable(1)), but in the end the result is still written out by calling context.write(key, value):
```java
public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
  StringTokenizer itr = new StringTokenizer(value.toString());
  while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    context.write(word, one);
  }
}
```
4. TaskInputOutputContext's write method. It calls the write method of the RecordWriter held by the context, which is NewOutputCollector's write method:
```java
public void write(KEYOUT key, VALUEOUT value
                  ) throws IOException, InterruptedException {
  output.write(key, value);
}
```
5. NewOutputCollector's write method:
```java
public void write(K key, V value) throws IOException, InterruptedException {
  collector.collect(key, value,
                    partitioner.getPartition(key, value, partitions));
}
```
As the method name suggests, the actual writing is done by collector, an object of type MapOutputCollector<K,V>. Its initialization can be seen in NewOutputCollector's constructor:
```java
collector = new MapOutputBuffer<K,V>(umbilical, job, reporter);
```
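The partitions argument passed to getPartition is the number of reducers. Unless the job overrides it, the default partitioner is HashPartitioner, whose getPartition is essentially the following (a sketch of the stock implementation):

```java
import org.apache.hadoop.mapreduce.Partitioner;

// Essentially what the default HashPartitioner does: mask off the sign bit
// so the modulus is non-negative, then bucket records by key hash.
public class SketchHashPartitioner<K, V> extends Partitioner<K, V> {
  @Override
  public int getPartition(K key, V value, int numPartitions) {
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}
```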
6. MapOutputBuffer's constructor. Before looking at MapOutputBuffer's collect method, it is worth seeing what the constructor initializes:
```java
public MapOutputBuffer(TaskUmbilicalProtocol umbilical, JobConf job,
                       TaskReporter reporter
                       ) throws IOException, ClassNotFoundException {
  this.job = job;
  this.reporter = reporter;
  localFs = FileSystem.getLocal(job);
  // 1) The number of map output partitions equals the number of reduces
  //    configured for the job.
  partitions = job.getNumReduceTasks();

  rfs = ((LocalFileSystem)localFs).getRaw();

  indexCacheList = new ArrayList<SpillRecord>();

  // 2) Important parameters.
  final float spillper = job.getFloat("io.sort.spill.percent", (float)0.8);
  final float recper = job.getFloat("io.sort.record.percent", (float)0.05);
  final int sortmb = job.getInt("io.sort.mb", 100);
  if (spillper > (float)1.0 || spillper < (float)0.0) {
    throw new IOException("Invalid \"io.sort.spill.percent\": " + spillper);
  }
  if (recper > (float)1.0 || recper < (float)0.01) {
    throw new IOException("Invalid \"io.sort.record.percent\": " + recper);
  }
  if ((sortmb & 0x7FF) != sortmb) {
    throw new IOException("Invalid \"io.sort.mb\": " + sortmb);
  }
  // 3) The sorter: sorts the map output within each partition, in memory.
  sorter = ReflectionUtils.newInstance(
      job.getClass("map.sort.class", QuickSort.class, IndexedSorter.class), job);
  LOG.info("io.sort.mb = " + sortmb);
  // buffers and accounting
  // Shift sortmb (megabytes) left by 20 to convert it to bytes.
  int maxMemUsage = sortmb << 20;
  int recordCapacity = (int)(maxMemUsage * recper);
  recordCapacity -= recordCapacity % RECSIZE;
  // The output buffer.
  kvbuffer = new byte[maxMemUsage - recordCapacity];
  bufvoid = kvbuffer.length;
  recordCapacity /= RECSIZE;
  kvoffsets = new int[recordCapacity];
  kvindices = new int[recordCapacity * ACCTSIZE];
  softBufferLimit = (int)(kvbuffer.length * spillper);
  softRecordLimit = (int)(kvoffsets.length * spillper);
  LOG.info("data buffer = " + softBufferLimit + "/" + kvbuffer.length);
  LOG.info("record buffer = " + softRecordLimit + "/" + kvoffsets.length);
  // k/v serialization
  comparator = job.getOutputKeyComparator();
  keyClass = (Class<K>)job.getMapOutputKeyClass();
  valClass = (Class<V>)job.getMapOutputValueClass();
  serializationFactory = new SerializationFactory(job);
  keySerializer = serializationFactory.getSerializer(keyClass);
  keySerializer.open(bb);
  valSerializer = serializationFactory.getSerializer(valClass);
  valSerializer.open(bb);
  // counters
  mapOutputByteCounter = reporter.getCounter(MAP_OUTPUT_BYTES);
  mapOutputRecordCounter = reporter.getCounter(MAP_OUTPUT_RECORDS);
  Counters.Counter combineInputCounter =
    reporter.getCounter(COMBINE_INPUT_RECORDS);
  combineOutputCounter = reporter.getCounter(COMBINE_OUTPUT_RECORDS);
  // 4) compression
  if (job.getCompressMapOutput()) {
    Class<? extends CompressionCodec> codecClass =
      job.getMapOutputCompressorClass(DefaultCodec.class);
    codec = ReflectionUtils.newInstance(codecClass, job);
  }
  // 5) The combiner is a NewCombinerRunner; it applies the job's reducer
  //    to the map output on the map side.
  combinerRunner = CombinerRunner.create(job, getTaskID(),
                                         combineInputCounter,
                                         reporter, null);
  if (combinerRunner != null) {
    combineCollector = new CombineOutputCollector<K,V>(combineOutputCounter);
  } else {
    combineCollector = null;
  }
  // 6) Start the SpillThread.
  minSpillsForCombine = job.getInt("min.num.spills.for.combine", 3);
  spillThread.setDaemon(true);
  spillThread.setName("SpillThread");
  spillLock.lock();
  try {
    spillThread.start();
    while (!spillThreadRunning) {
      spillDone.await();
    }
  } catch (InterruptedException e) {
    throw (IOException)new IOException("Spill thread failed to initialize"
                                       ).initCause(sortSpillException);
  } finally {
    spillLock.unlock();
  }
  if (sortSpillException != null) {
    throw (IOException)new IOException("Spill thread failed to initialize"
                                       ).initCause(sortSpillException);
  }
}
```
7. MapOutputBuffer's collect method.
The partition argument is the partition index the partitioner computed for the current key. collect writes the key and value into the buffer; when the buffer meets the spill condition, startSpill sets the state variables and calls spillReady.signal() to notify the SpillThread, and collect then waits for the spill to finish via spillDone.await(). The purpose of the buffer is to collect map results in batches and soften the impact of disk I/O. Key/value pairs and their partition numbers all go into the buffer, with keys and values serialized to byte arrays before being written. kvindices records, for each record, its partition, the offset where its key starts in the buffer and the offset where its value starts; through kvindices the record can be located in the buffer.

On the byte-buffer side, bufstart, bufend and bufmark correspond to kvstart, kvend and kvindex. Two further variables are involved: bufvoid, which marks the end of the actually usable part of the buffer, and bufmark, which marks the end of the last complete record. bufmark is needed because serialized keys and values are variable-length.
[caption id="attachment_539" align="alignnone" width="739"] Key Value序列化后缓存[/caption]
```java
public synchronized void collect(K key, V value, int partition
                                 ) throws IOException {
  reporter.progress();
  if (key.getClass() != keyClass) {
    throw new IOException("Type mismatch in key from map: expected "
                          + keyClass.getName() + ", recieved "
                          + key.getClass().getName());
  }
  if (value.getClass() != valClass) {
    throw new IOException("Type mismatch in value from map: expected "
                          + valClass.getName() + ", recieved "
                          + value.getClass().getName());
  }
  // Taking the index modulo kvoffsets.length tells us this is a circular buffer.
  final int kvnext = (kvindex + 1) % kvoffsets.length;
  // Enter the critical section.
  spillLock.lock();
  try {
    boolean kvfull;
    do {
      if (sortSpillException != null) {
        throw (IOException)new IOException("Spill failed"
                                           ).initCause(sortSpillException);
      }
      // sufficient acct space
      kvfull = kvnext == kvstart;
      final boolean kvsoftlimit = ((kvnext > kvend)
          ? kvnext - kvend > softRecordLimit
          : kvend - kvnext <= kvoffsets.length - softRecordLimit);
      if (kvstart == kvend && kvsoftlimit) {
        LOG.info("Spilling map output: record full = " + kvsoftlimit);
        // startSpill sets the state variables and calls spillReady.signal()
        // to notify the SpillThread; collect then waits for the spill to end.
        startSpill();
      }
      if (kvfull) {
        try {
          while (kvstart != kvend) {
            // kvstart != kvend means a spill is in progress; wait on spillDone.
            reporter.progress();
            spillDone.await();
          }
        } catch (InterruptedException e) {
          throw (IOException)new IOException(
              "Collector interrupted while waiting for the writer"
              ).initCause(e);
        }
      }
    } while (kvfull);
  } finally {
    spillLock.unlock();
  }
  try {
    // Serialize the key, then the value. keystart, valstart and valend record
    // where the key starts, where the value starts and where the value ends.
    // Serialization ultimately writes into the buffer via Buffer.write.
    // serialize key bytes into buffer
    int keystart = bufindex;
    keySerializer.serialize(key);
    if (bufindex < keystart) {
      // The key wrapped around the end of the buffer. The spill must sort
      // <key,value> pairs, and RawComparator needs each key as one contiguous
      // byte range, so BlockingBuffer.reset() makes the key contiguous at the
      // start of the buffer (see below).
      bb.reset();
      keystart = 0;
    }
    // serialize value bytes into buffer
    final int valstart = bufindex;
    valSerializer.serialize(value);
    int valend = bb.markRecord();
    if (partition < 0 || partition >= partitions) {
      throw new IOException("Illegal partition for " + key + " ("
                            + partition + ")");
    }
    mapOutputRecordCounter.increment(1);
    mapOutputByteCounter.increment(valend >= keystart
        ? valend - keystart
        : (bufvoid - keystart) + valend);
    // update accounting info
    int ind = kvindex * ACCTSIZE;
    kvoffsets[kvindex] = ind;
    kvindices[ind + PARTITION] = partition;
    kvindices[ind + KEYSTART] = keystart;
    kvindices[ind + VALSTART] = valstart;
    kvindex = kvnext;
  } catch (MapBufferTooSmallException e) {
    LOG.info("Record too large for in-memory buffer: " + e.getMessage());
    // The serialized record is too large to fit in the buffer at all;
    // spill this single record straight to disk.
    spillSingleRecord(key, value, partition);
    mapOutputRecordCounter.increment(1);
    return;
  }
}
```
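The accounting update at the end of collect can be read back mechanically. A minimal sketch, assuming the constants this version of MapOutputBuffer uses (PARTITION = 0, KEYSTART = 1, VALSTART = 2, ACCTSIZE = 3):

```java
// Record i owns one slot in kvoffsets and a triple of ints in kvindices;
// the two offsets point into kvbuffer, and the key's bytes end where the
// value's begin.
static void dumpRecord(int i, int[] kvoffsets, int[] kvindices) {
  final int PARTITION = 0, KEYSTART = 1, VALSTART = 2;
  int ind = kvoffsets[i];                     // base of record i's triple
  int partition = kvindices[ind + PARTITION]; // which reducer it belongs to
  int keystart  = kvindices[ind + KEYSTART];  // key bytes begin here in kvbuffer
  int valstart  = kvindices[ind + VALSTART];  // value bytes begin here
  System.out.println("record " + i + " -> partition " + partition
      + ", key bytes [" + keystart + ", " + valstart + ")");
}
```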
8. MapOutputBuffer.BlockingBuffer's reset() method.
If bufindex < keystart after the key is serialized, the key wrapped around the end of the buffer, and BlockingBuffer's reset method is called. The reason: the spill process sorts <key,value> pairs, and the RawComparator used for that must see each key as one contiguous byte range. So when a key's serialized bytes end up discontinuous, reset sets bufvoid to bufmark, shifts the data at the head of the buffer back, and copies the bytes between bufmark and (the old) bufvoid to the front, so that the serialized key sits contiguously at the very start of the buffer.

If the region before bufstart cannot hold the whole serialized key, reset instead sets bufindex to 0 and writes the key out through BlockingBuffer's inner out stream; this ends up in Buffer.write, which may trigger a spill, and the serialized key is eventually written successfully.
```java
protected synchronized void reset() throws IOException {
  int headbytelen = bufvoid - bufmark;
  bufvoid = bufmark;
  // If there is room before bufstart: shift the head of the buffer back by
  // headbytelen and copy the bytes between bufmark and the old bufvoid to the
  // front, making the key contiguous at the start of the buffer.
  if (bufindex + headbytelen < bufstart) {
    System.arraycopy(kvbuffer, 0, kvbuffer, headbytelen, bufindex);
    System.arraycopy(kvbuffer, bufvoid, kvbuffer, 0, headbytelen);
    bufindex += headbytelen;
  } else {
    // The region before bufstart cannot hold the whole key: set bufindex to 0
    // and write the key bytes through the inner out stream (Buffer.write).
    byte[] keytmp = new byte[bufindex];
    System.arraycopy(kvbuffer, 0, keytmp, 0, bufindex);
    bufindex = 0;
    out.write(kvbuffer, bufmark, headbytelen);
    out.write(keytmp);
  }
}
```
9. MapOutputBuffer.Buffer's write method. Called during key and value serialization to write bytes into the buffer; it blocks if the spill thread is busy writing buffered data to a spill file.
```java
public synchronized void write(byte b[], int off, int len)
    throws IOException {
  boolean buffull = false;
  boolean wrap = false;
  spillLock.lock();
  try {
    do { // loop until there is enough room to write the data
      if (sortSpillException != null) {
        throw (IOException)new IOException("Spill failed"
                                           ).initCause(sortSpillException);
      }

      // sufficient buffer space?
      if (bufstart <= bufend && bufend <= bufindex) {
        buffull = bufindex + len > bufvoid;
        wrap = (bufvoid - bufindex) + bufstart > len;
      } else {
        // bufindex <= bufstart <= bufend
        // bufend <= bufindex <= bufstart
        wrap = false;
        buffull = bufindex + len > bufstart;
      }

      if (kvstart == kvend) {
        // spill thread not running
        if (kvend != kvindex) {
          // There are records in the accounting array (kvend != kvindex);
          // start a spill if needed (space is short, or the record count has
          // reached the spill threshold).
          final boolean bufsoftlimit = (bufindex > bufend)
              ? bufindex - bufend > softBufferLimit
              : bufend - bufindex < bufvoid - softBufferLimit;
          if (bufsoftlimit || (buffull && !wrap)) {
            LOG.info("Spilling map output: buffer full= " + bufsoftlimit);
            startSpill();
          }
        } else if (buffull && !wrap) {
          // Not enough room (buffull && !wrap), but the buffer holds no
          // records: this single record is too large for the in-memory
          // buffer. Throw MapBufferTooSmallException so it is written
          // straight to disk, bypassing the buffer.
          final int size = ((bufend <= bufindex)
              ? bufindex - bufend
              : (bufvoid - bufend) + bufindex) + len;
          bufstart = bufend = bufindex = bufmark = 0;
          kvstart = kvend = kvindex = 0;
          bufvoid = kvbuffer.length;
          throw new MapBufferTooSmallException(size + " bytes");
        }
      }

      if (buffull && !wrap) {
        try {
          // Not enough room but a spill is running: wait on spillDone.
          while (kvstart != kvend) {
            reporter.progress();
            spillDone.await();
          }
        } catch (InterruptedException e) {
          throw (IOException)new IOException(
              "Buffer interrupted while waiting for the writer"
              ).initCause(e);
        }
      }
    } while (buffull && !wrap);
  } finally {
    spillLock.unlock();
  }
  // This is where data actually goes into the buffer. If buffull, the write
  // cannot be contiguous: fill the remaining tail of the buffer, reset
  // bufindex to 0, and continue writing from the front. Otherwise just write
  // at bufindex.
  if (buffull) {
    // bytes left before the end of the buffer
    final int gaplen = bufvoid - bufindex;
    // fill the tail
    System.arraycopy(b, off, kvbuffer, bufindex, gaplen);
    // remaining length
    len -= gaplen;
    // advance the offset
    off += gaplen;
    // move the write pointer to the front
    bufindex = 0;
  }
  // write from the current position
  System.arraycopy(b, off, kvbuffer, bufindex, len);
  bufindex += len;
}
```
Notes on the buffull and wrap conditions
The figure illustrates the buffull and wrap conditions. In the first two cases (cases 1 and 2):
```java
buffull = bufindex + len > bufvoid;
wrap = (bufvoid - bufindex) + bufstart > len;
```
buffull tests whether the space from the write pointer bufindex to the end of the buffer, bufvoid, can hold the incoming data; wrap tests whether the total free space (the white regions in the figure, head and tail combined) exceeds the input size, in which case wrap is true. In cases 3 and 4:
```java
wrap = false;
buffull = bufindex + len > bufstart;
```
buffull tests whether the gap from bufindex to bufstart is large enough, and wrap is necessarily false. When (buffull && !wrap) holds, the current free space cannot take the write in one piece.
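The two predicates can be pulled out into a standalone sketch, using the same pointer names as the source:

```java
// A sketch of the space check in Buffer.write. bufindex: next write position;
// bufstart: start of un-spilled data; bufvoid: usable end of the buffer;
// len: number of bytes about to be written.
static boolean hasRoom(int bufstart, int bufend, int bufindex,
                       int bufvoid, int len) {
  boolean buffull, wrap;
  if (bufstart <= bufend && bufend <= bufindex) {
    // cases 1 and 2: free space is the tail plus the head before bufstart
    buffull = bufindex + len > bufvoid;
    wrap = (bufvoid - bufindex) + bufstart > len;
  } else {
    // cases 3 and 4: free space is the single gap from bufindex to bufstart
    wrap = false;
    buffull = bufindex + len > bufstart;
  }
  // (buffull && !wrap) means the current free space cannot take this write
  return !(buffull && !wrap);
}
```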
10. MapOutputBuffer's spillSingleRecord method. If collect fails to buffer a record, the record is written directly to a spill file; for such a single record the combiner is not applied even if one is configured. This is the path for a record so large that the in-memory buffer cannot hold it: MapBufferTooSmallException is thrown and the record goes straight to a file, bypassing the buffer.
```java
private void spillSingleRecord(final K key, final V value,
                               int partition) throws IOException {
  long size = kvbuffer.length + partitions * APPROX_HEADER_LENGTH;
  FSDataOutputStream out = null;
  try {
    // create the spill file
    final SpillRecord spillRec = new SpillRecord(partitions);
    final Path filename = mapOutputFile.getSpillFileForWrite(getTaskID(),
        numSpills, size);
    out = rfs.create(filename);
    IndexRecord rec = new IndexRecord();
    for (int i = 0; i < partitions; ++i) {
      IFile.Writer<K, V> writer = null;
      try {
        long segmentStart = out.getPos();
        writer = new IFile.Writer<K,V>(job, out, keyClass, valClass, codec,
                                       spilledRecordsCounter);
        if (i == partition) {
          final long recordStart = out.getPos();
          writer.append(key, value);
          mapOutputByteCounter.increment(out.getPos() - recordStart);
        }
        writer.close();
        // record the offsets in the index
        rec.startOffset = segmentStart;
        rec.rawLength = writer.getRawLength();
        rec.partLength = writer.getCompressedLength();
        spillRec.putIndex(rec, i);
        writer = null;
      } catch (IOException e) {
        if (null != writer) writer.close();
        throw e;
      }
    }
    // If the index cache is full, write the index to an index file; otherwise
    // add this index record to indexCacheList and update totalIndexCacheMemory.
    if (totalIndexCacheMemory >= INDEX_CACHE_MEMORY_LIMIT) {
      // create spill index file
      Path indexFilename = mapOutputFile.getSpillIndexFileForWrite(
          getTaskID(), numSpills,
          partitions * MAP_OUTPUT_INDEX_RECORD_LENGTH);
      spillRec.writeToFile(indexFilename, job);
    } else {
      indexCacheList.add(spillRec);
      totalIndexCacheMemory +=
        spillRec.size() * MAP_OUTPUT_INDEX_RECORD_LENGTH;
    }
    ++numSpills;
  } finally {
    if (out != null) out.close();
  }
}
```
11. MapOutputBuffer's startSpill method: records the spill boundaries and wakes up threads waiting on spillReady.
```java
private synchronized void startSpill() {
  LOG.info("bufstart = " + bufstart + "; bufend = " + bufmark +
           "; bufvoid = " + bufvoid);
  LOG.info("kvstart = " + kvstart + "; kvend = " + kvindex +
           "; length = " + kvoffsets.length);
  kvend = kvindex;
  bufend = bufmark;
  spillReady.signal();
}
```
12. SpillThread's run method.
This thread watches the in-memory output buffer and spills its contents to disk once the conditions are met. It is a textbook producer-consumer setup: MapTask's collect method is the producer and the SpillThread is the consumer, synchronized through spillLock (a ReentrantLock) and two condition variables on it (spillDone and spillReady). When kvstart == kvend, there are no records to spill. A stripped-down sketch of this handshake follows the run method below.
```java
public void run() {
  // critical section
  spillLock.lock();
  spillThreadRunning = true;
  try {
    while (true) {
      spillDone.signal();
      // kvstart == kvend means there are no records to spill
      while (kvstart == kvend) {
        spillReady.await();
      }
      try {
        spillLock.unlock();
        // do the work
        sortAndSpill();
      } catch (Exception e) {
        sortSpillException = e;
      } catch (Throwable t) {
        sortSpillException = t;
        String logMsg = "Task " + getTaskID() + " failed : "
                        + StringUtils.stringifyException(t);
        reportFatalError(getTaskID(), t, logMsg);
      } finally {
        spillLock.lock();
        if (bufend < bufindex && bufindex < bufstart) {
          bufvoid = kvbuffer.length;
        }
        kvstart = kvend;
        bufstart = bufend;
      }
    }
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  } finally {
    spillLock.unlock();
    spillThreadRunning = false;
  }
}
```
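The same synchronization skeleton in isolation, one lock and two conditions, stripped of all buffer bookkeeping (a minimal sketch, not the Hadoop code itself):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Minimal version of the collect/spill handshake: the producer signals
// "work ready" and waits for "work done"; the consumer does the reverse.
public class SpillHandshake {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition ready = lock.newCondition(); // plays spillReady
  private final Condition done  = lock.newCondition(); // plays spillDone
  private boolean pending = false;                     // plays kvstart != kvend

  void produce() throws InterruptedException {         // like collect()
    lock.lock();
    try {
      pending = true;
      ready.signal();          // wake the spill thread
      while (pending) {
        done.await();          // wait until the spill has finished
      }
    } finally {
      lock.unlock();
    }
  }

  void consumeLoop() throws InterruptedException {     // like SpillThread.run()
    lock.lock();
    try {
      while (true) {
        while (!pending) {
          ready.await();       // nothing to spill yet
        }
        // the real code unlocks here, calls sortAndSpill(), then re-locks
        pending = false;
        done.signal();
      }
    } finally {
      lock.unlock();
    }
  }
}
```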
13. MapOutputBuffer's sortAndSpill method. Called from SpillThread's run method, it writes the buffered output to a spill file named spill{spill number}.out, with the index (from kvindices) kept in spill{spill number}.out.index. It creates a SpillRecord, the output file and an IndexRecord, then sorts kvoffsets; after the sort, walking kvoffsets sequentially visits the records in partition order. It then loops over the partitions of the sorted array: with no combiner, records are written out directly; otherwise combineAndSpill is called to combine first and then write. At the end of each iteration the IndexRecord is stored into the SpillRecord.
```java
private void sortAndSpill() throws IOException, ClassNotFoundException,
                                   InterruptedException {
  //approximate the length of the output file to be the length of the
  //buffer + header lengths for the partitions
  long size = (bufend >= bufstart
      ? bufend - bufstart
      : (bufvoid - bufend) + bufstart) +
      partitions * APPROX_HEADER_LENGTH;
  FSDataOutputStream out = null;
  try {
    // create the spill file
    final SpillRecord spillRec = new SpillRecord(partitions);
    final Path filename = mapOutputFile.getSpillFileForWrite(getTaskID(),
        numSpills, size);
    out = rfs.create(filename);

    final int endPosition = (kvend > kvstart)
        ? kvend
        : kvoffsets.length + kvend;
    // Sort in memory using the sorter. The comparison (see MapOutputBuffer's
    // compare method) works on the serialized bytes; the sort permutes
    // kvoffsets (see MapOutputBuffer's swap method), not the data itself.
    sorter.sort(MapOutputBuffer.this, kvstart, endPosition, reporter);
    int spindex = kvstart;
    IndexRecord rec = new IndexRecord();
    InMemValBytes value = new InMemValBytes();
    for (int i = 0; i < partitions; ++i) {
      IFile.Writer<K, V> writer = null;
      try {
        long segmentStart = out.getPos();
        writer = new Writer<K, V>(job, out, keyClass, valClass, codec,
                                  spilledRecordsCounter);
        if (combinerRunner == null) {
          // no combiner: write the key/value pairs directly
          DataInputBuffer key = new DataInputBuffer();
          while (spindex < endPosition &&
                 kvindices[kvoffsets[spindex % kvoffsets.length]
                           + PARTITION] == i) {
            final int kvoff = kvoffsets[spindex % kvoffsets.length];
            getVBytesForOffset(kvoff, value);
            key.reset(kvbuffer, kvindices[kvoff + KEYSTART],
                      (kvindices[kvoff + VALSTART] -
                       kvindices[kvoff + KEYSTART]));
            // write the pair to the spill file
            writer.append(key, value);
            ++spindex;
          }
        } else {
          int spstart = spindex;
          while (spindex < endPosition &&
                 kvindices[kvoffsets[spindex % kvoffsets.length]
                           + PARTITION] == i) {
            ++spindex;
          }
          // A combiner is configured: run it and write the combined result
          // to the IFile through the same writer, reducing the amount of
          // data spilled to disk.
          if (spstart != spindex) {
            combineCollector.setWriter(writer);
            RawKeyValueIterator kvIter =
              new MRResultIterator(spstart, spindex);
            combinerRunner.combine(kvIter, combineCollector);
          }
        }

        // close the writer
        writer.close();

        // record offsets
        rec.startOffset = segmentStart;
        rec.rawLength = writer.getRawLength();
        rec.partLength = writer.getCompressedLength();
        spillRec.putIndex(rec, i);

        writer = null;
      } finally {
        if (null != writer) writer.close();
      }
    }

    if (totalIndexCacheMemory >= INDEX_CACHE_MEMORY_LIMIT) {
      // write the spill index file, named spill{spill number}.out.index
      Path indexFilename = mapOutputFile.getSpillIndexFileForWrite(
          getTaskID(), numSpills,
          partitions * MAP_OUTPUT_INDEX_RECORD_LENGTH);
      spillRec.writeToFile(indexFilename, job);
    } else {
      indexCacheList.add(spillRec);
      totalIndexCacheMemory +=
        spillRec.size() * MAP_OUTPUT_INDEX_RECORD_LENGTH;
    }
    LOG.info("Finished spill " + numSpills);
    ++numSpills;
  } finally {
    if (out != null) out.close();
  }
}
```
14. MapOutputBuffer's compare and swap methods. MapOutputBuffer implements the IndexedSortable interface, and as the name suggests, the sort moves indices rather than data. What is effectively being sorted is kvindices, by permuting its entries' positions in kvoffsets. The figure shows the effect of this sort before the write to disk. kvindices holds each record's (reduce) partition, the start offset of its key in the buffer and the start offset of its value; through kvindices the record can be found in the buffer. kvoffsets is the array that gets sorted on partition when the buffer fills, and the sorted result is written to local disk, with the index (from kvindices) in spill{spill number}.out.index and the data in spill{spill number}.out. As MapOutputBuffer's compare method shows, records are ordered first by partition and then by key.

Figure: kvindices sorted via kvoffsets
```java
public int compare(int i, int j) {
  final int ii = kvoffsets[i % kvoffsets.length];
  final int ij = kvoffsets[j % kvoffsets.length];
  // sort by partition first
  if (kvindices[ii + PARTITION] != kvindices[ij + PARTITION]) {
    return kvindices[ii + PARTITION] - kvindices[ij + PARTITION];
  }
  // then by key
  return comparator.compare(kvbuffer,
      kvindices[ii + KEYSTART],
      kvindices[ii + VALSTART] - kvindices[ii + KEYSTART],
      kvbuffer,
      kvindices[ij + KEYSTART],
      kvindices[ij + VALSTART] - kvindices[ij + KEYSTART]);
}

public void swap(int i, int j) {
  i %= kvoffsets.length;
  j %= kvoffsets.length;
  // swapping entries of kvoffsets is what realizes the sort
  int tmp = kvoffsets[i];
  kvoffsets[i] = kvoffsets[j];
  kvoffsets[j] = tmp;
}
```
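The trick generalizes to any data you would rather not move: sort a small index array while comparing the records it points at. A tiny self-contained demonstration:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sorting indices instead of data, as IndexedSortable does: the records
// never move; only the index array is permuted.
public class IndexSortDemo {
  public static void main(String[] args) {
    final String[] records = {"pear", "apple", "plum", "banana"};
    Integer[] index = {0, 1, 2, 3};
    Arrays.sort(index, Comparator.comparing(i -> records[i]));
    for (int i : index) {
      System.out.println(records[i]); // apple, banana, pear, plum
    }
  }
}
```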
15. MapOutputBuffer's flush method
Once all of the Mapper's results have been collected, some final cleanup of the buffer is needed: flush is called to merge the spill{n} files into the final output. It first waits for any in-flight spill to finish, then checks whether the buffer still holds data; if so it calls sortAndSpill one last time, and finally shuts down the spill thread.
```java
public synchronized void flush() throws IOException, ClassNotFoundException,
                                        InterruptedException {
  LOG.info("Starting flush of map output");
  spillLock.lock();
  try {
    while (kvstart != kvend) {
      reporter.progress();
      spillDone.await();
    }
    if (sortSpillException != null) {
      throw (IOException)new IOException("Spill failed"
                                         ).initCause(sortSpillException);
    }
    if (kvend != kvindex) {
      kvend = kvindex;
      bufend = bufmark;
      sortAndSpill();
    }
  } catch (InterruptedException e) {
    throw (IOException)new IOException(
        "Buffer interrupted while waiting for the writer"
        ).initCause(e);
  } finally {
    spillLock.unlock();
  }
  assert !spillLock.isHeldByCurrentThread();

  try {
    spillThread.interrupt();
    spillThread.join();
  } catch (InterruptedException e) {
    throw (IOException)new IOException("Spill failed"
                                       ).initCause(e);
  }
  // release sort buffer before the merge
  kvbuffer = null;
  mergeParts();
}
```
16. MapTask.MapOutputBuffer's mergeParts method.
Values for the same key may come from different spill files and need to be brought back together. Because the merge combines multiple spill files into one, the same key can occur in several of them, and if a Combiner is configured it is applied again during this merge. The point is that MapReduce has each map emit exactly one output file, accompanied by one index file that records the offset of each reduce's slice of the data.
```java
private void mergeParts() throws IOException, InterruptedException,
                                 ClassNotFoundException {
  // get the approximate size of the final output/index files
  long finalOutFileSize = 0;
  long finalIndexFileSize = 0;
  final Path[] filename = new Path[numSpills];
  final TaskAttemptID mapId = getTaskID();

  for(int i = 0; i < numSpills; i++) {
    filename[i] = mapOutputFile.getSpillFile(mapId, i);
    finalOutFileSize += rfs.getFileStatus(filename[i]).getLen();
  }
  if (numSpills == 1) {
    // only one spill file: just rename it as the final output
    rfs.rename(filename[0],
               new Path(filename[0].getParent(), "file.out"));
    if (indexCacheList.size() == 0) {
      rfs.rename(mapOutputFile.getSpillIndexFile(mapId, 0),
                 new Path(filename[0].getParent(), "file.out.index"));
    } else {
      indexCacheList.get(0).writeToFile(
          new Path(filename[0].getParent(), "file.out.index"), job);
    }
    return;
  }

  // read in paged indices
  for (int i = indexCacheList.size(); i < numSpills; ++i) {
    Path indexFileName = mapOutputFile.getSpillIndexFile(mapId, i);
    indexCacheList.add(new SpillRecord(indexFileName, job));
  }

  //make correction in the length to include the sequence file header
  //lengths for each partition
  finalOutFileSize += partitions * APPROX_HEADER_LENGTH;
  finalIndexFileSize = partitions * MAP_OUTPUT_INDEX_RECORD_LENGTH;
  Path finalOutputFile = mapOutputFile.getOutputFileForWrite(mapId,
      finalOutFileSize);
  Path finalIndexFile = mapOutputFile.getOutputIndexFileForWrite(
      mapId, finalIndexFileSize);

  //The output stream for the final single output file
  FSDataOutputStream finalOut = rfs.create(finalOutputFile, true, 4096);

  if (numSpills == 0) {
    // no spill files at all: create dummy files
    IndexRecord rec = new IndexRecord();
    SpillRecord sr = new SpillRecord(partitions);
    try {
      for (int i = 0; i < partitions; i++) {
        long segmentStart = finalOut.getPos();
        Writer<K, V> writer =
          new Writer<K, V>(job, finalOut, keyClass, valClass, codec, null);
        writer.close();
        rec.startOffset = segmentStart;
        rec.rawLength = writer.getRawLength();
        rec.partLength = writer.getCompressedLength();
        sr.putIndex(rec, i);
      }
      sr.writeToFile(finalIndexFile, job);
    } finally {
      finalOut.close();
    }
    return;
  }
  {
    IndexRecord rec = new IndexRecord();
    final SpillRecord spillRec = new SpillRecord(partitions);
    for (int parts = 0; parts < partitions; parts++) {
      // for each partition, build the list of segments and merge them
      List<Segment<K,V>> segmentList =
        new ArrayList<Segment<K, V>>(numSpills);
      for(int i = 0; i < numSpills; i++) {
        IndexRecord indexRecord = indexCacheList.get(i).getIndex(parts);

        Segment<K,V> s =
          new Segment<K,V>(job, rfs, filename[i], indexRecord.startOffset,
                           indexRecord.partLength, codec, true);
        segmentList.add(i, s);

        if (LOG.isDebugEnabled()) {
          LOG.debug("MapId=" + mapId + " Reducer=" + parts +
                    "Spill =" + i + "(" + indexRecord.startOffset + "," +
                    indexRecord.rawLength + ", " + indexRecord.partLength + ")");
        }
      }

      //merge
      @SuppressWarnings("unchecked")
      RawKeyValueIterator kvIter = Merger.merge(job, rfs,
          keyClass, valClass, codec,
          segmentList, job.getInt("io.sort.factor", 100),
          new Path(mapId.toString()),
          job.getOutputKeyComparator(), reporter,
          null, spilledRecordsCounter);

      //write merged output to disk
      // run the merge and write its result into the final "file.out"
      long segmentStart = finalOut.getPos();
      Writer<K, V> writer =
        new Writer<K, V>(job, finalOut, keyClass, valClass, codec,
                         spilledRecordsCounter);
      if (combinerRunner == null || numSpills < minSpillsForCombine) {
        Merger.writeFile(kvIter, writer, reporter, job);
      } else {
        combineCollector.setWriter(writer);
        combinerRunner.combine(kvIter, combineCollector);
      }

      //close
      writer.close();

      // record offsets
      // write this partition's index entry into the final "file.out.index"
      rec.startOffset = segmentStart;
      rec.rawLength = writer.getRawLength();
      rec.partLength = writer.getCompressedLength();
      spillRec.putIndex(rec, parts);
    }
    spillRec.writeToFile(finalIndexFile, job);
    finalOut.close();
    for(int i = 0; i < numSpills; i++) {
      rfs.delete(filename[i], true);
    }
  }
}
```
Figure: structure of the index and spill files before and after the merge
The merge ultimately produces a single data file and a single index file (file.out and file.out.index).
As the preceding analysis shows, all partitions live in one output file, ordered by partition: the mapper's output is segmented by partition. One partition corresponds to one reducer, so each reducer only needs to fetch its own segment.
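For illustration only, here is a hypothetical lookup over file.out.index, assuming each partition's entry is three longs (startOffset, rawLength, partLength; 24 bytes, matching MAP_OUTPUT_INDEX_RECORD_LENGTH) and ignoring any trailing checksum the real SpillRecord format may carry. A reducer needs only its own entry to know which byte range of file.out to fetch:

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Hypothetical reader: locate reducer reduceId's segment in the map output.
public class IndexLookup {
  public static long[] entryFor(String indexFile, int reduceId)
      throws IOException {
    try (DataInputStream in =
             new DataInputStream(new FileInputStream(indexFile))) {
      in.skipBytes(reduceId * 24);      // 3 longs per partition entry
      long startOffset = in.readLong(); // where the segment begins in file.out
      long rawLength   = in.readLong(); // uncompressed segment length
      long partLength  = in.readLong(); // on-disk (possibly compressed) length
      return new long[] {startOffset, rawLength, partLength};
    }
  }
}
```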
Done.
References:
The analysis of the in-memory index structures draws on and extends http://caibinbupt.iteye.com/blog/401374. Thanks.
To keep this article consistent, current and traceable across copies, please credit the source when reposting: originally published at idouba. Permalink: [Hadoop Code Notes] MapReduce Shuffle: The Map Output Process.
posted on 2014-02-26 20:48 by idouba.net