The built-in counters are (format: <display name> | <internal name>):

FileSystemCounters | FileSystemCounters
---FILE_BYTES_READ | FILE_BYTES_READ
---FILE_BYTES_WRITTEN | FILE_BYTES_WRITTEN
---HDFS_BYTES_READ | HDFS_BYTES_READ
---HDFS_BYTES_WRITTEN | HDFS_BYTES_WRITTEN

Job Counters  | org.apache.hadoop.mapred.JobInProgress$Counter
---Total time spent by all maps waiting after reserving slots (ms) | FALLOW_SLOTS_MILLIS_MAPS
---Total time spent by all reduces waiting after reserving slots (ms) | FALLOW_SLOTS_MILLIS_REDUCES
---Rack-local map tasks | RACK_LOCAL_MAPS
---SLOTS_MILLIS_MAPS | SLOTS_MILLIS_MAPS
---SLOTS_MILLIS_REDUCES | SLOTS_MILLIS_REDUCES
---Launched map tasks | TOTAL_LAUNCHED_MAPS
---Launched reduce tasks | TOTAL_LAUNCHED_REDUCES

Map-Reduce Framework | org.apache.hadoop.mapred.Task$Counter
---Combine input records | COMBINE_INPUT_RECORDS
---Combine output records | COMBINE_OUTPUT_RECORDS
---Total committed heap usage (bytes) | COMMITTED_HEAP_BYTES
---CPU time spent (ms) | CPU_MILLISECONDS
---Map input records | MAP_INPUT_RECORDS
---Map output bytes | MAP_OUTPUT_BYTES
---Map output materialized bytes | MAP_OUTPUT_MATERIALIZED_BYTES
---Map output records | MAP_OUTPUT_RECORDS
---Physical memory (bytes) snapshot | PHYSICAL_MEMORY_BYTES
---Reduce input groups | REDUCE_INPUT_GROUPS
---Reduce input records | REDUCE_INPUT_RECORDS
---Reduce output records | REDUCE_OUTPUT_RECORDS
---Reduce shuffle bytes | REDUCE_SHUFFLE_BYTES
---Spilled Records | SPILLED_RECORDS
---SPLIT_RAW_BYTES | SPLIT_RAW_BYTES
---Virtual memory (bytes) snapshot | VIRTUAL_MEMORY_BYTES

File Input Format Counters  | org.apache.hadoop.mapreduce.lib.input.FileInputFormat$Counter
---Bytes Read | BYTES_READ

File Output Format Counters  | org.apache.hadoop.mapreduce.lib.output.FileOutputFormat$Counter
---Bytes Written | BYTES_WRITTEN


1. Inside a Map or Reduce task:

Increment a counter (typically used for custom counters): context.getCounter(<groupName>, <counterName>).increment(<incr>);

Read a counter's current value: context.getCounter(<groupName>, <counterName>).getValue();
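The two calls above can be sketched inside a mapper like this. This is a minimal illustration, not from the original post: the mapper class, the "MyGroup"/"EmptyLines" names, and the MyCounter enum are all made up for the example, and it assumes the new (org.apache.hadoop.mapreduce) API.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // A counter can also be declared as an enum; the group name is then
    // the enum's class name and the counter name is the enum constant.
    public enum MyCounter { BAD_RECORDS }  // illustrative name

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        if (value.toString().isEmpty()) {
            // Increment a custom counter by group/name strings
            // ("MyGroup"/"EmptyLines" are hypothetical names) ...
            context.getCounter("MyGroup", "EmptyLines").increment(1);
            // ... or via the enum overload.
            context.getCounter(MyCounter.BAD_RECORDS).increment(1);
            return;
        }
        // Each call to context.write() also bumps MAP_OUTPUT_RECORDS.
        context.write(value, new IntWritable(1));
    }
}
```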

2. Outside a Map or Reduce task (e.g. in the driver, after the job finishes):

job.getCounters().findCounter(groupName, counterName)
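A driver-side sketch of reading counters after the job completes. Job setup is elided; the "MyGroup"/"EmptyLines" custom-counter names are illustrative assumptions, while the built-in group and counter names come straight from the table above.

```java
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;

// ... job configured earlier ...
job.waitForCompletion(true);

Counters counters = job.getCounters();

// Built-in counter: the group is the internal class name and the
// counter is the internal name from the list above.
Counter mapOutputRecords = counters.findCounter(
        "org.apache.hadoop.mapred.Task$Counter", "MAP_OUTPUT_RECORDS");
System.out.println("Map output records: " + mapOutputRecords.getValue());

// Custom counter, looked up by group/name (hypothetical names).
long emptyLines = counters.findCounter("MyGroup", "EmptyLines").getValue();
```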

3. Map output records | MAP_OUTPUT_RECORDS and Reduce output records | REDUCE_OUTPUT_RECORDS record the number of calls to Context.write(<K>, <V>) (this is the new API; I am not sure about the old API's OutputCollector.collect(<K>, <V>)). Records written via MultipleOutputs.write(...) are NOT counted.

posted on 2012-10-31 11:49  山君