Hadoop throughput testing

Benchmark tests

1) Test HDFS write performance

Test content: write 2 files of 128 MB each to the HDFS cluster

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -write -nrFiles 2 -fileSize 128MB
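TestDFSIO appends a summary of each run (throughput in MB/s, average IO rate, test execution time, etc.) to a local results file, by default TestDFSIO_results.log in the directory the command was run from (the -resFile option can point it elsewhere). After the write job finishes you can review it with:

cat ./TestDFSIO_results.log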


2) Test HDFS read performance

Test content: read 2 files of 128 MB each from the HDFS cluster

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -read -nrFiles 2 -fileSize 128MB
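The read test reads back the files created by the write test, so it has to run after step 1). By default TestDFSIO keeps its data under /benchmarks/TestDFSIO in HDFS; if unsure, confirm the files from the write step are still present before starting (the path may differ if your configuration overrides the default):

hadoop fs -ls /benchmarks/TestDFSIO/io_data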


3) Delete the data generated by the tests

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -clean
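The -clean option only removes the benchmark data in HDFS; the local results log stays behind. To finish cleaning up, delete the log as well and check that the TestDFSIO directory is gone (paths assume the defaults mentioned above):

rm -f ./TestDFSIO_results.log
hadoop fs -ls /benchmarks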


4) Benchmark MapReduce with the Sort program

(1) Use RandomWriter to generate random data. Each node runs 10 map tasks, and each map produces roughly 1 GB of binary random data.

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar randomwriter random-data
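With the defaults this produces roughly 10 GB per node (10 maps of about 1 GB each), which can take a while on a small cluster. If you only need a quick smoke test, the volume can be dialed down through RandomWriter's job properties; the property names below (mapreduce.randomwriter.mapsperhost and mapreduce.randomwriter.bytespermap) are the ones used by the bundled example and are worth double-checking against your Hadoop version:

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar randomwriter -D mapreduce.randomwriter.mapsperhost=2 -D mapreduce.randomwriter.bytespermap=134217728 random-data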

(2) Run the Sort program

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar sort random-data sorted-data
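Sort reads everything under random-data and writes the sorted output to sorted-data, so the two directories should end up roughly the same size; comparing them is a quick sanity check before the formal validation in the next step:

hadoop fs -du -s -h random-data sorted-data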

(3) Verify that the data is actually sorted

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data
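Once the validation completes, the generated input and output are just benchmark artifacts and can be removed to free HDFS space:

hadoop fs -rm -r random-data sorted-data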

