Testing Hadoop with the bundled examples (Notes 3)
Common Hadoop commands:
# List the files in a directory
root@vm:/software/hadoop/hadoop-0.20.2# bin/hadoop dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2015-10-02 13:25 /opt
# Put a local file into HDFS
hadoop dfs -put <local file> <HDFS directory>
# Download a file from HDFS to the local filesystem
hadoop dfs -get <HDFS file> <local path>
# Show the contents of a file
hadoop dfs -cat <file>
# Delete a file or directory (recursively)
hadoop dfs -rmr <path>
# Report the status of all datanodes
hadoop dfsadmin -report
# Operate on currently running jobs (see the examples below)
hadoop job -list / hadoop job -kill <job-id>
# Balance data blocks across the datanodes
hadoop balancer
# Enter and leave safe mode
hadoop dfsadmin -safemode enter
hadoop dfsadmin -safemode leave
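A quick sketch of how the job and safemode commands above are typically used (the job ID is only an example, taken from the wordcount run later in these notes):
# List the jobs that are currently running
hadoop job -list
# Kill a job by its ID
hadoop job -kill job_201510022121_0001
# Check whether the namenode is currently in safe mode
hadoop dfsadmin -safemode get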
Testing the installed Hadoop
Create two test files on the datanode:
cd retacn
mkdir input
echo "hello world" > input/test1.txt
echo "hello hadoop" > input/test2.txt
cd ~/hadoop-0.20.2
# Upload the directory to HDFS as 'in'
bin/hadoop dfs -put /root/input in
# List the uploaded files
bin/hadoop dfs -ls ./in/*
# Show the contents of a file
bin/hadoop dfs -cat <file>
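For example, assuming the two files were uploaded as above, the following should print the contents of the first test file:
# Expected output: hello world
bin/hadoop dfs -cat ./in/test1.txt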
Test with the wordcount example bundled with Hadoop
# 'in' is the input directory, 'out' is the output directory
root@vm:/software/hadoop/hadoop-0.20.2# bin/hadoop jar hadoop-0.20.2-examples.jar wordcount in out
15/10/02 21:28:55 INFO input.FileInputFormat: Total input paths to process : 2
15/10/02 21:28:57 INFO mapred.JobClient: Running job: job_201510022121_0001
15/10/02 21:28:58 INFO mapred.JobClient: map 0% reduce 0%
15/10/02 21:29:50 INFO mapred.JobClient: map 50% reduce 0%
15/10/02 21:30:03 INFO mapred.JobClient: map 100% reduce 0%
15/10/02 21:30:09 INFO mapred.JobClient: map 100% reduce 100%
15/10/02 21:30:11 INFO mapred.JobClient: Job complete: job_201510022121_0001
15/10/02 21:30:11 INFO mapred.JobClient: Counters: 17
15/10/02 21:30:11 INFO mapred.JobClient: Job Counters
15/10/02 21:30:11 INFO mapred.JobClient: Launched reduce tasks=1
15/10/02 21:30:11 INFO mapred.JobClient: Launched map tasks=2
15/10/02 21:30:11 INFO mapred.JobClient: Data-local map tasks=2
15/10/02 21:30:11 INFO mapred.JobClient: FileSystemCounters
15/10/02 21:30:11 INFO mapred.JobClient: FILE_BYTES_READ=55
15/10/02 21:30:11 INFO mapred.JobClient: HDFS_BYTES_READ=25
15/10/02 21:30:11 INFO mapred.JobClient: FILE_BYTES_WRITTEN=180
15/10/02 21:30:11 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=25
15/10/02 21:30:11 INFO mapred.JobClient: Map-Reduce Framework
15/10/02 21:30:11 INFO mapred.JobClient: Reduce input groups=3
15/10/02 21:30:11 INFO mapred.JobClient: Combine output records=4
15/10/02 21:30:11 INFO mapred.JobClient: Map input records=2
15/10/02 21:30:11 INFO mapred.JobClient: Reduce shuffle bytes=61
15/10/02 21:30:11 INFO mapred.JobClient: Reduce output records=3
15/10/02 21:30:11 INFO mapred.JobClient: Spilled Records=8
15/10/02 21:30:11 INFO mapred.JobClient: Map output bytes=41
15/10/02 21:30:11 INFO mapred.JobClient: Combine input records=4
15/10/02 21:30:11 INFO mapred.JobClient: Map output records=4
15/10/02 21:30:11 INFO mapred.JobClient: Reduce input records=4
View the test results
# List the files in the current user's HDFS home directory
bin/hadoop dfs -ls
# View the wordcount output
bin/hadoop dfs -cat ./out/*
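Given the two input files ("hello world" and "hello hadoop"), the expected counts are hadoop 1, hello 2 and world 1. A minimal sketch of copying the result back to the local filesystem (the local target path is only an example):
# Download the output directory from HDFS
bin/hadoop dfs -get out /root/wordcount-out
# The counts are in the part-* file(s), e.g. part-r-00000
cat /root/wordcount-out/part-*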