HDFS Command-Line Operations
In a cluster, you can operate on HDFS from the command line on any node. Most HDFS commands mirror their Linux file-system counterparts; you just prefix them with hadoop fs. Run hadoop fs -help to see the full list of HDFS commands:
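Help can also be requested for a single command by name. A small sketch (assumes a Hadoop client is on the PATH and a cluster is running):

```shell
# Show usage for all HDFS shell commands
hadoop fs -help

# Show detailed usage for a single command, e.g. ls
hadoop fs -help ls

# -usage prints just the one-line synopsis
hadoop fs -usage ls
```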
1. List a directory:
hadoop fs -ls /
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135029375-80337313.png)
2. Create a directory:
hadoop fs -mkdir /study/mr
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135206191-1005882845.png)
Add -p to create nested directories in one step:
hadoop fs -mkdir -p /study/mr/wordcount/input
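To confirm the nested directories exist, you can list recursively. A sketch using the paths from the examples above:

```shell
# Create the whole path in one step
hadoop fs -mkdir -p /study/mr/wordcount/input

# List everything under /study recursively
hadoop fs -ls -R /study
```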
3. Delete a directory (note that -rmdir only removes empty directories):
hadoop fs -rmdir /study/test
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135347209-876876096.png)
4. Upload a file:
hadoop fs -put install.log /study/test. You can also use copyFromLocal to copy from the local file system to HDFS.
After uploading, verify with hadoop fs -ls /study/test:
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135508401-967663899.png)
You can also browse HDFS through the web UI at http://192.168.103.137:50070:
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135554349-1622763286.png)
The uploaded file is stored with a replication factor of 3 (note: a displayed replication of 3 does not guarantee that 3 physical replicas actually exist), with replicas placed on three datanodes. The uploaded data lives under /usr/local/hadoop/tmp/dfs/data/current.
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135706562-869888694.png)
Viewing the block file blk_1073741826 with cat shows the same content as the original uploaded file.
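One way to map an HDFS file to its block files and datanode locations is hdfs fsck. A sketch, using the file path and data directory from the examples above:

```shell
# Print the block IDs and datanode locations for the file
hdfs fsck /study/test/install.log -files -blocks -locations

# On a datanode, find the corresponding block files on local disk
# (excluding the .meta checksum files)
find /usr/local/hadoop/tmp/dfs/data/current -name 'blk_*' -not -name '*.meta'
```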
5. View a file:
hadoop fs -cat /study/test/install.log
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135827183-2058058820.png)
6. Download a file:
hadoop fs -get /study/test/install.log /root/. You can also use copyToLocal to copy from HDFS to the local file system.
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710135930428-2111119783.png)
7. Delete a file:
hadoop fs -rm /study/test/install.log
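A couple of related deletion flags are worth knowing. A sketch (actual behavior depends on whether the HDFS trash feature is enabled on your cluster):

```shell
# Recursively delete a directory and everything under it
hadoop fs -rm -r /study/test

# Bypass the trash and delete immediately
hadoop fs -rm -skipTrash /study/test/install.log
```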
![](https://images2018.cnblogs.com/blog/1367698/201807/1367698-20180710140127237-172602161.png)