Interacting with HDFS - via the Java API

Environment (on Ubuntu)

  JDK

  Eclipse

  JAR packages (a pain; it took me quite a while to sort this out)

    - How to import the JAR packages

    See: https://www.cnblogs.com/floakss/p/9739030.html

(1) hadoop-common-2.9.1.jar and hadoop-nfs-2.9.1.jar under "/usr/local/hadoop/share/hadoop/common";
(2) all JAR packages under "/usr/local/hadoop/share/hadoop/common/lib";
(3) hadoop-hdfs-2.9.1.jar and hadoop-hdfs-nfs-2.9.1.jar under "/usr/local/hadoop/share/hadoop/hdfs";
(4) all JAR packages under "/usr/local/hadoop/share/hadoop/hdfs/lib".

Operations

  Files: create, read, write, delete, upload, download

  Directories: create, delete, etc. (sketches of the operations not covered by the example are given after it)
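
All of these operations start the same way: build a Configuration, point it at the NameNode, and obtain a FileSystem handle. A minimal sketch of just that shared setup (the class name HdfsClient is invented for illustration; it assumes the same hdfs://Kouri:9000 address used in the example below):

//Shared setup (sketch)

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsClient {
    //Every file/directory operation below goes through this handle
    public static FileSystem connect() throws IOException {
        Configuration conf=new Configuration();
        conf.set("fs.defaultFS", "hdfs://Kouri:9000");
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        return FileSystem.get(URI.create("hdfs://Kouri:9000/"), conf);
    }
}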

Example - creating a file

 

//Utility class

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Temp {
    public void createFileOnHDFS() {
        String rootPath="hdfs://Kouri:9000/";
        Configuration conf=new Configuration();
        //Point the client at the NameNode and use the HDFS FileSystem implementation
        conf.set("fs.defaultFS", "hdfs://Kouri:9000");
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        try {
            FileSystem fs=FileSystem.get(URI.create(rootPath),conf);
            Path hdfsPath=new Path(rootPath+"user/hadoop/demo1.txt");
            System.out.println(""+fs.getHomeDirectory());
            //Write a short string into the new file through a buffered stream
            String con="hello world";
            FSDataOutputStream fout=fs.create(hdfsPath);
            BufferedOutputStream bout=new BufferedOutputStream(fout);
            bout.write(con.getBytes(),0,con.getBytes().length);
            bout.close();
            fout.close();
            System.out.println(hdfsPath+" created");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

//Test class

public class Test {
    public static void main(String []args) {
        Temp temp=new Temp();
        temp.createFileOnHDFS();
    }
}

 

Result screenshot:
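
The remaining operations from the list above follow the same pattern: obtain a FileSystem, then call open(), delete(), copyFromLocalFile() or copyToLocalFile() on it. Below is a hedged sketch, not part of the original example (the class names FileOps/DirOps and the local paths are invented for illustration, and the same Kouri:9000 setup is assumed):

//Other file operations (sketch)

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileOps {
    private FileSystem fs;

    public FileOps() throws IOException {
        Configuration conf=new Configuration();
        conf.set("fs.defaultFS", "hdfs://Kouri:9000");
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        fs=FileSystem.get(URI.create("hdfs://Kouri:9000/"), conf);
    }

    //Read demo1.txt (created by the example above) line by line
    public void readFileFromHDFS() throws IOException {
        FSDataInputStream fin=fs.open(new Path("/user/hadoop/demo1.txt"));
        BufferedReader reader=new BufferedReader(new InputStreamReader(fin));
        String line;
        while((line=reader.readLine())!=null) {
            System.out.println(line);
        }
        reader.close();
    }

    //Upload a local file to HDFS, then download it back under a new name
    public void uploadAndDownload() throws IOException {
        fs.copyFromLocalFile(new Path("/home/hadoop/local.txt"), new Path("/user/hadoop/local.txt"));
        fs.copyToLocalFile(new Path("/user/hadoop/local.txt"), new Path("/home/hadoop/copy.txt"));
    }

    //Delete a single file; the boolean enables recursive deletion and only matters for directories
    public void deleteFileFromHDFS() throws IOException {
        fs.delete(new Path("/user/hadoop/demo1.txt"), false);
    }
}

Directory creation and deletion use mkdirs() and delete(path, true) in the same way:

//Directory operations (sketch)

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirOps {
    public static void main(String[] args) throws IOException {
        Configuration conf=new Configuration();
        conf.set("fs.defaultFS", "hdfs://Kouri:9000");
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
        FileSystem fs=FileSystem.get(URI.create("hdfs://Kouri:9000/"), conf);

        Path dir=new Path("/user/hadoop/demoDir");
        System.out.println("mkdirs: "+fs.mkdirs(dir));   //like 'hdfs dfs -mkdir -p'
        System.out.println("delete: "+fs.delete(dir, true));   //true = recursive
        fs.close();
    }
}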

 

 

Reference: http://dblab.xmu.edu.cn/blog/290-2/

 
