[python][spark] An example of reading multiple files with wholeTextFiles

$pwd 

/home/training/mydir

$cat file1.json

{
"firstName":"Fred",
"lastName":"Flintstone",
"userid":"123"
}

$cat file2.json

{
"firstName":"Barney",
"lastName":"Rubble",
"userid":"456"
}

[training@localhost ~]$ hdfs dfs -put /home/training/mydir
[training@localhost ~]$
[training@localhost ~]$ hdfs dfs -ls
Found 4 items
drwxrwxrwx - training supergroup 0 2017-09-23 19:26 .sparkStaging
-rw-rw-rw- 1 training supergroup 48 2017-09-25 05:31 cats.txt
drwxrwxrwx - training supergroup 0 2017-09-25 15:39 mydir   <-- the uploaded directory
-rw-rw-rw- 1 training supergroup 34 2017-09-23 06:16 test.txt
[training@localhost ~]$
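Conceptually, `wholeTextFiles` produces one (path, whole-file content) record per file in the directory, unlike `textFile`, which produces one record per line. The behavior can be mimicked locally without Spark; this is a minimal sketch using a throwaway temp directory containing the same two JSON files (the directory name and setup are illustrative, not from the session above):

```python
import tempfile
from pathlib import Path

# Create a throwaway local directory with the two JSON files from the example
tmp = Path(tempfile.mkdtemp())
(tmp / "file1.json").write_text(
    '{\n"firstName":"Fred",\n"lastName":"Flintstone",\n"userid":"123"\n}\n')
(tmp / "file2.json").write_text(
    '{\n"firstName":"Barney",\n"lastName":"Rubble",\n"userid":"456"\n}\n')

# wholeTextFiles-style result: one (path, entire content) pair per file
whole = sorted((str(p), p.read_text()) for p in tmp.glob("*.json"))
print(len(whole))  # 2 -- one record per file, not per line
```

This per-file pairing is what makes `wholeTextFiles` suitable for formats like JSON, where a single logical record spans multiple lines.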

myrdd1 = sc.wholeTextFiles("mydir")

In [32]: myrdd1.count()
Out[32]: 2

In [35]: myrdd1.take(2)

Out[35]:
[(u'hdfs://localhost:8020/user/training/mydir/file1.json',
u'{\n "firstName":"Fred",\n "lastName":"Flintstone",\n "userid":"123"\n}\n'),
(u'hdfs://localhost:8020/user/training/mydir/file2.json',
u'{\n "firstName":"Barney",\n "lastName":"Rubble",\n "userid":"456"\n}\n')]
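Because each value holds an entire file, the JSON can be parsed in a single step per record (in Spark this would typically be `myrdd1.mapValues(json.loads)`). A minimal sketch in plain Python, using the exact (path, content) tuples shown above:

```python
import json

# The (path, content) pairs returned by sc.wholeTextFiles("mydir") above
records = [
    ('hdfs://localhost:8020/user/training/mydir/file1.json',
     '{\n "firstName":"Fred",\n "lastName":"Flintstone",\n "userid":"123"\n}\n'),
    ('hdfs://localhost:8020/user/training/mydir/file2.json',
     '{\n "firstName":"Barney",\n "lastName":"Rubble",\n "userid":"456"\n}\n'),
]

# Parse each whole-file value into a dict; keep the source path alongside it
parsed = [(path, json.loads(content)) for path, content in records]
for path, doc in parsed:
    print(path.rsplit('/', 1)[-1], doc['firstName'], doc['userid'])
```

Parsing would fail with `textFile`, since each line of a pretty-printed JSON file is not valid JSON on its own.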

posted @ 2017-09-26 06:50  健哥的数据花园