[Spark][Python] Building a key-value pair RDD

[training@localhost ~]$ cat users.txt
user001 Fred Flintstone
user090 Bugs Bunny
user111 Harry Potter
[training@localhost ~]$ hdfs dfs -put users.txt
[training@localhost ~]$
[training@localhost ~]$
[training@localhost ~]$ hdfs dfs -cat users.txt
user001 Fred Flintstone  <<<< the fields are separated by a tab character
user090 Bugs Bunny
user111 Harry Potter
[training@localhost ~]$

In the PySpark shell, read the file from HDFS into an RDD of lines:

user01 = sc.textFile("users.txt")

Split each line on the tab character; each element of the new RDD is a list of fields:

user02 = user01.map(lambda line: line.split("\t"))

In [16]: user02.take(3)
Out[16]:
[[u'user001', u'Fred Flintstone'],
[u'user090', u'Bugs Bunny'],
[u'user111', u'Harry Potter']]

Convert each two-element list into a tuple; a two-element tuple is what Spark treats as a (key, value) pair:

user03 = user02.map(lambda fields: (fields[0], fields[1]))

user03.take(3)

Out[20]:
[(u'user001', u'Fred Flintstone'),  <<<< the key-value pair is built here
(u'user090', u'Bugs Bunny'),
(u'user111', u'Harry Potter')]
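
With (key, value) tuples in place, the standard PySpark pair-RDD operations become available. A short sketch continuing the session above (the expected outputs follow from the three records shown):

user03.keys().collect()      # [u'user001', u'user090', u'user111']
user03.lookup(u'user090')    # [u'Bugs Bunny']
user03.collectAsMap()        # {u'user001': u'Fred Flintstone', ...}

The two map steps could also be fused into one, e.g. sc.textFile("users.txt").map(lambda line: tuple(line.split("\t", 1))), where the second argument to split() guards against tabs inside the value.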
