Python PySpark: Data Input
Converting Python data containers to RDD objects
Use the parallelize method of a SparkContext object to convert a Python data container (list, tuple, set, string, or dict) into an RDD.
from pyspark import SparkConf, SparkContext

# Create a SparkContext running in local mode with all available cores
conf = SparkConf().setMaster("local[*]").setAppName("test_spark_app")
sc = SparkContext(conf=conf)

# Five kinds of Python data containers
data1 = [1, 2, 3, 4, 5]                        # list
data2 = (1, 2, 3, 4, 5)                        # tuple
data3 = {1, 2, 3, 4, 5}                        # set
data4 = "abcdefg"                              # string
data5 = {"key1": "value1", "key2": "value2"}   # dict

# parallelize distributes each container's elements into an RDD
rdd1 = sc.parallelize(data1)
rdd2 = sc.parallelize(data2)
rdd3 = sc.parallelize(data3)
rdd4 = sc.parallelize(data4)
rdd5 = sc.parallelize(data5)

# collect gathers an RDD's elements back to the driver as a Python list
print(rdd1.collect())
print(rdd2.collect())
print(rdd3.collect())
print(rdd4.collect())
print(rdd5.collect())

sc.stop()
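Running the script prints one list per collect() call. A list and a tuple keep their element order; a set carries no order guarantee, so its elements may come back in any order; a string is split into individual characters; and a dict keeps only its keys, discarding the values. On a typical run the output looks like:

[1, 2, 3, 4, 5]
[1, 2, 3, 4, 5]
[1, 2, 3, 4, 5]
['a', 'b', 'c', 'd', 'e', 'f', 'g']
['key1', 'key2']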
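parallelize also accepts an optional numSlices argument that sets how many partitions the resulting RDD is split into; if omitted, Spark picks a default based on the environment. A minimal sketch, reusing the same SparkContext setup as above:

# Explicitly request 3 partitions; numSlices is optional
rdd = sc.parallelize([1, 2, 3, 4, 5], numSlices=3)
print(rdd.getNumPartitions())  # 3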
Reading a text file into an RDD
Use the textFile method of SparkContext to read a text file and obtain an RDD; each line of the file becomes one element of the RDD.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("test_spark_app")
sc = SparkContext(conf=conf)
# Use a raw string so the backslashes in the Windows path are not treated as escape sequences
rdd = sc.textFile(r"D:\WordCount\input\data.txt")
print(rdd.collect())  # one list element per line of the file
sc.stop()
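textFile likewise accepts an optional minPartitions argument (a lower bound on the number of partitions), and the path may also point to a directory or use a wildcard to read several files at once. A minimal sketch, assuming a hypothetical folder D:\WordCount\input that contains multiple .txt files, with the same SparkContext setup as above:

# Hypothetical directory; every line of every matched file becomes one RDD element
rdd = sc.textFile(r"D:\WordCount\input\*.txt", minPartitions=4)
print(rdd.getNumPartitions())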