Abstract:
# Simply calling the sortByKey() function does the job
from pyspark import SparkContext
sc = SparkContext('local', 'Sort')
list = ["7", "4", "8", "2", "5"]
textFile = sc.parallelize(list …
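The summary above is cut off mid-call. Since a Spark runtime may not be available, here is a plain-Python sketch of what the rest of such a sortByKey pipeline typically does (pair each string with a numeric key, sort by key, collect the values); the exact remaining steps are an assumption, not the original post's code:

```python
# Plain-Python sketch of a typical sortByKey pipeline:
# parallelize -> map to (key, value) pairs -> sortByKey -> collect.
# No Spark runtime is used; each step is emulated with ordinary lists.

data = ["7", "4", "8", "2", "5"]

# map(lambda x: (int(x), x)): build (key, value) pairs with numeric keys
pairs = [(int(x), x) for x in data]

# sortByKey(ascending=True): sort the pairs by their key
pairs_sorted = sorted(pairs, key=lambda kv: kv[0])

# collect the sorted original strings
result = [v for _, v in pairs_sorted]
print(result)  # ['2', '4', '5', '7', '8']
```

In real PySpark the same shape would be `sc.parallelize(data).map(lambda x: (int(x), x)).sortByKey(True).values().collect()`.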
Abstract:
# Spark with Python
# Import the pyspark library
from pyspark import SparkContext
# Configure the SparkContext
sc = SparkContext('local', 'wordcount')
# Create a new RDD by loading a local file
textFile = sc.te …
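This summary breaks off at `sc.te` (presumably `sc.textFile`). As a hedged, Spark-free sketch of the classic word-count pipeline such tutorials build (textFile → flatMap(split) → map((word, 1)) → reduceByKey(add)); the sample lines are invented stand-ins for the local file:

```python
# Plain-Python sketch of the classic PySpark word count.
# No Spark runtime is used; the file load is replaced by an in-memory list.

lines = ["hello spark", "hello world"]  # stands in for sc.textFile(...)

# flatMap(lambda line: line.split(" ")): split every line into words
words = [w for line in lines for w in line.split(" ")]

# map(lambda w: (w, 1)): pair each word with a count of 1
pairs = [(w, 1) for w in words]

# reduceByKey(lambda a, b: a + b): sum the counts per word
counts = {}
for w, n in pairs:
    counts[w] = counts.get(w, 0) + n

print(counts)  # {'hello': 2, 'spark': 1, 'world': 1}
```

The PySpark equivalent would be `sc.textFile(path).flatMap(lambda l: l.split(" ")).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b).collect()`.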