Abstract:
Test data (one record per three values): 1 A 1, 1 A 2, 1 B 3, 2 B 11, 2 D 12, 2 A 13, 3 B 21, 3 F 22, 3 A 23, 4 B 36, 4 A 37, 1 G 91, 2 A 99, 3 D 93, 4 E 94. ① row_number() over(partition by X1 order b… (read more)
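A minimal Scala sketch of the row_number() window query named in the abstract, run over rows shaped like the test data above; the column names X2/X3, the view name t, and the ordering column are assumptions (only the partition column X1 appears in the abstract):

import org.apache.spark.sql.SparkSession

object RowNumberDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("rowNumber").getOrCreate()
    import spark.implicits._

    // Sample rows shaped like the test data: (X1, X2, X3)
    val df = Seq((1, "A", 1), (1, "A", 2), (1, "B", 3), (2, "B", 11), (2, "D", 12), (2, "A", 13))
      .toDF("X1", "X2", "X3")
    df.createOrReplaceTempView("t")

    // Number rows within each X1 partition (ordering by X3 is an assumption)
    spark.sql("select X1, X2, X3, row_number() over(partition by X1 order by X3) as rn from t").show()
  }
}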
Abstract:
Workflow for loading a DataFrame: ① create a SparkSession object; ② create the DataFrame; ③ create a view; ④ process the data. 1. Load a DataFrame from CSV data: val session = SparkSession.builder().master("local").appNam… (read more)
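A runnable sketch of that four-step flow for CSV; the header option and the path ./data/people.csv are assumptions, not taken from the abstract:

import org.apache.spark.sql.SparkSession

object LoadCsvDemo {
  def main(args: Array[String]): Unit = {
    // ① create the SparkSession object
    val session = SparkSession.builder().master("local").appName("loadCsv").getOrCreate()

    // ② create the DataFrame object from CSV (path is hypothetical)
    val df = session.read.option("header", "true").csv("./data/people.csv")

    // ③ create a view over the DataFrame
    df.createOrReplaceTempView("people")

    // ④ process the data with SQL
    session.sql("select * from people").show()
  }
}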
Abstract:
1. Repartition an RDD: rdd1.coalesce(num, shuffle), where the Boolean controls whether a shuffle is performed. val rdd1 = sc.parallelize(Array[String]("love1", "love2", "love3", "love4", "love5", "love6", "love7", "l… (read more)
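A sketch of coalesce on the RDD from the abstract; the initial partition count of 4 and the target of 2 are assumptions:

import org.apache.spark.{SparkConf, SparkContext}

object CoalesceDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("coalesce")
    val sc = new SparkContext(conf)

    val rdd1 = sc.parallelize(Array[String]("love1", "love2", "love3", "love4",
      "love5", "love6", "love7", "love8"), 4)
    println(s"before: ${rdd1.getNumPartitions} partitions")

    // second argument is the shuffle flag: false narrows partitions without a shuffle
    val rdd2 = rdd1.coalesce(2, false)
    println(s"after: ${rdd2.getNumPartitions} partitions")

    sc.stop()
  }
}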
Abstract:
1. RDD conversion, turning a pair RDD into a Map: rdd.collectAsMap(). val rdd = sc.parallelize(Array[(String, Int)](("zhangsan", 18), ("lisi", 19), ("wangwu", 20), ("maliu", 21)))… (read more)
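A self-contained sketch of collectAsMap() on that pair RDD; the master/appName settings are assumptions:

import org.apache.spark.{SparkConf, SparkContext}

object CollectAsMapDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("collectAsMap")
    val sc = new SparkContext(conf)

    val rdd = sc.parallelize(Array[(String, Int)](
      ("zhangsan", 18), ("lisi", 19), ("wangwu", 20), ("maliu", 21)
    ))

    // collectAsMap() pulls the pair RDD back to the driver as a Map[String, Int];
    // if a key appears more than once, only one of its values is kept
    val result: scala.collection.Map[String, Int] = rdd.collectAsMap()
    result.foreach { case (name, age) => println(s"$name -> $age") }

    sc.stop()
  }
}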
Abstract:
① Create a SparkConf: val conf = new SparkConf(); conf.setMaster..; conf.setAppName... ② Create a SparkContext: val sc = new SparkContext(conf). ③ Create an RDD: val rdd = s… (read more)
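A sketch putting those setup steps together; the file path ./data/words is borrowed from the next abstract, and the word-count processing at the end is an assumed illustration rather than part of the abstract:

import org.apache.spark.{SparkConf, SparkContext}

object SparkContextDemo {
  def main(args: Array[String]): Unit = {
    // ① create SparkConf
    val conf = new SparkConf()
    conf.setMaster("local")
    conf.setAppName("sparkContextDemo")

    // ② create SparkContext
    val sc = new SparkContext(conf)

    // ③ create an RDD (file path assumed)
    val rdd = sc.textFile("./data/words")

    // simple word count to exercise the RDD (illustrative only)
    rdd.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).foreach(println)

    sc.stop()
  }
}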
Abstract:
1. val rdd = sc.textFile...: val lines: RDD[String] = sc.textFile("./data/words"). 2. val rdd = sc.parallelize(Seq[xx](... ...)): val result: RDD[String] = … (read more)
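A sketch showing both ways of creating an RDD from the abstract; the element type String for parallelize and the sample strings are assumptions:

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object CreateRddDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("createRdd")
    val sc = new SparkContext(conf)

    // 1. from a file on disk
    val lines: RDD[String] = sc.textFile("./data/words")

    // 2. from a local collection
    val result: RDD[String] = sc.parallelize(Seq[String]("hello spark", "hello scala"))

    println(lines.count())
    println(result.count())

    sc.stop()
  }
}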