Abstract: Method 1: map + reduceByKey — package com.cw.bigdata.spark.wordcount import org.apache.spark.rdd.RDD import org.apache.spark.{SparkConf, SparkContext} /** * W…
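The preview cuts off right after the imports, so the following is only a minimal sketch of what a map + reduceByKey WordCount typically looks like with those imports; the object name `WordCount1` and the input path "input" are assumptions, not taken from the original post.

```scala
package com.cw.bigdata.spark.wordcount

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of Method 1 (map + reduceByKey); object name and input path are assumed.
object WordCount1 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("WordCount")
    val sc = new SparkContext(conf)

    // Read the text file, split each line into words, pair each word with 1,
    // then sum the counts per word with reduceByKey.
    val lines: RDD[String] = sc.textFile("input")        // assumed input directory
    val words: RDD[String] = lines.flatMap(_.split(" "))
    val pairs: RDD[(String, Int)] = words.map(word => (word, 1))
    val counts: RDD[(String, Int)] = pairs.reduceByKey(_ + _)

    counts.collect().foreach(println)

    sc.stop()
  }
}
```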
posted @ 2020-07-09 14:33 by 陈小哥cw · Views (1863) · Comments (1) · Recommended (1)