Spark: group by key, then compute the max, min, and average for each key, using groupByKey or reduceByKey

What you're getting back is an object which allows you to iterate over the results. You can turn the results of groupByKey into a list by calling list() on the values, e.g.

example = sc.parallelize([(0, u'D'), (0, u'D'), (1, u'E'), (2, u'F')])

example.groupByKey().collect()
# Gives [(0, <pyspark.resultiterable.ResultIterable object ......]

example.groupByKey().map(lambda x : (x[0], list(x[1]))).collect()
# Gives [(0, [u'D', u'D']), (1, [u'E']), (2, [u'F'])]

# OR:
example.groupByKey().mapValues(list)
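
For the per-key max, min, and average asked about in the title, the same groupByKey pattern can be extended by computing the statistics inside mapValues. Below is a minimal sketch; the RDD nums and the helper summarize are made-up names, and it assumes the values are numeric:

def summarize(values):
    vs = list(values)  # materialize the ResultIterable once
    return (min(vs), max(vs), sum(vs) / float(len(vs)))

nums = sc.parallelize([(0, 3.0), (0, 5.0), (1, 2.0), (1, 8.0), (2, 7.0)])
nums.groupByKey().mapValues(summarize).collect()
# e.g. [(0, (3.0, 5.0, 4.0)), (1, (2.0, 8.0, 5.0)), (2, (7.0, 7.0, 7.0))]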
 
Hey Ron,

It was pretty much exactly as Sean had depicted. I just needed to provide
count with an anonymous function to tell it which elements to count. Since I
wanted to count them all, the function is simply "true".
        val grouped = rdd.groupByKey().mapValues { mcs =>
          val values = mcs.map(_.foo.toDouble)
          val n = values.count(x => true)
          val sum = values.sum
          val sumSquares = values.map(x => x * x).sum
          val stddev = math.sqrt(n * sumSquares - sum * sum) / n
          print("stddev: " + stddev)
          stddev
        }


I hope that helps
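
For reference, here is a rough PySpark counterpart of the same idea, accumulating the count, sum, and sum of squares per key with reduceByKey instead of materializing each group. The names (pairs, moments) and the sample data are illustrative, not from the original thread:

import math

pairs = sc.parallelize([("a", 1.0), ("a", 3.0), ("b", 2.0), ("b", 6.0)])

# Accumulate (n, sum, sum of squares) per key ...
moments = pairs.mapValues(lambda v: (1, v, v * v)) \
    .reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1], x[2] + y[2]))

# ... then apply the same formula as above: sqrt(n * sumSquares - sum^2) / n
moments.mapValues(lambda m: math.sqrt(m[0] * m[2] - m[1] * m[1]) / m[0]).collect()
# e.g. [('a', 1.0), ('b', 2.0)]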

 

 

Just don't use groupByKey for this. Use reduceByKey:

lines.map(lambda x: (x[1][0:4], (x[0], float(x[3])))) \
    .map(lambda x: (x[0], (x[1], x[1]))) \
    .reduceByKey(lambda x, y: (
        min(x[0], y[0], key=lambda v: v[1]),
        max(x[1], y[1], key=lambda v: v[1])))
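
The same reduceByKey approach can also carry a running count and sum so that the per-key average falls out alongside min and max. A small sketch with illustrative names and data (not taken from the answer above):

pairs = sc.parallelize([("1901", 3.0), ("1901", 7.0), ("1902", 5.0)])

stats = pairs.mapValues(lambda v: (v, v, v, 1)) \
    .reduceByKey(lambda a, b: (min(a[0], b[0]),   # running min
                               max(a[1], b[1]),   # running max
                               a[2] + b[2],       # running sum
                               a[3] + b[3])) \
    .mapValues(lambda s: (s[0], s[1], s[2] / s[3]))  # (min, max, average)

stats.collect()
# e.g. [('1901', (3.0, 7.0, 5.0)), ('1902', (5.0, 5.0, 5.0))]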

 
