Spark Scala: selecting the records with the latest date (max date) per group
val emp = Seq((1,"Smith",-1,"2018","10","M",3000), (2,"Rose",1,"2010","20","M",4000), (1,"Williams",1,"2020","10","M",1000), (2,"Jones",2,"2005","10","F",2000), (1,"Brown",2,"2020","40","",-1), (6,"Brown",2,"2010","50","",-1) ) val empColumns = Seq("emp_id","name","superior_emp_id","year_joined", "emp_dept_id","gender","salary") import spark.sqlContext.implicits._ val empDF = emp.toDF(empColumns:_*) empDF.show(false) scala> val b = empDF scala> b.show +------+--------+---------------+-----------+-----------+------+------+ |emp_id| name|superior_emp_id|year_joined|emp_dept_id|gender|salary| +------+--------+---------------+-----------+-----------+------+------+ | 1| Smith| -1| 2018| 10| M| 3000| | 2| Rose| 1| 2010| 20| M| 4000| | 1|Williams| 1| 2020| 10| M| 1000| | 2| Jones| 2| 2005| 10| F| 2000| | 1| Brown| 2| 2020| 40| | -1| | 6| Brown| 2| 2010| 50| | -1| +------+--------+---------------+-----------+-----------+------+------+ scala> val a = empDF.groupBy("emp_id").agg(max("year_joined").alias("max")) a: org.apache.spark.sql.DataFrame = [emp_id: int, max: string] scala> a.show +------+----+ |emp_id| max| +------+----+ | 1|2020| | 6|2010| | 2|2010| +------+----+ scala> b.join(a, Seq("emp_id"), "left").show +------+--------+---------------+-----------+-----------+------+------+----+ |emp_id| name|superior_emp_id|year_joined|emp_dept_id|gender|salary| max| +------+--------+---------------+-----------+-----------+------+------+----+ | 1| Smith| -1| 2018| 10| M| 3000|2020| | 2| Rose| 1| 2010| 20| M| 4000|2010| | 1|Williams| 1| 2020| 10| M| 1000|2020| | 2| Jones| 2| 2005| 10| F| 2000|2010| | 1| Brown| 2| 2020| 40| | -1|2020| | 6| Brown| 2| 2010| 50| | -1|2010| +------+--------+---------------+-----------+-----------+------+------+----+ scala> b.join(a, Seq("emp_id"), "left").where(s"year_joined = max").show +------+--------+---------------+-----------+-----------+------+------+----+ |emp_id| name|superior_emp_id|year_joined|emp_dept_id|gender|salary| max| +------+--------+---------------+-----------+-----------+------+------+----+ | 2| Rose| 1| 2010| 20| M| 4000|2010| | 1|Williams| 1| 2020| 10| M| 1000|2020| | 1| Brown| 2| 2020| 40| | -1|2020| | 6| Brown| 2| 2010| 50| | -1|2010| +------+--------+---------------+-----------+-----------+------+------+----+
References:
https://sparkbyexamples.com/spark/spark-sql-dataframe-join/
https://stackoverflow.com/questions/39699495/spark-2-0-groupby-column-and-then-get-maxdate-on-a-datetype-column?rq=1
Tags: Spark