ZhangZhihui's Blog  

February 1, 2025

Summary: from pyspark.sql.functions import flatten, collect_list # create a DataFrame with an array of arrays column df = spark.createDataFrame([ (1, [[1, 2], … Read more
posted @ 2025-02-01 22:45 ZhangZhihuiAAA
 
Summary: build.sh: #!/bin/bash # # -- Build Apache Spark Standalone Cluster Docker Images # # -- Variables # BUILD_DATE="$(date -u +'%Y-%m-%d')" SPARK_VERSION= … Read more
posted @ 2025-02-01 20:24 ZhangZhihuiAAA
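The build.sh excerpt is also truncated (it ends at `SPARK_VERSION=`). A hedged sketch of what such a script typically looks like; the image names, Dockerfile file names, and version numbers below are assumptions, not the post's actual values:

```shell
#!/bin/bash
#
# -- Build Apache Spark Standalone Cluster Docker Images (sketch)
# NOTE: image tags, Dockerfile names, and versions are illustrative assumptions.

set -euo pipefail

# -- Variables --
BUILD_DATE="$(date -u +'%Y-%m-%d')"
SPARK_VERSION="3.5.1"    # assumed; the excerpt cuts off before the value
HADOOP_VERSION="3"       # assumed

# Shared base image (OS + JDK), then the Spark image layered on top
docker build \
  --build-arg build_date="${BUILD_DATE}" \
  -f cluster-base.Dockerfile \
  -t cluster-base:latest .

docker build \
  --build-arg build_date="${BUILD_DATE}" \
  --build-arg spark_version="${SPARK_VERSION}" \
  --build-arg hadoop_version="${HADOOP_VERSION}" \
  -f spark-base.Dockerfile \
  -t spark-base:"${SPARK_VERSION}" .
```

Pinning `SPARK_VERSION` and passing it via `--build-arg` keeps the Dockerfiles reusable across Spark releases.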
 