Selecting multiple columns from a Spark DataFrame

val df = sc.parallelize(Seq(
  (0,"cat26",30.9), 
  (1,"cat67",28.5), 
  (2,"cat56",39.6),
  (3,"cat8",35.6))).toDF("Hour", "Category", "Value")

// or a List of column names read from a file
val cols = List("Hour", "Value")

scala> df.select(cols.head, cols.tail: _*).show
+----+-----+
|Hour|Value|
+----+-----+
|   1| 28.5|
|   3| 35.6|
|   2| 39.6|
|   0| 30.9|
+----+-----+
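The head/tail split is needed because this overload of `select` has the signature `select(col: String, cols: String*)`, so a `List` cannot be passed directly; the first element fills the fixed parameter and the rest are expanded as varargs with `: _*`. A minimal plain-Scala sketch of the same varargs pattern (no Spark required; `pick` is a hypothetical stand-in for `select`):

```scala
// Hypothetical stand-in mirroring DataFrame.select(col: String, cols: String*)
def pick(first: String, rest: String*): Seq[String] = first +: rest

val cols = List("Hour", "Value")

// Splitting the List into head + tail satisfies the (String, String*) signature
val picked = pick(cols.head, cols.tail: _*)
println(picked.mkString(","))  // Hour,Value
```

Equivalently in Spark, you can avoid the split altogether with `df.select(cols.map(col): _*)` after `import org.apache.spark.sql.functions.col`, which maps each name to a `Column` and passes the whole sequence as varargs.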


posted @ 2020-12-29 20:14 船长博客
Always believe that something wonderful is about to happen!