Various Transformations of DataFrame Columns

Background

Usage of Oracle's INSTR function:


INSTR(source_string, string_to_find, start_position, nth_occurrence)

For example, in INSTR('CORPORATE FLOOR', 'OR', 3, 2),
the source string is 'CORPORATE FLOOR' and the string to find is 'OR';
the search starts at the 3rd character, and the position of the 2nd match found from there is returned (14 in this case).
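To make the INSTR semantics concrete, here is a minimal plain-Scala sketch of the same behavior (1-based positions, 0 when not found). The helper name instr is hypothetical, for illustration only, and is not a Spark or Oracle API:

```scala
// Sketch of Oracle INSTR: find the nth occurrence of `sub` in `src`,
// searching from 1-based position `start`; return 0 if not found.
def instr(src: String, sub: String, start: Int = 1, nth: Int = 1): Int = {
  @annotation.tailrec
  def loop(from: Int, remaining: Int): Int = {
    val i = src.indexOf(sub, from)      // 0-based search position
    if (i < 0) 0                        // not found: Oracle returns 0
    else if (remaining == 1) i + 1      // convert back to 1-based
    else loop(i + 1, remaining - 1)     // keep searching past this match
  }
  loop(start - 1, nth)
}
```

With this sketch, instr("CORPORATE FLOOR", "OR", 3, 2) evaluates to 14, matching the Oracle example above.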

Usage of Oracle's SUBSTR function:

 Returns the portion of a string starting at a given position, with an optional length:
 substr( string, start_position, [ length ] )
  For example:
     substr('This is a test', 6, 2)     would return 'is'
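The same 1-based semantics can be sketched in plain Scala; oracleSubstr is a hypothetical helper name for illustration, not a real API:

```scala
// Sketch of Oracle SUBSTR: 1-based start position, explicit length,
// clamped to the string's bounds.
def oracleSubstr(s: String, start: Int, length: Int): String = {
  val begin = (start - 1).max(0)                  // 1-based -> 0-based
  s.slice(begin, (begin + length).min(s.length))  // slice clamps the end
}
```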

// The examples below assume a spark-shell session, where spark, sc,
// and spark.implicits._ (needed for toDF) are already in scope.
import org.apache.spark.sql.functions._

Scenario 1: truncating the strings in a DataFrame column by length

Approach 1: the "expr" function
val data = List("first", "second", "third")
val testDF = sc.parallelize(data).toDF("col")
val result = testDF.withColumn("newcol", expr("substring(col, 1, length(col)-1)"))
result.show(false)

Approach 2: use $"COLUMN".substr
val testDF = sc.parallelize(List("first", "second", "third")).toDF("col")
val result = testDF.withColumn("newcol", $"col".substr(lit(1), length($"col") - 1))
result.show(false)
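Both approaches compute, per row, the substring from position 1 to length-1, i.e. they drop the last character. A plain-Scala sketch of that per-row logic (outside Spark), to make the expected newcol values concrete:

```scala
// Plain-Scala equivalent of substring(col, 1, length(col)-1):
// keep everything except the last character of each string.
val data = List("first", "second", "third")
val trimmed = data.map(s => s.substring(0, s.length - 1))
// trimmed: List("firs", "secon", "thir")
```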

Scenario 2: splitting a DataFrame column into multiple fields (one column becomes several, and the original column is kept)

val testDF = sc.parallelize(List("first|1", "second|2", "third|3")).toDF("col")
val result = testDF.withColumn("newcol",split(col("col"), "\\|") )
scala> result.show(false)
+--------+-----------+
|col     |newcol     |
+--------+-----------+
|first|1 |[first, 1] |
|second|2|[second, 2]|
|third|3 |[third, 3] |
+--------+-----------+

val result2 = testDF
  .withColumn("newcol1", split(col("col"), "\\|")(0))
  .withColumn("newcol2", split(col("col"), "\\|").getItem(1))
scala> result2.show(false)
+--------+-------+-------+                                                      
|col     |newcol1|newcol2|
+--------+-------+-------+
|first|1 |first  |1      |
|second|2|second |2      |
|third|3 |third  |3      |
+--------+-------+-------+

If you do not want to keep the column that was split, you can of course remove it with drop.
An element of the split() result can be fetched either with (0) or with getItem(0).
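One detail worth stressing: like Java's String.split, Spark's split() takes a regular expression, which is why the pipe is escaped as "\\|". A plain-Scala sketch of the difference:

```scala
// The unescaped regex "|" is an alternation of two empty patterns and
// matches between every character; "\\|" matches a literal pipe.
val unescaped = "first|1".split("|").toList    // one element per character
val escaped   = "first|1".split("\\|").toList  // the intended two fields
```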

Scenario 3: splitting a DataFrame column into multiple fields (only one input column, and the original column is not kept)

val testDF = sc.parallelize(List("a.b.c", "d.e.f", "g.h.i")).toDF("columnToSplit")
val result = testDF.withColumn("_tmp", split($"columnToSplit", "\\.")).select(
  $"_tmp".getItem(0).as("col1"),
  $"_tmp".getItem(1).as("col2"),
  $"_tmp".getItem(2).as("col3")
)  // select() already leaves _tmp out, so no extra drop("_tmp") is needed
scala> result.show(false)
+----+----+----+                                                                
|col1|col2|col3|
+----+----+----+
|a   |b   |c   |
|d   |e   |f   |
|g   |h   |i   |
+----+----+----+

Scenario 4: splitting a column while keeping the other columns (building the DataFrame from a case class)

case class Message(others: String, text: String)

val r1 = Message("foo1", "a.b.c")
val r2 = Message("foo2", "d.e.f")

val records = Seq(r1, r2)
val df = spark.createDataFrame(records)

val result = df
  .withColumn("col1", split(col("text"), "\\.").getItem(0))
  .withColumn("col2", split(col("text"), "\\.").getItem(1))
  .withColumn("col3", split(col("text"), "\\.").getItem(2))
scala> result.show(false)
+------+-----+----+----+----+
|others|text |col1|col2|col3|
+------+-----+----+----+----+
|foo1  |a.b.c|a   |b   |c   |
|foo2  |d.e.f|d   |e   |f   |
+------+-----+----+----+----+
posted @ 2019-03-21 11:02  liuge36