Notes on Pitfalls of Running XGBoost on Spark

Pitfalls:

  1. XGBoost on Spark is very sensitive to missing values in the Spark DataFrame: if the DataFrame contains any nulls (null, "NaN"), XGBoost throws an error.
  2. In Spark 2.4.4, after VectorAssembler transforms the DataFrame, rows containing many zeros are stored as sparse vectors by default, which also makes XGBoost error out.
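To see why the sparse encoding in pitfall 2 matters, note that a zero-heavy row such as (0.0, 0.0, 3.5) is stored as (index, value) pairs rather than a full array. A minimal plain-Scala sketch of that storage scheme and its dense expansion (no Spark required; `SparseVec` is a hypothetical stand-in for Spark's `SparseVector`):

```scala
// Illustration of sparse vs. dense vector storage (plain Scala, no Spark).
// A sparse vector keeps only the (index, value) pairs of non-zero entries;
// the dense expansion writes every slot out explicitly, zeros included.
case class SparseVec(size: Int, indices: Array[Int], values: Array[Double]) {
  def toDense: Array[Double] = {
    val arr = Array.fill(size)(0.0)
    indices.zip(values).foreach { case (i, v) => arr(i) = v }
    arr
  }
}

object SparseDemo {
  def main(args: Array[String]): Unit = {
    // The row (0.0, 0.0, 3.5) as a sparse encoding: size 3, one non-zero at index 2
    val sv = SparseVec(3, Array(2), Array(3.5))
    println(sv.toDense.mkString(","))  // prints 0.0,0.0,3.5
  }
}
```

Spark's `SparseVector.toDense` performs the same expansion, which is the basis of the workaround shown later.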

Sample code:

val schema = new StructType(Array(
	StructField("BIZ_DATE", StringType, true),
	StructField("SKU", StringType, true),
	StructField("WINDGUST", DoubleType, true),
	StructField("WINDSPEED", DoubleType, true)))


val predictDF = spark.read.schema(schema)
      .format("csv")
      .option("header", "true")
      .option("delimiter", ",")
      .load("/mnt/parquet/data.csv")
import scala.collection.mutable.ArrayBuffer

val featureColsBuffer = ArrayBuffer[String]()
// Collect the numeric feature columns (everything except the string ID/date columns)
for (colName <- predictDF.columns if colName != "BIZ_DATE" && colName != "SKU") {
  featureColsBuffer += colName
}
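From here the example would assemble the collected columns into a feature vector. A sketch of how to finish the pipeline while sidestepping both pitfalls (the helper name `prepareForXgboost` is hypothetical; the null fill handles pitfall 1 and the `toDense` UDF handles pitfall 2):

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, udf}

// Hypothetical helper: make a DataFrame safe for XGBoost.
def prepareForXgboost(df: DataFrame, featureCols: Array[String]): DataFrame = {
  // Pitfall 1: XGBoost errors on null/NaN, so fill missing values first
  val clean = df.na.fill(0.0, featureCols)

  // Combine the numeric columns into a single "features" vector column
  val assembled = new VectorAssembler()
    .setInputCols(featureCols)
    .setOutputCol("features")
    .transform(clean)

  // Pitfall 2: zero-heavy rows come out as SparseVectors; force them dense
  val toDense = udf((v: Vector) => v.toDense)
  assembled.withColumn("features", toDense(col("features")))
}
```

Filling missing values with 0.0 is only one choice; depending on the model, dropping the rows (`df.na.drop`) or imputing a column mean may be more appropriate.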
posted @ 2019-12-30 18:01 爱知菜