Pitfalls of running XGBoost on Spark (and how I got out of them)
Pitfalls:
- XGBoost4J-Spark is very sensitive to missing values in the input DataFrame. If the DataFrame contains any nulls or NaNs, XGBoost throws an error during training or prediction.
- In Spark 2.4.4, VectorAssembler converts rows containing many zeros into sparse vectors by default, which also makes XGBoost fail.
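For the first pitfall, a minimal sketch of a cleanup step using Spark's built-in `na` functions (the toy column names mirror the example below; whether you drop or fill depends on your data):

```scala
import org.apache.spark.sql.SparkSession

object NullCleanup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("xgboost-null-cleanup")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Toy DataFrame with a null and a NaN in the numeric columns.
    val df = Seq(
      (Some(1.0), Double.NaN),
      (None: Option[Double], 2.0),
      (Some(3.0), 4.0)
    ).toDF("WINDGUST", "WINDSPEED")

    // Option 1: drop every row that contains a null or NaN.
    val dropped = df.na.drop()

    // Option 2: replace nulls and NaNs in numeric columns with a sentinel.
    val filled = df.na.fill(0.0)

    dropped.show()
    filled.show()
    spark.stop()
  }
}
```

`na.drop()` removes rows containing any null or NaN, while `na.fill(0.0)` keeps all rows and substitutes a sentinel; pick whichever matches how XGBoost should treat the missing values.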
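For the second pitfall, one workaround (a sketch, not the only option) is to force every assembled vector into its dense representation with a small UDF before handing the DataFrame to XGBoost:

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object DenseFeatures {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dense-features")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A row full of zeros: VectorAssembler will emit a sparse vector for it.
    val df = Seq((0.0, 0.0, 0.0), (1.0, 2.0, 3.0)).toDF("a", "b", "c")

    val assembled = new VectorAssembler()
      .setInputCols(Array("a", "b", "c"))
      .setOutputCol("features")
      .transform(df)

    // Convert every vector to dense so XGBoost sees a uniform format.
    val toDense = udf { v: Vector => v.toDense }
    val denseDF = assembled.withColumn("features", toDense($"features"))

    denseDF.show(truncate = false)
    spark.stop()
  }
}
```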
Example code:
import org.apache.spark.sql.types._

val schema = new StructType(Array(
  StructField("BIZ_DATE", StringType, true),
  StructField("SKU", StringType, true),
  StructField("WINDGUST", DoubleType, true),
  StructField("WINDSPEED", DoubleType, true)))

val predictDF = spark.read.schema(schema)
  .format("csv")
  .option("header", "true")
  .option("delimiter", ",")
  .load("/mnt/parquet/data.csv")

import scala.collection.mutable.ArrayBuffer

// Collect the numeric feature columns; BIZ_DATE and SKU are identifiers,
// not features. (The loop body was truncated in the original; this
// completion is one plausible reading.)
val featureColsBuffer = ArrayBuffer[String]()
for (col <- predictDF.columns if col != "BIZ_DATE" && col != "SKU") {
  featureColsBuffer += col
}