A caveat when using the Parquet storage format in Spark 1.2.0:
SQL statement:
select * from order_created_dynamic_partition_parquet;
Result in spark-sql:
2014-05 [B@4621484a [B@3311163e
2014-05 [B@70ab973a [B@11559aa0
2014-05 [B@b1a8744  [B@7aa6870d
2014-05 [B@765e2d02 [B@20dd1b04
2014-05 [B@1418b477 [B@61effaef
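Those `[B@...` tokens are just the JVM's default `Object.toString()` for a `byte[]` (`[B` is the JVM class name for a byte array, followed by a hex identity hash): Spark 1.2.0 reads Parquet BINARY columns back as raw byte arrays rather than strings. A minimal standalone Java sketch of the symptom (the literal `"2014-05"` is an assumed sample value, not data read from the table):

```java
import java.nio.charset.StandardCharsets;

public class ByteArrayToStringDemo {
    public static void main(String[] args) {
        // Assumed sample value; in the query above the bytes come from Parquet BINARY columns.
        byte[] bytes = "2014-05".getBytes(StandardCharsets.UTF_8);

        // Default Object.toString() on an array: "[B" (JVM name for byte[]) + "@" + identity hash.
        System.out.println(bytes.toString());

        // Decoding the bytes recovers the text, roughly what the binaryAsString flag
        // asks Spark SQL to do when reading the Parquet schema.
        System.out.println(new String(bytes, StandardCharsets.UTF_8)); // 2014-05
    }
}
```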
Result in beeline:
It fails with:
Error: java.lang.ClassCastException: [B cannot be cast to java.lang.String (state=,code=0)
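The beeline failure is the same type mismatch surfacing as a cast: somewhere in result serialization the `byte[]` column value is cast to `String`, which the JVM rejects at runtime. A minimal reproduction of that exception (not beeline's actual code path, just the same illegal cast):

```java
import java.nio.charset.StandardCharsets;

public class ClassCastDemo {
    public static void main(String[] args) {
        // A Parquet BINARY value arrives in the JVM as byte[], not String.
        Object columnValue = "2014-05".getBytes(StandardCharsets.UTF_8);
        try {
            // The same cast the beeline error reports: [B cannot be cast to java.lang.String
            String s = (String) columnValue;
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println(e.getMessage());
        }
    }
}
```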
Result in Hive:
ordernumber     event_time                    event_month
10703007267488  2014-05-01 06:01:12.334+01    2014-05
10101043505096  2014-05-01 07:28:12.342+01    2014-05
10103043509747  2014-05-01 07:50:12.33+01     2014-05
10103043501575  2014-05-01 09:27:12.33+01     2014-05
10104043514061  2014-05-01 09:03:12.324+01    2014-05
Setting
set spark.sql.parquet.binaryAsString=true
fixes the problem in both spark-sql and beeline; in Spark 1.2.0 this parameter defaults to false.
Explanation (from the Spark SQL documentation): Some other Parquet-producing systems, in particular Impala and older versions of Spark SQL, do not differentiate between binary data and strings when writing out the Parquet schema. This flag tells Spark SQL to interpret binary data as a string to provide compatibility with these systems.