Inserting data into an LZO-format Hive table with PySpark


1. Before running the insert, the following parameters must be set:

# allow dynamic partitions without requiring a static partition key
spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")
# enable compressed output and pick the LZO codec
spark.sql("set mapred.output.compress=true")
spark.sql("set hive.exec.compress.output=true")
spark.sql("set mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec")

insert_sql = '''
insert overwrite table test partition(dt, hour) select * from tmp_view
'''
spark.sql(insert_sql)
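Since these four `set` statements must all run before the insert, it can be convenient to collect them in one place and apply them in a loop. The sketch below is illustrative only; `apply_lzo_settings` and `LZO_SETTINGS` are hypothetical names, and `spark` is assumed to be an active SparkSession:

```python
# Hypothetical helper: keep the LZO-related session settings together
# and apply them all before issuing the insert statement.
LZO_SETTINGS = [
    "set hive.exec.dynamic.partition.mode=nonstrict",
    "set mapred.output.compress=true",
    "set hive.exec.compress.output=true",
    "set mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec",
]

def apply_lzo_settings(spark):
    """Run each settings statement on the given SparkSession."""
    for stmt in LZO_SETTINGS:
        spark.sql(stmt)
```

Usage would then be `apply_lzo_settings(spark)` followed by `spark.sql(insert_sql)`, which keeps the insert logic free of configuration noise.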

Note: in PySpark you have to set these parameters yourself, unlike calling Hive directly from Python, where (with the in-house HiveTask wrapper) a single call handles the LZO output path:

from HiveTask import *
ht = HiveTask()
# the wrapper enables LZO output when lzo_path="true"
ht.exec_sql("adm", sql, lzo_path="true")
posted @ 2019-03-05 18:38 by 剑未佩妥出门已是江湖