Uncle's Experience Sharing (65): Spark cannot read Hive tables
Environment: Spark 2.4.3

Steps for reading a Hive table from Spark:
1) hive-site.xml
Place hive-site.xml under $SPARK_HOME/conf.
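A minimal hive-site.xml usually only needs to point at the metastore; the thrift host below is a placeholder, not a value from the original setup (the warehouse path matches the one that appears in the log later on):

```xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- placeholder: replace with the actual metastore host -->
    <value>thrift://metastore-host:9083</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>
```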
2) enableHiveSupport
SparkSession.builder.enableHiveSupport().getOrCreate()
3) Test code
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf().setAppName(getName)
val sc = new SparkContext(sparkConf)
val spark = SparkSession.builder.config(sparkConf).enableHiveSupport().getOrCreate()
spark.sql("show databases").rdd.foreach(println)
After submitting the job with $SPARK_HOME/bin/spark-submit, the Hive databases could not be read. The relevant log:
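For reference, the submit command looked roughly like this (master, deploy mode, class, and jar names are placeholders, not taken from the original post):

```shell
$SPARK_HOME/bin/spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.HiveReadTest \
  hive-read-test.jar
```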
19/05/31 13:11:31 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect.
19/05/31 13:11:31 INFO SharedState: loading hive config file: file:/export/spark-2.4.3-bin-hadoop2.6/conf/hive-site.xml
19/05/31 13:11:31 INFO SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/user/hive/warehouse').
19/05/31 13:11:31 INFO SharedState: Warehouse path is '/user/hive/warehouse'.
19/05/31 13:11:31 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
So hive-site.xml was indeed being picked up.
Testing further, both $SPARK_HOME/bin/spark-sql and $SPARK_HOME/bin/spark-shell could read the Hive databases just fine. Strange, isn't it?
$SPARK_HOME/bin/spark-shell launches the class org.apache.spark.repl.Main:
"${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.repl.Main --name "Spark shell" "$@"
Following the org.apache.spark.repl.Main code:
...
val builder = SparkSession.builder.config(conf)
if (conf.get(CATALOG_IMPLEMENTATION.key, "hive").toLowerCase(Locale.ROOT) == "hive") {
  if (SparkSession.hiveClassesArePresent) {
    // In the case that the property is not set at all, builder's config
    // does not have this value set to 'hive' yet. The original default
    // behavior is that when there are hive classes, we use hive catalog.
    sparkSession = builder.enableHiveSupport().getOrCreate()
    logInfo("Created Spark session with Hive support")
  } else {
    // Need to change it back to 'in-memory' if no hive classes are found
    // in the case that the property is set to hive in spark-defaults.conf
    builder.config(CATALOG_IMPLEMENTATION.key, "in-memory")
    sparkSession = builder.getOrCreate()
    logInfo("Created Spark session")
  }
} else {
  // In the case that the property is set but not to 'hive', the internal
  // default is 'in-memory'. So the sparkSession will use in-memory catalog.
  sparkSession = builder.getOrCreate()
  logInfo("Created Spark session")
}
sparkContext = sparkSession.sparkContext
sparkSession
...
This differs slightly from the test code. The key is the second-to-last line: here the SparkSession is created first, and the SparkContext is then obtained from the SparkSession. Also note the earlier WARN-level log:
19/05/31 13:11:31 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect.
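This WARN is likely the crux (worth confirming against the Spark source): a getOrCreate-style singleton returns the instance that already exists and quietly drops configuration supplied afterwards, so a SparkContext created before enableHiveSupport() runs may never see the Hive catalog setting. A plain-Scala analogue of that behavior (illustrative only, no Spark involved; all names here are made up):

```scala
// Illustrative stand-in for SparkContext.getOrCreate semantics:
// the first call wins, and later configuration "may not take effect".
object ContextHolder {
  private var instance: Option[Map[String, String]] = None

  def getOrCreate(conf: Map[String, String]): Map[String, String] = synchronized {
    instance.getOrElse {
      // No existing "context": create one from this conf and remember it.
      instance = Some(conf)
      conf
    }
  }
}

// The "context" is first created without any Hive setting...
val first = ContextHolder.getOrCreate(Map("spark.app.name" -> "test"))
// ...so a later call that asks for the Hive catalog gets the old instance back.
val second = ContextHolder.getOrCreate(Map("spark.sql.catalogImplementation" -> "hive"))
println(second.contains("spark.sql.catalogImplementation")) // false: the new setting was dropped
```

Creating the SparkSession first, as the repl code does, sidesteps this ordering problem entirely.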
Modified test code:
val sparkConf = new SparkConf().setAppName(getName)
//val sc = new SparkContext(sparkConf)
val spark = SparkSession.builder.config(sparkConf).enableHiveSupport().getOrCreate()
val sc = spark.sparkContext
spark.sql("show databases").rdd.foreach(println)
This time it works. The detailed cause will be examined when there's time; to be continued.
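With the fix in place, a quick sanity check from inside the application (a sketch only; it assumes the `spark` session from the code above and must run in a real Spark environment):

```scala
// Should report "hive" when Hive support actually took effect,
// and "in-memory" when the session silently fell back.
println(spark.conf.get("spark.sql.catalogImplementation"))
```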
---------------------------------------------------------------- That's all. This is 大魔王先生's divider :) ----------------------------------------------------------------
- Since 大魔王先生's abilities are limited, this article may contain mistakes; corrections and additions are welcome!
- Thanks for reading. If the article helped you, please give 大魔王先生 a like. Thank you!