"HikariCP" error when connecting Spark to Hive

Both local IDE debugging and the spark-shell report the following error:

Caused by: org.datanucleus.exceptions.NucleusUserException: The connection pool plugin of type "HikariCP" was not found in the CLASSPATH!


Fix: in hive-site.xml, change the datanucleus.connectionPoolingType value from HikariCP to dbcp (DataNucleus cannot find the HikariCP plugin on Spark's classpath, while the DBCP pool is available):

<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>dbcp</value>
  <description>
    Expects one of [bonecp, dbcp, hikaricp, none].
    Specify connection pool library for datanucleus
  </description>
</property>
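
If editing hive-site.xml is inconvenient (e.g. for a one-off local run), the same property can also be pushed through Spark's Hadoop configuration. A minimal sketch, assuming the spark.hadoop. prefix forwards the property into the HiveConf used by the embedded metastore client (verify this on your deployment):

import org.apache.spark.sql.SparkSession

// Sketch: set the DataNucleus pool type programmatically instead of editing
// hive-site.xml. Assumption: the "spark.hadoop." prefix forwards this key
// to the embedded Hive metastore client.
val spark = SparkSession.builder()
  .master("local")
  .appName("pool-workaround")
  .config("spark.hadoop.datanucleus.connectionPoolingType", "dbcp")
  .enableHiveSupport()
  .getOrCreate()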


In addition:

(1) Copy apache-hive-3.1.1-bin/lib/mysql-connector-java-5.1.47.jar (the JDBC driver for the MySQL-backed metastore) into spark-2.4.0-bin-hadoop2.7/jars;

(2) Copy hive-site.xml, hdfs-site.xml, and core-site.xml into spark-2.4.0-bin-hadoop2.7/conf;

(3) For local IDE development and debugging, copy hive-site.xml, hdfs-site.xml, and core-site.xml into the project's resources directory so they end up on the classpath (a quick check follows below).
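
To confirm step (3) worked, you can ask the classloader for the file; this is a minimal sketch using only the standard Java resource lookup:

// Prints the resolved URL of hive-site.xml, or null if it is not on the classpath.
println(getClass.getResource("/hive-site.xml"))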


Verifying in spark-shell:

scala> import org.apache.spark.sql.hive.HiveContext
scala> val hiveContext = new HiveContext(sc)
scala> hiveContext.sql("show databases").show()
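
Note that HiveContext has been deprecated since Spark 2.0; in a Spark 2.4 spark-shell the pre-built SparkSession (bound to spark) already carries Hive support, so the same check reduces to:

scala> spark.sql("show databases").show()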


Sample Spark SQL code:

import org.apache.spark.sql.SparkSession

object SparkHiveDemo {

  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder()
      .master("local")
      // .master("spark://bogon:7077")
      .appName("kafka")
      .enableHiveSupport()
      .getOrCreate()
    // Spark's implicit conversions ($-column syntax, Dataset encoders)
    import spark.implicits._
    // Spark SQL built-in functions
    import org.apache.spark.sql.functions._

    spark.sql("show databases").show()

    // SparkSession.stop() also stops the underlying SparkContext,
    // so a separate sc.stop() is unnecessary.
    spark.stop()
  }
}
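
The two imports are not exercised by show databases; they matter once you query real tables. A hypothetical follow-up inside main (the table testdb.user_log and the column user_id are made-up names, not from this post):

    // Hypothetical: assumes a Hive table testdb.user_log with a user_id column.
    val logs = spark.sql("select * from testdb.user_log")
    logs.groupBy($"user_id")          // $"..." comes from spark.implicits._
      .agg(count("*").as("pv"))       // count / as come from sql.functions._
      .show()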


posted @ 2019-10-16 15:31  Jenkin.K