Hive Notes - Deploying the Hive Environment
1. Configure Hive 1.2.0 (prerequisite: Hadoop 2.7.2 must already be configured; see the earlier document)
#Download the binary package from the official site, unpack it under /usr/app, then configure /etc/profile:
export HIVE_HOME=/usr/app/hive
export PATH=$PATH:$HIVE_HOME/bin
#Configure hive/conf
#Add to hive-env.sh:
export HADOOP_HEAPSIZE=1024
export HADOOP_HOME=/usr/app/hadoop
export HIVE_CONF_DIR=/usr/app/hive/conf
export HIVE_AUX_JARS_PATH=/usr/app/hive/lib
#source /etc/profile to apply the changes immediately
#Create the HDFS directories
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod o+rwx /user/hive/warehouse
hdfs dfs -chmod o+rwx /user/hive/tmp
hdfs dfs -chmod o+rwx /user/hive/log
#Configure the local log directory
mkdir -p /usr/app/hive/logs
#In conf/hive-log4j.properties set:
hive.log.dir=/usr/app/hive/logs
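A quick sanity check after the steps above (a sketch; it assumes the paths and HDFS directories created here):
#which hive #should resolve to /usr/app/hive/bin/hive
#hive --version #should report 1.2.0
#hdfs dfs -ls /user/hive #should list warehouse, tmp and log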
2. Configure MySQL
#Install MySQL
yum -y install mysql-devel mysql-server
#Adjust /etc/my.cnf to suit your environment; if the file cannot be found, search for it with: locate my.cnf
#cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
#Start the service: service mysqld start (likewise restart / stop)
#Grant MySQL privileges
#mysql>grant all privileges on *.* to root@'%' identified by '1' with grant option;
#mysql>grant all privileges on *.* to root@'localhost' identified by '1' with grant option;
#mysql>flush privileges;
#mysql>exit
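To confirm MySQL is running and the root grants took effect (a sketch; '1' is the password set above):
#service mysqld status
#mysql -uroot -p1 -e "select user,host from mysql.user;"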
3. Create the hive database in MySQL
#mysql -uroot -p
#Enter the password
#mysql>create database hive;
#mysql>alter database hive character set latin1;
#mysql>grant all privileges on hive.* to hive@'%' identified by '1';
#mysql>grant all privileges on *.* to hive@'localhost' identified by '1';
#mysql>flush privileges;
#exit
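A quick check that the hive account works (a sketch, assuming the grants above):
#mysql -uhive -p1 -e "show databases;" #the hive database should appear in the list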
4. Build the Hive HWI war (web interface)
#Download and unpack the Hive source, cd into hive/hwi/web, then run:
jar cvf hive-hwi-1.2.0.war ./*
#Copy the war into hive/lib
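Before copying the war, its contents can be verified (a sketch):
#jar tf hive-hwi-1.2.0.war #should list the JSP and resource files from hive/hwi/web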
5. Edit the configuration
#vim hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.66.66:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>1</value>
  </property>
  <property>
    <name>hive.hwi.listen.host</name>
    <value>192.168.66.66</value>
  </property>
  <property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
  </property>
  <property>
    <name>hive.hwi.war.file</name>
    <value>lib/hive-hwi-1.2.0.war</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>192.168.66.66</value>
  </property>
  <property>
    <name>hive.server2.webui.host</name>
    <value>192.168.66.66</value>
  </property>
  <property>
    <name>hive.server2.webui.port</name>
    <value>10002</value>
  </property>
  <property>
    <name>hive.server2.long.polling.timeout</name>
    <value>5000</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>http://192.168.66.66:9001</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>
</configuration>
Add the following to Hadoop's core-site.xml:
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
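The proxyuser entries let HiveServer2 (running as root) impersonate the connecting user when hive.server2.enable.doAs is true. On a running cluster they can be reloaded without a full restart (a sketch):
#hdfs dfsadmin -refreshSuperUserGroupsConfiguration
#yarn rmadmin -refreshSuperUserGroupsConfiguration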
6. Initialize the metastore database (this document uses MySQL)
#$HIVE_HOME/bin/schematool -initSchema -dbType mysql
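To confirm the schema was created (a sketch, assuming the MySQL setup above):
#$HIVE_HOME/bin/schematool -info -dbType mysql #prints the metastore connection and schema version
#mysql -uhive -p1 hive -e "show tables;" #metastore tables such as TBLS and DBS should be listed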
7. Copy jars into hive/lib as your setup requires (the packages can be found and downloaded online)
#Copy hadoop/lib/hadoop-lzo-0.4.21-SNAPSHOT.jar to hive/lib
#Copy mysql-connector-java-5.1.34.jar to hive/lib
#Copy jasper-compiler-5.5.23.jar, jasper-runtime-5.5.23.jar and commons-el-5.5.23.jar to hive/lib
#Copy ant/lib/ant-1.9.4.jar and ant-launcher.jar to hive/lib (if ant is already installed on the system, adjust the ant jars accordingly)
#If startup complains about duplicate logging jars, delete the duplicates (see the check after this step)
#Edit hive/bin/hive to match your setup (from Spark 2.x on, the single assembly jar was split into separate jars):
sparkAssemblyPath=`ls ${SPARK_HOME}/lib/spark-assembly-*.jar`
change it to:
sparkAssemblyPath=`ls ${SPARK_HOME}/jars/*.jar`
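Two quick checks for this step (a sketch; the jar name patterns are assumptions):
#ls /usr/app/hive/lib | grep -i -E "slf4j|log4j" #compare against hadoop's lib directories to spot duplicate logging jars
#ls /usr/app/hadoop/share/hadoop/common/lib | grep -i -E "slf4j|log4j"
#grep -n sparkAssemblyPath /usr/app/hive/bin/hive #confirm the edit above took effect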
8. Start Hive
#Start Hadoop first
#hive --service metastore
#hive --service hiveserver2 #visit http://192.168.66.66:10002 for the HiveServer2 web UI ---only supported in Hive 2.0+
#netstat -nl |grep 10000
#hive #enter the Hive CLI
#hive --service hwi #open the Hive web page at http://192.168.66.66:9999/hwi/
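In practice the metastore and HiveServer2 are usually left running in the background (a sketch; the log file names are assumptions):
#nohup hive --service metastore > /usr/app/hive/logs/metastore.log 2>&1 &
#nohup hive --service hiveserver2 > /usr/app/hive/logs/hiveserver2.log 2>&1 &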
A common error when starting the hive CLI:
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
Cause:
The /hive/lib directory contains a newer jline-x-x.jar, while /hadoop-x-x/share/hadoop/yarn/lib contains an older jline-x-x.jar
Fix:
Copy the newer jar from Hive into Hadoop (and remove the old jline jar from that directory)
Run: cp /usr/app/hive/lib/jline-2.12.jar /usr/app/hadoop/share/hadoop/yarn/lib/
Then restart Hive
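As a final end-to-end check, connect to HiveServer2 with beeline (a sketch; it assumes the host/port from hive-site.xml and the proxyuser settings above):
#beeline -u jdbc:hive2://192.168.66.66:10000 -n root
#0: jdbc:hive2://192.168.66.66:10000> show databases;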