Hive standalone installation (hands-on)

Hive usage and caveats: http://blog.csdn.net/stark_summer/article/details/44222089

Connection command: beeline -n root -u jdbc:hive2://10.149.11.215:10000

Fix for garbled characters when pressing backspace: http://www.cnblogs.com/BlueBreeze/p/4232369.html
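Once HiveServer2 is running (see the service-start commands near the end of this post), the beeline command above can also run a one-off statement non-interactively; a minimal sketch using the host/port from the connection command (the query itself is just an example):

beeline -n root -u jdbc:hive2://10.149.11.215:10000 -e "show databases;"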

1. Install Hadoop

2. Download Hive

http://mirror.bit.edu.cn/apache/hive/hive-2.0.1/

Environment used in this walkthrough: Hadoop 2.6.2, JDK 1.7.0_80
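Before configuring anything, it helps to confirm the prerequisites and unpack the download. A minimal sketch, assuming the versions above and /usr/bigdata as the install root (the tarball name is taken from the mirror listing linked above):

# confirm Hadoop and the JDK are reachable from the shell
hadoop version
java -version

# download and unpack Hive 2.0.1 under /usr/bigdata
cd /usr/bigdata
wget http://mirror.bit.edu.cn/apache/hive/hive-2.0.1/apache-hive-2.0.1-bin.tar.gz
tar -xzf apache-hive-2.0.1-bin.tar.gz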

3. Configuration

Starting in standalone mode

Like Hadoop, Hive has three deployment modes: standalone, pseudo-distributed, and fully distributed. This post covers how to start it in standalone mode.

mv apache-hive-2.0.1-bin hive-2.0.1

 

Create hive-site.xml under the Hive conf directory ($HIVE_HOME/conf):

vi hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
     <property>
          <name>hive.metastore.warehouse.dir</name>
          <value>/usr/bigdata/hive-2.0.1/warehouse</value>
          <description>location of default database for the warehouse</description>
     </property>
     <property>
          <name>javax.jdo.option.ConnectionURL</name>
          <value>jdbc:derby:/usr/bigdata/hive-2.0.1/metastore_db;create=true</value>
          <description>JDBC connect string for a JDBC metastore</description>
     </property>
</configuration>

  

4. Environment variables

vi /etc/profile

 

export HIVE_HOME=/usr/bigdata/hive-2.0.1

export PATH=$PATH:$HIVE_HOME/bin
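After editing, reload the profile so the variables take effect in the current shell (the echo is just a quick check):

source /etc/profile
echo $HIVE_HOME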

 

5. Initialize the metastore database

schematool -initSchema -dbType derby

Lines like the following indicate the initialization succeeded:

Starting metastore schema initialization to 2.0.0
Initialization script hive-schema-2.0.0.derby.sql
Initialization script completed
schemaTool completed
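If initialization succeeded, the embedded Derby database should now exist at the path set in javax.jdo.option.ConnectionURL above (path assumed from that setting):

ls /usr/bigdata/hive-2.0.1/metastore_db   # the Derby data files created by schematool live here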

 

6. Start Hive

mkdir -p /usr/bigdata/hive-2.0.1/warehouse

chmod a+rwx /usr/bigdata/hive-2.0.1/warehouse

hive

 

If the hive> prompt appears, Hive has started successfully.
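A quick smoke test once the prompt is up; it can also be run non-interactively with hive -e (the table name here is just an example):

hive -e "show databases; create table if not exists test_t (id int, name string); show tables; drop table test_t;"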

 

 

7. Common errors

7.1 Running hive fails with the following exception:

Exception in thread "main" java.lang.RuntimeException: Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql)

Cause: the metastore database has not been initialized; see step 5 above.

7.2 Initializing the database with schematool fails with:

Initialization script hive-schema-2.0.0.derby.sql
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
*** schemaTool failed ***

Cause: the metastore directory already contains files. The fix is to clear the metastore directory (the /usr/bigdata/hive-2.0.1/metastore_db directory configured earlier) and re-run the initialization.
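A minimal fix sketch, assuming the Derby path configured earlier in this post:

rm -rf /usr/bigdata/hive-2.0.1/metastore_db
schematool -initSchema -dbType derby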


Basic Hive usage:

http://blog.csdn.net/f328310543/article/details/42682685


hive-env.sh

cp  hive-env.sh.template  hive-env.sh

Edit hive-env.sh under $HIVE_HOME/conf and add the following lines:

export HADOOP_HOME=/usr/local/hadoop-2.6.0
export HIVE_HOME=/usr/bigdata/hive-2.0.1
export JAVA_HOME=/usr/local/jdk1.7.0_80

  

Copy the MySQL JDBC driver jar into $HIVE_HOME/lib:
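For example (the connector version below is only an illustration; use a jar that matches your MySQL server):

cp mysql-connector-java-5.1.38-bin.jar /usr/bigdata/hive-2.0.1/lib/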

 

Storing Hive metadata in MySQL: hive-site.xml configuration



     <property>
          <name>hive.metastore.warehouse.dir</name>
          <value>/usr/bigdata/hive-2.0.1/warehouse</value>
          <description>location of default database for the warehouse</description>
     </property>

     <property>
          <name>javax.jdo.option.ConnectionURL</name>
          <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
          <description>JDBC connect string for a JDBC metastore</description>
     </property>

     <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
     </property>

     <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value><!-- set this to your MySQL user name -->
     </property>

     <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value><!-- set this to your MySQL password -->
     </property>
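The settings above assume a MySQL account root/root with full rights on the hive database. If you prefer a dedicated account, a minimal grant sketch looks like this (user name and password are examples; put whatever you create into ConnectionUserName/ConnectionPassword above):

mysql -u root -p -e "GRANT ALL PRIVILEGES ON hive.* TO 'hiveuser'@'localhost' IDENTIFIED BY 'hivepass'; FLUSH PRIVILEGES;"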


Modify the following settings:
    <property>
        <name>datanucleus.readOnlyDatastore</name>
        <value>false</value>
    </property>
    <property> 
        <name>datanucleus.fixedDatastore</name>
        <value>false</value> 
    </property>
 
Add the following:
    <property> 
        <name>datanucleus.autoCreateSchema</name> 
        <value>true</value> 
    </property>
    <property>
        <name>datanucleus.autoCreateTables</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateColumns</name>
        <value>true</value>
    </property>
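With MySQL configured as the metastore, the schema is initialized the same way as before, just with a different dbType (assumes MySQL is running and the driver jar is already in hive/lib):

schematool -initSchema -dbType mysql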
    




Start the Hive services:

hive --service metastore &

hive --service hiveserver &

Newer Hive versions ship HiveServer2 instead, so use:

hive --service hiveserver2 &
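To confirm the services came up, a couple of quick checks (10000 is HiveServer2's default port; the commands assume a Linux host):

jps                          # the metastore and HiveServer2 appear as RunJar processes
netstat -nltp | grep 10000   # HiveServer2 listening on its default port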

 

 

Migrating Hive metastore data from 1.0 to 2.0

Hive ships with upgrade scripts for this:

hive/scripts/metastore/upgrade
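The upgrade SQL scripts in that directory can be applied by hand, or driven by schematool; a minimal sketch (the dbType is an example, adjust it to your metastore backend):

schematool -dbType mysql -upgradeSchema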

 

 
