Hadoop HIVE
A data warehouse tool: a data warehousing framework built on top of Hadoop that maps raw structured data in Hadoop into Hive tables. (Its main use case is ad-hoc, i.e. interactive, querying.)
Supports HQL, a language almost identical to SQL. Apart from updates, indexes, and transactions, it supports nearly all other SQL features.
Can be viewed as a translator from SQL to MapReduce jobs.
Provides shell, JDBC/ODBC, Thrift, web, and other interfaces.
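For example, the JDBC/ODBC interface is exposed through HiveServer2 and can be exercised with the bundled beeline client. A minimal sketch, assuming HiveServer2 runs on its default port 10000 on localhost:

# start HiveServer2 in the background
hive --service hiveserver2 &
# connect over JDBC with beeline; -n sets the user name to run as
beeline -u jdbc:hive2://localhost:10000 -n hdp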
HIVE Components and Architecture
User interfaces: shell, Thrift, web
Thrift server
Metastore database (Derby is Hive's embedded DB, used for embedded mode; in practice MySQL is normally used. It is called metadata because Hive itself does not store data: it relies entirely on HDFS and MapReduce. Hive maps structured data files to database tables, but a Hive table is purely logical, and the definitions of these logical tables are the metadata.)
HiveQL (compiler, optimizer, executor)
Hadoop
Where does HIVE store its data?
Under the warehouse directory on HDFS; each table corresponds to one subdirectory.
Each bucket of a table corresponds to one reducer (and one output file); see the sketch after this list.
The local /tmp directory holds logs and execution plans.
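Both points can be checked directly; a sketch assuming the default warehouse location and a made-up table name `logs`:

# one subdirectory per table under the warehouse root on HDFS
hadoop fs -ls /user/hive/warehouse
# a table clustered into 4 buckets: each bucket is filled by one reducer
# and lands as one file in the table's directory
hive -e "create table logs(id int, msg string) clustered by (id) into 4 buckets;"
# CLI logs go to the local tmp directory, e.g. /tmp/<user>/hive.log
ls /tmp/$USER/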
HIVE Installation:
Embedded mode: metadata is kept in the embedded Derby database, and only one session can connect (the Hive service and the metastore service run in the same process, with the Derby service running in that process as well).
Local standalone mode: the Hive service and the metastore service run in the same process; MySQL runs as a separate process, either on the same machine or on a remote one.
In this mode you only need to point ConnectionURL in hive-site.xml at MySQL and configure the driver class name and the database user name and password.
Remote mode: the Hive service and the metastore run in different processes, possibly on different machines.
This mode requires setting hive.metastore.local to false and hive.metastore.uris to the metastore server URI(s); multiple URIs are separated by commas. A metastore server URI has the form thrift://host:port, for example:
<property>
<name>hive.metastore.uris</name>
<value>thrift://127.0.0.1:9083</value>
</property>
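In remote mode the metastore service itself must be started as a separate process on the metastore host; a minimal sketch (9083 is the default metastore port):

hive --service metastore &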
Hive 2.1.1:
(1) Embedded mode
cp apache-hive-2.1.1-bin.tar.gz /home/hdp/
cd /home/hdp/
tar -zxvf apache-hive-2.1.1-bin.tar.gz
As root, edit /etc/profile (the unpacked directory is assumed to have been renamed to /home/hdp/hive211):
export HIVE_HOME=/home/hdp/hive211
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib:$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=/home/hdp/hive211/lib
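To apply the new variables to the current shell without logging in again:

source /etc/profile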
Switch to the hdp user and edit the following files under /home/hdp/hive211/conf.
hive-env.sh:
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
HADOOP_HOME=/home/hdp/hadoop
export HIVE_CONF_DIR=/home/hdp/hive211/conf
hive-site.xml:
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
<!-- Hive's data warehouse directory; defaults to /user/hive/warehouse on HDFS -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<!-- Hive's temporary data directory; defaults to /tmp/hive on HDFS -->
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hdp/hive211/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hdp/hive211/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hdp/hive211/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/home/hdp/hive211/iotmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <description>
    JDBC connect string for a JDBC metastore.
    To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
    For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
  </description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>APP</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>mine</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.exec.reducers.bytes.per.reducer</name>
  <value>256000000</value>
  <description>size per reducer. The default is 256Mb, i.e. if the input size is 1G, it will use 4 reducers.</description>
</property>
<property>
  <name>hive.exec.reducers.max</name>
  <value>1009</value>
  <description>
    max number of reducers will be used. If the one specified in the configuration parameter mapred.reduce.tasks is negative, Hive will use this one as the max number of reducers when automatically determine number of reducers.
  </description>
</property>
Create the directories required by the parameters above:
[hdp@hdp265m conf]$ hadoop fs -mkdir -p /user/hive/warehouse
[hdp@hdp265m conf]$ hadoop fs -mkdir -p /tmp/hive
[hdp@hdp265m conf]$ hadoop fs -chmod 777 /tmp/hive
[hdp@hdp265m conf]$ hadoop fs -chmod 777 /user/hive/warehouse
[hdp@hdp265m conf]$ hadoop fs -ls /
[hdp@hdp265m hive211]$ pwd
/home/hdp/hive211
[hdp@hdp265m hive211]$ mkdir iotmp
[hdp@hdp265m hive211]$ chmod 777 iotmp
Replace every occurrence of ${system:java.io.tmpdir} in hive-site.xml with /home/hdp/hive211/iotmp.
To do a global replace in vi: press Esc, then Shift+:, paste the following substitution command, and press Enter:
%s#${system:java.io.tmpdir}#/home/hdp/hive211/iotmp#g
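The same substitution can be done non-interactively with sed; a sketch (the $ and dots are escaped so the pattern is matched literally):

sed -i 's#\${system:java\.io\.tmpdir}#/home/hdp/hive211/iotmp#g' hive-site.xml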
Run hive:
./bin/hive
hive <parameters> --service serviceName <service parameters> starts a specific service:
[hadoop@hadoop~]$ hive --service help
Usage ./hive <parameters> --service serviceName <service parameters>
Service List: beeline cli help hiveserver2 hiveserver hwi jar lineage metastore metatool orcfiledump rcfilecat schemaTool version
Parameters parsed:
  --auxpath : Auxillary jars
  --config : Hive configuration directory
  --service : Starts specific service/component. cli is default
Parameters used:
  HADOOP_HOME or HADOOP_PREFIX : Hadoop install directory
  HIVE_OPT : Hive options
For help on a particular service:
  ./hive --service serviceName --help
Debug help:
  ./hive --debug --help
Error: the metastore schema has not been initialized yet.
Fix: ./bin/schematool -initSchema -dbType derby
Error: schema initialization fails because a metastore_db directory already exists.
Fix: delete the /home/hdp/hive211/metastore_db directory, then initialize again:
rm -rf metastore_db/
./bin/schematool -initSchema -dbType derby
Run it again:
./bin/hive
Logging initialized using configuration in jar:file:/home/hdp/hive211/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive>
HIVE Remote Mode
Install MySQL on hdp265dnsnfs.
Edit hive-site.xml:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.56.108:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>111111</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in metastore matches with one from Hive jars. Also disable automatic schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
</configuration>
Error: Caused by: MetaException(message:Version information not found in metastore.)
Fix: add to hive-site.xml:
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
Error: the MySQL JDBC driver jar is missing.
Fix: copy the driver jar (e.g. mysql-connector-java-5.1.42-bin.jar) into $HIVE_HOME/lib.
Error:
Exception in thread "main" java.lang.RuntimeException: Hive metastore database is not initialized.
Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed,
don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql)
Fix: initialize the metastore database:
bin/schematool -initSchema -dbType mysql
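To verify that initialization succeeded, schematool can print the schema version using the same connection settings from hive-site.xml:

bin/schematool -dbType mysql -info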
Start:
bin/hive
After startup, MySQL contains a new hive database.
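This can be checked from the MySQL side with the credentials configured above:

mysql -u root -p111111 -e "show databases;"
# the output should now include the hive metastore database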
Create a database:
create database db_hive_test;
Create a test table:
use db_hive_test;
create table student(id int,name string) row format delimited fields terminated by '\t';
Load data into the table.
Create a student.txt file and add rows (id and name separated by a Tab character):
vi student.txt
1001	zhangsan
1002	lisi
1003	wangwu
1004	zhaoli
load data local inpath '/home/hadoop/student.txt' into table db_hive_test.student;
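After the load, the file lands in the table's warehouse directory; a quick check, assuming the default warehouse path:

hadoop fs -ls /user/hive/warehouse/db_hive_test.db/student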
Query the table:
select * from student;
Show detailed table information:
desc formatted student;
View the created table through MySQL (see the sketch below):
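A sketch of querying the metastore tables directly; TBLS is the metastore table that records each Hive table's definition (column names per the standard metastore schema):

mysql -u root -p111111 -e "select TBL_ID, DB_ID, TBL_NAME, TBL_TYPE from hive.TBLS;"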
List Hive's functions:
show functions;
Show detailed information about a function:
desc function sum;
desc function extended sum;
Hive Getting Started
https://cwiki.apache.org/confluence/display/Hive/GettingStarted
Hive Language Manual
https://cwiki.apache.org/confluence/display/Hive/LanguageManual
Hive Tutorial
https://cwiki.apache.org/confluence/display/Hive/Tutorial