Hive: Environment Setup
The Hadoop and MySQL environments are assumed to be set up already; my blog has separate posts covering their installation.
I. Download Hive and extract it to the target directory (this walkthrough uses hive-1.1.0-cdh5.7.0, available at http://archive.cloudera.com/cdh5/cdh/5/)
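If you do not have the tarball yet, it can be fetched directly from the archive (the exact filename below is an assumption based on the version named above):
wget http://archive.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.7.0.tar.gz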
tar zxvf ./hive-1.1.0-cdh5.7.0.tar.gz -C ~/app/
II. Configure Hive (see the official guide: https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-InstallationandConfiguration)
1. Configure the environment variables
1) vi .bash_profile
export HIVE_HOME=/home/hadoop/app/hive-1.1.0-cdh5.7.0
export PATH=$HIVE_HOME/bin:$PATH
2) Reload the profile so the changes take effect:
source .bash_profile
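To confirm the variables took effect, a quick sanity check (assuming a bash login shell):
echo $HIVE_HOME
which hive
The second command should print /home/hadoop/app/hive-1.1.0-cdh5.7.0/bin/hive.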
2. hive-1.1.0-cdh5.7.0/conf/hive-env.sh
1) Create it from the bundled template:
cp hive-env.sh.template hive-env.sh
2) vi hive-env.sh and add HADOOP_HOME:
HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
3. hive-1.1.0-cdh5.7.0/conf/hive-site.xml (create this file yourself)
(The MySQL JDBC driver jar must be copied manually into hive-1.1.0-cdh5.7.0/lib.)
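For example (the jar version and source path below are placeholders; use whichever connector jar matches your MySQL install):
cp ~/software/mysql-connector-java-5.1.27-bin.jar ~/app/hive-1.1.0-cdh5.7.0/lib/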
<configuration>
  <!-- JDBC connection string -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- database name: rdb_hive -->
    <!-- createDatabaseIfNotExist=true: create the database automatically if it does not exist -->
    <value>jdbc:mysql://localhost:3306/rdb_hive?createDatabaseIfNotExist=true</value>
  </property>
  <!-- MySQL driver class -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- user name -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <!-- password -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>
III. Start Hive
hive-1.1.0-cdh5.7.0/bin/hive
Startup log:
[hadoop@hadoop01 bin]$ ./hive
which: no hbase in (/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/jdk1.8.0_131/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive>
On first start, Hive automatically creates the metastore database and tables in MySQL:
mysql> show tables;
+--------------------+
| Tables_in_rdb_hive |
+--------------------+
| CDS |
| DATABASE_PARAMS |
| DBS |
| FUNCS |
| FUNC_RU |
| GLOBAL_PRIVS |
| PARTITIONS |
| PART_COL_STATS |
| ROLES |
| SDS |
| SEQUENCE_TABLE |
| SERDES |
| SKEWED_STRING_LIST |
| TAB_COL_STATS |
| TBLS |
| VERSION |
+--------------------+
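These tables hold Hive's metadata. For example, to take a quick look at the databases Hive knows about (a sketch, assuming the metastore database is rdb_hive as configured above):
mysql> use rdb_hive;
mysql> select NAME, DB_LOCATION_URI from DBS;
After first startup this should show the single default database pointing at the warehouse directory on HDFS.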
IV. A quick Hive walkthrough
We will implement word count with Hive.
1. Create the table: create table hive_wordcount(context string);
hive> create table hive_wordcount(context string);
OK
Time taken: 1.203 seconds
hive> show tables;
OK
hive_wordcount
Time taken: 0.19 seconds, Fetched: 1 row(s)
2. Load the data: load data local inpath '/home/hadoop/data/hello.txt' into table hive_wordcount;
hive> load data local inpath '/home/hadoop/data/hello.txt' into table hive_wordcount;
Loading data to table default.hive_wordcount
Table default.hive_wordcount stats: [numFiles=1, totalSize=44]
OK
Time taken: 2.294 seconds
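The loaded file now sits under the table's directory on HDFS; you can verify it there (assuming the default hive.metastore.warehouse.dir of /user/hive/warehouse):
hadoop fs -ls /user/hive/warehouse/hive_wordcount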
3. Query the table to confirm the load succeeded: select * from hive_wordcount;
Contents of hello.txt:
Deer Bear River
Car Car River
Deer Car Bear
hive> select * from hive_wordcount;
OK
Deer Bear River
Car Car River
Deer Car Bear
Time taken: 0.588 seconds, Fetched: 3 row(s)
4. Implement word count in SQL: select word,count(1) from hive_wordcount lateral view explode(split(context,' ')) wc as word group by word;
hive> select word,count(1) from hive_wordcount lateral view explode(split(context,' ')) wc as word group by word;
Query ID = hadoop_20180904070404_b23d8c2e-161b-4e65-a2cc-206ce343d9e8
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1536010835653_0002, Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job -kill job_1536010835653_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-09-04 07:05:49,279 Stage-1 map = 0%, reduce = 0%
2018-09-04 07:06:01,893 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.95 sec
2018-09-04 07:06:10,804 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.44 sec
MapReduce Total cumulative CPU time: 3 seconds 440 msec
Ended Job = job_1536010835653_0002
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 3.44 sec  HDFS Read: 8797  HDFS Write: 28  SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 440 msec
OK
Bear 2
Car 3
Deer 2
River 2
Time taken: 37.441 seconds, Fetched: 4 row(s)
The result:
Bear 2
Car 3
Deer 2
River 2
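If the lateral view syntax is unfamiliar: explode(split(context,' ')) turns each line into one row per word, and LATERAL VIEW joins those rows back to the source table so they can be grouped. An equivalent way to write the query using a subquery instead, as a minimal sketch:
hive> select word, count(1) from (select explode(split(context,' ')) as word from hive_wordcount) t group by word;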
Note: I hit an error when creating a table:
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:
For direct MetaStore DB connections, we don't support retries at the client level.)
On its face, this means there is a problem connecting to MySQL. Searching online turns up two common fixes:
1. Swap the MySQL JDBC driver jar, e.g. for mysql-connector-java-5.1.34-bin.jar. I tried this, but it did not solve the problem for me.
2. Change the character set of the MetaStore database in MySQL to latin1. Tested; this solved it for me (see the statement below).
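One way to apply that second fix from the mysql client (a sketch, assuming the metastore database is rdb_hive as configured above; back the database up first):
mysql> alter database rdb_hive character set latin1;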