Summer Vacation Weekly Summary 3

This is the third week of the summer term. This week I mainly completed teacher jm's test: submitting web data to Hive on the virtual machine.

The requirement was to store the data with Hive or HBase; HBase looked like more trouble, so I chose Hive.

First, prepare the MySQL and Hive archives and upload them to the virtual machine with Xftp.

Unpack the MySQL installation packages and install them:

Before installing, check whether MySQL is already present:

rpm -qa | grep mysql
# If a bundled or other version of MySQL is present, uninstall it:
rpm -e --nodeps <mysql package name>
The bundled MariaDB also has to be removed, and the dependency needed by mysql-libs installed:

# check for mariadb
rpm -qa | grep mariadb
# uninstall it
rpm -e --nodeps mariadb-xxx   # use the package name returned by the query above

# install the dependency; enter y when prompted
yum install perl
(1) Install common

rpm -ivh mysql-community-common-5.7.33-1.el7.x86_64.rpm
(2) Install libs

rpm -ivh mysql-community-libs-5.7.33-1.el7.x86_64.rpm
(3) Install client

rpm -ivh mysql-community-client-5.7.33-1.el7.x86_64.rpm
(4) Install server

rpm -ivh mysql-community-server-5.7.33-1.el7.x86_64.rpm
(5) Start the MySQL service

[root@localhost mysql]# systemctl start mysqld
[root@localhost mysql]# systemctl status mysqld
[root@localhost mysql]# ps -ef | grep mysql
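Optionally (not part of the original steps), the service can also be set to start automatically after a VM reboot:

systemctl enable mysqld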
(6) Log in to MySQL

grep 'temporary password' /var/log/mysqld.log   # get the initial root password
mysql -u root -p
(7) Change the initial password

mysql> set global validate_password_policy=0;
mysql> set global validate_password_length=1;
mysql> alter user root@localhost identified by 'root123456';
(8) Create the database for Hive metadata storage

create database hive character set latin1;
(9) Create the database user hive and grant privileges

create user 'hive'@'%' identified by 'hive123456';
grant all privileges on hive.* to 'hive'@'%' with grant option;
flush privileges;
show grants;
(10) Add privileges for local (localhost) access and check the database user information

grant all privileges on hive.* to 'hive'@'localhost' identified by 'hive123456';
flush privileges;
use mysql;
select host,user,authentication_string from user;   # MySQL 5.7 stores passwords in authentication_string
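To double-check that the new account works, log back in as hive and look for the metadata database created in step (8) (a quick check using the passwords chosen above):

mysql -u hive -phive123456
mysql> show databases;   # the hive database should appear in the list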
Now configure Hive:

Configure the Hive environment variables (appended to /etc/profile):

export HIVE_HOME=/opt/hive-2.3.3
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HCAT_HOME=$HIVE_HOME/hcatalog
export PATH=$HIVE_HOME/bin:$PATH
Make the profile take effect:

source /etc/profile
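A quick way to confirm the variables are visible in the current shell:

echo $HIVE_HOME   # should print /opt/hive-2.3.3
which hive        # should point into /opt/hive-2.3.3/bin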
(4) Configure Hive

cd /opt/hive-2.3.3/conf
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
At line 48 of the file, set the Hadoop path:

HADOOP_HOME=/opt/hadoop-2.7.3
After saving hive-env.sh, open (create) hive-site.xml and paste the following configuration into it:

vi hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive123456</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.hbase.snapshot.restoredir</name>
    <value>/tmp</value>
  </property>

  <property>
    <name>system:java.io.tmpdir</name>
    <value>/opt/hive-2.3.3/iotmp</value>
  </property>

  <property>
    <name>system:user.name</name>
    <value>hive</value>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>${system:java.io.tmpdir}/${system:user.name}</value>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
  </property>

</configuration>
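One detail worth noting: the configuration above points system:java.io.tmpdir at /opt/hive-2.3.3/iotmp, and that directory is not created automatically, so make it before starting Hive:

mkdir -p /opt/hive-2.3.3/iotmp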

Create hive-log4j2.properties and hive-exec-log4j2.properties from their templates:

cp hive-log4j2.properties.template hive-log4j2.properties
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
(5) Create the Hive directory in HDFS

Make sure the Hadoop nodes are up so HDFS can be reached.

hadoop fs -mkdir -p /user/hive/warehouse
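The official Hive setup guide also makes the warehouse directory group-writable; doing the same here and listing the result does no harm:

hadoop fs -chmod g+w /user/hive/warehouse
hadoop fs -ls /user/hive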
(6) Copy the MySQL JDBC driver jar into $HIVE_HOME/lib

First download the MySQL driver jar and upload it to /opt/:

cd /opt
mv mysql-connector-java-5.1.49.jar hive-2.3.3/lib/
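A quick listing confirms the connector jar is where Hive will look for it:

ls /opt/hive-2.3.3/lib | grep mysql-connector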
(7) Initialize Hive

schematool -dbType mysql -initSchema
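If the initialization finishes without errors, a short smoke test (assuming HDFS is up and the configuration above is in place) is to open the Hive CLI and touch the metastore:

hive
hive> show databases;            -- "default" should be listed
hive> create database test_db;   -- throwaway database just to exercise the metastore
hive> drop database test_db;
hive> quit;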
That completes the Hive installation and configuration. Next, I used IDEA's Big Data Tools to connect to Hive and loaded the data returned from the front end into Hive with SQL, which works much like an ordinary database.
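As a rough sketch of that last step (the real table name and columns depend on the web form, so the ones below are made up), the upload is just ordinary HiveQL sent over the Big Data Tools connection:

-- hypothetical table for the records submitted from the web page
create table if not exists web_data (
    id          int,
    name        string,
    content     string,
    submit_time string
)
row format delimited fields terminated by ',';

-- each form submission becomes one insert, the same as with a normal database
insert into table web_data values (1, 'test', 'hello hive', '2024-07-20');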

The test passed. The short semester is over, and I can pack up and head home.
