Installing MySQL and Hive for Spark
First, check whether the Spark install's lib\spark-assembly-1.0.0-hadoop2.2.0.jar contains a hive folder under org\apache\spark\sql. If it does not, download a Hive-enabled build of Spark before continuing.
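Instead of opening the jar by hand, the check above can be scripted. This is a minimal sketch, assuming `unzip` is available and that the jar sits at the path named above; on a machine without the jar it simply reports that Hive classes were not found.

```shell
#!/bin/sh
# Sketch: check whether a Spark assembly jar bundles Hive support by
# listing entries under org/apache/spark/sql/hive (path from the text above).
jar='lib/spark-assembly-1.0.0-hadoop2.2.0.jar'
if unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/spark/sql/hive/'; then
  status="Hive support present"
else
  status="no Hive classes found: download a Hive-enabled Spark build"
fi
echo "$status"
```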
Install MySQL
Check the VM's distribution release: lsb_release -a
Download the matching versions from http://dev.mysql.com/downloads/mysql#downloads
Download these three RPMs:
MySQL-server-5.6.20-1.el6.i686.rpm
MySQL-client-5.6.20-1.el6.i686.rpm
MySQL-devel-5.6.20-1.el6.i686.rpm
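The packages above are i686 (32-bit) builds, so it is worth confirming the VM's architecture first. A tiny sketch; on a 64-bit VM you would want the x86_64 packages instead:

```shell
#!/bin/sh
# Sketch: print the machine architecture so the RPM arch (i686 vs x86_64)
# can be chosen to match.
arch=$(uname -m)
echo "machine architecture: $arch"
```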
cd into the directory the packages were downloaded to before installing, otherwise problems can appear after installation:
rpm -ivh MySQL-server-5.6.20-1.el6.i686.rpm
rpm -ivh MySQL-client-5.6.20-1.el6.i686.rpm
rpm -ivh MySQL-devel-5.6.20-1.el6.i686.rpm
Start the service: service mysql start
mysql -uroot -p
Pressing Enter without a password gives an error. Looking closely at the output printed during installation, MySQL generated a random password for root and wrote it to /root/.mysql_secret, so:
cat /root/.mysql_secret
shows the password, here XlP5M_wE8w0LgrCG.
Login succeeds. However, if you try to run anything at this point, you get an error demanding that you change the default password first:
SET PASSWORD = PASSWORD('new_password');
After that, everything works.
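Extracting the one-time password from /root/.mysql_secret can also be scripted. A minimal sketch; the line format below is an assumption modelled on MySQL 5.6's installer output, and a temporary file stands in for the real /root/.mysql_secret so the snippet runs anywhere:

```shell
#!/bin/sh
# Sketch: pull the one-time root password out of a .mysql_secret-style file.
# The sample line format is an assumption; the real file is /root/.mysql_secret.
secret=$(mktemp)
echo '# The random password set for the root user at Sat Aug  9 10:00:00 2014 (local time): XlP5M_wE8w0LgrCG' > "$secret"

# The password is the last whitespace-separated field of the last non-empty line.
root_pw=$(awk 'NF {pw=$NF} END {print pw}' "$secret")
echo "one-time root password: $root_pw"
rm -f "$secret"
```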
Starting and stopping the MySQL service
Start (option 1):   service mysql start
Start (option 2):   /etc/init.d/mysql start
Stop (option 1):    service mysql stop
Stop (option 2):    /etc/init.d/mysql stop
Restart (option 1): service mysql restart
Restart (option 2): /etc/init.d/mysql restart
Create a hadoop user:
create user 'hadoop' identified by 'hadoop';
grant all on *.* to hadoop@'%' with grant option;
exit
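The two statements above can also be collected into a file and fed to the client in one go, which is handy when repeating the setup on another node. A sketch, using the same hadoop/hadoop example credentials; on the server you would run `mysql -uroot -p < create_hive_user.sql`:

```shell
#!/bin/sh
# Sketch: write the user-setup statements from above into a SQL file
# suitable for piping into `mysql -uroot -p`.
sqlfile=$(mktemp)
cat > "$sqlfile" <<'SQL'
CREATE USER 'hadoop' IDENTIFIED BY 'hadoop';
GRANT ALL ON *.* TO 'hadoop'@'%' WITH GRANT OPTION;
SQL
echo "wrote $(grep -c ';$' "$sqlfile") statements to $sqlfile"
```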
Log in again as the hadoop user and create the hive database:
mysql -uhadoop -p
create database hive;
use hive
show tables;
exit
Download apache-hive-0.13.1-bin.tar.gz
Unpack:  tar zxf apache-hive-0.13.1-bin.tar.gz
Rename:  mv apache-hive-0.13.1-bin hive013
Edit the configuration files: cd hive013/conf
cp hive-default.xml.template hive-site.xml
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
HADOOP_HOME=/app/hadoop/hadoop220
vi hive-site.xml
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://DataNode2:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hadoop</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
<description>password to use against metastore database</description>
</property>
### hive.metastore.warehouse.dir uses the default location; change it if you wish.
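A common typo in the ConnectionURL above is a stray `=` right after the `?` (i.e. `?=createDatabaseIfNotExist=true`), which the MySQL driver cannot parse. A quick sketch of a sanity check on the URL string before starting Hive; DataNode2 is the metastore host from the config above:

```shell
#!/bin/sh
# Sketch: sanity-check the metastore JDBC URL for the stray '=' typo.
url='jdbc:mysql://DataNode2:3306/hive?createDatabaseIfNotExist=true'
case "$url" in
  *'?='*)                            verdict="bad: stray '=' after '?'" ;;
  *'?createDatabaseIfNotExist=true') verdict="URL looks OK" ;;
  *)                                 verdict="missing createDatabaseIfNotExist=true" ;;
esac
echo "$verdict"
```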
Add the JDBC driver: copy mysql-connector-java-5.1.26-bin.jar into the /app/hadoop/hive013/lib/ directory.
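Whether the connector jar actually landed in Hive's lib directory is easy to verify. A sketch; the real path above is /app/hadoop/hive013, but a temporary directory stands in for it here so the check runs anywhere:

```shell
#!/bin/sh
# Sketch: verify the MySQL connector jar is present in Hive's lib directory.
# A temp dir simulates /app/hadoop/hive013 from the text above.
HIVE_HOME=$(mktemp -d)
mkdir -p "$HIVE_HOME/lib"
touch "$HIVE_HOME/lib/mysql-connector-java-5.1.26-bin.jar"

if ls "$HIVE_HOME"/lib/mysql-connector-java-*.jar >/dev/null 2>&1; then
  driver_status="connector jar present"
else
  driver_status="connector jar missing"
fi
echo "$driver_status"
rm -rf "$HIVE_HOME"
```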
Start hive and run:
show databases;
You should see the default database.
Installing the Hive client
Copy the install from DataNode2 straight to DataNode1:
scp -r hive013/ hadoop@DataNode1:/app/hadoop/
In the conf directory: vi hive-site.xml
<property>
<name>hive.metastore.uris</name>
<value>thrift://DataNode2:9083</value>
<description>Thrift uri for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
Running the metastore in the background
(If you run it in the foreground, exit with Ctrl+C.)
nohup bin/hive --service metastore > metastore.log 2>&1 &
To stop the background job:
jobs
kill %num
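The start command above can be wrapped in a small function. This is a dry-run sketch: with DRY_RUN set it only prints the command; on the real server, where bin/hive exists under the hive013 install, you would unset DRY_RUN to actually launch the metastore.

```shell
#!/bin/sh
# Sketch: dry-run wrapper around the metastore start command used above.
# DRY_RUN=1 just prints the command instead of executing it.
DRY_RUN=1
start_metastore() {
  cmd='nohup bin/hive --service metastore > metastore.log 2>&1 &'
  if [ -n "$DRY_RUN" ]; then
    echo "would run: $cmd"
  else
    eval "$cmd"
  fi
}
start_metastore
```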
Copyright notice: this is an original post by the blog author; do not reproduce without permission.