zookeeper + hadoop + hbase + phoenix

Prepare three hosts

IP              Hostname  Role
192.168.233.139 hadoop1   master
192.168.233.141 hadoop2   slave1
192.168.233.142 hadoop3   slave2
First, install the JDK
Download the JDK
[root@hadoop1 /]# wget jdk1.8_181.tar.gz
[root@hadoop1 /]# tar xf jdk1.8_181.tar.gz
[root@hadoop1 /]# vim /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_112
export PATH=$PATH:$JAVA_HOME/bin
[root@hadoop1 /]# source /etc/profile
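To confirm the JDK is picked up from the new PATH (assuming the paths above), a quick check is:

[root@hadoop1 /]# java -version

This should report a 1.8.x version.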

Passwordless SSH between cluster nodes

[root@hadoop1 /]# ssh-keygen -P "" -f ~/.ssh/id_rsa
[root@hadoop1 /]# ssh-copy-id -i hadoop1
[root@hadoop1 /]# ssh-copy-id -i hadoop2
[root@hadoop1 /]# ssh-copy-id -i hadoop3

Note that the key must also be copied to the local host itself.
Set up hosts name resolution (the hostname-based ssh-copy-id commands above depend on these entries)

[root@hadoop1 /]# vim /etc/hosts
192.168.233.139 hadoop1
192.168.233.141 hadoop2
192.168.233.142 hadoop3

Sync to the other hosts

[root@hadoop1 /]# scp /etc/hosts hadoop2:/etc/
[root@hadoop1 /]# scp /etc/hosts hadoop3:/etc/
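With the host entries and keys in place, passwordless login can be verified (no password prompt expected):

[root@hadoop1 /]# ssh hadoop2 hostname
[root@hadoop1 /]# ssh hadoop3 hostname

Each command should print the remote hostname and return immediately.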

Install ZooKeeper
Create the directories

[root@hadoop1 /]# mkdir /big/ && cd /big/
[root@hadoop1 /]# wget zookeeper.tar.gz
[root@hadoop1 /]# tar xf zookeeper.tar.gz
[root@hadoop1 /]# mv zookeeper-xxx.0.xxx zookeeper
[root@hadoop1 /]# cd zookeeper
[root@hadoop1 zookeeper]# mkdir data

Edit the configuration file

[root@hadoop1 zookeeper]# cd conf
[root@hadoop1 conf]# cp zoo_sample.cfg zoo.cfg
[root@hadoop1 conf]# vim zoo.cfg
dataDir=/big/zookeeper/data
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888

server.1=hadoop1:2888:3888
'server' is a keyword, '1' is the node id, and 'hadoop1' is the hostname (an IP address can be used instead). The two ports are used for communication between the nodes (quorum communication and leader election, respectively). Add one line per node in the ensemble; node ids must not repeat.
Create a myid file under the data directory; the ids on the other nodes follow in sequence.
[root@hadoop1 conf]# echo 1 > /big/zookeeper/data/myid
Sync the zookeeper directory to the other hosts and modify myid on each
[root@hadoop1 conf]# scp -r /big/zookeeper 192.168.233.141:/big/
[root@hadoop1 conf]# scp -r /big/zookeeper 192.168.233.142:/big/
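For example, the myid on the other two nodes can be set over the passwordless SSH configured earlier:

[root@hadoop1 conf]# ssh hadoop2 'echo 2 > /big/zookeeper/data/myid'
[root@hadoop1 conf]# ssh hadoop3 'echo 3 > /big/zookeeper/data/myid'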
Start ZooKeeper; start it on all three hosts
[root@hadoop1 conf]# /big/zookeeper/bin/zkServer.sh start
Check the ZooKeeper status on each host

[root@hadoop1 zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /big/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop2 zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /big/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@hadoop3 zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /big/zookeeper/bin/../conf/zoo.cfg
Mode: follower

hadoop2 is the leader and the other hosts are followers.
ZooKeeper installation is complete.
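Optionally, client connectivity to each node can be checked with the bundled CLI (the -server flag is standard zkCli usage; any node can be targeted):

[root@hadoop1 zookeeper]# ./bin/zkCli.sh -server hadoop2:2181
[zk: hadoop2:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: hadoop2:2181(CONNECTED) 1] quit

On a fresh ensemble, ls / should show only the zookeeper znode.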

Install Hadoop
Download and extract the package

[root@hadoop1 big]# mv hadoop-2.6.0 hadoop

Set the environment variables in /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_112
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/big/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Reload to apply the changes

[root@hadoop1 hadoop]# source /etc/profile
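A quick sanity check that the variables took effect (assuming the paths above):

[root@hadoop1 hadoop]# hadoop version

This should print Hadoop 2.6.0 along with its build information.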

Edit the configuration files and environment settings,
inside the installation directory /big/hadoop/

[root@hadoop1 hadoop]# cd ./etc/hadoop/

Edit the core configuration file, adding the temporary directory and the default filesystem address

[root@hadoop1 hadoop]# vim core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/big/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
</configuration>

Set the name and data directories, a replication factor of 3, and the secondary NameNode address on hadoop1 (note: with only two DataNodes a replication factor of 3 can never be fully satisfied; 2 would match this cluster)

[root@hadoop1 hadoop]# vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/big/hadoop/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/big/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop1:50090</value>
    </property>
</configuration>

Copy the template file and edit the configuration

[root@hadoop1 hadoop]# cp  mapred-site.xml.template  mapred-site.xml

[root@hadoop1 hadoop]# vim mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoop1:9001</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

[root@hadoop1 hadoop]# vim yarn-site.xml 
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
   </property>
   <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
   </property>
</configuration>

Add the slave hosts

[root@hadoop1 hadoop]# vim slaves 
hadoop2
hadoop3

Set the environment variables in hadoop-env.sh

[root@hadoop1 hadoop]# vim hadoop-env.sh 
export JAVA_HOME=/opt/jdk1.8.0_112
export HADOOP_PREFIX=/big/hadoop

Create the required directories

[root@hadoop1 hadoop]# mkdir /big/hadoop/{name,data,tmp}
Sync to the slave hosts
[root@hadoop1 hadoop]# scp -r /big/hadoop hadoop2:/big/
[root@hadoop1 hadoop]# scp -r /big/hadoop hadoop3:/big/
Go to the installation directory, format HDFS, and start Hadoop
[root@hadoop1 hadoop]# cd /big/hadoop/
[root@hadoop1 hadoop]# ./bin/hadoop namenode -format
[root@hadoop1 hadoop]# ./sbin/start-all.sh 

Verify on the relevant servers: the master server, then a slave server

[root@hadoop1 hadoop]# jps
19826 NameNode
22773 Jps
20134 ResourceManager
17834 QuorumPeerMain

[root@hadoop2 hadoop]# jps
18064 DataNode
18273 Jps
7764 QuorumPeerMain
18159 NodeManager

Hadoop installation is complete
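As an optional smoke test (the /test path is just an example), HDFS can be exercised with a few shell commands:

[root@hadoop1 hadoop]# ./bin/hdfs dfs -mkdir /test
[root@hadoop1 hadoop]# ./bin/hdfs dfs -put /etc/hosts /test/
[root@hadoop1 hadoop]# ./bin/hdfs dfs -ls /test

The uploaded file should appear in the listing.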

Install HBase
Download and extract the package
[root@hadoop1 conf]# cd /big/
[root@hadoop1 big]# mv hbase-1.1.3 hbase
[root@hadoop1 big]# cd hbase/conf/
[root@hadoop1 conf]# vim hbase-env.sh
export HBASE_CLASSPATH=/big/hadoop/etc/hadoop/
export JAVA_HOME=/opt/jdk1.8.0_112
export HBASE_MANAGES_ZK=false
HBASE_MANAGES_ZK=false tells HBase to use the external ZooKeeper ensemble installed earlier instead of managing its own. Edit the configuration file, sync it to the other hosts, and start the service

[root@hadoop1 conf]# vim hbase-site.xml 
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://hadoop1:9000/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>hadoop1</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop1,hadoop2,hadoop3</value>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>60000000</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
</configuration>
[root@hadoop1 conf]# vim regionservers 
hadoop2
hadoop3
[root@hadoop1 conf]# cd /big
[root@hadoop1 big]# scp -r hbase hadoop2:/big/
[root@hadoop1 big]# scp -r hbase hadoop3:/big/
[root@hadoop1 big]# ./hbase/bin/start-hbase.sh

Check whether the services have started
Master server

[root@hadoop1 bin]# jps
19826 NameNode
23282 Jps
20134 ResourceManager
17834 QuorumPeerMain
22155 HMaster

Slave server

[root@hadoop2 zookeeper]# jps
18064 DataNode
19344 Jps
7764 QuorumPeerMain
18873 HRegionServer

Connect to ZooKeeper to check that HBase has registered itself; if everything is working there will be an hbase znode

[root@hadoop1 big]# ./zookeeper/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, hbase]

Enter the HBase shell and run scan 'hbase:meta' to check that the metadata table is populated; quit exits the shell

[root@hadoop1 big]# hbase/bin/hbase shell
scan 'hbase:meta'
quit
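For a fuller sanity check, a throw-away table can be created, written to, and read back from the same shell (the table and column family names here are only examples):

create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:greeting', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'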

Install Phoenix
Download and extract the package
Copy the jars from the Phoenix directory into HBase's lib directory on every node

[root@hadoop1 big]# cp phoenix-4.7.0-HBase-1.1-bin/*.jar hbase/lib/
[root@hadoop1 big]# scp phoenix-4.7.0-HBase-1.1-bin/*.jar hadoop2:/big/hbase/lib/
[root@hadoop1 big]# scp phoenix-4.7.0-HBase-1.1-bin/*.jar hadoop3:/big/hbase/lib/

Stop the HBase cluster
[root@hadoop1 big]# ./hbase/bin/stop-hbase.sh
Copy HBase's hbase-site.xml into the Phoenix bin/ directory, replacing the configuration file that ships with Phoenix (run from the Phoenix bin/ directory)
[root@hadoop1 bin]# cp /big/hbase/conf/hbase-site.xml .
Restart the HBase cluster
[root@hadoop1 bin]# /big/hbase/bin/start-hbase.sh
Go to the bin/ directory under the Phoenix installation directory and run the command below to enter the command line; if it connects, the installation succeeded. Tables can be listed with !tables, and !quit exits the command line.

[root@hadoop1 bin]# ./sqlline.py hadoop1:2181
0: jdbc:phoenix:hadoop1:2181> !tables
+------------+--------------+-------------+---------------+----------+------------+----+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   | REMARKS  | TYPE_NAME  | SE |
+------------+--------------+-------------+---------------+----------+------------+----+
|            | SYSTEM       | CATALOG     | SYSTEM TABLE  |          |            |    |
|            | SYSTEM       | FUNCTION    | SYSTEM TABLE  |          |            |    |
|            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |          |            |    |
|            | SYSTEM       | STATS       | SYSTEM TABLE  |          |            |    |
+------------+--------------+-------------+---------------+----------+------------+----+
0: jdbc:phoenix:hadoop1:2181> !quit
Closing: org.apache.phoenix.jdbc.PhoenixConnection
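A minimal end-to-end check from sqlline, using a throw-away table name, might look like this (standard Phoenix SQL):

0: jdbc:phoenix:hadoop1:2181> CREATE TABLE IF NOT EXISTS demo (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
0: jdbc:phoenix:hadoop1:2181> UPSERT INTO demo VALUES (1, 'hello');
0: jdbc:phoenix:hadoop1:2181> SELECT * FROM demo;
0: jdbc:phoenix:hadoop1:2181> DROP TABLE demo;

The SELECT should return the single upserted row.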

The web UIs on hadoop1 can be reached at the following ports:
http://192.168.233.139:8088/ (YARN ResourceManager)
http://192.168.233.139:16010 (HBase Master)
http://192.168.233.139:50070 (HDFS NameNode)

Common HBase shell commands

status    show cluster status
version   show the HBase version
list      list all tables
Create a table
create 'emp', 'personal data', 'professional data'
Disable a table
disable 'emp'
Scan a table
scan 'emp'
Check whether a table is disabled
is_disabled 'emp'
Disable all tables matching a pattern
disable_all 'r.*'
Enable a table
enable 'emp'
Check whether a table is enabled
is_enabled 'emp'
Show a table's detailed description
describe 'emp'
Check whether a table exists
exists 'emp'
Drop a table (it must be disabled first)
disable 'emp'
drop 'emp'
exists 'emp'
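To complement the table-level commands above, a few row-level commands against the 'emp' table created earlier (the column qualifiers and values here are only illustrative):

put 'emp', '1', 'personal data:name', 'raju'
put 'emp', '1', 'professional data:role', 'manager'
get 'emp', '1'
count 'emp'
delete 'emp', '1', 'personal data:name'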
