HBase Component Installation and Configuration

1. Experiment 1: HBase Component Installation and Configuration

1.1 Objectives

After completing this experiment, you should be able to:

  • Install and configure HBase

  • Use common HBase shell commands

1.2 Requirements

  • Understand how HBase works

  • Be familiar with common HBase shell commands

1.3 Procedure

1.3.1 Task 1: Configure Time Synchronization

[root@master ~]# yum -y install chrony

[root@master ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server time1.aliyun.com iburst 

[root@master ~]# systemctl restart chronyd.service 
[root@master ~]# systemctl enable chronyd.service 

[root@master ~]# date 
Fri Apr 15 15:40:14 CST 2022



[root@slave1 ~]# yum -y install chrony

[root@slave1 ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server time1.aliyun.com iburst

[root@slave1 ~]# systemctl restart chronyd.service
[root@slave1 ~]# systemctl enable chronyd.service

[root@slave1 ~]# date
Fri Apr 15 15:40:17 CST 2022  



[root@slave2 ~]# yum -y install chrony

[root@slave2 ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server time1.aliyun.com iburst

[root@slave2 ~]# systemctl restart chronyd.service
[root@slave2 ~]# systemctl enable chronyd.service 

[root@slave2 ~]# date
Fri Apr 15 15:40:20 CST 2022
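
The date outputs above only show that the three clocks are close. As an optional check, chrony itself can report whether each node is really synchronizing against time1.aliyun.com; chronyc sources and chronyc tracking are standard chrony commands (output omitted here, since it varies by host):

[root@master ~]# chronyc sources -v
[root@master ~]# chronyc tracking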

1.3.2 Task 2: Install and Configure HBase

1.3.2.1 Step 1: Extract the HBase installation package

[root@master ~]# tar -zxvf hbase-1.2.1-bin.tar.gz -C /usr/local/src/

1.3.2.2 Step 2: Rename the HBase installation directory

[root@master ~]# cd /usr/local/src/

[root@master src]# mv hbase-1.2.1 hbase 

1.3.2.3 Step 3: Add environment variables on all nodes

[root@master ~]# cat /etc/profile
# set hbase environment
export HBASE_HOME=/usr/local/src/hbase
export PATH=$HBASE_HOME/bin:$PATH   

[root@slave1 ~]# cat /etc/profile
# set hbase environment
export HBASE_HOME=/usr/local/src/hbase
export PATH=$HBASE_HOME/bin:$PATH

[root@slave2 ~]# cat /etc/profile
# set hbase environment
export HBASE_HOME=/usr/local/src/hbase
export PATH=$HBASE_HOME/bin:$PATH
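
The listings above only show the finished /etc/profile. If the two export lines are not there yet, one possible way to append them on each node (master, slave1, slave2) is a small heredoc; editing the file with vi, as the appendix at the end of this post does, works just as well:

[root@master ~]# cat >> /etc/profile <<'EOF'
# set hbase environment
export HBASE_HOME=/usr/local/src/hbase
export PATH=$HBASE_HOME/bin:$PATH
EOF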

1.3.2.4 Step 4: Apply the environment variables on all nodes

[root@master ~]# source /etc/profile
[root@master ~]# echo $PATH 
/usr/local/src/hbase/bin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/src/hbase/bin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/src/hive/bin:/root/bin:/usr/local/src/hive/bin:/usr/local/src/hive/bin  

[root@slave1 ~]# source /etc/profile
[root@slave1 ~]# echo $PATH 
/usr/local/src/hbase/bin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/src/hbase/bin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

[root@slave2 ~]# source /etc/profile
[root@slave2 ~]# echo $PATH 
/usr/local/src/hbase/bin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/src/hbase/bin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/src/jdk/bin:/usr/local/src/hadoop/bin:/usr/local/src/hadoop/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

1.3.2.5 Step 5: Change to the configuration directory on the master node

[root@master ~]# cd /usr/local/src/hbase/conf/

1.3.2.6 Step 6: Configure hbase-env.sh on the master node

[root@master conf]# cat hbase-env.sh 
export JAVA_HOME=/usr/local/src/jdk
export HBASE_MANAGES_ZK=true
export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop/

1.3.2.7 Step 7: Configure hbase-site.xml on the master node

[root@master conf]# cat hbase-site.xml 
<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://master:9000/hbase</value> 
	</property>
	<property>
		<name>hbase.master.info.port</name>
		<value>60010</value>
	</property>
	<property>
		<name>hbase.zookeeper.property.clientPort</name>
		<value>2181</value>
	</property>
	<property>
		<name>zookeeper.session.timeout</name>
		<value>120000</value>
	</property>
	<property>
		<name>hbase.zookeeper.quorum</name>
		<value>master,node1,node2</value>
	</property>
	<property>
		<name>hbase.tmp.dir</name>
		<value>/usr/local/src/hbase/tmp</value>
	</property>
	<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
	</property>
</configuration>
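
Note that hbase.rootdir must point at the same HDFS address that Hadoop advertises as fs.defaultFS, otherwise HBase cannot create its root directory. A quick optional sanity check on master (the core-site.xml path follows from the HBASE_CLASSPATH set above; the value printed should be hdfs://master:9000 if the two configurations agree):

[root@master conf]# grep -A1 'fs.defaultFS' /usr/local/src/hadoop/etc/hadoop/core-site.xml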

1.3.2.8 Step 8: Edit the regionservers file on the master node

[root@master conf]# cat regionservers 
node1
node2

1.3.2.9 Step 9: Create the hbase.tmp.dir directory on the master node

[root@master ~]# mkdir /usr/local/src/hbase/tmp

1.3.2.10 Step 10: Copy the HBase installation from master to node1 and node2

[root@master ~]# scp -r /usr/local/src/hbase/ root@node1:/usr/local/src/

[root@master ~]# scp -r /usr/local/src/hbase/ root@node2:/usr/local/src/

1.3.2.11 Step 11: Change ownership of the hbase directory on all nodes

[root@master ~]# chown -R hadoop:hadoop /usr/local/src/hbase/

[root@slave1 ~]# chown -R hadoop:hadoop /usr/local/src/hbase/

[root@slave2 ~]# chown -R hadoop:hadoop /usr/local/src/hbase/

1.3.2.12 Step 12: Switch to the hadoop user on all nodes

[root@master ~]# su - hadoop 
Last login: Mon Apr 11 00:42:46 CST 2022 on pts/0

[root@slave1 ~]# su - hadoop
Last login: Fri Apr  8 22:57:42 CST 2022 on pts/0

[root@slave2 ~]# su - hadoop
Last login: Fri Apr  8 22:57:54 CST 2022 on pts/0

1.3.2.13 Step 13: Start Hadoop

Start Hadoop first, then ZooKeeper, and finally HBase. (Because hbase-env.sh sets HBASE_MANAGES_ZK=true, the start-hbase.sh command in the next step starts the bundled ZooKeeper quorum, the HQuorumPeer processes, automatically.)

[hadoop@master ~]$ start-all.sh
[hadoop@master ~]$ jps
2130 SecondaryNameNode
1927 NameNode
2554 Jps
2301 ResourceManager

[hadoop@slave1 ~]$ jps
1845 NodeManager
1977 Jps
1725 DataNode

[hadoop@slave2 ~]$ jps
2080 Jps
1829 DataNode
1948 NodeManager

1.3.2.14 Step 14: Start HBase on the master node

[hadoop@master conf]$ start-hbase.sh 

[hadoop@master conf]$ jps
2130 SecondaryNameNode
3572 HQuorumPeer
1927 NameNode
5932 HMaster
2301 ResourceManager
6157 Jps

[hadoop@slave1 ~]$ jps
2724 Jps
1845 NodeManager
1725 DataNode
2399 HQuorumPeer
2527 HRegionServer

[root@slave2 ~]# jps
3795 Jps
1829 DataNode
3529 HRegionServer
1948 NodeManager
3388 HQuorumPeer
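
With HMaster and both HRegionServers up, you can optionally confirm from the cluster itself that the web UI is listening on the hbase.master.info.port configured earlier, before touching the Windows hosts file; any HTTP response from curl means the port is open:

[hadoop@master ~]$ curl -I http://master:60010/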

1.3.2.15 Step 15: Edit the hosts file on Windows

(C:\Windows\System32\drivers\etc\hosts)

Drag the hosts file to the desktop, edit it to add a mapping between the master hostname and its IP address, copy it back, and then open http://master:60010 in a browser to reach the HBase web UI.
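
The line added to the Windows hosts file simply pairs the master's IP address with its hostname. The master IP is not shown in this post, so the value below is only a placeholder; substitute the address of your own master node:

<master-ip>    master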

1.3.3 Task 3: Common HBase Shell Commands

1.3.3.1 Step 1: Enter the HBase shell

[hadoop@master ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

hbase(main):001:0>  

1.3.3.2 Step 2: Create a table named scores with two column families, grade and course

hbase(main):001:0> create 'scores','grade','course'
0 row(s) in 1.4400 seconds

=> Hbase::Table - scores

1.3.3.3 Step 3: Check the cluster status

hbase(main):002:0> status
1 active master, 0 backup masters, 2 servers, 0 dead, 1.5000 average load

1.3.3.4 Step 4: Check the HBase version

hbase(main):003:0> version
1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

1.3.3.5 Step 5: List tables

hbase(main):004:0> list
TABLE  
scores 
1 row(s) in 0.0150 seconds

=> ["scores"]

1.3.3.6 Step 6: Insert record 1: jie, grade: 146cloud

hbase(main):005:0> put 'scores','jie','grade:','146cloud'
0 row(s) in 0.1060 seconds

1.3.3.7 Step 7: Insert record 2: jie, course:math, 86

hbase(main):006:0> put 'scores','jie','course:math','86'
0 row(s) in 0.0120 seconds

1.3.3.8 Step 8: Insert record 3: jie, course:cloud, 92

hbase(main):009:0> put 'scores','jie','course:cloud','92'
0 row(s) in 0.0070 seconds

1.3.3.9 Step 9: Insert record 4: shi, grade: 133soft

hbase(main):010:0> put 'scores','shi','grade:','133soft'
0 row(s) in 0.0120 seconds

1.3.3.10 Step 10: Insert record 5: shi, course:math, 87

hbase(main):011:0> put 'scores','shi','course:math','87'
0 row(s) in 0.0090 seconds

1.3.3.11 Step 11: Insert record 6: shi, course:cloud, 96

hbase(main):012:0> put 'scores','shi','course:cloud','96'
0 row(s) in 0.0100 seconds

1.3.3.12 Step 12: Read jie's row

hbase(main):013:0> get 'scores','jie'
COLUMN          CELL
 course:cloud   timestamp=1650015032132, value=92
 course:math    timestamp=1650014925177, value=86
 grade:         timestamp=1650014896056, value=146cloud
3 row(s) in 0.0250 seconds

1.3.3.13 Step 13: Read jie's grade

hbase(main):014:0> get 'scores','jie','grade'
COLUMN  CELL   
 grade: timestamp=1650014896056, value=146cloud
1 row(s) in 0.0110 seconds
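
get can also be narrowed to a single column instead of a whole column family. For example, reading only jie's math score would look like the following (standard HBase shell syntax, shown here without output):

get 'scores','jie','course:math'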

1.3.3.14 Step 14: Scan the whole table

hbase(main):001:0> scan 'scores'
ROW  COLUMN+CELL  
 jie column=course:cloud, timestamp=1650015032132, value=92   
 jie column=course:math, timestamp=1650014925177, value=86
 jie column=grade:, timestamp=1650014896056, value=146cloud   
 shi column=course:cloud, timestamp=1650015240873, value=96   
 shi column=course:math, timestamp=1650015183521, value=87
2 row(s) in 0.1490 seconds

1.3.3.15 Step 15: Scan the table by column family

hbase(main):002:0> scan 'scores',{COLUMNS=>'course'}
ROW  COLUMN+CELL  
 jie column=course:cloud, timestamp=1650015032132, value=92   
 jie column=course:math, timestamp=1650014925177, value=86
 shi column=course:cloud, timestamp=1650015240873, value=96   
 shi column=course:math, timestamp=1650015183521, value=87
2 row(s) in 0.0160 seconds
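
scan accepts further options in the same hash syntax; for instance LIMIT caps the number of rows returned and STARTROW sets where the scan begins. The two commands below are illustrative sketches only (output omitted):

scan 'scores',{COLUMNS=>'course', LIMIT=>1}
scan 'scores',{STARTROW=>'shi'}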

1.3.3.16 Step 16: Delete a specific cell

hbase(main):003:0> delete 'scores','shi','grade'
0 row(s) in 0.0560 seconds

1.3.3.17 Step 17: Run scan again after the delete

hbase(main):004:0> scan 'scores'
ROW  COLUMN+CELL  
 jie column=course:cloud, timestamp=1650015032132, value=92   
 jie column=course:math, timestamp=1650014925177, value=86
 jie column=grade:, timestamp=1650014896056, value=146cloud   
 shi column=course:cloud, timestamp=1650015240873, value=96   
 shi column=course:math, timestamp=1650015183521, value=87
2 row(s) in 0.0130 seconds

1.3.3.18 Step 18: Add a new column family

hbase(main):005:0> alter 'scores',NAME=>'age'
Updating all regions with the new schema...
1/1 regions updated.
Done.
0 row(s) in 2.0110 seconds
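
Besides adding a family, alter can change the attributes of an existing one. For example, keeping up to three versions of every cell in the course family would look like this (illustrative only, output omitted):

alter 'scores',NAME=>'course',VERSIONS=>3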

1.3.3.19 Step 19: View the table schema

hbase(main):006:0> describe 'scores'
Table scores is ENABLED   
scores
COLUMN FAMILIES DESCRIPTION   
{NAME => 'age', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'course', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'grade', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
3 row(s) in 0.0230 seconds

1.3.3.20 Step 20: Delete a column family

hbase(main):007:0> alter 'scores',NAME=>'age',METHOD=>'delete'
Updating all regions with the new schema...
1/1 regions updated.
Done.
0 row(s) in 2.1990 seconds

1.3.3.21 Step 21: Disable the table (a table must be disabled before it can be dropped)

hbase(main):008:0> disable 'scores'
0 row(s) in 2.3190 seconds
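
disable only takes the table offline; to actually remove it you would follow up with drop and then confirm with list, as the appendix at the end of this post does:

drop 'scores'
list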

1.3.3.22 Step 22: Exit the HBase shell

hbase(main):009:0> quit

1.3.3.23 Step 23: Stop HBase

[hadoop@master ~]$ stop-hbase.sh
stopping hbase.................
master: stopping zookeeper.
node2: stopping zookeeper.
node1: stopping zookeeper.

Then stop Hadoop on the master node.

[hadoop@master ~]$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
10.10.10.130: stopping datanode
10.10.10.129: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
10.10.10.129: stopping nodemanager
10.10.10.130: stopping nodemanager
no proxyserver to stop

[hadoop@master ~]$ jps
3820 Jps

[hadoop@slave1 ~]$ jps 
2220 Jps

[root@slave2 ~]# jps 
2082 Jps

That's a wrap!

Appendix:

Chapter 8 (HBase) lab steps:

1. Configure time synchronization (run on all nodes)

yum -y install chrony
vi /etc/chrony.conf
pool time1.aliyun.com iburst

After saving the configuration above, run the following commands:

systemctl enable --now chronyd
systemctl status chronyd

If the output shows "running", the service is working.

2. Deploy HBase (on master)

Use Xftp to upload the package to /opt/software, then run:

tar xf /opt/software/hbase-1.2.1-bin.tar.gz -C /usr/local/src/
cd /usr/local/src/
mv hbase-1.2.1 hbase

vi /etc/profile.d/hbase.sh
export HBASE_HOME=/usr/local/src/hbase
export PATH=${HBASE_HOME}/bin:$PATH

After saving the file above, run the following commands:

source /etc/profile.d/hbase.sh
echo $PATH

If the hbase path shows up in the PATH output, the environment variable is in effect.

3. Configure HBase (on master)

cd /usr/local/src/hbase/conf/

vi hbase-env.sh
export JAVA_HOME=/usr/local/src/jdk
export HBASE_MANAGES_ZK=true
export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop/

After saving the file above, run the following command and add these properties:

vi hbase-site.xml
<property>
	<name>hbase.rootdir</name>
	<value>hdfs://master:9000/hbase</value>
</property>
<property>
	<name>hbase.master.info.port</name>
	<value>60010</value>
</property>
<property>
	<name>hbase.zookeeper.property.clientPort</name>
	<value>2181</value>
</property>
<property>
	<name>zookeeper.session.timeout</name>
	<value>10000</value>
</property>
<property>
	<name>hbase.zookeeper.quorum</name>
	<value>master,slave1,slave2</value>
</property>
<property>
	<name>hbase.tmp.dir</name>
	<value>/usr/local/src/hbase/tmp</value>
</property>
<property>
	<name>hbase.cluster.distributed</name>
	<value>true</value>
</property>

After saving the configuration above, run the following command:

mkdir -p /usr/local/src/hbase/tmp

vi regionservers
10.10.10.129
10.10.10.130

After saving the configuration above, run the following commands:

scp -r /usr/local/src/hbase slave1:/usr/local/src/
scp -r /usr/local/src/hbase slave2:/usr/local/src/
scp /etc/profile.d/hbase.sh slave1:/etc/profile.d/
scp /etc/profile.d/hbase.sh slave2:/etc/profile.d/
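
The hbase.sh profile script copied to slave1 and slave2 takes effect at the next login; to apply it in the current shells you can source it there as well, exactly as on master:

source /etc/profile.d/hbase.sh
echo $PATH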

Run the following commands on all nodes (including master):

chown -R hadoop.hadoop /usr/local/src
ll /usr/local/src/
su - hadoop

4. Start HBase (run on master)

Start the distributed Hadoop cluster on master:

start-all.sh

After running the command above, make sure the master node is running the NameNode, SecondaryNameNode, and ResourceManager processes, and that the slave nodes are running DataNode and NodeManager.

start-hbase.sh

After running the command above, make sure the master node is running the HQuorumPeer and HMaster processes, and that the slave nodes are running HQuorumPeer and HRegionServer.

On the Windows host:
Drag the hosts file out of C:\windows\system32\drivers\etc\ onto the desktop, edit it to add a mapping between the master hostname and its IP address, copy it back, and then open http://master:60010 in a browser to reach the HBase web UI.

5. HBase shell usage (run on master)

su - hadoop
hbase shell

Create a table named scores with two column families:

hbase(main):001:0> create 'scores','grade','course'

Check the HBase status:

hbase(main):002:0> status

Check the database version:

hbase(main):003:0> version

List tables:

hbase(main):004:0> list

Insert record 1: jie, grade: 146cloud

hbase(main):005:0> put 'scores','jie','grade:','146cloud'

Insert record 2: jie, course:math, 86

hbase(main):006:0> put 'scores','jie','course:math','86'

Insert record 3: jie, course:cloud, 92

hbase(main):007:0> put 'scores','jie','course:cloud','92'

Insert record 4: shi, grade: 133soft

hbase(main):008:0> put 'scores','shi','grade:','133soft'

Insert record 5: shi, course:math, 87

hbase(main):009:0> put 'scores','shi','course:math','87'

Insert record 6: shi, course:cloud, 96

hbase(main):010:0>  put 'scores','shi','course:cloud','96'

Read jie's row:

hbase(main):011:0> get 'scores','jie'

Read jie's grade:

hbase(main):012:0> get 'scores','jie','grade'

Scan the whole table:

hbase(main):013:0> scan 'scores'

Scan the table by column family:

hbase(main):014:0> scan 'scores',{COLUMNS=>'course'}

Delete a specific cell:

hbase(main):016:0> delete 'scores','shi','grade'

Add a new column family named age:

hbase(main):019:0> alter 'scores',NAME=>'age'

View the table schema:

hbase(main):021:0> describe 'scores'

Delete the column family named age:

hbase(main):023:0> alter 'scores',NAME=>'age',METHOD=>'delete'

Delete the table:

hbase(main):025:0> disable 'scores'
hbase(main):026:0> drop 'scores'
hbase(main):027:0> list

Exit the HBase shell:

hbase(main):028:0> quit

Stop HBase:

stop-hbase.sh
jps