Author: @郑琰
Please credit the source when reposting this article: https://www.cnblogs.com/zhengyan6/p/16148469.html
HBase Installation and Configuration
Configure time synchronization (run on all nodes)
| yum -y install chrony |
| vi /etc/chrony.conf |
| |
| server time1.aliyun.com iburst |
| # or |
| pool time1.aliyun.com iburst |
| |
| systemctl enable --now chronyd |
| systemctl status chronyd |
| |
| [root@master ~]# tar -zxvf /opt/software/hbase-1.2.1-bin.tar.gz -C /usr/local/src/ |
| [root@master ~]# cd /usr/local/src/ |
| [root@master src]# mv hbase-1.2.1 hbase |
| [root@master ~]# vi /etc/profile |
| |
| export HBASE_HOME=/usr/local/src/hbase |
| export PATH=$HBASE_HOME/bin:$PATH |
| [root@master ~]# source /etc/profile |
| [root@master ~]# cd /usr/local/src/hbase/conf/ |
- Step 6: Configure the hbase-env.sh file on the master node
| |
| [root@master conf]# vi hbase-env.sh |
| |
| export JAVA_HOME=/usr/local/src/jdk |
| |
| export HBASE_MANAGES_ZK=false |
| |
| export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop/ |
- Step 7: Configure hbase-site.xml on the master node
| # Open the configuration file |
| [root@master conf]# vi hbase-site.xml |
| # Add the following |
| <property> |
| <name>hbase.rootdir</name> |
| <value>hdfs://master:9000/hbase</value> <!-- HDFS NameNode port 9000 --> |
| <description>The directory shared by region servers.</description> |
| </property> |
| <property> |
| <name>hbase.master.info.port</name> |
| <value>60010</value> <!-- master web UI on port 60010 --> |
| </property> |
| <property> |
| <name>hbase.zookeeper.property.clientPort</name> |
| <value>2181</value> <!-- ZooKeeper client port 2181 --> |
| <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description> |
| </property> |
| <property> |
| <name>zookeeper.session.timeout</name> |
| <value>120000</value> <!-- ZooKeeper session timeout (ms) --> |
| </property> |
| <property> |
| <name>hbase.zookeeper.quorum</name> |
| <value>master,slave1,slave2</value> <!-- nodes in the ZooKeeper ensemble --> |
| </property> |
| <property> |
| <name>hbase.tmp.dir</name> |
| <value>/usr/local/src/hbase/tmp</value> <!-- HBase temporary directory --> |
| </property> |
| <property> |
| <name>hbase.cluster.distributed</name> |
| <value>true</value> <!-- run HBase in fully distributed mode --> |
| </property> |
hbase.rootdir: the directory HBase writes its data to. By default it points to /tmp/hbase-${user.name}, so data would be lost after a reboot (the operating system cleans /tmp on restart).
hbase.zookeeper.property.clientPort: the port clients use to connect to ZooKeeper.
zookeeper.session.timeout: the session timeout between a RegionServer and ZooKeeper. When it expires, ZooKeeper removes the RegionServer from the cluster list; once the HMaster is notified of the removal, it rebalances the regions that server was responsible for so the surviving RegionServers take them over.
hbase.zookeeper.quorum: defaults to localhost; lists the servers in the ZooKeeper ensemble.
hbase.master.info.port: the port of the master's web UI, accessed from a browser.
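Before copying the file to the slaves, the property/value pairs above can be sanity-checked programmatically. A minimal sketch, assuming the configuration snippet is wrapped in a `<configuration>` root element (the helper name `load_props` is illustrative, not part of HBase):

```python
import xml.etree.ElementTree as ET

# A subset of the hbase-site.xml snippet configured above,
# wrapped in a <configuration> root so it parses standalone.
conf_xml = """
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
"""

def load_props(xml_text):
    """Return a {name: value} dict for every <property> element."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

props = load_props(conf_xml)
# rootdir must live on HDFS, not the local /tmp default
assert props["hbase.rootdir"].startswith("hdfs://")
# fully distributed mode needs an explicit quorum
assert props["hbase.cluster.distributed"] == "true"
print(sorted(props))
```

The same check catches the classic mistake this section warns about: leaving hbase.rootdir at its /tmp default.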
- Step 8: Edit the regionservers file on the master node
| |
| [root@master conf]$ vi regionservers |
| slave1 |
| slave2 |
- Step 9: Create the hbase.tmp.dir directory on the master node
| [root@master usr]# mkdir /usr/local/src/hbase/tmp |
- Step 10: Sync the HBase installation from master to slave1 and slave2
| [root@master ~]# scp -r /usr/local/src/hbase/ root@slave1:/usr/local/src/ |
| [root@master ~]# scp -r /usr/local/src/hbase/ root@slave2:/usr/local/src/ |
| # Change ownership on the master node |
| [root@master ~]# chown -R hadoop:hadoop /usr/local/src/hbase/ |
| |
| # Change ownership on the slave1 node |
| [root@slave1 ~]# chown -R hadoop:hadoop /usr/local/src/hbase/ |
| |
| # Change ownership on the slave2 node |
| [root@slave2 ~]# chown -R hadoop:hadoop /usr/local/src/hbase/ |
| # On the master node |
| [root@master ~]# su - hadoop |
| [hadoop@master ~]$ source /etc/profile |
| |
| # On the slave1 node |
| [root@slave1 ~]# su - hadoop |
| [hadoop@slave1 ~]$ source /etc/profile |
| |
| # On the slave2 node |
| [root@slave2 ~]# su - hadoop |
| [hadoop@slave2 ~]$ source /etc/profile |
Start Hadoop first, then ZooKeeper, and finally HBase.
First, start Hadoop on the master node
| [hadoop@master ~]$ start-all.sh |
| |
| [hadoop@master ~]$ jps |
| 10288 ResourceManager |
| 9939 NameNode |
| 10547 Jps |
| 10136 SecondaryNameNode |
| |
| |
| [hadoop@slave1 ~]$ jps |
| 4465 NodeManager |
| 4356 DataNode |
| 4584 Jps |
| |
| |
| [hadoop@slave2 ~]$ jps |
| 3714 DataNode |
| 3942 Jps |
| 3823 NodeManager |
If ZooKeeper has not been set up, skip this step; running the command would simply report that it does not exist.
| [hadoop@master ~]$ zkServer.sh start |
| [hadoop@master ~]$ jps |
| |
| 10288 ResourceManager |
| 9939 NameNode |
| 10599 Jps |
| 10136 SecondaryNameNode |
| 10571 QuorumPeerMain |
| |
| |
| [hadoop@slave1 ~]$ zkServer.sh start |
| [hadoop@slave1 ~]$ jps |
| 1473 QuorumPeerMain |
| 1302 NodeManager |
| 1226 DataNode |
| 1499 Jps |
| |
| |
| [hadoop@slave2 ~]$ zkServer.sh start |
| [hadoop@slave2 ~]$ jps |
| 1296 NodeManager |
| 1493 Jps |
| 1222 DataNode |
| 1469 QuorumPeerMain |
| [hadoop@master ~]$ start-hbase.sh |
| [hadoop@master ~]$ jps |
| |
| 1669 ResourceManager |
| 2327 Jps |
| 1322 NameNode |
| 2107 HMaster |
| 1948 QuorumPeerMain |
| 1517 SecondaryNameNode |
| |
| |
| [hadoop@slave1 ~]$ jps |
| 1473 QuorumPeerMain |
| 1557 HRegionServer |
| 1702 Jps |
| 1302 NodeManager |
| 1226 DataNode |
| |
| |
| [hadoop@slave2 ~]$ jps |
| 1296 NodeManager |
| 1222 DataNode |
| 1545 HRegionServer |
| 1725 Jps |
| 1469 QuorumPeerMain |
- Step 16: Open master:60010 in a browser; a page like the following appears

Common HBase Shell Commands
Start the hdfs, zookeeper, and hbase services
| [hadoop@master ~]$ hbase shell |
- Step 2: Create table scores with two column families: grade and course
| hbase(main):001:0> create 'scores','grade','course' |
| |
| 0 row(s) in 1.4480 seconds |
| => Hbase::Table - scores |
| hbase(main):001:0> status |
| |
| 1 active master, 0 backup masters, 2 servers, 0 dead, 1.0000 average load |
| hbase(main):002:0> version |
| |
| 1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016 |
| hbase(main):008:0> list |
| |
| TABLE |
| scores |
| 1 row(s) in 0.0100 seconds |
| => ["scores"] |
- Step 6: Insert record 1: jie, grade: 146cloud
| hbase(main):003:0> put 'scores','jie','grade:','146cloud' |
| |
| 0 row(s) in 0.2250 seconds |
- Step 7: Insert record 2: jie, course:math, 86
| hbase(main):004:0> put 'scores','jie','course:math','86' |
| |
| 0 row(s) in 0.0190 seconds |
- Step 8: Insert record 3: jie, course:cloud, 92
| hbase(main):005:0> put 'scores','jie','course:cloud','92' |
| |
| 0 row(s) in 0.0170 seconds |
- Step 9: Insert record 4: shi, grade: 133soft
| hbase(main):006:0> put 'scores','shi','grade:','133soft' |
| |
| 0 row(s) in 0.0070 seconds |
- Step 10: Insert record 5: shi, course:math, 87
| hbase(main):007:0> put 'scores','shi','course:math','87' |
| |
| 0 row(s) in 0.0060 seconds |
- Step 11: Insert record 6: shi, course:cloud, 96
| hbase(main):008:0> put 'scores','shi','course:cloud','96' |
| |
| 0 row(s) in 0.0070 seconds |
| hbase(main):009:0> get 'scores','jie' |
| |
| COLUMN CELL |
| course:cloud timestamp=1460479208148, value=92 |
| course:math timestamp=1460479163325, value=86 |
| grade: timestamp=1460479064086, value=146cloud |
| 3 row(s) in 0.0800 seconds |
| hbase(main):012:0> get 'scores','jie','grade' |
| |
| COLUMN CELL |
| grade: timestamp=1460479064086, value=146cloud |
| 1 row(s) in 0.0150 seconds |
| hbase(main):013:0> scan 'scores' |
| |
| ROW COLUMN+CELL |
| jie column=course:cloud, timestamp=1460479208148, value=92 |
| jie column=course:math, timestamp=1460479163325, value=86 |
| jie column=grade:, timestamp=1460479064086, value=146cloud |
| shi column=course:cloud, timestamp=1460479342925, value=96 |
| shi column=course:math, timestamp=1460479312963, value=87 |
| shi column=grade:, timestamp=1460479257429, value=133soft |
| 2 row(s) in 0.0570 seconds |
| hbase(main):014:0> scan 'scores',{COLUMNS=>'course'} |
| |
| ROW COLUMN+CELL |
| jie column=course:cloud, timestamp=1460479208148, value=92 |
| jie column=course:math, timestamp=1460479163325, value=86 |
| shi column=course:cloud, timestamp=1460479342925, value=96 |
| shi column=course:math, timestamp=1460479312963, value=87 |
| 2 row(s) in 0.0230 seconds |
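The get and scan results above follow directly from HBase's data model: each row key maps to cells grouped by column family. A rough sketch of that model in Python, using the six records inserted above — this only illustrates the lookup semantics, not the real HBase client API:

```python
# Each row key maps to {'family:qualifier': value} cells,
# mirroring the six put commands executed above.
scores = {
    "jie": {"grade:": "146cloud", "course:math": "86", "course:cloud": "92"},
    "shi": {"grade:": "133soft", "course:math": "87", "course:cloud": "96"},
}

def get(table, row):
    """Like `get 'scores','jie'`: every cell of one row."""
    return table.get(row, {})

def scan(table, columns=None):
    """Like `scan 'scores',{COLUMNS=>'course'}`: every row,
    optionally restricted to a single column family."""
    out = {}
    for row, cells in table.items():
        kept = {c: v for c, v in cells.items()
                if columns is None or c.split(":")[0] == columns}
        if kept:
            out[row] = kept
    return out

print(get(scores, "jie"))              # 3 cells, as in the shell output
print(scan(scores, columns="course"))  # only course:math / course:cloud cells
```

Note how restricting a scan to the course family drops the grade: cells but still returns both rows, matching the 2 row(s) count in the shell output.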
| hbase(main):015:0> delete 'scores','shi','grade' |
| |
| 0 row(s) in 0.0390 seconds |
| hbase(main):016:0> scan 'scores' |
| |
| ROW COLUMN+CELL |
| jie column=course:cloud, timestamp=1460479208148, value=92 |
| jie column=course:math, timestamp=1460479163325, value=86 |
| jie column=grade:, timestamp=1460479064086, value=146cloud |
| shi column=course:cloud, timestamp=1460479342925, value=96 |
| shi column=course:math, timestamp=1460479312963, value=87 |
| 2 row(s) in 0.0350 seconds |
| hbase(main):017:0> alter 'scores',NAME=>'age' |
| |
| Updating all regions with the new schema... |
| 0/1 regions updated. |
| 1/1 regions updated. |
| Done. |
| 0 row(s) in 3.0060 seconds |
| hbase(main):018:0> describe 'scores' |
| |
| Table scores is ENABLED |
| scores |
| COLUMN FAMILIES DESCRIPTION |
| {NAME => 'age', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', |
| KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', |
| COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', |
| BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} |
| {NAME => 'course', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', |
| KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', |
| COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', |
| BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} |
| {NAME => 'grade', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', |
| KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', |
| COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', |
| BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} |
| 3 row(s) in 0.0400 seconds |
| hbase(main):020:0> alter 'scores',NAME=>'age',METHOD=>'delete' |
| |
| Updating all regions with the new schema... |
| 1/1 regions updated. |
| Done. |
| 0 row(s) in 2.1600 seconds |
| hbase(main):021:0> disable 'scores' |
| |
| 0 row(s) in 2.2930 seconds |
| |
| hbase(main):022:0> drop 'scores' |
| |
| 0 row(s) in 1.2530 seconds |
| |
| hbase(main):023:0> list |
| |
| TABLE |
| 0 row(s) in 0.0150 seconds |
| => [] |
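The disable-before-drop ordering above is mandatory: HBase refuses to drop a table that is still enabled (it raises TableNotDisabledException). A toy model of that state machine — the class and exception used here are illustrative, not the real HBase API:

```python
# Minimal model of HBase's table lifecycle: tables carry an
# enabled flag, and drop is only legal once the table is disabled.
class MiniTableRegistry:
    def __init__(self):
        self.tables = {}  # table name -> enabled flag

    def create(self, name):
        self.tables[name] = True          # new tables start enabled

    def disable(self, name):
        self.tables[name] = False

    def drop(self, name):
        if self.tables.get(name, False):  # still enabled: refuse
            raise RuntimeError(f"table {name} is enabled; disable it first")
        del self.tables[name]

    def list(self):
        return sorted(self.tables)

r = MiniTableRegistry()
r.create("scores")
r.disable("scores")   # required before drop, as in the shell session above
r.drop("scores")
print(r.list())       # [] -- matches the empty `list` output above
```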
| hbase(main):024:0> quit |
| |
| [hadoop@master ~]$ |
| |
| |
| [hadoop@master ~]$ stop-hbase.sh |
| |
| |
| [hadoop@master ~]$ zkServer.sh stop |
| [hadoop@slave1 ~]$ zkServer.sh stop |
| [hadoop@slave2 ~]$ zkServer.sh stop |
| |
| |
| [hadoop@master ~]$ stop-all.sh |
Note: the clocks on all nodes must be synchronized, or HBase will not start.
Run the date command on each node to check whether the times agree; if they do not, set the clock on the offending nodes with date, e.g. date -s "2022-04-15 12:00:00".
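The reason the clocks matter: the HMaster compares each RegionServer's reported time against its own and refuses servers whose skew exceeds hbase.master.maxclockskew (30000 ms by default), raising a ClockOutOfSyncException. A minimal sketch of that check — the function name and timestamps are illustrative:

```python
# Master-side clock check, sketched: a RegionServer reporting in
# with too large a skew is refused (ClockOutOfSyncException in real HBase).
MAX_CLOCK_SKEW_MS = 30000  # default of hbase.master.maxclockskew

def check_clock_skew(master_time_ms, regionserver_time_ms,
                     max_skew_ms=MAX_CLOCK_SKEW_MS):
    """Return the absolute skew in ms; raise if it exceeds the limit."""
    skew = abs(master_time_ms - regionserver_time_ms)
    if skew > max_skew_ms:
        raise RuntimeError(
            f"clock skew {skew} ms exceeds limit {max_skew_ms} ms")
    return skew

# 10 s apart: accepted
check_clock_skew(1_650_000_010_000, 1_650_000_000_000)

# 60 s apart: this is why an unsynchronized node's RegionServer dies on startup
try:
    check_clock_skew(1_650_000_060_000, 1_650_000_000_000)
except RuntimeError as e:
    print(e)
```

Keeping chronyd running on every node, as configured at the start of this article, keeps the skew far below this threshold.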