ZooKeeper Learning (3): Basic Operations

I. Single-node operations

1. Start the server and the client:

# Start the server
~# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start the client
~# zkCli.sh
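
You can also check that the server is actually running (on a single node the status command reports "Mode: standalone") and point the client at an explicit address instead of the default, assuming the default client port 2181:

~# zkServer.sh status
~# zkCli.sh -server 127.0.0.1:2181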

2. List all current znodes

[zk: localhost:2181(CONNECTED) 0] ls -R /
# the znodes below are ZooKeeper's built-in system nodes
/
/zookeeper
/zookeeper/config
/zookeeper/quota

3. Create znodes for app1 and app2

[zk: localhost:2181(CONNECTED) 1] create /app1
[zk: localhost:2181(CONNECTED) 1] create /app2
[zk: localhost:2181(CONNECTED) 1] create /app1/p_1
[zk: localhost:2181(CONNECTED) 1] create /app1/p_2
[zk: localhost:2181(CONNECTED) 1] create /app1/p_3
 
# check the result
[zk: localhost:2181(CONNECTED) 0] ls -R /
/
/app1
/app2
/zookeeper
/app1/p_1
/app1/p_2
/app1/p_3
/zookeeper/config
/zookeeper/quota
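
Znodes can also carry a small data payload. The following commands round out the basic operations with get, set, and delete; the /app1/p_4 znode and its payload are only illustrative:

[zk: localhost:2181(CONNECTED) 1] create /app1/p_4 "hello"
Created /app1/p_4
[zk: localhost:2181(CONNECTED) 2] get /app1/p_4
hello
[zk: localhost:2181(CONNECTED) 3] set /app1/p_4 "world"
[zk: localhost:2181(CONNECTED) 4] delete /app1/p_4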

 

II. Implementing a distributed lock

A distributed lock must be releasable even if its holder crashes. ZooKeeper's ephemeral znodes have exactly this property: they are deleted automatically when the session that created them ends.

Client 1: create an ephemeral znode (acquiring the lock)

[zk: localhost:2181(CONNECTED) 6] create -e /lock
Created /lock

Client 2: try to create the same znode; because the lock is already held, client 2 cannot acquire it.

# the create fails: the node already exists
[zk: localhost:2181(CONNECTED) 0] create -e /lock
Node already exists: /lock
 
# watch this znode
[zk: localhost:2181(CONNECTED) 2] stat -w /lock
cZxid = 0xb
ctime = Thu May 19 09:13:48 EDT 2022
mZxid = 0xb
mtime = Thu May 19 09:13:48 EDT 2022
pZxid = 0xb
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x1074e5aa8bc0000
dataLength = 0
numChildren = 0

Client 1: quit the client. Since the znode is ephemeral, it is deleted automatically once the connection closes (releasing the lock).

[zk: localhost:2181(CONNECTED) 7] quit
 
WATCHER::
 
WatchedEvent state:Closed type:None path:null
2022-05-19 09:21:01,264 [myid:] - INFO  [main:ZooKeeper@1619] - Session: 0x1074e5aa8bc0000 closed
2022-05-19 09:21:01,266 [myid:] - ERROR [main:ServiceUtils@42] - Exiting JVM with code 0
2022-05-19 09:21:01,267 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@578] - EventThread shut down for session: 0x1074e5aa8bc0000

Client 2: because it set a watch on the znode, client 2 receives a NodeDeleted event when client 1 exits; it can then create the znode itself (acquiring the lock).

WATCHER::
 
WatchedEvent state:SyncConnected type:NodeDeleted path:/lock
 
# create the znode
[zk: localhost:2181(CONNECTED) 0] create -e /lock
Created /lock
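
Client 2 now holds the lock in exactly the same way: /lock is again an ephemeral znode tied to client 2's session, so stat /lock would show a new ephemeralOwner. A holder does not have to quit to release the lock; it can also delete the znode explicitly, for example:

[zk: localhost:2181(CONNECTED) 1] delete /lock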

  

III. Setting up a 3-node ZooKeeper cluster in quorum mode

Node 1 (zoo-quorum-node1.cfg):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper/data1
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
 
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
 
server.1=127.0.0.1:3333:3334
server.2=127.0.0.1:4444:4445
server.3=127.0.0.1:5555:5556

Node 2 (zoo-quorum-node2.cfg):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes. Use a different data directory for each node.
dataDir=/data/zookeeper/data2
# the port at which the clients will connect; use a different port for each node
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
 
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
 
server.1=127.0.0.1:3333:3334
server.2=127.0.0.1:4444:4445
server.3=127.0.0.1:5555:5556

Node 3 (zoo-quorum-node3.cfg):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes. Use a different data directory for each node.
dataDir=/data/zookeeper/data3
# the port at which the clients will connect; use a different port for each node
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
 
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
 
server.1=127.0.0.1:3333:3334
server.2=127.0.0.1:4444:4445
server.3=127.0.0.1:5555:5556 
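
Before starting, each node also needs a myid file in its dataDir whose content matches its server.N id in the configuration; without it the servers cannot identify themselves within the quorum. A minimal sketch, assuming the dataDir paths above already exist:

echo 1 > /data/zookeeper/data1/myid
echo 2 > /data/zookeeper/data2/myid
echo 3 > /data/zookeeper/data3/myid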

Start the cluster:

zkServer.sh start /data/zookeeper/conf/zoo-quorum-node1.cfg
zkServer.sh start /data/zookeeper/conf/zoo-quorum-node2.cfg
zkServer.sh start /data/zookeeper/conf/zoo-quorum-node3.cfg
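
Once all three servers are up, each one can be asked for its role: one should report "Mode: leader" and the other two "Mode: follower". This assumes zkServer.sh accepts the config file argument for status just as it does for start:

zkServer.sh status /data/zookeeper/conf/zoo-quorum-node1.cfg
zkServer.sh status /data/zookeeper/conf/zoo-quorum-node2.cfg
zkServer.sh status /data/zookeeper/conf/zoo-quorum-node3.cfg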

Connect to the cluster:

zkCli.sh -server 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
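
To confirm the three servers really form one ensemble, create a znode through the ensemble connection above and then read it back from a single member; the /cluster-test name is only illustrative:

# in the session connected to the ensemble
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 0] create /cluster-test
Created /cluster-test

# connect to node 3 alone; ls / should now include the new znode
zkCli.sh -server 127.0.0.1:2183
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[cluster-test, zookeeper]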

  

 
