KingbaseES V8R6 Cluster O&M Case: Recovering the Cluster after a Full Power Outage and Power-On
Official documentation:
https://help.kingbase.com.cn/v8/highly/availability/cluster-use/cluster-use-2.html#id35
Global failure recovery (multi-level automatic cluster recovery)
This feature automatically recovers the whole cluster after a whole-cluster failure, for example when all nodes lose power and are then powered back on (currently only level-1 automatic recovery is supported). The feature is enabled when the parameter auto_cluster_recovery_level=1 and disabled when auto_cluster_recovery_level=0; it is enabled by default. To change it, edit the parameter in repmgr.conf and then restart the repmgrd/kbha daemons for the change to take effect.
Automatic startup recovery only takes effect and recovers the cluster when all of the following conditions are met: every node in the cluster has failed, the network between nodes is healthy, and only one node in the cluster is in the primary state.
After a whole-cluster failure, the kbha daemon checks the state of the other cluster nodes. If all other nodes are reachable, every database is stopped, and no other database is in the primary state, it starts the local primary. The repmgrd daemon on the primary then attempts to recover the other standby nodes through automatic failure recovery.
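As a quick pre-check before relying on this feature, the parameter and the listed prerequisites can be inspected from any node. The following is only a sketch using standard OS commands, with paths taken from the environment in this case; adjust them to your installation:
[kingbase@node101 bin]$ grep auto_cluster_recovery_level ../etc/repmgr.conf
auto_cluster_recovery_level=1
[kingbase@node101 bin]$ ping -q -c3 -w2 192.168.1.102        # node-to-node network must be healthy
[kingbase@node101 bin]$ ps -ef | grep '[k]ingbase -D'        # no output means the local database is stopped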
Applicable versions:
KingbaseES V8R6
Tested version:
[kingbase@node101 bin]$ ./ksql -V
ksql (Kingbase) V008R006C005B0041
Cluster node information:
ID | Name | Role | Status | Upstream | repmgrd | PID | Paused? | Upstream last seen
----+---------+---------+-----------+----------+---------+-------+---------+--------------------
1 | node101 | primary | * running | | running | 11054 | no | n/a
2 | node102 | standby | running | node101 | running | 17731 | no | 1 second(s) ago
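The listing above also reports the repmgrd daemon state and PID for each node. In upstream repmgr 5.x, on which this build reports itself to be based, output of this form comes from the service/daemon status subcommand; this is only an assumption, since the KingbaseES build may differ:
[kingbase@node101 bin]$ ./repmgr service status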
1. Cluster node status
[kingbase@node101 bin]$ ./repmgr cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+----------------------------------------------------------------------------------------------------------------------------------------------------
1 | node101 | primary | * running | | default | 100 | 43 | host=192.168.1.101 user=system dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
2 | node102 | standby | running | node101 | default | 100 | 43 | host=192.168.1.102 user=system dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
2. Cluster configuration parameters
[kingbase@node101 bin]$ cat ../etc/repmgr.conf|grep auto
recovery='automatic'
auto_cluster_recovery_level=1
failover='automatic'
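If the parameter needs to be changed (for example to disable automatic cluster recovery), the value is edited in repmgr.conf and the repmgrd/kbha daemons are then restarted, as noted in the official documentation. A minimal sketch; how the daemons are restarted depends on the cluster management tooling of your deployment:
[kingbase@node101 bin]$ sed -i 's/^auto_cluster_recovery_level=.*/auto_cluster_recovery_level=0/' ../etc/repmgr.conf
# then restart the repmgrd/kbha daemons for the change to take effect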
3. Cluster power-off
The test powers off the primary and standby nodes at the same time.
4. Node status after power is restored
1) Primary node recovery status
[kingbase@node101 bin]$ ps -ef |grep kingbase
kingbase 2456 1 0 16:27 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/kbha -A daemon -f /home/kingbase/cluster/R6HA/kha/kingbase/bin/../etc/repmgr.conf
kingbase 2507 2456 0 16:27 ? 00:00:00 sh -c /home/kingbase/cluster/R6HA/kha/kingbase/bin/repmgr node service --action start 2>/dev/null
kingbase 2508 2507 0 16:27 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/repmgr node service --action start
kingbase 2509 2508 0 16:27 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/sys_ctl -w -t 90 -D /data/kingbase/r6ha/data -l /home/kingbase/cluster/R6HA/kha/kingbase/bin/logfile start
kingbase 2511 2509 0 16:27 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/kingbase -D /data/kingbase/r6ha/data
kingbase 2512 2511 0 16:27 ? 00:00:00 kingbase: logger
kingbase 2513 2511 0 16:27 ? 00:00:00 kingbase: startup
---As shown above, after power-on the kbha daemon on the primary node invokes repmgr node service --action start, which uses sys_ctl to bring up the cluster and the database service.
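To watch this phase in more detail, the startup log that sys_ctl writes (the -l argument in the process listing above) can be followed; a simple aside using a standard command:
[kingbase@node101 bin]$ tail -f /home/kingbase/cluster/R6HA/kha/kingbase/bin/logfile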
Primary node recovery complete:
[kingbase@node101 bin]$ ps -ef |grep kingbase
kingbase 2519 2511 0 16:27 ? 00:00:00 kingbase: autovacuum launcher
kingbase 2520 2511 0 16:27 ? 00:00:00 kingbase: archiver
kingbase 2521 2511 0 16:27 ? 00:00:00 kingbase: stats collector
kingbase 2522 2511 0 16:27 ? 00:00:00 kingbase: ksh writer
kingbase 2523 2511 0 16:27 ? 00:00:00 kingbase: ksh collector
kingbase 2524 2511 0 16:27 ? 00:00:00 kingbase: kwr collector
kingbase 2525 2511 0 16:27 ? 00:00:00 kingbase: logical replication launcher
kingbase 2528 2511 0 16:27 ? 00:00:00 kingbase: system esrep 192.168.1.101(41774) idle
kingbase 2530 1 0 16:27 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/repmgrd -d -v -f /home/kingbase/cluster/R6HA/kha/kingbase/bin/../etc/repmgr.conf
kingbase 2532 2511 0 16:27 ? 00:00:00 kingbase: system esrep 192.168.1.101(41778) idle
kingbase 2661 2456 0 16:28 ? 00:00:00 ping -q -c3 -w2 192.168.1.1
kingbase 2664 2511 0 16:28 ? 00:00:00 kingbase: system esrep 192.168.1.101(41848) idle
kingbase 2675 2530 0 16:28 ? 00:00:00 ssh -o Batchmode=yes -q -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ServerAliveInterval=2 -o ServerAliveCountMax=5 -p 22 192.168.1.102 /home/kingbase/cluster/R6HA/kha/kingbase/bin/kbha -h 192.168.1.101 -A rejoin
kingbase 2679 2511 0 16:28 ? 00:00:00 kingbase: system esrep 192.168.1.102(11472) idle
kingbase 2681 2511 0 16:28 ? 00:00:00 kingbase: system esrep 192.168.1.102(11476) COPY
---As shown above, after power is restored the primary starts the kbha and repmgrd daemons and the database service, then connects to the standby over SSH (kbha -A rejoin) to perform the standby's recovery.
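For reference, the recovery that the primary drives here can also be run by hand on the standby when needed; the command below is exactly the one kbha issues later in the log analysis (section 4), shown only as a sketch:
[kingbase@node102 bin]$ ./repmgr --dbname="host=192.168.1.101 dbname=esrep user=system port=54321" node rejoin --force-rewind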
2) Standby node status
[kingbase@node102 ~]$ ps -ef |grep kingbase
kingbase 2306 1 0 16:27 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/kbha -A daemon -f /home/kingbase/cluster/R6HA/kha/kingbase/bin/../etc/repmgr.conf
kingbase 2688 1 0 16:28 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/kingbase -D /data/kingbase/r6ha/data
kingbase 2689 2688 0 16:28 ? 00:00:00 kingbase: logger
kingbase 2690 2688 0 16:28 ? 00:00:00 kingbase: startup recovering 0000002B0000000500000066
kingbase 2694 2688 0 16:28 ? 00:00:00 kingbase: checkpointer
kingbase 2695 2688 0 16:28 ? 00:00:00 kingbase: background writer
kingbase 2696 2688 0 16:28 ? 00:00:00 kingbase: stats collector
kingbase 2697 2688 0 16:28 ? 00:00:00 kingbase: walreceiver streaming 5/66027EF8
kingbase 2708 2688 0 16:28 ? 00:00:00 kingbase: system esrep 192.168.1.102(56891) idle
kingbase 2710 1 0 16:28 ? 00:00:00 /home/kingbase/cluster/R6HA/kha/kingbase/bin/repmgrd -d -v -f /home/kingbase/cluster/R6HA/kha/kingbase/bin/../etc/repmgr.conf
kingbase 2712 2688 0 16:28 ? 00:00:00 kingbase: system esrep 192.168.1.102(56897) idle
---As shown above, both the cluster daemons and the database service on the standby node have been started.
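As an extra check of the roles after power-on, pg_is_in_recovery() (the same function repmgrd queries in the log below) should return f on the primary and t on the standby. A minimal sketch, assuming ksql accepts the usual psql-style connection options:
[kingbase@node101 bin]$ ./ksql -U system -d esrep -p 54321 -c "SELECT pg_catalog.pg_is_in_recovery();"    # f on the primary; t when run on the standby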
3) Cluster status
[kingbase@node101 bin]$ ./repmgr cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+----------------------------------------------------------------------------------------------------------------------------------------------------
1 | node101 | primary | * running | | default | 100 | 43 | host=192.168.1.101 user=system dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
2 | node102 | standby | running | node101 | default | 100 | 43 | host=192.168.1.102 user=system dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
---As shown above, the cluster status has returned to normal.
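The recovery steps recorded in the repmgr metadata can also be reviewed; a sketch assuming the upstream repmgr 5.x cluster event command, which the KingbaseES build may or may not expose unchanged:
[kingbase@node101 bin]$ ./repmgr cluster event --limit 10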
4) Log analysis of the power-on recovery
Examine hamgr.log on the primary:
# After the host is powered on, the repmgrd process on the primary starts up
[2023-02-02 11:14:35] [NOTICE] repmgrd (repmgrd 5.0.0) starting up
[2023-02-02 11:14:35] [INFO] connecting to database "host=192.168.1.101 user=system dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3"
......
# After the primary's database service is started, repmgrd determines that the VIP is missing and loads it
[2023-02-02 11:14:57] [NOTICE] found primary node lost virtual_ip, try to acquire virtual_ip
[2023-02-02 11:14:59] [NOTICE] PING 192.168.1.254 (192.168.1.254) 56(84) bytes of data.
--- 192.168.1.254 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
[2023-02-02 11:14:59] [WARNING] ping host"192.168.1.254" failed
[2023-02-02 11:14:59] [DETAIL] average RTT value is not greater than zero
[2023-02-02 11:14:59] [DEBUG] executing:
/home/kingbase/cluster/R6HA/kha/kingbase/bin/kbha -A loadvip
[2023-02-02 11:14:59] [DEBUG] result of command was 0 (0)
[2023-02-02 11:14:59] [DEBUG] local_command(): no output returned
[2023-02-02 11:14:59] [DEBUG] executing:
/home/kingbase/cluster/R6HA/kha/kingbase/bin/kbha -A arping
[2023-02-02 11:14:59] [DEBUG] result of command was 0 (0)
[2023-02-02 11:14:59] [DEBUG] local_command(): no output returned
[2023-02-02 11:14:59] [INFO] loadvip result: 1, arping result: 1
[2023-02-02 11:14:59] [NOTICE] acquire the virtual ip 192.168.1.254/24 success on localhost
.......
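# Aside: once the VIP 192.168.1.254 has been acquired, it can be cross-checked on the primary with a standard OS command, e.g.:
[kingbase@node101 bin]$ ip addr | grep 192.168.1.254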
# The primary checks the standby's state and connects to it remotely to perform recovery
[2023-02-02 11:15:16] [INFO] child node: 2; attached: no
[2023-02-02 11:15:16] [INFO] recovery delay time reached. can do recovery now.
[2023-02-02 11:15:16] [DEBUG] update_node_record_set_active():
UPDATE repmgr.nodes SET active = FALSE WHERE node_id = 2
[2023-02-02 11:15:16] [NOTICE] mark node "node102" (ID: 2) as inactive
......
[2023-02-02 11:15:16] [DEBUG] test_ssh_connection(): executing ssh -o Batchmode=yes -q -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ServerAliveInterval=2 -o ServerAliveCountMax=5 -p 22 192.168.1.102 /bin/true 2>/dev/null
[2023-02-02 11:15:16] [DEBUG] remote_command():
ssh -o Batchmode=yes -q -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ServerAliveInterval=2 -o ServerAliveCountMax=5 -p 22 192.168.1.102 test -e /data/kingbase/r6ha/data/standby.signal
[2023-02-02 11:15:16] [DEBUG] remote_command(): no output returned
[2023-02-02 11:15:16] [NOTICE] [thread pid:2722] node (ID: 2; host: "192.168.1.102") is not attached, ready to auto-recovery
[2023-02-02 11:15:16] [DEBUG] get_recovery_type(): SELECT pg_catalog.pg_is_in_recovery()
[2023-02-02 11:15:16] [DEBUG] executing:
/home/kingbase/cluster/R6HA/kha/kingbase/bin/repmgr cluster show | grep -w "host" | grep -w "running" | grep -w "primary" | awk -F '|' '{print $NF}'
.......
# The primary connects to the standby over SSH and performs the node rejoin operation through the kbha process
[2023-02-02 11:15:16] [NOTICE] [thread pid:2722] Now, the primary host ip: 192.168.1.101
[2023-02-02 11:15:16] [DEBUG] test_ssh_connection(): executing ssh -o Batchmode=yes -q -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ServerAliveInterval=2 -o ServerAliveCountMax=5 -p 22 192.168.1.102 /bin/true 2>/dev/null
[2023-02-02 11:15:16] [INFO] [thread pid:2722] ES connection to host "192.168.1.102" succeeded, ready to do auto-recovery
[2023-02-02 11:15:16] [DEBUG] remote_command():
ssh -o Batchmode=yes -q -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ServerAliveInterval=2 -o ServerAliveCountMax=5 -p 22 192.168.1.102 /home/kingbase/cluster/R6HA/kha/kingbase/bin/kbha -h 192.168.1.101 -A rejoin
........
# The pid file and shared-memory access are used to determine whether the standby's database service is running, and sys_rewind is executed to make the standby's data consistent with the primary
[2023-02-02 11:15:14] [INFO] the Kingbase pid file is already exists, check pre-existing shared memory block (key 54321001, ID 2)
[2023-02-02 11:15:14] [INFO] pre-existing shared memory block (key 54321001, ID 2) is not in use
2023-02-02 11:15:14.465 CST [2678] DEBUG: shmem_exit(0): 0 before_shmem_exit callbacks to make
2023-02-02 11:15:14.465 CST [2678] DEBUG: shmem_exit(0): 0 on_shmem_exit callbacks to make
2023-02-02 11:15:14.465 CST [2678] DEBUG: proc_exit(0): 0 callbacks to make
2023-02-02 11:15:14.465 CST [2678] DEBUG: exit(0)
[2023-02-02 11:15:14] [INFO] unlink file /tmp/.s.KINGBASE.54321.lock
[2023-02-02 11:15:14] [NOTICE] executing repmgr command "/home/kingbase/cluster/R6HA/kha/kingbase/bin/repmgr --dbname="host=192.168.1.101 dbname=esrep user=system port=54321" node rejoin --force-rewind"
WARNING: database is not running, but it is not shut down cleanly
DEBUG: connecting to: "user=system connect_timeout=10 dbname=esrep host=192.168.1.101 port=54321 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3 fallback_application_name=repmgr"
INFO: timelines are same, this server is not ahead
DETAIL: local node lsn is 5/6C0000A0, rejoin target lsn is 5/6C006058
NOTICE: executing sys_rewind
DETAIL: sys_rewind command is "/home/kingbase/cluster/R6HA/kha/kingbase/bin/sys_rewind -D '/data/kingbase/r6ha/data' --source-server='host=192.168.1.101 user=system dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3'"
sys_rewind: warning: sys_rewind: target server must be shut down cleanly in control file, and could not open PID file "/data/kingbase/r6ha/data/kingbase.pid": No such file or directorypid file not found that it seems bogus. Trying to start rewind anyway...
sys_rewind: servers diverged at WAL location 0/0 on timeline 43
sys_rewind: the divergerec is invalid, set the diverged to the target server's WAL location at 5/6C000028 on timeline 43
sys_rewind: rewinding from last common checkpoint at 5/6C000028 on timeline 43
sys_rewind: find last common checkpoint start time from 2023-02-02 11:15:14.503077 CST to 2023-02-02 11:15:14.541435 CST, in "0.038358" seconds.
.......
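# Aside: the common checkpoint and timeline reported by sys_rewind above can be compared against each node's control data. This assumes the control-data tool in this release is named sys_controldata, mirroring sys_ctl/sys_rewind (not verified here):
[kingbase@node102 bin]$ ./sys_controldata /data/kingbase/r6ha/data | grep -i timeline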
# Recovery of the standby is complete and the standby rejoins the cluster
NOTICE: begin to start server at 2023-02-02 11:15:21.817078
NOTICE: starting server using "/home/kingbase/cluster/R6HA/kha/kingbase/bin/sys_ctl -w -t 90 -D '/data/kingbase/r6ha/data' -l /home/kingbase/cluster/R6HA/kha/kingbase/bin/logfile start"
NOTICE: start server finish at 2023-02-02 11:15:22.625625
NOTICE: NODE REJOIN successful
DETAIL: node 2 is now attached to node 1
[2023-02-02 11:15:22] [NOTICE] kbha: node (ID: 2) rejoin success.
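After the rejoin succeeds, streaming replication from node102 can be confirmed on the primary. A sketch assuming the PostgreSQL-compatible pg_stat_replication view is available (pg_catalog is present in this release, as the log above shows):
[kingbase@node101 bin]$ ./ksql -U system -d esrep -p 54321 -c "SELECT application_name, client_addr, state FROM pg_stat_replication;"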
5. Summary
When the entire cluster loses power and is then powered back on, the automatic cluster recovery feature restores the cluster quickly with little or no manual intervention, which is valuable for unattended production environments where the machine-room power supply is unstable.