KingbaseES Cluster O&M Case Study: The auto-recovery Mechanism After a Primary/Standby failover
Case description:
In a KingbaseES cluster, when the database service on a standby node goes down, the node's database service can be recovered automatically; and after the cluster triggers a failover (primary/standby switchover), the former primary is automatically recovered as a standby and rejoined to the cluster. KingbaseES V8R3 and V8R6 implement this differently, and this case study describes both mechanisms.
Applicable versions:
KingbaseES V8R3/R6
I. KingbaseES V8R3 implementation mechanism
Cluster architecture:
TEST=# show pool_nodes;
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_de
lay
---------+---------------+-------+--------+-----------+---------+------------+-------------------+---------------
----
0 | 192.168.1.101 | 54321 | up | 0.333333 | primary | 0 | false | 0
1 | 192.168.1.102 | 54321 | up | 0.333333 | standby | 0 | false | 0
2 | 192.168.1.103 | 54321 | up | 0.333333 | standby | 0 | true | 0
(3 rows)
1. Auto-recovery of the standby database service
1) Standby node configuration
As shown below, after the cluster is deployed, a crond scheduled task is created for the network_rewind.sh script. This script checks the database service status and recovers the service when necessary.
[kingbase@node102 bin]$ cat /etc/cron.d/KINGBASECRON
*/1 * * * * kingbase /home/kingbase/cluster/HAR3/db/bin/network_rewind.sh
*/1 * * * * root /home/kingbase/cluster/HAR3/kingbasecluster/bin/restartcluster.sh
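The cron entries above fire once a minute. The shipped network_rewind.sh is not reproduced here; the sketch below only illustrates the general check-and-recover control flow such a cron job implements, with the probe and the recovery action stubbed out (probe_db, recover_db, and STATE are hypothetical names, not part of the product):

```shell
# Illustrative sketch only -- NOT the shipped network_rewind.sh.
# probe_db stands in for a real service check (e.g. a sys_ctl status call);
# recover_db stands in for the real rewind-and-restart action.
STATE=$(mktemp -u)                      # missing file simulates "service down"
probe_db()   { [ -f "$STATE" ]; }
recover_db() { touch "$STATE"; echo "recovered"; }

if ! probe_db; then                     # cron runs this check once a minute
    recover_db                          # prints "recovered"
fi
probe_db && echo "db up"                # prints "db up"
```

Because the check runs from crond every minute, a downed standby is normally back within roughly one scheduling interval.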
2) kingbasecluster detects the database service
As shown below, kingbasecluster detects the node's database service status and records it in cluster.log:
2023-06-16 11:00:13: pid 10530: LOG: failover command lock request from local kingbasecluster node received on IPC interface is forwarded to master watchdog node "192.168.1.102:9999 Linux node102"
2023-06-16 11:00:13: pid 10530: DETAIL: waiting for the reply...
2023-06-16 11:00:13: pid 10487: LOG: starting fail back. reconnect host 192.168.1.102(54321)
......
2023-06-16 11:00:13: pid 10487: LOG: failback done. reconnect host 192.168.1.102(54321)
3) The standby node runs recovery to restore the database service
As shown below, the standby calls sys_rewind to recover its database service; the process is recorded in recovery_log.
if recover node up, let it down , for rewind
2023-06-16 11:00:04 sys_rewind...
sys_rewind --target-data=/home/kingbase/cluster/HAR3/db/data --source-server="host=192.168.1.101 port=54321 user=SUPERMANAGER_V8ADMIN dbname=TEST"
datadir_source = /home/kingbase/cluster/HAR3/db/data
rewinding from last common checkpoint at 1/69000028 on timeline 13
find last common checkpoint start time from 2023-06-16 11:00:04.453168 CST to 2023-06-16 11:00:04.499843 CST, in "0.046675" seconds.
reading source file list
reading target file list
reading WAL in target
Rewind datadir file from source
Get archive xlog list from source
Rewind archive log from source
update the control file: minRecoveryPoint is '1/69022120', minRecoveryPointTLI is '13', and database state is 'in archive recovery'
rewind start wal location 1/69000028 (file 0000000D0000000100000069), end wal location 1/69022120 (file 0000000D0000000100000069). time from 2023-06-16 11:00:05.453168 CST to 2023-06-16 11:00:06.230588 CST, in "1.777420" seconds.
Done!
2. Recovering the former primary after failover
1) Cluster configuration
As shown below, the AUTO_PRIMARY_RECOVERY parameter controls whether, after a failover, the former primary is automatically recovered as a standby and rejoined to the cluster. With AUTO_PRIMARY_RECOVERY=0 (the default), no automatic recovery is performed and manual intervention is required; with AUTO_PRIMARY_RECOVERY=1, the former primary is automatically recovered as a standby and rejoined to the cluster.
[kingbase@node101 bin]$ cat ../etc/HAmodule.conf |grep -i recovery
#whether to turn on automatic recovery,0->off,1->on.example:AUTO_PRIMARY_RECOVERY="1"
AUTO_PRIMARY_RECOVERY=0
2) Configuring automatic recovery of the primary after failover
As shown below, setting AUTO_PRIMARY_RECOVERY=1 makes the former primary automatically recover as a standby and rejoin the cluster after a failover.
[kingbase@node101 HAR3]$ cat db/etc/HAmodule.conf |grep -i auto_primary
#whether to turn on automatic recovery,0->off,1->on.example:AUTO_PRIMARY_RECOVERY="1"
AUTO_PRIMARY_RECOVERY=1
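Enabling the flag is a one-line edit; a minimal sketch of flipping it with sed follows, demonstrated on a scratch copy so it can be run anywhere (on a real node, point CONF at db/etc/HAmodule.conf as shown above):

```shell
# Demonstrated on a scratch file; on a real node set CONF to the
# actual HAmodule.conf path instead of a temp file.
CONF=$(mktemp)
echo 'AUTO_PRIMARY_RECOVERY=0' > "$CONF"
sed -i 's/^AUTO_PRIMARY_RECOVERY=.*/AUTO_PRIMARY_RECOVERY=1/' "$CONF"
grep '^AUTO_PRIMARY_RECOVERY' "$CONF"    # prints AUTO_PRIMARY_RECOVERY=1
```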
cluster.log after the switchover:
The kingbasecluster process detects the state of the former primary's database service and performs recovery.
2023-06-16 13:57:12: pid 21372: LOG: starting fail back. reconnect host 192.168.1.101(54321)
2023-06-16 13:57:12: pid 21372: LOG: Node 1 is not down (status: 2)
2023-06-16 13:57:12: pid 21411: LOG: received the failover command lock request from remote kingbasecluster node "192.168.1.102:9999 Linux node102"
........
2023-06-16 13:57:12: pid 21372: LOG: failback done. reconnect host 192.168.1.101(54321)
recovery.log on the former primary:
As shown below, after running sys_rewind, the former primary is recovered as a standby and rejoins the cluster.
2023-06-16 13:57:04 sys_rewind...
sys_rewind --target-data=/home/kingbase/cluster/HAR3/db/data --source-server="host=192.168.1.102 port=54321 user=SUPERMANAGER_V8ADMIN dbname=TEST"
datadir_source = /home/kingbase/cluster/HAR3/db/data
pid file found but it seems bogus. Trying to start rewind anyway...
rewinding from last common checkpoint at 1/69024B90 on timeline 13
find last common checkpoint start time from 2023-06-16 13:57:04.593305 CST to 2023-06-16 13:57:04.616058 CST, in "0.022753" seconds.
reading source file list
reading target file list
reading WAL in target
Rewind datadir file from source
Get archive xlog list from source
Rewind archive log from source
update the control file: minRecoveryPoint is '1/6A025190', minRecoveryPointTLI is '14', and database state is 'in archive recovery'
rewind start wal location 1/69024B58 (file 0000000D0000000100000069), end wal location 1/6A025190 (file 0000000E000000010000006A). time from 2023-06-16 13:57:05.593305 CST to 2023-06-16 13:57:06.544920 CST, in "1.951615" seconds.
Done!
II. KingbaseES V8R6 implementation mechanism
Cluster architecture:
ID | Name | Role | Status | Upstream | repmgrd | PID | Paused? | Upstream last seen
----+-------+---------+-----------+----------+---------+-------+---------+--------------------
1 | node1 | primary | * running | | running | 16734 | no | n/a
2 | node2 | standby | running | node1 | running | 23606 | no | 1 second(s) ago
Cluster parameter configuration:
[kingbase@node101 bin]$ cat ../etc/repmgr.conf |grep -i recovery
recovery='manual'
The recovery parameter controls automatic recovery of failed nodes. Valid values are automatic, standby, and manual; the default is standby.
manual: auto-recovery is disabled; a failed node is never recovered automatically, whether it is a primary or a standby;
standby: only a failed standby is recovered automatically;
automatic: every failed node is recovered automatically.
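The three settings reduce to a simple decision rule. The function below is an illustrative sketch of that rule only (not repmgrd source code); it takes the configured recovery value and the role of the failed node:

```shell
# Sketch of the decision rule described above -- not actual repmgrd logic.
# $1: recovery setting (manual | standby | automatic)
# $2: role of the failed node (primary | standby)
should_auto_recover() {
    case "$1" in
        automatic) return 0 ;;              # recover any failed node
        standby)   [ "$2" = "standby" ] ;;  # recover failed standbys only
        *)         return 1 ;;              # manual: never auto-recover
    esac
}

should_auto_recover automatic primary && echo yes || echo no   # yes
should_auto_recover standby   primary && echo yes || echo no   # no
should_auto_recover standby   standby && echo yes || echo no   # yes
```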
1. Auto-recovery of the standby database service
1) Configure the cluster parameter
[kingbase@node101 bin]$ cat ../etc/repmgr.conf |grep -i recovery
recovery='standby'
The recovery parameter can be set to either standby or automatic.
2) Recovery after the standby database service goes down
As shown below, once the primary can no longer see the standby in sys_stat_replication, it remotely invokes the kbha process to start the standby's database service and rejoin it to the cluster.
[2023-06-16 14:40:58] [DEBUG] child node: 2; attached: no
.......
[2023-06-16 14:41:03] [NOTICE] [thread pid:17949] Now, the primary host ip: 192.168.1.101
[2023-06-16 14:41:03] [DEBUG] test_ssh_connection(): executing ssh -o Batchmode=yes -q -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ServerAliveInterval=2 -o ServerAliveCountMax=5 -p 22 192.168.1.102 /bin/true 2>/dev/null
[2023-06-16 14:41:03] [INFO] [thread pid:17949] ES connection to host "192.168.1.102" succeeded, ready to do auto-recovery
[2023-06-16 14:41:03] [INFO] node "node2" (ID: 2, HOST: 192.168.1.102) auto-recovery: START DATABASE
[2023-06-16 14:41:03] [DEBUG] remote_command():
ssh -o Batchmode=yes -q -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o ServerAliveInterval=2 -o ServerAliveCountMax=5 -p 22 192.168.1.102 /home/kingbase/cluster/R6HA/ha7/kingbase/kingbase/bin/kbha -A startdb
[2023-06-16 14:41:02] [NOTICE] begin to start server at 2023-06-16 14:41:02.000501
[2023-06-16 14:41:02] [NOTICE] starting server using "/home/kingbase/cluster/R6HA/ha7/kingbase/kingbase/bin/repmgr node service --action start 2>/dev/null"
[2023-06-16 14:41:02] [INFO] starting database ...
.......
[2023-06-16 14:41:03] [NOTICE] [thread pid:17949] node "node2" (ID: 2) auto-recovery success
2. Recovering the former primary after failover
1) Configure the cluster parameter
[kingbase@node101 bin]$ cat ../etc/repmgr.conf |grep -i recovery
recovery='automatic'
2) Recovery of the former primary after failover
As shown below, after the failover completes, if recovery='automatic' then auto-recovery is performed on the former primary, restoring it as a standby and rejoining it to the cluster. The following is from the new primary's hamgr.log.
[2023-06-16 15:08:45] [INFO] node "node1" (ID: 1, HOST: 192.168.1.101) auto-recovery: NODE REJOIN
..........
[NOTICE] begin to start server at 2023-06-16 15:08:47.668913
[NOTICE] starting server using "/home/kingbase/cluster/R6HA/ha7/kingbase/kingbase/bin/sys_ctl -w -t 90 -D '/data/kingbase/hac7/data' -l /home/kingbase/cluster/R6HA/ha7/kingbase/kingbase/bin/logfile start"
[NOTICE] start server finish at 2023-06-16 15:08:47.779029
[NOTICE] NODE REJOIN successful
[DETAIL] node 1 is now attached to node 2
[2023-06-16 15:08:47] [NOTICE] kbha: node (ID: 1) rejoin success.
III. Summary
Whether to enable auto-recovery of the primary and standby database services in a cluster should be decided from business requirements and data-safety considerations.
1. For KingbaseES V8R3 clusters, disable the rewind crond job on the standby node to prevent auto-recovery of the standby database service, as shown below:
[kingbase@node101 bin]$ cat /etc/cron.d/KINGBASECRON
##*/1 * * * * kingbase /home/kingbase/cluster/HAR3/db/bin/network_rewind.sh
2. The AUTO_PRIMARY_RECOVERY parameter enables or disables automatic recovery of the primary after a failover. Where data safety is critical, disable auto-recovery and recover the former primary manually only after the business data has been verified.
3. For KingbaseES V8R6, simply set the recovery parameter according to business requirements and data-safety needs.
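For point 1, commenting out the cron entry can be scripted with sed. The sketch below demonstrates the edit on a scratch copy so it is safe to try anywhere; on a real node the target is /etc/cron.d/KINGBASECRON, edited as root:

```shell
# Demonstrated on a scratch file standing in for /etc/cron.d/KINGBASECRON.
CRON=$(mktemp)
echo '*/1 * * * * kingbase /home/kingbase/cluster/HAR3/db/bin/network_rewind.sh' > "$CRON"
sed -i '/network_rewind\.sh/ s/^/##/' "$CRON"   # prefix the matching line with ##
grep 'network_rewind' "$CRON"                   # line is now ##*/1 * * * * ...
```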