KingbaseES V8R6 Cluster Operations Case: Resolving a "Dual-Primary" Fault

Case description:

In a primary/standby streaming-replication cluster, a network fault between the nodes, or an accidental restart of the old primary after a failover, can leave the cluster with two primaries (a "dual-primary" fault). Application writes then land on different hosts and the data diverges. This case describes how to resolve a dual-primary fault in a production environment.

Applicable version:

KingbaseES V8R6

Procedure:

1. Check the database service processes and cluster status on the primary and standby.
2. Determine the most recent primary from the control files and the state of the business data.
3. Once the new primary is confirmed, rejoin the other node to the cluster as a standby (node rejoin).
4. Verify the streaming-replication and cluster status.
5. Confirm that the primary and standby data are safe.

I. The "dual-primary" fault after cluster startup

As shown below, a "dual-primary" fault appeared while the cluster was starting. This kind of fault requires manual intervention: determine which node holds the most recent primary data, then restore the cluster around it.

[kingbase@node1 bin]$ ./sys_monitor.sh restart
2021-03-01 13:30:03 Ready to stop all DB ...
Service process "node_export" was killed at process 8253
......
2021-03-01 13:30:23 execute to start DB on "[192.168.7.249]" success, connect to check it.
2021-03-01 13:30:24 DB on "[192.168.7.249]" start success.
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string                                                                                                                                
----+---------+---------+-----------+----------+----------+----------+----------+-------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | primary | ! running |          | default  | 100      | 4        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
WARNING: following issues were detected
  - node "node249" (ID: 2) is running but the repmgr node record is inactive
2021-03-01 13:30:24 There are more than one primary DBs([2] DBs are running), will do nothing and exit.

---As shown above, the original standby node249 is in an abnormal state: its role also shows as 'primary', so the cluster has two primaries. (For a node in the 'primary' role, '* running' indicates a normal state and '! running' an abnormal one.)

II. Problem analysis

1. Check the database service and node status on the original standby

As shown below, the database service on the standby node has started in primary mode. (A correctly started standby should have a startup process performing recovery.)

# Cluster node status
[kingbase@node2 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status               | Upstream | Location | Priority | Timeline | Connection string                                                                                                                                
----+---------+---------+----------------------+----------+----------+----------+--------
 1  | node248 | standby | ! running as primary |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | primary | * running            |          | default  | 100      | 4        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

WARNING: following issues were detected
  - node "node248" (ID: 1) is running as primary but the repmgr node record is inactive

# Database service processes
[kingbase@node1 etc]$ ps -ef |grep kingbase

kingbase 20612     1  0 13:30 ?        00:00:00 /home/kingbase/cluster/R6HA/KHA/kingbase/bin/kingbase -D /home/kingbase/cluster/R6HA/KHA/kingbase/data
kingbase 20626 20612  0 13:30 ?        00:00:00 kingbase: logger   
kingbase 20628 20612  0 13:30 ?        00:00:00 kingbase: checkpointer   
kingbase 20629 20612  0 13:30 ?        00:00:00 kingbase: background writer   
kingbase 20630 20612  0 13:30 ?        00:00:00 kingbase: walwriter   
kingbase 20631 20612  0 13:30 ?        00:00:00 kingbase: autovacuum launcher   
kingbase 20632 20612  0 13:30 ?        00:00:00 kingbase: archiver   
kingbase 20633 20612  0 13:30 ?        00:00:00 kingbase: stats collector   
kingbase 20634 20612  0 13:30 ?        00:00:00 kingbase: ksh writer   
kingbase 20635 20612  0 13:30 ?        00:00:00 kingbase: ksh collector   
kingbase 20636 20612  0 13:30 ?        00:00:00 kingbase: sys_kwr collector   
kingbase 20637 20612  0 13:30 ?        00:00:00 kingbase: logical replication launcher  

# The recovery status is 'f': the node is not in recovery
test=#  select sys_is_in_recovery();
 sys_is_in_recovery()
----------------------
 f
(1 row)

---The standby node's data directory should contain the standby marker file 'standby.signal'. If this file is missing, the node will start as a primary instead of entering recovery.
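Whether the marker file is present can be checked before starting a node. A minimal sketch (the data-directory path in the usage comment is this case's path and should be adjusted):

```shell
# Report the role a node will assume at startup, based on whether the
# standby marker file exists in its data directory.
check_standby_signal() {
  local datadir="$1"
  if [ -f "$datadir/standby.signal" ]; then
    echo "standby.signal present: node will start in recovery (standby) mode"
  else
    echo "standby.signal missing: node will start as a primary"
  fi
}

# Usage example:
# check_standby_signal /home/kingbase/cluster/R6HA/KHA/kingbase/data
```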

2. Check the cluster node status and database service on the original primary:

# Cluster node status
[kingbase@node2 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status               | Upstream | Location | Priority | Timeline | Connection string                                                                                                                                
----+---------+---------+----------------------+----------+----------+----------+--------
 1  | node248 | standby | ! running as primary |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | primary | * running            |          | default  | 100      | 4        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

WARNING: following issues were detected
  - node "node248" (ID: 1) is running as primary but the repmgr node record is inactive

# Database service status

[kingbase@node2 bin]$ ps -ef |grep kingbase

kingbase 20161     1  0 13:29 ?        00:00:00 /home/kingbase/cluster/R6HA/KHA/kingbase/bin/kingbase -D /home/kingbase/cluster/R6HA/KHA/kingbase/data
kingbase 20172 20161  0 13:29 ?        00:00:00 kingbase: logger   
kingbase 20176 20161  0 13:29 ?        00:00:00 kingbase: checkpointer   
kingbase 20177 20161  0 13:29 ?        00:00:00 kingbase: background writer   
kingbase 20178 20161  0 13:29 ?        00:00:00 kingbase: walwriter   
kingbase 20179 20161  0 13:29 ?        00:00:00 kingbase: autovacuum launcher   
kingbase 20180 20161  0 13:29 ?        00:00:00 kingbase: archiver   
kingbase 20181 20161  0 13:29 ?        00:00:00 kingbase: stats collector   
kingbase 20182 20161  0 13:29 ?        00:00:00 kingbase: ksh writer   
kingbase 20183 20161  0 13:29 ?        00:00:00 kingbase: ksh collector   
kingbase 20184 20161  0 13:29 ?        00:00:00 kingbase: sys_kwr collector   
kingbase 20185 20161  0 13:29 ?        00:00:00 kingbase: logical replication launcher 

III. Identifying the most recent primary node

As shown above, the database services on both nodes are running in primary mode, so business data may have been written to both nodes, leaving them inconsistent. Before the cluster can be restored, the most recent primary node must be identified.

1. Query the most recent business data

Working with the application team, query the latest business data on both nodes and determine which node's data is the most complete and recent; use that as the basis for choosing the primary node.
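Assuming each node can report the newest timestamp in a key business table (for example via a query like `select max(update_time) from some_table` — the table and column names are placeholders), the two results can be compared directly; ISO-style timestamps order correctly as strings:

```shell
# Pick the node whose newest business timestamp is later.
# ts_a / ts_b: timestamps such as '2021-03-01 13:35:16' obtained from each node.
pick_newer_node() {
  local ts_a="$1" ts_b="$2"
  if [ "$ts_a" \> "$ts_b" ]; then
    echo "node_a"
  else
    echo "node_b"
  fi
}
```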

2. Compare the control files to assess the divergence between the nodes

New primary candidate node248:

[kingbase@node1 bin]$ ./sys_controldata -D ../data
sys_control version number:            1201
Catalog version number:               201909212
Database system identifier:           6950158917747347623
Database cluster state:               in production
sys_control last modified:             Mon 01 Mar 2021 01:35:16 PM CST
Latest checkpoint location:           1/F2008980
Latest checkpoint's REDO location:    1/F2008948
Latest checkpoint's REDO WAL file:    0000000500000001000000F2
Latest checkpoint's TimeLineID:       5
Latest checkpoint's PrevTimeLineID:   5
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:          0:8813
Latest checkpoint's NextOID:          32951
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Latest checkpoint's oldestXID:        839
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  8813
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 1
Latest checkpoint's oldestCommitTsXid:0
Latest checkpoint's newestCommitTsXid:0
Time of latest checkpoint:            Mon 01 Mar 2021 01:35:16 PM CST

Original primary node249:

[kingbase@node2 bin]$ ./sys_controldata -D ../data
sys_control version number:            1201
Catalog version number:               201909212
Database system identifier:           6950158917747347623
Database cluster state:               in production
sys_control last modified:             Mon 01 Mar 2021 01:34:45 PM CST
Latest checkpoint location:           1/F2002AC0
Latest checkpoint's REDO location:    1/F2002A88
Latest checkpoint's REDO WAL file:    0000000400000001000000F2
Latest checkpoint's TimeLineID:       4
Latest checkpoint's PrevTimeLineID:   4
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:          0:8810
Latest checkpoint's NextOID:          32951
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Latest checkpoint's oldestXID:        839
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  8810
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 1
Latest checkpoint's oldestCommitTsXid:0
Latest checkpoint's newestCommitTsXid:0
Time of latest checkpoint:            Mon 01 Mar 2021 01:34:45 PM CST

Comparing the control files shows that node248's timeline (5) is ahead of node249's (4), and its next transaction id (8813) is higher than node249's (8810). node248 is therefore chosen as the cluster's primary node, and node249 will become the standby.
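The control-file comparison can be scripted by parsing saved sys_controldata output. A sketch, assuming the output format shown above:

```shell
# Extract the checkpoint TimeLineID from saved sys_controldata output.
# (The "Latest checkpoint's TimeLineID" line appears before PrevTimeLineID,
# so the first match is the one we want.)
timeline() {
  awk -F': +' '/TimeLineID:/ {print $2; exit}' "$1"
}

# Extract the xid part of a line like "Latest checkpoint's NextXID: 0:8813".
next_xid() {
  awk -F'[: ]+' '/NextXID:/ {print $NF; exit}' "$1"
}

# Usage sketch: save each node's output first, e.g.
#   ./sys_controldata -D ../data > /tmp/node248.ctl
# then compare the timelines:
#   [ "$(timeline /tmp/node248.ctl)" -gt "$(timeline /tmp/node249.ctl)" ]
```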

Tips:
Do not rely on the control file alone when judging the most recent primary. After disconnecting all application access, start the databases and check which node's business data is the most recent. If the business data and the control file disagree, the business data is the final authority.

IV. Restoring the cluster architecture

As shown below, the original primary is rejoined to the cluster as the new standby:

1. Run node rejoin to add the standby node

[kingbase@node2 bin]$ ./sys_ctl stop -D ../data
waiting for server to shut down.... done
server stopped

[kingbase@node2 bin]$ ./repmgr node rejoin -h 192.168.7.248 -U esrep -d esrep --force-rewind
NOTICE: sys_rewind execution required for this node to attach to rejoin target node 1
DETAIL: rejoin target server's timeline 5 forked off current database system timeline 4 before current recovery point 1/F2002B70
NOTICE: executing sys_rewind
DETAIL: sys_rewind command is "/home/kingbase/cluster/R6HA/KHA/kingbase/bin/sys_rewind -D '/home/kingbase/cluster/R6HA/KHA/kingbase/data' --source-server='host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3'"
......
NOTICE: NODE REJOIN successful
DETAIL: node 2 is now attached to node 1

---If WAL segments needed for this recovery are missing, sys_rewind cannot complete and the standby node must be rebuilt with a fresh clone.
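To review the exact commands before running them on the demoted node, the sequence can be wrapped in a small plan generator (a sketch; the user and database names are the ones used in this case and should be adjusted for other environments):

```shell
# Print the command sequence for rejoining a demoted primary as a standby.
# The commands are only printed, not executed, so they can be inspected first.
rejoin_plan() {
  local primary_host="$1"
  cat <<EOF
./sys_ctl stop -D ../data
./repmgr node rejoin -h $primary_host -U esrep -d esrep --force-rewind
EOF
}

# Usage example:
# rejoin_plan 192.168.7.248
```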

2. Check the cluster node status on the new primary

[kingbase@node1 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string                                                                                                                                
----+---------+---------+-----------+----------+----------+----------+----------+--------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 4        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

3. Check the streaming-replication status

test=# select * from sys_stat_replication;
  pid  | usesysid | usename | application_name |  client_addr  | client_hostname | client_port |         backend_s
tart         | backend_xmin |   state   |  sent_lsn  | write_lsn  | flush_lsn  | replay_lsn | write_lag | flush_la
g | replay_lag | sync_priority | sync_state |          reply_time           
-------+----------+---------+------------------+---------------+-----------------+-------
 22853 |    16384 | esrep   | node249          | 192.168.7.249 |                 |       38638 | 2021-03-01 14:07:
24.293687+08 |              | streaming | 1/F20357A8 | 1/F20357A8 | 1/F20357A8 | 1/F20357A8 |           |         
  |            |             0 | async      | 2021-03-01 14:07:57.851500+08
(1 row)
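Replication lag in bytes can also be derived from the LSN columns above. An LSN such as 1/F20357A8 is a 64-bit WAL position written as two 32-bit hex halves; a sketch converting it so two positions can be subtracted:

```shell
# Convert an LSN like '1/F20357A8' (hi/lo 32-bit hex halves) to a byte offset.
lsn_to_bytes() {
  local hi=${1%%/*} lo=${1##*/}
  echo $(( 16#$hi * 4294967296 + 16#$lo ))
}

# Byte lag between two LSNs, e.g. sent_lsn and replay_lsn.
lsn_lag() {
  echo $(( $(lsn_to_bytes "$1") - $(lsn_to_bytes "$2") ))
}

# Usage example: a fully caught-up standby shows zero lag:
# lsn_lag 1/F20357A8 1/F20357A8
```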

V. Restarting the cluster and verifying

1. Restart the cluster

[kingbase@node1 bin]$ ./sys_monitor.sh restart
2021-03-01 14:09:05 Ready to stop all DB ...
......
2021-03-01 14:09:38 repmgrd on "[192.168.7.249]" start success.
 ID | Name    | Role    | Status    | Upstream | repmgrd | PID   | Paused? | Upstream last seen
----+---------+---------+-----------+----------+---------+-------+---------+--------------------
 1  | node248 | primary | * running |          | running | 24725 | no      | n/a                
 2  | node249 | standby |   running | node248  | running | 23587 | no      | n/a                
2021-03-01 14:09:46 Done.

2. Check the cluster node status

[kingbase@node1 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string                                                                                                                                
----+---------+---------+-----------+----------+----------+----------+----------+--------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

3. Check the streaming-replication status

test=# select * from sys_stat_replication;
  pid  | usesysid | usename | application_name |  client_addr  | client_hostname | client_port |         backend_s
tart         | backend_xmin |   state   |  sent_lsn  | write_lsn  | flush_lsn  | replay_lsn | write_lag | flush_la
g | replay_lag | sync_priority | sync_state |          reply_time           
-------+----------+---------+------------------+---------------+-----------------+-------
 24269 |    16384 | esrep   | node249          | 192.168.7.249 |                 |       38644 | 2021-03-01 14:09:
25.712281+08 |              | streaming | 1/F2036C10 | 1/F2036C10 | 1/F2036C10 | 1/F2036C10 |           |         
  |            |             1 | quorum     | 2021-03-01 14:09:30.237826+08
(1 row)

4. Check the database processes

# Primary
[kingbase@node1 bin]$ ps -ef |grep kingbase

kingbase 23993     1  0 14:09 ?        00:00:00 /home/kingbase/cluster/R6HA/KHA/kingbase/bin/kingbase -D /home/kingbase/cluster/R6HA/KHA/kingbase/data
kingbase 24012 23993  0 14:09 ?        00:00:00 kingbase: logger   
kingbase 24014 23993  0 14:09 ?        00:00:00 kingbase: checkpointer   
kingbase 24015 23993  0 14:09 ?        00:00:00 kingbase: background writer   
kingbase 24016 23993  0 14:09 ?        00:00:00 kingbase: walwriter   
kingbase 24017 23993  0 14:09 ?        00:00:00 kingbase: autovacuum launcher   
kingbase 24018 23993  0 14:09 ?        00:00:00 kingbase: archiver   
kingbase 24019 23993  0 14:09 ?        00:00:00 kingbase: stats collector   
kingbase 24020 23993  0 14:09 ?        00:00:00 kingbase: ksh writer   
kingbase 24021 23993  0 14:09 ?        00:00:00 kingbase: ksh collector   
kingbase 24022 23993  0 14:09 ?        00:00:00 kingbase: sys_kwr collector   
kingbase 24023 23993  0 14:09 ?        00:00:00 kingbase: logical replication launcher   
kingbase 24269 23993  0 14:09 ?        00:00:00 kingbase: walsender esrep 192.168.7.249(38644) streaming 1/F2036CF8
kingbase 24719 23993  0 14:09 ?        00:00:02 kingbase: esrep esrep 192.168.7.248(43596) idle

# Standby
[kingbase@node2 bin]$ ps -ef |grep kingbase

kingbase 23173     1  0 14:08 ?        00:00:00 /home/kingbase/cluster/R6HA/KHA/kingbase/bin/kingbase -D /home/kingbase/cluster/R6HA/KHA/kingbase/data
kingbase 23185 23173  0 14:08 ?        00:00:00 kingbase: logger   
kingbase 23186 23173  0 14:08 ?        00:00:00 kingbase: startup   recovering 0000000500000001000000F2
kingbase 23195 23173  0 14:08 ?        00:00:00 kingbase: checkpointer   
kingbase 23196 23173  0 14:08 ?        00:00:00 kingbase: background writer   
kingbase 23197 23173  0 14:08 ?        00:00:00 kingbase: stats collector   
kingbase 23198 23173  0 14:08 ?        00:00:01 kingbase: walreceiver   streaming 1/F2036CF8
kingbase 23561 23173  0 14:09 ?        00:00:00 kingbase: esrep esrep 192.168.7.249(22306) idle

---As shown above, the standby now has a startup process performing recovery, as expected.

VI. Summary

1. A cluster "dual-primary" (split-brain) fault must be detected and handled promptly by an operator. First identify the cluster's most recent primary node; in production, the business data should be the deciding factor. Then rejoin the other node to the cluster as a standby.
2. An enabled system firewall, blocked network links between the nodes, or a missing standby.signal file in the standby's data directory can all cause the "dual-primary" problem to reappear at cluster startup, so these should be checked before starting the cluster.
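The pre-start checks above can be folded into a small script. A sketch (the firewall and reachability checks assume `ping` and `systemctl`/`firewalld` are the local tooling, and will need adjusting elsewhere):

```shell
# Pre-start checks for a standby node: marker file, peer reachability,
# and an active firewall that might block cluster ports.
preflight() {
  local datadir="$1" peer="$2" rc=0
  if [ ! -f "$datadir/standby.signal" ]; then
    echo "FAIL: standby.signal missing in $datadir"; rc=1
  fi
  if ! ping -c 1 -W 1 "$peer" >/dev/null 2>&1; then
    echo "WARN: peer $peer not reachable by ping"
  fi
  if systemctl is-active firewalld >/dev/null 2>&1; then
    echo "WARN: firewalld is active; ensure cluster ports are open"
  fi
  [ "$rc" -eq 0 ] && echo "preflight OK for $datadir"
  return $rc
}

# Usage example (hypothetical path and peer address):
# preflight /home/kingbase/cluster/R6HA/KHA/kingbase/data 192.168.7.248
```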

posted @ 2021-06-17 20:00 天涯客1224