KingbaseES V8R6 Cluster O&M Case -- Analyzing an Unexpected Standby Database Service Shutdown

Case description:
In a KingbaseES V8R6 one-primary, two-standby cluster, the sys_log of each of the two standby nodes recorded a 'fast shutdown request' message at different points in time, after which the standby database services went down. This case analyzes why the standby database services were shut down.

Cluster node information:

node12:primary
node10:standby
node11:standby

Cluster parameter configuration:
recovery='automatic'
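The recovery parameter governs whether the cluster agent restores a node it considers failed. In repmgr/kbha-managed KingbaseES clusters this switch is normally set in repmgr.conf; the annotated fragment below is a sketch based on common deployments of this stack -- only the 'automatic' value is taken from this case, the alternative value and comments are assumptions:

```
# repmgr.conf (path and surrounding options are deployment-specific)
# recovery controls how a node deemed failed is handled:
#   'automatic' - the agent rejoins/rewinds the node without operator
#                 intervention (the setting in effect in this case)
#   'manual'    - the node is left down for the operator to recover
recovery='automatic'
```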

I. Log Analysis

1. Cluster log (hamgr.log) analysis

Standby log analysis:
1) At 2023-04-06 09:04:18, node10 detected a connection failure to the database service on its own node.

[2023-04-06 09:04:18] [INFO] attempting to reconnect to node "node10" (ID: 1)
[2023-04-06 09:04:18] [ERROR] connection to database failed
[2023-04-06 09:04:18] [DETAIL] 
FATAL:  the database system is shutting down

2) From 2023-04-06 09:03:17 to 09:47:00, node11 repeatedly detected connection failures to the database service on its own node.

[2023-04-06 09:03:17] [WARNING] unable to ping "host=90.12.144.11 user=esrep dbname=esrep port=54321 connect_timeout=15 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3"
[2023-04-06 09:03:17] [DETAIL] PQping() returned "PQPING_REJECT"
[2023-04-06 09:03:17] [WARNING] connection to node "node11" (ID: 2) lost
.......
[2023-04-06 09:47:00] [WARNING] unable to ping "host=90.12.144.11 user=esrep dbname=esrep port=54321 connect_timeout=15 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3"
[2023-04-06 09:47:00] [DETAIL] PQping() returned "PQPING_REJECT"
[2023-04-06 09:47:00] [WARNING] connection to node "node11" (ID: 2) lost

Primary log analysis:
3) At 2023-04-06 09:06:20, the primary detected that standbys node10 and node11 were in an abnormal state and started recovery on them.

[2023-04-06 09:06:20] [NOTICE] mark node "node11" (ID: 2) as inactive
[2023-04-06 09:06:20] [NOTICE] node (ID: 2; host: "90.12.144.11") is not attached, ready to auto-recovery
[2023-04-06 09:06:20] [NOTICE] Now, the primary host ip: 90.12.144.12
[2023-04-06 09:06:20] [INFO] SSH connection to host "90.12.144.11" succeeded, ready to do auto-recovery
[2023-04-06 09:06:20] [ERROR] the database is not shutdown clearly, can not do node rejoin
[2023-04-06 09:06:20] [NOTICE] node "node11" (ID: 2) auto-recovery success

[2023-04-06 09:06:20] [NOTICE] node (ID: 1; host: "90.12.144.10") is not attached, ready to auto-recovery
[2023-04-06 09:06:20] [NOTICE] Now, the primary host ip: 90.12.144.12
[2023-04-06 09:06:21] [INFO] SSH connection to host "90.12.144.10" succeeded, ready to do auto-recovery
[2023-04-06 09:06:21] [ERROR] the database is not shutdown clearly, can not do node rejoin
[2023-04-06 09:07:27] [INFO] SSH connection to host "90.12.144.10" succeeded, ready to do auto-recovery
[2023-04-06 09:07:28] [NOTICE] executing repmgr command "/home/kingbase/cluster/DZGW/oaxt/kingbase/bin/repmgr --dbname="host=90.12.144.12 dbname=esrep user=esrep port=54321" node rejoin --force-rewind"
INFO: the file "db_rewind.status" exists, reset the data at first
NOTICE: executing sys_rewind to reset the data
DETAIL: sys_rewind command is "/home/kingbase/cluster/DZGW/oaxt/kingbase/bin/sys_rewind --reset -D /home/kingbase/cluster/DZGW/oaxt/kingbase/data"
sys_rewind: start to reset the data ...
sys_rewind: status check: restore all the backup temp file, Done!
sys_rewind: finish to reset the data, Done!
......

4) At 2023-04-06 09:46:47, the primary again detected that standby node11 was in an abnormal state and started recovery on node11.

[2023-04-06 09:46:47] [NOTICE] check node status again, try 6 / 6 times
[2023-04-06 09:46:57] [INFO] child node: 2; attached: no
[2023-04-06 09:46:57] [NOTICE] node (ID: 2; host: "90.12.144.11") is not attached, ready to auto-recovery
[2023-04-06 09:46:58] [NOTICE] Now, the primary host ip: 90.12.144.12
[2023-04-06 09:46:58] [INFO] SSH connection to host "90.12.144.11" succeeded, ready to do auto-recovery
[2023-04-06 09:46:58] [NOTICE] kbha: node (ID: 2) is running as standby, stop it and do rejoin.

sys_ctl: server does not shut down
[2023-04-06 09:47:58] [ERROR] kbha: stop the standby node (ID: 2) failed, can not do rejoin.

2. Database sys_log analysis

Standby log analysis:

1) At 2023-04-06 09:04:05, standby node10 logged a large number of slow queries (longer than 200 s).
2023-04-06 09:04:05 CST [780483]: [31-1] user=oaxt,db=oasj,client=90.12.144.6LOG:  duration: 211785.707 ms  execute S_23: select count(1) as total, db.lcswsx_dm, nvl(db.bllx_dm, '01') bllx_Dm    FROM.........
2023-04-06 09:04:09 CST [784573]: [3-1] user=oaxt,db=oasj,client=90.12.144.6LOG:  duration: 215442.772 ms  execute <unnamed>: select count(1) as total, db.lcswsx_dm, nvl(db.bllx_dm, '01') .......
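Slow-query entries like the ones above can be extracted from sys_log mechanically. A minimal sketch, assuming the "duration: NNN ms" log format shown in the excerpts; the 200,000 ms threshold matches the 200 s bound used in this case:

```python
import re

# Matches the "duration: NNN ms" field that sys_log prints for
# completed statements when slow-query logging is enabled.
DURATION_RE = re.compile(r"duration: ([\d.]+) ms")

def slow_queries(lines, threshold_ms=200_000):
    """Yield (duration_ms, line) for log lines at or above threshold_ms."""
    for line in lines:
        m = DURATION_RE.search(line)
        if m and float(m.group(1)) >= threshold_ms:
            yield float(m.group(1)), line

# Abbreviated sample lines in the format seen in this case.
log = [
    "2023-04-06 09:04:05 CST [780483]: user=oaxt,db=oasj LOG:  duration: 211785.707 ms  execute S_23: select ...",
    "2023-04-06 09:04:09 CST [784573]: user=oaxt,db=oasj LOG:  duration: 1542.772 ms  execute <unnamed>: select ...",
]
hits = list(slow_queries(log))
print(len(hits), hits[0][0])  # 1 211785.707
```

Running this over the full sys_log of each node gives a quick count of how many statements crossed the threshold in the minutes before the shutdown.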

2) At 2023-04-06 09:04:17, standby node10 received a 'fast shutdown request'.

2023-04-06 09:04:17 CST [459987]: [38-1] user=,db=,client=LOG:  received fast shutdown request
2023-04-06 09:04:17 CST [459987]: [39-1] user=,db=,client=LOG:  aborting any active transactions
2023-04-06 09:04:17 CST [786405]: [1-1] user=oaxt,db=oasj,client=90.12.144.8FATAL:  terminating connection due to administrator command

3) At 2023-04-06 09:00:21, standby node11 logged communication errors between client and server.

2023-04-06 09:00:21 CST [811951]: [39-1] user=oaxt,db=oasj,client=90.12.144.6LOG:  could not send data to client: Broken pipe
2023-04-06 09:00:21 CST [811951]: [40-1] user=oaxt,db=oasj,client=90.12.144.6FATAL:  connection to client lost
2023-04-06 09:01:15 CST [812713]: [41-1] user=oaxt,db=oasj,client=90.12.144.8LOG:  could not send data to client: Broken pipe
2023-04-06 09:01:15 CST [812713]: [42-1] user=oaxt,db=oasj,client=90.12.144.8FATAL:  connection to client lost

4) At 2023-04-06 09:03:16, standby node11 received a 'fast shutdown request'.

2023-04-06 09:03:16 CST [871171]: [26-1] user=,db=,client=LOG:  received fast shutdown request
2023-04-06 09:03:16 CST [871171]: [27-1] user=,db=,client=LOG:  aborting any active transactions

Primary log analysis:
5) At 2023-04-06 09:02:18, the sys_log of primary node12 shows the walsender process for node10 being terminated due to replication timeout, and lock waits appearing in the database.

2023-04-06 09:02:18 CST [1720372]: [2-1] user=esrep,db=[unknown],client=90.12.144.10LOG:  terminating walsender process due to replication timeout
2023-04-06 09:02:18 CST [1358211]: [16-1] user=oaxt,db=oasj,client=90.12.144.7LOG:  process 1358211 still waiting for ShareLock on transaction 144774511 after 1000.188 ms
2023-04-06 09:02:18 CST [1358211]: [17-1] user=oaxt,db=oasj,client=90.12.144.7DETAIL:  Process holding the lock: 1357661. Wait queue: 1358211.
2023-04-06 09:02:18 CST [1358211]: [18-1] user=oaxt,db=oasj,client=90.12.144.7CONTEXT:  while updating tuple (26322535,16) in relation "zh_dbgz"

6) At 2023-04-06 09:00:16, the sys_log of primary node12 shows the walsender process for node11 being terminated due to replication timeout.

2023-04-06 09:00:16 CST [1677714]: [2-1] user=esrep,db=[unknown],client=90.12.144.11LOG: terminating walsender process due to replication timeout

7) At 2023-04-06 09:04:07, the sys_log of primary node12 shows a large number of slow queries (longer than 200 s).

2023-04-06 09:04:07 CST [1358473]: [2-1] user=oaxt,db=oasj,client=90.12.144.8LOG:  duration: 218645.108 ms  execute <unnamed>: select count(1) as total, db.lcswsx_dm, nvl(db.bllx_dm, '01') 
........
2023-04-06 09:04:17 CST [1357789]: [5-1] user=oaxt,db=oasj,client=90.12.144.6LOG:  duration: 153150.784 ms  execute <unnamed>: select count(1) as total, db.jsgw_dm, nvl(db.bllx_dm, '01') .
.......
2023-04-06 09:04:34 CST [1358433]: [15-1] user=oaxt,db=oasj,client=90.12.144.9LOG:  duration: 41054.597 ms  execute <unnamed>: select          TT.numcount   ,          TT.pagexh     ,          
......
2023-04-06 09:04:35 CST [1358193]: [2-1] user=oaxt,db=oasj,client=90.12.144.6LOG:  duration: 248129.891 ms  execute <unnamed>: select          TT.numcount   ,          TT.pagexh     ,          
......

II. Fault Analysis Summary

1. 2023-04-06 09:04:18 -- standby node10 detected a connection failure to its local database service.
2. 2023-04-06 09:04:05 -- standby node10 logged a large number of slow queries.
3. 2023-04-06 09:04:17 -- standby node10 received a fast shutdown request.
4. 2023-04-06 09:00:21 -- standby node11 logged client/server communication errors.
5. 2023-04-06 09:03:16 -- standby node11 received a fast shutdown request.
6. 2023-04-06 09:06:20 -- the primary, finding node10 and node11 abnormal via a sys_stat_replication query, ran recovery on both standbys.
7. 2023-04-06 09:46:47 -- the primary again found standby node11 abnormal and ran recovery on node11.
8. 2023-04-06 09:02:18 -- the primary's sys_log shows the walsender process for node10 being terminated due to replication timeout.
9. 2023-04-06 09:00:16 -- the primary's sys_log shows the walsender process for node11 being terminated due to replication timeout.
10. 2023-04-06 09:04:07 -- the primary logged a large number of slow queries and lock waits.

From the analysis above:
1) The primary could not see the standby nodes in sys_stat_replication. Because the recovery parameter is set to 'automatic', the primary ran the recovery procedure against the standbys, which shut down and restarted the standby database services.
2) The reason the primary could not obtain the standbys' status, as the sys_log indicates, is that lock waits and a large number of slow queries on both the primary and the standbys delayed the status traffic between them until it timed out; the walsender processes were terminated and the streaming replication state became abnormal, so the primary could no longer retrieve standby node information.
3) Recommendation: optimize the business workload and the slow SQL.

III. Summary
When analyzing a cluster fault, start from the time points of the failure and examine the cluster, database, and system logs together; by comparing entries from the same time window across all logs, the root cause of the failure can be reconstructed.
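The cross-log correlation described above can be sketched as a simple timeline merge. The event tuples below are abbreviated from the timestamps in this case; each entry is tagged with its source log so that one sorted list exposes the causal order (walsender timeouts, then standby shutdowns, then auto-recovery):

```python
from datetime import datetime

def merge_timeline(*sources):
    """Merge (timestamp, source, event) tuples from several logs into one
    chronologically sorted timeline."""
    events = [e for src in sources for e in src]
    return sorted(events,
                  key=lambda e: datetime.strptime(e[0], "%Y-%m-%d %H:%M:%S"))

primary_syslog = [
    ("2023-04-06 09:00:16", "node12 sys_log", "walsender to node11 terminated (replication timeout)"),
    ("2023-04-06 09:02:18", "node12 sys_log", "walsender to node10 terminated (replication timeout)"),
]
standby_syslog = [
    ("2023-04-06 09:03:16", "node11 sys_log", "received fast shutdown request"),
    ("2023-04-06 09:04:17", "node10 sys_log", "received fast shutdown request"),
]
hamgr = [
    ("2023-04-06 09:06:20", "node12 hamgr.log", "node10/node11 not attached, auto-recovery"),
]

for ts, src, ev in merge_timeline(primary_syslog, standby_syslog, hamgr):
    print(ts, src, ev)
```

The merged output reads top to bottom as the failure sequence, which is exactly the comparison performed by hand in Section II.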

posted @ 2023-09-20 14:11  KINGBASE研究院