MySQL 8 and Replication Observability

2023-08-26 18:50  abce

Many veteran MySQL DBAs rely on seconds_behind_source from SHOW REPLICA STATUS to check the state and health of (asynchronous) replication.

However, MySQL replication has evolved considerably, and the replication team has worked to expose a wealth of useful information for all of MySQL's replication modes.

For example, parallel replication and Group Replication were added, and the information related to them is missing from the old SHOW REPLICA STATUS output.

performance_schema provides a much better way to monitor and observe the replication process.

 

Currently, performance_schema contains 15 replication-related tables:

+------------------------------------------------------+
| Tables_in_performance_schema (replication%)          |
+------------------------------------------------------+
| replication_applier_configuration                    |
| replication_applier_filters                          |
| replication_applier_global_filters                   |
| replication_applier_status                           |
| replication_applier_status_by_coordinator            |
| replication_applier_status_by_worker                 |
| replication_asynchronous_connection_failover         |
| replication_asynchronous_connection_failover_managed |
| replication_connection_configuration                 |
| replication_connection_status                        |
| replication_group_communication_information          |
| replication_group_configuration_version              |
| replication_group_member_actions                     |
| replication_group_member_stats                       |
| replication_group_members                            |
+------------------------------------------------------+
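The list above can be reproduced with a simple query against information_schema (a minimal sketch; the exact count may vary across MySQL 8 versions):

```sql
-- List the replication-related tables in performance_schema
SELECT table_name
  FROM information_schema.tables
 WHERE table_schema = 'performance_schema'
   AND table_name LIKE 'replication%'
 ORDER BY table_name;
```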

 

However, understanding what these metrics mean, and where to find the information that matters to us as MySQL DBAs (for example, whether a replica is lagging behind its source), is not trivial.

You can create a set of views in the sys schema that use most of these metrics to surface the relevant information: mysql_8_replication_observability.sql, available at https://gist.github.com/lefred/1bad64403923664a14e0f20f572d7526

Let's take a look at these views.

Replication lag:

select * from sys.replication_lag;
+--------------+-----------------------+------------------------+
| channel_name | max_lag_from_original | max_lag_from_immediate |
+--------------+-----------------------+------------------------+
|              | 0                     | 0                      |
+--------------+-----------------------+------------------------+
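Lag like this can be derived from the transaction timestamps that performance_schema records for the applier. A rough sketch of the idea (not the exact definition used by the view in the gist):

```sql
-- Approximate per-worker lag, in seconds, relative to the original
-- source's commit time and the immediate (relaying) source's commit time.
SELECT channel_name,
       TIMESTAMPDIFF(SECOND,
           LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP,
           LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP)  AS lag_from_original,
       TIMESTAMPDIFF(SECOND,
           LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP,
           LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP)  AS lag_from_immediate
  FROM performance_schema.replication_applier_status_by_worker;
```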


Replication status:

>select * from sys.replication_status;
+---------+----------+----------+---------+-------------------+--------------------+
| channel | io_state | co_state | w_state | lag_from_original | lag_from_immediate |
+---------+----------+----------+---------+-------------------+--------------------+
|  (1)    | ON       | ON       | ON      | none              | none               |
|  (2)    | ON       | ON       | ON      | none              | none               |
|  (3)    | ON       | ON       | ON      | none              | none               |
|  (4)    | ON       | ON       | ON      | none              | none               |
+---------+----------+----------+---------+-------------------+--------------------+
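The three state columns correspond to the replication I/O thread, the coordinator, and the applier workers. A simplified sketch of how such a view can be assembled from performance_schema (the view in the gist is more elaborate, and the coordinator table is only populated when the multi-threaded applier is in use):

```sql
SELECT io.channel_name  AS channel,
       io.service_state AS io_state,  -- receiver (I/O) thread
       co.service_state AS co_state,  -- coordinator thread
       w.service_state  AS w_state    -- applier worker thread(s)
  FROM performance_schema.replication_connection_status io
  JOIN performance_schema.replication_applier_status_by_coordinator co
       USING (channel_name)
  JOIN performance_schema.replication_applier_status_by_worker w
       USING (channel_name);
```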

 

For more detailed replication information:

>select * from sys.replication_status_full;
+---------+---------------+------+-----------+--------------------------------------+------------+----------------------------+--------------------+----------+----------------------------------+----------+-----------+----------------------------+----------+----------------------------------------------------------+----------+-----------+----------------------------+---------+---------------------------------------+---------+----------+----------------------------+-------------------------+--------------------+-------------------+--------------------+----------------+-------------------+------------+-----------------------------------------------+-----------------------------------------------+--------------------------+
| channel | host          | port | user      | source_uuid                          | group_name | last_heartbeat_timestamp   | heartbeat_interval | io_state | io_thread_state                  | io_errno | io_errmsg | io_errtime                 | co_state | co_thread_state                                          | co_errno | co_errmsg | co_errtime                 | w_state | w_thread_state                        | w_errno | w_errmsg | w_errtime                  | time_since_last_message | applier_busy_state | lag_from_original | lag_from_immediate | transport_time | time_to_relay_log | apply_time | last_applied_transaction                      | last_queued_transaction                       | queued_gtid_set_to_apply |
+---------+---------------+------+-----------+--------------------------------------+------------+----------------------------+--------------------+----------+----------------------------------+----------+-----------+----------------------------+----------+----------------------------------------------------------+----------+-----------+----------------------------+---------+---------------------------------------+---------+----------+----------------------------+-------------------------+--------------------+-------------------+--------------------+----------------+-------------------+------------+-----------------------------------------------+-----------------------------------------------+--------------------------+
|  (1)    | 192.168.137.1 | 3306 | repl_test | 125fcb39-eb11-11ed-9aec-235056b0rhj3 |            | 2023-08-18 14:38:20.978058 |             30.000 | ON       | Waiting for source to send event |        0 |           | 0000-00-00 00:00:00.000000 | ON       | Replica has read all relay log; waiting for more updates |        0 |           | 0000-00-00 00:00:00.000000 | ON      | Waiting for an event from Coordinator |       0 |          | 0000-00-00 00:00:00.000000 | 00:00:01.656046         | IDLE               | none              | none               | 79.00 us       | 14.00 us          | 2.50 ms    | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415939 | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415939 |                          |
|  (2)    | 192.168.137.1 | 3306 | repl_test | 125fcb39-eb11-11ed-9aec-235056b0rhj3 |            | 2023-08-18 14:38:20.978058 |             30.000 | ON       | Waiting for source to send event |        0 |           | 0000-00-00 00:00:00.000000 | ON       | Replica has read all relay log; waiting for more updates |        0 |           | 0000-00-00 00:00:00.000000 | ON      | Waiting for an event from Coordinator |       0 |          | 0000-00-00 00:00:00.000000 | 00:00:01.656046         | IDLE               | none              | none               | 79.00 us       | 14.00 us          | 2.63 ms    | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415928 | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415939 |                          |
|  (3)    | 192.168.137.1 | 3306 | repl_test | 125fcb39-eb11-11ed-9aec-235056b0rhj3 |            | 2023-08-18 14:38:20.978058 |             30.000 | ON       | Waiting for source to send event |        0 |           | 0000-00-00 00:00:00.000000 | ON       | Replica has read all relay log; waiting for more updates |        0 |           | 0000-00-00 00:00:00.000000 | ON      | Waiting for an event from Coordinator |       0 |          | 0000-00-00 00:00:00.000000 | 00:00:01.656046         | IDLE               | none              | none               | 79.00 us       | 14.00 us          | 2.63 ms    | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415929 | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415939 |                          |
|  (4)    | 192.168.137.1 | 3306 | repl_test | 125fcb39-eb11-11ed-9aec-235056b0rhj3 |            | 2023-08-18 14:38:20.978058 |             30.000 | ON       | Waiting for source to send event |        0 |           | 0000-00-00 00:00:00.000000 | ON       | Replica has read all relay log; waiting for more updates |        0 |           | 0000-00-00 00:00:00.000000 | ON      | Waiting for an event from Coordinator |       0 |          | 0000-00-00 00:00:00.000000 | 00:00:01.656046         | IDLE               | none              | none               | 79.00 us       | 14.00 us          | 3.24 ms    | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415879 | 125fcb39-eb11-11ed-9aec-235056b0rhj3:82415939 |                          |
+---------+---------------+------+-----------+--------------------------------------+------------+----------------------------+--------------------+----------+----------------------------------+----------+-----------+----------------------------+----------+----------------------------------------------------------+----------+-----------+----------------------------+---------+---------------------------------------+---------+----------+----------------------------+-------------------------+--------------------+-------------------+--------------------+----------------+-------------------+------------+-----------------------------------------------+-----------------------------------------------+--------------------------+
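With this many columns, the output is much easier to read vertically; in the mysql client, terminating the statement with \G prints one column per line instead of a wide table:

```sql
SELECT * FROM sys.replication_status_full\G
```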