MySQL MGR (2): Basic Setup and Usage
【0】MGR requirements and limitations in depth
(0.1) Group Replication requirements
For the full list of requirements, see the official manual: https://dev.mysql.com/doc/refman/8.0/en/group-replication-requirements.html
Basic requirements:
- InnoDB engine only (transactional, with row-level locking)
- Every table must have a primary key
- Good network performance is required
Server requirements:
- A unique server identifier: server_id=1 (must be different on every node)
- Binary logging enabled: log_bin=binlog
- Replica updates written to the binlog: log_replica_updates=ON (log_slave_updates before 8.0.26)
- Row-based binlog format: binlog_format=ROW
- Binlog checksums disabled: binlog_checksum=NONE
- GTIDs enabled: gtid_mode=ON and enforce_gtid_consistency=ON
- Replication metadata stored in tables: master_info_repository=TABLE and relay_log_info_repository=TABLE
- Transaction write set extraction: transaction_write_set_extraction=XXHASH64
- Default table encryption: default_table_encryption is disabled by default and does not need to be enabled; it only has to be identical on all nodes
- Lowercase table names: lower_case_table_names=1 is recommended, but what really matters is that all nodes use the same value
- Binlog dependency tracking: binlog_transaction_dependency_tracking=WRITESET_SESSION is said to improve performance, but in practice WRITESET or the default COMMIT_ORDER works fine
- Multi-threaded applier: several related options; see the "parallel replication" block below
- Detached XA transactions: xa_detach_on_prepare=ON is the default and needs no attention
# general
server_id=1
lower_case_table_names=1
# gtid
gtid_mode=ON
enforce_gtid_consistency=ON
skip_name_resolve=on
# binlog
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
binlog_checksum=NONE        # default CRC32
master_info_repository=TABLE
relay_log_info_repository=TABLE
# parallel replication
replica_parallel_type=LOGICAL_CLOCK       # 8.0.26 before: slave_parallel_type=LOGICAL_CLOCK
replica_preserve_commit_order=ON          # 8.0.26 before: slave_preserve_commit_order=1
replica_parallel_workers=1                # 8.0.26 before: slave_parallel_workers=1, default 4
binlog_transaction_dependency_tracking=WRITESET_SESSION   # default COMMIT_ORDER
# MGR
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
transaction_write_set_extraction=XXHASH64
group_replication_consistency=BEFORE_ON_PRIMARY_FAILOVER
(0.2) Group Replication limitations
Some limitations to keep in mind when using MGR:
- All tables must use the InnoDB engine. You can create tables with other engines, but you cannot write to them, and they make building a new node with Clone fail.
- Every table must have a primary key. As above, a table without a primary key can be created but not written to, and it also makes Clone-based provisioning fail.
- Avoid large transactions. By default, a transaction larger than about 143 MB is rejected; the hard upper limit is 2 GB. (A future GreatSQL version plans to add better large-transaction support and raise the limit.)
- When upgrading from an older version, do not use the MINIMAL upgrade mode; use upgrade=AUTO instead.
- Because MGR's certification threads do not support gap locks, it is recommended to change the transaction isolation level to READ COMMITTED on all nodes (a small sketch follows this list). For the same reason, avoid table locks and name locks (the GET_LOCK() function) in an MGR cluster.
- The SERIALIZABLE isolation level is not supported in multi-primary mode.
- Do not run DML and DDL against the same table on different MGR nodes at the same time; it can lose data or make a node error out and leave the group.
- Multi-level cascading foreign keys are not supported in multi-primary mode. To avoid foreign-key-related MGR errors, it is recommended to set group_replication_enforce_update_everywhere_checks=ON.
- In multi-primary mode, running SELECT ... FOR UPDATE on the same rows on several nodes and then committing can deadlock.
- Replication filters are not supported.
- A group can contain at most 9 members.
The list looks long, but most of these limitations do not affect normal workloads.
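As mentioned in the gap-lock item above, here is a minimal sketch of switching a node to READ COMMITTED; the values are nothing special, just run it on every node and persist the setting in my.cnf (transaction_isolation = READ-COMMITTED) so it survives restarts:
-- affects new connections only; existing sessions keep their old isolation level
SET GLOBAL transaction_isolation = 'READ-COMMITTED';
SHOW GLOBAL VARIABLES LIKE 'transaction_isolation';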
In addition, there are a few more prerequisites for enabling MGR:
- Every node must have the binlog enabled.
- Every node must write replicated transactions to its own binlog, i.e. log_slave_updates=1.
- The binlog format must be ROW, i.e. binlog_format=ROW.
- server_id and server_uuid must be different on every node.
- In 8.0.20 and earlier, binlog_checksum=NONE is required; from 8.0.21 on, binlog_checksum=CRC32 (the default) can be kept.
- GTIDs must be enabled: gtid_mode=ON.
- master_info_repository=TABLE and relay_log_info_repository=TABLE are required; from MySQL 8.0.23 these are the defaults, so they no longer need to be set explicitly.
- lower_case_table_names must be set to the same value on all nodes.
- Deploy MGR inside a LAN rather than across the public internet; with high network latency MGR performs poorly and errors out easily.
- Enable writeset-based parallel replication, i.e. the following settings (applied automatically from 8.0.23 on):
  slave_parallel_type = LOGICAL_CLOCK
  slave_parallel_workers = N (N > 0; twice the number of logical CPUs is a common choice; the default is 4, watch out for applier deadlocks)
  binlog_transaction_dependency_tracking = WRITESET
  slave_preserve_commit_order = 1
  slave_checkpoint_period = 2
(0.3) Message size limits in group communication
If a single transaction produces a message so large that it cannot be copied between group members over the network within the 5-second detection window, members may be suspected of having failed and then expelled, simply because they are busy processing the transaction.
Large transactions can also slow the system down because of memory allocation issues. To avoid these problems, use the following mitigations:
《1》The system variable group_replication_member_expel_timeout gives a suspected member extra time before it is expelled.
After the initial 5-second detection period, you can allow up to an hour before a suspect is expelled from the group. From MySQL 8.0.21, an extra 5 seconds is allowed by default.
《2》The system variable group_replication_transaction_size_limit specifies the largest transaction the group accepts.
In MySQL 8.0 it defaults to 150000000 bytes (about 143 MB).
Transactions above this size are rolled back and are never handed to Group Replication's Group Communication System (GCS) for distribution to the group. Size this value to the largest message you need the group to tolerate, keeping in mind that the time needed to process a transaction is proportional to its size.
《3》The system variable group_replication_compression_threshold specifies the message size above which compression is applied.
It defaults to 1000000 bytes (1 MB), so large messages are compressed automatically.
Compression is performed by Group Replication's Group Communication System (GCS) on messages that the group_replication_transaction_size_limit setting allows but that exceed the compression threshold.
《4》The system variable group_replication_communication_max_message_size specifies the message size above which messages are fragmented.
It defaults to 10485760 bytes (10 MiB), so large messages are fragmented automatically.
If a message still exceeds the group_replication_communication_max_message_size limit after compression, GCS fragments it after compressing it.
For a replication group to use fragmentation, all group members must run MySQL 8.0.16 or later and the communication protocol version in use by the group must allow fragmentation.
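A minimal sketch of adjusting these thresholds at runtime; the numbers below are purely illustrative, and whatever values you choose should be identical on every member:
-- give suspects 30 extra seconds before expulsion, cap transactions at ~20 MB,
-- compress anything above 1 MB (illustrative values, not recommendations)
SET GLOBAL group_replication_member_expel_timeout = 30;
SET GLOBAL group_replication_transaction_size_limit = 20971520;
SET GLOBAL group_replication_compression_threshold = 1048576;
-- group_replication_communication_max_message_size is usually left at its 10 MiB default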
(0.4) Summary: required MGR configuration
《1》The following are the multi-threaded replication parameters. They are enabled by default in 8.0.23 and later.
slave_preserve_commit_order=1
slave_parallel_type=LOGICAL_CLOCK
slave_parallel_workers=4
binlog_transaction_dependency_tracking=WRITESET
《2》Required MGR parameters
# general
server_id=1                 # must be different on every node
lower_case_table_names=1
# gtid
gtid_mode=ON
enforce_gtid_consistency=ON
skip_name_resolve=on
# binlog
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
binlog_checksum=NONE        # default CRC32
master_info_repository=TABLE
relay_log_info_repository=TABLE
# parallel replication
replica_parallel_type=LOGICAL_CLOCK       # 8.0.26 before: slave_parallel_type=LOGICAL_CLOCK
replica_preserve_commit_order=ON          # 8.0.26 before: slave_preserve_commit_order=1
replica_parallel_workers=1                # 8.0.26 before: slave_parallel_workers=1, default 4
binlog_transaction_dependency_tracking=WRITESET_SESSION   # default COMMIT_ORDER
# MGR
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
transaction_write_set_extraction=XXHASH64
plugin_load_add='group_replication.so'
group_replication_consistency=BEFORE_ON_PRIMARY_FAILOVER
group_replication_enforce_update_everywhere_checks = OFF
group_replication_single_primary_mode = on
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.191.25:33061"   # change to this node's own IP
group_replication_group_seeds= "192.168.191.25:33061,192.168.191.44:33061,192.168.191.51:33061"
group_replication_bootstrap_group=off
(0.5) Detailed parameter descriptions
binlog_checksum = NONE : MGR does not support binlog events that carry a checksum
transaction_write_set_extraction = XXHASH64   # Group Replication requires every table to have a primary key, which is used for conflict detection; all members must use the same hash algorithm
group_replication_group_name = 'ee99091a-e1e6-11ea-90d7-fa163ec61b6c'   # a UUID, identical on all members and unique to the group, used to tag the binlog events produced by the group
# the loose_ prefix lets mysqld ignore these options and keep starting if the Group Replication plugin has not been loaded yet on first startup
loose_group_replication_start_on_boot = off   # do not start Group Replication automatically when mysqld starts, so a restarted node does not rejoin the group on its own
loose_group_replication_local_address = '192.168.191.198:33061'   # this node's group-communication address and port (required)
loose_group_replication_ip_whitelist = '192.0.0.0/8'   # allowlist of addresses permitted to join
loose_group_replication_group_seeds = '192.168.191.198:33061,192.168.191.199:33061,192.168.191.201:33061'   # seed members; a joining node tries them in order starting from the first, and any one reachable seed is enough to join the group
loose_group_replication_bootstrap_group = off   # whether this node bootstraps the group; only one node ever turns it on (it acts as the initial primary), so keep it off in every config file
group_replication_flow_control_mode=disabled   # flow-control mode; changing this variable takes effect without restarting or resetting Group Replication
group_replication_member_weight=70   # election weight (percentage); when the primary leaves the group, members with a higher weight are preferred as the new primary
group_replication_autorejoin_tries=0
This variable sets how many times a member automatically tries to rejoin the group after it has been expelled, or after it has been unable to contact a majority of the group for longer than group_replication_unreachable_majority_timeout.
The default of 0 means the member does not try to rejoin automatically and instead proceeds to the exit action specified by group_replication_exit_state_action.
With a non-zero value, when the member is expelled or loses contact with the majority for longer than group_replication_unreachable_majority_timeout,
it attempts to rejoin the group up to the configured number of times, waiting 5 minutes between attempts. Throughout the retries the member stays in super-read-only mode and shows ERROR in its view of the group. At any point you can stop group replication with STOP GROUP_REPLICATION or shut the server down. If the member still has not rejoined after the last attempt, it proceeds to the exit action specified by group_replication_exit_state_action.
group_replication_unreachable_majority_timeout=5
Set to 0, a member that ends up in a minority partition waits forever, i.e. the situation persists with the unreachable members shown as UNREACHABLE.
Set to a non-zero value, once the timeout expires the transactions blocked on the minority member are rolled back and the member goes to ERROR state; if group_replication_autorejoin_tries is non-zero, the member then attempts, in super-read-only mode, to rejoin the group the configured number of times, and after exhausting the attempts it performs the action specified by group_replication_exit_state_action.
group_replication_member_expel_timeout=1
The number of seconds a group waits, after a suspicion has been raised against a member, before expelling the suspect. The initial 5-second detection period that precedes the suspicion does not count towards this time.
Changing group_replication_member_expel_timeout on a member takes effect immediately. Group Replication does not force all members to use the same value, but it is recommended, to avoid unexpected expulsions.
By default the variable was 0 (in older releases), which means there is no waiting period and a suspect can be expelled as soon as the 5-second detection window ends. To avoid unnecessary expulsions on slow networks, during expected transient network failures, or when a server is expected to be temporarily slow, set a value greater than 0 to give suspects a grace period. If the suspect becomes active again (re-establishes group communication) before the suspicion times out, it rejoins the group, applies all the messages buffered by the other members to catch up, and then goes ONLINE and serves traffic again.
If the suspicion does time out, the suspect is expelled immediately. If it later regains communication, it receives a view showing it has been expelled. At that point, if group_replication_autorejoin_tries is set to a non-zero value on that member, it tries to rejoin automatically, in super-read-only mode, the configured number of times; if auto-rejoin is disabled (0) or the attempts are exhausted, it performs the action specified by group_replication_exit_state_action.
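A small sketch of checking and tuning the three timeout/retry variables described above (the values are illustrative):
SELECT @@GLOBAL.group_replication_member_expel_timeout,
       @@GLOBAL.group_replication_unreachable_majority_timeout,
       @@GLOBAL.group_replication_autorejoin_tries;
SET GLOBAL group_replication_member_expel_timeout = 10;   -- wait 10s beyond the 5s detection period
SET GLOBAL group_replication_autorejoin_tries = 3;        -- try to rejoin 3 times, 5 minutes apart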
(0.6) Transaction consistency: group_replication_consistency
Values:
EVENTUAL
Neither RO nor RW transactions wait for preceding transactions (those still in the relay log) to be applied before executing.
This also means that after a primary failover the new primary can accept new RO and RW transactions before all transactions from the old primary have been applied. RO transactions may therefore read stale values, and RW transactions may be rolled back because of conflicts.
BEFORE_ON_PRIMARY_FAILOVER
A newly elected primary blocks both read-only and read-write transactions until it has applied everything received from the old primary (i.e. drained its relay log);
this guarantees that data read from the primary is always up to date. Note, however, that if a large relay-log backlog has accumulated, the new primary is effectively unavailable while it catches up on the old primary's transactions.
BEFORE
RW and RO transactions wait for all preceding transactions to be applied before they execute. Includes the guarantees of BEFORE_ON_PRIMARY_FAILOVER.
AFTER
An RW transaction waits until its changes have been applied on all other members. This value has no effect on RO transactions. Includes the guarantees of BEFORE_ON_PRIMARY_FAILOVER.
An application can use this to make sure that subsequent reads, on any member, see the latest data including its most recent write.
BEFORE_AND_AFTER
An RW transaction waits
1) for all preceding transactions to be applied before it executes, and
2) until its own changes have been applied on the other members.
An RO transaction waits for all preceding transactions to be applied before executing. Includes the guarantees of BEFORE_ON_PRIMARY_FAILOVER.
Use cases, in one line:
need "current reads" (always the latest data) → BEFORE; need RW transactions applied everywhere before commit returns → AFTER.
- Scenario 1: you want to load-balance reads without worrying about stale reads, and your group performs far fewer writes than reads. In this case choose AFTER.
- Scenario 2: a write-heavy workload with only occasional reads, but those reads must not see stale data. In this case choose BEFORE.
- Scenario 3: specific transactions in your workload must always read the latest data from the group, for example so that whenever sensitive data such as credentials is updated, subsequent reads are forced to see the newest value. In this case choose BEFORE.
- Scenario 4: a mostly read-only group where you want read-write (RW) transactions to be applied everywhere once they commit, so that later reads include your latest writes, without paying a synchronization cost on every RO transaction, only on RW ones. In this case choose AFTER.
- Scenario 5: a mostly read-only group where you want read-write (RW) transactions to always read the latest data and to be applied everywhere once they commit, so that later reads include your latest writes, again paying the synchronization cost only on RW transactions. In this case choose BEFORE_AND_AFTER.
- Scenario 6: temporarily raising the consistency level for a specific transaction: SET @@SESSION.group_replication_consistency= 'BEFORE';
A transaction cannot be held forever: if the hold lasts longer than wait_timeout, an ER_GR_HOLD_WAIT_TIMEOUT error is returned.
From 8.0.27 on, whatever consistency level is configured, statements that do not modify data (for example SHOW, SET, USE and the like) are still allowed to run.
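For example, a session-level sketch of scenarios 3/6: force one critical read to see the very latest data, then drop back to the relaxed default (t1 here is just an example table):
SET @@SESSION.group_replication_consistency = 'BEFORE';
SELECT * FROM test.t1 WHERE c1 = 1;   -- guaranteed to see all transactions that preceded it
SET @@SESSION.group_replication_consistency = 'EVENTUAL';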
【1】MGR single-primary setup
192.168.191.25 db1 M (primary)
192.168.191.44 db2 S (secondary)
192.168.191.51 db3 S (secondary)
(1.1) Prerequisite: /etc/hosts
cat <<EOF>>/etc/hosts
192.168.191.25 db1
192.168.191.44 db2
192.168.191.51 db3
EOF
(1.2) Configuration file
Required settings:
# general
server_id=1                 # must be different on every node
lower_case_table_names=1
# gtid
gtid_mode=ON
enforce_gtid_consistency=ON
skip_name_resolve=on
# binlog
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
binlog_checksum=NONE        # default CRC32
master_info_repository=TABLE
relay_log_info_repository=TABLE
# parallel replication
replica_parallel_type=LOGICAL_CLOCK       # 8.0.26 before: slave_parallel_type=LOGICAL_CLOCK
replica_preserve_commit_order=ON          # 8.0.26 before: slave_preserve_commit_order=1
replica_parallel_workers=1                # 8.0.26 before: slave_parallel_workers=1, default 4
binlog_transaction_dependency_tracking=COMMIT_ORDER   # default COMMIT_ORDER, WRITESET recommended
# MGR
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
transaction_write_set_extraction=XXHASH64
plugin_load_add='group_replication.so'
group_replication_enforce_update_everywhere_checks = OFF
group_replication_single_primary_mode = on
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.191.25:33061"   # change to this node's own IP
group_replication_group_seeds= "192.168.191.25:33061,192.168.191.44:33061,192.168.191.51:33061"
group_replication_bootstrap_group=off
My full configuration file, the same on every node (values that must differ per node are noted):
[mysql]
password=bfgame20
prompt="\u@mysqldb \R:\m:\s [\d]> "
no-auto-rehash
socket = /data/mysql/mysql.sock

[client]
password=bfgame20
port = 3306
socket = /data/mysql/mysql.sock
user=root

[mysqld]
user=mysql
basedir = /usr/local/mysql
datadir = /data/mysql
socket = /data/mysql/mysql.sock
pid-file = /data/mysql/mysql.pid
tmpdir = /data/mysql
slow_query_log_file = /data/mysql_log/slowlog/slow.log
log_error = /data/mysql_log/errorlog/error.log
log_bin = /data/mysql_log/binlog/mysql-bin
relay-log = /data/mysql_log/relaylog/relay-bin
port = 3306
server_id=1652430883        # must be different on every node
character_set_server = utf8mb4
skip_name_resolve = 1
max_connections = 4096
max_connect_errors = 100000
max_allowed_packet = 128M
tmp_table_size = 128M
sort_buffer_size=4M
slow_query_log = 1
long_query_time = 1
lock_wait_timeout=36000
secure-file-priv=''
default_authentication_plugin=mysql_native_password
log_timestamps=system
group_concat_max_len=65535
lower_case_table_names=1

#replication
gtid_mode = on
enforce_gtid_consistency = on
log_slave_updates
sync_binlog = 0
max_binlog_size = 1G
#binlog_expire_logs_seconds = 864000
binlog_format = row
master_info_repository = TABLE
relay_log_info_repository = TABLE

#innodb
transaction_isolation = READ-COMMITTED
innodb_flush_method=O_DIRECT
innodb_buffer_pool_size = m
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 8M
innodb_log_file_size = 1G
innodb_flush_neighbors = 0
innodb_thread_concurrency = 32
innodb_io_capacity = 10000
innodb_io_capacity_max = 20000
innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_at_shutdown = 1
innodb_rollback_segments = 128
innodb_undo_tablespaces = 3
innodb_undo_log_truncate = 1
innodb_max_undo_log_size = 1G

#performance-schema
performance-schema-instrument='memory/%=COUNTED'
performance_schema_digests_size = 40000
performance_schema_max_table_handles = 40000
performance_schema_max_table_instances = 40000
performance_schema_max_sql_text_length = 4096
performance_schema_max_digest_length = 4096

#table cache performance settings
table_open_cache = 6000
table_definition_cache = 6000
table_open_cache_instances = 36

transaction_write_set_extraction=XXHASH64
group_replication_enforce_update_everywhere_checks = OFF
group_replication_single_primary_mode = on
skip_name_resolve=on
plugin_load_add='group_replication.so'
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "192.168.191.51:33061"   # change to this node's own IP
group_replication_group_seeds= "192.168.191.25:33061,192.168.191.44:33061,192.168.191.51:33061"
group_replication_bootstrap_group=off
(1.3) Create the replication account
-- (1) Create the replication user (run on all three nodes)
SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'192.168.191.%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE,CONNECTION_ADMIN,BACKUP_ADMIN,GROUP_REPLICATION_STREAM ON *.* TO rpl_user@'192.168.191.%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;

-- (2) Configure the recovery channel (run on all three nodes)
CHANGE REPLICATION SOURCE TO SOURCE_USER='rpl_user', SOURCE_PASSWORD='password' FOR CHANNEL "group_replication_recovery";
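To double-check the account on each node, something like this should work (a verification sketch, not part of the official procedure):
SHOW GRANTS FOR rpl_user@'192.168.191.%';
SELECT user, host, plugin FROM mysql.user WHERE user = 'rpl_user';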
Questions:
1. Why is master_auto_position=1 no longer needed?
2. Why is master_host no longer needed?
3. How does FOR CHANNEL 'group_replication_recovery' work? group_replication_recovery is a regular asynchronous replication channel.
My understanding is that this channel name is built into MySQL for Group Replication: during distributed recovery the plugin picks a donor from the group and fills in the host and (GTID auto-)position itself, which is why only the credentials need to be supplied here.
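One way to convince yourself the channel really exists is to look at its stored configuration; a small sketch (standard Performance Schema table, column list trimmed):
SELECT channel_name, host, port, user, auto_position
  FROM performance_schema.replication_connection_configuration
 WHERE channel_name = 'group_replication_recovery';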
(1.4) Bootstrap Group Replication on the primary
Run the following on db1, or whichever node is to act as the primary:
-- on the primary: db1 192.168.191.25
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION USER='rpl_user', PASSWORD='password';
SET GLOBAL group_replication_bootstrap_group=OFF;
SELECT * FROM performance_schema.replication_group_members;
root@mysqldb 17:11: [test]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | b1336aa3-c13a-11ec-9cab-fa163e6ae43e | db1         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
1 row in set (0.00 sec)
Let's unpack this step:
SET GLOBAL group_replication_bootstrap_group=ON;
SET GLOBAL group_replication_bootstrap_group=OFF;
Why turn the variable on and then straight back off? It was baffling the first time I saw it. It controls bootstrapping the group: the node that runs START GROUP_REPLICATION while this is ON creates the group and is, in effect, the initial primary of the whole cluster.
Do this on one node only, and switch it back OFF immediately afterwards. What happens if you leave it ON? The next time this server starts Group Replication it would bootstrap a brand-new group with the same name instead of joining the existing one.
Create some test data:
-- run on the primary: test data
CREATE DATABASE IF NOT EXISTS test;
USE test;
CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL);
INSERT INTO t1 VALUES (1, 'Luis');
SELECT * FROM t1;
show binlog events;

root@mysqldb 17:11: [test]> show binlog events;
+------------------+------+----------------+------------+-------------+---------------------------------------------------------------------------------+
| Log_name         | Pos  | Event_type     | Server_id  | End_log_pos | Info                                                                              |
+------------------+------+----------------+------------+-------------+---------------------------------------------------------------------------------+
| mysql-bin.000001 |    4 | Format_desc    | 1650521822 |         126 | Server ver: 8.0.28, Binlog ver: 4                                                 |
| mysql-bin.000001 |  126 | Previous_gtids | 1650521822 |         157 |                                                                                   |
| mysql-bin.000001 |  157 | Gtid           | 1650521822 |         243 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1'                 |
| mysql-bin.000001 |  243 | Query          | 1650521822 |         309 | BEGIN                                                                             |
| mysql-bin.000001 |  309 | View_change    | 1650521822 |         412 | view_id=16526922299317057:1                                                       |
| mysql-bin.000001 |  412 | Query          | 1650521822 |         484 | COMMIT                                                                            |
| mysql-bin.000001 |  484 | Gtid           | 1650521822 |         568 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:2'                 |
| mysql-bin.000001 |  568 | Query          | 1650521822 |         690 | CREATE DATABASE IF NOT EXISTS test /* xid=26 */                                   |
| mysql-bin.000001 |  690 | Gtid           | 1650521822 |         774 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:3'                 |
| mysql-bin.000001 |  774 | Query          | 1650521822 |         916 | use `test`; CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL) /* xid=29 */   |
| mysql-bin.000001 |  916 | Gtid           | 1650521822 |        1002 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:4'                 |
| mysql-bin.000001 | 1002 | Query          | 1650521822 |        1077 | BEGIN                                                                             |
| mysql-bin.000001 | 1077 | Table_map      | 1650521822 |        1132 | table_id: 116 (test.t1)                                                           |
| mysql-bin.000001 | 1132 | Write_rows     | 1650521822 |        1178 | table_id: 116 flags: STMT_END_F                                                   |
| mysql-bin.000001 | 1178 | Xid            | 1650521822 |        1209 | COMMIT /* xid=30 */                                                               |
+------------------+------+----------------+------------+-------------+---------------------------------------------------------------------------------+
15 rows in set (0.00 sec)
As the output shows, the binlog is now generating transactions under the GTIDs of the group_name we just created.
(1.5) Join the secondaries to the group
Unlike the steps run on db1, here you do not bootstrap the group, because the group already exists.
In other words, on db2 leave group_replication_bootstrap_group set to OFF, and do not issue SET GLOBAL group_replication_bootstrap_group=ON; before starting group replication.
The group has already been created and bootstrapped by db1; db2 (and db3) only need to be added to the existing group.
start group_replication;
SELECT * FROM performance_schema.replication_group_members;

root@mysqldb 17:45: [(none)]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | b1336aa3-c13a-11ec-9cab-fa163e6ae43e | db1         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | RECOVERING   | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.01 sec)

root@mysqldb 17:45: [(none)]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | b1336aa3-c13a-11ec-9cab-fa163e6ae43e | db1         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
If a node never reaches the ONLINE state, check the connection status:
select * from performance_schema.replication_connection_status\G
As shown above, a newly joined secondary briefly shows RECOVERING: it still has to fetch and apply the transactions the group executed before it joined, which the primary keeps available for recovery.
Because GTIDs let every transaction be slotted in exactly (here the joiner can trivially pick up from transaction 1), the secondary simply replays them until it is in sync.
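While a joiner is in RECOVERING you can watch it close the gap by comparing what it has executed with what it has received from the group; a rough sketch, run on the joining node:
SELECT @@GLOBAL.gtid_executed;
SELECT received_transaction_set
  FROM performance_schema.replication_connection_status
 WHERE channel_name = 'group_replication_applier';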
Querying the secondary shows the data has been replicated:
root@mysqldb 17:57: [test]> select * from test.t1;
+----+------+
| c1 | c2   |
+----+------+
|  1 | Luis |
+----+------+
1 row in set (0.00 sec)
At this point a new single-primary MGR cluster is up and running.
(1.6) Data verification
On the primary:
create table t2(id int primary key,str varchar(100));
insert into t2 values(1,'a');
insert into t2 values(2,'b'),(3,'c');
Check on the secondaries.
Everything replicates as expected.
【2】Single-primary MGR tests
(2.1) Secondaries are read-only
Not only read_only but super_read_only is enabled on the secondaries:
root@mysqldb 17:33: [(none)]> show variables like '%read_only%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_read_only      | OFF   |
| read_only             | ON    |
| super_read_only       | ON    |
| transaction_read_only | OFF   |
+-----------------------+-------+
《1》Shut down the primary: the two remaining members elect a new primary automatically (in well under a second).
《2》With two members left, shut down one more: the last remaining member again becomes the primary (also well under a second).
《3》And what if, in case 《2》, the two remaining machines lose connectivity to each other instead of one being shut down?
One question here: the officially recommended minimum group size is three nodes, so why can a single node keep running in case 《2》? The reason is that a graceful shutdown removes the member from the group, so the majority is recalculated over the remaining membership; only unexpected failures leave the quorum pinned to the old membership.
(2.2) Failover test: 1 primary + 2 secondaries, primary shut down cleanly (shutdown)
[root@db1 ~]# service mysqld stop
Shutting down MySQL........ SUCCESS!
As the output above shows, failover succeeded; the switch took about 1 second.
(2.3) Failover test: 1 primary + 2 secondaries, primary killed (kill -9)
As shown below, automatic failover succeeded but took roughly 22 seconds (with no client or transaction load):
after 5 seconds the primary's state changed to UNREACHABLE, and it then took another 15-17 seconds for the primary to be expelled from the group.
root@mysqldb 16:12: [(none)]> select now(),member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now()               | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-18 16:13:38 | db3         | ONLINE       | PRIMARY     |
| 2022-05-18 16:13:38 | db1         | ONLINE       | SECONDARY   |
| 2022-05-18 16:13:38 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
root@mysqldb 16:13: [(none)]> select now(),member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now()               | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-18 16:13:43 | db3         | UNREACHABLE  | PRIMARY     |
| 2022-05-18 16:13:43 | db1         | ONLINE       | SECONDARY   |
| 2022-05-18 16:13:43 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
root@mysqldb 16:13: [(none)]> select now(),member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now()               | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-18 16:14:00 | db1         | ONLINE       | PRIMARY     |
| 2022-05-18 16:14:00 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
(2.4) Failover test: 1 primary + 1 secondary, primary shut down cleanly
As shown below, the primary notifies the remaining node when it shuts down, and the survivor becomes the primary within about a second:
root@mysqldb 11:28: [(none)]> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
2 rows in set (0.00 sec)

root@mysqldb 11:44: [(none)]> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
1 row in set (0.00 sec)
(2.5) Failover test: 1 primary + 1 secondary, primary killed (kill -9)
After the 5-second detection period the primary becomes UNREACHABLE:
root@mysqldb 11:18: [(none)]> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
2 rows in set (0.00 sec)

root@mysqldb 11:18: [(none)]> SELECT * FROM performance_schema.replication_group_members ;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | UNREACHABLE  | PRIMARY     | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
2 rows in set (0.01 sec)
Even bringing the old primary back up does not help; it does not rejoin or recover on its own:
-- on db2 (the old primary) after restarting it
root@mysqldb 11:54: [(none)]> start group_replication;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
So the whole cluster is stuck: with only one of two members reachable, the survivor cannot form a majority and blocks. How do you recover? The group has to be re-bootstrapped. Run the following on db2 (see section 4.2 for the general procedure):
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION USER='rpl_user', PASSWORD='password';
SET GLOBAL group_replication_bootstrap_group=OFF;
SELECT * FROM performance_schema.replication_group_members;
On the re-bootstrapped node the group then showed up as healthy,
but the other node still did not see the new membership.
Stopping and then starting group replication again on db3, the surviving secondary, fixed it; in SQL terms that is simply:
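-- on db3
STOP GROUP_REPLICATION;
START GROUP_REPLICATION;
SELECT * FROM performance_schema.replication_group_members;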
【3】Monitoring Group Replication
(3.1) Relevant tables and views
-- applier status for every replication channel
SELECT * FROM performance_schema.replication_applier_status\G
-- all group members, their state, role, etc.
select * from performance_schema.replication_group_members;
-- per-member replication statistics
SELECT * FROM performance_schema.replication_group_member_stats\G
-- connection status of the group replication channels
select * from performance_schema.replication_connection_status\G
Example output:
root@mysqldb 15:19: [(none)]> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | b1336aa3-c13a-11ec-9cab-fa163e6ae43e | db1         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+

root@mysqldb 11:04: [(none)]> SELECT * FROM performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 16527599779653303:3
                                 MEMBER_ID: 8f5f3f26-d297-11ec-ba45-fa163eb26cb2
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 0
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-9
            LAST_CONFLICT_FREE_TRANSACTION:
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 1
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 1
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
*************************** 2. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 16527599779653303:3
                                 MEMBER_ID: b1336aa3-c13a-11ec-9cab-fa163e6ae43e
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 0
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-9
            LAST_CONFLICT_FREE_TRANSACTION:
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 1
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 0
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
*************************** 3. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 16527599779653303:3
                                 MEMBER_ID: da27603d-d29b-11ec-bda2-fa163e8bc590
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 0
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-9
            LAST_CONFLICT_FREE_TRANSACTION:
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 3
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
3 rows in set (0.02 sec)

root@mysqldb 11:04: [(none)]> select * from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
                                      CHANNEL_NAME: group_replication_applier
                                        GROUP_NAME: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
                                       SOURCE_UUID: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
                                         THREAD_ID: NULL
                                     SERVICE_STATE: ON
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
                          RECEIVED_TRANSACTION_SET: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-9
                                 LAST_ERROR_NUMBER: 0
                                LAST_ERROR_MESSAGE:
                              LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
                           LAST_QUEUED_TRANSACTION: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:9
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2022-05-18 11:04:28.496647
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2022-05-18 11:04:28.496647
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2022-05-18 11:04:28.496803
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2022-05-18 11:04:28.496901
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
        QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
*************************** 2. row ***************************
                                      CHANNEL_NAME: group_replication_recovery
                                        GROUP_NAME:
                                       SOURCE_UUID:
                                         THREAD_ID: NULL
                                     SERVICE_STATE: OFF
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
                          RECEIVED_TRANSACTION_SET:
                                 LAST_ERROR_NUMBER: 0
                                LAST_ERROR_MESSAGE:
                              LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
                           LAST_QUEUED_TRANSACTION:
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
        QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
(3.2) Group member states
Reference: https://dev.mysql.com/doc/refman/8.0/en/group-replication-server-states.html
To check the states:
-- all group members, their state, role, etc.
select * from performance_schema.replication_group_members;
ONLINE
The server is an active member of the group and fully operational. Other group members can connect to it, and so can clients where applicable. A member participates fully in the group only while it is ONLINE.
RECOVERING
The server has joined the group and is in the process of becoming an active member. Distributed recovery is in progress: the member is receiving state transfer from a donor, either through a remote clone operation or from the donor's binary log.
OFFLINE
The Group Replication plugin is loaded, but the member does not belong to any group. This state may appear briefly while a member joins or rejoins a group.
ERROR
The member is in an error state and is not functioning correctly as a group member. A member can enter this state while applying transactions or during recovery. A member in this state does not participate in the group's transactions.
UNREACHABLE
The local failure detector suspects that the member cannot be contacted because the group's messages are timing out; this happens, for example, if the member is disconnected involuntarily.
If you see this state reported for other servers, it can also mean that the member you are querying is part of a partition, where a subset of the group's servers can contact each other but not the rest of the group.
【4】Group Replication operations
(4.0) Common administrative operations
# single-primary mode: switch the primary to a specific node
SELECT group_replication_set_as_primary('00371d66-3c45-11ea-804b-080027337932');
# switch to single-primary mode
SELECT group_replication_switch_to_single_primary_mode(member_uuid);
# switch to multi-primary mode (run on any member of the cluster)
SELECT group_replication_switch_to_multi_primary_mode();
# watch the progress of a switch (stage, work done, estimated work)
SELECT event_name, work_completed, work_estimated FROM performance_schema.events_stages_current WHERE event_name LIKE "%stage/group_rpl%";
# get / set the group's write concurrency
SELECT group_replication_get_write_concurrency();
SELECT group_replication_set_write_concurrency(instances);
# get / set the group's communication protocol version
SELECT group_replication_get_communication_protocol();
SELECT group_replication_set_communication_protocol("8.0.16");
# configure member actions
(1) list the available actions: select * from performance_schema.replication_group_member_actions;
(2) for example the action mysql_disable_super_read_only_if_primary: while it is enabled, a member that becomes the primary turns off super_read_only so it can accept read-write transactions (a sketch of toggling it follows).
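A sketch of toggling that member action; the functions require MySQL 8.0.26 or later, and are run on the primary:
SELECT name, enabled, type FROM performance_schema.replication_group_member_actions;
-- keep a newly elected primary in super-read-only mode, e.g. so an operator can validate it first
SELECT group_replication_disable_member_action('mysql_disable_super_read_only_if_primary', 'AFTER_PRIMARY_ELECTION');
-- restore the default behaviour
SELECT group_replication_enable_member_action('mysql_disable_super_read_only_if_primary', 'AFTER_PRIMARY_ELECTION');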
(4.1) Single-primary mode: manually switch the primary to a given node
As shown below, the switch succeeds in about a second:
root@mysqldb 15:21: [(none)]> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | b1336aa3-c13a-11ec-9cab-fa163e6ae43e | db1         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.01 sec)

root@mysqldb 15:43: [(none)]> select group_replication_set_as_primary('b1336aa3-c13a-11ec-9cab-fa163e6ae43e');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('b1336aa3-c13a-11ec-9cab-fa163e6ae43e') |
+--------------------------------------------------------------------------+
| Primary server switched to: b1336aa3-c13a-11ec-9cab-fa163e6ae43e         |
+--------------------------------------------------------------------------+
1 row in set (0.02 sec)

root@mysqldb 15:43: [(none)]> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 8f5f3f26-d297-11ec-ba45-fa163eb26cb2 | db3         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | b1336aa3-c13a-11ec-9cab-fa163e6ae43e | db1         |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | da27603d-d29b-11ec-bda2-fa163e8bc590 | db2         |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
From 8.0.29 on you can also pass a timeout (in seconds) for running transactions:
SELECT group_replication_set_as_primary('00371d66-3c45-11ea-804b-080027337932', 300);
To check the status of the timeout, use the PROCESSLIST_INFO column of the Performance Schema threads table:
SELECT NAME, PROCESSLIST_INFO FROM performance_schema.threads WHERE NAME="thread/group_rpl/THD_transaction_monitor";
+-------------------------------------------+--------------------------------------------------------------------+
| NAME                                      | PROCESSLIST_INFO                                                   |
+-------------------------------------------+--------------------------------------------------------------------+
| thread/group_rpl/THD_transaction_monitor  | Group replication transaction monitor: Stopped client connections |
+-------------------------------------------+--------------------------------------------------------------------+
The status shows when the transaction monitor thread was created, when new transactions were stopped, when client connections with uncommitted transactions were disconnected, and when the process finished and new transactions were allowed again.
While the operation is running you can also check its progress with:
SELECT event_name, work_completed, work_estimated FROM performance_schema.events_stages_current WHERE event_name LIKE "%stage/group_rpl%";
+----------------------------------------------------------------------------------+----------------+----------------+
| event_name                                                                        | work_completed | work_estimated |
+----------------------------------------------------------------------------------+----------------+----------------+
| stage/group_rpl/Primary Election: Waiting for members to turn on super_read_only |              3 |              5 |
+----------------------------------------------------------------------------------+----------------+----------------+
(4.2) Rebuilding the group (re-bootstrapping)
1. On each group member, in any order:
- Connect a client to the member. If Group Replication has not already stopped, issue STOP GROUP_REPLICATION and wait for it to stop.
- Edit the MySQL configuration file (usually my.cnf on Linux and Unix systems, my.ini on Windows) and set group_replication_start_on_boot=OFF. This prevents Group Replication from starting automatically when MySQL Server starts, which it otherwise does by default.
- Start the MySQL server instance and verify that Group Replication has not started (or has failed to start). Do not start Group Replication at this stage.
- Collect the following information from every member:
  - the executed GTID set: SELECT @@GLOBAL.GTID_EXECUTED
  - the certified transactions received on the group channel: SELECT received_transaction_set FROM performance_schema.replication_connection_status WHERE channel_name="group_replication_applier";
2. Compare the GTID sets across all nodes.
Once you have the transaction sets from all members, compare them to find the member with the largest overall transaction set, counting both executed transactions (gtid_executed) and certified transactions (on the group_replication_applier channel).
3. Re-bootstrap the group from the member with the largest transaction set:
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Note: do not persist group_replication_bootstrap_group=ON in the configuration file; otherwise, when the server restarts, it will set up a second group with the same name.
4. Verify that the group has been created:
SELECT * FROM performance_schema.replication_group_members;
5. On the remaining nodes, run start group_replication to rejoin the group.
6. Check SELECT * FROM performance_schema.replication_group_members; again.
【5】MGR performance and troubleshooting
(5.1) XCom cache management
XCom's message cache is sized by group_replication_message_cache_size (default 1 GB, minimum 128 MB).
It determines how much MGR message history is kept; when the cache reaches its limit, the oldest entries are discarded first.
All nodes must use the same value,
because an unreachable member trying to reconnect picks any other member at random to recover the messages it missed.
Very low settings are not recommended: a small message cache makes it harder for a member to reconnect after a brief loss of connectivity, because once the GTIDs it needs are no longer in the cache the chain is broken and the member has to be re-initialised.
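A sketch of checking and resizing the XCom cache (set the same value on every member; in recent 8.0 releases it can be changed at runtime, otherwise put it in my.cnf):
SELECT @@GLOBAL.group_replication_message_cache_size / 1024 / 1024 AS cache_mb;
SET GLOBAL group_replication_message_cache_size = 268435456;   -- 256 MiB, adjust to your memory budget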
(5.2) Failure detection and network partitions
group_replication_member_expel_timeout = N: when a member has failed to communicate for the initial 5 seconds, wait another N seconds before expelling it from the replication_group_members view; this makes it easier for the member to rejoin later.
《1》Expel timeout
There is an initial 5-second detection period before a group member raises a suspicion against another member (or against itself).
A member is expelled when the suspicion against it (or its own suspicion of itself) times out.
After that, a further short period may pass before the expulsion mechanism detects and enforces the expulsion. group_replication_member_expel_timeout specifies, in seconds, how long the group waits between raising a suspicion and expelling the suspect; this is the expel timeout.
During this waiting period the suspect is listed as UNREACHABLE, but it is not removed from the group's membership list.
- If the suspect becomes active again before the suspicion times out at the end of the waiting period, it applies all the messages buffered by the remaining group members in the XCom message cache and transitions to the ONLINE state without operator intervention. In this case the group treats it as the same incarnation.
- If the suspect only becomes active and able to restore communication after the suspicion has timed out, it receives a view in which it has been expelled, and only then realises it was expelled. You can use the group_replication_autorejoin_tries system variable, available from MySQL 8.0.16, to make the member automatically try to rejoin the group at this point. From MySQL 8.0.21 this feature is active by default and the member makes 3 auto-rejoin attempts. If the auto-rejoin attempts fail or are not made, the expelled member follows the exit action specified by group_replication_exit_state_action.
If group_replication_member_expel_timeout is set to 0, there is no waiting period and suspects are expelled as soon as the 5-second detection period ends. This was the default in MySQL 8.0.20 and earlier;
from 8.0.21 the default is 5.
(5.3) Three of five machines are down: force the remaining two back into service
Make sure the other three really are unusable and will not come back; otherwise you are manufacturing a split brain, with two reachable groups accepting writes.
《1》Get the local MGR address and port on each survivor:
SELECT @@group_replication_local_address;
《2》Force a new membership configuration to unblock the group:
SET GLOBAL group_replication_force_members="127.0.0.1:10000,127.0.0.1:10001";
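Applied to the hosts used in this post, the procedure looks roughly like this; once the new membership is stable, clear the variable again so it cannot accidentally be re-applied:
-- on one surviving member, using the group_replication_local_address values of the survivors
SET GLOBAL group_replication_force_members = "192.168.191.25:33061,192.168.191.44:33061";
-- when SELECT * FROM performance_schema.replication_group_members shows the new view, reset it
SET GLOBAL group_replication_force_members = "";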
【Question 1】Why does electing a new primary after a failure take 22 seconds?
When the primary fails, its state becomes UNREACHABLE after 5 seconds, so why is the new primary only elected about 20 seconds after the failure?
Observed via: performance_schema.replication_group_members
(1) group_replication_unreachable_majority_timeout = 1
set global group_replication_unreachable_majority_timeout =1 ;   -- was 5
set global group_replication_member_expel_timeout =5;
set global group_replication_autorejoin_tries = 3;
No effect; it still took about 21 seconds:
root@mysqldb 17:38: [(none)]> select now() as now_date,member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now_date            | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-24 17:39:21 | db3         | ONLINE       | PRIMARY     |
| 2022-05-24 17:39:21 | db1         | ONLINE       | SECONDARY   |
| 2022-05-24 17:39:21 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
root@mysqldb 17:39: [(none)]> select now() as now_date,member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now_date            | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-24 17:39:26 | db3         | UNREACHABLE  | PRIMARY     |
| 2022-05-24 17:39:26 | db1         | ONLINE       | SECONDARY   |
| 2022-05-24 17:39:26 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
root@mysqldb 17:39: [(none)]> select now() as now_date,member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now_date            | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-24 17:39:41 | db3         | UNREACHABLE  | PRIMARY     |
| 2022-05-24 17:39:41 | db1         | ONLINE       | SECONDARY   |
| 2022-05-24 17:39:41 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
root@mysqldb 17:39: [(none)]> select now() as now_date,member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now_date            | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-24 17:39:42 | db1         | ONLINE       | PRIMARY     |
| 2022-05-24 17:39:42 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
(2) group_replication_autorejoin_tries = 0
set global group_replication_member_expel_timeout =5;
set global group_replication_autorejoin_tries=0;
set global group_replication_unreachable_majority_timeout =1 ;
Still no effect: the failure started at second 32 and the new primary was elected at second 53, again about 21 seconds.
(3) group_replication_member_expel_timeout = 0 (success: the switch completed in about 6 seconds)
In this test the switch completed in 6 seconds:
set global group_replication_member_expel_timeout =0;
set global group_replication_autorejoin_tries=0;
set global group_replication_unreachable_majority_timeout =1 ;
In a related test with the settings below, the switch still completed in 6 seconds:
set global group_replication_member_expel_timeout =0;
set global group_replication_autorejoin_tries=3;
set global group_replication_unreachable_majority_timeout =5;
With the settings below, however, the switch took about 20 seconds:
set global group_replication_member_expel_timeout =1;
set global group_replication_autorejoin_tries=3;
set global group_replication_unreachable_majority_timeout =5;   -- same result with 1
Results below. So it is group_replication_member_expel_timeout that determines how quickly a dead primary is expelled and a new one elected:
+---------------------+-------------+--------------+-------------+
| now_date            | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-24 18:21:21 | db3         | ONLINE       | SECONDARY   |
| 2022-05-24 18:21:21 | db1         | ONLINE       | PRIMARY     |
| 2022-05-24 18:21:21 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
root@mysqldb 18:21: [(none)]> select now() as now_date,member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now_date            | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-24 18:21:26 | db3         | ONLINE       | SECONDARY   |
| 2022-05-24 18:21:26 | db1         | UNREACHABLE  | PRIMARY     |
| 2022-05-24 18:21:26 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
root@mysqldb 18:21: [(none)]> select now() as now_date,member_host,member_state,member_role from performance_schema.replication_group_members;
+---------------------+-------------+--------------+-------------+
| now_date            | member_host | member_state | member_role |
+---------------------+-------------+--------------+-------------+
| 2022-05-24 18:21:27 | db3         | ONLINE       | PRIMARY     |
| 2022-05-24 18:21:27 | db2         | ONLINE       | SECONDARY   |
+---------------------+-------------+--------------+-------------+
【References】
Official manual (essential reading): https://dev.mysql.com/doc/refman/8.0/en/group-replication-deploying-in-single-primary-mode.html