One month ago I wrote about how a big read_buffer_size could break replication. The bug is not solved, but there is now an official workaround that eases the problem through a new configuration variable: slave_max_allowed_packet.
This new variable will be available in MySQL 5.1.64, 5.5.26, and 5.6.6, and it establishes a separate limit, independent of max_allowed_packet, for the slave servers (the I/O thread). On a slave, the maximum size of an incoming packet is now checked against this variable instead of max_allowed_packet.
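A minimal sketch of tuning it on a slave running one of the versions above (the value shown is just the documented default):

    -- slave_max_allowed_packet is a dynamic global variable, but the
    -- replication threads only pick it up after being restarted.
    STOP SLAVE;
    SET GLOBAL slave_max_allowed_packet = 1073741824;  -- 1GB, the default
    START SLAVE;

To make it persistent, the same setting can go in my.cnf under the [mysqld] section, e.g. slave_max_allowed_packet = 1G.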
The default value is 1GB, which means that if we don't tune the variable, the I/O thread can read packets up to that size. This is important if we use the binary logs for PITR (point-in-time recovery). The variable solves only part of the problem: the slave no longer stops, but binary log events can still be bigger than max_allowed_packet. During the recovery process slave_max_allowed_packet is not going to help us, and the recovery could fail.
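For example, a typical PITR replays the binary log through the normal client path, where the server still enforces max_allowed_packet rather than slave_max_allowed_packet; the binlog file name below is hypothetical:

    # Replay binary logs with mysqlbinlog; an event bigger than the
    # server's max_allowed_packet can still abort this step.
    mysqlbinlog binlog.000042 | mysql -u root -p

Raising max_allowed_packet on the server before the replay is the only lever here, and its hard maximum is 1GB, so an event larger than that cannot be replayed this way at all.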
Conclusion
Now we have two workarounds for the same problem: first, use a small value for read_buffer_size (less than max_allowed_packet); second, upgrade to a version that includes slave_max_allowed_packet.
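As a sketch of the first workaround, assuming (as in the earlier post) that the oversized binary log events are sized by read_buffer_size, a my.cnf fragment could look like this; the values are illustrative, not recommendations:

    [mysqld]
    # Keep read_buffer_size below max_allowed_packet so binary log
    # events never exceed the packet limit the slave enforces.
    max_allowed_packet = 32M
    read_buffer_size   = 16M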