How can I optimize a mysqldump of a large database?

StackExchange

https://dba.stackexchange.com/questions/20/how-can-i-optimize-a-mysqldump-of-a-large-database

The main bottleneck in a dump like this is drive I/O: you are reading a load of data and writing it out again.
You can speed this up in a number of ways:

  • Make sure your output is going to a different drive(s) than the one(s) the database files are stored on - this will make a massive difference with spinning disks as the drive heads will not be constantly flicking between the location being read from and the location being written to.
  • The output of mysqldump will be very compressible, so if you cannot separate the output from the input as mentioned above, pipe the output through gzip or similar. This will reduce the amount of writing being done (so reduce the overall IO load, and the amount of head movement) at the expense of some CPU time (which you may have a lot of spare at these times anyway).
  • Also (as well as, or instead of, compression), pass the output through a pipe utility (like pv) that supports large write buffers to group the blocks written to the drives together, again to reduce the effect of head-movement latency - this will make quite a difference when using the --quick option to reduce the RAM impact of backing up large tables. A combined example is sketched after this list.
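
For example, a minimal sketch combining the three points above - the database name, credentials handling, the /backup mount point, and the pv buffer size are placeholder assumptions to adapt to your setup:

    # --quick streams rows instead of buffering whole tables in memory;
    # gzip shrinks what gets written; pv's large write buffer groups the
    # writes before they land on a drive other than the one holding the
    # database files (/backup is assumed to be on a separate drive)
    mysqldump --quick mydb | gzip | pv -q -B 64m > /backup/mydb.sql.gz
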
Rough translation:
  • Send the backup output to a different device, so the drive does not have to keep switching between the location being read and the location being written.
  • Pipe the output through gzip or something similar. This reduces the amount of writing at the cost of some CPU time.
  • In addition (alongside or instead of compression), pipe through a utility that supports large write buffers to reduce head-movement latency; when backing up large tables, the --quick option reduces the memory impact.
mysqldump
  • --quick
    This option is useful for dumping large tables.
    It forces mysqldump to retrieve rows for a table from the server a row at a time rather than retrieving the entire row set and buffering it in memory before writing it out.
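    As a usage sketch (the database and table names are placeholders; note that --quick is already part of mysqldump's default --opt group, so it normally only needs to be spelled out if the defaults were disabled, e.g. with --skip-opt):

      # dump a single large table row by row, without buffering it in memory
      mysqldump --quick mydb big_table > big_table.sql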
posted @ 2022-04-01 10:36  德克斯特的实验室