Backing Up and Restoring MySQL Using MySQL Shell
2023-07-01 21:31 abce
MySQL Shell is an advanced client and code editor for MySQL. In addition to SQL, MySQL Shell provides scripting capabilities for JavaScript and Python and includes APIs for working with MySQL. The X DevAPI enables users to work with both relational and document data. Using MySQL Shell 8.0 is strongly recommended with MySQL Server 8.0 and 5.7.
MySQL Shell includes utilities for working with MySQL. To access the utilities from within MySQL Shell, use the util global object, which is available in JavaScript and Python modes, but not in SQL mode. These are the tools used for backups; let's look at some basic commands.
·util.dumpTables – dump one or more tables from a single database
·util.dumpSchemas – dump one or more databases
·util.dumpInstance – dump the entire instance
·util.loadDump – restore a dump
1. Dumping tables
The following command dumps table sbtest1 from the sysbench database and stores the backup in the target directory sysbench_dumps. If the target directory does not exist, util.dumpTables creates it. With dialect: "csv", the dump files are written with the csv extension.
Compression and chunking are enabled by default. With chunking enabled, the table data is split across multiple files according to the chunk size. By default, dump files are created with the tsv (tab-separated values) extension.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression": "none", "threads": 6, chunking: false, dialect: "csv"})
Acquiring global read lock
Global read lock acquired
Initializing - done
1 tables and 0 views will be dumped.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 6 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
101% (1000.00K rows / ~986.40K rows), 317.96K rows/s, 63.91 MB/s
Dump duration: 00:00:03s
Total duration: 00:00:04s
Schemas dumped: 1
Tables dumped: 1
Data size: 198.89 MB
Rows written: 1000000
Bytes written: 198.89 MB
Average throughput: 60.96 MB/s
MySQL localhost JS >
Here are the resulting files:
[root@centos12 sysbench_dumps]# ls -lrth
total 190M
-rw-r-----. 1 root root  869 Jun 21 13:08 @.json
-rw-r-----. 1 root root  240 Jun 21 13:08 @.sql
-rw-r-----. 1 root root  240 Jun 21 13:08 @.post.sql
-rw-r-----. 1 root root  231 Jun 21 13:08 sysbench.json
-rw-r-----. 1 root root  638 Jun 21 13:08 sysbench@sbtest1.json
-rw-r-----. 1 root root  474 Jun 21 13:08 sysbench.sql
-rw-r-----. 1 root root  789 Jun 21 13:08 sysbench@sbtest1.sql
-rw-r-----. 1 root root 1.5K Jun 21 13:08 sysbench@sbtest1.csv.idx
-rw-r-----. 1 root root 190M Jun 21 13:08 sysbench@sbtest1.csv
-rw-r-----. 1 root root  233 Jun 21 13:08 @.done.json
[root@centos12 sysbench_dumps]# pwd
/home/vagrant/sysbench_dumps
Where:
@.json: overall information about the dump, including the dump options, server name, user name, and binary log file name and position
@.sql, @.post.sql: server version and dump version information
sysbench.json: database and table information
sysbench@sbtest1.json: details of table sbtest1, including column names, indexes, triggers, character set, and partitions
sysbench.sql: CREATE statement for the sysbench database
sysbench@sbtest1.sql: CREATE statement for table sbtest1
@.done.json: end time and dump file sizes
sysbench@sbtest1.csv: the table data dump file
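Because the dump layout above is predictable, it is easy to inspect programmatically. Here is a minimal Python sketch (the helper name classify_dump_files is an illustration, not part of MySQL Shell) that groups a dump directory's files by role, based only on the extensions shown in the listing:

```python
from pathlib import Path

def classify_dump_files(dump_dir):
    """Group the files of a MySQL Shell dump directory by role,
    following the layout shown in the listing above:
    .json -> metadata, .sql -> DDL, .csv/.tsv (+ .idx) -> data."""
    groups = {"metadata": [], "ddl": [], "data": [], "other": []}
    for f in sorted(Path(dump_dir).iterdir()):
        if f.name.endswith(".json"):
            groups["metadata"].append(f.name)
        elif f.name.endswith(".sql"):
            groups["ddl"].append(f.name)
        elif f.name.endswith((".csv", ".tsv", ".csv.idx", ".tsv.idx")):
            groups["data"].append(f.name)
        else:
            groups["other"].append(f.name)
    return groups
```

Running it against the sysbench_dumps directory above would place @.json and sysbench@sbtest1.json under metadata, the .sql files under ddl, and the .csv/.csv.idx pair under data.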
2. Backing up table structure only
The option ddlOnly: true backs up only the table structure.
The following command dumps the structure of sbtest1 from the sysbench database and stores it under the sysbench_dumps path:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression": "none", "threads": 6, chunking: false, ddlOnly: true})
3. Dumping table data only
The option dataOnly: true dumps only the data.
The following command dumps the data of sbtest1 from the sysbench database and stores it under the sysbench_dumps path:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression": "none", "threads": 6, chunking: false, dataOnly: true})
4. Dumping selected data
The option "where": {"databasename.tablename": "condition"} dumps only the rows matching a condition.
The following command dumps the rows with id between 1 and 10:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression": "none", "threads": 6, chunking: false, dataOnly: true, "where": {"sysbench.sbtest1": "id between 1 and 10"}})
Multiple tables can also be specified in a single command. The syntax is:
1 | "where" : { "databasename1.tablename1" : "condition for databasename1.tablename1" , "databasename2.tablename2" : "condition for databasename2.tablename2" } |
For example, to dump the rows with id between 1 and 10 from table sbtest1, and the rows with id between 100 and 110 from table sbtest2:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression": "none", "threads": 6, chunking: false, dataOnly: true, "where": {"sysbench.sbtest1": "id between 1 and 10", "sysbench.sbtest2": "id between 100 and 110"}})
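Since the options argument is just a JSON-style map, it can be assembled programmatically, which helps when the table list or filters come from elsewhere. A small Python sketch (the helper name build_where_options is hypothetical, not a MySQL Shell API) that builds the per-table "where" map used above:

```python
def build_where_options(conditions, base=None):
    """Compose a util.dumpTables options map with per-table WHERE filters.

    `conditions` maps "db.table" to a SQL condition string, mirroring the
    "where" option syntax shown above; `base` carries any other dump options.
    """
    opts = dict(base or {})       # copy so the caller's map is not mutated
    opts["where"] = dict(conditions)
    return opts

# Mirrors the two-table example above.
opts = build_where_options(
    {"sysbench.sbtest1": "id between 1 and 10",
     "sysbench.sbtest2": "id between 100 and 110"},
    base={"compression": "none", "threads": 6, "chunking": False, "dataOnly": True},
)
```

The resulting dict can then be serialized (e.g. with json.dumps) and pasted into the util.dumpTables call, or passed directly when driving MySQL Shell in Python mode.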
5. Dumping data from partitions
Syntax:
1 | "partitions" : { "db1.table1" : [ "list of partitions" ], "db2.table1" : [ "list of partitions" ]} |
For example, to dump partitions p1 and p2 of table sbtest1, and partitions p4 and p5 of table sbtest2:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression": "none", "threads": 6, chunking: false, dataOnly: true, "partitions": {"sysbench.sbtest1": ["p1", "p2"], "sysbench.sbtest2": ["p4", "p5"]}})
6. Dumping databases
When dumping databases, events, triggers, and routines are backed up by default. These objects are stored in the database_name.sql file.
For example, to dump the percona and sakila databases:
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {"compression": "none", "threads": 6, chunking: false})
The following command skips events, routines, and triggers:
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {events: false, routines: false, triggers: false, "compression": "none", "threads": 6, chunking: false})
The following options can also be used to include or exclude specific events, routines, and triggers:
includeEvents: [db1.include_event1, db2.include_event2, ...]
includeRoutines: [db1.include_procedure1, db2.include_function2, ...]
includeTriggers: [db1.include_trigger1, db2.include_trigger2, ...]
excludeEvents: [db1.exclude_event1, db2.exclude_event2, ...]
excludeTriggers: [db1.exclude_trigger1, db2.exclude_trigger2, ...]
excludeRoutines: [db1.exclude_procedure1, db2.exclude_function2, ...]
7. Backing up selected tables from different databases
Sometimes we may need to dump selected tables from different databases. This can be done with the includeTables option.
Syntax:
includeTables: ["db1.table1", "db2.table2", ...]
The following command dumps the users table from the percona database and the actor table from the sakila database:
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {includeTables: ["percona.users", "sakila.actor"], "compression": "none", "threads": 6, chunking: false})
8. Instance-level dump
The util.dumpInstance command saves the entire instance to the "/backup/instance_dump" path. The system databases (mysql, sys, information_schema, performance_schema) are excluded. By default, all user information is dumped from the instance and stored in the file @.users.sql, which contains the CREATE USER and GRANT statements for all users.
MySQL localhost JS > util.dumpInstance("/backup/instance_dump", {"compression": "none", "threads": 6, chunking: false})
Other options:
users: false                                  - Skip dumping users
excludeUsers: ['user1', 'user2']              - Exclude particular users
includeUsers: ['user1', 'user2']              - Include particular users
excludeSchemas: ["db1", "db2"]                - Exclude particular schemas
includeSchemas: ["db1", "db2"]                - Include particular schemas
excludeTables: ["db1.table1", "db2.table2"]   - Exclude particular tables
includeTables: ["db1.table1", "db2.table2"]   - Include particular tables
9. Restoring data into a single database
The util.loadDump command is used to restore data. Loading local dump files requires local_infile to be enabled on the target server (SET GLOBAL local_infile = ON;).
Syntax:
util.loadDump("/path/of/the/dump", {options})
The following command restores the data into the database test_restore. When restoring into a different schema, we must use the schema option: schema: "test_restore". Otherwise, the dump is restored into the schema it was taken from.
MySQL localhost SQL > create database test_restore;
Query OK, 1 row affected (0.3658 sec)
MySQL localhost SQL > \js
Switching to JavaScript mode...
MySQL localhost JS > util.loadDump("/home/vagrant/schema_dump", {schema: "test_restore", progressFile: "progress.json", threads: 8, showProgress: true, resetProgress: true})
10. Restoring a full instance and configuring replication
Here, we load the full instance dump from the /home/vagrant/instance_dump path using 8 parallel threads.
MySQL localhost JS > util.loadDump("/home/vagrant/instance_dump", {progressFile: "progress.json", threads: 8, showProgress: true, resetProgress: true})
Loading DDL and Data from '/home/vagrant/instance_dump' using 8 threads.
Opening dump...
Target is MySQL 8.0.33-25. Dump was produced from MySQL 8.0.32-24
NOTE: Load progress file detected for the instance but 'resetProgress' option was enabled. Load progress will be discarded and the whole dump will be reloaded.
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
8 thds loading / 100% (19.18 MB / 19.18 MB), 541.36 KB/s, 6 / 23 tables and partitions done
Recreating indexes - done
Executing common postamble SQL
23 chunks (100.00K rows, 19.18 MB) for 11 tables in 2 schemas were loaded in 11 sec (avg throughput 2.53 MB/s)
0 warnings were reported during the load.
MySQL localhost JS >
Get the binary log file name and position from the @.json file, then configure replication:
[root@centos12 instance_dump]# cat @.json | grep -i binlog
    "binlogFile": "centos12-bin.000006",
    "binlogPosition": 760871466,
[root@centos12 instance_dump]#

MySQL localhost SQL > CHANGE REPLICATION SOURCE TO SOURCE_HOST="192.168.33.12", SOURCE_USER="bhuvan", SOURCE_PASSWORD="Bhuvan@123", SOURCE_LOG_FILE='centos12-bin.000006', SOURCE_LOG_POS=760871466;
Query OK, 0 rows affected, 2 warnings (0.1762 sec)

MySQL localhost SQL > START REPLICA;
Query OK, 0 rows affected (0.1156 sec)

MySQL localhost SQL > show replica status\G
*************************** 1. row ***************************
             Replica_IO_State: Waiting for source to send event
                  Source_Host: 192.168.33.12
                  Source_User: bhuvan
                  Source_Port: 3306
                Connect_Retry: 60
              Source_Log_File: centos12-bin.000006
          Read_Source_Log_Pos: 823234119
               Relay_Log_File: centos11-relay-bin.000002
                Relay_Log_Pos: 1152129
        Relay_Source_Log_File: centos12-bin.000006
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 762023266
              Relay_Log_Space: 62363195
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 718
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 100
                  Source_UUID: f46a1600-045e-11ee-809f-0800271333ce
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Waiting for dependent transaction to commit
           Source_Retry_Count: 86400
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace:
1 row in set (0.1470 sec)
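The step of turning the binlog coordinates in @.json into a CHANGE REPLICATION SOURCE statement is mechanical and easy to script. A minimal Python sketch, assuming only the binlogFile/binlogPosition field names shown in the grep output above (the function name is illustrative, and host/user are placeholders; the password is deliberately left out of the generated statement):

```python
import json

def replication_sql(meta_json, host, user):
    """Build a CHANGE REPLICATION SOURCE statement from the @.json
    metadata of a MySQL Shell dump (binlogFile/binlogPosition fields,
    as shown above). The password is intentionally omitted so it can
    be supplied interactively or via a credential store."""
    meta = json.loads(meta_json)
    return (
        "CHANGE REPLICATION SOURCE TO "
        f"SOURCE_HOST='{host}', SOURCE_USER='{user}', "
        f"SOURCE_LOG_FILE='{meta['binlogFile']}', "
        f"SOURCE_LOG_POS={meta['binlogPosition']};"
    )

# Values taken from the @.json output shown above.
sql = replication_sql(
    '{"binlogFile": "centos12-bin.000006", "binlogPosition": 760871466}',
    host="192.168.33.12", user="bhuvan",
)
```

In practice, meta_json would be read from the dump directory (e.g. open("instance_dump/@.json").read()) and the resulting statement executed on the replica.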