Backing Up and Restoring MySQL with MySQL Shell
2023-07-01 21:31 by abce

MySQL Shell is an advanced client and code editor for MySQL. In addition to the SQL functionality of the standard mysql client, MySQL Shell provides scripting capabilities for JavaScript and Python and includes APIs for working with MySQL. The X DevAPI lets users work with both relational and document data. Using MySQL Shell 8.0 with MySQL Server 8.0 and 5.7 is strongly recommended.
MySQL Shell includes utilities for working with MySQL. To access the utilities from within MySQL Shell, use the util global object, which is available in JavaScript and Python modes but not in SQL mode. These are the tools used for backups; let's look at some basic commands.
·util.dumpTables – dump one or more tables from a single database
·util.dumpSchemas – dump one or more databases (schemas)
·util.dumpInstance – dump the entire instance
·util.loadDump – restore a dump
1. Dumping a table
The following command dumps the table sbtest1 from the sysbench database and stores the backup in the target directory sysbench_dumps. If the target directory does not exist, util.dumpTables creates it. With dialect:"csv", the dump files are written with a csv extension.
Compression and chunking are enabled by default. With chunking enabled, a table's data is split across multiple files according to the chunk size. By default, dump files are created with a tsv (tab-separated values) extension.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dialect:"csv"})
Acquiring global read lock
Global read lock acquired
Initializing - done
1 tables and 0 views will be dumped.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 6 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
101% (1000.00K rows / ~986.40K rows), 317.96K rows/s, 63.91 MB/s
Dump duration: 00:00:03s
Total duration: 00:00:04s
Schemas dumped: 1
Tables dumped: 1
Data size: 198.89 MB
Rows written: 1000000
Bytes written: 198.89 MB
Average throughput: 60.96 MB/s
The files produced in the dump directory:
[root@centos12 sysbench_dumps]# ls -lrth
total 190M
-rw-r-----. 1 root root  869 Jun 21 13:08 @.json
-rw-r-----. 1 root root  240 Jun 21 13:08 @.sql
-rw-r-----. 1 root root  240 Jun 21 13:08 @.post.sql
-rw-r-----. 1 root root  231 Jun 21 13:08 sysbench.json
-rw-r-----. 1 root root  638 Jun 21 13:08 sysbench@sbtest1.json
-rw-r-----. 1 root root  474 Jun 21 13:08 sysbench.sql
-rw-r-----. 1 root root  789 Jun 21 13:08 sysbench@sbtest1.sql
-rw-r-----. 1 root root 1.5K Jun 21 13:08 sysbench@sbtest1.csv.idx
-rw-r-----. 1 root root 190M Jun 21 13:08 sysbench@sbtest1.csv
-rw-r-----. 1 root root  233 Jun 21 13:08 @.done.json
[root@centos12 sysbench_dumps]# pwd
/home/vagrant/sysbench_dumps
Where:
@.json : overall information about the dump, including dump options, server name, user name, binary log file name and position, etc.
@.sql, @.post.sql : server version and dump version information
sysbench.json : database and table information
sysbench@sbtest1.json : details of table sbtest1, including column names, indexes, triggers, character set, and partitions
sysbench.sql : CREATE statement for the sysbench database
sysbench@sbtest1.sql : CREATE statement for table sbtest1
@.done.json : end time and dump file size
sysbench@sbtest1.csv : the table data dump file
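The metadata files above are plain JSON, so a dump directory can be inspected programmatically. A minimal Python sketch (it makes no assumptions about the exact schema of the files; it simply prints whatever keys they contain):

```python
import json
from pathlib import Path

def summarize_dump(dump_dir):
    """Print the top-level metadata recorded in a MySQL Shell dump directory."""
    for name in ("@.json", "@.done.json"):
        path = Path(dump_dir) / name
        if not path.exists():
            continue  # @.done.json is only written once the dump has finished
        meta = json.loads(path.read_text())
        print(f"--- {name} ---")
        for key, value in meta.items():
            print(f"{key}: {value}")

# prints nothing if the directory or metadata files do not exist
summarize_dump("sysbench_dumps")
```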
2. Backing up table structure only
The option ddlOnly:true backs up only the table structure (DDL).
The following command dumps the structure of sbtest1 from the sysbench database and stores it under the sysbench_dumps path.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, ddlOnly:true})
3. Dumping table data only
The option dataOnly:true dumps only the data.
The following command dumps the data of sbtest1 from the sysbench database and stores it under the sysbench_dumps path.
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true})
4. Dumping selected data
The option "where": {"databasename.tablename": "condition"} dumps only the rows that match a given condition.
The following command dumps the rows with id between 1 and 10:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6 ,chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10"}})
Multiple tables can also be specified in a single command. The syntax is:
"where" : {"databasename1.tablename1": "condition for databasename1.tablename1", "databasename2.tablename2": "condition for databasename2.tablename2"}
For example, dump the rows with id between 1 and 10 from table sbtest1, and the rows with id between 100 and 110 from table sbtest2:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10", "sysbench.sbtest2": "id between 100 and 110"}})
5. Dumping data from partitions
Syntax:
"partitions" : {"db1.table1": ["list of partitions"],"db2.table1": ["list of partitions"]}
For example, dump partitions p1 and p2 of table sbtest1, and partitions p4 and p5 of table sbtest2:
MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "partitions" : {"sysbench.sbtest1": ["p1", "p2"],"sysbench.sbtest2": ["p4", "p5"]}})
6. Dumping databases
When dumping a database, events, triggers, and routines are backed up by default. They are stored in the database_name.sql file.
For example, dump the percona and sakila databases:
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {"compression":"none", "threads":6, chunking:false})
The following command skips events, routines, and triggers:
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {events:false, routines:false, triggers:false, "compression":"none", "threads":6, chunking:false})
The following options can also be used to include or exclude specific events, routines, and triggers:
includeEvents : [db1.include_event1, db2.include_event2...]
includeRoutines : [db1.include_procedure1, db2.include_function2...]
includeTriggers : [db1.include_trigger1, db2.include_trigger2...]
excludeEvents : [db1.exclude_event1, db2.exclude_event2...]
excludeTriggers : [db1.exclude_trigger1, db2.exclude_trigger2...]
excludeRoutines : [db1.exclude_procedure1, db2.exclude_function2...]
7. Backing up selected tables from different databases
Sometimes we may need to dump selected tables from different databases. This can be done with the includeTables option.
Syntax:
includeTables:["db1.table1", "db2.table2"....]
The following command dumps the users table from the percona database and the actor table from the sakila database:
MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {includeTables:["percona.users", "sakila.actor"], "compression":"none", "threads":6, chunking:false})
8. Instance-level dump
The util.dumpInstance command saves the data of the entire instance under the path "/backup/instance_dump". The system databases (mysql, sys, information_schema, performance_schema) are excluded. By default, all user accounts in the instance are dumped to the file @.users.sql, which contains the CREATE USER and GRANT statements for every user.
MySQL localhost JS > util.dumpInstance("/backup/instance_dump", {"compression":"none", "threads":6, chunking:false})
Other options:
users: false - skip dumping users
excludeUsers : ["user1","user2"] - exclude particular users
includeUsers : ["user1","user2"] - include particular users
excludeSchemas : ["db1","db2"] - exclude particular schemas
includeSchemas : ["db1","db2"] - include particular schemas
excludeTables : ["db1.table1","db2.table2"] - exclude particular tables
includeTables : ["db1.table1","db2.table2"] - include particular tables
9. Restoring data into a single database
The util.loadDump command is used to restore data. Loading a local dump requires the local_infile parameter to be enabled on the target server.
Syntax:
util.loadDump("/path/of/the/dump", {options})
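Since loading a local dump depends on local_infile being ON, it may need to be enabled first on the target server. One way to do this (a sketch; requires a privileged account):

```sql
-- enable at runtime (does not survive a restart)
SET GLOBAL local_infile = ON;
-- or persist the setting (MySQL 8.0)
SET PERSIST local_infile = ON;
```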
The following command restores the data into the database test_restore. When we need to restore into a different schema, we must pass the schema option: schema:"test_restore". Otherwise, the dump is restored into the schema it was taken from.
MySQL localhost SQL > create database test_restore;
Query OK, 1 row affected (0.3658 sec)
MySQL localhost SQL > \js
Switching to JavaScript mode...
MySQL localhost JS > util.loadDump("/home/vagrant/schema_dump", {schema:"test_restore", progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})
10. Restoring a full instance and configuring replication
Here, we simply load the full instance dump from the path /home/vagrant/instance_dump using 8 parallel threads.
MySQL localhost JS > util.loadDump("/home/vagrant/instance_dump", {progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})
Loading DDL and Data from '/home/vagrant/instance_dump' using 8 threads.
Opening dump...
Target is MySQL 8.0.33-25. Dump was produced from MySQL 8.0.32-24
NOTE: Load progress file detected for the instance but 'resetProgress' option was enabled. Load progress will be discarded and the whole dump will be reloaded.
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
8 thds loading / 100% (19.18 MB / 19.18 MB), 541.36 KB/s, 6 / 23 tables and partitions done
Recreating indexes - done
Executing common postamble SQL
23 chunks (100.00K rows, 19.18 MB) for 11 tables in 2 schemas were loaded in 11 sec (avg throughput 2.53 MB/s)
0 warnings were reported during the load.
Get the binary log file name and position from the file @.json, and configure replication:
[root@centos12 instance_dump]# cat @.json | grep -i binlog
    "binlogFile": "centos12-bin.000006",
    "binlogPosition": 760871466,
[root@centos12 instance_dump]#

MySQL localhost SQL > CHANGE REPLICATION SOURCE TO SOURCE_HOST="192.168.33.12", SOURCE_USER="bhuvan", SOURCE_PASSWORD="Bhuvan@123", SOURCE_LOG_FILE='centos12-bin.000006', SOURCE_LOG_POS=760871466;
Query OK, 0 rows affected, 2 warnings (0.1762 sec)

MySQL localhost SQL > START REPLICA;
Query OK, 0 rows affected (0.1156 sec)

MySQL localhost SQL > show replica status\G
*************************** 1. row ***************************
             Replica_IO_State: Waiting for source to send event
                  Source_Host: 192.168.33.12
                  Source_User: bhuvan
                  Source_Port: 3306
                Connect_Retry: 60
              Source_Log_File: centos12-bin.000006
          Read_Source_Log_Pos: 823234119
               Relay_Log_File: centos11-relay-bin.000002
                Relay_Log_Pos: 1152129
        Relay_Source_Log_File: centos12-bin.000006
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 762023266
              Relay_Log_Space: 62363195
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 718
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 100
                  Source_UUID: f46a1600-045e-11ee-809f-0800271333ce
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Waiting for dependent transaction to commit
           Source_Retry_Count: 86400
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace:
1 row in set (0.1470 sec)
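The manual step above can be scripted: read binlogFile and binlogPosition from @.json and emit the corresponding CHANGE REPLICATION SOURCE statement. A minimal Python sketch (the host and user arguments are placeholders; the password is intentionally left out and should be supplied separately):

```python
import json

def replication_statement(meta_path, host, user):
    """Build a CHANGE REPLICATION SOURCE statement from a dump's @.json metadata."""
    with open(meta_path) as f:
        meta = json.load(f)
    # binlogFile and binlogPosition are recorded in @.json when the dump
    # source had binary logging enabled
    return (
        "CHANGE REPLICATION SOURCE TO "
        f"SOURCE_HOST='{host}', SOURCE_USER='{user}', "
        f"SOURCE_LOG_FILE='{meta['binlogFile']}', "
        f"SOURCE_LOG_POS={meta['binlogPosition']};"
    )
```

The returned statement can then be run on the replica (adding SOURCE_PASSWORD or an SSL configuration as appropriate) before START REPLICA.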