Notes on mysqlsh Backup and Restore


1. Install MySQL Shell

1.1 Download URL

1.2 Install

[root@vm1 ~]# wget https://cdn.mysql.com//Downloads/MySQL-Shell/mysql-shell-8.0.35-linux-glibc2.17-x86-64bit.tar.gz
[root@vm1 ~]# tar xvf mysql-shell-8.0.35-linux-glibc2.17-x86-64bit.tar.gz -C /usr/local/
[root@vm1 ~]# mv /usr/local/mysql-shell-8.0.35-linux-glibc2.17-x86-64bit/ /usr/local/mysql-shell
[root@vm1 ~]# vi ~/.bash_profile     # add /usr/local/mysql-shell/bin to PATH
[root@vm1 ~]# source ~/.bash_profile
[root@vm1 ~]# mysqlsh

 MySQL  JS > \sql
Switching to SQL mode... Commands end with ;
 MySQL  SQL > 

2. Load test data

[root@vm1 sysbench]# sysbench --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=3380 --mysql-user=root --mysql-password=123456 --mysql-db=sbtest --table_size=250000 --tables=10 --events=0 --time=600  oltp_read_write prepare

3. Log in to the database with mysqlsh

[root@vm1 ~]# mysqlsh -uroot -p -h127.0.0.1
Please provide the password for 'root@127.0.0.1': ******
Save password for 'root@127.0.0.1'? [Y]es/[N]o/Ne[v]er (default No): 
MySQL Shell 8.2.1
 MySQL  127.0.0.1:33060+ ssl  JS > 
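
Note: without an explicit port, mysqlsh connects over the X Protocol port (33060 here), while the later examples use the classic port 3380. The connection can also be switched from inside the shell; a minimal sketch (the mysql:// scheme forces the classic protocol):

 MySQL  127.0.0.1:33060+ ssl  JS > shell.connect("mysql://root@127.0.0.1:3380")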

4. Backup and restore

  • Single-table export syntax: util.exportTable(table, outputUrl[, options])

4.1 Export, with or without a WHERE filter (default tab-separated format)

MySQL  127.0.0.1:33060+ ssl  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest1.txt")
MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest1where.txt", {"where" : "id > 10000"})
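
For large tables, the output can also be compressed on the fly; a sketch assuming the compression option ("gzip" or "zstd") available in recent MySQL Shell releases:

MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest1.txt.gz", {compression: "gzip", where: "id > 10000"})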

4.2 Export, with or without a WHERE filter (CSV format)

MySQL  127.0.0.1:33060+ ssl  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest100.txt",{dialect: "csv"})
MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest100where.txt", {"where" : "id > 10000",dialect: "csv"})

4.3 Import (default format):

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.importTable("file:///root/bak/sbtest1.txt", {schema: "sbtest", table: "sbtest100",  showProgress: true})
Importing from file '/root/bak/sbtest1.txt' to table `sbtest`.`sbtest100` in MySQL Server at 127.0.0.1:3380 using 1 thread
[Worker000] sbtest1.txt: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
100% (48.38 MB / 48.38 MB), 48.23 MB/s
File '/root/bak/sbtest1.txt' (48.38 MB) was imported in 1.3189 sec at 36.68 MB/s
Total rows affected in sbtest.sbtest100: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 
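
util.importTable relies on LOAD DATA LOCAL INFILE, so local_infile must be enabled on the target server. A large file can also be split into chunks and loaded in parallel; a sketch assuming the threads and bytesPerChunk options:

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.importTable("file:///root/bak/sbtest1.txt", {schema: "sbtest", table: "sbtest100", threads: 8, bytesPerChunk: "100M", showProgress: true})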

4.4 Import (CSV format):

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.importTable("file:///root/bak/sbtest100.txt", {schema: "sbtest", table: "sbtest100",  showProgress: true, dialect: "csv"})
Importing from file '/root/bak/sbtest100.txt' to table `sbtest`.`sbtest100` in MySQL Server at 127.0.0.1:3380 using 1 thread
[Worker000] sbtest100.txt: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
100% (49.63 MB / 49.63 MB), 48.23 MB/s
File '/root/bak/sbtest100.txt' (49.63 MB) was imported in 1.4062 sec at 35.30 MB/s
Total rows affected in sbtest.sbtest100: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 
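
For CSV files that start with a header row, or whose column order differs from the table, importTable accepts skipRows and columns; a sketch (data.csv is a hypothetical file):

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.importTable("file:///root/bak/data.csv", {schema: "sbtest", table: "sbtest100", dialect: "csv", skipRows: 1, columns: ["id", "k", "c", "pad"]})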

4.5 Multi-threaded dump and load

util.dumpInstance(outputUrl[, options]) 
util.dumpSchemas(schemas, outputUrl[, options])
util.dumpTables(schema, tables, outputUrl[, options])
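
dumpTables is the narrowest of the three. For example, to dump two tables from sbtest with four threads, a minimal sketch:

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpTables("sbtest", ["sbtest1", "sbtest2"], "/root/bak/tables", {threads: 4})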

4.6 Back up an instance

Note: the dump uses FLUSH TABLES WITH READ LOCK to obtain a consistent position (falling back to it when a backup lock is not available, as the output below shows).

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpInstance("/root/bak/instance", {showProgress: true})
Util.dumpInstance: Cannot proceed with the dump, the specified directory '/root/bak/instance' already exists at the target location /root/bak/instance and is not empty. (ArgumentError)

The target directory must be empty or not yet exist; after clearing it, the same command succeeds:

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpInstance("/root/bak/instance", {showProgress: true})
NOTE: Backup lock is not available to the account 'root'@'%' and DDL changes will not be blocked. The dump may fail with an error if schema changes are made while dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
6 out of 10 schemas will be dumped and within them 15 tables, 0 views.
1 out of 4 users will be dumped.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
101% (2.75M rows / ~2.71M rows), 1.29M rows/s, 264.73 MB/s uncompressed, 120.47 MB/s compressed                
Dump duration: 00:00:02s                                                                       
Total duration: 00:00:02s                                                                      
Schemas dumped: 6                                                                              
Tables dumped: 15                                                                              
Uncompressed data size: 532.20 MB                                                              
Compressed data size: 242.12 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 2750012                                                                          
Bytes written: 242.12 MB                                                                       
Average uncompressed throughput: 258.56 MB/s                                                   
Average compressed throughput: 117.63 MB/s                                                     
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 

Files produced by the dump:

  • @.done.json: records the dump's end time and the size of the dump set. Generated when the dump finishes.
  • @.json: records metadata about the dump, including the consistent position at dump time: binlogFile, binlogPosition, and gtidExecuted. This information can be used to set up replication.
  • @.sql, @.post.sql: these two files contain only comments. However, when loading the dump with util.loadDump, they can carry custom SQL: @.sql runs before the data is loaded, and @.post.sql runs after.
  • sbtest.json: records the tables, views, events, functions, and stored procedures of sbtest included in the dump.
  • *.tsv: the data files. A look at their contents:

# head -2 sbtest@sbtest1@0.tsv
1	6461363	68487932199-96439406143-93774651418-41631865787-96406072701-20604855487-25459966574-28203206787-41238978918-19503783441	22195207048-70116052123-74140395089-76317954521-98694025897
2	1112248	13241531885-45658403807-79170748828-69419634012-13605813761-77983377181-01582588137-21344716829-87370944992-02457486289	28733802923-10548894641-11867531929-71265603657-36546888392

  • TSV format: one record per line, with fields separated by tabs (\t).
  • sbtest@sbtest1.json: records table-level metadata such as column names and the field separator (fieldsTerminatedBy).
  • sbtest@sbtest1.sql: the CREATE TABLE statement for sbtest.sbtest1.
  • sbtest.sql: the CREATE DATABASE statement. Any stored procedures, functions, or events in the schema are also written to this file.
  • @.users.sql: CREATE USER and GRANT statements. The three internal accounts mysql.infoschema, mysql.session, and mysql.sys are not dumped by default.
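
A dump can also be rehearsed without writing any files; a sketch assuming the dryRun and excludeSchemas options:

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpInstance("/root/bak/instance2", {dryRun: true, excludeSchemas: ["ceshi"]})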

4.7 Back up multiple schemas

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpSchemas(["sbtest","ceshi"], "/root/bak/schemas",{threads:8})
NOTE: Backup lock is not available to the account 'root'@'%' and DDL changes will not be blocked. The dump may fail with an error if schema changes are made while dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
2 schemas will be dumped and within them 13 tables, 0 views.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
101% (2.75M rows / ~2.71M rows), 1.35M rows/s, 271.71 MB/s uncompressed, 123.68 MB/s compressed                
Dump duration: 00:00:02s                                                                       
Total duration: 00:00:02s                                                                      
Schemas dumped: 2                                                                              
Tables dumped: 13                                                                              
Uncompressed data size: 532.20 MB                                                              
Compressed data size: 242.12 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 2750008                                                                          
Bytes written: 242.12 MB                                                                       
Average uncompressed throughput: 259.92 MB/s                                                   
Average compressed throughput: 118.25 MB/s                                                     
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 
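
Conversely, individual tables can be excluded from a schema dump; a sketch assuming the excludeTables option (names must be schema-qualified):

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpSchemas(["sbtest"], "/root/bak/schemas2", {threads: 8, excludeTables: ["sbtest.sbtest10"]})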

4.8 Back up table structures only, without locking tables

 MySQL  127.0.0.1:3380 ssl  JS > util.dumpSchemas(["sbtest","ceshi"], "/root/bak/ddls",{threads:8, ddlOnly:true, consistent:false})
Initializing - done 
2 schemas will be dumped and within them 13 tables, 0 views.
Gathering information - done 
WARNING: The dumped value of gtid_executed is not guaranteed to be consistent
Writing global DDL files
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
0% (0 rows / ~2.71M rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s                                                          
Total duration: 00:00:00s                                                         
Schemas dumped: 2                                                                 
Tables dumped: 13                                                                 
Uncompressed data size: 0 bytes                                                   
Compressed data size: 0 bytes                                                     
Compression ratio: 0.0                                                            
Rows written: 0                                                                   
Bytes written: 0 bytes                                                            
Average uncompressed throughput: 0.00 B/s                                         
Average compressed throughput: 0.00 B/s              
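
The counterpart of ddlOnly is dataOnly, which skips the DDL and dumps only the rows; a minimal sketch:

 MySQL  127.0.0.1:3380 ssl  JS > util.dumpSchemas(["sbtest","ceshi"], "/root/bak/dataonly", {threads: 8, dataOnly: true})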

4.9 Back up a subset of tables

 MySQL  127.0.0.1:3380 ssl  JS > util.dumpSchemas(["sbtest"], "/root/bak/includeTables",{threads:8, consistent:false, includeTables:["sbtest.sbtest1"]})
Initializing - done 
1 schemas will be dumped and within them 1 out of 12 tables, 0 views.
Gathering information - done 
WARNING: The dumped value of gtid_executed is not guaranteed to be consistent
Writing global DDL files
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done       
Writing table metadata - done       
Starting data dump
101% (250.00K rows / ~246.67K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s                                                                    
Total duration: 00:00:00s                                                                   
Schemas dumped: 1                                                                           
Tables dumped: 1                                                                            
Uncompressed data size: 48.38 MB                                                            
Compressed data size: 22.01 MB                                                              
Compression ratio: 2.2                                                                      
Rows written: 250000                                                                        
Bytes written: 22.01 MB                                                                     
Average uncompressed throughput: 48.38 MB/s                                                 
Average compressed throughput: 22.01 MB/s                                                   
 MySQL  127.0.0.1:3380 ssl  JS > 
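
Rows can also be filtered at dump time; a sketch assuming the where option (a dictionary keyed by schema.table) available in recent MySQL Shell releases:

 MySQL  127.0.0.1:3380 ssl  JS > util.dumpTables("sbtest", ["sbtest1"], "/root/bak/partial", {threads: 4, where: {"sbtest.sbtest1": "id <= 10000"}})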

5. Restore data

Note: by default, loading data writes to the binlog. To disable binlog writes, enable the skipBinlog option.

5.1 Restore instance data

 MySQL  172.17.0.2:33060+ ssl  JS > util.loadDump("/root/bak/instance", {threads: 8})
Loading DDL and Data from '/root/bak/instance' using 8 threads.
Opening dump...
Target is MySQL 8.0.35. Dump was produced from MySQL 8.0.32
Scanning metadata - done       
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done       
Executing view DDL - done       
Starting data load
3 thds loading \ 100% (532.20 MB / 532.20 MB), 34.93 MB/s, 12 / 15 tables done
Recreating indexes - done       
Executing common postamble SQL                                                
17 chunks (2.75M rows, 532.20 MB) for 15 tables in 6 schemas were loaded in 12 sec (avg throughput 47.29 MB/s)
0 warnings were reported during the load.                                     
 MySQL  172.17.0.2:33060+ ssl  JS > 
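
A load can likewise be rehearsed or narrowed to part of the dump; a sketch assuming the dryRun and includeSchemas options:

 MySQL  172.17.0.2:33060+ ssl  JS > util.loadDump("/root/bak/instance", {threads: 8, dryRun: true, includeSchemas: ["sbtest"]})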

5.2 Load the previously dumped ceshi and sbtest schemas, with binlog writes disabled during the load

 MySQL  172.17.0.2:33060+ ssl  JS > util.loadDump("/root/bak/schemas", {threads: 8,skipBinlog: true})
Loading DDL and Data from '/root/bak/schemas' using 8 threads.
Opening dump...
Target is MySQL 8.0.35. Dump was produced from MySQL 8.0.32
Scanning metadata - done       
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done       
Executing view DDL - done       
Starting data load
2 thds loading | 100% (532.20 MB / 532.20 MB), 41.26 MB/s, 11 / 13 tables done
Executing common postamble SQL                                                
Recreating indexes - done       
14 chunks (2.75M rows, 532.20 MB) for 13 tables in 2 schemas were loaded in 10 sec (avg throughput 53.20 MB/s)
0 warnings were reported during the load.                                     
 MySQL  172.17.0.2:33060+ ssl  JS > 
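
An interrupted load resumes from its progress file when re-run. When the target already contains the dumped schemas, the ignoreExistingObjects option tells loadDump not to abort on objects that already exist; a sketch:

 MySQL  172.17.0.2:33060+ ssl  JS > util.loadDump("/root/bak/schemas", {threads: 8, ignoreExistingObjects: true})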
