Basic Usage of MySQL Shell
2024-06-06 11:10 abce

MySQL Shell provides three dump utilities:

·util.dumpTables(): dump individual tables
·util.dumpSchemas(): dump one or more schemas
·util.dumpInstance(): dump the whole instance
Using util.dumpTables()
Syntax:

```js
util.dumpTables(schema, tables, outputUrl[, options])
```
Where:
·schema: the schema the tables belong to
·tables: an array listing the tables to dump
·outputUrl: the output directory, which must be empty
For example, to dump a single table:

```js
util.dumpTables('myabc', ['abce'], '/data/dump')
```

This dumps the table abce from the myabc schema into the /data/dump directory.
```
MySQL localhost JS > util.dumpTables('myabc', ['abce'], '/data/dump')
Acquiring global read lock
Global read lock acquired
Initializing - done
1 tables and 0 views will be dumped.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
100% (1.82K rows / ~1.82K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s
Total duration: 00:00:00s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 573.68 KB
Compressed data size: 146.79 KB
Compression ratio: 3.9
Rows written: 1818
Bytes written: 146.79 KB
Average uncompressed throughput: 573.68 KB/s
Average compressed throughput: 146.79 KB/s
```
The dump produces a set of metadata, DDL, and data files in the output directory.
The default degree of parallelism is 4; it can be set explicitly:

```js
util.dumpTables('myabc', ['abce'], '/data/dump', {threads: 8})
```
The dump can also filter rows, using a JSON-style where condition keyed by schema.table:

```js
util.dumpTables('myabc', ['abce'], '/data/dump', {where: {"myabc.abce": "title like 'abc%'"}})
```
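Before running a large dump it can be useful to preview what would be exported. The dump utilities also accept a dryRun option; a sketch combining it with the options from the examples above:

```js
// Preview the dump: report what would be dumped, but write no files
util.dumpTables('myabc', ['abce'], '/data/dump', {
    threads: 8,
    where: {"myabc.abce": "title like 'abc%'"},
    dryRun: true  // remove this to perform the actual dump
})
```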
Loading the data

Whether the dump was produced with util.dumpTables(), util.dumpSchemas(), or util.dumpInstance(), the data is loaded back with util.loadDump().

```js
util.loadDump('/data/dump', {schema: 'myabc-2', threads: 8})
```

This loads the backup above into the new schema myabc-2. If the schema does not exist, it is created. If a target table already exists, the load reports an error.
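If pre-existing target objects are acceptable, util.loadDump() accepts an ignoreExistingObjects option. Note also that loading uses LOAD DATA LOCAL INFILE, so local_infile must be enabled on the target server. A sketch:

```js
// On the target server the load requires:
//   SET GLOBAL local_infile = ON;
util.loadDump('/data/dump', {
    schema: 'myabc-2',
    threads: 8,
    ignoreExistingObjects: true  // report, but do not fail on, pre-existing objects
})
```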
```
MySQL localhost JS > util.loadDump('/data/dump', {schema: 'myabc-2'})
Loading DDL and Data from '/data/dump' using 4 threads.
Opening dump...
Target is MySQL 8.0.36. Dump was produced from MySQL 8.0.36
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
Executing common postamble SQL
100% (573.68 KB / 573.68 KB), 0.00 B/s, 1 / 1 tables done
Recreating indexes - done
2 chunks (1.82K rows, 573.68 KB) for 1 tables in 1 schemas were loaded in 0 sec (avg throughput 573.68 KB/s)
0 warnings were reported during the load.
```
Using util.dumpSchemas()

It can dump several schemas at once:

```js
util.dumpSchemas(schemas, outputUrl[, options])
```

Where:
·schemas: an array of schema names
Dumping a single schema:

```js
util.dumpSchemas(['myabc'], '/data/dump', {threads: 4})
```
Loading the data:

```js
util.loadDump('/data/dump', {schema: 'myabc-3', threads: 8})
```

This loads the backup above into the new schema myabc-3, creating it if it does not exist.
Dumping multiple schemas:

```js
util.dumpSchemas(['myabc', 'myabc-2'], '/data/dump', {threads: 4})
```
```
MySQL localhost JS > util.dumpSchemas(['myabc', 'myabc-2'], '/data/dump', {threads: 4})
Acquiring global read lock
Global read lock acquired
Initializing - done
2 schemas will be dumped and within them 3 tables, 0 views.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
100% (8.76K rows / ~8.76K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s
Total duration: 00:00:00s
Schemas dumped: 2
Tables dumped: 3
Uncompressed data size: 2.51 MB
Compressed data size: 532.64 KB
Compression ratio: 4.7
Rows written: 8755
Bytes written: 532.64 KB
Average uncompressed throughput: 2.51 MB/s
Average compressed throughput: 532.64 KB/s
```
Loading the data:

```js
util.loadDump('/data/dump')
```

When the dump contains multiple schemas, the schema option is not needed. However, make sure the target instance does not already contain the same schemas, or tables with the same names inside them; otherwise the load fails with an error.
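When only some schemas from a multi-schema dump are wanted, util.loadDump() can filter with the includeSchemas (or excludeSchemas) option. A sketch, using the schema names from the example above:

```js
// Load only the myabc schema from the multi-schema dump
util.loadDump('/data/dump', {
    threads: 8,
    includeSchemas: ['myabc']
})
```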
```
MySQL localhost JS > util.loadDump('/data/dump', {threads: 8})
Loading DDL and Data from '/data/dump' using 8 threads.
Opening dump...
Target is MySQL 8.0.36. Dump was produced from MySQL 8.0.36
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
Executing common postamble SQL
100% (2.51 MB / 2.51 MB), 0.00 B/s, 3 / 3 tables done
Recreating indexes - done
6 chunks (8.76K rows, 2.51 MB) for 3 tables in 2 schemas were loaded in 0 sec (avg throughput 2.51 MB/s)
0 warnings were reported during the load.
```
Using util.dumpInstance()

```js
util.dumpInstance('/data/dump', {threads: 8})
```

Note that the MySQL system schemas (such as information_schema, performance_schema, and so on) are not dumped.
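User schemas can also be filtered out of an instance dump: util.dumpInstance() accepts excludeSchemas and excludeTables options. A sketch with hypothetical schema and table names:

```js
// Dump the whole instance except a scratch schema and one log table
util.dumpInstance('/data/dump', {
    threads: 8,
    excludeSchemas: ['scratch'],          // hypothetical schema name
    excludeTables: ['myabc.debug_log']    // hypothetical schema.table name
})
```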
Loading the data:

```js
util.loadDump('/data/dump', {threads: 8})
```
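These utilities can also be invoked non-interactively through MySQL Shell's command-line integration, where the API names are written in kebab-case. A sketch; the connection URIs are assumptions:

```shell
# Dump the whole instance, then load it elsewhere, without entering the JS prompt
mysqlsh root@localhost -- util dump-instance /data/dump --threads=8
mysqlsh root@target-host -- util load-dump /data/dump --threads=8
```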