1. Install dependencies
dnf install -y cmake gcc gcc-c++ git make
2. Download and install mydumper
# Download
wget https://github.com/mydumper/mydumper/releases/download/v0.14.1-1/mydumper-0.14.1-1.el9.x86_64.rpm
# Install
dnf install mydumper-0.14.1-1.el9.x86_64.rpm -y
3. mydumper options
# mydumper --help
Usage:
  mydumper [OPTION…] multi-threaded MySQL dumping

Connection Options
  -h, --host                  The host to connect to
  -u, --user                  Username with the necessary privileges
  -p, --password              User password
  -a, --ask-password          Prompt for user password
  -P, --port                  TCP/IP port to connect to
  -S, --socket                UNIX domain socket file to use for connection
  -C, --compress-protocol     Use compression on the MySQL connection
  --ssl                       Connect using SSL
  --ssl-mode                  Desired security state of the connection to the server: DISABLED, PREFERRED, REQUIRED, VERIFY_CA, VERIFY_IDENTITY
  --key                       The path name to the key file
  --cert                      The path name to the certificate file
  --ca                        The path name to the certificate authority file
  --capath                    The path name to a directory that contains trusted SSL CA certificates in PEM format
  --cipher                    A list of permissible ciphers to use for SSL encryption
  --tls-version               Which protocols the server permits for encrypted connections

Filter Options
  -x, --regex                 Regular expression for 'db.table' matching
  -B, --database              Database to dump
  -i, --ignore-engines        Comma delimited list of storage engines to ignore
  --where                     Dump only selected records
  -U, --updated-since         Use Update_time to dump only tables updated in the last U days
  -O, --omit-from-file        File containing a list of database.table entries to skip, one per line (skips before applying regex option)
  -T, --tables-list           Comma delimited table list to dump (does not exclude regex option). Table name must include database name. For instance: test.t1,test.t2

Lock Options
  -z, --tidb-snapshot         Snapshot to use for TiDB
  -k, --no-locks              Do not execute the temporary shared read lock. WARNING: This will cause inconsistent backups
  --use-savepoints            Use savepoints to reduce metadata locking issues, needs SUPER privilege
  --no-backup-locks           Do not use Percona backup locks
  --lock-all-tables           Use LOCK TABLE for all, instead of FTWRL
  --less-locking              Minimize locking time on InnoDB tables
  --trx-consistency-only      Transactional consistency only

PMM Options
  --pmm-path                  which default value will be /usr/local/percona/pmm2/collectors/textfile-collector/high-resolution
  --pmm-resolution            which default will be high

Exec Options
  --exec-threads              Amount of threads to use with --exec
  --exec                      Command to execute using the file as parameter
  --exec-per-thread           Set the command that will receive by STDIN and write in the STDOUT into the output file
  --exec-per-thread-extension Set the extension for the STDOUT file when --exec-per-thread is used

If long query running found:
  --long-query-retries        Retry checking for long queries, default 0 (do not retry)
  --long-query-retry-interval Time to wait before retrying the long query check in seconds, default 60
  -l, --long-query-guard      Set long query timer in seconds, default 60
  -K, --kill-long-queries     Kill long running queries (instead of aborting)

Job Options
  --max-rows                  Limit the number of rows per block after the table is estimated, default 1000000
  --char-deep
  --char-chunk
  -r, --rows                  Try to split tables into chunks of this many rows
  --split-partitions          Dump partitions into separate files. This option overrides the --rows option for partitioned tables

Checksum Options
  -M, --checksum-all          Dump checksums for all elements
  --data-checksums            Dump table checksums with the data
  --schema-checksums          Dump schema table and view creation checksums
  --routine-checksums         Dump triggers, functions and routines checksums

Objects Options
  -m, --no-schemas            Do not dump table schemas with the data and triggers
  -Y, --all-tablespaces       Dump all the tablespaces
  -d, --no-data               Do not dump table data
  -G, --triggers              Dump triggers. By default, it does not dump triggers
  -E, --events                Dump events. By default, it does not dump events
  -R, --routines              Dump stored procedures and functions. By default, it does not dump stored procedures nor functions
  --views-as-tables           Export VIEWs as if they were tables
  -W, --no-views              Do not dump VIEWs

Statement Options
  --load-data
  --csv                       Automatically enables --load-data and sets variables to export in CSV format
  --fields-terminated-by
  --fields-enclosed-by
  --fields-escaped-by         Single character that is going to be used to escape characters in the LOAD DATA statement, default: '\'
  --lines-starting-by         Adds the string at the beginning of each row. When --load-data is used it is added to the LOAD DATA statement. It also affects INSERT INTO statements when used
  --lines-terminated-by       Adds the string at the end of each row. When --load-data is used it is added to the LOAD DATA statement. It also affects INSERT INTO statements when used
  --statement-terminated-by   This might never be used, unless you know what you are doing
  -N, --insert-ignore         Dump rows with INSERT IGNORE
  --replace                   Dump rows with REPLACE
  --complete-insert           Use complete INSERT statements that include column names
  --hex-blob                  Dump binary columns using hexadecimal notation
  --skip-definer              Removes DEFINER from the CREATE statement. By default, statements are not modified
  -s, --statement-size        Attempted size of INSERT statement in bytes, default 1000000
  --tz-utc                    SET TIME_ZONE='+00:00' at top of dump to allow dumping of TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones, defaults to on; use --skip-tz-utc to disable
  --skip-tz-utc
  --set-names                 Sets the names, use it at your own risk, default binary

Extra Options
  -F, --chunk-filesize        Split tables into chunks of this output file size. This value is in MB
  --exit-if-broken-table-found  Exits if a broken table has been found
  --success-on-1146           Do not increment the error count and log a Warning instead of Critical in case a table doesn't exist
  -e, --build-empty-files     Build dump files even if no data available from table
  --no-check-generated-fields Queries related to generated fields are not going to be executed. It will lead to restoration issues if you have generated columns
  --order-by-primary          Sort the data by Primary Key or Unique key if no primary key exists
  -c, --compress              Compress output files

Daemon Options
  -D, --daemon                Enable daemon mode
  -I, --snapshot-interval     Interval between each dump snapshot (in minutes), requires --daemon, default 60
  -X, --snapshot-count        Number of snapshots, default 2

Application Options:
  -?, --help                  Show help options
  -o, --outputdir             Directory to output files to
  --stream                    It will stream over STDOUT once the files have been written. Since v0.12.7-1, accepts NO_DELETE, NO_STREAM_AND_NO_DELETE and TRADITIONAL which is the default value and used if no parameter is given
  -L, --logfile               Log file name to use, by default stdout is used
  --disk-limits               Set the limit to pause and resume if it determines there is not enough disk space. Accepts values like: '<resume>:<pause>' in MB. For instance: 100:500 will pause when there is only 100MB free and will resume if 500MB are available
  -t, --threads               Number of threads to use, default 4
  -V, --version               Show the program version and exit
  --identifier-quote-character  Sets the identifier quote character used in INSERT statements on mydumper and to split statements on myloader. Use SQL_MODE to change the CREATE TABLE statements. Possible values are: BACKTICK and DOUBLE_QUOTE. Default: BACKTICK
  -v, --verbose               Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
  --defaults-file             Use a specific defaults file. Default: /etc/mydumper.cnf
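The --defaults-file option listed above reads options from a MySQL-style configuration file, which avoids putting the password on the command line. A minimal sketch of an /etc/mydumper.cnf, assuming the usual [mydumper] section; the host, credentials and paths are illustrative, not from the original post:

```ini
# /etc/mydumper.cnf -- illustrative values only
[mydumper]
host = 127.0.0.1
user = root
password = rootroot
outputdir = /data/backup/all_db/
threads = 4
```

With this file in place, a plain `mydumper` invocation picks up these defaults; command-line flags still override them.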
4. myloader options
# myloader --help
Usage:
  myloader [OPTION…] multi-threaded MySQL loader

Connection Options
  -h, --host                  The host to connect to
  -u, --user                  Username with the necessary privileges
  -p, --password              User password
  -a, --ask-password          Prompt for user password
  -P, --port                  TCP/IP port to connect to
  -S, --socket                UNIX domain socket file to use for connection
  -C, --compress-protocol     Use compression on the MySQL connection
  --ssl                       Connect using SSL
  --ssl-mode                  Desired security state of the connection to the server: DISABLED, PREFERRED, REQUIRED, VERIFY_CA, VERIFY_IDENTITY
  --key                       The path name to the key file
  --cert                      The path name to the certificate file
  --ca                        The path name to the certificate authority file
  --capath                    The path name to a directory that contains trusted SSL CA certificates in PEM format
  --cipher                    A list of permissible ciphers to use for SSL encryption
  --tls-version               Which protocols the server permits for encrypted connections

Filter Options
  -x, --regex                 Regular expression for 'db.table' matching
  -s, --source-db             Database to restore
  --skip-triggers             Do not import triggers. By default, it imports triggers
  --skip-post                 Do not import events, stored procedures and functions. By default, it imports events, stored procedures and functions
  --no-data                   Do not dump or import table data
  -O, --omit-from-file        File containing a list of database.table entries to skip, one per line (skips before applying regex option)
  -T, --tables-list           Comma delimited table list to dump (does not exclude regex option). Table name must include database name. For instance: test.t1,test.t2

PMM Options
  --pmm-path                  which default value will be /usr/local/percona/pmm2/collectors/textfile-collector/high-resolution
  --pmm-resolution            which default will be high

Execution Options
  -e, --enable-binlog         Enable binary logging of the restore data
  --innodb-optimize-keys      Creates the table without the indexes and adds them at the end. Options: AFTER_IMPORT_PER_TABLE and AFTER_IMPORT_ALL_TABLES. Default: AFTER_IMPORT_PER_TABLE
  --purge-mode                This specifies the truncate mode which can be: NONE, DROP, TRUNCATE and DELETE
  --disable-redo-log          Disables the REDO_LOG and enables it after, doesn't check initial status
  -o, --overwrite-tables      Drop tables if they already exist
  --serialized-table-creation Table recreation will be executed in series, one thread at a time
  --stream                    It will receive the stream from STDIN and create the files on disk before starting to process. Since v0.12.7-1, accepts NO_DELETE, NO_STREAM_AND_NO_DELETE and TRADITIONAL which is the default value and used if no parameter is given

Threads Options
  --max-threads-per-table     Maximum number of threads per table to use, default 4
  --max-threads-for-index-creation  Maximum number of threads for index creation, default 4

Statement Options
  -r, --rows                  Split the INSERT statement into this many rows
  -q, --queries-per-transaction  Number of queries per transaction, default 1000
  --append-if-not-exist       Appends IF NOT EXISTS to the CREATE TABLE statements. This will be removed when https://bugs.mysql.com/bug.php?id=103791 has been implemented
  --set-names                 Sets the names, use it at your own risk, default binary
  --skip-definer              Removes DEFINER from the CREATE statement. By default, statements are not modified

Application Options:
  -?, --help                  Show help options
  -d, --directory             Directory of the dump to import
  -L, --logfile               Log file name to use, by default stdout is used
  -B, --database              An alternative database to restore into
  --resume                    Expect to find resume file in backup dir and will only process those files
  -t, --threads               Number of threads to use, default 4
  -V, --version               Show the program version and exit
  --identifier-quote-character  Sets the identifier quote character used in INSERT statements on mydumper and to split statements on myloader. Use SQL_MODE to change the CREATE TABLE statements. Possible values are: BACKTICK and DOUBLE_QUOTE. Default: BACKTICK
  -v, --verbose               Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
  --defaults-file             Use a specific defaults file. Default: /etc/mydumper.cnf
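Because both tools document a --stream option above, a dump can in principle be piped from one server straight into another without landing on disk. A hedged sketch; the host names, credentials and database are placeholders, and the script only prints the pipeline so it can be reviewed before running:

```shell
#!/bin/sh
# Stream a dump from a source server directly into a target server.
# SRC_HOST, DST_HOST and DB are illustrative placeholders -- adjust them.
SRC_HOST=10.0.0.1
DST_HOST=10.0.0.2
DB=demo

# -a prompts for the password interactively instead of passing it on the CLI.
PIPELINE="mydumper -u root -a -h $SRC_HOST -B $DB --stream | myloader -u root -a -h $DST_HOST -o --stream"

# Print the pipeline for review instead of executing it.
echo "$PIPELINE"
```

Swapping `echo` for `eval` runs the migration; leaving it as a printout makes the script safe to test on a machine without MySQL access.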
5. Using mydumper
# Back up all databases
mydumper -u root -p rootroot -o /data/backup/all_db/

# Back up all databases, including triggers, events, stored procedures and functions
mydumper -u root -p rootroot -G -R -E -o /data/backup/all/

# Back up a specific database
mydumper -u root -p rootroot -G -R -E -B demo -o /data/backup/demo/

# Use a regex to exclude the system schemas
mydumper -u root -p rootroot -G -R -E --regex '^(?!(mysql|sys))' -o /data/backup/db

# Back up specific tables
mydumper -u root -p rootroot -B demo -T table1,table2 -o /data/backup/table/

# Back up table schemas only
mydumper -u root -p rootroot -d -B demo -o /data/backup/nodata/

# Back up table data only
mydumper -u root -p rootroot -m -B demo -o /data/backup/noschema/

# Compressed backup of a single table
mydumper -u root -p rootroot -B demo -T table1 -c -o /data/backup/compress/
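The one-liners above can be wrapped into a small dated-backup script. A minimal sketch; BACKUP_ROOT and the credentials are assumptions, and the mydumper command is composed and echoed rather than executed, so the script is safe to dry-run:

```shell
#!/bin/sh
# Compose a dated full-backup command from the examples above.
# BACKUP_ROOT is an illustrative assumption -- point it at your backup volume.
BACKUP_ROOT=/data/backup
OUTDIR="$BACKUP_ROOT/all_db_$(date +%F)"

# -G -R -E include triggers, routines and events; the negative-lookahead
# regex skips the mysql and sys system schemas, as in the examples above.
# -a prompts for the password instead of embedding it in the command.
CMD="mkdir -p $OUTDIR && mydumper -u root -a -G -R -E --regex '^(?!(mysql|sys))' -o $OUTDIR"

# Print the command for review; replace echo with: eval "$CMD" to run it.
echo "$CMD"
```

Run from cron, one such script yields one self-contained directory per day, which myloader can later restore with `-d`.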
6. Using myloader
# Restore a full backup (existing tables are dropped first)
myloader -u root -p rootroot -o -d /data/backup/all

# Restore a specific database from a full backup
myloader -u root -p rootroot -s demo -o -d /data/backup/demo

# Restore one database's backup into a different database (the target database is created if it does not exist)
myloader -u root -p rootroot -B recover_demo -s demo -o -d /data/backup/demo
myloader -u root -p rootroot -B recover_demo -o -d /data/backup/demo

# Enable binlog during the restore (needed when replicas must receive the data)
myloader -u root -p rootroot -e -o -d /data/backup/demo

# A single table cannot be restored this way; source its files from the mysql client instead
source demo.table1-schema.sql   # restore the table schema
source demo.table1.sql          # restore the table data
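The per-table files sourced above follow mydumper's fixed naming scheme: db.table-schema.sql holds the DDL and db.table.sql the data. The manual source steps can therefore be scripted; this sketch only prints the mysql commands in the required order (schema before data), and the directory, database and table names are illustrative:

```shell
#!/bin/sh
# Print the restore commands for one table from a mydumper directory.
# DUMP_DIR, DB and TABLE are illustrative -- adjust to your backup.
DUMP_DIR=/data/backup/demo
DB=demo
TABLE=table1

# mydumper's per-table file layout: <db>.<table>-schema.sql and <db>.<table>.sql
SCHEMA_FILE="$DUMP_DIR/$DB.$TABLE-schema.sql"
DATA_FILE="$DUMP_DIR/$DB.$TABLE.sql"

# The schema must be restored before the data.
echo "mysql -u root -p $DB < $SCHEMA_FILE"
echo "mysql -u root -p $DB < $DATA_FILE"
```

Pipe the output through `sh` (after review) to perform the actual restore.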
Reference: https://github.com/mydumper/mydumper