10. Backup and Recovery

Introduction to Backup Types

  • Backups can be divided into physical backups and logical backups

    • A physical backup is made by copying the database files. This approach suits databases that are very large, hold important data, and must be restored quickly
    • A logical backup is made by dumping the database's logical structure (create database/table statements) and its data (insert statements or text files). This approach suits databases that are not very large, cases where you need to edit the exported file, or cases where you want to rebuild the database on a different type of server
    • Physical backup is usually faster than logical backup. Its backup and restore granularity is the whole database or individual files; whether a single table can be restored depends on the storage engine: a MyISAM table has its own independent files and can be restored on its own, while an InnoDB table may have its own file or may live in a shared data file
    • Physical backups normally require the database to be shut down; if one is taken while the database is running, no modifications may happen during the backup
    • Logical backup is slower than physical backup because the server has to be queried and the contents converted into the dump format; the output file is usually larger than a physical backup; it does not include configuration files or log files; the backup and restore granularity can be all databases, a single database, or a single table; a logical backup must be taken while the database is running. Tools for logical backups include
      • mysqldump
      • select … into outfile
  • Backups can also be divided into online backups and offline backups

    • An online backup is taken while the database is running
    • An offline backup is taken while the database is shut down
  • Backups can also be divided into local backups and remote backups

    • A local backup is initiated and executed on the same host as the running MySQL instance
    • A remote backup is initiated and executed on a different host from the running MySQL instance
    • For example, mysqldump can connect to the local MySQL server or to a remote one. Likewise, select … into outfile can be issued from a local or a remote MySQL client, but the generated file is always written on the host where the MySQL instance runs. For a physical backup the process is started on the MySQL host, but the destination may be some remote storage
  • Backups can further be divided into full backups and incremental backups

    • A full backup contains all the data, whereas an incremental backup contains only the changes made within a given time range
    • Full backups are produced with the physical or logical methods described above
    • Incremental backups in MySQL rely on the binary log

Backup Methods

  • Back up with the mysqldump command

  • Back up by copying the physical table files: this works when the current storage engine keeps each table in its own data files. If the database is running, the table must first be locked read-only to block modifications during the backup: FLUSH TABLES tbl_list WITH READ LOCK. This table-level approach works well for MyISAM, whose tables naturally consist of three separate files (.frm, .MYD and .MYI), but is poorly supported for InnoDB tables

  • Generate text files with select … into outfile: one way is SELECT * INTO OUTFILE 'file_name' FROM tbl_name, which writes a text file on the server; the other is mysqldump with the --tab option. SELECT … INTO OUTFILE produces only the table data, not the table structure (mysqldump --tab also writes a .sql file with the structure)

  • MySQL incremental backup: enable the log-bin option on the instance and copy the newly generated binary logs to the backup location (a sketch follows this list)

  • Run full or incremental backups with the Xtrabackup tool
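
For the binary-log based incremental backup above, a minimal sketch of the procedure is shown below; the log base name mysql-bin, the data directory and the backup-host destination are assumptions for illustration, not values taken from this environment:
# my.cnf: binary logging must be enabled (requires a restart); the server-id value is assumed
# [mysqld]
# server-id=1
# log-bin=mysql-bin

# rotate to a new binary log so that everything up to this point sits in closed log files
mysql -u root -p -e "FLUSH LOGS;"

# copy every closed binary log (all except the one currently being written) to the backup host
cd /usr/local/mysql/data
current=$(mysql -u root -p -N -e "SHOW MASTER STATUS;" | awk '{print $1}')
for f in mysql-bin.[0-9]*; do
    [ "$f" = "$current" ] || scp "$f" backup-host:/backup/binlogs/
done
During recovery these log files are replayed with mysqlbinlog, as shown in the point-in-time recovery section later in this chapter.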

Physical File Copy

• For a MyISAM table: if no writes happen to the table while it is being copied, its files can be copied directly
CREATE TABLE `students_myisam` (
 `sid` int(11) NOT NULL,
 `sname` varchar(64) DEFAULT NULL,
 `gender` int(11) DEFAULT NULL,
 `dept_id` int(11) DEFAULT NULL,
 PRIMARY KEY (`sid`),
 KEY `idx_sname` (`sname`),
 KEY `idx_gender` (`gender`),
 KEY `dept_id` (`dept_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
insert into students_myisam values(1,'a',1,1),(2,'b',2,2),(3,'c',3,3);

Copy the three files that belong to this table, students_myisam.frm, students_myisam.MYD and students_myisam.MYI, into the corresponding database directory of another MySQL instance
[root@master course]# sftp slave1
sftp> cd /usr/local/mysql/data/test2
sftp> put students_myisam*
sftp> exit
[root@slave1 ~]# cd /usr/local/mysql/data/test2
[root@slave1 test2]# chown mysql:mysql students_myisam.*
[root@slave1 test2]# mysql -u root -p
mysql> use test2;
mysql> select * from students_myisam; 
+-----+-------+--------+---------+
| sid | sname | gender | dept_id | 
+-----+-------+--------+---------+
| 1 | a | 1 | 1 |
| 2 | b | 2 | 2 |
| 3 | c | 3 | 3 |
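
If the source instance is running and the table may receive writes while the files are being copied, the table-level read lock mentioned in the method list above can be held for the duration of the copy. A minimal sketch, assuming /backup is the copy destination:
-- session 1: flush the table to disk and take a read lock on it; keep this session open
mysql> FLUSH TABLES students_myisam WITH READ LOCK;

# a shell on the same host: copy the three table files while the lock is held
[root@master test2]# cp students_myisam.frm students_myisam.MYD students_myisam.MYI /backup/

-- session 1: release the lock once the copy has finished
mysql> UNLOCK TABLES;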


• For an InnoDB table: even with innodb_file_per_table=on, copying the files directly does not work
CREATE TABLE `students` (
 `sid` int(11) NOT NULL,
 `sname` varchar(64) DEFAULT NULL,
 `gender` int(11) DEFAULT NULL,
 `dept_id` int(11) DEFAULT NULL,
 PRIMARY KEY (`sid`),
 KEY `idx_sname` (`sname`),
 KEY `idx_gender` (`gender`),
 KEY `dept_id` (`dept_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
insert into students values(1,'a',1,1),(2,'b',2,2),(3,'c',3,3);
Copy the two files that belong to this table, students.frm and students.ibd, into the corresponding database directory of another MySQL instance
[root@master course]# sftp slave1
sftp> cd /usr/local/mysql/data/test2
sftp> put students.*
sftp> exit
[root@slave1 ~]# cd /usr/local/mysql/data/test2
[root@slave1 test2]# chown mysql:mysql students.*
[root@slave1 test2]# mysql -u root -p
mysql> use test2;
mysql> select * from students;
ERROR 1146 (42S02): Table 'test2.students' doesn't exist
Check the error log:
2017-07-10T12:44:32.998684Z 3 [Warning] InnoDB: Cannot open table test2/students from the internal 
data dictionary of InnoDB though the .frm file for the table exists. Please refer to 
http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html for how to resolve the issue.


• For InnoDB, backup and restore can instead be done by copying the entire data directory
[root@master mysql]# /etc/init.d/mysql.server stop
Shutting down MySQL.. SUCCESS! 
[root@master mysql]# tar -zcvf data.tar.gz data
[root@master mysql]# sftp slave1
Connected to slave1.
sftp> cd /usr/local/mysql/
sftp> put data.tar.gz
[root@slave1 mysql]# mv data data_bak
[root@slave1 mysql]# tar -zxvf data.tar.gz
[root@slave1 mysql]# /etc/init.d/mysql.server start

mysqldump

The mysqldump tool generates MySQL logical backup files, whose contents are re-executable SQL statements that recreate the database objects and their data

The three ways mysqldump can be invoked:
shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases

Key options:
--host=host_name, -h host_name specifies the host of the target database to dump; the default is localhost

--user=user_name, -u user_name specifies the user name used to connect to the target database

--password[=password], -p[password] specifies the password used to connect to the target database

--port=port_num, -P port_num specifies the port used to connect to the target database

--add-drop-database adds a drop database statement before each create database statement when --databases or --all-databases is used

--add-drop-table adds a drop table statement before each create table statement

--add-drop-trigger adds a drop trigger statement before each create trigger statement

--replace inserts data with replace statements instead of insert statements

--default-character-set=charset_name specifies the default character set; the default is utf8

--set-charset writes a SET NAMES default_character_set statement into the dump file; enabled by default

--dump-slave[=value] dumps from a replication slave and includes a change master to statement, so the dump file can be used directly to set up another replication slave. If value is omitted or set to 1, the change master to statement is written into the dump file; if value is 2 it is also written, but commented out

--master-data[=value] dumps from a replication master; like --dump-slave, the dump file can be used directly to set up another replication slave, and value has the same meaning as for --dump-slave. This option automatically turns on --lock-all-tables unless --single-transaction is also used

--tab=dir_name, -T dir_name writes the backup as text files into the directory dir_name; each table produces two files, a .sql file with the table structure and a .txt file with the table data

--all-databases, -A dumps all tables in all databases

--databases, -B dumps one or more named databases

--ignore-table=db_name.tbl_name skips the named table during the dump; to skip several tables, give the option multiple times

--no-data, -d dumps only the table structure, not the data

--routines, -R also dumps stored procedures and functions

--triggers also dumps triggers

--where='where_condition', -w 'where_condition' dumps only rows matching the given condition, for example

		--where="user='jimf'"

		-w"userid>1"

		-w"userid<1"

--lock-all-tables, -x takes a read lock on every table in every database for the duration of the dump

--no-autocommit wraps each table's dumped data in SET autocommit=0 and COMMIT statements

--single-transaction sets the transaction isolation level to REPEATABLE READ and issues START TRANSACTION at the beginning of the dump; the dump then runs inside a consistent snapshot and does not block other reads or writes
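
For example, a consistent online dump of an InnoDB database can combine several of the options above; the host, port, database and output file name below are only placeholders:
[root@master ~]# mysqldump -u root -p -P 3308 --single-transaction --master-data=2 --routines --triggers --add-drop-table course >course_full.sql
Here --single-transaction provides a consistent InnoDB snapshot without holding table locks for the whole dump, while --master-data=2 records the binary log position as a commented CHANGE MASTER TO statement that can later be used for point-in-time recovery or for building a replication slave.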


Dump a single database:
[root@master ~]# mysqldump -uroot -p -P3308 --databases course>backup.sql

Dump multiple databases:
[root@master ~]# mysqldump -u root -p -P 3308 --databases course test>course.sql
[root@master ~]# mysqldump -u root -p -P 3308 -B course test>course.sql

Dump all databases:
[root@master ~]# mysqldump -u root -p -P 3308 --all-databases>course.sql

Dump a few tables from one database:
[root@master ~]# mysqldump -u root -p -P 3308 course students students_myisam>course.sql

Dump only the data of the course database, without the table structure:
[root@master ~]# mysqldump -u root -p -P 3308 --no-create-info course>course.sql

Dump only the data of the students and students_myisam tables in the course database:
[root@master ~]# mysqldump -u root -p -P 3308 --no-create-info course students students_myisam>course.sql

Dump only the table structure of the course database:
[root@master ~]# mysqldump -u root -p -P 3308 --no-data course>course.sql

Dump the structure and data of the course database, excluding the teacher and score tables:
[root@master ~]# mysqldump -u root -p -P 3308 --ignore-table=course.teacher --ignore-table=course.score course>course.sql

Dump the tables, stored procedures/functions, and triggers of the course database:
[root@master ~]# mysqldump -u root -p -P 3308 --routines --triggers course>course.sql

Dump rows from the course database that match a WHERE condition:
[root@master ~]# mysqldump -u root -p -P 3308 --where="sid in (1,2)" course students students_myisam>course.sql

Dump the course database remotely; the output file is written on the server where the command is issued:
[root@slave1 course]# mysqldump -u root -p -P 3308 -h 192.168.237.128 course >course.sql
Enter password: 
[root@slave1 course]# ls
course.sql



FLUSH TABLES WITH READ LOCK (FTWRL for short) is mainly used by backup tools to obtain a consistent backup (data matching a binlog position). Because FTWRL has to hold two global MDL locks and also close all table objects, it is a very heavy command, and running it can easily leave the whole server hanging
FTWRL consists of 3 steps:
1. take the global read lock (lock_global_read_lock)
2. clear the table cache (close_cached_tables)
3. take the global COMMIT lock (make_global_read_lock_block_commit)

Impact of each FTWRL step
 Taking the global read lock blocks all update operations; while tables are being closed, a long-running query can make the close wait, and then every query and update touching that table has to wait as well; taking the global COMMIT lock blocks commits of active transactions
 
For example, run in the first session:
mysql> flush tables with read lock;
In a second session a write operation is blocked:
mysql> update dept set dept_name='bcd'; ## waits until lock wait timeout


Difference between FLUSH TABLES WITH READ LOCK and LOCK TABLES ... READ LOCAL:
In the first session run an update, which takes an exclusive lock:
mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> update dept set dept_name='aaa';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 4 Changed: 0 Warnings: 0
In a second shell window run the dump:
When the --master-data option is present, the dump succeeds
root@ip-172-31-18-152:/usr/local/mysql/bin# ./mysqldump -u root -p -P 3307 --master-data course >course.sql
Enter password:
With a plain dump, a lock wait occurs
root@ip-172-31-18-152:/usr/local/mysql/bin# ./mysqldump -u root -p -P 3307 course >course.sql
Enter password:
mysql> show processlist;
 18 | root | localhost | course | Query | 369 | Waiting for table metadata lock | LOCK TABLES `course` READ /*!32311 LOCAL */,`dept` READ /*!32311 LOCAL */,`score` READ /*!32311 LOCA |
 
 
Under the REPEATABLE READ isolation level, the difference between start transaction and start transaction with consistent snapshot:
1) With START TRANSACTION, the transaction effectively starts with its first statement, and the first select establishes the consistent-read snapshot; 2) With START TRANSACTION WITH CONSISTENT SNAPSHOT, the consistent-read snapshot is established immediately, and the transaction of course starts at that point too;
Experiment 1:
session A 										session B
mysql> set tx_isolation='repeatable-read'; 		mysql> set tx_isolation='repeatable-read';
 												mysql> select * from t1;
 												Empty set (0.01 sec)
mysql> start transaction;
 												mysql> insert into t1(c1,c2) values(1,1);
mysql> select * from t1;
+----+------+
| c1 | c2 |
+----+------+
| 1 | 1 |
+----+------+
1 row in set (0.00 sec)
Experiment 2:
mysql> set tx_isolation='repeatable-read';		mysql> set tx_isolation='repeatable-read';
 												mysql> select * from t1;
 												Empty set (0.01 sec)
mysql> start transaction with consistent snapshot;
 
 												mysql> insert into t1(c1,c2) values(1,1);
mysql> select * from t1;
Empty set (0.00 sec)
These two experiments illustrate the difference between start transaction and start transaction with consistent snapshot nicely. The first shows that after start transaction the snapshot has not been taken yet, so session B's insert happens before session A's snapshot and the inserted row is visible. The second shows that start transaction with consistent snapshot takes the snapshot immediately, so the insert happens after it and the inserted row is not visible



Enable the general_log parameter and analyze what the dump actually does:
general_log=on
general_log_file=/usr/local/mysql/data/general_log
/etc/init.d/mysql.server restart 
Log trace of a plain dump:
root@ip-172-31-18-152:~# mysqldump -u root -p -P 3307 course >course.sql
2017-07-14T03:00:15.053209Z 7 Connect root@localhost on using Socket
2017-07-14T03:00:15.053307Z 7 Query /*!40100 SET @@SQL_MODE='' */
2017-07-14T03:00:15.053392Z 7 Query /*!40103 SET TIME_ZONE='+00:00' */
2017-07-14T03:00:15.053474Z 7 Query SHOW VARIABLES LIKE 'gtid\_mode'
2017-07-14T03:00:15.055523Z 7 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM 
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IS NOT NULL AND 
LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND 
TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('course'))) GROUP 
BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE, TOTAL_EXTENTS, INITIAL_SIZE ORDER BY LOGFILE_GROUP_NAME
2017-07-14T03:00:15.056948Z 7 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, 
ENGINE FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM 
INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('course')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
2017-07-14T03:00:15.057412Z 7 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
2017-07-14T03:00:15.058452Z 7 Init DB course
2017-07-14T03:00:15.058494Z 7 Query show tables
2017-07-14T03:00:15.058611Z 7 Query LOCK TABLES `course` READ /*!32311 LOCAL */,`dept` READ /*!32311 LOCAL */,`score` READ /*!32311 LOCAL 
*/,`students` READ /*!32311 LOCAL */,`students_noindex` READ /*!32311 LOCAL */
2017-07-14T03:00:15.058692Z 7 Query show table status like 'course'
2017-07-14T03:00:15.058865Z 7 Query SET SQL_QUOTE_SHOW_CREATE=1
2017-07-14T03:00:15.058910Z 7 Query SET SESSION character_set_results = 'binary'
2017-07-14T03:00:15.058952Z 7 Query show create table `course`
2017-07-14T03:00:15.059011Z 7 Query SET SESSION character_set_results = 'utf8'
2017-07-14T03:00:15.059060Z 7 Query show fields from `course`
2017-07-14T03:00:15.059284Z 7 Query show fields from `course`
2017-07-14T03:00:15.059500Z 7 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `course`
2017-07-14T03:00:15.059611Z 7 Query SET SESSION character_set_results = 'binary'
2017-07-14T03:00:15.059655Z 7 Query use `course`
2017-07-14T03:00:15.059700Z 7 Query select @@collation_database
2017-07-14T03:00:15.059756Z 7 Query SHOW TRIGGERS LIKE 'course'


Log trace when the master-data option is set:
root@ip-172-31-18-152:~# mysqldump -u root -p -P 3307 --master-data course >course.sql
2017-07-14T02:13:55.482345Z 4 Connect root@localhost on using Socket
2017-07-14T02:13:55.482457Z 4 Query /*!40100 SET @@SQL_MODE='' */
2017-07-14T02:13:55.482530Z 4 Query /*!40103 SET TIME_ZONE='+00:00' */
2017-07-14T02:13:55.482611Z 4 Query FLUSH /*!40101 LOCAL */ TABLES
2017-07-14T02:13:55.483186Z 4 Query FLUSH TABLES WITH READ LOCK
2017-07-14T02:13:55.483250Z 4 Query SHOW VARIABLES LIKE 'gtid\_mode'
2017-07-14T02:13:55.485211Z 4 Query SHOW MASTER STATUS
2017-07-14T02:13:55.485328Z 4 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM 
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IS NOT NULL AND 
LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND 
TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('course'))) GROUP BY 
LOGFILE_GROUP_NAME, FILE_NAME, ENGINE, TOTAL_EXTENTS, INITIAL_SIZE ORDER BY LOGFILE_GROUP_NAME
2017-07-14T02:13:55.490685Z 4 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE 
FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM 
INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('course')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
2017-07-14T02:13:55.491155Z 4 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
2017-07-14T02:13:55.492479Z 4 Init DB course
2017-07-14T02:13:55.492514Z 4 Query show tables
2017-07-14T02:13:55.492622Z 4 Query show table status like 'course'
2017-07-14T02:13:55.492775Z 4 Query SET SQL_QUOTE_SHOW_CREATE=1
2017-07-14T02:13:55.492832Z 4 Query SET SESSION character_set_results = 'binary'
2017-07-14T02:13:55.492874Z 4 Query show create table `course`
2017-07-14T02:13:55.492942Z 4 Query SET SESSION character_set_results = 'utf8'
2017-07-14T02:13:55.492990Z 4 Query show fields from `course`
2017-07-14T02:13:55.493217Z 4 Query show fields from `course`
2017-07-14T02:13:55.493439Z 4 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `course`
2017-07-14T02:13:55.493543Z 4 Query SET SESSION character_set_results = 'binary'
2017-07-14T02:13:55.493595Z 4 Query use `course`
2017-07-14T02:13:55.493644Z 4 Query select @@collation_database
2017-07-14T02:13:55.493704Z 4 Query SHOW TRIGGERS LIKE 'course'





Log trace when both the master-data and single-transaction options are set:
root@ip-172-31-18-152:~# mysqldump -u root -p -P 3307 --master-data --single-transaction course >course.sql
2017-07-14T02:25:20.174098Z 4 Connect root@localhost on using Socket
2017-07-14T02:25:20.174202Z 4 Query /*!40100 SET @@SQL_MODE='' */
2017-07-14T02:25:20.174283Z 4 Query /*!40103 SET TIME_ZONE='+00:00' */
2017-07-14T02:25:20.174353Z 4 Query FLUSH /*!40101 LOCAL */ TABLES
2017-07-14T02:25:20.175098Z 4 Query FLUSH TABLES WITH READ LOCK
2017-07-14T02:25:20.175152Z 4 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
2017-07-14T02:25:20.175186Z 4 Query START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */ 
2017-07-14T02:25:20.175245Z 4 Query SHOW VARIABLES LIKE 'gtid\_mode'
2017-07-14T02:25:20.176523Z 4 Query SHOW MASTER STATUS
2017-07-14T02:25:20.176573Z 4 Query UNLOCK TABLES
2017-07-14T02:25:20.176675Z 4 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM 
INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IS NOT NULL AND 
LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND 
TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('course'))) GROUP 
BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE, TOTAL_EXTENTS, INITIAL_SIZE ORDER BY LOGFILE_GROUP_NAME
2017-07-14T02:25:20.178051Z 4 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, 
ENGINE FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM 
INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('course')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
2017-07-14T02:25:20.178540Z 4 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
2017-07-14T02:25:20.179327Z 4 Init DB course
2017-07-14T02:25:20.179363Z 4 Query SAVEPOINT sp
2017-07-14T02:25:20.179401Z 4 Query show tables
2017-07-14T02:25:20.179519Z 4 Query show table status like 'course'
2017-07-14T02:25:20.179671Z 4 Query SET SQL_QUOTE_SHOW_CREATE=1
2017-07-14T02:25:20.179713Z 4 Query SET SESSION character_set_results = 'binary'
2017-07-14T02:25:20.179761Z 4 Query show create table `course`
2017-07-14T02:25:20.179823Z 4 Query SET SESSION character_set_results = 'utf8'
2017-07-14T02:25:20.179872Z 4 Query show fields from `course`
2017-07-14T02:25:20.180094Z 4 Query show fields from `course`
2017-07-14T02:25:20.180313Z 4 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `course`
2017-07-14T02:25:20.180422Z 4 Query SET SESSION character_set_results = 'binary'
2017-07-14T02:25:20.180467Z 4 Query use `course`
2017-07-14T02:25:20.180510Z 4 Query select @@collation_database
2017-07-14T02:25:20.180565Z 4 Query SHOW TRIGGERS LIKE 'course'



Back up from the master: [root@master ~]# mysqldump -u root -p -P 3308 --master-data=2 --single-transaction course >course.sql
This approach takes a read lock on all tables (FLUSH TABLES WITH READ LOCK) right at the start of the backup; once the lock has been acquired and the dump has started, it is released immediately, and the rest of the dump does not block other reads or writes.
If the --dump-slave option is used when backing up from the master, the backup fails:
[root@master ~]# mysqldump -u root -p -P 3308 --dump-slave=2 --single-transaction course >course.sql
Enter password: 
mysqldump: Couldn't execute 'START SLAVE': The server is not configured as slave; fix in config file or 
with CHANGE MASTER TO (1200)
Back up data on the slave:
[root@slave1 course]# mysqldump -u root -p -P 3308 --dump-slave --single-transaction test >course.sql


Import a dump file:
[root@master ~]# mysql -u root -p course<course.sql
A dump file can also be imported with the source command:
[root@master ~]# mysql -u root -p
mysql> use course;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> source course.sql


mysqldump can also export text files; the --tab=dir_name option specifies the directory the text files are written to
The secure_file_priv=/usr/local/mysql/backup/ parameter must be configured in my.cnf first
For example, export all tables of the course database as text files into /usr/local/mysql/backup/:
[root@master backup]# mysqldump -u root -p --tab=/usr/local/mysql/backup course
[root@master backup]# pwd
/usr/local/mysql/backup
[root@master backup]# ls
course.sql dept.sql score.sql students_myisam.sql students.sql teacher.sql
course.txt dept.txt score.txt students_myisam.txt students.txt teacher.txt
--fields-terminated-by=str sets the separator between field values; the default is tab
--fields-enclosed-by=char sets the character used to quote every field value; none by default
--fields-optionally-enclosed-by=char sets the character used to quote non-numeric field values; none by default
--lines-terminated-by=str sets the line terminator; the default is newline
For example:
[root@master backup]# mysqldump -u root -p --tab=/usr/local/mysql/backup --fields-terminated-by=, --fields-enclosed-by='"' --lines-terminated-by=0x0d0a course


Importing the text files back takes two steps: first import the table-structure file, then import the data file
The structure file can be imported with mysql db1 < t1.sql or with the source command
The data file can be imported with mysqlimport or with the load data infile statement
[root@master backup]# mysqlimport -u root -p --fields-terminated-by=, --fields-enclosed-by='"' --lines-terminated-by=0x0d0a course /usr/local/mysql/backup/students.txt 
Enter password: 
course.students: Records: 3 Deleted: 0 Skipped: 0 Warnings: 0
or
mysql> USE course;
mysql> load data infile '/usr/local/mysql/backup/students.txt' into table students FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
Query OK, 3 rows affected (0.00 sec)
Records: 3 Deleted: 0 Skipped: 0 Warnings: 0


Check all user-defined databases, back every one of them up on a schedule, and upload the dumps to a backup server (a sample crontab entry follows the script)
#!/bin/sh
####################################
##date: 14:17 2016-3-28
##Function: mysql_dump
##Version: 1.1
####################################
MYUSER=system
PORT=5001
DB_DATE=$(date +%F)
DB_NAME=$(uname -n)
MYPASS=********
MYLOGIN="/data/application/mysql/bin/mysql -u$MYUSER -p$MYPASS -P$PORT"
MYDUMP="/data/application/mysql/bin/mysqldump -u$MYUSER -p$MYPASS -P$PORT -B"
# list all databases except the system schemas
DATABASE="$($MYLOGIN -e "show databases;" | egrep -vi "information_schema|database|performance_schema|mysql")"
for dbname in $DATABASE; do
    MYDIR=/server/backup/$dbname
    [ ! -d $MYDIR ] && mkdir -p $MYDIR
    # dump each database (skipping the opsys.user_action table) and compress it
    $MYDUMP $dbname --ignore-table=opsys.user_action | gzip >$MYDIR/${dbname}_${DB_NAME}_${DB_DATE}_sql.gz
done
# keep only the last 3 days of compressed dumps
find /server/backup/ -type f -name "*.gz" -mtime +3 | xargs rm -rf
# push every per-database backup directory to the backup server
find /server/backup/* -type d -name "*" -exec rsync -avz {} data_backup:/data/backup/ \;
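
To run the script on a schedule, a crontab entry along the following lines can be added; the script path and the 02:30 daily time are assumptions:
# crontab -e : back up all user databases every day at 02:30
30 2 * * * /server/scripts/mysql_backup.sh >/dev/null 2>&1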

Select … into outfile

The select … into outfile statement exports the rows of a table that match a condition into a text file; it exports only data, never the table structure
SELECT ... INTO OUTFILE syntax:
select * from Table into outfile '/path/file_name'
fields terminated by ','
enclosed by '"'
lines terminated by '\r\n'
TERMINATED BY sets the character placed between field values; for example, TERMINATED BY ',' puts a comma between two field values.
The ENCLOSED BY clause sets the character that wraps values in the file; for example, ENCLOSED BY '"' puts character values between double quotes; with the OPTIONALLY keyword only non-numeric values are enclosed.
The ESCAPED BY clause sets the escape character; for example, ESCAPED BY '*' makes '*' the escape character instead of '\', so a NULL value is written as '*N'.
The LINES clause: TERMINATED BY in the LINES clause sets the end-of-line marker; for example, LINES TERMINATED BY '?' ends every line with '?'
For example, export all rows of the students table:
mysql> select * from students into outfile '/usr/local/mysql/backup/students.txt' fields terminated by ',' enclosed by '"' lines 
terminated by '\r\n';
Export part of the rows of the students table:
mysql> select * from students where sid in (1,2) into outfile '/usr/local/mysql/backup/students2.txt' fields terminated by ',' 
enclosed by '"' lines terminated by '\r\n';
Export the result of a table join:
mysql> select * from students a inner join dept b on a.dept_id=b.id into outfile '/usr/local/mysql/backup/students3.txt' fields 
terminated by ',' enclosed by '"' lines terminated by '\r\n';



A text file exported with select … into outfile can be loaded back into a table with the load data infile statement
Basic syntax:
load data [low_priority] [local] infile 'file_name.txt' [replace | ignore]
into table tbl_name
[fields
    [terminated by '\t']
    [[optionally] enclosed by '']
    [escaped by '\\']]
[lines terminated by '\n']
[ignore number lines]
[(col_name, ...)]
terminated by sets the field separator; the default is the tab character (\t)
enclosed by sets the character used to quote field values
escaped by sets the escape character; the default is the backslash (\)
The lines clause sets the record terminator; the default is '\n', the newline character
If the local keyword is given, the file is read from the client host; without local the file must be located on the server.
The replace and ignore keywords control how rows that duplicate existing unique-key values are handled. With replace, the new row replaces the existing row that has the same unique key value.
With ignore, input rows that duplicate an existing unique key are skipped. If neither option is given, an error occurs when a duplicate key is found and the rest of the text file is ignored


For example, load the data from the exported text file into the students table:
mysql> load data infile '/usr/local/mysql/backup/students2.txt' into table students fields terminated by ',' enclosed by '"' lines terminated by 
'\r\n';
Skip duplicate-key conflicts during the load and import the remaining rows:
mysql> load data infile '/usr/local/mysql/backup/students.txt' ignore into table students fields terminated by ',' enclosed by '"' lines terminated 
by '\r\n';
Query OK, 1 row affected, 2 warnings (0.03 sec)
Records: 3 Deleted: 0 Skipped: 2 Warnings: 2
mysql> select * from students;
+-----+-------+--------+---------+
| sid | sname | gender | dept_id |
+-----+-------+--------+---------+
| 1 | bbb | 1 | 1 |
| 2 | bbb | 2 | 2 |
| 3 | c | 3 | 3 |
If a unique-key conflict occurs during the load, replace the existing row instead:
mysql> load data infile '/usr/local/mysql/backup/students.txt' replace into table students fields terminated by ',' enclosed by '"' lines 
terminated by '\r\n';
Query OK, 5 rows affected (0.00 sec)
Records: 3 Deleted: 2 Skipped: 0 Warnings: 0
mysql> select * from students;
+-----+-------+--------+---------+
| sid | sname | gender | dept_id |
+-----+-------+--------+---------+
| 1 | a | 1 | 1 |
| 2 | b | 2 | 2 |
| 3 | c | 3 | 3 |

Introduction to Recovery Types

Recovery can be divided into two types: full recovery and point-in-time recovery

A full recovery restores all the data in the backup; when it finishes, the data is in the state it was in at the moment the backup was taken

Point-in-time recovery restores the database to its state at a specified point in time; it usually relies on the binary log to replay all database operations up to that point

Point-in-Time Recovery

Point-in-time recovery restores the database to the state of a chosen point in time. Normally a full recovery first brings the database back to the last full-backup point, and the binary log is then used to roll it forward to the target point in time, as sketched below
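
A minimal sketch of that two-step flow, assuming the last full dump was taken with --master-data=2; the dump file name, binary log name, start position and target time below are placeholders:
# 1. full recovery from the last full dump
mysql -u root -p course < course_full.sql

# 2. roll forward with the binary log, starting at the position recorded in the dump
#    (the commented CHANGE MASTER TO line) and stopping at the target point in time
mysqlbinlog --start-position=154 --stop-datetime="2017-07-11 22:26:20" mysql-bin.000051 | mysql -u root -p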
The mysqlbinlog tool parses the contents of the binary log so that it can be viewed or re-executed
To re-execute the contents of a binary log file against the database, use commands such as:
shell> mysqlbinlog binlog_files | mysql -u root -p
shell> mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p
Or decode the log file into a text file first and then execute it:
shell> mysqlbinlog binlog_files > tmpfile
shell> mysql -u root -p < tmpfile
For example:
mysql> alter table students add tstamp timestamp;
mysql> flush logs;
mysql> insert into students(sid,sname,gender,dept_id) values(1,'a',1,1),(2,'b',2,2);
mysql> flush logs;
mysql> insert into students(sid,sname,gender,dept_id) values(3,'c',3,3),(4,'d',4,4);
mysql> insert into students(sid,sname,gender,dept_id) values(5,'e',5,3),(6,'f',6,4);
mysql> flush logs;
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp | 
+-----+-------+--------+---------+---------------------+
| 1 | a | 1 | 1 | 2017-07-11 22:02:23 |
| 2 | b | 2 | 2 | 2017-07-11 22:02:23 |
| 3 | c | 3 | 3 | 2017-07-11 22:03:11 |
| 4 | d | 4 | 4 | 2017-07-11 22:03:11 |
| 5 | e | 5 | 3 | 2017-07-11 22:04:05 |
| 6 | f | 6 | 4 | 2017-07-11 22:04:05 |



mysql> truncate table students; ## first delete all rows from the table
Restore the first binary log file:
[root@master data]# mysqlbinlog mysql-bin.000050 |mysql -u root -p
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp |
+-----+-------+--------+---------+---------------------+
| 1 | a | 1 | 1 | 2017-07-11 22:02:23 |
| 2 | b | 2 | 2 | 2017-07-11 22:02:23 |
Restore the second binary log file:
[root@master data]# mysqlbinlog -v mysql-bin.000051 |mysql -u root -p
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp |
+-----+-------+--------+---------+---------------------+
| 1 | a | 1 | 1 | 2017-07-11 22:02:23 |
| 2 | b | 2 | 2 | 2017-07-11 22:02:23 |
| 3 | c | 3 | 3 | 2017-07-11 22:26:17 |
| 4 | d | 4 | 4 | 2017-07-11 22:26:17 |
| 5 | e | 5 | 3 | 2017-07-11 22:26:25 |
| 6 | f | 6 | 4 | 2017-07-11 22:26:25 |



To restore only part of a log file, the --start-datetime and --stop-datetime options set the time at which the recovery
starts and stops
shell> mysqlbinlog --stop-datetime="2005-04-20 9:59:59" \
 /var/log/mysql/bin.123456 | mysql -u root -p
shell> mysqlbinlog --start-datetime="2005-04-20 10:01:00" \
 /var/log/mysql/bin.123456 | mysql -u root -p
For example, restore the database to its state at 2017-07-11 22:26:20:
This involves all of mysql-bin.000050 and part of mysql-bin.000051
mysql> truncate table students;
[root@master data]# mysqlbinlog -v mysql-bin.000050 |mysql -u root -p 
[root@master data]# mysqlbinlog --stop-datetime="2017-07-11 22:26:20" mysql-bin.000051 |mysql -u root -p
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp | 
+-----+-------+--------+---------+---------------------+
| 1 | a | 1 | 1 | 2017-07-11 22:02:23 |
| 2 | b | 2 | 2 | 2017-07-11 22:02:23 |
| 3 | c | 3 | 3 | 2017-07-11 22:26:17 |
| 4 | d | 4 | 4 | 2017-07-11 22:26:17 |


For example, skip the changes made between 2017-07-11 22:26:17 and 2017-07-11 22:26:24 and do not restore them:
[root@master data]# mysqlbinlog -v mysql-bin.000050 |mysql -u root -p
Enter password: 
[root@master data]# mysqlbinlog --start-datetime="2017-07-11 22:26:24" mysql-bin.000051 |mysql -u root -p
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp |
+-----+-------+--------+---------+---------------------+
| 1 | a | 1 | 1 | 2017-07-11 22:02:23 |
| 2 | b | 2 | 2 | 2017-07-11 22:02:23 |
| 5 | e | 5 | 3 | 2017-07-11 22:26:25 |
| 6 | f | 6 | 4 | 2017-07-11 22:26:25 |



Point-in-time recovery can also target positions inside the log file, via the --start-position and --stop-position options
shell> mysqlbinlog --stop-position=368312 /var/log/mysql/bin.123456 \
		| mysql -u root -p
shell> mysqlbinlog --start-position=368315 /var/log/mysql/bin.123456 \
 		| mysql -u root -p
For example, restore up to position 497 of the second log file:
mysql> truncate table students;
[root@master data]# mysqlbinlog -v mysql-bin.000050 |mysql -u root -p
Enter password: 
[root@master data]# mysqlbinlog --stop-position=497 mysql-bin.000052 |mysql -u root -p
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp |
+-----+-------+--------+---------+---------------------+
| 1 | a | 1 | 1 | 2017-07-11 22:02:23 |
| 2 | b | 2 | 2 | 2017-07-11 22:02:23 |
| 3 | c | 3 | 3 | 2017-07-11 22:26:17 |
| 4 | d | 4 | 4 | 2017-07-11 22:26:17 |

Xtrabackup

Xtrabackup is a backup tool for MySQL that supports online hot backups (reads and writes are not blocked while the backup runs). It has two main tools:

xtrabackup and innobackupex, and innobackupex is gradually being replaced by xtrabackup

Features:

(1) Backups are fast and reliable;

(2) Backups do not interrupt transactions that are executing;

(3) Compression and similar features save disk space and network traffic;

(4) Backups are verified automatically;

(5) Restores are fast

Download:

https://www.percona.com/downloads/XtraBackup/LATEST/

Installing Xtrabackup

wget https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.4.9/binary/tarball/percona-xtrabackup-2.4.9-Linux-x86_64.tar.gz

tar -zxvf percona-xtrabackup-2.4.9-Linux-x86_64.tar.gz

cp percona-xtrabackup-2.4.9-Linux-x86_64/bin/* /usr/bin/

Using Xtrabackup

Full backup:
[root@master backups]# xtrabackup --backup --target-dir=/data/backups/ -u root -p mysql -P 3308 --host=127.0.0.1
[root@master backups]# ls
backup-my.cnf ibdata1 performance_schema test2 xtrabackup_checkpoints
course ibdata2 sys test3 xtrabackup_info
ib_buffer_pool mysql test xtrabackup_binlog_info xtrabackup_logfile
Before restoring, the prepare step must be run first to bring all data files to a single point in time: during the backup the data files are copied at different moments, and starting the server on them directly could cause conflicts
mv /usr/local/mysql/data /usr/local/mysql/data_bak
mkdir data
chown mysql:mysql data
$ xtrabackup --prepare --target-dir=/data/backups/
Full restore:
[root@master backups]# xtrabackup --copy-back --target-dir=/data/backups/ --datadir=/usr/local/mysql/data
$ chown -R mysql:mysql /usr/local/mysql/data
/etc/init.d/mysql.server start


Incremental backups:
[root@master backups]# xtrabackup --backup --target-dir=/data/backups/base -u root -p mysql -P 3308 --host=127.0.0.1
mysql> insert into students values(11,'aa',1,1,current_timestamp);
mysql> insert into students values(12,'bb',2,2,current_timestamp);
[root@master backups]# xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base -u root -p mysql -P 3308 --host=127.0.0.1
mysql> insert into students values(13,'cc',3,3,current_timestamp);
mysql> insert into students values(14,'dd',4,4,current_timestamp);
[root@master backups]# xtrabackup --backup --target-dir=/data/backups/inc2 --incremental-basedir=/data/backups/inc1 -u root -p mysql -P 3308 --host=127.0.0.1


mv data data_bak
mkdir data
chown mysql:mysql data
Restore from the incremental backups:
$ xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base --datadir=/usr/local/mysql/data
$ xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base --incremental-dir=/data/backups/inc1 --datadir=/usr/local/mysql/data
$ xtrabackup --prepare --target-dir=/data/backups/base --incremental-dir=/data/backups/inc2 --datadir=/usr/local/mysql/data
[root@master backups]# xtrabackup --copy-back --target-dir=/data/backups/base --datadir=/usr/local/mysql/data
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp | 
+-----+-------+--------+---------+---------------------+
| 1 | a | 1 | 1 | 2017-07-11 22:02:23 |
| 2 | b | 2 | 2 | 2017-07-11 22:02:23 |
| 3 | c | 3 | 3 | 2017-07-11 22:26:17 |
| 4 | d | 4 | 4 | 2017-07-11 22:26:17 |
| 11 | aa | 1 | 1 | 2017-07-11 23:40:26 |
| 12 | bb | 2 | 2 | 2017-07-11 23:40:37 |
| 13 | cc | 3 | 3 | 2017-07-11 23:41:47 |
| 14 | dd | 4 | 4 | 2017-07-11 23:41:56 |

Using Xtrabackup: innobackupex

Full backup:
innobackupex --user=root --password=mysql --host=127.0.0.1 --port=3308 /data/backups
After it runs, a timestamped backup directory such as 2017-07-13_19-47-44 is created automatically under /data/backups
Full restore: /etc/rc.d/init.d/mysqld stop
rm -rf /usr/local/mysql/data/*
# apply the log to generate the ib_logfile files (prepare step)
innobackupex --apply-log --user=root --password=mysql --host=127.0.0.1 --port=3308 /data/backups/2017-07-13_19-47-44
# copy the data back into the MySQL data directory
## make sure datadir = /usr/local/mysql/data is set in my.cnf
innobackupex --copy-back --user=root --password=mysql --host=127.0.0.1 --port=3308 /data/backups/2017-07-13_19-47-44
# after the copy-back the files are owned by root, so fix the owner and group
chown -R mysql:mysql /usr/local/mysql/data
/etc/rc.d/init.d/mysqld start



Incremental backup:
First take a full backup, for example 2017-07-13_21-25-39
innobackupex --user=root --password=mysql --host=127.0.0.1 --port=3308 /data/backups
mysql> insert into students values(41, 'ff',2,1,current_timestamp);
mysql> insert into students values(42, 'ff',1,1,current_timestamp);
innobackupex --user=root --password=mysql --host=127.0.0.1 --port=3308 --incremental-basedir=/data/backups/2017-07-14_22-32-11 --incremental /data/backups
mysql> insert into students values(31,'ff',2,2,current_timestamp);
mysql> insert into students values(32,'ff',2,2,current_timestamp);
innobackupex --user=root --password=mysql --host=127.0.0.1 --port=3308 --incremental-basedir=/data/backups/2017-07-14_22-36-23 --incremental /data/backups



Incremental restore:
innobackupex --apply-log --redo-only /data/backups/2017-07-14_22-32-11
innobackupex --apply-log --redo-only /data/backups/2017-07-14_22-32-11 --incremental-dir=/data/backups/2017-07-14_22-36-23
innobackupex --apply-log /data/backups/2017-07-14_22-32-11 --incremental-dir=/data/backups/2017-07-14_22-37-32

## the last incremental backup is applied without --redo-only
innobackupex --apply-log /data/backups/2017-07-14_22-32-11
innobackupex --copy-back --user=root --password=mysql --host=127.0.0.1 --port=3308 /data/backups/2017-07-14_22-32-11
# after the copy-back the files are owned by root, so fix the owner and group
chown -R mysql:mysql /usr/local/mysql/data
/etc/rc.d/init.d/mysqld start
mysql> select * from students;
+-----+-------+--------+---------+---------------------+
| sid | sname | gender | dept_id | tstamp | 
+-----+-------+--------+---------+---------------------+
| 31 | ff | 2 | 2 | 2017-07-14 22:36:50 |
| 32 | ff | 2 | 2 | 2017-07-14 22:36:54 |
| 41 | ff | 2 | 1 | 2017-07-14 22:35:48 |
| 42 | ff | 1 | 1 | 2017-07-14 22:35:56 |