Redis RDB and AOF
References
- Redis源码学习-AOF数据持久化原理分析(0)
- Redis源码学习-AOF数据持久化原理分析(1)
- Redis · 特性分析 · AOF Rewrite 分析
- 深入剖析 redis AOF 持久化策略
- 函数sync、fsync与fdatasync总结整理
Redis is an in-memory database: it keeps its entire dataset in memory, which means a machine crash or power failure would lose all of the data. To avoid data loss, Redis provides two persistence mechanisms, RDB and AOF. The following sections describe how each of them works and how they are implemented.
RDB
Redis provides two commands for generating an RDB file:
- SAVE
- BGSAVE
As the names suggest, the two commands differ in how they run: SAVE blocks the server process while the RDB file is being generated, whereas BGSAVE forks a child process and lets the child generate the RDB file. Note that when Redis starts up, it prefers the AOF file for restoring the dataset if one exists; only when there is no AOF file does it fall back to the RDB file.
While BGSAVE is running, the server can still accept client commands, but SAVE, BGSAVE and BGREWRITEAOF are handled differently from usual: during a BGSAVE, both SAVE and BGSAVE are rejected by the server, and BGREWRITEAOF is postponed until the BGSAVE has finished.
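A rough sketch of how the command handlers implement this gating (simplified from the 2.8-era source; SAVE is gated the same way as BGSAVE, by checking rdb_child_pid):

void bgsaveCommand(redisClient *c) {
    // An RDB or AOF child is already running: refuse to start another BGSAVE.
    if (server.rdb_child_pid != -1) {
        addReplyError(c,"Background save already in progress");
    } else if (server.aof_child_pid != -1) {
        addReplyError(c,"Can't BGSAVE while AOF log rewriting is in progress");
    } else if (rdbSaveBackground(server.rdb_filename) == REDIS_OK) {
        addReplyStatus(c,"Background saving started");
    } else {
        addReply(c,shared.err);
    }
}

void bgrewriteaofCommand(redisClient *c) {
    if (server.aof_child_pid != -1) {
        addReplyError(c,"Background append only file rewriting already in progress");
    } else if (server.rdb_child_pid != -1) {
        // A BGSAVE is running: remember the request and start the rewrite
        // later from serverCron, once the RDB child has finished.
        server.aof_rewrite_scheduled = 1;
        addReplyStatus(c,"Background append only file rewriting scheduled");
    } else if (rewriteAppendOnlyFileBackground() == REDIS_OK) {
        addReplyStatus(c,"Background append only file rewriting started");
    } else {
        addReply(c,shared.err);
    }
}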
Tip:
- The LASTSAVE command returns the timestamp of the last successful RDB save, which corresponds to rdb_last_save_time in the INFO statistics.
Automatic interval-based RDB saving
Redis allows the user to configure the save option so that the server automatically runs BGSAVE at intervals to generate an RDB file.
For example, with the following lines in the configuration file:
save 900 1
save 300 10
save 60 10000
BGSAVE will be executed as soon as any one of the following three conditions is met:
- the server has received at least 1 write to the database within 900 seconds
- the server has received at least 10 writes to the database within 300 seconds
- the server has received at least 10000 writes to the database within 60 seconds
Tips:
- If no save option is set when Redis starts, either via the configuration file or via command-line arguments, the server falls back to the default:
save 900 1
save 300 10
save 60 10000
- The directory the RDB file is saved in is set by the dir option, and the file name by dbfilename. Both can be changed at runtime with config set dir ${newdir} and config set dbfilename ${newfilename} (see the example after this list).
- By default Redis compresses RDB files with the LZF algorithm; this can be switched on or off with config set rdbcompression {yes|no}.
- The redis-check-dump tool can be used to verify an RDB file.
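For example, the RDB location and compression can be adjusted at runtime like this (the directory and file name below are purely illustrative):

config set dir /data/redis
config set dbfilename dump-6379.rdb
config set rdbcompression no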
How automatic saving is implemented
The redisServer struct contains a saveparams field:
struct redisServer {
    /* ... */
    struct saveparam *saveparams; /* the save rules */
    /* ... */
};
saveparams is an array; each element stores one save rule:
struct saveparam {
    time_t seconds; /* the time window, in seconds */
    int changes;    /* number of writes required within that window */
};
The redisServer struct also maintains a dirty counter and a lastsave field:
- dirty: the number of writes applied to the database since the last successful SAVE or BGSAVE.
- lastsave: a Unix timestamp recording when the server last successfully performed a SAVE or BGSAVE.
Checking the save conditions from the time event
The periodic server function serverCron runs every 100 ms by default. One of its jobs is to check whether the save conditions are met: it walks through the saveparams array, and as soon as one rule is satisfied, the server runs BGSAVE, as sketched below. Afterwards, dirty is reset to zero and lastsave is updated to the current time.
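The check boils down to roughly the following loop (a simplified sketch of the serverCron logic; the retry-on-failure handling present in the real source is omitted here):

    int j;

    /* Trigger a background save if any save rule is satisfied. */
    for (j = 0; j < server.saveparamslen; j++) {
        struct saveparam *sp = server.saveparams+j;
        if (server.dirty >= sp->changes &&
            server.unixtime - server.lastsave > sp->seconds)
        {
            redisLog(REDIS_NOTICE,"%d changes in %d seconds. Saving...",
                sp->changes,(int)sp->seconds);
            rdbSaveBackground(server.rdb_filename);
            break;
        }
    }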
AOF
The AOF file uses the following format:
*<count>\r\n$<length>\r\n<content>\r\n
For example, the commands:
select 0
set k1 v1
are translated into the following AOF content:
## AOF file content; lines starting with "##" are comments, not part of the actual file
## select the database
*2
$6
SELECT
$1
0
## SET k1 v1
*3
$3
SET
$2
k1
$2
v1
Writing the AOF
Call path
With AOF enabled, a command goes roughly through the following call chain from the moment it reaches the server until it is written to the AOF:
- aeMain
- aeProcessEvents
- readQueryFromClient
- processInputBuffer
- processCommand
- call
- propagate
- feedAppendOnlyFile
Converting a command into the AOF format
The feedAppendOnlyFile function
As the call chain shows, the function that does the real work is feedAppendOnlyFile in aof.c. It is defined as follows:
void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, robj **argv, int argc) {
sds buf = sdsempty();
robj *tmpargv[3];
/* The DB this command was targeting is not the same as the last command
 * we appended. To issue a SELECT command is needed. */
if (dictid != server.aof_selected_db) {
char seldb[64];
snprintf(seldb,sizeof(seldb),"%d",dictid);
buf = sdscatprintf(buf,"*2\r\n$6\r\nSELECT\r\n$%lu\r\n%s\r\n",
(unsigned long)strlen(seldb),seldb);
server.aof_selected_db = dictid;
}
// EXPIRE, PEXPIRE and EXPIREAT commands
if (cmd->proc == expireCommand || cmd->proc == pexpireCommand ||
cmd->proc == expireatCommand) {
/* Translate EXPIRE/PEXPIRE/EXPIREAT into PEXPIREAT */
buf = catAppendOnlyExpireAtCommand(buf,cmd,argv[1],argv[2]);
// SETEX and PSETEX commands
} else if (cmd->proc == setexCommand || cmd->proc == psetexCommand) {
/* Translate SETEX/PSETEX to SET and PEXPIREAT */
// SET
tmpargv[0] = createStringObject("SET",3);
tmpargv[1] = argv[1];
tmpargv[2] = argv[3];
buf = catAppendOnlyGenericCommand(buf,3,tmpargv);
// PEXPIREAT
decrRefCount(tmpargv[0]);
buf = catAppendOnlyExpireAtCommand(buf,cmd,argv[1],argv[2]);
// all other commands
} else {
/* All the other commands don't need translation or need the
* same translation already operated in the command vector
* for the replication itself. */
buf = catAppendOnlyGenericCommand(buf,argc,argv);
}
/* Append to the AOF buffer. This will be flushed on disk just before
 * re-entering the event loop, so before the client will get a
 * positive reply about the operation performed. */
if (server.aof_state == REDIS_AOF_ON)
server.aof_buf = sdscatlen(server.aof_buf,buf,sdslen(buf));
/* If a background append only file rewriting is in progress we want to
 * accumulate the differences between the child DB and the current one
 * in a buffer, so that when the child process will do its work we
 * can append the differences to the new append only file. */
if (server.aof_child_pid != -1)
aofRewriteBufferAppend((unsigned char*)buf,sdslen(buf));
// free the temporary buffer
sdsfree(buf);
}
- If the target database differs from the database currently selected for the AOF, a SELECT command is generated first and aof_selected_db is set to dictid.
- EXPIRE, PEXPIRE and EXPIREAT are all translated into PEXPIREAT.
- SETEX and PSETEX are translated into SET plus PEXPIREAT (see the example after this list).
- Any other command is converted into the AOF format by catAppendOnlyGenericCommand.
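For instance, if SETEX k1 10 v1 were executed at Unix time 1500000000000 ms (an illustrative timestamp), the text appended to the AOF would look roughly like this:

## SET k1 v1
*3
$3
SET
$2
k1
$2
v1
## PEXPIREAT k1 <now + 10s, in milliseconds>
*3
$9
PEXPIREAT
$2
k1
$13
1500000010000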
The catAppendOnlyGenericCommand function
catAppendOnlyGenericCommand is implemented as follows:
/* Rebuild the command and its arguments in the protocol format. */
sds catAppendOnlyGenericCommand(sds dst, int argc, robj **argv) {
char buf[32];
int len, j;
robj *o;
// emit the argument count, in the form *<count>\r\n
// e.g. *3\r\n
buf[0] = '*';
len = 1+ll2string(buf+1,sizeof(buf)-1,argc);
buf[len++] = '\r';
buf[len++] = '\n';
dst = sdscatlen(dst,buf,len);
// emit each argument, in the form $<length>\r\n<content>\r\n
// e.g. $3\r\nSET\r\n$3\r\nKEY\r\n$5\r\nVALUE\r\n
for (j = 0; j < argc; j++) {
o = getDecodedObject(argv[j]);
// write $<length>\r\n
buf[0] = '$';
len = 1+ll2string(buf+1,sizeof(buf)-1,sdslen(o->ptr));
buf[len++] = '\r';
buf[len++] = '\n';
dst = sdscatlen(dst,buf,len);
// write <content>\r\n
dst = sdscatlen(dst,o->ptr,sdslen(o->ptr));
dst = sdscatlen(dst,"\r\n",2);
decrRefCount(o);
}
// return the rebuilt protocol text
return dst;
}
As you can see, the function does two things:
- it writes the number of arguments, in the form *<count>\r\n
- it writes each argument (command name and parameters), in the form $<length>\r\n<content>\r\n
Appending to the AOF buffer
After a command has been converted into AOF-format data, the result is appended to server.aof_buf (the AOF buffer). If a BGREWRITEAOF is in progress, the command is also appended to the AOF rewrite buffer, so that the difference between the AOF file being rewritten and the current state of the database is recorded.
void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, robj **argv, int argc) {
....
....
if (server.aof_state == REDIS_AOF_ON)
server.aof_buf = sdscatlen(server.aof_buf,buf,sdslen(buf));
/* If a background append only file rewriting is in progress we want to
 * accumulate the differences between the child DB and the current one
 * in a buffer, so that when the child process will do its work we
 * can append the differences to the new append only file. */
if (server.aof_child_pid != -1)
aofRewriteBufferAppend((unsigned char*)buf,sdslen(buf));
// free the temporary buffer
sdsfree(buf);
}
AOF rewrite
As the previous sections show, once AOF is enabled the AOF file keeps growing over time, so Redis provides an AOF rewrite feature. There are three situations in which an AOF rewrite is triggered:
- once, when the user enables AOF at runtime with "config set appendonly yes";
- when the user issues the "bgrewriteaof" command, provided no AOF or RDB child process is currently persisting data;
- automatically, when auto-aof-rewrite-percentage and auto-aof-rewrite-min-size are configured and the AOF file has grown beyond min-size with a growth rate above the configured percentage.
If one of these commands arrives while a child process is already busy persisting data, Redis just sets the server.aof_rewrite_scheduled flag; the serverCron timer task later notices the flag and calls rewriteAppendOnlyFileBackground() once the child has finished. The automatic trigger is sketched below.
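The automatic trigger in serverCron boils down to roughly the following check (a simplified sketch based on the 2.8-era source; the scheduled-rewrite branch is handled separately):

    /* Trigger an automatic AOF rewrite if the file has grown enough. */
    if (server.rdb_child_pid == -1 && server.aof_child_pid == -1 &&
        server.aof_rewrite_perc &&                 /* auto-aof-rewrite-percentage */
        server.aof_current_size > server.aof_rewrite_min_size)
    {
        long long base = server.aof_rewrite_base_size ?
                         server.aof_rewrite_base_size : 1;
        long long growth = (server.aof_current_size*100/base) - 100;
        if (growth >= server.aof_rewrite_perc) {
            redisLog(REDIS_NOTICE,
                "Starting automatic rewriting of AOF on %lld%% growth",growth);
            rewriteAppendOnlyFileBackground();
        }
    }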
Turning on AOF rewrite via CONFIG (appendonly yes)
When appendonly yes is applied at runtime, startAppendOnly() in aof.c is called. If the option was already enabled in the configuration file, AOF is active from startup, and this command must not be issued in that state: startAppendOnly() asserts that AOF is currently off, so calling it while AOF is already on would bring the server down. The function is implemented as follows:
/* Called when the user switches from "appendonly no" to "appendonly yes"
 * at runtime using the CONFIG command. */
int startAppendOnly(void) {
// treat the current time as the last AOF fsync time
server.aof_last_fsync = server.unixtime;
// open the AOF file
server.aof_fd = open(server.aof_filename,O_WRONLY|O_APPEND|O_CREAT,0644);
redisAssert(server.aof_state == REDIS_AOF_OFF);
// the file could not be opened
if (server.aof_fd == -1) {
redisLog(REDIS_WARNING,"Redis needs to enable the AOF but can't open the append only file: %s",strerror(errno));
return REDIS_ERR;
}
if (rewriteAppendOnlyFileBackground() == REDIS_ERR) {
// background AOF rewrite failed, close the AOF file
close(server.aof_fd);
redisLog(REDIS_WARNING,"Redis needs to enable the AOF but can't trigger a background AOF rewrite operation. Check the above logs for more info about the error.");
return REDIS_ERR;
}
/* We correctly switched on AOF, now wait for the rewrite to be complete
 * in order to append data on disk. */
server.aof_state = REDIS_AOF_WAIT_REWRITE;
return REDIS_OK;
}
The real snapshot work happens in rewriteAppendOnlyFileBackground. Think about it: while Redis is running, how can a complete and consistent copy of its data at one point in time be captured? A few options:
- The application takes the snapshot itself, for example by copying the data. The drawback is that this requires locking: writes cannot be served while copying, so the server appears stuck.
- To avoid that stall, the application can implement copy-on-write (COW) in its own code; this is more efficient, since data that is never modified causes no extra work.
- Redirect-on-write: new writes are redirected somewhere else and merged back later. This also works, and Redis's AOF borrows some ideas from it.
- Split-Mirror: more involved, requiring hardware and software support, and typically used in storage systems.
Redis relies on the operating system's copy-on-write: by calling fork(), it obtains a child process whose memory is an identical snapshot of the parent at the moment of the fork. A minimal demonstration of this behaviour is sketched below, followed by the implementation of rewriteAppendOnlyFileBackground.
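A small self-contained program (not Redis code) showing that after fork() the child keeps seeing the data as it was at fork time, even though the parent keeps modifying it; the kernel only copies the pages that are actually touched:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int counter = 42;                  /* pretend this is the dataset */
    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }
    if (pid == 0) {
        /* Child: its memory is a snapshot taken at fork time. */
        sleep(1);
        printf("child sees counter = %d\n", counter);   /* prints 42 */
        _exit(0);
    }
    /* Parent: keeps mutating the data while the child works. */
    counter = 100;
    printf("parent changed counter to %d\n", counter);
    waitpid(pid, NULL, 0);
    return 0;
}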
The child process
After the fork, the child closes the listening sockets and immediately calls rewriteAppendOnlyFile to write the data to a temporary file named "temp-rewriteaof-bg-%d.aof".
int rewriteAppendOnlyFileBackground(void) {
pid_t childpid;
long long start;
// an AOF rewrite child is already running
if (server.aof_child_pid != -1) return REDIS_ERR;
// record the time before fork() so its cost can be measured
start = ustime();
if ((childpid = fork()) == 0) {
char tmpfile[256];
/* Child */
// close the listening sockets
closeListeningSockets(0);
// give the process a recognizable title
redisSetProcTitle("redis-aof-rewrite");
// create the temp file and perform the AOF rewrite
snprintf(tmpfile,256,"temp-rewriteaof-bg-%d.aof", (int) getpid());
if (rewriteAppendOnlyFile(tmpfile) == REDIS_OK) {
size_t private_dirty = zmalloc_get_private_dirty();
if (private_dirty) {
redisLog(REDIS_NOTICE,
"AOF rewrite: %zu MB of memory used by copy-on-write",
private_dirty/(1024*1024));
}
// report success through the exit code
exitFromChild(0);
} else {
// report failure through the exit code
exitFromChild(1);
}
}
.....
.....
.....
}
The parent process
The parent's work is simpler. It clears server.aof_rewrite_scheduled so that serverCron will not start another AOF rewrite, records the child's pid in server.aof_child_pid, and calls updateDictResizePolicy. That function avoids resizing dictionaries while a background child is writing a snapshot, which prevents large-scale copy-on-write activity: a rehash would touch a lot of memory and force the kernel to duplicate many pages.
else {
/* Parent */
// record how long the fork() took
server.stat_fork_time = ustime()-start;
if (childpid == -1) {
redisLog(REDIS_WARNING,
"Can't rewrite append only file in background: fork: %s",
strerror(errno));
return REDIS_ERR;
}
redisLog(REDIS_NOTICE,
"Background append only file rewriting started by pid %d",childpid);
// record the AOF-rewrite bookkeeping
server.aof_rewrite_scheduled = 0;
server.aof_rewrite_time_start = time(NULL);
server.aof_child_pid = childpid;
// disable automatic dict rehashing
updateDictResizePolicy();
/* We set appendseldb to -1 in order to force the next call to the
 * feedAppendOnlyFile() to issue a SELECT command, so the differences
 * accumulated by the parent into server.aof_rewrite_buf will start
 * with a SELECT statement and it will be safe to merge. */
server.aof_selected_db = -1;
replicationScriptCacheFlush();
return REDIS_OK;
}
return REDIS_OK; /* unreached */
}
The rewriteAppendOnlyFile function
As mentioned above, the child process calls rewriteAppendOnlyFile to rewrite the AOF file. The function first opens a temporary file named "temp-rewriteaof-%d.aof", then loops over every database (server.dbnum of them) and dumps each one: the data currently in memory is converted back into the client protocol format, as text, and written to the file.
/* Write a sequence of commands able to fully rebuild the dataset into
* "filename". Used both by REWRITEAOF and BGREWRITEAOF.
*
* In order to minimize the number of commands needed in the rewritten
* log Redis uses variadic commands when possible, such as RPUSH, SADD
* and ZADD. However at max REDIS_AOF_REWRITE_ITEMS_PER_CMD items per time
* are inserted using a single command. */
int rewriteAppendOnlyFile(char *filename) {
// called from rewriteAppendOnlyFileBackground; dumps the dataset into an AOF file
dictIterator *di = NULL;
dictEntry *de;
rio aof;
FILE *fp;
char tmpfile[256];
int j;
long long now = mstime();
/* Note that we have to use a different temp name here compared to the
* one used by rewriteAppendOnlyFileBackground() function. */
snprintf(tmpfile,256,"temp-rewriteaof-%d.aof", (int) getpid());
fp = fopen(tmpfile,"w");
if (!fp) {
redisLog(REDIS_WARNING, "Opening the temp file for AOF rewrite in rewriteAppendOnlyFile(): %s", strerror(errno));
return REDIS_ERR;
}
// initialize the rio object with the FILE handle
rioInitWithFile(&aof,fp);
if (server.aof_rewrite_incremental_fsync) // sets r->io.file.autosync = bytes; fsync every 32 MB
rioSetAutoSync(&aof,REDIS_AOF_AUTOSYNC_BYTES);
for (j = 0; j < server.dbnum; j++) { // iterate over every DB and write its contents to disk
char selectcmd[] = "*2\r\n$6\r\nSELECT\r\n";
redisDb *db = server.db+j;
dict *d = db->dict; // the key space of this DB
if (dictSize(d) == 0) continue;
di = dictGetSafeIterator(d);
if (!di) {
fclose(fp);
return REDIS_ERR;
}
/* SELECT the new DB */
// write SELECT followed by the DB index, producing: SELECT db_id
if (rioWrite(&aof,selectcmd,sizeof(selectcmd)-1) == 0) goto werr;
if (rioWriteBulkLongLong(&aof,j) == 0) goto werr;
/* Iterate this DB writing every entry */
while((de = dictNext(di)) != NULL) { // walk every key in the dict and write it into the AOF file
sds keystr;
robj key, *o;
long long expiretime;
keystr = dictGetKey(de);
o = dictGetVal(de);
initStaticStringObject(key,keystr); // wrap the key in a static string object
expiretime = getExpire(db,&key); // fetch the key's expire time
/* Save the key and associated value */
if (o->type == REDIS_STRING) {
/* Emit a SET command: SET key value */
char cmd[]="*3\r\n$3\r\nSET\r\n";
if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
/* Key and value */
if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
if (rioWriteBulkObject(&aof,o) == 0) goto werr;
} else if (o->type == REDIS_LIST) {
if (rewriteListObject(&aof,&key,o) == 0) goto werr;
} else if (o->type == REDIS_SET) {
if (rewriteSetObject(&aof,&key,o) == 0) goto werr;
} else if (o->type == REDIS_ZSET) {
if (rewriteSortedSetObject(&aof,&key,o) == 0) goto werr;
} else if (o->type == REDIS_HASH) {
if (rewriteHashObject(&aof,&key,o) == 0) goto werr;
} else {
redisPanic("Unknown object type");
}
/* Save the expire time */
if (expiretime != -1) {
char cmd[]="*3\r\n9\r\nPEXPIREAT\r\n";
/* If this key is already expired skip it */
if (expiretime < now) continue;
if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
if (rioWriteBulkLongLong(&aof,expiretime) == 0) goto werr;
}
}
dictReleaseIterator(di);
}
/* Make sure data will not remain on the OS's output buffers */
// flush and close the new AOF file
if (fflush(fp) == EOF) goto werr;
if (aof_fsync(fileno(fp)) == -1) goto werr;
if (fclose(fp) == EOF) goto werr;
if (rename(tmpfile,filename) == -1) {
redisLog(REDIS_WARNING,"Error moving temp append only file on the final destination: %s", strerror(errno));
unlink(tmpfile);
return REDIS_ERR;
}
redisLog(REDIS_NOTICE,"SYNC append only file rewrite performed");
return REDIS_OK;
werr:
fclose(fp);
unlink(tmpfile);
redisLog(REDIS_WARNING,"Write error writing append only file on disk: %s", strerror(errno));
if (di) dictReleaseIterator(di);
return REDIS_ERR;
}
Note that the file is written through the C standard library. Why? Buffering: the stdio buffer is put to good use, so not every write turns into a system call. If the user configures "aof-rewrite-incremental-fsync on", data is fsynced after a certain amount has been written: every 32 MB of fwrite output (the REDIS_AOF_AUTOSYNC_BYTES constant), an explicit fsync is issued, making sure the data actually reaches the disk instead of piling up for one huge flush at the end. A sketch of this pattern follows.
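A minimal sketch of that pattern (a hypothetical helper, not the rio implementation): buffered fwrite calls, plus a flush and fsync once every AUTOSYNC_BYTES bytes.

#include <stdio.h>
#include <unistd.h>

#define AUTOSYNC_BYTES (32*1024*1024)   /* mirrors REDIS_AOF_AUTOSYNC_BYTES */

/* Write through the stdio buffer, forcing data to disk every AUTOSYNC_BYTES. */
int autosync_write(FILE *fp, const void *buf, size_t len, size_t *since_sync) {
    if (fwrite(buf, 1, len, fp) != len) return -1;  /* lands in the stdio buffer */
    *since_sync += len;
    if (*since_sync >= AUTOSYNC_BYTES) {
        if (fflush(fp) == EOF) return -1;           /* stdio buffer -> kernel */
        if (fsync(fileno(fp)) == -1) return -1;     /* kernel page cache -> disk */
        *since_sync = 0;
    }
    return 0;
}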
At the end of the rewrite, rewriteAppendOnlyFile has flushed the child's data to disk, closed the file, and exited. But what about the Redis commands executed while the rewrite was running? As described in the AOF-writing section above, they are appended to the AOF rewrite buffer. That only puts the data in a buffer, though; nothing has flushed it into the new AOF file yet. Looking at the code, this is handled in the periodic serverCron task. The timer is registered in initServer with aeCreateTimeEvent(server.el, 1, serverCron, NULL, NULL), so its first run happens 1 ms after startup; after that it reschedules itself (every 100 ms by default, as mentioned earlier).
serverCron is fairly long; the part relevant here is a single if-else branch whose condition is whether a background child is currently doing an RDB save or an AOF rewrite.
If an AOF or RDB child is flushing a snapshot to disk, wait3() is called to check whether it has finished; if it has, the corresponding cleanup work is done:
/* Check if a background saving or AOF rewrite in progress terminated. */
if (server.rdb_child_pid != -1 || server.aof_child_pid != -1) {
int statloc;
pid_t pid;
if ((pid = wait3(&statloc,WNOHANG,NULL)) != 0) {
int exitcode = WEXITSTATUS(statloc);
int bysignal = 0;
if (WIFSIGNALED(statloc)) bysignal = WTERMSIG(statloc);
if (pid == server.rdb_child_pid) {
// the RDB child has saved the snapshot to disk; unlike AOF, which keeps appending
// changes to a file, RDB only stores the snapshot (and notifies waiting slaves)
backgroundSaveDoneHandler(exitcode,bysignal);
} else if (pid == server.aof_child_pid) {
// the exited child is the AOF-rewrite child forked in rewriteAppendOnlyFileBackground
// (triggered e.g. by "config set appendonly yes"); the timer task has now detected that
// the child finished writing its snapshot, so the commands executed in the meantime
// must be written into the new file
backgroundRewriteDoneHandler(exitcode,bysignal);
} else {
redisLog(REDIS_WARNING,
"Warning, detected child with unmatched pid: %ld",
(long)pid);
}
updateDictResizePolicy();
}
} else {
Here, once the AOF rewrite child is known to have finished, backgroundRewriteDoneHandler is called to flush the AOF rewrite buffer into the new AOF file:
void backgroundRewriteDoneHandler(int exitcode, int bysignal) {
if (!bysignal && exitcode == 0) {
int newfd, oldfd;
char tmpfile[256];
long long now = ustime();
redisLog(REDIS_NOTICE,
"Background AOF rewrite terminated with success");
/* Flush the differences accumulated by the parent to the
* rewritten AOF. */
// open the temp file that holds the new AOF produced by the child
snprintf(tmpfile,256,"temp-rewriteaof-bg-%d.aof",
(int)server.aof_child_pid);
newfd = open(tmpfile,O_WRONLY|O_APPEND);
if (newfd == -1) {
redisLog(REDIS_WARNING,
"Unable to open the temporary AOF produced by the child: %s", strerror(errno));
goto cleanup;
}
// write the accumulated rewrite buffer into the temp file;
// the write() calls made by this function block the main process
if (aofRewriteBufferWrite(newfd) == -1) {
redisLog(REDIS_WARNING,
"Error trying to flush the parent diff to the rewritten AOF: %s", strerror(errno));
close(newfd);
goto cleanup;
}
redisLog(REDIS_NOTICE,
"Parent diff successfully flushed to the rewritten AOF (%lu bytes)", aofRewriteBufferSize());
if (server.aof_fd == -1) {
/* AOF disabled */
/* Don't care if this fails: oldfd will be -1 and we handle that.
* One notable case of -1 return is if the old file does
* not exist. */
oldfd = open(server.aof_filename,O_RDONLY|O_NONBLOCK);
} else {
/* AOF enabled */
oldfd = -1; /* We'll set this to the current AOF filedes later. */
}
/* Rename the temporary file. This will not unlink the target file if
 * it exists, because we reference it with "oldfd". */
if (rename(tmpfile,server.aof_filename) == -1) {
redisLog(REDIS_WARNING,
"Error trying to rename the temporary AOF file: %s", strerror(errno));
close(newfd);
if (oldfd != -1) close(oldfd);
goto cleanup;
}
if (server.aof_fd == -1) {
/* AOF disabled, we don't need to set the AOF file descriptor
 * to this new file, so we can close it. Even if close() blocks here
 * it does not matter, since AOF is off anyway. */
close(newfd);
} else {
/* AOF enabled, replace the old fd with the new one. */
oldfd = server.aof_fd;
server.aof_fd = newfd;
// the rewrite buffer was just appended, so fsync right away
if (server.aof_fsync == AOF_FSYNC_ALWAYS)
aof_fsync(newfd);
else if (server.aof_fsync == AOF_FSYNC_EVERYSEC)
aof_background_fsync(newfd);
// force the next command to re-issue a SELECT
server.aof_selected_db = -1; /* Make sure SELECT is re-issued */
// refresh the recorded AOF file size
aofUpdateCurrentSize();
// remember the size right after this rewrite as the new base size
server.aof_rewrite_base_size = server.aof_current_size;
/* Clear regular AOF buffer since its contents was just written to
 * the new AOF from the background rewrite buffer. */
sdsfree(server.aof_buf);
server.aof_buf = sdsempty();
}
server.aof_lastbgrewrite_status = REDIS_OK;
redisLog(REDIS_NOTICE, "Background AOF rewrite finished successfully");
/* Change state from WAIT_REWRITE to ON if needed
 * (i.e. this was the very first AOF file being created). */
if (server.aof_state == REDIS_AOF_WAIT_REWRITE)
server.aof_state = REDIS_AOF_ON;
/* Asynchronously close the overwritten AOF. */
if (oldfd != -1) bioCreateBackgroundJob(REDIS_BIO_CLOSE_FILE,(void*)(long)oldfd,NULL,NULL);
redisLog(REDIS_VERBOSE,
"Background AOF rewrite signal handler took %lldus", ustime()-now);
// the BGREWRITEAOF child exited with an error
} else if (!bysignal && exitcode != 0) {
server.aof_lastbgrewrite_status = REDIS_ERR;
redisLog(REDIS_WARNING,
"Background AOF rewrite terminated with error");
// the child was terminated by a signal
} else {
server.aof_lastbgrewrite_status = REDIS_ERR;
redisLog(REDIS_WARNING,
"Background AOF rewrite terminated by signal %d", bysignal);
}
cleanup:
// reset the AOF rewrite buffer
aofRewriteBufferReset();
// remove the temp file
aofRemoveTempFile(server.aof_child_pid);
// reset the bookkeeping fields to their defaults
server.aof_child_pid = -1;
server.aof_rewrite_time_last = time(NULL)-server.aof_rewrite_time_start;
server.aof_rewrite_time_start = -1;
/* Schedule a new rewrite if we are waiting for it to switch the AOF ON. */
if (server.aof_state == REDIS_AOF_WAIT_REWRITE)
server.aof_rewrite_scheduled = 1;
}
Inside this handler, aofRewriteBufferWrite writes the contents of the AOF rewrite buffer into the new AOF file; the temporary file is then renamed over the original AOF file and an fsync is issued to push the data to disk. With that, the AOF rewrite is complete.
/* Write the buffer (possibly composed of multiple blocks) into the specified
 * fd. If a short write or any other error happens -1 is returned,
 * otherwise the number of bytes written is returned. */
ssize_t aofRewriteBufferWrite(int fd) {
listNode *ln;
listIter li;
ssize_t count = 0;
// walk every buffer block
listRewind(server.aof_rewrite_buf_blocks,&li);
while((ln = listNext(&li))) {
aofrwblock *block = listNodeValue(ln);
ssize_t nwritten;
if (block->used) {
// write this block's contents to fd
nwritten = write(fd,block->buf,block->used);
if (nwritten != block->used) {
if (nwritten == 0) errno = EIO;
return -1;
}
// accumulate the number of bytes written
count += nwritten;
}
}
return count;
}