Redis master-slave configuration
Redis master-slave replication is commonly used for online backup and read/write splitting, increasing the load a deployment can handle: the master handles write operations, while the slaves serve reads.
Replication setup:
Master node:
[root@master ~]# grep -v -E "^#|^$" /etc/redis/6379.conf
bind 0.0.0.0                 # bound address: Redis on this host can only be reached through this IP, so binding 0.0.0.0 is simplest
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes                # change to yes to run as a daemon
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis_6379.log
databases 16
always-show-logo yes
save 900 1                   # enable RDB snapshots (enabled by default)
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6379      # Redis data directory
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly yes               # enable AOF persistence
appendfilename "appendonly.aof"   # AOF file name, defaults to appendonly.aof
appendfsync everysec         # fsync once per second: a good compromise between performance and durability, and the recommended setting
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
Slave node configuration — only one extra line is needed: replicaof 192.168.0.10 6379
[root@node2 redis]# grep -v -E "^#|^$" /etc/redis/6379.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis_6379.log
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6379
replicaof 192.168.0.10 6379  # IP and port of the master Redis
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
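As an alternative to editing the config file and restarting, replication can also be switched on at runtime from redis-cli with the REPLICAOF command (SLAVEOF on versions before 5.0). A minimal sketch, assuming the master from the example above at 192.168.0.10; a runtime REPLICAOF is lost on restart unless it is written back to the config file:

# On the slave (node2): attach this instance to the master at runtime
redis-cli -p 6379 REPLICAOF 192.168.0.10 6379

# Optionally persist the runtime change back into /etc/redis/6379.conf
redis-cli -p 6379 CONFIG REWRITE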
Start the master and the slave, and the data starts replicating.
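A quick way to confirm that replication is actually flowing, assuming the example addresses (master 192.168.0.10, slave 192.168.0.12): write a key on the master and read it back on the slave. A write against the slave should be refused because replica-read-only is yes (the exact error text can vary slightly between Redis versions).

# On the master: write a test key
redis-cli -h 192.168.0.10 -p 6379 SET repl:test "hello"

# On the slave: the key should show up almost immediately
redis-cli -h 192.168.0.12 -p 6379 GET repl:test
# -> "hello"

# Writes against the slave are rejected because replica-read-only is yes
redis-cli -h 192.168.0.12 -p 6379 SET repl:test "oops"
# -> (error) READONLY You can't write against a read only replica.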
============================================= Configuring a password ======================================
Master node:
requirepass pass1234   # password clients must use to authenticate to this master; slaves supply it via masterauth
Slave node:
requirepass pass1234   # password for logging in to this slave's own Redis
masterauth pass1234    # password used when connecting to the master for replication; the slave needs both settings
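Once requirepass is set, unauthenticated clients are rejected, so the password has to be supplied when connecting. A minimal check, using the pass1234 value from above (masterauth can also be changed at runtime with CONFIG SET if a restart is inconvenient):

# Without a password, commands fail once requirepass is active
redis-cli -h 192.168.0.10 -p 6379 PING
# -> (error) NOAUTH Authentication required.

# Supply the password on the command line (or run AUTH pass1234 after connecting)
redis-cli -h 192.168.0.10 -p 6379 -a pass1234 PING
# -> PONG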
Use ROLE and INFO to inspect the replication state:
127.0.0.1:6379> role
1) "master"                  # this node's role
2) (integer) 4740            # replication offset
3) 1) 1) "192.168.0.12"      # connected slave
      2) "6379"
      3) "4740"
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.12,port=6379,state=online,offset=5300,lag=1
master_replid:3f8d3c3ab80297ef3916615f8c3b2c22733814cf
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:5300
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:530
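The same commands can be run on the slave side; a healthy link shows role:slave and master_link_status:up. A sketch of what to look for, assuming the slave at 192.168.0.12 and the password configured above:

# On the slave: show only the replication section
redis-cli -h 192.168.0.12 -p 6379 -a pass1234 INFO replication
# Fields worth checking:
#   role:slave
#   master_host:192.168.0.10
#   master_port:6379
#   master_link_status:up    # "down" usually means the master is unreachable or masterauth is wrong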