Redis master-slave replication, Sentinel, and Cluster setup and deployment
Redis master-slave replication
1. Redis supports multiple instances: a single machine can run several independent Redis databases.
Environment prep: run three Redis databases in a 1-master, 2-slave layout.

Master, 6379.conf:

port 6379
daemonize yes
pidfile /data/6379/redis.pid
loglevel notice
logfile "/data/6379/redis.log"
dbfilename dump.rdb
dir /data/6379

Slave, 6380.conf:

port 6380
daemonize yes
pidfile /data/6380/redis.pid
loglevel notice
logfile "/data/6380/redis.log"
dbfilename dump.rdb
dir /data/6380
slaveof 127.0.0.1 6379

Slave, 6381.conf:

port 6381
daemonize yes
pidfile /data/6381/redis.pid
loglevel notice
logfile "/data/6381/redis.log"
dbfilename dump.rdb
dir /data/6381
slaveof 127.0.0.1 6379

[root@mcw01 ~/msRedis]$ ls
[root@mcw01 ~/msRedis]$ tree /data/
/data/
├── 6379
├── 6380
└── 6381

3 directories, 0 files
[root@mcw01 ~/msRedis]$ vim 6379.conf
[root@mcw01 ~/msRedis]$ vim 6380.conf
[root@mcw01 ~/msRedis]$ vim 6381.conf
[root@mcw01 ~/msRedis]$ cat 6379.conf
port 6379
daemonize yes
pidfile /data/6379/redis.pid
loglevel notice
logfile "/data/6379/redis.log"
dbfilename dump.rdb
dir /data/6379
[root@mcw01 ~/msRedis]$ cat 6380.conf
port 6380
daemonize yes
pidfile /data/6380/redis.pid
loglevel notice
logfile "/data/6380/redis.log"
dbfilename dump.rdb
dir /data/6380
[root@mcw01 ~/msRedis]$ redis-server 6379.conf
[root@mcw01 ~/msRedis]$ redis-server 6380.conf    # create the three config files and start the Redis services
[root@mcw01 ~/msRedis]$ redis-server 6381.conf
[root@mcw01 ~/msRedis]$ ps -ef|grep -v grep |grep redis
root 25270 1 0 11:43 ? 00:00:00 redis-server *:6379
root 25275 1 0 11:43 ? 00:00:00 redis-server *:6380
root 25280 1 0 11:43 ? 00:00:00 redis-server *:6381
[root@mcw01 ~/msRedis]$ redis-cli
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set name1 mcw1
OK
127.0.0.1:6379> keys *
1) "name1"
127.0.0.1:6379>
[root@mcw01 ~/msRedis]$ redis-cli -p 6380    # the key created on 6379 is not visible on 6380 or 6381; the three instances are independent
127.0.0.1:6380> get name1
(nil)
127.0.0.1:6380> keys *
(empty list or set)
127.0.0.1:6380>
[root@mcw01 ~/msRedis]$ redis-cli -p 6381
127.0.0.1:6381> keys *
(empty list or set)
127.0.0.1:6381>
[root@mcw01 ~/msRedis]$ redis-cli -p 6379 get name1    # append the command after the connection options to run it non-interactively, like mysql
"mcw1"
[root@mcw01 ~/msRedis]$ tree /data/    # inspect the directory tree
/data/
├── 6379
│   ├── redis.log
│   └── redis.pid
├── 6380
│   ├── redis.log
│   └── redis.pid
└── 6381
    ├── redis.log
    └── redis.pid

3 directories, 6 files
[root@mcw01 ~/msRedis]$
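The three config files above differ only in the port-derived paths and the slaveof line on the slaves, so they can be generated instead of typed by hand. A short sketch (the /data/<port> layout matches the transcript; the helper name `render_conf` is made up for illustration):

```python
# Sketch: generate the per-instance config files used above.
# The template mirrors the options shown in the transcript.
TEMPLATE = """port {port}
daemonize yes
pidfile /data/{port}/redis.pid
loglevel notice
logfile "/data/{port}/redis.log"
dbfilename dump.rdb
dir /data/{port}
"""

def render_conf(port, master=None):
    """Render a config; slaves get an extra slaveof directive."""
    conf = TEMPLATE.format(port=port)
    if master is not None:
        conf += "slaveof {} {}\n".format(master[0], master[1])
    return conf

if __name__ == "__main__":
    for port in (6379, 6380, 6381):
        master = None if port == 6379 else ("127.0.0.1", 6379)
        with open("{}.conf".format(port), "w") as f:
            f.write(render_conf(port, master))
```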
2. Enabling master-slave replication
redis-cli info                  # show all database info
redis-cli info replication      # show only the Replication section of info

On the 6380 and 6381 databases, configure replication. Setting it by command takes effect immediately but is not persistent, so remember to write it into the config files as well:

redis-cli -p 6380 slaveof 127.0.0.1 6379
redis-cli -p 6381 slaveof 127.0.0.1 6379

Then check the replication info on 6379, 6380, and 6381:

redis-cli -p 6380 info replication
redis-cli -p 6381 info replication

Replication gives read/write splitting: the master is writable, the slaves are read-only.

[root@mcw01 ~/msRedis]$ redis-cli -p 6379 info
# Server
redis_version:4.0.10
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:279e3e51d6e7969b
redis_mode:standalone
os:Linux 3.10.0-693.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:4.8.5
process_id:25270
run_id:29d570833f783120fd9e884ea8d4108384abaeae
tcp_port:6379
uptime_in_seconds:466
uptime_in_days:0
hz:10
lru_clock:2371478
executable:/root/msRedis/redis-server
config_file:/root/msRedis/6379.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:849456
used_memory_human:829.55K
used_memory_rss:7688192
used_memory_rss_human:7.33M
used_memory_peak:849456
used_memory_peak_human:829.55K
used_memory_peak_perc:100.12%
used_memory_overhead:836206
used_memory_startup:786504
used_memory_dataset:13250
used_memory_dataset_perc:21.05%
total_system_memory:1911832576
total_system_memory_human:1.78G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:9.05
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:1
rdb_bgsave_in_progress:0
rdb_last_save_time:1646538180
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats
total_connections_received:3
total_commands_processed:5
instantaneous_ops_per_sec:0
total_net_input_bytes:131
total_net_output_bytes:10197
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:1
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:0
master_replid:e3784eb64085052e7107d06d1fc605ae6dbb5b59
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:0.32
used_cpu_user:0.11
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=1,expires=0,avg_ttl=0
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ redis-cli -p 6379 info replication    # 6379 is currently a master, but with zero connected slaves
# Replication
role:master
connected_slaves:0
master_replid:e3784eb64085052e7107d06d1fc605ae6dbb5b59
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
[root@mcw01 ~/msRedis]$ redis-cli -p 6379 info keyspace    # "info <section>" shows just one section; the section names are the commented headers in the full info output
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
[root@mcw01 ~/msRedis]$ redis-cli -p 6379 info cluster
# Cluster
cluster_enabled:0
[root@mcw01 ~/msRedis]$

Enable replication:

[root@mcw01 ~/msRedis]$ redis-cli -p 6380 slaveof 127.0.0.1 6379    # make 6380 a slave of 6379
OK
[root@mcw01 ~/msRedis]$ redis-cli -p 6381 slaveof 127.0.0.1 6379    # make 6381 a slave of 6379
OK
[root@mcw01 ~/msRedis]$ redis-cli -p 6379 info replication    # the master 6379 now reports two connected slaves, with each slave's IP and port
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6381,state=online,offset=28,lag=0
slave1:ip=127.0.0.1,port=6380,state=online,offset=28,lag=0
master_replid:99b8b1b5d61e13152f4025821626574ed7f92ac9
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:28
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:28
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 info replication    # the slave 6380 reports role:slave, its master's IP/port, the link status, and that it is read-only
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:56
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:99b8b1b5d61e13152f4025821626574ed7f92ac9
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:56
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:56
[root@mcw01 ~/msRedis]$

The data created earlier on the master has now been replicated to the slaves:

[root@mcw01 ~/msRedis]$ redis-cli -p 6379
127.0.0.1:6379> keys *
1) "name1"
127.0.0.1:6379>
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 keys *    # the unquoted * was expanded by the shell; quote it (keys '*') or run keys * interactively
(error) ERR wrong number of arguments for 'keys' command
[root@mcw01 ~/msRedis]$ redis-cli -p 6380
127.0.0.1:6380> keys *
1) "name1"
127.0.0.1:6380>
[root@mcw01 ~/msRedis]$ redis-cli -p 6381
127.0.0.1:6381> keys *
1) "name1"
127.0.0.1:6381>
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ redis-cli -p 6379    # a new key created on the master is replicated to the slaves almost immediately, so replication is working
127.0.0.1:6379> set name2 mcw2
OK
127.0.0.1:6379>
[root@mcw01 ~/msRedis]$ redis-cli -p 6381
127.0.0.1:6381> keys *
1) "name1"
2) "name2"
127.0.0.1:6381>
127.0.0.1:6381> set name3 mcw3    # writing on a slave fails: slaves are read-only, which is the read/write split
(error) READONLY You can't write against a read only slave.
127.0.0.1:6381>
[root@mcw01 ~/msRedis]$

If replication was only set up on the command line, remember to add the command to the config file as well, so it is applied again after a restart:

slaveof 127.0.0.1 6379
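Since the `info replication` output above is just `key:value` lines under `#` section headers, it is easy to consume from scripts. A hypothetical helper (not part of Redis or redis-cli) that turns such a block into a dict:

```python
def parse_info(text):
    """Parse redis-cli INFO output: key:value lines, '#' section headers."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and section headers
        key, _, value = line.partition(":")
        result[key] = value
    return result

# Sample taken from the transcript above.
sample = """# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6381,state=online,offset=28,lag=0
"""
info = parse_info(sample)
```

A monitoring script could then simply check `info["role"]` and `info["connected_slaves"]`.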
3. Simulating a master failure and switching master/slave roles by hand
1. Kill the 6379 process to take down the master.
2. Promote 6381 to be the new master; it must first drop its slave role:
   redis-cli -p 6381 slaveof no one
3. Point 6380 at the new master 6381:
   redis-cli -p 6380 slaveof 127.0.0.1 6381

Step 1: kill the 6379 process.

[root@mcw01 ~/msRedis]$ ps -ef|grep -v grep |grep redis
root 25270 1 0 11:43 ? 00:00:02 redis-server *:6379
root 25275 1 0 11:43 ? 00:00:02 redis-server *:6380
root 25280 1 0 11:43 ? 00:00:02 redis-server *:6381
[root@mcw01 ~/msRedis]$ kill 25270
[root@mcw01 ~/msRedis]$ ps -ef|grep -v grep |grep redis
root 25275 1 0 11:43 ? 00:00:02 redis-server *:6380
root 25280 1 0 11:43 ? 00:00:02 redis-server *:6381
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 info replication    # the master is dead, but the slaves still point at 6379
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
..........
[root@mcw01 ~/msRedis]$ redis-cli -p 6381 info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
........
[root@mcw01 ~/msRedis]$

Step 2: promote 6381 to be the new master by dropping its slave role.

[root@mcw01 ~/msRedis]$ redis-cli -p 6381 slaveof no one
OK
[root@mcw01 ~/msRedis]$ redis-cli -p 6381
127.0.0.1:6381> keys *
1) "name1"
2) "name2"
127.0.0.1:6381> set name3 mcw3    # once it is no longer a slave, it accepts writes
OK
127.0.0.1:6381>
[root@mcw01 ~/msRedis]$ redis-cli -p 6381 info replication    # it is now a master, with no slaves yet
# Replication
role:master
connected_slaves:0
........
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 info replication    # 6380 is not yet a slave of the new master 6381; that must be changed by hand
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
......
[root@mcw01 ~/msRedis]$

Step 3: point 6380 at the new master 6381.

[root@mcw01 ~/msRedis]$ redis-cli -p 6380 slaveof 127.0.0.1 6381
OK
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 info replication    # the new master is set and the link status is up
# Replication
role:slave
master_host:127.0.0.1
master_port:6381
master_link_status:up
.........
[root@mcw01 ~/msRedis]$ redis-cli -p 6380    # the slave can see name3, which was created on the new master
127.0.0.1:6380> keys *
1) "name2"
2) "name3"
3) "name1"
127.0.0.1:6380>
[root@mcw01 ~/msRedis]$ redis-cli -p 6381    # create another key, name4, on the master
127.0.0.1:6381> set name4 mcw4
OK
127.0.0.1:6381>
[root@mcw01 ~/msRedis]$ redis-cli -p 6380    # the slave sees the new key name4, and remains read-only
127.0.0.1:6380> keys *
1) "name2"
2) "name3"
3) "name1"
4) "name4"
127.0.0.1:6380> set name5 mcw5
(error) READONLY You can't write against a read only slave.
127.0.0.1:6380>
[root@mcw01 ~/msRedis]$
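The three manual steps above amount to rewiring who replicates whom. A toy in-memory model of the role changes (purely illustrative; the `Instance` class is made up and does not talk to real Redis):

```python
# Toy model: each instance tracks its master; "slaveof no one" promotes it.
class Instance:
    def __init__(self, port):
        self.port = port
        self.master = None          # (host, port) of master, or None

    @property
    def role(self):
        return "slave" if self.master else "master"

    def slaveof(self, host, port):
        self.master = (host, port)

    def slaveof_no_one(self):
        self.master = None

r6380, r6381 = Instance(6380), Instance(6381)
r6380.slaveof("127.0.0.1", 6379)
r6381.slaveof("127.0.0.1", 6379)

# Master 6379 dies: promote 6381, then repoint 6380 at it.
r6381.slaveof_no_one()
r6380.slaveof("127.0.0.1", 6381)
```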
Sentinel setup
Redis Sentinel for high availability
What redis-sentinel does
Environment prep: deploy the Redis replication cluster
Three Redis instances, configured in a 1-master, 2-slave layout.

6379.conf
port 6379
daemonize yes
logfile "6379.log"
dbfilename "dump-6379.rdb"
dir "/var/redis/data/"

6380.conf
port 6380
daemonize yes
logfile "6380.log"
dbfilename "dump-6380.rdb"
dir "/var/redis/data/"
slaveof 127.0.0.1 6379

6381.conf
port 6381
daemonize yes
logfile "6381.log"
dbfilename "dump-6381.rdb"
dir "/var/redis/data/"
slaveof 127.0.0.1 6379

[root@mcw01 ~/msRedis]$ ps -ef|grep redis
root 25275 1 0 11:43 ? 00:00:08 redis-server *:6380
root 25280 1 0 11:43 ? 00:00:07 redis-server *:6381
root 45901 25154 0 13:51 pts/0 00:00:00 grep --color=auto redis
[root@mcw01 ~/msRedis]$ kill 25275
[root@mcw01 ~/msRedis]$ kill 25280
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ ps -ef|grep redis
root 45903 25154 0 13:51 pts/0 00:00:00 grep --color=auto redis
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ ls
6379.conf 6380.conf 6381.conf
[root@mcw01 ~/msRedis]$ vim 6380.conf
[root@mcw01 ~/msRedis]$ vim 6381.conf
[root@mcw01 ~/msRedis]$ cat 6379.conf
port 6379
daemonize yes
pidfile /data/6379/redis.pid
loglevel notice
logfile "/data/6379/redis.log"
dbfilename dump.rdb
dir /data/6379
[root@mcw01 ~/msRedis]$ cat 6380.conf    # here the slaveof line is written into the config file, so starting from the file needs no extra commands
port 6380
daemonize yes
pidfile /data/6380/redis.pid
loglevel notice
logfile "/data/6380/redis.log"
dbfilename dump.rdb
dir /data/6380
slaveof 127.0.0.1 6379
[root@mcw01 ~/msRedis]$ redis-server 6379.conf
[root@mcw01 ~/msRedis]$ redis-server 6380.conf
[root@mcw01 ~/msRedis]$ redis-server 6381.conf
[root@mcw01 ~/msRedis]$ ps -ef|grep -v grep |grep redis    # the three services are up
root 45949 1 0 13:53 ? 00:00:00 redis-server *:6379
root 45954 1 0 13:53 ? 00:00:00 redis-server *:6380
root 45960 1 0 13:53 ? 00:00:00 redis-server *:6381
[root@mcw01 ~/msRedis]$ redis-cli -p 6379 info replication    # one master, two slaves, replication healthy
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=84,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=84,lag=0
........
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
.......
[root@mcw01 ~/msRedis]$
Environment prep: deploy the three Redis Sentinel processes
Three sentinel processes, each told what to monitor. Again prepare three config files:

sentinel-26379.conf
port 26379
dir /var/redis/data/
logfile "26379.log"
// this Sentinel node monitors the master at the given address (replace 0.0.0.0 with the master's real address, e.g. 192.168.182.130)
// the trailing 2 means at least 2 Sentinel nodes must agree before the master is judged failed
// s21ms is the alias for the master node
sentinel monitor s21ms 0.0.0.0 6379 2
// each Sentinel periodically PINGs the data nodes and the other Sentinels; a node that has not replied within this many milliseconds (20000 ms = 20 s here; the default is 30000) is judged unreachable
sentinel down-after-milliseconds s21ms 20000
// once the Sentinels agree the master is down, the Sentinel leader performs the failover and elects a new master; the old slaves then replicate from the new master, at most 1 slave syncing at a time
sentinel parallel-syncs s21ms 1
// failover timeout, 180000 ms
sentinel failover-timeout s21ms 180000

The three sentinel config files are identical except for the port:
sentinel-26380.conf
sentinel-26381.conf

2. Start the three Redis databases and then the three sentinel processes. Note: a sentinel rewrites its config file after its first start, so if the config is wrong, delete the file and write it again.

The config files actually used here (with 127.0.0.1):

sentinel-26379.conf
port 26379
dir /var/redis/data/
logfile "26379.log"
sentinel monitor s21ms 127.0.0.1 6379 2
sentinel down-after-milliseconds s21ms 20000
sentinel parallel-syncs s21ms 1
sentinel failover-timeout s21ms 180000
# run in the background
daemonize yes

# only the port differs
sentinel-26380.conf
sentinel-26381.conf

# start them
1244 redis-sentinel sentinel-26379.conf
1245 redis-sentinel sentinel-26380.conf
1246 redis-sentinel sentinel-26381.conf

In this deployment the master alias is mcw and the dir is /data/sentinel:

sentinel-26379.conf
port 26379
dir /data/sentinel
logfile "26379.log"
sentinel monitor mcw 127.0.0.1 6379 2
sentinel down-after-milliseconds mcw 20000
sentinel parallel-syncs mcw 1
sentinel failover-timeout mcw 180000
daemonize yes

[root@mcw01 ~/msRedis]$ ss -lntup|grep 6379
tcp LISTEN 0 511 *:6379 *:* users:(("redis-server",pi
tcp LISTEN 0 511 :::6379 :::* users:(("redis-server",pi
[root@mcw01 ~/msRedis]$ ss -anp|grep 6379
tcp LISTEN 0 511 *:6379 *:* users:(("redis-server",pi
tcp ESTAB 0 0 127.0.0.1:6379 127.0.0.1:39093 users:(("redis-
tcp ESTAB 0 0 127.0.0.1:6379 127.0.0.1:39091 users:(("redis-
tcp ESTAB 0 0 127.0.0.1:39093 127.0.0.1:6379 users:(("redis-
tcp ESTAB 0 0 127.0.0.1:39091 127.0.0.1:6379 users:(("redis-
tcp LISTEN 0 511 :::6379 :::* users:(("redis-server",pi
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ ls /data/63
6379/ 6380/ 6381/
[root@mcw01 ~/msRedis]$ mkdir /data/sentinel
[root@mcw01 ~/msRedis]$ vim 26379.conf
[root@mcw01 ~/msRedis]$ sed "s#26379#26380#g" 26379.conf >26380.conf
[root@mcw01 ~/msRedis]$ sed "s#26379#26381#g" 26379.conf >26381.conf
[root@mcw01 ~/msRedis]$ cat 26379.conf
port 26379
dir /data/sentinel
logfile "26379.log"
sentinel monitor mcw 127.0.0.1 6379 2
sentinel down-after-milliseconds mcw 20000
sentinel parallel-syncs mcw 1
sentinel failover-timeout mcw 180000
daemonize yes
[root@mcw01 ~/msRedis]$ cat 26380.conf
port 26380
dir /data/sentinel
logfile "26380.log"
sentinel monitor mcw 127.0.0.1 6379 2
sentinel down-after-milliseconds mcw 20000
sentinel parallel-syncs mcw 1
sentinel failover-timeout mcw 180000
daemonize yes
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ ls
26379.conf 26380.conf 26381.conf 6379.conf 6380.conf 6381.conf
[root@mcw01 ~/msRedis]$ mv 26379.conf sentinel-26379.conf    # give them clearer names
[root@mcw01 ~/msRedis]$ mv 26380.conf sentinel-26380.conf
[root@mcw01 ~/msRedis]$ mv 26381.conf sentinel-26381.conf
[root@mcw01 ~/msRedis]$ ls
6379.conf 6380.conf 6381.conf sentinel-26379.conf sentinel-26380.conf sentinel-26381.conf
[root@mcw01 ~/msRedis]$

An error you may hit:

[root@mcw01 ~/msRedis]$ redis-server sentinel-26379.conf
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 4
>>> 'sentinel monitor mcw 127.0.0.1 6379 2'
sentinel directive while not in sentinel mode
[root@mcw01 ~/msRedis]$

This is caused by using the wrong command to start sentinel mode. The correct form is:

redis-server sentinel.conf --sentinel

To avoid forgetting the trailing --sentinel, prefer redis-sentinel sentinel.conf over redis-server.

[root@mcw01 ~/msRedis]$ redis-sentinel sentinel-26379.conf
[root@mcw01 ~/msRedis]$ redis-sentinel sentinel-26380.conf
[root@mcw01 ~/msRedis]$ redis-sentinel sentinel-26381.conf
[root@mcw01 ~/msRedis]$ ps -ef|grep -v grep |grep redis    # the three sentinels are running
root 45949 1 0 13:53 ? 00:00:01 redis-server *:6379
root 45954 1 0 13:53 ? 00:00:02 redis-server *:6380
root 45960 1 0 13:53 ? 00:00:02 redis-server *:6381
root 46061 1 1 14:27 ? 00:00:00 redis-sentinel *:26379 [sentinel]
root 46066 1 0 14:28 ? 00:00:00 redis-sentinel *:26380 [sentinel]
root 46071 1 0 14:28 ? 00:00:00 redis-sentinel *:26381 [sentinel]
[root@mcw01 ~/msRedis]$ ls
6379.conf 6380.conf 6381.conf sentinel-26379.conf sentinel-26380.conf sentinel-26381.conf
[root@mcw01 ~/msRedis]$ cat 6379.conf
port 6379
daemonize yes
pidfile /data/6379/redis.pid
loglevel notice
logfile "/data/6379/redis.log"
dbfilename dump.rdb
dir /data/6379
[root@mcw01 ~/msRedis]$ cat sentinel-26379.conf    # after startup the config was rewritten: sentinel added some entries of its own
port 26379
dir "/data/sentinel"
logfile "26379.log"
sentinel myid 750ea6253069388e5897651086d50af8fc604a7f
sentinel monitor mcw 127.0.0.1 6379 2
sentinel down-after-milliseconds mcw 20000
sentinel config-epoch mcw 0
daemonize yes
# Generated by CONFIG REWRITE
sentinel leader-epoch mcw 0
sentinel known-slave mcw 127.0.0.1 6380
sentinel known-slave mcw 127.0.0.1 6381
sentinel known-sentinel mcw 127.0.0.1 26380 3098a36a80a4d1f1b3f01f9a332c8abdf21ee56a
sentinel known-sentinel mcw 127.0.0.1 26381 50aa98a452708c83334286a9a8e2f06cef1f9fd2
sentinel current-epoch 0
[root@mcw01 ~/msRedis]$
3. Verify the sentinels are working
redis-cli -p 26379 info sentinel

master0:name=mcw,status=ok,address=127.0.0.1:6379,slaves=2,sentinels=3

This line shows the monitored master's name, its status, its address, its number of slaves, and the number of sentinels. If this info comes back, the sentinel deployment is healthy.

[root@mcw01 ~/msRedis]$ redis-cli -p 26379 info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mcw,status=ok,address=127.0.0.1:6379,slaves=2,sentinels=3
[root@mcw01 ~/msRedis]$
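The two numbers in the sentinel config drive two separate decisions: down-after-milliseconds controls when one sentinel marks a node subjectively down, and the quorum (the trailing 2 in `sentinel monitor`) controls when the group declares the master objectively down and starts a failover. A simplified sketch of those two checks (an illustrative model, not Sentinel's actual code):

```python
def subjectively_down(ms_since_last_reply, down_after_ms=20000):
    """One sentinel's view: no PING reply within down-after-milliseconds."""
    return ms_since_last_reply > down_after_ms

def objectively_down(sdown_votes, quorum=2):
    """Group view: at least `quorum` sentinels report the master as down."""
    return sdown_votes >= quorum
```

With the config above (20000 ms, quorum 2 of 3 sentinels), a killed master is flagged by each sentinel after ~20 s, and two agreeing sentinels are enough to trigger the failover.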
4. Kill the master and watch the automatic failover
1253 kill -9 12749
1254 ps -ef|grep redis
1255 redis-cli -p 6380 info replication
1256 redis-cli -p 6381 info replication
1257 redis-cli -p 6380 info replication
1258 redis-cli -p 6381 info replication

[root@mcw01 ~/msRedis]$ ps -ef|grep -v grep |grep redis
root 45949 1 0 13:53 ? 00:00:02 redis-server *:6379
root 45954 1 0 13:53 ? 00:00:03 redis-server *:6380
root 45960 1 0 13:53 ? 00:00:02 redis-server *:6381
root 46061 1 0 14:27 ? 00:00:02 redis-sentinel *:26379 [sentinel]
root 46066 1 0 14:28 ? 00:00:01 redis-sentinel *:26380 [sentinel]
root 46071 1 0 14:28 ? 00:00:01 redis-sentinel *:26381 [sentinel]
[root@mcw01 ~/msRedis]$ kill 45949    # kill the master
[root@mcw01 ~/msRedis]$ ps -ef|grep -v grep |grep redis
root 45954 1 0 13:53 ? 00:00:03 redis-server *:6380
root 45960 1 0 13:53 ? 00:00:03 redis-server *:6381
root 46061 1 0 14:27 ? 00:00:02 redis-sentinel *:26379 [sentinel]
root 46066 1 0 14:28 ? 00:00:02 redis-sentinel *:26380 [sentinel]
root 46071 1 0 14:28 ? 00:00:02 redis-sentinel *:26381 [sentinel]
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 info replication    # checked right away: the replication link is down, but 6380 and 6381 still point at 6379
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
.......
[root@mcw01 ~/msRedis]$ redis-cli -p 6381 info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
........
[root@mcw01 ~/msRedis]$
[root@mcw01 ~/msRedis]$ redis-cli -p 6380 info replication    # about 20 seconds later: 6381 has been promoted to master and 6380 is now its slave, so automatic failover worked
# Replication
role:slave
master_host:127.0.0.1
master_port:6381
master_link_status:up
.......
[root@mcw01 ~/msRedis]$ redis-cli -p 6381 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=166686,lag=0
........
[root@mcw01 ~/msRedis]$
redis-cluster setup
Environment preparation
1. Prepare config files for 6 Redis nodes.

redis-7000.conf
port 7000
daemonize yes
dir "/opt/redis/data"
logfile "7000.log"
dbfilename "dump-7000.rdb"
cluster-enabled yes                   # enable cluster mode
cluster-config-file nodes-7000.conf   # cluster-internal config file
cluster-require-full-coverage no      # when yes, all 16384 slots must be healthy for the cluster to serve requests at all, i.e. any one bad slot takes the whole cluster offline; production usually sets no

The 6 config files differ only in the port:

redis-7000.conf
port 7000
daemonize yes
dir "/opt/redis/data"
logfile "7000.log"
dbfilename "dump-7000.rdb"
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-require-full-coverage no

redis-7001.conf
redis-7002.conf
redis-7003.conf
redis-7004.conf
redis-7005.conf

[root@mcw01 ~/msRedis]$ cd cluster/
[root@mcw01 ~/msRedis/cluster]$ ls
7000.conf 7001.conf 7002.conf 7003.conf 7004.conf
[root@mcw01 ~/msRedis/cluster]$ cat 7000.conf
port 7000
daemonize yes
dir "/opt/redis/data"
logfile "7000.log"
dbfilename "dump-7000.rdb"
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-require-full-coverage no
[root@mcw01 ~/msRedis/cluster]$ sed "s#7000#7005#g" 7000.conf >7005.conf
[root@mcw01 ~/msRedis/cluster]$ ls
7000.conf 7001.conf 7002.conf 7003.conf 7004.conf 7005.conf
[root@mcw01 ~/msRedis/cluster]$ cat 7005.conf
port 7005
daemonize yes
dir "/opt/redis/data"
logfile "7005.log"
dbfilename "dump-7005.rdb"
cluster-enabled yes
cluster-config-file nodes-7005.conf
cluster-require-full-coverage no
[root@mcw01 ~/msRedis/cluster]$
2. Start the 6 nodes from their config files
1288 redis-server 7000.conf
1290 redis-server 7001.conf
1291 redis-server 7002.conf
1292 redis-server 7003.conf
1293 redis-server 7004.conf
1294 redis-server 7005.conf

The 6 cluster nodes 7000-7005 are up, and ps marks them as [cluster]:

[root@mcw01 ~/msRedis/cluster]$ ls
7000.conf 7001.conf 7002.conf 7003.conf 7004.conf 7005.conf
[root@mcw01 ~/msRedis/cluster]$ mkdir /opt/redis/data
mkdir: cannot create directory ‘/opt/redis/data’: No such file or directory
[root@mcw01 ~/msRedis/cluster]$ mkdir -p /opt/redis/data
[root@mcw01 ~/msRedis/cluster]$ ps -ef|grep -v grep|grep redis
root 45954 1 0 13:53 ? 00:00:06 redis-server *:6380
root 45960 1 0 13:53 ? 00:00:06 redis-server *:6381
root 46061 1 0 14:27 ? 00:00:07 redis-sentinel *:26379 [sentinel]
root 46066 1 0 14:28 ? 00:00:07 redis-sentinel *:26380 [sentinel]
root 46071 1 0 14:28 ? 00:00:07 redis-sentinel *:26381 [sentinel]
[root@mcw01 ~/msRedis/cluster]$ redis-server 7000.conf
[root@mcw01 ~/msRedis/cluster]$ redis-server 7001.conf
[root@mcw01 ~/msRedis/cluster]$ redis-server 7002.conf
[root@mcw01 ~/msRedis/cluster]$ redis-server 7003.conf
[root@mcw01 ~/msRedis/cluster]$ redis-server 7004.conf
[root@mcw01 ~/msRedis/cluster]$ redis-server 7005.conf
[root@mcw01 ~/msRedis/cluster]$ ps -ef|grep -v grep|grep redis
root 45954 1 0 13:53 ? 00:00:06 redis-server *:6380
root 45960 1 0 13:53 ? 00:00:06 redis-server *:6381
root 46061 1 0 14:27 ? 00:00:08 redis-sentinel *:26379 [sentinel]
root 46066 1 0 14:28 ? 00:00:07 redis-sentinel *:26380 [sentinel]
root 46071 1 0 14:28 ? 00:00:07 redis-sentinel *:26381 [sentinel]
root 46293 1 0 15:17 ? 00:00:00 redis-server *:7000 [cluster]
root 46298 1 0 15:17 ? 00:00:00 redis-server *:7001 [cluster]
root 46303 1 0 15:18 ? 00:00:00 redis-server *:7002 [cluster]
root 46308 1 0 15:18 ? 00:00:00 redis-server *:7003 [cluster]
root 46313 1 0 15:18 ? 00:00:00 redis-server *:7004 [cluster]
root 46318 1 1 15:18 ? 00:00:00 redis-server *:7005 [cluster]
[root@mcw01 ~/msRedis/cluster]$
3. Assign the Redis hash slots
There are two ways to assign the slots:
- assign them manually, node by node
- use the Ruby redis module and helper script to assign them automatically

Log in now and try to add data: the error says the hash slot is not served. That is, node 7000 has no slots assigned yet, and data can only live in slots.

[root@mcw01 ~/msRedis/cluster]$ redis-cli -p 7000
127.0.0.1:7000> keys *
(empty list or set)
127.0.0.1:7000> set name1 mcw1
(error) CLUSTERDOWN Hash slot not served
127.0.0.1:7000>
[root@mcw01 ~/msRedis/cluster]$
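The "Hash slot not served" error exists because every key maps to one of 16384 slots via CRC16(key) mod 16384, where CRC16 is the CCITT/XMODEM variant (hash tags in `{...}` are ignored here for brevity). A pure-Python sketch of the mapping:

```python
def crc16(data):
    """CRC16-CCITT (XMODEM): poly 0x1021, init 0x0000, as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key):
    """Map a key (bytes) to its cluster hash slot."""
    return crc16(key) % 16384
```

This reproduces the slot Redis reports later in this section when name1 is written: slot 12933, which falls in the 10923-16383 range served by node 7002.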
4. Set up the Ruby environment
1. yum is the simplest way to install Ruby: yum install ruby
2. That sets up the PATH environment variables for ruby and gem automatically.
3. Download the Ruby redis client module: wget http://rubygems.org/downloads/redis-3.3.0.gem
4. Install the module with Ruby's package manager, gem: gem install -l redis-3.3.0.gem

Ruby, like Python, is an interpreted language; gem is the equivalent of pip.

[root@mcw01 ~/msRedis/cluster]$ wget http://rubygems.org/downloads/redis-3.3.0.gem
--2022-03-06 16:10:28-- http://rubygems.org/downloads/redis-3.3.0.gem
Resolving rubygems.org (rubygems.org)... 151.101.193.227, 151.101.1.227, 151.101.65.227, ...
Connecting to rubygems.org (rubygems.org)|151.101.193.227|:80... failed: Connection timed out.
Connecting to rubygems.org (rubygems.org)|151.101.1.227|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 92160 (90K) [application/octet-stream]
Saving to: ‘redis-3.3.0.gem’

100%[===============================================================>] 92,160 --.-K/s in 0.08s

2022-03-06 16:10:31 (1.12 MB/s) - ‘redis-3.3.0.gem’ saved [92160/92160]

[root@mcw01 ~/msRedis/cluster]$ ls
7000.conf 7001.conf 7002.conf 7003.conf 7004.conf 7005.conf redis-3.3.0.gem
[root@mcw01 ~/msRedis/cluster]$ gem install -l redis-3.3.0.gem
Successfully installed redis-3.3.0
Parsing documentation for redis-3.3.0
Installing ri documentation for redis-3.3.0
1 gem installed
5. Assign the cluster's slots in one step with the Ruby script
Find the redis-trib.rb command on the machine and run it by absolute path to create the cluster and assign the slots:

/opt/redis-4.0.10/src/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005

The slots are now assigned:

[root@mcw01 ~/msRedis/cluster]$ /opt/redis-4.0.10/src/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:    # three masters, three replicating slaves
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7004 to 127.0.0.1:7000
Adding replica 127.0.0.1:7005 to 127.0.0.1:7001
Adding replica 127.0.0.1:7003 to 127.0.0.1:7002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 2d2654b26e43f6245b41dc2cae44f7bb6ec846bb 127.0.0.1:7000    # each master gets its own slot range
   slots:0-5460 (5461 slots) master
M: 102d887feee9b5d2db0284ecff83d893f4736aef 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: c2ba927b42526110a7f840643a1c653057c6b811 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
S: 22bcee4dc0439600aae268b9de093db39d73e2cf 127.0.0.1:7003
   replicates c2ba927b42526110a7f840643a1c653057c6b811
S: fb7ed8949b7ab36351aabe72a9cb1ae8f859d6d8 127.0.0.1:7004
   replicates 2d2654b26e43f6245b41dc2cae44f7bb6ec846bb
S: cc0da6e5df726d65df53849dd92e2edd6ef66760 127.0.0.1:7005
   replicates 102d887feee9b5d2db0284ecff83d893f4736aef
Can I set the above configuration? (type 'yes' to accept): yes    # accept the allocation
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: 2d2654b26e43f6245b41dc2cae44f7bb6ec846bb 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 102d887feee9b5d2db0284ecff83d893f4736aef 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 22bcee4dc0439600aae268b9de093db39d73e2cf 127.0.0.1:7003
   slots: (0 slots) slave
   replicates c2ba927b42526110a7f840643a1c653057c6b811
S: fb7ed8949b7ab36351aabe72a9cb1ae8f859d6d8 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 2d2654b26e43f6245b41dc2cae44f7bb6ec846bb
M: c2ba927b42526110a7f840643a1c653057c6b811 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: cc0da6e5df726d65df53849dd92e2edd6ef66760 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 102d887feee9b5d2db0284ecff83d893f4736aef
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.    # slot assignment succeeded
[root@mcw01 ~/msRedis/cluster]$
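The ranges redis-trib printed (0-5460, 5461-10922, 10923-16383) come from splitting 16384 slots as evenly as possible across the masters. A sketch that reproduces that split using rounded boundaries (a model of the arithmetic, not redis-trib's actual Ruby code):

```python
def split_slots(num_masters, total=16384):
    """Contiguous, near-even slot ranges, one (first, last) pair per master."""
    per = total / num_masters
    bounds = [round(i * per) for i in range(num_masters + 1)]
    return [(bounds[i], bounds[i + 1] - 1) for i in range(num_masters)]
```

For 3 masters this yields 5461 + 5462 + 5461 slots, matching the transcript; the boundaries telescope, so every slot is covered exactly once for any master count.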
6. With the slots assigned, data can be written into the cluster
redis-cli -c logs in in cluster mode. Write some data: the key is hashed to a slot and the client is redirected, which shows the cluster is working.

[root@mcw01 ~/msRedis/cluster]$ redis-cli -c -p 7000
127.0.0.1:7000> keys *
(empty list or set)
127.0.0.1:7000> set name1 mcw1    # set on 7000; the key hashes to slot 12933, which lies in the 10923-16383 range served by 7002
-> Redirected to slot [12933] located at 127.0.0.1:7002
OK
127.0.0.1:7002> keys *    # after the write, the client has switched to 7002
1) "name1"
127.0.0.1:7002> get name1
"mcw1"
127.0.0.1:7002>
[root@mcw01 ~/msRedis/cluster]$ redis-cli -c -p 7000    # log back in to 7000
127.0.0.1:7000> keys *    # the key does not live on 7000
(empty list or set)
127.0.0.1:7000> get name1    # but 7000 can still fetch data stored in another node's slots
-> Redirected to slot [12933] located at 127.0.0.1:7002
"mcw1"
127.0.0.1:7002> keys *    # after the get, the client is connected to the node holding the data
1) "name1"
127.0.0.1:7002>
[root@mcw01 ~/msRedis/cluster]$
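Under the hood, what redis-cli -c does is follow the redirection error the contacted node returns. A sketch of parsing that error (the function name `parse_redirect` is made up for illustration):

```python
def parse_redirect(err):
    """Parse a cluster redirection error, e.g. 'MOVED 12933 127.0.0.1:7002'."""
    kind, slot, addr = err.split()
    if kind not in ("MOVED", "ASK"):
        raise ValueError("not a redirection error: " + err)
    # rpartition keeps IPv6-style hosts containing ':' intact
    host, _, port = addr.rpartition(":")
    return kind, int(slot), host, int(port)

# A cluster-aware client reconnects to (host, port) and retries the command;
# on MOVED it also remembers that the slot now lives on that node.
```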
Managing Redis start and stop with systemd
# cat /usr/lib/systemd/system/redis.service
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=/usr/bin/redis-server /etc/redis.conf --supervised systemd
ExecStop=/usr/libexec/redis-shutdown
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
A Redis configuration reference
Reference 1
# grep -Ev "^$|^#" /etc/redis.conf
bind 10.xx.x.43
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /var/log/redis/redis.log
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes