Deploying a Redis Cluster with Docker
Create the redis network
```
[root@aliyun ~]# docker network create redis --subnet 172.38.0.0/16
7a2883c293df940988b0d539a3f7af67451d709a8883cae3c94b2801190b2f9c
[root@aliyun ~]# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
8df3cdb08d2a   bridge   bridge   local
c3009610274a   host     host     local
20cbe3257eda   mynet    bridge   local
e6d7cbd64aa7   none     null     local
7a2883c293df   redis    bridge   local
```
Create the six redis configuration files
The script is as follows:
```
for port in $(seq 1 6); do
  mkdir -p /home/redis/node-${port}/conf
  touch /home/redis/node-${port}/conf/redis.conf
  cat << EOF > /home/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
```
Result:
```
[root@aliyun ~]# tree /home/redis/
/home/redis/
├── node-1
│   └── conf
│       └── redis.conf
├── node-2
│   └── conf
│       └── redis.conf
├── node-3
│   └── conf
│       └── redis.conf
├── node-4
│   └── conf
│       └── redis.conf
├── node-5
│   └── conf
│       └── redis.conf
└── node-6
    └── conf
        └── redis.conf

12 directories, 6 files
```
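Everything in this post is derived from the single loop variable: node N gets container IP 172.38.0.1N, host port 637N, and cluster bus port 1637N. A minimal dry run (no files written, no Docker needed) makes the naming scheme explicit:

```shell
#!/bin/sh
# Dry run of the naming scheme used throughout this post.
for port in $(seq 1 6); do
  echo "node-${port}: ip=172.38.0.1${port} host-port=637${port} bus-port=1637${port}"
done
```

The first line printed is `node-1: ip=172.38.0.11 host-port=6371 bus-port=16371`, which matches the `cluster-announce-ip` values in the generated configs and the port mappings used below.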
Start the redis containers in bulk
```
for port in $(seq 1 6); do
  docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
    -v /home/redis/node-${port}/data:/data \
    -v /home/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
    -d \
    --net redis \
    --ip 172.38.0.1${port} \
    redis:latest redis-server /etc/redis/redis.conf
done
```
Running it pulls the image on first use and prints the six new container IDs:
```
Unable to find image 'redis:latest' locally
latest: Pulling from library/redis
a2abf6c4d29d: Already exists
c7a4e4382001: Pull complete
4044b9ba67c9: Pull complete
c8388a79482f: Pull complete
413c8bb60be2: Pull complete
1abfd3011519: Pull complete
Digest: sha256:db485f2e245b5b3329fdc7eff4eb00f913e09d8feb9ca720788059fdc2ed8339
Status: Downloaded newer image for redis:latest
9196417df8df1c6922ccc29043ea35f8d966ea252fb210ec57fdda416ca26615
3cf8b52d9ced15a919bccc239dd00e49d24de25a5bbfe228dc4c1ca15bbdc688
6127b7b675400997bc4ac44ce07f306b83ba5b6ae6541e493714d6d3052a3bfa
2ee0a26a8c28873ff99f874a8b307fcb8181f61368f6549721fad875ab8187ac
07a04baa9b904401df31f2f77ada3ac86c730f071db279954b905c42177a54e4
c7977e97fb7cbcbac691d7558ce2599f0fafb251090bf8960d66b8af25c587ac
```
Verify:
```
[root@aliyun ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                                              NAMES
c7977e97fb7c   redis:latest   "docker-entrypoint.s…"   46 seconds ago   Up 46 seconds   0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
07a04baa9b90   redis:latest   "docker-entrypoint.s…"   47 seconds ago   Up 46 seconds   0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
2ee0a26a8c28   redis:latest   "docker-entrypoint.s…"   47 seconds ago   Up 47 seconds   0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
6127b7b67540   redis:latest   "docker-entrypoint.s…"   48 seconds ago   Up 47 seconds   0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp   redis-3
3cf8b52d9ced   redis:latest   "docker-entrypoint.s…"   48 seconds ago   Up 48 seconds   0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
9196417df8df   redis:latest   "docker-entrypoint.s…"   49 seconds ago   Up 48 seconds   0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1
```
Create the redis cluster
```
# Enter any one of the redis containers (the redis image ships with sh, not bash)
[root@aliyun ~]# docker exec -it redis-1 /bin/sh
# Inside the container, create the cluster
# redis-cli --cluster create 172.38.0.11:6379 \
172.38.0.12:6379 \
172.38.0.13:6379 \
172.38.0.14:6379 \
172.38.0.15:6379 \
172.38.0.16:6379 \
--cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: a15f45f63d16170990514a8a33a4f901353a17b9 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 0437f1897dd89d222923715e13a6bc7bf6a3f974 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 858f31594580b4066526e99adf2299c277622ef9 172.38.0.14:6379
   replicates 0437f1897dd89d222923715e13a6bc7bf6a3f974
S: c8e0588915cb115a2f33941e65e181130fe0087f 172.38.0.15:6379
   replicates 2f3a8fadb9d347f36eaa36a5e7a505e1672095fa
S: 6c40e935f14e9687111e1385bb517547d0833588 172.38.0.16:6379
   replicates a15f45f63d16170990514a8a33a4f901353a17b9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 0437f1897dd89d222923715e13a6bc7bf6a3f974 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: a15f45f63d16170990514a8a33a4f901353a17b9 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 858f31594580b4066526e99adf2299c277622ef9 172.38.0.14:6379
   slots: (0 slots) slave
   replicates 0437f1897dd89d222923715e13a6bc7bf6a3f974
S: c8e0588915cb115a2f33941e65e181130fe0087f 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 2f3a8fadb9d347f36eaa36a5e7a505e1672095fa
S: 6c40e935f14e9687111e1385bb517547d0833588 172.38.0.16:6379
   slots: (0 slots) slave
   replicates a15f45f63d16170990514a8a33a4f901353a17b9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.   # cluster created successfully
```
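A quick sanity check on the three slot ranges printed above: they partition the full 16384-slot keyspace with no gap and no overlap (split 5461/5462/5461):

```shell
#!/bin/sh
# The three master ranges reported by redis-cli --cluster create:
# 0-5460, 5461-10922, 10923-16383
r1=$(( 5460 - 0 + 1 ))        # 5461 slots
r2=$(( 10922 - 5461 + 1 ))    # 5462 slots
r3=$(( 16383 - 10923 + 1 ))   # 5461 slots
echo $(( r1 + r2 + r3 ))      # 16384, the full slot space
```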
Connect to the cluster
```
$ redis-cli -c
# View cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:531
cluster_stats_messages_pong_sent:538
cluster_stats_messages_sent:1069
cluster_stats_messages_ping_received:533
cluster_stats_messages_pong_received:531
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1069
# View node info
127.0.0.1:6379> cluster nodes
2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 172.38.0.11:6379@16379 myself,master - 0 1650502408000 1 connected 0-5460
0437f1897dd89d222923715e13a6bc7bf6a3f974 172.38.0.13:6379@16379 master - 0 1650502409252 3 connected 10923-16383
a15f45f63d16170990514a8a33a4f901353a17b9 172.38.0.12:6379@16379 master - 0 1650502408048 2 connected 5461-10922
858f31594580b4066526e99adf2299c277622ef9 172.38.0.14:6379@16379 slave 0437f1897dd89d222923715e13a6bc7bf6a3f974 0 1650502409000 3 connected
c8e0588915cb115a2f33941e65e181130fe0087f 172.38.0.15:6379@16379 slave 2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 0 1650502409554 1 connected
6c40e935f14e9687111e1385bb517547d0833588 172.38.0.16:6379@16379 slave a15f45f63d16170990514a8a33a4f901353a17b9 0 1650502409554 2 connected
```
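The `cluster nodes` output is line-oriented (one node per line, flags in the third field), so it is easy to script against. A small sketch, using the listing above as saved sample data rather than a live cluster, counts masters and replicas:

```shell
#!/bin/sh
# Count master and slave entries in a saved `cluster nodes` listing
# (the sample is the output shown above; field 3 holds the node flags).
nodes='2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 172.38.0.11:6379@16379 myself,master - 0 1650502408000 1 connected 0-5460
0437f1897dd89d222923715e13a6bc7bf6a3f974 172.38.0.13:6379@16379 master - 0 1650502409252 3 connected 10923-16383
a15f45f63d16170990514a8a33a4f901353a17b9 172.38.0.12:6379@16379 master - 0 1650502408048 2 connected 5461-10922
858f31594580b4066526e99adf2299c277622ef9 172.38.0.14:6379@16379 slave 0437f1897dd89d222923715e13a6bc7bf6a3f974 0 1650502409000 3 connected
c8e0588915cb115a2f33941e65e181130fe0087f 172.38.0.15:6379@16379 slave 2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 0 1650502409554 1 connected
6c40e935f14e9687111e1385bb517547d0833588 172.38.0.16:6379@16379 slave a15f45f63d16170990514a8a33a4f901353a17b9 0 1650502409554 2 connected'
echo "$nodes" | awk '$3 ~ /master/ {m++} $3 ~ /slave/ {s++} END {print m" masters, "s" slaves"}'
# prints: 3 masters, 3 slaves
```

Against a live node you would feed it `redis-cli cluster nodes` instead of the saved string.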
Simulate a master going down
```
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
# Stop the container the key was just written to
[root@aliyun ~]# docker stop redis-3
redis-3
# Read the key again
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"
172.38.0.14:6379> cluster nodes
6c40e935f14e9687111e1385bb517547d0833588 172.38.0.16:6379@16379 slave a15f45f63d16170990514a8a33a4f901353a17b9 0 1650502837151 2 connected
858f31594580b4066526e99adf2299c277622ef9 172.38.0.14:6379@16379 myself,master - 0 1650502837000 7 connected 10923-16383
a15f45f63d16170990514a8a33a4f901353a17b9 172.38.0.12:6379@16379 master - 0 1650502838659 2 connected 5461-10922
0437f1897dd89d222923715e13a6bc7bf6a3f974 172.38.0.13:6379@16379 master,fail - 1650502623573 1650502621000 3 connected
2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 172.38.0.11:6379@16379 master - 0 1650502838157 1 connected 0-5460
c8e0588915cb115a2f33941e65e181130fe0087f 172.38.0.15:6379@16379 slave 2f3a8fadb9d347f36eaa36a5e7a505e1672095fa 0 1650502838157 1 connected
```
The replica 172.38.0.14 has been promoted to master (note its new config epoch 7) and now serves slots 10923-16383, while the stopped node 172.38.0.13 is flagged `master,fail`.
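The redirect to slot 15495 is not arbitrary: Redis Cluster maps every key to a slot via CRC16 (the XMODEM variant) modulo 16384. A minimal shell sketch of that computation, ignoring hash tags (`{...}` in keys), reproduces the slot for key `a`:

```shell
#!/bin/sh
# CRC16-XMODEM (poly 0x1021, init 0x0000), the checksum Redis Cluster
# uses for key-to-slot mapping: slot = CRC16(key) mod 16384.
crc16_xmodem() {
  s=$1 crc=0 i=0
  while [ "$i" -lt "${#s}" ]; do
    ch=$(printf '%s' "$s" | cut -c $(( i + 1 )))   # i-th character
    c=$(printf '%d' "'$ch")                        # its byte value
    crc=$(( (crc ^ (c << 8)) & 0xFFFF ))
    for b in 1 2 3 4 5 6 7 8; do
      if [ $(( crc & 0x8000 )) -ne 0 ]; then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
    i=$(( i + 1 ))
  done
  echo "$crc"
}

echo $(( $(crc16_xmodem a) % 16384 ))   # 15495, matching the redirect above
```

Because the slot depends only on the key, every client computes the same slot and `redis-cli -c` can follow the `MOVED`/redirect replies automatically.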

View the slot-to-node mapping
```
172.38.0.14:6379> cluster slots
1) 1) (integer) 0        # start of the slot range
   2) (integer) 5460     # end of the slot range
   3) 1) "172.38.0.11"   # master listed first
      2) (integer) 6379
      3) "2f3a8fadb9d347f36eaa36a5e7a505e1672095fa"
   4) 1) "172.38.0.15"   # replica(s) follow
      2) (integer) 6379
      3) "c8e0588915cb115a2f33941e65e181130fe0087f"
2) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "172.38.0.12"
      2) (integer) 6379
      3) "a15f45f63d16170990514a8a33a4f901353a17b9"
   4) 1) "172.38.0.16"
      2) (integer) 6379
      3) "6c40e935f14e9687111e1385bb517547d0833588"
3) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "172.38.0.14"
      2) (integer) 6379
      3) "858f31594580b4066526e99adf2299c277622ef9"
   4) 1) "172.38.0.13"
      2) (integer) 6379
      3) "0437f1897dd89d222923715e13a6bc7bf6a3f974"
```