Deploying a Redis Cluster on Kubernetes


Deploying a Redis cluster on Kubernetes is challenging because each Redis instance relies on a configuration file that keeps track of the other cluster instances and their roles. To handle this, we combine Kubernetes StatefulSets with PersistentVolumes.

Clone the deployment files

git clone https://github.com/llmgo/redis-sts.git

Create the StatefulSet resource

[root@node01 redis-sts]# cat redis-sts.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
data:
  update-node.sh: |
    #!/bin/sh
    REDIS_NODES="/data/nodes.conf"
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.5-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
      storageClassName: standard

$ kubectl apply -f redis-sts.yml
configmap/redis-cluster created
statefulset.apps/redis-cluster created

$ kubectl get pods -l app=redis-cluster
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   1/1     Running   0          53s
redis-cluster-1   1/1     Running   0          49s
redis-cluster-2   1/1     Running   0          46s
redis-cluster-3   1/1     Running   0          42s
redis-cluster-4   1/1     Running   0          38s
redis-cluster-5   1/1     Running   0          34s
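The update-node.sh wrapper exists because a restarted pod gets a new IP, while /data/nodes.conf still records the old one on the line flagged `myself`. The sed command rewrites that IP before handing control to redis-server. As an illustration, the same substitution in Python (the nodes.conf fragment below is fabricated for the example, with node IDs shortened):

```python
import re

def update_self_ip(nodes_conf: str, pod_ip: str) -> str:
    """Replace the first IPv4 address on lines containing 'myself' with the
    pod's current IP, mirroring the sed call in update-node.sh."""
    def fix_line(line: str) -> str:
        if "myself" in line:
            # sed has no /g flag, so only the first match per line is replaced
            return re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", pod_ip, line, count=1)
        return line
    return "\n".join(fix_line(line) for line in nodes_conf.splitlines())

# Illustrative nodes.conf fragment, not real output
conf = ("00721c43 10.244.9.19:6379@16379 myself,master - 0 0 1 connected 0-5460\n"
        "9c360539 10.244.6.10:6379@16379 master - 0 0 2 connected 5461-10922")
print(update_self_ip(conf, "10.244.9.99"))
```

Only the `myself` line changes; entries for the other nodes are left for the cluster gossip protocol to correct.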

Create the Service

[root@node01 redis-sts]# cat redis-svc.yml
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP
  clusterIP: 10.96.0.100
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster

$ kubectl apply -f redis-svc.yml
service/redis-cluster created

$ kubectl get svc redis-cluster
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)              AGE
redis-cluster   ClusterIP   10.96.0.100   <none>        6379/TCP,16379/TCP   35s

Initialize the Redis cluster

The next step is to form the Redis cluster. To do so, we run the following command and type yes to accept the configuration. The first three nodes become masters and the last three become replicas (slaves).

$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.2.11:6379 to 10.244.9.19:6379
Adding replica 10.244.9.20:6379 to 10.244.6.10:6379
Adding replica 10.244.8.15:6379 to 10.244.7.8:6379
M: 00721c43db194c8f2cacbafd01fd2be6a2fede28 10.244.9.19:6379
   slots:[0-5460] (5461 slots) master
M: 9c36053912dec8cb20a599bda202a654f241484f 10.244.6.10:6379
   slots:[5461-10922] (5462 slots) master
M: 2850f24ea6367de58fb50e632fc56fe4ba5ef016 10.244.7.8:6379
   slots:[10923-16383] (5461 slots) master
S: 554a58762e3dce23ca5a75886d0ccebd2d582502 10.244.8.15:6379
   replicates 2850f24ea6367de58fb50e632fc56fe4ba5ef016
S: 20028fd0b79045489824eda71fac9898f17af896 10.244.2.11:6379
   replicates 00721c43db194c8f2cacbafd01fd2be6a2fede28
S: 87e8987e314e4e5d4736e5818651abc1ed6ddcd9 10.244.9.20:6379
   replicates 9c36053912dec8cb20a599bda202a654f241484f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 10.244.9.19:6379)
M: 00721c43db194c8f2cacbafd01fd2be6a2fede28 10.244.9.19:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 9c36053912dec8cb20a599bda202a654f241484f 10.244.6.10:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 87e8987e314e4e5d4736e5818651abc1ed6ddcd9 10.244.9.20:6379
   slots: (0 slots) slave
   replicates 9c36053912dec8cb20a599bda202a654f241484f
S: 554a58762e3dce23ca5a75886d0ccebd2d582502 10.244.8.15:6379
   slots: (0 slots) slave
   replicates 2850f24ea6367de58fb50e632fc56fe4ba5ef016
S: 20028fd0b79045489824eda71fac9898f17af896 10.244.2.11:6379
   slots: (0 slots) slave
   replicates 00721c43db194c8f2cacbafd01fd2be6a2fede28
M: 2850f24ea6367de58fb50e632fc56fe4ba5ef016 10.244.7.8:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
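The slot ranges assigned above (0-5460, 5461-10922, 10923-16383) come from Redis Cluster's fixed key-to-slot mapping: slot = CRC16(key) mod 16384, using the CRC-16/XMODEM checksum, with anything inside a non-empty {...} hash tag hashed on its own. A self-contained Python sketch of that mapping:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021, initial value 0), the checksum
    Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster hash slots. If the key contains
    a non-empty {...} hash tag, only the tag is hashed, which lets related
    keys be forced onto the same master."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # ignore empty tags like "{}"
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, hence on the same master
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```

Whichever of the three masters owns the computed slot serves the key; the cluster redirects clients with MOVED replies when they ask the wrong node.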

Verify the cluster

[root@node01 redis-sts]# kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:16
cluster_stats_messages_pong_sent:22
cluster_stats_messages_sent:38
cluster_stats_messages_ping_received:17
cluster_stats_messages_pong_received:16
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:38

[root@node01 redis-sts]# for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role; echo; done
redis-cluster-0
master
14
10.244.2.11
6379
14

redis-cluster-1
master
28
10.244.9.20
6379
28

redis-cluster-2
master
28
10.244.8.15
6379
28

redis-cluster-3
slave
10.244.7.8
6379
connected
28

redis-cluster-4
slave
10.244.9.19
6379
connected
14

redis-cluster-5
slave
10.244.6.10
6379
connected
28

Test the cluster

We want to exercise the cluster and then simulate a node failure. For the former, we deploy a simple Python application; for the latter, we delete a node and observe how the cluster behaves.

Deploy the hit-counter application

We deploy a simple application into the cluster and put a load balancer in front of it. The application increments a counter, stores the value in the Redis cluster, and returns the counter value as the HTTP response.
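The application's source is not shown in this post, so the following is a hypothetical sketch of the counter logic rather than the repo's actual code. In the cluster it would talk to the redis-cluster service through a cluster-aware client (for example redis-py's RedisCluster); here the client is injected so the handler can be exercised with an in-memory stub:

```python
class HitCounterApp:
    """Hypothetical sketch of the hit-counter handler (not the repo's code)."""
    def __init__(self, redis_client, key="hits"):
        self.redis = redis_client
        self.key = key

    def handle_request(self) -> str:
        # INCR is atomic; with appendonly yes and one replica per master,
        # the counter survives individual pod deletions.
        count = self.redis.incr(self.key)
        return f"I have been hit {count} times since deployment."

class InMemoryRedis:
    """Tiny stand-in for a real cluster client, supporting only INCR."""
    def __init__(self):
        self.data = {}

    def incr(self, key):
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

app = HitCounterApp(InMemoryRedis())
print(app.handle_request())  # I have been hit 1 times since deployment.
```

Because all writes go through a single atomic INCR on one key, the app needs no locking of its own; durability is delegated to the Redis cluster's AOF and replication settings.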

$ kubectl apply -f app-deployment-service.yml
service/hit-counter-lb created
deployment.apps/hit-counter-app created

If we keep loading the page during this process, the counter keeps increasing, and even after deleting pods we see no data loss.

$ curl `kubectl get svc hit-counter-lb -o json|jq -r .spec.clusterIP`
I have been hit 20 times since deployment.
$ curl `kubectl get svc hit-counter-lb -o json|jq -r .spec.clusterIP`
I have been hit 21 times since deployment.
$ curl `kubectl get svc hit-counter-lb -o json|jq -r .spec.clusterIP`
I have been hit 22 times since deployment.
$ kubectl delete pods redis-cluster-0
pod "redis-cluster-0" deleted
$ kubectl delete pods redis-cluster-1
pod "redis-cluster-1" deleted
$ curl `kubectl get svc hit-counter-lb -o json|jq -r .spec.clusterIP`
I have been hit 23 times since deployment.


Author: 带着泥土
Link: https://www.cnblogs.com/obitoma/p/14547336.html
License: unless otherwise stated, articles on this blog are released under the BY-NC-SA license. Please credit the source when republishing.