Creating a Cluster-mode Redis cluster with docker-compose


Redis version: 5.0.6

Cluster mode is the real clustering solution. Sentinel mode solved the problem that master-slave replication cannot recover from failures automatically, but it is still hard to scale out, and storage capacity as well as read/write throughput remain limited to a single machine. Moreover, every node holds a full copy of the data, so each Redis instance stores a redundant replica, which wastes a great deal of memory.

Cluster mode stores Redis data in a distributed way by sharding it: each Redis node stores different content, and nodes can be scaled in (taken offline) or scaled out (brought online) while the cluster is running.

Cluster mode delivers real high availability and high performance, but it also makes the system considerably more complex. Let's look at how the cluster works.

1. Data Partitioning

The partitioning scheme is easy to understand: Redis Cluster uses virtual slot partitioning, dividing the key space into 16384 slots (0-16383).

For example, with three masters the slots in the range 0-16383 might be split into three parts, say (0-5000), (5001-11000) and (11001-16383), giving each of the three cache nodes its own slot range.

When a client request comes in, the key's slot is computed first: the key is run through CRC16 (one of the most widely used error-detecting codes in data communication) and the result is taken modulo 16384, i.e. CRC16(key) % 16384. The request is then routed to the node holding that slot to read or write the data, which is how data access and updates are implemented.

The reason for slot-based sharding is to split one large data set into pieces so that no single Redis instance holds too much data and performance does not suffer.
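
To make the slot rule concrete, here is a small illustrative sketch (not part of the original article) that computes a key's slot on the client side. It assumes Jedis 3.x is on the classpath and uses its JedisClusterCRC16 helper, which implements the same CRC16-mod-16384 calculation; in Jedis 2.x the class lives in redis.clients.util instead.

import redis.clients.jedis.util.JedisClusterCRC16;

public class SlotDemo {
    public static void main(String[] args) {
        // Same rule the cluster applies: CRC16(key) % 16384
        String key = "name";
        int slot = JedisClusterCRC16.getSlot(key);
        System.out.println("key '" + key + "' maps to slot " + slot);
    }
}

For the key name used later in this article, this prints 5798, matching the slot shown in the redirect in section 6.5.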

2. Node Communication

Data is sharded across the nodes, so how do the nodes communicate with each other? The commands involved are the same ones used in sentinel mode.

First, a newly started node uses the Gossip protocol to send a MEET message to the existing members, announcing that it is a new member of the cluster.

When an existing member receives the MEET message and is not in a failed state, it replies with a PONG message to welcome the new node. After that first MEET, the nodes exchange periodic PING messages, which is how inter-node communication is carried out.

During this communication a TCP channel is opened to every peer node, and a scheduled task then keeps sending PING messages to the other nodes. The goal is to learn about the metadata stored on each node and its health, so that problems can be spotted in time.
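
For illustration only (not something the article itself runs), the MEET handshake and the resulting gossip view can also be observed from a client. The sketch below assumes a plain Jedis connection to one of the nodes created later; 172.19.0.16 is a hypothetical new node.

import redis.clients.jedis.Jedis;

public class ClusterMeetDemo {
    public static void main(String[] args) {
        // Connect to an existing cluster node
        try (Jedis jedis = new Jedis("172.19.0.10", 6379)) {
            // Ask it to MEET a (hypothetical) node that is joining the cluster
            jedis.clusterMeet("172.19.0.16", 6379);
            // CLUSTER NODES prints the gossip view: node ids, roles, slot ranges, link state
            System.out.println(jedis.clusterNodes());
        }
    }
}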

3. Data Requests

Regarding the slot information mentioned above: internally, each Redis node keeps a bitmap of the slots it owns, declared as unsigned char myslots[CLUSTER_SLOTS/8].
Since it is a bit array it only stores 0s and 1s: the array simply indicates whether this node holds the data for each slot, with 1 meaning the slot's data is present and 0 meaning it is not. Lookups against it are therefore extremely fast, similar in spirit to a Bloom filter's bit-level storage.

For example, suppose cluster node 1 is responsible for the slot range 0-5000, but at the moment only slots 0, 1 and 2 actually hold data and the other slots are still empty; then the bits for 0, 1 and 2 are set to 1.
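
As an illustrative sketch of that bit test (the real implementation is C code in Redis's cluster.c; this Java version only mirrors the idea of one bit per slot):

public class SlotBitmapDemo {
    static final int CLUSTER_SLOTS = 16384;

    // One bit per slot: a set bit means this node holds the slot's data
    static boolean hasSlot(byte[] myslots, int slot) {
        return (myslots[slot / 8] & (1 << (slot % 8))) != 0;
    }

    static void setSlot(byte[] myslots, int slot) {
        myslots[slot / 8] |= (1 << (slot % 8));
    }

    public static void main(String[] args) {
        byte[] myslots = new byte[CLUSTER_SLOTS / 8]; // 2048 bytes, all bits initially 0
        setSlot(myslots, 0);
        setSlot(myslots, 1);
        setSlot(myslots, 2);
        System.out.println(hasSlot(myslots, 2));    // true
        System.out.println(hasSlot(myslots, 5000)); // false
    }
}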

In addition, each Redis node maintains a clusterNode array, also of size 16384, which stores the IP and port of the node responsible for each slot. In this way every node holds the metadata of the other nodes and can quickly locate the one it needs.

When a new node joins or a node is removed, the nodes communicate via PING messages and promptly update the metadata in their own clusterNode arrays, so incoming requests can still be routed to the right node.

There are two further cases in which a request arrives and finds that the data has been migrated, for example when a newly joined node causes data to move from an old cache node to the new one.

If the request reaches the old node after the data has already been migrated to the new node, then, since every node holds clusterNode information with the new owner's IP and port, the old node sends the client a MOVED redirection. It means the data has moved to a new node: access that node's IP and port and you will get the data. The client can then fetch the data again from there.

What if the migration is still in progress? In that case the old node sends the client an ASK redirection containing the IP and port of the migration target node, so the data can still be retrieved.
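
From the client's point of view this looks roughly like the sketch below (assuming Jedis; the key and addresses are illustrative). A plain Jedis connection surfaces the redirections as exceptions, whereas JedisCluster, used later in section 6.6.2, follows them automatically.

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisAskDataException;
import redis.clients.jedis.exceptions.JedisMovedDataException;

public class RedirectDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("172.19.0.10", 6379)) {
            try {
                jedis.set("name", "zhangsan");
            } catch (JedisMovedDataException moved) {
                // The slot now permanently belongs to another node: retry the command there
                HostAndPort target = moved.getTargetNode();
                System.out.println("MOVED to " + target + " for slot " + moved.getSlot());
            } catch (JedisAskDataException ask) {
                // The slot is mid-migration: send ASKING plus the command to the target node
                System.out.println("ASK redirect to " + ask.getTargetNode());
            }
        }
    }
}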

4. Scaling Out and Scaling In

Scaling out and scaling in simply means bringing nodes online and taking them offline, for example when a node fails and the cluster recovers from the failure automatically (scaling in). Whenever the cluster shrinks or grows, the slot ranges each node is responsible for are recalculated and, following the virtual slot algorithm, the affected data is moved to the corresponding nodes.
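
For reference, redis-cli ships with subcommands for exactly these operations. The article does not walk through them, so the commands below are only an illustrative outline (172.19.0.16 is a hypothetical new node, and <node-id> is the id printed by cluster nodes):

# Scale out: add a new node to the cluster, then move some slots onto it
redis-cli --cluster add-node 172.19.0.16:6379 172.19.0.10:6379
redis-cli --cluster reshard 172.19.0.10:6379

# Scale in: move the node's slots away first, then remove it by node id
redis-cli --cluster reshard 172.19.0.10:6379
redis-cli --cluster del-node 172.19.0.10:6379 <node-id>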

Also, as described earlier, a newly added node first sends a MEET message; and after a failure, the leader election, the re-election of the master, and the promotion of a slave to master all work the same way as in sentinel mode.

5. Pros and Cons

5.1 Pros

Cluster mode is a decentralized architecture: the data is sharded into slots and every node stores different content. Routing can locate the node responsible for any given slot, which makes queries very efficient.

Cluster mode also adds the ability to scale both horizontally and vertically, supporting node addition and removal. It is effectively an upgraded version of sentinel mode, so it retains all of sentinel's advantages.

5.2 Cons

The biggest problem with any cache is data consistency. When weighing consistency against performance and business requirements, most systems settle for eventual consistency rather than strong consistency.

Cluster mode also sharply increases the number of nodes: a cluster needs at least six machines, because leader election relies on a majority vote, and this adds architectural complexity.

Slaves act only as cold standbys and do not relieve the read pressure on the masters.

6. Hands-on

6.1 Directory layout


  • One docker-compose.yaml file
  • Six per-node redis.conf files and their corresponding data directories

6.2 The docker-compose.yaml file

version: '3'

services:
  # node 1
  redis_cluster1:
    image: redis:latest
    container_name: redis_cluster1
    restart: always
    ports:
      - 6379:6379
    networks:
      mynetwork:
        ipv4_address: 172.19.0.10
    volumes:
      - ./redis_cluster1.conf:/usr/local/etc/redis/redis.conf:rw
      - ./data1:/data:rw
    command:
      /bin/bash -c "redis-server /usr/local/etc/redis/redis.conf "
  # node 2
  redis_cluster2:
    image: redis:latest
    container_name: redis_cluster2
    restart: always
    ports:
      - 6380:6379
    networks:
      mynetwork:
        ipv4_address: 172.19.0.11
    volumes:
      - ./redis_cluster2.conf:/usr/local/etc/redis/redis.conf:rw
      - ./data2:/data:rw
    command:
      /bin/bash -c "redis-server /usr/local/etc/redis/redis.conf "
  # node 3
  redis_cluster3:
    image: redis:latest
    container_name: redis_cluster3
    restart: always
    ports:
      - 6381:6379
    networks:
      mynetwork:
        ipv4_address: 172.19.0.12
    volumes:
      - ./redis_cluster3.conf:/usr/local/etc/redis/redis.conf:rw
      - ./data3:/data:rw
    command:
      /bin/bash -c "redis-server /usr/local/etc/redis/redis.conf " 
  # node 4
  redis_cluster4:
    image: redis:latest
    container_name: redis_cluster4
    restart: always
    ports:
      - 6382:6379
    networks:
      mynetwork:
        ipv4_address: 172.19.0.13
    volumes:
      - ./redis_cluster4.conf:/usr/local/etc/redis/redis.conf:rw
      - ./data4:/data:rw
    command:
      /bin/bash -c "redis-server /usr/local/etc/redis/redis.conf " 
  # node 5
  redis_cluster5:
    image: redis:latest
    container_name: redis_cluster5
    restart: always
    ports:
      - 6383:6379
    networks:
      mynetwork:
        ipv4_address: 172.19.0.14
    volumes:
      - ./redis_cluster5.conf:/usr/local/etc/redis/redis.conf:rw
      - ./data5:/data:rw
    command:
      /bin/bash -c "redis-server /usr/local/etc/redis/redis.conf " 
  # node 6
  redis_cluster6:
    image: redis:latest
    container_name: redis_cluster6
    restart: always
    ports:
      - 6384:6379
    networks:
      mynetwork:
        ipv4_address: 172.19.0.15
    volumes:
      - ./redis_cluster6.conf:/usr/local/etc/redis/redis.conf:rw
      - ./data6:/data:rw
    command:
      /bin/bash -c "redis-server /usr/local/etc/redis/redis.conf " 
networks:
  mynetwork:
    external: true
  • The six nodes' IP addresses are 172.19.0.10, 172.19.0.11, 172.19.0.12, 172.19.0.13, 172.19.0.14 and 172.19.0.15
  • All six nodes listen on port 6379 inside their containers
  • The ports mapped to the host are 6379, 6380, 6381, 6382, 6383 and 6384
  • Each service mounts its own redis.conf file and data directory
  • The mynetwork network is declared as external, so it has to exist before the stack is started (see the command below)
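
A minimal way to create the external network (the subnet is an assumption chosen to cover the 172.19.0.x addresses above):

docker network create --driver bridge --subnet 172.19.0.0/16 mynetwork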

6.3 The redis.conf file

port 6379
bind 0.0.0.0
protected-mode no
timeout 0
save 900 1 # run bgsave (RDB persistence) if at least one write occurred within 900 seconds
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /data
appendonly yes
appendfsync everysec
# requirepass 12345678
# enable cluster mode
cluster-enabled yes 
# if a password is set, the master password must also be specified
# masterauth 12345678 
# cluster node timeout (ms)
cluster-node-timeout 10000 
# cluster config file, generated automatically on first startup
cluster-config-file nodes_6061.conf 
  • Note: do not set a password

6.4 Startup

6.4.1 Start each node's service

In the directory containing docker-compose.yaml, run docker-compose up to start all of the node services.

~/Documents/workspace/docker_mapping_volume/redis_cluster docker-compose up
Creating redis_cluster2 ... done
Creating redis_cluster4 ... done
Creating redis_cluster6 ... done
Creating redis_cluster5 ... done
Creating redis_cluster1 ... done
Creating redis_cluster3 ... done
Attaching to redis_cluster6, redis_cluster1, redis_cluster4, redis_cluster3, redis_cluster2, redis_cluster5
redis_cluster1    | 1:C 08 Sep 2020 14:02:09.773 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cluster2    | 1:C 08 Sep 2020 14:02:10.230 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cluster2    | 1:C 08 Sep 2020 14:02:10.230 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cluster2    | 1:C 08 Sep 2020 14:02:10.230 # Configuration loaded
redis_cluster3    | 1:C 08 Sep 2020 14:02:10.109 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cluster3    | 1:C 08 Sep 2020 14:02:10.110 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cluster3    | 1:C 08 Sep 2020 14:02:10.110 # Configuration loaded
redis_cluster1    | 1:C 08 Sep 2020 14:02:09.778 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cluster4    | 1:C 08 Sep 2020 14:02:10.007 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cluster4    | 1:C 08 Sep 2020 14:02:10.007 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cluster4    | 1:C 08 Sep 2020 14:02:10.007 # Configuration loaded
redis_cluster3    | 1:M 08 Sep 2020 14:02:10.129 * No cluster configuration found, I'm 6c6613503ce3076684afa34fb5891be8d2ba4e94
redis_cluster2    | 1:M 08 Sep 2020 14:02:10.276 * No cluster configuration found, I'm 3da7f792d1878a2434a6b6bb543d7b12837d1509
redis_cluster2    | 1:M 08 Sep 2020 14:02:10.292 * Running mode=cluster, port=6379.
redis_cluster2    | 1:M 08 Sep 2020 14:02:10.292 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_cluster2    | 1:M 08 Sep 2020 14:02:10.292 # Server initialized
redis_cluster2    | 1:M 08 Sep 2020 14:02:10.292 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cluster5    | 1:C 08 Sep 2020 14:02:10.250 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cluster5    | 1:C 08 Sep 2020 14:02:10.250 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cluster5    | 1:C 08 Sep 2020 14:02:10.250 # Configuration loaded
redis_cluster1    | 1:C 08 Sep 2020 14:02:09.788 # Configuration loaded
redis_cluster4    | 1:M 08 Sep 2020 14:02:10.031 * No cluster configuration found, I'm 54af7ff69f5c432bd35314ed5387be1e98a2a093
redis_cluster4    | 1:M 08 Sep 2020 14:02:10.037 * Running mode=cluster, port=6379.
redis_cluster4    | 1:M 08 Sep 2020 14:02:10.037 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_cluster4    | 1:M 08 Sep 2020 14:02:10.037 # Server initialized
redis_cluster4    | 1:M 08 Sep 2020 14:02:10.037 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cluster3    | 1:M 08 Sep 2020 14:02:10.135 * Running mode=cluster, port=6379.
redis_cluster3    | 1:M 08 Sep 2020 14:02:10.135 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_cluster3    | 1:M 08 Sep 2020 14:02:10.135 # Server initialized
redis_cluster3    | 1:M 08 Sep 2020 14:02:10.135 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cluster4    | 1:M 08 Sep 2020 14:02:10.048 * Ready to accept connections
redis_cluster1    | 1:M 08 Sep 2020 14:02:09.840 * No cluster configuration found, I'm 2565c2f0f9fbeb06ed335115e29a56802c9b07e8
redis_cluster3    | 1:M 08 Sep 2020 14:02:10.141 * Ready to accept connections
redis_cluster6    | 1:C 08 Sep 2020 14:02:09.773 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_cluster6    | 1:C 08 Sep 2020 14:02:09.774 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_cluster6    | 1:C 08 Sep 2020 14:02:09.774 # Configuration loaded
redis_cluster1    | 1:M 08 Sep 2020 14:02:09.848 * Running mode=cluster, port=6379.
redis_cluster5    | 1:M 08 Sep 2020 14:02:10.288 * No cluster configuration found, I'm 23db4d75289b983095061b94472e21a3f99cacf2
redis_cluster6    | 1:M 08 Sep 2020 14:02:09.810 * No cluster configuration found, I'm de7e2df63cab77ebd431e425f058d380aa44cee9
redis_cluster6    | 1:M 08 Sep 2020 14:02:09.858 * Running mode=cluster, port=6379.
redis_cluster6    | 1:M 08 Sep 2020 14:02:09.858 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_cluster6    | 1:M 08 Sep 2020 14:02:09.858 # Server initialized
redis_cluster6    | 1:M 08 Sep 2020 14:02:09.859 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cluster1    | 1:M 08 Sep 2020 14:02:09.850 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_cluster6    | 1:M 08 Sep 2020 14:02:09.862 * Ready to accept connections
redis_cluster1    | 1:M 08 Sep 2020 14:02:09.850 # Server initialized
redis_cluster1    | 1:M 08 Sep 2020 14:02:09.851 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cluster1    | 1:M 08 Sep 2020 14:02:09.855 * Ready to accept connections
redis_cluster5    | 1:M 08 Sep 2020 14:02:10.346 * Running mode=cluster, port=6379.
redis_cluster5    | 1:M 08 Sep 2020 14:02:10.346 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_cluster5    | 1:M 08 Sep 2020 14:02:10.346 # Server initialized
redis_cluster5    | 1:M 08 Sep 2020 14:02:10.346 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_cluster2    | 1:M 08 Sep 2020 14:02:10.361 * Ready to accept connections
redis_cluster5    | 1:M 08 Sep 2020 14:02:10.379 * Ready to accept connections

6.4.2 Create the cluster

Since Redis 5.0, clusters are created directly with redis-cli. The command to create the cluster is as follows:

redis-cli --cluster create --cluster-replicas 1 172.19.0.10:6379 172.19.0.11:6379 172.19.0.12:6379 172.19.0.13:6379 172.19.0.14:6379 172.19.0.15:6379

Here --cluster-replicas 1 indicates how many slave nodes follow each master.

~ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2c2a8fdb47f7        redis:latest        "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:6384->6379/tcp   redis_cluster6
531d8789b9cf        redis:latest        "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:6381->6379/tcp   redis_cluster3
7e4394ac21bd        redis:latest        "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:6383->6379/tcp   redis_cluster5
620dfd7fca31        redis:latest        "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:6380->6379/tcp   redis_cluster2
14eda4f4ad53        redis:latest        "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:6379->6379/tcp   redis_cluster1
f5b5db4a777b        redis:latest        "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:6382->6379/tcp   redis_cluster4
~ docker exec -it 2c2a8fdb47f7 redis-cli --cluster create --cluster-replicas 1 172.19.0.10:6379 172.19.0.11:6379 172.19.0.12:6379 172.19.0.13:6379 172.19.0.14:6379 172.19.0.15:6379
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.19.0.14:6379 to 172.19.0.10:6379
Adding replica 172.19.0.15:6379 to 172.19.0.11:6379
Adding replica 172.19.0.13:6379 to 172.19.0.12:6379
M: 2565c2f0f9fbeb06ed335115e29a56802c9b07e8 172.19.0.10:6379
   slots:[0-5460] (5461 slots) master
M: 3da7f792d1878a2434a6b6bb543d7b12837d1509 172.19.0.11:6379
   slots:[5461-10922] (5462 slots) master
M: 6c6613503ce3076684afa34fb5891be8d2ba4e94 172.19.0.12:6379
   slots:[10923-16383] (5461 slots) master
S: 54af7ff69f5c432bd35314ed5387be1e98a2a093 172.19.0.13:6379
   replicates 6c6613503ce3076684afa34fb5891be8d2ba4e94
S: 23db4d75289b983095061b94472e21a3f99cacf2 172.19.0.14:6379
   replicates 2565c2f0f9fbeb06ed335115e29a56802c9b07e8
S: de7e2df63cab77ebd431e425f058d380aa44cee9 172.19.0.15:6379
   replicates 3da7f792d1878a2434a6b6bb543d7b12837d1509
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 172.19.0.10:6379)
M: 2565c2f0f9fbeb06ed335115e29a56802c9b07e8 172.19.0.10:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 6c6613503ce3076684afa34fb5891be8d2ba4e94 172.19.0.12:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 54af7ff69f5c432bd35314ed5387be1e98a2a093 172.19.0.13:6379
   slots: (0 slots) slave
   replicates 6c6613503ce3076684afa34fb5891be8d2ba4e94
S: de7e2df63cab77ebd431e425f058d380aa44cee9 172.19.0.15:6379
   slots: (0 slots) slave
   replicates 3da7f792d1878a2434a6b6bb543d7b12837d1509
M: 3da7f792d1878a2434a6b6bb543d7b12837d1509 172.19.0.11:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 23db4d75289b983095061b94472e21a3f99cacf2 172.19.0.14:6379
   slots: (0 slots) slave
   replicates 2565c2f0f9fbeb06ed335115e29a56802c9b07e8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Because the nodes run inside Docker containers, we create the cluster with docker exec -it 2c2a8fdb47f7 redis-cli --cluster create --cluster-replicas 1 172.19.0.10:6379 172.19.0.11:6379 172.19.0.12:6379 172.19.0.13:6379 172.19.0.14:6379 172.19.0.15:6379.

During the process you are prompted with Can I set the above configuration? (type 'yes' to accept); type yes to continue and the cluster assigns everything automatically.

To create a cluster that requires a password, just add -a password:

redis-cli --cluster create --cluster-replicas 1 172.19.0.10:6379 172.19.0.11:6379 172.19.0.12:6379 172.19.0.13:6379 172.19.0.14:6379 172.19.0.15:6379 -a password

6.5 Verify that the cluster was created successfully

Use redis-cli -c to connect to any one of the nodes.

~ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2c2a8fdb47f7        redis:latest        "docker-entrypoint.s…"   7 minutes ago       Up 7 minutes        0.0.0.0:6384->6379/tcp   redis_cluster6
531d8789b9cf        redis:latest        "docker-entrypoint.s…"   7 minutes ago       Up 7 minutes        0.0.0.0:6381->6379/tcp   redis_cluster3
7e4394ac21bd        redis:latest        "docker-entrypoint.s…"   7 minutes ago       Up 7 minutes        0.0.0.0:6383->6379/tcp   redis_cluster5
620dfd7fca31        redis:latest        "docker-entrypoint.s…"   7 minutes ago       Up 7 minutes        0.0.0.0:6380->6379/tcp   redis_cluster2
14eda4f4ad53        redis:latest        "docker-entrypoint.s…"   7 minutes ago       Up 7 minutes        0.0.0.0:6379->6379/tcp   redis_cluster1
f5b5db4a777b        redis:latest        "docker-entrypoint.s…"   7 minutes ago       Up 7 minutes        0.0.0.0:6382->6379/tcp   redis_cluster4
~ docker exec -it 2c2a8fdb47f7 redis-cli -c
127.0.0.1:6379> 
127.0.0.1:6379> cluster nodes
6c6613503ce3076684afa34fb5891be8d2ba4e94 172.19.0.12:6379@16379 master - 0 1599574177026 3 connected 10923-16383
23db4d75289b983095061b94472e21a3f99cacf2 172.19.0.14:6379@16379 slave 2565c2f0f9fbeb06ed335115e29a56802c9b07e8 0 1599574176008 5 connected
de7e2df63cab77ebd431e425f058d380aa44cee9 172.19.0.15:6379@16379 myself,slave 3da7f792d1878a2434a6b6bb543d7b12837d1509 0 1599574173000 6 connected
2565c2f0f9fbeb06ed335115e29a56802c9b07e8 172.19.0.10:6379@16379 master - 0 1599574176000 1 connected 0-5460
54af7ff69f5c432bd35314ed5387be1e98a2a093 172.19.0.13:6379@16379 slave 6c6613503ce3076684afa34fb5891be8d2ba4e94 0 1599574175000 4 connected
3da7f792d1878a2434a6b6bb543d7b12837d1509 172.19.0.11:6379@16379 master - 0 1599574175000 2 connected 5461-10922
127.0.0.1:6379> set name zhangsan
-> Redirected to slot [5798] located at 172.19.0.11:6379
OK
172.19.0.11:6379> 
  • docker exec -it 2c2a8fdb47f7 redis-cli -c connects to one of the nodes
  • cluster nodes shows how masters and slaves are paired
  • set name zhangsan stores a value; because we connected with the -c flag, the request is automatically redirected to the node at 172.19.0.11 (an additional cluster info check is shown below)
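
As an extra check that is not shown in the capture above, cluster info can be run from the same prompt; on a healthy cluster its output includes cluster_state:ok and cluster_slots_assigned:16384.

127.0.0.1:6379> cluster info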

6.6 Connecting from an application

Because this cluster was created with Docker, you may run into problems connecting to it from an application. See here for reference.
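
One common cause (my note, not from the article's original reference) is that the nodes advertise their container-internal 172.19.0.x addresses, which clients outside Docker cannot reach. Redis 4.0+ offers cluster-announce directives in redis.conf so each node can advertise a reachable address instead; a hedged sketch for node 1, assuming the host's IP is 192.168.1.100 and the cluster bus port 16379 is also published:

cluster-announce-ip 192.168.1.100
cluster-announce-port 6379
cluster-announce-bus-port 16379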

6.6.1 Connecting to the Redis cluster with RedisTemplate

Using RedisTemplate is very straightforward: add the configuration below and then use Redis in the project just as you would a standalone Redis instance.

spring:
  redis:
    cluster:
      nodes: 127.0.0.1:6379,127.0.0.1:6380,127.0.0.1:6381,127.0.0.1:6382,127.0.0.1:6383,127.0.0.1:6384
    timeout: 3000
package com.lucky.spring;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.redis.core.RedisTemplate;

@SpringBootApplication
public class Application implements CommandLineRunner {


    @Autowired
    RedisTemplate redisTemplate;


    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        
        opeByRedisTemplate();
    }

    private void opeByRedisTemplate() {
        redisTemplate.opsForValue().set("name", "张三");
        String name = (String) redisTemplate.opsForValue().get("name");
        System.out.println(name);
    }
}

6.6.2 Connecting to the Redis cluster with Jedis

spring:
  redis:
    cluster:
      nodes: 127.0.0.1:6379,127.0.0.1:6380,127.0.0.1:6381,127.0.0.1:6382,127.0.0.1:6383,127.0.0.1:6384
    timeout: 3000
    jedis:
      pool:
        # maximum number of active connections
        max-active: 1000
        # maximum number of connections kept idle
        max-idle: 100
        min-idle: 0
        max-wait: 3000
package com.lucky.spring.config;

import lombok.Data;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

/**
 * Created by zhangdd on 2020/9/9
 */
@Component
@Data
public class RedisProperties {

    @Value("${spring.redis.cluster.nodes}")
    private String[] nodes;

    @Value("${spring.redis.timeout}")
    private int connectionTimeout;
    
}

package com.lucky.spring.config;

import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

import java.util.HashSet;
import java.util.Set;

/**
 * Created by zhangdd on 2020/9/9
 */
@Configuration
public class JedisClusterFactory {

    @Autowired
    private RedisProperties redisProperties;


    @Bean
    public JedisCluster jedisCluster() {
        Set<HostAndPort> nodes = new HashSet<>();
        for (String ipPort : redisProperties.getNodes()) {
            String[] ipPortPair = ipPort.split(":");
            nodes.add(new HostAndPort(ipPortPair[0].trim(),
                    Integer.valueOf(ipPortPair[1].trim())));
        }
        // Note: the Jedis pool settings from application.yml are not applied here
        return new JedisCluster(nodes,
                redisProperties.getConnectionTimeout(), // connection timeout (ms)
                1000,                                   // socket timeout (ms)
                1,                                      // max redirection attempts
                new GenericObjectPoolConfig());         // see below for the password-taking overload
    }
}
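
If the cluster had been created with a password (the -a option from section 6.4.2), JedisCluster provides a constructor overload that also accepts the password. A hedged variant of the bean above (12345678 is the value commented out in redis.conf):

    @Bean
    public JedisCluster jedisCluster() {
        Set<HostAndPort> nodes = new HashSet<>();
        for (String ipPort : redisProperties.getNodes()) {
            String[] ipPortPair = ipPort.split(":");
            nodes.add(new HostAndPort(ipPortPair[0].trim(),
                    Integer.valueOf(ipPortPair[1].trim())));
        }
        return new JedisCluster(nodes,
                redisProperties.getConnectionTimeout(), // connection timeout (ms)
                1000,                                   // socket timeout (ms)
                1,                                      // max redirection attempts
                "12345678",                             // cluster password (requirepass/masterauth)
                new GenericObjectPoolConfig());
    }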

package com.lucky.spring;

import com.lucky.spring.config.RedisProperties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.redis.core.RedisTemplate;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

import java.util.HashSet;
import java.util.Set;

@SpringBootApplication
public class Application implements CommandLineRunner {


    @Autowired
    RedisTemplate redisTemplate;


    @Autowired
    private JedisCluster jedisCluster;


    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        //opeByRedisTemplate();
        opeByJedisCluster();
    }

    private void opeByRedisTemplate() {
        redisTemplate.opsForValue().set("name", "张三");
        String name = (String) redisTemplate.opsForValue().get("name");
        System.out.println(name);
    }

    private void opeByJedisCluster() {
        jedisCluster.set("name", "李四");
        System.out.println(jedisCluster.get("name"));
    }
}
