Redis Cluster

I. Shortcomings of Sentinel:

There is only one master and the slaves are read-only, so when the data set is large and the read/write load is heavy, all of the pressure lands on that single master. Data can also be lost during a master-slave failover, writes remain a single point, and horizontal scaling is not addressed.

II. Key concepts of Redis Cluster:

1. A Redis cluster always has exactly 16384 hash slots, no matter how many nodes it contains.

2. Every slot must be correctly assigned; if even a single slot is in a bad state, the whole cluster is unavailable.

3. The order of the slots on a node does not matter; what matters is how many slots the node holds.

4. The hash algorithm is sufficiently even and sufficiently random (see the example after this list).

5. Each slot has roughly the same probability of receiving data.

6. The cluster's high availability depends on master-slave replication.

7. The number of slots per node is allowed to differ within a 2% margin.

8. Cluster nodes communicate over the cluster bus port, which is the base port + 10000. It is opened automatically and is not set in the configuration file; in production, remember to open this port in the firewall.
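Once the cluster built below is up, a quick way to see which of the 16384 slots a key maps to is the CLUSTER KEYSLOT command (Redis hashes the key with CRC16 and takes the result modulo 16384). The key name user:1000 is only an arbitrary example:

#check which slot a key hashes to (example key)
redis-cli -h 10.0.0.101 -p 6380 cluster keyslot user:1000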

III. Cluster deployment

#Port planning

Master instances: 6380

Slave instances: 6381

Steps on 10.0.0.101:

#Distribute SSH keys

ssh-keygen 

 ssh-copy-id 10.0.0.102

 ssh-copy-id 10.0.0.103

#Create directories

mkdir -p /opt/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_{6380,6381}

#Generate the master instance's configuration file

cat >/opt/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.101
port 6380
daemonize yes
pidfile "/opt/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_6380/"
appendonly yes
appendfilename "redis.aof"
appendfsync everysec
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF

#Copy the master instance's config file for the slave instance and change the port number

cp /opt/redis_6380/conf/redis_6380.conf  /opt/redis_6381/conf/redis_6381.conf
sed -i 's#6380#6381#g' /opt/redis_6381/conf/redis_6381.conf

#Change ownership to the redis user

chown -R redis.redis /opt/redis_*
chown -R redis.redis /data/redis_*

#Generate the systemd unit file for the master instance

cat >/usr/lib/systemd/system/redis-master.service<<EOF
[Unit]
Description=Redis persistent key-value database
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/redis-server /opt/redis_6380/conf/redis_6380.conf --supervised systemd
ExecStop=/usr/local/bin/redis-cli -h $(ifconfig eth0|awk 'NR==2{print $2}') -p 6380 shutdown
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
EOF

#Copy the master instance's systemd unit file for the slave instance and change the port

cd /usr/lib/systemd/system/
cp redis-master.service redis-slave.service
sed -i 's#6380#6381#g' redis-slave.service

#Reload systemd and start the cluster nodes

systemctl daemon-reload
systemctl start redis-master.service
systemctl start redis-slave.service
ps -ef|grep redis
redis 2606 1 0 19:52 ? 00:00:53 /usr/local/bin/redis-server 127.0.0.1:6379
redis 2650 1 0 19:52 ? 00:01:17 /usr/local/bin/redis-sentinel 10.0.0.101:26379 [sentinel]
redis 3873 1 0 23:51 ? 00:00:00 /usr/local/bin/redis-server 10.0.0.101:6380 [cluster]
redis 3900 1 0 23:51 ? 00:00:00 /usr/local/bin/redis-server 10.0.0.101:6381 [cluster]
root 3905 1762 0 23:52 pts/0 00:00:00 grep --color=auto redis

#Send the prepared directories and unit files to the other two servers

rsync -avz /opt/redis_638* 10.0.0.102:/opt/

rsync -avz /opt/redis_638* 10.0.0.103:/opt/

rsync -avz /usr/lib/systemd/system/redis-*.service 10.0.0.102:/usr/lib/systemd/system/

rsync -avz /usr/lib/systemd/system/redis-*.service 10.0.0.103:/usr/lib/systemd/system/

 

Steps on 10.0.0.102:

find /opt/redis_638* -type f -name "*.conf"|xargs sed -i "/bind/s#101#102#g"

cd /usr/lib/systemd/system/

sed -i 's#101#102#g' redis-*.service

mkdir -p /data/redis_{6380,6381}

chown -R redis:redis /opt/redis_*

chown -R redis:redis /data/redis_*

systemctl daemon-reload

systemctl start redis-master

systemctl start redis-slave

ps -ef|grep redis

 

10.0.0.103 上操作:

find /opt/redis_638* -type f -name "*.conf"|xargs sed -i "/bind/s#101#103#g"

cd /usr/lib/systemd/system/

sed -i 's#101#103#g' redis-*.service

mkdir -p /data/redis_{6380,6381}

chown -R redis:redis /opt/redis_*

chown -R redis:redis /data/redis_*

systemctl daemon-reload

systemctl start redis-master

systemctl start redis-slave

ps -ef|grep redis

 

IV. Manually discover the cluster nodes (a meet issued on any one node is learned by all the other nodes automatically):

 redis-cli -h 10.0.0.101 -p 6380 cluster meet 10.0.0.102 6380

 redis-cli -h 10.0.0.101 -p 6380 cluster meet 10.0.0.103 6380

 redis-cli -h 10.0.0.101 -p 6380 cluster meet 10.0.0.102 6381

 redis-cli -h 10.0.0.101 -p 6380 cluster meet 10.0.0.103 6381

 redis-cli -h 10.0.0.101 -p 6380 cluster meet 10.0.0.101 6381

 redis-cli -h 10.0.0.101 -p 6380 cluster nodes
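Because the meet messages spread through the cluster gossip, the same node list should be visible from any other node; for example, querying 10.0.0.102 should return the same six nodes:

redis-cli -h 10.0.0.102 -p 6380 cluster nodes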


V. Manually assign the slots

#Slot planning

Host:port               Slots      Range

10.0.0.101:6380         5461       0-5460

10.0.0.102:6380         5461       5461-10921

10.0.0.103:6380         5462       10922-16383

#Assign the slots

 redis-cli -h 10.0.0.101 -p 6380 cluster addslots {0..5460}

 redis-cli -h 10.0.0.102 -p 6380 cluster addslots {5461..10921}

 redis-cli -h 10.0.0.103 -p 6380 cluster addslots {10922..16383}

#Check the cluster state

 redis-cli -h 10.0.0.101 -p 6380 cluster nodes

 redis-cli -h 10.0.0.101 -p 6380 cluster info
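In the cluster info output the fields to look for are cluster_state:ok and cluster_slots_assigned:16384; as long as any slot is unassigned, cluster_state stays fail and the cluster refuses requests. A minimal way to filter just those fields:

#show only the state and slot counters
redis-cli -h 10.0.0.101 -p 6380 cluster info | grep -E 'cluster_state|cluster_slots'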

 

VI. Manually assign the replication relationships

#First get the cluster node information

[root@oldboyedu ~]# redis-cli -h 10.0.0.101 -p 6381 cluster nodes
dfb9b51309b1e3d75beea755f8b73a7d3eea89c3 10.0.0.103:6380@16380 master - 0 1625811869604 0 connected 10922-16383
c186a784c585abf99866ea26c76dcd13dce2d8e0 10.0.0.101:6380@16380 master - 0 1625811871633 2 connected 0-5460
c636aeba37d5227c12fab51499b46ca8bfbdd7f9 10.0.0.103:6381@16381 master - 0 1625811872000 4 connected
6b233de2027031e9174a4ec99d891ca01293214c 10.0.0.102:6380@16380 master - 0 1625811870000 1 connected 5461-10921
6426f2689ca258b402d9b57c35abe6839456bc63 10.0.0.101:6381@16381 myself,master - 0 1625811872000 5 connected
d28f1804e04f6cd29052fd2ba42787ec5fc0d7ee 10.0.0.102:6381@16381 master - 0 1625811872647 3 connected

#Configure the replication relationships

[root@oldboyedu ~]# redis-cli -h 10.0.0.101 -p 6381 cluster replicate 6b233de2027031e9174a4ec99d891ca01293214c  (node ID of 10.0.0.102:6380)
OK
[root@oldboyedu ~]# redis-cli -h 10.0.0.102 -p 6381 cluster replicate dfb9b51309b1e3d75beea755f8b73a7d3eea89c3  (node ID of 10.0.0.103:6380)
OK
[root@oldboyedu ~]# redis-cli -h 10.0.0.103 -p 6381 cluster replicate c186a784c585abf99866ea26c76dcd13dce2d8e0  (node ID of 10.0.0.101:6380)
OK

#Compare the node list before and after

[root@oldboyedu ~]# redis-cli -h 10.0.0.101 -p 6381 cluster nodes
dfb9b51309b1e3d75beea755f8b73a7d3eea89c3 10.0.0.103:6380@16380 master - 0 1625812213000 0 connected 10922-16383
c186a784c585abf99866ea26c76dcd13dce2d8e0 10.0.0.101:6380@16380 master - 0 1625812214000 2 connected 0-5460
c636aeba37d5227c12fab51499b46ca8bfbdd7f9 10.0.0.103:6381@16381 slave c186a784c585abf99866ea26c76dcd13dce2d8e0 0 1625812213000 4 connected
6b233de2027031e9174a4ec99d891ca01293214c 10.0.0.102:6380@16380 master - 0 1625812211880 1 connected 5461-10921
6426f2689ca258b402d9b57c35abe6839456bc63 10.0.0.101:6381@16381 myself,slave 6b233de2027031e9174a4ec99d891ca01293214c 0 1625812213000 5 connected
d28f1804e04f6cd29052fd2ba42787ec5fc0d7ee 10.0.0.102:6381@16381 slave dfb9b51309b1e3d75beea755f8b73a7d3eea89c3 0 1625812214937 3 connected

Conclusion:

After the replication relationships are configured there are 3 masters and 3 slaves, with the replicas crossed between the hosts, so even if a master goes down, its data is not lost on the slave.
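As a final smoke test, reads and writes can be sent through any node with redis-cli -c, which follows the MOVED redirections between the masters; the key and value below are arbitrary examples:

#write and read a test key through the cluster (example key k1)
redis-cli -c -h 10.0.0.101 -p 6380 set k1 v1
redis-cli -c -h 10.0.0.101 -p 6380 get k1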

 
