2. Redis 6 Cluster Scale-Out and Scale-In
Environment:
1. Based on the [redis6 cluster deployment] setup from the previous part.
2. One additional server running Redis is added [192.168.109.139:7000, 192.168.109.139:7001, 192.168.109.139:7002] to demonstrate scaling out and in.
3. To simulate a production environment, two simple scripts are used: one continuously writes keys into the cluster, the other verifies that those keys remain complete and available throughout the whole procedure.
vim set_key.sh
```shell
#!/bin/bash
# Redis cluster info: the 6 existing nodes
server01=192.168.109.137
server02=192.168.109.137
server03=192.168.109.137
server04=192.168.109.138
server05=192.168.109.138
server06=192.168.109.138
S01_PORT01=7000
S02_PORT02=7001
S03_PORT03=7002
S04_PORT04=7000
S05_PORT05=7001
S06_PORT06=7002
pass_w='wang!321'
Key_EXtime=100

# Write one key (with a TTL) through the given node and record it for later verification
function redis_connect() {
    /apprun/redis/bin/redis-cli -c -h "$1" -p "$2" -a "$pass_w" set yx:kc:w$a wang$a ex $Key_EXtime 2>/dev/null
    echo "yx:kc:w$a" >> /apprun/keys.txt
    sleep 1
}

if [ -f /apprun/keys.txt ]; then
    rm -f /apprun/keys.txt
fi

a=0
while true
do
    a=$((a + 1))
    echo $a
    r1=$(/apprun/redis/bin/redis-cli -c -h $server01 -p $S01_PORT01 -a "$pass_w" ping 2>/dev/null)
    r2=$(/apprun/redis/bin/redis-cli -c -h $server02 -p $S02_PORT02 -a "$pass_w" ping 2>/dev/null)
    r3=$(/apprun/redis/bin/redis-cli -c -h $server03 -p $S03_PORT03 -a "$pass_w" ping 2>/dev/null)
    r4=$(/apprun/redis/bin/redis-cli -c -h $server04 -p $S04_PORT04 -a "$pass_w" ping 2>/dev/null)
    r5=$(/apprun/redis/bin/redis-cli -c -h $server05 -p $S05_PORT05 -a "$pass_w" ping 2>/dev/null)
    r6=$(/apprun/redis/bin/redis-cli -c -h $server06 -p $S06_PORT06 -a "$pass_w" ping 2>/dev/null)
    # Write through the first node that answers PING
    if [ "$r1" = "PONG" ]; then
        redis_connect $server01 $S01_PORT01
    elif [ "$r2" = "PONG" ]; then
        redis_connect $server02 $S02_PORT02
    elif [ "$r3" = "PONG" ]; then
        redis_connect $server03 $S03_PORT03
    elif [ "$r4" = "PONG" ]; then
        redis_connect $server04 $S04_PORT04
    elif [ "$r5" = "PONG" ]; then
        redis_connect $server05 $S05_PORT05
    elif [ "$r6" = "PONG" ]; then
        redis_connect $server06 $S06_PORT06
    else
        echo "cluster is down..."
    fi
done
```
vim get_key.sh
```shell
#!/bin/bash
# Redis cluster info: the 6 existing nodes
server01=192.168.109.137
server02=192.168.109.137
server03=192.168.109.137
server04=192.168.109.138
server05=192.168.109.138
server06=192.168.109.138
S01_PORT01=7000
S02_PORT02=7001
S03_PORT03=7002
S04_PORT04=7000
S05_PORT05=7001
S06_PORT06=7002
pass_w='wang!321'

# Fetch a previously written key through the given node and verify it exists
function redis_getkeys() {
    /apprun/redis/bin/redis-cli -c -h "$1" -p "$2" -a "$pass_w" get $i 2>/dev/null
    info=$(/apprun/redis/bin/redis-cli -c -h "$1" -p "$2" -a "$pass_w" EXISTS $i 2>/dev/null)
    if [ "$info" = "1" ]; then
        echo "key ${i} exists"
    else
        echo "XXOO~~~info=$info"
    fi
    #sleep 1
}

if [ ! -f /apprun/keys.txt ]; then
    exit 253
fi

for i in $(cat /apprun/keys.txt); do
    echo ${i}
    r1=$(/apprun/redis/bin/redis-cli -c -h $server01 -p $S01_PORT01 -a "$pass_w" ping 2>/dev/null)
    r2=$(/apprun/redis/bin/redis-cli -c -h $server02 -p $S02_PORT02 -a "$pass_w" ping 2>/dev/null)
    r3=$(/apprun/redis/bin/redis-cli -c -h $server03 -p $S03_PORT03 -a "$pass_w" ping 2>/dev/null)
    r4=$(/apprun/redis/bin/redis-cli -c -h $server04 -p $S04_PORT04 -a "$pass_w" ping 2>/dev/null)
    r5=$(/apprun/redis/bin/redis-cli -c -h $server05 -p $S05_PORT05 -a "$pass_w" ping 2>/dev/null)
    r6=$(/apprun/redis/bin/redis-cli -c -h $server06 -p $S06_PORT06 -a "$pass_w" ping 2>/dev/null)
    # Read through the first node that answers PING
    if [ "$r1" = "PONG" ]; then
        redis_getkeys $server01 $S01_PORT01
    elif [ "$r2" = "PONG" ]; then
        redis_getkeys $server02 $S02_PORT02
    elif [ "$r3" = "PONG" ]; then
        redis_getkeys $server03 $S03_PORT03
    elif [ "$r4" = "PONG" ]; then
        redis_getkeys $server04 $S04_PORT04
    elif [ "$r5" = "PONG" ]; then
        redis_getkeys $server05 $S05_PORT05
    elif [ "$r6" = "PONG" ]; then
        redis_getkeys $server06 $S06_PORT06
    fi
done
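Both scripts repeat the same ping-and-pick logic six times, which makes port-variable typos easy. A minimal refactor sketch, assuming the same `redis-cli` path and password as above (the `first_live_node` helper and the `NODES` list are hypothetical names introduced here):

```shell
#!/bin/bash
# Hypothetical helper: print the first "host port" pair that answers PING.
# REDIS_CLI and PASS default to the paths/password used in the scripts above.
REDIS_CLI=${REDIS_CLI:-/apprun/redis/bin/redis-cli}
PASS=${PASS:-'wang!321'}

NODES="192.168.109.137:7000 192.168.109.137:7001 192.168.109.137:7002 \
192.168.109.138:7000 192.168.109.138:7001 192.168.109.138:7002"

first_live_node() {
    local hp host port
    for hp in $NODES; do
        host=${hp%:*}
        port=${hp#*:}
        if [ "$($REDIS_CLI -c -h "$host" -p "$port" -a "$PASS" ping 2>/dev/null)" = "PONG" ]; then
            echo "$host $port"
            return 0
        fi
    done
    return 1    # whole cluster unreachable
}
```

Usage would then be a single call per loop iteration, e.g. `read host port < <(first_live_node) && redis_connect "$host" "$port"`, instead of the six-branch `if/elif` chain.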
Scaling out:
Add a new master node: 192.168.109.139:7000
1. Start the new master node (pre-flight checks omitted):

```
[apprun@localhost redis_cluster]$ /apprun/redis/bin/redis-server /apprun/redis_cluster/7000/redis.conf
```

2. Add the node to the cluster. First check the current cluster state:

```
[apprun@localhost soft]$ /apprun/redis/bin/redis-cli -c -h 192.168.109.137 -p 7000 -a 'wang!321' cluster info
```

Join 192.168.109.139:7000 to the cluster (syntax: add-node new_host:new_port existing_host:existing_port):

```
[apprun@localhost soft]$ /apprun/redis/bin/redis-cli --cluster add-node 192.168.109.139:7000 192.168.109.138:7000 -a 'wang!321'
```

Command notes (see `/apprun/redis/bin/redis-cli --cluster help`):

```
add-node new_host:new_port existing_host:existing_port
  # new_host:new_port            the node being added
  # existing_host:existing_port  any node already in the cluster
  --cluster-slave
  --cluster-master-id <arg>
```

3. Assign slots: the node has joined the cluster, but it owns no slots yet and therefore cannot store any data, so slots must now be assigned to the newly added node:

```
[apprun@localhost apprun]$ /apprun/redis/bin/redis-cli --cluster reshard 192.168.109.137:7000 -a 'wang!321'
# Note: 192.168.109.137:7000 here can be any node in the cluster (other than the newly added one)
```

1) How many slots to move:
How many slots do you want to move (from 1 to 16384)? 10
2) Who receives them? Enter the node-id of 192.168.109.139:7000:
What is the receiving node ID? c7a05ce237bd8177af154af3ce48fd89d7c86453
3) Who gives slots away? Enter the node-ids of the existing masters one at a time (then done when finished), or enter all to draw slots evenly from all three existing masters (recommended):
Source node #1: all
4) Answer yes at the prompt.

Check the cluster info, node info, and stored data to confirm the result.
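To confirm the reshard actually moved the expected number of slots, you can total up the slot ranges that `cluster nodes` prints at the end of each master's line (single slots like `12000` or ranges like `0-5460`). A small offline sketch of such a counter (`count_slots` is a name introduced here):

```shell
#!/bin/bash
# count_slots "0-10 100 200-201" -> total number of hash slots covered.
# Accepts the space-separated slot tokens that `cluster nodes` appends
# to each master line: either a single slot or a start-end range.
count_slots() {
    local total=0 r start end
    for r in $1; do
        case "$r" in
            *-*) start=${r%-*}; end=${r#*-}; total=$((total + end - start + 1));;
            *)   total=$((total + 1));;
        esac
    done
    echo "$total"
}
```

After the 10-slot reshard above, the ranges on the new master's `cluster nodes` line should total 10; the three original masters together should still cover the remaining 16374.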
Add a new replica node: 192.168.109.139:7001, for example as the replica of the master just added
1. Start the new node (pre-flight checks omitted):

```
[apprun@localhost redis_cluster]$ /apprun/redis/bin/redis-server /apprun/redis_cluster/7001/redis.conf
```

Current cluster state:

```
[apprun@localhost soft]$ /apprun/redis/bin/redis-cli -c -h 192.168.109.137 -p 7000 -a 'wang!321' cluster info
```

Cluster node state:

```
[apprun@localhost soft]$ /apprun/redis/bin/redis-cli -c -h 192.168.109.137 -p 7000 -a 'wang!321' cluster nodes
```

2. Join 192.168.109.139:7001 to the cluster as a replica of 192.168.109.139:7000:

```
/apprun/redis/bin/redis-cli --cluster add-node 192.168.109.139:7001 192.168.109.138:7000 -a 'wang!321' --cluster-slave --cluster-master-id c7a05ce237bd8177af154af3ce48fd89d7c86453
```

Notes:
- add-node 192.168.109.139:7001 192.168.109.138:7000 -- add node 192.168.109.139:7001 to the cluster; 192.168.109.138:7000 is an entry point and can be any existing cluster node
- --cluster-slave -- join as a replica
- --cluster-master-id -- the node ID of the master to replicate
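To double-check the new replica really attached to the intended master, you can filter the `cluster nodes` output by the master's node-id: on a replica line, field 3 carries the flags (containing `slave`) and field 4 is the replicated master's id. A sketch assuming that standard line layout (`replicas_of` is a name introduced here):

```shell
#!/bin/bash
# replicas_of MASTER_ID NODES_OUTPUT
# Prints the address (field 2) of every replica whose master-id field
# (field 4 of `cluster nodes` output) matches MASTER_ID.
replicas_of() {
    local master_id=$1 nodes=$2
    echo "$nodes" | awk -v m="$master_id" '$4 == m && $3 ~ /slave/ {print $2}'
}
```

Usage sketch: `replicas_of c7a05ce237bd8177af154af3ce48fd89d7c86453 "$(/apprun/redis/bin/redis-cli -c -h 192.168.109.137 -p 7000 -a 'wang!321' cluster nodes)"` should print the 192.168.109.139:7001 address after the add-node above succeeds.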
After killing the master (192.168.109.139:7000), the replica (192.168.109.139:7001) took over and everything checked out; failing back and forth between the two also worked.
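The failover test above can be scripted: kill the master, then poll until the old replica reports the `master` role in `cluster nodes`. A hedged sketch; the `wait_for_master` name, the defaults for `REDIS_CLI`/`PASS`, and the 30-try timeout are assumptions introduced here:

```shell
#!/bin/bash
REDIS_CLI=${REDIS_CLI:-/apprun/redis/bin/redis-cli}
PASS=${PASS:-'wang!321'}

# wait_for_master NODE_ID HOST PORT [TRIES]
# Polls `cluster nodes` via HOST:PORT until the line for NODE_ID carries
# the master role (also matches "myself,master"). Returns 1 on timeout.
wait_for_master() {
    local node_id=$1 host=$2 port=$3 tries=${4:-30} role
    while [ "$tries" -gt 0 ]; do
        role=$($REDIS_CLI -c -h "$host" -p "$port" -a "$PASS" cluster nodes 2>/dev/null \
               | awk -v id="$node_id" '$1 == id {print $3}')
        case "$role" in *master*) return 0;; esac
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}
```

For the test above you would kill the 192.168.109.139:7000 process, then call `wait_for_master` with the node-id of 192.168.109.139:7001 and any surviving node's address; cluster failover typically completes within a few seconds.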
Scaling in:
Delete master node 192.168.109.139:7000 along with its replica 192.168.109.139:7001 (each cluster master should normally have at least one replica, otherwise the cluster's high availability cannot be guaranteed).
1) Delete the replica first.

Check the cluster:

```
/apprun/redis/bin/redis-cli --cluster check 192.168.109.137:7000 -a 'wang!321'
```

Delete the replica (replica: 192.168.109.139:7001, replica ID: 8f2ed53955dfe9fc22de255e91ba2a9e696055b2):

```
[apprun@localhost ~]$ /apprun/redis/bin/redis-cli --cluster del-node 192.168.109.139:7001 8f2ed53955dfe9fc22de255e91ba2a9e696055b2 -a 'wang!321'
```

Note: del-node takes the node to delete and that node's ID.

2) Before deleting the master, its slots must first be returned (migrated).
Note: if the master being deleted still has a replica attached, delete the replica first; otherwise the slot migration ends with:
(Node 192.168.109.139:7000 replied with error: ERR Please use SETSLOT only with masters.)

Method 1:

Check node and slot state:

```
/apprun/redis/bin/redis-cli --cluster check 192.168.109.137:7000 -a 'wang!321'
```

Migrate the slots of node c7a05ce237bd8177af154af3ce48fd89d7c86453 to node fe0d43871950c7b2a9b7082bfe9b52f56aa4c65f:

```
/apprun/redis/bin/redis-cli --cluster reshard 192.168.109.137:7000 -a 'wang!321' --cluster-from c7a05ce237bd8177af154af3ce48fd89d7c86453 --cluster-to fe0d43871950c7b2a9b7082bfe9b52f56aa4c65f --cluster-slots 75 --cluster-yes
```

Notes:
- reshard -- redistribute slots
- --cluster-from -- the node ID the slots come from; here, the ID of the master being removed, 192.168.109.139:7000 (c7a05ce237bd8177af154af3ce48fd89d7c86453)
- --cluster-to -- the node ID the slots go to; any cluster master other than the one being removed; here fe0d43871950c7b2a9b7082bfe9b52f56aa4c65f
- --cluster-slots -- the number of slots to move; per the check above, the master being removed owns 75 slots
- --cluster-yes -- confirm the migration automatically

Method 2:

```
[apprun@localhost ~]$ /apprun/redis/bin/redis-cli --cluster reshard 192.168.109.137:7000 -a 'wang!321'
```

How many slots to move (the slot count of the master being deleted):
How many slots do you want to move (from 1 to 16384)? 10
Who receives them (any master other than the one being deleted):
What is the receiving node ID? fe0d43871950c7b2a9b7082bfe9b52f56aa4c65f
Who gives the slots away (the master being deleted):
Source node #1: c7a05ce237bd8177af154af3ce48fd89d7c86453
Source node #2: done
Do you want to proceed with the proposed reshard plan (yes/no)? yes

3) Delete the node:

```
[apprun@localhost ~]$ /apprun/redis/bin/redis-cli --cluster del-node 192.168.109.139:7000 c7a05ce237bd8177af154af3ce48fd89d7c86453 -a 'wang!321'
```

Note: del-node takes the node to delete and that node's ID.
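Since `del-node` fails on a master that still owns slots, it is worth checking that the node is empty before deleting it. A sketch that counts the slot tokens remaining on a node's `cluster nodes` line, assuming the standard layout where fields 1-8 are id, address, flags, master-id, ping-sent, pong-recv, epoch, and link-state, with slot ranges from field 9 on (`slots_left` is a name introduced here):

```shell
#!/bin/bash
REDIS_CLI=${REDIS_CLI:-/apprun/redis/bin/redis-cli}
PASS=${PASS:-'wang!321'}

# slots_left NODE_ID HOST PORT
# Prints how many slot-range tokens the node still owns (0 = safe to del-node).
slots_left() {
    local node_id=$1 host=$2 port=$3
    $REDIS_CLI -c -h "$host" -p "$port" -a "$PASS" cluster nodes 2>/dev/null \
        | awk -v id="$node_id" '$1 == id { n = (NF > 8) ? NF - 8 : 0; print n }'
}
```

Usage sketch: after the reshard, `[ "$(slots_left c7a05ce237bd8177af154af3ce48fd89d7c86453 192.168.109.137 7000)" = "0" ]` should hold before running the `del-node` step.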
Summary:
Scaling out and scaling in boil down to the same operation: assigning (migrating) slots, driven by three questions:
- How many slots to move: How many slots do you want to move (from 1 to 16384)?
- Who receives them: What is the receiving node ID?
- Who gives them away: Source node #1: ...