CentOS Redis cluster setup
Overview:
10.0.0.111 runs the three master nodes on ports 6500, 6501 and 6502
10.0.0.222 runs the three replica nodes on ports 6500, 6501 and 6502
1. Install Redis: omitted
2. Configure kernel parameters
# set vm.overcommit_memory to 1 so that background saves (RDB snapshots / AOF rewrites) do not fail under low-memory conditions
sysctl -w vm.overcommit_memory=1
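sysctl -w only changes the running kernel; to keep the setting across reboots it can also be written to /etc/sysctl.conf (the standard location on CentOS) and reloaded:
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p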
3. Create directories
cd /usr/local/
mkdir redis-cluster
cd redis-cluster
mkdir 6500 6501 6502
cd /var/log/
mkdir redis
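If preferred, the same directories can be created in a single step:
mkdir -p /usr/local/redis-cluster/{6500,6501,6502} /var/log/redis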
4. Edit the configuration file
cd /etc/redis/
cp redis.conf redis_bak.conf
vim /etc/redis/redis.conf
bind 10.0.0.111  (bind the server's own IP — on 10.0.0.222 this is bind 10.0.0.222; nodes on different servers cannot reach each other if it is left at 127.0.0.1)
protected-mode no  (must be no so that nodes on different servers can connect to each other)
daemonize yes  (run Redis in the background)
cluster-enabled yes
cluster-node-timeout 5000
appendonly yes
5. Copy the file into each port's directory and make the per-port changes below (shown for 6500; repeat for 6501 and 6502, e.g. with the sed sketch after this list)
cp redis.conf /usr/local/redis-cluster/6500
port 6500
pidfile /var/run/redis_6500.pid
logfile /var/log/redis/redis_6500.log
dbfilename dump_6500.rdb
appendfilename "appendonly_6500.aof"
cluster-config-file nodes_6500.conf
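The 6501 and 6502 copies differ from the edited 6500 file only in these port-specific lines, so they can be generated with a small sed sketch like the following (it assumes the string 6500 appears nowhere else in the file):
# generate the 6501 and 6502 configs from the edited 6500 config
for port in 6501 6502; do
  sed "s/6500/${port}/g" /usr/local/redis-cluster/6500/redis.conf > /usr/local/redis-cluster/${port}/redis.conf
done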
6. Start Redis
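Each node is started from its own config file (the same commands are used again in the rebuild test, step 13.3):
redis-server /usr/local/redis-cluster/6500/redis.conf
redis-server /usr/local/redis-cluster/6501/redis.conf
redis-server /usr/local/redis-cluster/6502/redis.conf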
7. Test remote connections
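A minimal reachability check from the other machine (assuming the ports are open in the firewall); a PONG reply means the node accepts remote connections:
redis-cli -h 10.0.0.111 -p 6500 ping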
Perform steps 1-7 on both 10.0.0.111 and 10.0.0.222; the cluster itself is created only once, in step 8.
8. Create the cluster
cd /root/redis-4.0.6/src/
./redis-trib.rb create --replicas 1 10.0.0.111:6500 10.0.0.111:6501 10.0.0.111:6502 10.0.0.222:6500 10.0.0.222:6501 10.0.0.222:6502
At the confirmation prompt you must type yes in full; just pressing Enter or typing y is not accepted.
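To confirm the cluster came up, cluster info on any node should report cluster_state:ok and cluster_slots_assigned:16384, for example:
redis-cli -h 10.0.0.111 -p 6500 cluster info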
9. Connection test
You can connect to any node; the cluster redirects requests to the node that owns the key. Add the -c option (cluster mode: the client follows -MOVED/-ASK redirections).
redis-cli -h 10.0.0.222 -c -p 6500
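For example, set a test key from here (this is the key a that the failover test in step 10 reads back; the value 123 is arbitrary):
redis-cli -h 10.0.0.222 -c -p 6500 set a 123
redis-cli -h 10.0.0.222 -c -p 6500 get a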
10. Simulate a master failure
Kill the three Redis processes on the master host (10.0.0.111), connect to a replica node, and the value of key a set earlier can still be read.
Connecting to any single node gives access to the data of the whole cluster, provided that node's ip+port is reachable; a client could keep a pool of node addresses and switch to another ip+port when one becomes unreachable. A quick check is shown below.
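One way to run that check once the masters on 111 are down (the replicas on 222 should have been promoted by then):
redis-cli -h 10.0.0.222 -c -p 6500 get a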
11. Restart nodes 6500, 6501 and 6502 on 111 and add them to the cluster (these are the steps for adding brand-new nodes; if a node merely went down, simply restarting its process is enough and this is not needed; kept here for reference, feel free to skip)
Add the three nodes to the cluster one by one:
Syntax: ./redis-trib.rb add-node new_host:new_port existing_host:existing_port
cd /root/redis-4.0.6/src/
./redis-trib.rb add-node 10.0.0.111:6500 10.0.0.222:6500
Output:
>>> Adding node 10.0.0.111:6500 to cluster 10.0.0.222:6500
>>> Performing Cluster Check (using node 10.0.0.222:6500)
M: 16ed34de7815c7c13e5263a03685082fda1783a8 10.0.0.222:6500
slots:5461-10922 (5462 slots) master
0 additional replica(s)
M: a0f0745883e3eaec9e469db66ced83e9ce114629 10.0.0.222:6502
slots:10923-16383 (5461 slots) master
0 additional replica(s)
M: 458741ac8b916c95c8093573d0718af07da19832 10.0.0.222:6501
slots:0-5460 (5461 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.0.0.111:6500 to make it join the cluster.
[OK] New node added correctly.
Connect to 6500 on 111; the key bbb set in the previous step can still be read. Also set a key ccc.
Continue by adding 6501 and 6502:
./redis-trib.rb add-node 10.0.0.111:6501 10.0.0.222:6501
./redis-trib.rb add-node 10.0.0.111:6502 10.0.0.222:6502
Connect to 6501 on 111 and read the key ccc set above; the value is returned.
12. Reshard the hash slots (otherwise the newly added nodes can still serve reads and writes, but only by redirecting them to the other nodes, since they own no slots)
E:\Program Files\Redis>redis-cli -h 10.0.0.111 -c -p 6500 cluster nodes
d64d42cc090440053eb464539f8a8bf0a93543d6 10.0.0.222:6502@16502 slave 21f1919531e456bdff078ab7351ed4e900287f9c 0 1534490272266 6 connected
21f1919531e456bdff078ab7351ed4e900287f9c 10.0.0.111:6501@16501 master - 0 1534490273000 2 connected 10923-16383
a6990cc5fa94c8e713bcc44536bf87d2e889dce8 10.0.0.222:6500@16500 master - 0 1534490273000 4 connected 5461-10922
9049ba4a344dbd1e75ee62f40e12dc07881f0a37 10.0.0.111:6500@16500 myself,master - 0 1534490273000 1 connected 0-5460
246b6da627b00c8dbc5e513592c227af6185a765 10.0.0.111:6502@16502 slave a6990cc5fa94c8e713bcc44536bf87d2e889dce8 0 1534490274269 4 connected
0957bbd8905c9d5a50d4545571304e175bc8f892 10.0.0.222:6501@16501 slave 9049ba4a344dbd1e75ee62f40e12dc07881f0a37 0 1534490273267 5 connected
Find the id of the master node: a6990cc5fa94c8e713bcc44536bf87d2e889dce8
redis-cli -h 10.0.0.111 -c -p 6501 cluster nodes
./redis-trib.rb reshard 10.0.0.111:6500
When asked how many slots to move, enter the amount, e.g. 4000
When asked for the receiving node id, enter: a6990cc5fa94c8e713bcc44536bf87d2e889dce8
When asked for the source nodes, enter: all
Confirm the plan with: yes
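If the redis-trib.rb version in use supports the non-interactive options, the same reshard can be expressed in one command (a sketch, not verified against every version):
./redis-trib.rb reshard 10.0.0.111:6500 --from all --to a6990cc5fa94c8e713bcc44536bf87d2e889dce8 --slots 4000 --yes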
13. Cluster rebuild test
1. Kill all node processes: pkill -9 redis
2. Delete the generated files (run on both 111 and 222)
cd /usr/local/redis-cluster/
rm *.rdb
rm *.conf
cd /root
rm *.rdb
rm *.conf
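Since appendonly is enabled in step 4, the appendonly_*.aof files presumably have to be removed in the same directories as well, otherwise redis-trib will refuse to create the cluster because the nodes are not empty (this is an assumption based on the config above, not part of the original walkthrough):
rm *.aof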
3. Start all node processes
[root@centos7 redis]# redis-server /usr/local/redis-cluster/6500/redis.conf
[root@centos7 redis]# redis-server /usr/local/redis-cluster/6501/redis.conf
[root@centos7 redis]# redis-server /usr/local/redis-cluster/6502/redis.conf
4. Recreate the cluster (step 8)
For some reason the first three nodes listed did not all become masters in that order (likely because redis-trib spreads masters and replicas across the two hosts rather than simply following the argument order).
5. Check the cluster state:
E:\Program Files\Redis>redis-cli -h 10.0.0.111 -c -p 6500 cluster nodes
d64d42cc090440053eb464539f8a8bf0a93543d6 10.0.0.222:6502@16502 slave 21f1919531e456bdff078ab7351ed4e900287f9c 0 1534490272266 6 connected
21f1919531e456bdff078ab7351ed4e900287f9c 10.0.0.111:6501@16501 master - 0 1534490273000 2 connected 10923-16383
a6990cc5fa94c8e713bcc44536bf87d2e889dce8 10.0.0.222:6500@16500 master - 0 1534490273000 4 connected 5461-10922
9049ba4a344dbd1e75ee62f40e12dc07881f0a37 10.0.0.111:6500@16500 myself,master - 0 1534490273000 1 connected 0-5460
246b6da627b00c8dbc5e513592c227af6185a765 10.0.0.111:6502@16502 slave a6990cc5fa94c8e713bcc44536bf87d2e889dce8 0 1534490274269 4 connected
0957bbd8905c9d5a50d4545571304e175bc8f892 10.0.0.222:6501@16501 slave 9049ba4a344dbd1e75ee62f40e12dc07881f0a37 0 1534490273267 5 connected
The masters are 6500 and 6501 on 111 plus 6500 on 222; only the master nodes are assigned hash slots.
6. Try moving the master role of 222's 6500 over to 111's 6502
Kill the 6500 process on 222; 6502 on 111 is automatically promoted to master and takes over its hash slots.
Restart the 6500 process on 222; in the cluster nodes output its state changes from master,fail back to slave.
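The promotion and the later role change can be watched with the same cluster nodes command used above, for example:
redis-cli -h 10.0.0.111 -c -p 6502 cluster nodes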
References:
Official documentation: Redis cluster tutorial
Official documentation: Redis cluster specification
Note: the error (error) CLUSTERDOWN Hash slot not served means the key being accessed maps to a hash slot that no node is currently serving, e.g. before the cluster has been created or while slot coverage is incomplete.