Redis Cluster Deployment
1. Introduction
2. Environment
Environment: one CentOS 7.4 host
OS: CentOS Linux release 7.4.1708 (Core)
Tools: SecureCRT (or Xshell) and SecureFX (or Xftp)
Build toolchain: gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
Packages:
redis-4.0.9.tar.gz (cluster support is available from Redis 3.0 onwards)
ruby-2.5.1.tar.gz (2.4 or 2.5 is fine; avoid installing Ruby with yum, which provides 2.0.0, because gem install redis requires Ruby 2.2.2 or later)
redis-4.0.0.gem
zlib-1.2.11.tar.gz
Listening ports: 7001 7002 7003 7004 7005 7006
Configuration files: /app/redis/redis-cluster/
In this article we build a six-node Redis cluster on a single Linux host (in a real production environment, the three masters should be spread across three separate Linux servers).
Install as the root user; an ordinary user does not have permission to create the directories or compile, and would have to be granted those rights.
Only the offline installation steps are described below.
3. Deploying Redis
Step 1: Install the gcc toolchain
Check the gcc version; if gcc is not installed, install it:
gcc -v
gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
To install it online with yum: yum -y install gcc gcc-c++
Step 2: Extract Redis and build/install it
/app/redis/
tar -xvf redis-4.0.9.tar.gz
cd redis-4.0.9/
make
/app/redis/redis-4.0.9/src
make install
Hint: It's a good idea to run 'make test' ;)
INSTALL install
INSTALL install
INSTALL install
INSTALL install
INSTALL install
After the installation, a few new files appear under /usr/local/bin:
/usr/local/bin
-rwxr-xr-x 1 root root 5600190 Jun 19 14:33 redis-benchmark
-rwxr-xr-x 1 root root 8331349 Jun 19 14:33 redis-check-aof
-rwxr-xr-x 1 root root 8331349 Jun 19 14:33 redis-check-rdb
-rwxr-xr-x 1 root root 5739794 Jun 19 14:33 redis-cli
lrwxrwxrwx 1 root root 12 Jun 19 14:33 redis-sentinel -> redis-server
-rwxr-xr-x 1 root root 8331349 Jun 19 14:33 redis-server
Step 3: Extract Ruby and build/install it
/app/redis
tar -xvf ruby-2.5.1.tar.gz
cd ruby-2.5.1/
./configure --prefix=/usr/local/ruby --the --prefix option sets the Ruby install directory; you can choose a different path
make && make install
/usr/local/ruby
Four new directories appear under the Ruby install directory:
drwxr-xr-x 2 root root 4096 Jun 19 14:48 bin
drwxr-xr-x 3 root root 4096 Jun 19 14:48 include
drwxr-xr-x 4 root root 4096 Jun 19 14:48 lib
drwxr-xr-x 5 root root 4096 Jun 19 14:48 share
Configure the Ruby environment variable:
vi /etc/profile
export PATH=$PATH:/usr/local/ruby/bin
:wq
source /etc/profile
echo $PATH
/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/openssh-7.5p1/bin:/root/bin:/usr/local/ruby/bin
Check the Ruby version:
ruby -v
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux]
Step 4: Create the redis-cluster directory, copy the Redis gem into it, and copy the redis-trib.rb cluster management tool from the src directory into the cluster directory
/app/redis
mkdir redis-cluster
cp redis-4.0.0.gem redis-cluster/
/app/redis/redis-4.0.9/src
cp redis-trib.rb /app/redis/redis-cluster/
Step 5: Install the Redis gem with gem
/app/redis/redis-cluster
gem install redis-4.0.0.gem
If the installation runs without problems you will see:
Successfully installed redis-4.0.0
Parsing documentation for redis-4.0.0
Installing ri documentation for redis-4.0.0
Done installing documentation for redis after 1 seconds
1 gem installed
If step 5 fails with an error about zlib or openssl, refer to error collection 2.
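As a quick sanity check (not part of the original steps), you can ask RubyGems to list the installed gem; this assumes /usr/local/ruby/bin is on PATH as configured above:
gem list redis --the output should include a line reading: redis (4.0.0)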
Step 6: Go back to the Ruby source directory and make the following changes
/app/redis/ruby-2.5.1/ext/zlib
ruby extconf.rb --on success this prints "creating Makefile"
creating Makefile --if "creating Makefile" does not appear, run the command below
ruby extconf.rb --with-zlib-dir=/usr/local/zlib/
vi Makefile
Change the line zlib.o: $(top_srcdir)/include/ruby.h
to the line    zlib.o: ../../include/ruby.h
make
linking shared-object zlib.so
make install
/usr/bin/install -c -m 0755 zlib.so /usr/local/ruby/lib/ruby/site_ruby/2.5.0/x86_64-linux
If you run make without modifying the Makefile first, it fails with the following error:
make: *** No rule to make target `/include/ruby.h', needed by `zlib.o'. Stop.
Step 7: Go back to the Ruby source directory and make the following changes
/app/redis/ruby-2.5.1/ext/openssl
ruby extconf.rb --on success this prints "creating Makefile"
If "creating Makefile" does not appear, refer to error collection 2.
vi Makefile
Replace every occurrence of $(top_srcdir) with ../.. (there is more than one $(top_srcdir))
make
linking shared-object openssl.so
make install
/usr/bin/install -c -m 0755 openssl.so /usr/local/ruby/lib/ruby/site_ruby/2.5.0/x86_64-linux
installing default openssl libraries
If you run make without modifying the Makefile first, it fails with the following error:
make: *** No rule to make target `/include/ruby.h', needed by `ossl.o'. Stop.
4. Creating the Cluster
As mentioned above, the cluster needs six Redis nodes, so we create six directories, one for each node's configuration. Each node needs its own port, for example 7001-7006; the port numbers can be chosen freely.
/app/redis/redis-cluster/
mkdir 700{1,2,3,4,5,6} --create the six directories in one command
Copy redis.conf from the Redis source directory into each of the six new directories:
/app/redis/redis-4.0.9
cp redis.conf /app/redis/redis-cluster/7001/
cp redis.conf /app/redis/redis-cluster/7002/
cp redis.conf /app/redis/redis-cluster/7003/
cp redis.conf /app/redis/redis-cluster/7004/
cp redis.conf /app/redis/redis-cluster/7005/
cp redis.conf /app/redis/redis-cluster/7006/
Copy the redis-server and redis-cli binaries generated by the installation into each of the six directories (a loop version of these copies is sketched after the commands below):
cd /usr/local/bin/
cp redis-server redis-cli /app/redis/redis-cluster/7001/
cp redis-server redis-cli /app/redis/redis-cluster/7002/
cp redis-server redis-cli /app/redis/redis-cluster/7003/
cp redis-server redis-cli /app/redis/redis-cluster/7004/
cp redis-server redis-cli /app/redis/redis-cluster/7005/
cp redis-server redis-cli /app/redis/redis-cluster/7006/
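If you prefer, the twelve cp commands above can be collapsed into one loop; a minimal sketch that assumes the same paths used in this article:
cd /app/redis/redis-cluster
for p in 7001 7002 7003 7004 7005 7006; do
    cp /app/redis/redis-4.0.9/redis.conf $p/
    cp /usr/local/bin/redis-server /usr/local/bin/redis-cli $p/
done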
Modify the following parameters in redis.conf in each of the six new directories (a scripted way to apply the same edits is sketched after the parameter list):
/app/redis/redis-cluster/
Edit redis.conf in each of 7001-7006:
bind 192.168.4.212 --the IP address that clients use to connect; if you do not change this, external clients cannot reach your Redis server
port 700X --X matches the directory name, i.e. 7001 through 7006
daemonize yes --enable daemon mode: Redis runs in the background and writes its PID to the file set by the pidfile option in redis.conf, and keeps running until the process is killed manually
pidfile /app/redis/redis-cluster/700x/redis_700x.pid --x matches the directory name
cluster-enabled yes --enable cluster mode
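Editing six copies of redis.conf by hand is error-prone, so here is a sketch that applies the same five changes with sed; it assumes the stock redis-4.0.9 defaults (bind 127.0.0.1, port 6379, daemonize no, the default pidfile line, and a commented-out cluster-enabled) plus the bind address used in this article:
for p in 7001 7002 7003 7004 7005 7006; do
    f=/app/redis/redis-cluster/$p/redis.conf
    sed -i "s/^bind 127.0.0.1/bind 192.168.4.212/" $f        # bind to the host IP
    sed -i "s/^port 6379/port $p/" $f                        # one port per node
    sed -i "s/^daemonize no/daemonize yes/" $f               # run in the background
    sed -i "s|^pidfile .*|pidfile /app/redis/redis-cluster/$p/redis_$p.pid|" $f
    sed -i "s/^# cluster-enabled yes/cluster-enabled yes/" $f # enable cluster mode
done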
/app/redis/redis-cluster
Write a start-up script, start-all.sh (use vi if vim is not available):
vim start-all.sh
#!/bin/bash
# start each of the six instances from its own directory
cd 7001
./redis-server redis.conf
cd ..
cd 7002
./redis-server redis.conf
cd ..
cd 7003
./redis-server redis.conf
cd ..
cd 7004
./redis-server redis.conf
cd ..
cd 7005
./redis-server redis.conf
cd ..
cd 7006
./redis-server redis.conf
cd ..
Then make the script executable:
chmod +x start-all.sh
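For completeness, a matching stop script can shut all six instances down again when you need to; this stop-all.sh is not part of the original article and assumes the bind address and ports configured above:
#!/bin/bash
# ask each node to shut down via redis-cli
for p in 7001 7002 7003 7004 7005 7006; do
    /usr/local/bin/redis-cli -h 192.168.4.212 -p $p shutdown nosave
done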
Before starting the instances, make sure the build toolchain is in place (yum -y install gcc gcc-c++, as in step 1).
Start the cluster instances:
./start-all.sh
ps -ef|grep redis
root 22911 1 0 17:20 ? 00:00:00 ./redis-server 192.168.4.212:7001 [cluster]
root 22913 1 0 17:20 ? 00:00:00 ./redis-server 192.168.4.212:7002 [cluster]
root 22915 1 0 17:20 ? 00:00:00 ./redis-server 192.168.4.212:7003 [cluster]
root 22917 1 0 17:20 ? 00:00:00 ./redis-server 192.168.4.212:7004 [cluster]
root 22919 1 0 17:20 ? 00:00:00 ./redis-server 192.168.4.212:7005 [cluster]
root 22927 1 0 17:20 ? 00:00:00 ./redis-server 192.168.4.212:7006 [cluster]
root 22943 24007 0 17:20 pts/9 00:00:00 grep redis
Configure the cluster
/app/redis/redis-cluster
This creates three master nodes and three replica nodes; --replicas 1 means each master gets one replica (you can specify any number of replicas per master).
./redis-trib.rb create --replicas 1 192.168.4.212:7001 192.168.4.212:7002 192.168.4.212:7003 192.168.4.212:7004 192.168.4.212:7005 192.168.4.212:7006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.4.212:7001 --master
192.168.4.212:7002 --master
192.168.4.212:7003 --master
Adding replica 192.168.4.212:7005 to 192.168.4.212:7001 --replica assigned to this master
Adding replica 192.168.4.212:7006 to 192.168.4.212:7002 --replica assigned to this master
Adding replica 192.168.4.212:7004 to 192.168.4.212:7003 --replica assigned to this master
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 878f46c817107cd7a59269b0335f9a8be7529080 192.168.4.212:7001
slots:0-5460 (5461 slots) master --hash slots assigned to this master
M: cfda6a1cf00977311fa46f2920a4e1198e8b9f36 192.168.4.212:7002
slots:5461-10922 (5462 slots) master --hash slots assigned to this master
M: efcf4009a56f0a488b72b6c50d0deefd7b416df3 192.168.4.212:7003
slots:10923-16383 (5461 slots) master --hash slots assigned to this master
S: 1a0bbbb75c8fa0f377767fe0ba194c8d63814b23 192.168.4.212:7004
replicates cfda6a1cf00977311fa46f2920a4e1198e8b9f36 --replicas hold no hash slots
S: c8d88f8e36b5741c7e013a6b84f24a64976f2901 192.168.4.212:7005
replicates efcf4009a56f0a488b72b6c50d0deefd7b416df3 --replicas hold no hash slots
S: 10f48bfea3db737b55d8fee14ad795252d396e2f 192.168.4.212:7006
replicates 878f46c817107cd7a59269b0335f9a8be7529080 --replicas hold no hash slots
Can I set the above configuration? (type 'yes' to accept): yes --type yes to accept this master/replica assignment; you could also designate the replicas yourself via the configuration
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.4.212:7001) --the detailed master/replica layout follows
M: 878f46c817107cd7a59269b0335f9a8be7529080 192.168.4.212:7001
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: efcf4009a56f0a488b72b6c50d0deefd7b416df3 192.168.4.212:7003
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: cfda6a1cf00977311fa46f2920a4e1198e8b9f36 192.168.4.212:7002
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 1a0bbbb75c8fa0f377767fe0ba194c8d63814b23 192.168.4.212:7004
slots: (0 slots) slave
replicates cfda6a1cf00977311fa46f2920a4e1198e8b9f36
S: 10f48bfea3db737b55d8fee14ad795252d396e2f 192.168.4.212:7006
slots: (0 slots) slave
replicates 878f46c817107cd7a59269b0335f9a8be7529080
S: c8d88f8e36b5741c7e013a6b84f24a64976f2901 192.168.4.212:7005
slots: (0 slots) slave
replicates efcf4009a56f0a488b72b6c50d0deefd7b416df3
[OK] All nodes agree about slots configuration.
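Once all nodes agree on the slot configuration, you can verify the cluster from any node with redis-cli in cluster mode (-c); for example, using the address and ports from this article:
redis-cli -c -h 192.168.4.212 -p 7001 cluster info --cluster_state should be ok and cluster_known_nodes should be 6
redis-cli -c -h 192.168.4.212 -p 7001 cluster nodes --lists the three masters and their replicas
redis-cli -c -h 192.168.4.212 -p 7001 set foo bar --with -c the client follows slot redirections automatically
redis-cli -c -h 192.168.4.212 -p 7002 get foo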