Redis Cluster Deployment
4.5.1 Server Planning
OS: CentOS 7.3.1611
Kernel: 4.17.11-1.el7.elrepo.x86_64
6-node plan

Master | Slave
spark01,192.168.234.21:7001 | spark04,192.168.234.24:7004
spark02,192.168.234.22:7002 | spark05,192.168.234.25:7005
spark03,192.168.234.23:7003 | spark06,192.168.234.26:7006
3-node plan

Master | Slave
spark01,192.168.234.21:7001 | spark02,192.168.234.22:7004
spark02,192.168.234.22:7002 | spark03,192.168.234.23:7005
spark03,192.168.234.23:7003 | spark01,192.168.234.21:7006
4.5.2 Installing the Packages
Download link: https://pan.baidu.com/s/1VzhKRw5hRtmHEkgCj0ZmNg (extraction code: 3fxv)
Run the following on every Redis host:
# 1. Download redis_cluster3.2.10.tar.gz
# 2. Extract it
tar xf redis_cluster3.2.10.tar.gz -C /data/egon/software/

# 3. Configure ruby.repo (the local rpms directory also contains the haproxy and keepalived packages)
cat > /etc/yum.repos.d/ruby.repo << EOF
[ruby]
name=ruby
baseurl=file:///data/egon/software/redis_cluster/rpms/
enabled=1
gpgcheck=0
EOF

# 4. Install the Ruby environment
yum install ruby -y

# 5. Install redis-3.2.1.gem, the Ruby gem that redis-trib.rb depends on
cd /data/egon/software/redis_cluster/
gem install redis-3.2.1.gem

# 6. Set the password for the gem
# 6.1 The password must match the one in the Redis config files; the default is Redis@egon_2022
cd /data/egon/software/redis_cluster
cat redis/conf/7001/redis.conf      # the password is written in the last two lines

# 6.2 Edit the gem config: the password value must be wrapped in quotes
find / -name client.rb              # locate the file
vim /usr/local/share/gems/gems/redis-3.2.1/lib/redis/client.rb
# change the :password default as follows
class Client
  DEFAULTS = {
    :url => lambda { ENV["REDIS_URL"] },
    :scheme => "redis",
    :host => "127.0.0.1",
    :port => 6379,
    :path => nil,
    :timeout => 5.0,
    :password => "Redis@egon_2022",
    :db => 0,
    :driver => nil,
    :id => nil,
    :tcp_keepalive => 0,
    :reconnect_attempts => 1,
    :inherit_socket => false
  }
# or make the same change with sed
sed -ri '/:password =>/s/nil/"Redis@egon_2022"/' /usr/local/share/gems/gems/redis-3.2.1/lib/redis/client.rb

# 7. Configure the environment variables
vim /etc/profile
# append /data/egon/software/redis_cluster/redis/bin to PATH
source /etc/profile
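A quick sanity check after the installation (a minimal sketch; the paths, gem version and password are the ones used above):

# verify the Ruby toolchain and the redis gem that redis-trib.rb needs
ruby -v
gem list redis
# verify that the redis binaries are now on PATH
which redis-server redis-cli redis-trib.rb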
4.5.3 Starting the Redis Cluster
Start the cluster according to the server plan. Taking the 3-node plan as an example, first start the Redis instances.
Start 7001 and 7006 on spark01:
cd /data/egon/software/redis_cluster/redis/
cd conf/7001
redis-server redis.conf

cd /data/egon/software/redis_cluster/redis/
cd conf/7006
redis-server redis.conf
Start 7002 and 7004 on spark02:
cd /data/egon/software/redis_cluster/redis/
cd conf/7002
redis-server redis.conf

cd /data/egon/software/redis_cluster/redis/
cd conf/7004
redis-server redis.conf
Start 7003 and 7005 on spark03:
cd /data/egon/software/redis_cluster/redis/
cd conf/7003
redis-server redis.conf

cd /data/egon/software/redis_cluster/redis/
cd conf/7005
redis-server redis.conf
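Before creating the cluster it is worth confirming that both instances on each host are actually up (a minimal sketch; run it on each host, substituting that host's two ports from the plan above):

# list the running redis-server processes
ps -ef | grep redis-server | grep -v grep
# or ping each instance directly (password as configured in redis.conf)
redis-cli -a Redis@egon_2022 -p 7001 ping
redis-cli -a Redis@egon_2022 -p 7006 ping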
Then use redis-trib.rb to create the cluster and set up replication: the first three nodes listed become masters, and the last three become slaves of the first three, in order.
redis-trib.rb create --replicas 1 spark01:7001 spark02:7002 spark03:7003 spark02:7004 spark03:7005 spark01:7006
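Once the create command finishes, the cluster state can be confirmed from any node (a minimal sketch; host names and password follow the plan above):

# expect 3 masters, 3 slaves and all 16384 slots covered
redis-trib.rb check spark01:7001
# or query a node directly
redis-cli -a Redis@egon_2022 -c -h spark01 -p 7001 cluster info
redis-cli -a Redis@egon_2022 -c -h spark01 -p 7001 cluster nodes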
4.5.4 Cluster Test
Log in on any node. A slave holds no hash slots, but you can still log in to it and issue writes; the client is automatically redirected to the corresponding master.
# Note: make sure the IP you use is the one of the host that owns the port
# Write
[root@yq01-aip-aixxx19 7006]# redis-cli -a Redis@egon_2022 -c -h 10.61.187.20 -p 7005
10.61.187.20:7005> set name tom
-> Redirected to slot [5798] located at 10.61.187.24:7002    # automatically redirected to the master that owns slot 5798
OK
10.61.187.24:7002>

# Read
[root@yq01-aip-aixxx19 ~]# redis-cli -a Redis@egon_2022 -c -h 10.61.187.24 -p 7002
10.61.187.24:7002> get name
"tom"
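If you want to know which slot a key maps to before writing it, CLUSTER KEYSLOT reports the slot number (a minimal sketch using the same key and node as above):

# prints 5798, the slot shown in the redirect above
redis-cli -a Redis@egon_2022 -c -h 10.61.187.24 -p 7002 cluster keyslot name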
Test the failover election by shutting down one Redis instance.
# 1. Check the cluster
redis-trib.rb check 10.61.187.24:7002    # you should see three M (masters) and three S (slaves)

# 2. Kill one master, either with kill -9 <pid> or with:
redis-cli -a Redis@egon_2022 -c -h 10.61.187.24 -p 7002 shutdown nosave

# 3. Check the cluster again
redis-trib.rb check 10.153.204.22:7001   # one S has been promoted to M; there are now three M and two S

# 4. Restart the killed instance, on spark02
cd /data/egon/software/redis_cluster/redis/conf/7002
redis-server redis.conf

# 5. Check once more: the restarted instance can only rejoin as a slave
redis-trib.rb check 10.153.204.22:7001
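Another way to watch the promotion is to poll CLUSTER NODES on a surviving node while the master is down (a minimal sketch; host, port and password are taken from the examples above):

# the killed master's line changes to "fail" and its former slave's flags change to "master"
watch -n 1 "redis-cli -a Redis@egon_2022 -h 10.61.187.20 -p 7005 cluster nodes"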
4.5.5 Deploying haproxy + keepalived
Deploy on spark02 and spark03:
yum install haproxy keepalived -y    # the yum repo was configured earlier, so the packages can be installed directly
On spark02 and spark03, edit the haproxy config: vim /etc/haproxy/haproxy.cfg
global
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode tcp
    option dontlognull
    option http-server-close
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen stats
    mode http
    bind 0.0.0.0:8099
    stats enable
    stats hide-version
    stats uri /haproxy_status
    stats realm Haproxy\ Statistics
    stats auth admin:ha.egon@2022
    stats admin if TRUE

listen redis
    bind 0.0.0.0:6379
    mode tcp
    balance roundrobin
    server c1 192.168.234.21:7001 check
    server c2 192.168.234.22:7002 check
    server c3 192.168.234.23:7003 check
    server c4 192.168.234.22:7004 check
    server c5 192.168.234.23:7005 check
    server c6 192.168.234.21:7006 check

# haproxy proxies all Redis instances, slaves included: even if a write lands on a slave,
# the client is redirected to the master that owns the computed slot.
# haproxy's own up/down scheduling matters little here, because Redis Cluster itself redirects
# the request to the right node; as long as the backend node is not down, the redirect still works.
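Before (re)starting haproxy it is worth validating the configuration file (a minimal sketch using the default config path from above):

# -c only checks the configuration and exits, it does not start the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg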
On spark02, edit the keepalived config: vim /etc/keepalived/keepalived.conf
global_defs {
    script_user root
    enable_script_security
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 6666
    }
    track_script {
        chk_haproxy        # monitor the haproxy process state
    }
    virtual_ipaddress {
        10.61.187.51/25 dev eth0
    }
    track_interface {
        eth0
    }
    # If the uplink switch restricts ARP/broadcast traffic, keepalived cannot communicate via
    # broadcast; the two servers then both claim the VIP and end up holding it at the same time.
    # In that case switch to unicast, as below.
    # Local IP: the IP of this host (spark02 in this example)
    unicast_src_ip 10.61.187.24
    unicast_peer {
        # Peer IP: the IP of the other host (spark03 in this example)
        10.61.187.20
    }
}
On spark03, edit the keepalived config: vim /etc/keepalived/keepalived.conf
global_defs {
    script_user root
    enable_script_security
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50   # can be thought of as a group id; must match the MASTER's
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 6666
    }
    track_script {
        chk_haproxy        # monitor the haproxy process state
    }
    virtual_ipaddress {
        10.61.187.51/25 dev eth0
    }
    track_interface {
        eth0
    }
    # Source IP: the IP of this host
    unicast_src_ip 10.61.187.20
    unicast_peer {
        # Peer IP
        10.61.187.24
    }
}
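If the two nodes still fight over the VIP, you can confirm that the unicast VRRP advertisements are actually arriving (a minimal sketch; VRRP is IP protocol 112 and eth0 is the interface configured above):

# on either node, you should see advertisements from the current MASTER roughly once per second
tcpdump -i eth0 -nn ip proto 112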
On spark02 and spark03, create the check script /etc/keepalived/check_haproxy.sh:
# 1. vim /etc/keepalived/check_haproxy.sh
#!/bin/bash
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    # haproxy is down, try to start it first
    systemctl start haproxy
fi
sleep 2    # wait two seconds for haproxy to come up fully
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    # haproxy still failed to start, so stop keepalived and let the VIP float to the other node
    systemctl stop keepalived
fi

# 2. make it executable
chmod +x /etc/keepalived/check_haproxy.sh
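You can exercise the script by hand before letting keepalived run it (a minimal sketch; same paths as above):

# stop haproxy, run the check script, then confirm it brought haproxy back up
systemctl stop haproxy
bash /etc/keepalived/check_haproxy.sh
ps -C haproxy --no-header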
On spark02 and spark03, enable keepalived logging:
# configure keepalived
[root@lb01 ~]# vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -d -S 0"

# configure rsyslog to capture the log
[root@lb01 ~]# vim /etc/rsyslog.conf
# append at the end of the file
local0.* /var/log/keepalived.log

# restart the service
[root@lb01 ~]# systemctl restart rsyslog
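After keepalived is restarted, the log can be followed to confirm the VRRP state transitions (a minimal sketch; the log path is the one configured above):

# look for messages such as "Entering MASTER STATE" / "Entering BACKUP STATE"
tail -f /var/log/keepalived.log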
On spark02 and spark03, start the services:
systemctl start haproxy
systemctl start keepalived
systemctl enable haproxy
systemctl enable keepalived

# The stats page is available at http://<IP of spark02 or spark03>:8099/haproxy_status
# Log in with the account and password from the haproxy config
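To confirm everything came up and that exactly one node currently holds the VIP (a minimal sketch; the VIP is the one configured in keepalived above):

# both services should be active on spark02 and spark03
systemctl status haproxy keepalived --no-pager
# the VIP should appear on eth0 of the current MASTER only
ip addr show eth0 | grep 10.61.187.51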
Test connecting to Redis through the VIP:
redis-cli -a Redis@egon_2022 -c -h 10.61.187.51 -p 6379
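A quick end-to-end write/read through the VIP (a minimal sketch; the key name here is an arbitrary example):

# -c lets redis-cli follow the MOVED redirect to the master that owns the slot
redis-cli -a Redis@egon_2022 -c -h 10.61.187.51 -p 6379 set vip_test ok
redis-cli -a Redis@egon_2022 -c -h 10.61.187.51 -p 6379 get vip_test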