CentOS 7 + LVS-DR + keepalived lab (with sorry-server, logging, and HTTP_GET health checks)
I. Overview
1. For LVS-DR fundamentals, see the earlier theory article.
2. For keepalived fundamentals, see the earlier theory article.
3. The failover architecture based on LVS-DR + keepalived is shown in the diagram below:
II. Deployment
1. Environment
Host   | Role             | IP
web1   | LVS + keepalived | 192.168.216.51
web2   | LVS + keepalived | 192.168.216.52
web3   | web (RS)         | 192.168.216.53
web4   | web (RS)         | 192.168.216.54
client | physical host    | -
Note: make sure the firewall and SELinux are disabled on every machine and that the clocks are synchronized.
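A minimal sketch of these prerequisites, assuming firewalld and chronyd are in use (the exact commands are not part of the original capture; run on every node):

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
yum install chrony -y && systemctl enable chronyd && systemctl start chronyd
chronyc sources                                                       # confirm time synchronization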
2. Prepare the web service on the RS nodes; httpd is used here
web3/web4
yum install httpd -y
web3
echo "welcome to web3" >/var/www/html/index.html
systemctl start httpd
systemctl enable httpd
web4
echo "welcome to web4" >/var/www/html/index.html
systemctl start httpd
systemctl enable httpd
Cross-check access between the two RS nodes, and also test from the client's browser.
[root@web3 ~]# curl 192.168.216.54
welcome to web4
[root@web3 ~]#

[root@web4 ~]# curl 192.168.216.54
welcome to web4
[root@web4 ~]#
3. ARP suppression: why it is needed and which response/announce levels to change
Setting arp_ignore to 1 means an ARP request is answered only if the target IP is configured on the interface the request arrived on.
Setting arp_announce to 2 restricts ARP announcements: the host always uses the most appropriate local address as the source and never announces an address belonging to a different subnet.
Script implementation; run it on both web3 and web4:
[root@web3 ~]# cd /arp
[root@web3 arp]# ll
total 4
-rwxr-xr-x. 1 root root 469 Apr 23 16:04 arp.sh
[root@web3 arp]# cat arp.sh
#!/bin/bash
case $1 in
start)
    echo 1 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 >/proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
stop)
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 >/proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 >/proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
esac

[root@web3 arp]# chmod +x arp.sh
[root@web3 arp]# ./arp.sh start
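The echo commands above do not survive a reboot. One way to persist them, assuming the standard sysctl mechanism (the file name below is my own choice, not from the original):

cat >/etc/sysctl.d/lvs-rs-arp.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
EOF
sysctl --system    # reload every sysctl configuration file, including the new one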
4. Configure the VIP on the RS nodes
Configure on both web3 and web4.
First, a couple of questions worth answering:
Why bind the VIP to the lo interface?
Since the RS must process IP packets whose destination is the VIP, it first has to accept them; binding the VIP on lo lets the RS accept those packets and return the response directly to the client.
If the VIP were configured on a regular NIC, the RS would answer the client's ARP requests, polluting the ARP table and breaking the load balancing.
Why is the netmask on the RS 255.255.255.255?
Because the VIP on the RS is not used for outbound communication and only appears as the source address in reply packets, it must use a 32-bit mask so that no extra network route is created.
ifconfig lo:0 192.168.216.200 netmask 255.255.255.255 broadcast 192.168.216.200 up
route add -host 192.168.216.200 dev lo:0

[root@web3 arp]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.216.2   0.0.0.0         UG    100    0        0 ens33
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
192.168.216.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.216.200 0.0.0.0         255.255.255.255 UH    0      0        0 lo
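Likewise, the ifconfig/route commands above are lost on reboot. A sketch of a persistent alternative using a network-scripts ifcfg file (the file name and layout are assumptions, not shown in the original):

cat >/etc/sysconfig/network-scripts/ifcfg-lo:0 <<'EOF'
DEVICE=lo:0
IPADDR=192.168.216.200
NETMASK=255.255.255.255
ONBOOT=yes
EOF
ifup lo:0    # bring the alias up from the new file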
5. Prepare ipvsadm on the directors
web1/web2
yum install ipvsadm -y
[root@web2 keepalived]# ipvsadm -C
[root@web2 keepalived]# ipvsadm -A -t 192.168.216.200:80 -s rr
[root@web2 keepalived]# ipvsadm -a -t 192.168.216.200:80 -r 192.168.216.53 -g -w 1
[root@web2 keepalived]# ipvsadm -a -t 192.168.216.200:80 -r 192.168.216.54 -g -w 2
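Once keepalived is running, its healthchecker builds and maintains the IPVS rules from the virtual_server block, so the manual rules above mainly confirm that ipvsadm and the ip_vs kernel module work. A quick way to verify and then reset (a suggestion, not part of the original session):

ipvsadm -L -n    # confirm the virtual service and both real servers are listed
ipvsadm -C       # optional: clear the manual rules; keepalived recreates them from its own config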
6. Configure the sorry-server
web1/web2: install a web server
yum install nginx -y
web1
echo "sorry, under maintenance #####web1" >/usr/share/nginx/html/index.html
web2
echo "sorry,under maintanance #####web2" >/usr/share/nginx/html/index.html
web1/web2
systemctl start nginx
systemctl enable nginx
Check from the client that the web application responds normally.
Later, sorry_server 127.0.0.1 80 will be added inside the virtual_server block of the keepalived configuration.
7. Configure keepalived and HTTP_GET-based health checks
web1/web2: install keepalived
yum install keepalived -y
web1: MASTER configuration
[root@web1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#     acassen@firewall.loc
#     failover@firewall.loc
#     sysadmin@firewall.loc
#   }
#   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
    router_id LVS_DEVEL
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_script chk_maintanance {
    script "/etc/keepalived/chkdown.sh"
    interval 1
    weight -20
}

#vrrp_script chk_nginx {
#    script "/etc/keepalived/chknginx.sh"
#    interval 1
#    weight -20
#}

#VIP1
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.216.200
    }
    track_script {
        chk_maintanance
    }
#    track_script {
#        chk_nginx
#    }
}

#VIP2
#vrrp_instance VI_2 {
#    state BACKUP
#    interface ens33
#    virtual_router_id 51
#    priority 90
#    advert_int 1
#    authentication {
#        auth_type PASS
#        auth_pass 1111
#    }
#    virtual_ipaddress {
#        192.168.216.210
#    }
#    track_script {
#        chk_maintanance
#    }
#    track_script {
#        chk_nginx
#    }
#}

virtual_server 192.168.216.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.0.0
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.216.53 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.216.54 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
web2: BACKUP configuration
[root@web2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#     acassen@firewall.loc
#     failover@firewall.loc
#     sysadmin@firewall.loc
#   }
#   notification_email_from Alexandre.Cassen@firewall.loc
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
    router_id LVS_DEVEL1
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_script chk_maintanance {          # dynamic failover via this script is covered in the Centos7+nginx+keepalived cluster and dual-master architecture article
    script "/etc/keepalived/chkdown.sh"
    interval 1
    weight -20
}

vrrp_script chk_nginx {
    script "/etc/keepalived/chknginx.sh"
    interval 1
    weight -20
}

#VIP1
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.216.200
    }
    track_script {
        chk_maintanance
    }
#    track_script {
#        chk_nginx
#    }
}

#VIP2
#vrrp_instance VI_2 {
#    state MASTER
#    interface ens33
#    virtual_router_id 51
#    priority 100
#    advert_int 1
#    authentication {
#        auth_type PASS
#        auth_pass 1111
#    }
#    virtual_ipaddress {
#        192.168.216.210
#    }
#    track_script {
#        chk_maintanance
#    }
#    track_script {
#        chk_nginx
#    }
#}

virtual_server 192.168.216.200 80 {    # VIP definition
    delay_loop 6                       # health-check polling interval
    lb_algo wrr                        # scheduling algorithm
    lb_kind DR                         # LVS forwarding type
    nat_mask 255.255.0.0
    protocol TCP                       # protocol of the monitored service
    sorry_server 127.0.0.1 80          # sorry-server

    real_server 192.168.216.53 80 {    # real server
        weight 1                       # weight
        HTTP_GET {                     # health-check type: HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK; HTTP_GET is said to be more efficient than TCP_CHECK
            url {
                path /                 # path requested on the RS
                status_code 200        # expected status code
            }
            connect_timeout 3          # connection timeout
            nb_get_retry 3             # number of retries
            delay_before_retry 3       # delay before the next retry
        }
    }

    real_server 192.168.216.54 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
Add the keepalived down script (on both directors):
[root@web1 keepalived]# cat chkdown.sh
#!/bin/bash
[[ -f /etc/keepalived/down ]] && exit 1 || exit 0
[root@web1 keepalived]#
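The chknginx.sh script referenced by the chk_nginx vrrp_script block is not shown in the original. A minimal sketch of what such a check might look like, assuming it only needs to report whether nginx is running (exit 0 = healthy; any other exit code lets keepalived subtract the configured weight):

#!/bin/bash
# Hypothetical /etc/keepalived/chknginx.sh (sketch, not from the original article)
if pgrep -x nginx >/dev/null 2>&1; then
    exit 0
else
    exit 1
fi

With the scripts and configuration in place, keepalived can be started and enabled on both directors (systemctl start keepalived; systemctl enable keepalived); the original capture only shows the later restart in the logging step.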
8. Enable logging
vim /etc/sysconfig/keepalived
Change KEEPALIVED_OPTIONS="-D" to KEEPALIVED_OPTIONS="-D -d -S 0"
[root@web1 keepalived]# cat /etc/sysconfig/keepalived
# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S    0-7 Set local syslog facility (default=LOG_DAEMON)
#

KEEPALIVED_OPTIONS="-D -d -S 0"
Configure rsyslog: add the following lines
vim /etc/rsyslog.conf
#keepalived -S 0
local0.* /var/log/keepalived.log
Restart the services
systemctl restart keepalived
systemctl start rsyslog
systemctl enable rsyslog
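To confirm that logging works, watch the new log file while keepalived restarts (a quick check, not part of the original capture):

tail -f /var/log/keepalived.log    # VRRP and healthchecker messages should now appear here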
III. Verification
1. Verify keepalived failover
On web1:
touch down
ip a    # check: the VIP has disappeared
rm -rf down
ip a    # the VIP automatically comes back
[root@web1 keepalived]# touch down
[root@web1 keepalived]# ll
total 20
-rwxr-xr-x 1 root root   62 Apr 19 12:45 chkdown.sh
-rwxr-xr-x 1 root root  151 Apr 22 19:04 chkmysql.sh
-rwxr-xr-x 1 root root  127 Apr 22 14:50 chknginx.sh
-rw-r--r-- 1 root root    0 Apr 24 17:31 down
-rw-r--r-- 1 root root 1877 Apr 24 17:08 keepalived.conf
-rw-r--r-- 1 root root  494 Apr 19 12:09 notify.sh
[root@web1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1c:8b:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.51/24 brd 192.168.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::3409:e73d:1ef:2e1/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:23:a5:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:23:a5:7c brd ff:ff:ff:ff:ff:ff
[root@web1 keepalived]# rm -rf downn
[root@web1 keepalived]# rm -rf down
[root@web1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1c:8b:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.51/24 brd 192.168.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::3409:e73d:1ef:2e1/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:23:a5:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:23:a5:7c brd ff:ff:ff:ff:ff:ff
[root@web1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1c:8b:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.216.51/24 brd 192.168.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.216.200/32 scope global ens33        # the VIP is back
       valid_lft forever preferred_lft forever
    inet6 fe80::3409:e73d:1ef:2e1/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:23:a5:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:23:a5:7c brd ff:ff:ff:ff:ff:ff
[root@web1 keepalived]#
2. Verify the health checks
1) First check ipvsadm and access the service
[root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.53:80            Route   1      0          0
  -> 192.168.216.54:80            Route   2      0          0
[root@web1 keepalived]#
This is the normal state.
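A quick client-side check of the wrr scheduling (a sketch; this output is not captured in the original and the exact ordering depends on the scheduler state):

for i in $(seq 1 6); do curl -s http://192.168.216.200; done
# Expect roughly a 1:2 mix of "welcome to web3" and "welcome to web4", matching weights 1 and 2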
2) Stop httpd on web3 to test the health check
systemctl stop httpd
On web1, the IPVS rules show that web3 has been removed, and the log file shows "Removing service".
[root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.54:80            Route   2      0          2
[root@web1 keepalived]# cat /var/log/keepalived.log |tail -10
Apr 24 17:32:08 web1 Keepalived_vrrp[50391]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 17:32:08 web1 Keepalived_vrrp[50391]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 17:32:08 web1 Keepalived_vrrp[50391]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 17:32:08 web1 Keepalived_vrrp[50391]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 17:40:34 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:37 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:40 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
[root@web1 keepalived]#
Restore httpd on web3
systemctl start httpd
On web1, web3 has been added back to the load-balancing pool; the log shows "HTTP status code success" and "Adding service ... to VS".
[root@web1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.216.200:80 wrr
  -> 192.168.216.53:80            Route   1      0          0
  -> 192.168.216.54:80            Route   2      0          0
[root@web1 keepalived]# cat /var/log/keepalived.log |tail -10
Apr 24 17:32:08 web1 Keepalived_vrrp[50391]: Sending gratuitous ARP on ens33 for 192.168.216.200
Apr 24 17:40:34 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:37 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:40 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: HTTP status code success to [192.168.216.53]:80 url(1).
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Remote Web server [192.168.216.53]:80 succeed on service.
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Adding service [192.168.216.53]:80 to VS [192.168.216.200]:80
[root@web1 keepalived]#
3. Verify the sorry-server
web3/web4
systemctl stop httpd
Check on web1
[root@web1 keepalived]# cat /var/log/keepalived.log |tail -10
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 17:40:43 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: HTTP status code success to [192.168.216.53]:80 url(1).
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Remote Web server [192.168.216.53]:80 succeed on service.
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Adding service [192.168.216.53]:80 to VS [192.168.216.200]:80
Apr 24 17:47:31 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:47:34 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:47:37 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:47:37 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
[root@web1 keepalived]# cat /var/log/keepalived.log |tail -10
Apr 24 17:44:37 web1 Keepalived_healthcheckers[50390]: Adding service [192.168.216.53]:80 to VS [192.168.216.200]:80
Apr 24 17:47:31 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:47:34 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:47:37 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:47:37 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 17:47:40 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.53]:80.
Apr 24 17:47:40 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 17:47:40 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
Apr 24 17:47:40 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 17:47:43 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
[root@web1 keepalived]# cat /var/log/keepalived.log |tail -10
Apr 24 17:47:40 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.53]:80 failed after 3 retry.
Apr 24 17:47:40 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.53]:80 from VS [192.168.216.200]:80
Apr 24 17:47:40 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 17:47:43 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Error connecting server [192.168.216.54]:80.
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Check on service [192.168.216.54]:80 failed after 3 retry.
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Removing service [192.168.216.54]:80 from VS [192.168.216.200]:80
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Lost quorum 1-0=1 > 0 for VS [192.168.216.200]:80
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Adding sorry server [127.0.0.1]:80 to VS [192.168.216.200]:80
Apr 24 17:47:46 web1 Keepalived_healthcheckers[50390]: Removing alive servers from the pool for VS [192.168.216.200]:80
The log shows "Adding sorry server", so the sorry-server has taken over the virtual service.
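As a final client-side check (an assumption, not captured in the original session), curl the VIP; it should now return the sorry page served by nginx on whichever director currently holds the VIP:

curl http://192.168.216.200
# Expected: the "sorry, under maintenance" page from web1 while web1 holds the VIP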