Keepalived High Availability

1.1 What is Keepalived

Keepalived was originally written for the LVS load-balancing software, to manage and monitor the state of each service node in an LVS cluster; high-availability VRRP functionality was added later. So besides managing LVS, Keepalived can also serve as the high-availability solution for other services (for example nginx, Haproxy, MySQL, and so on).

Keepalived implements high availability mainly through VRRP. VRRP is short for Virtual Router Redundancy Protocol; it was designed to eliminate the single point of failure of static routing, so that the network as a whole keeps running even when individual nodes go down.

In short, Keepalived can configure and manage LVS, health-check the nodes behind LVS, and also provide high availability (failover) for system network services.

1.2 Key Functions of Keepalived

Keepalived has three important functions:

  • managing the LVS load-balancing software
  • health-checking the nodes of an LVS cluster
  • providing high availability (failover) for system network services

1.3 How Keepalived Works

(Architecture diagram)

Keepalived high availability relies on VRRP for communication between nodes. VRRP decides master and backup by election: the master is the node with the higher priority, so while it is healthy it holds all the resources and the backup nodes simply stand by. When the master goes down, a backup takes over its resources and serves in its place.

Among the Keepalived instances, only the server acting as master keeps sending VRRP advertisements to tell the backups it is still alive, and as long as those arrive the backup will not preempt the master. When the master becomes unavailable, i.e. the backup stops hearing its advertisements, the backup starts the relevant services and takes over the resources, keeping the business running. At best, the takeover completes in under one second.
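
These advertisements are easy to observe on the wire. For example, with tcpdump on the interface the instances use (ens33 in the lab below; vrrp is the capture-filter shorthand for IP protocol 112), the current master should be seen multicasting one advertisement per second:

[root@backup ~]# tcpdump -nn -i ens33 vrrp
... IP 192.168.169.142 > 224.0.0.18: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20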

2. Keepalived high availability for an nginx load balancer

This lab does not actually enable nginx load balancing; it only demonstrates how to configure the high availability. For a highly available nginx load balancer, see 《Nginx高可用》 (Nginx High Availability).

Environment

OS       Hostname  IP
centos8  master    192.168.169.142
centos8  backup    192.168.169.140

The virtual IP (VIP) for this lab is 192.168.169.250.

2.1 Installing Keepalived

master

//Disable firewalld and SELinux
[root@master ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@master ~]# setenforce 0
[root@master ~]# reboot

//Configure network package repositories
[root@master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@master ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

[root@master ~]# dnf clean all
[root@master ~]# dnf list all
[root@master ~]# dnf -y install epel-release wget gcc gcc-c++

//Install keepalived
[root@master ~]# dnf -y install keepalived

//Install nginx
[root@master ~]# dnf -y install nginx
[root@master ~]# cd /usr/share/nginx/html/
[root@master html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@master html]# echo "master" > index.html 
[root@master html]# systemctl enable --now nginx.service 
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.

backup

//Disable firewalld and SELinux
[root@backup ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@backup ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@backup ~]# setenforce 0
[root@backup ~]# reboot

//Configure network package repositories
[root@backup ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
[root@backup ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

[root@backup ~]# dnf clean all
[root@backup ~]# dnf list all
[root@backup ~]# dnf -y install epel-release wget gcc gcc-c++

//Install keepalived
[root@backup ~]# dnf -y install keepalived

//Install nginx
[root@backup ~]# dnf -y install nginx
[root@backup ~]# cd /usr/share/nginx/html/
[root@backup html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@backup html]# echo "backup" > index.html 
[root@backup html]# systemctl start nginx.service 

Access the nginx service on both hosts and confirm that both respond normally.
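
For example, from master (IPs as in the environment table above):

[root@master ~]# curl 192.168.169.142
master
[root@master ~]# curl 192.168.169.140
backup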

2.2 Configuring Keepalived

2.2.1 Configure the master keepalived

[root@master ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.169.142 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

2.2.2 Configure the backup keepalived

[root@backup ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.169.142 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

//Stop the nginx service on the backup host
[root@backup ~]# systemctl stop nginx.service 

2.2.3 Check where the VIP is

master

[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:17:39:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.142/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe17:398e/64 scope link 
       valid_lft forever preferred_lft forever

backup

[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:06:43:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.140/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe06:4316/64 scope link 
       valid_lft forever preferred_lft forever

2.3 Have Keepalived monitor the nginx load balancer

Write the scripts on master

[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh 
#!/bin/bash
# Count running nginx processes, excluding grep and this script itself;
# if none are left, stop keepalived so the VIP fails over to the backup.
nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
        systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh
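
A quick manual test (assuming nginx is still running: the if-test fails, the script does nothing, and the exit status is 0):

[root@master scripts]# ./check_nginx.sh; echo $?
0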

[root@master scripts]# vim notify.sh
#!/bin/bash
# Called by keepalived on state transitions: start nginx when this node
# becomes master, stop it when the node drops back to backup.
VIP=$2
case "$1" in
  master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
  ;;
  backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
  ;;
  *)
        echo "Usage:$0 master|backup VIP"
  ;;
esac
[root@master scripts]# chmod +x notify.sh

Copy the scripts to backup

[root@backup ~]# mkdir /scripts
[root@backup ~]# cd /scripts
[root@backup scripts]# scp -r root@192.168.169.142:/scripts/* .
[root@backup scripts]# ll
total 8
-rwxr-xr-x 1 root root 139 Oct  8 23:06 check_nginx.sh
-rwxr-xr-x 1 root root 434 Oct  8 23:06 notify.sh

2.4 Add the monitoring scripts to the Keepalived configuration

Configure the master Keepalived

[root@master ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_script nginx_check {
    script "/scripts/check_nginx.sh"
    interval 1
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
    track_script {
        nginx_check
    }
    notify_master "/scripts/notify.sh master 192.168.169.250"
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.169.142 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service
[root@master ~]# systemctl restart nginx

Configure the backup Keepalived

[root@backup ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
    notify_master "/scripts/notify.sh master 192.168.169.250"
    notify_backup "/scripts/notify.sh backup 192.168.169.250"
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.169.142 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service

2.5 Verify the result

//While the nginx service on master is running normally
[root@master scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:17:39:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.142/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe17:398e/64 scope link 
       valid_lft forever preferred_lft forever

[root@master scripts]# curl 192.168.169.250
master

//When the nginx service suddenly stops
[root@master scripts]# systemctl stop nginx.service 
[root@master scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:17:39:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.142/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe17:398e/64 scope link 
       valid_lft forever preferred_lft forever

[root@master scripts]# curl 192.168.169.250
backup

//When nginx comes back up
[root@master scripts]# systemctl start nginx.service 
[root@master scripts]# systemctl start keepalived.service
[root@master scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:17:39:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.142/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe17:398e/64 scope link 
       valid_lft forever preferred_lft forever
[root@master scripts]# curl 192.168.169.250
master

3. Split-Brain

In a high-availability (HA) system, when the heartbeat link between the two nodes breaks, a system that used to act as one coordinated whole splits into two independent individuals. Having lost contact, each assumes the other has failed. The HA software on the two nodes then behaves like a split-brain patient, fighting over the shared resources and the application services, with serious consequences: either the shared resources get carved up and neither side can bring the services up, or both sides bring the services up and read and write the shared storage simultaneously, corrupting data (a typical case being errors in a database's rotating online logs).
  
The commonly agreed countermeasures against HA split-brain are roughly these:

  • Add redundant heartbeat links, e.g. run two lines at once (making the heartbeat itself HA), to reduce the chance of split-brain as far as possible.
  • Use disk locking: the side currently serving locks the shared disk, so when split-brain occurs the other side simply cannot grab it. Disk locks have a real drawback, though: if the side holding the disk never voluntarily unlocks it, the other side never gets it. In practice, if the serving node suddenly dies or crashes, it cannot run the unlock command, and the standby cannot take over the shared resources and services. Hence the "smart" lock design: the serving side engages the disk lock only when it sees all heartbeat links down (it cannot perceive its peer at all), and leaves the disk unlocked the rest of the time.
  • Set up an arbitration mechanism, e.g. a reference IP (such as the gateway IP). When the heartbeat is completely down, both nodes ping the reference IP; if a node cannot reach it, the break is on that node's side. Since its service-facing network link is down along with the heartbeat, starting (or continuing) the application service there would be pointless anyway, so it should give up the contest and let the side that can ping the reference IP run the service. To be extra safe, the side that cannot ping the reference IP can simply reboot itself, thoroughly releasing any shared resources it may still hold. A minimal sketch of such an arbitration check follows this list.
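
Below is a minimal sketch of such an arbitration check, assuming 192.168.169.2 as the reference IP (a hypothetical gateway for this lab) and treating "giving up" as simply stopping keepalived; real deployments would combine this with proper fencing:

#!/bin/bash
# arbitrate.sh -- run when all heartbeat links are down
REF_IP=192.168.169.2            # reference IP (e.g. the gateway); hypothetical value

if ! ping -c 3 -W 1 $REF_IP &> /dev/null; then
    # The reference IP is unreachable, so the break is on our side:
    # release the VIP instead of fighting for it
    systemctl stop keepalived
fi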

3.1 Causes of split-brain

Generally speaking, split-brain occurs for the following kinds of reasons:

  • The heartbeat link between the HA server pair fails, so the two ends cannot communicate normally:
    • the heartbeat cable itself is bad (broken or aged)
    • the NIC or its driver is bad, or there are IP configuration or conflict problems (direct NIC-to-NIC link)
    • a device on the heartbeat path fails (NIC or switch)
    • the arbitration machine has a problem (in arbitration-based designs)
  • An iptables firewall on the HA servers blocks heartbeat traffic
  • The heartbeat NIC address or other settings are misconfigured, so sending heartbeats fails
  • Other misconfigurations, e.g. mismatched heartbeat methods, heartbeat broadcast conflicts, software bugs, etc.

Note:

If the two ends of the same VRRP instance in a Keepalived configuration have different virtual_router_id values, that too will cause split-brain.

The split-brain experiment below uses virtual_router_id to simulate exactly this. A quick way to compare the two ends is shown below.
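
For instance, a quick consistency check on both nodes (the values must match within the same VRRP instance):

[root@master ~]# grep virtual_router_id /etc/keepalived/keepalived.conf
    virtual_router_id 51
[root@backup ~]# grep virtual_router_id /etc/keepalived/keepalived.conf
    virtual_router_id 51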

3.2 Common solutions to split-brain

In real production environments, we can guard against split-brain from the following angles:

  • Use a serial cable and an Ethernet cable at the same time, i.e. two heartbeat links, so that if one breaks the other still carries the heartbeats.
  • Forcibly power off one node when split-brain is detected (this needs special support, e.g. STONITH/fencing hardware): when the backup stops receiving heartbeats, it sends a shutdown command over a separate channel to cut the master's power.
  • Set up good monitoring and alerting for split-brain (email, SMS, on-call staff) so a human can step in and arbitrate the moment the problem occurs, limiting the damage. Baidu's alerting SMS, for example, distinguishes uplink from downlink: the alert goes to the administrator's phone, and the administrator can reply with a digit or a short string that is passed back to the server, which then handles the corresponding fault automatically, shortening recovery time.

    Of course, when implementing an HA solution, judge from the actual business requirements whether such losses are tolerable. For ordinary website workloads, they usually are.

3.3 Monitoring for split-brain

Monitoring for split-brain should run on the backup server; here we do it with a custom zabbix monitor.

We monitor whether the VIP is present on the backup. There are two situations in which the VIP appears on the backup:

  • split-brain has occurred
  • a normal master-to-backup failover

So the monitor only flags that split-brain may have happened; it cannot guarantee that it did, because a normal failover also moves the VIP to the backup.

Lab environment

OS       Hostname  IP               Software to install
centos8  zabbix    192.168.169.139  zabbix_server, zabbix_agentd
centos8  master    192.168.169.142  -
centos8  backup    192.168.169.140  zabbix_agentd

For deploying zabbix_server, see 《zabbix部署》 (zabbix deployment).

The master and backup hosts build directly on the setup from section 2, Keepalived high availability for an nginx load balancer.

3.3.1 Add the backup host to the zabbix server's hosts

//Install zabbix_agentd on backup

//Create the zabbix user
[root@backup ~]# useradd -rMs /sbin/nologin zabbix

//Install dependencies
[root@backup ~]# dnf -y install openssl-devel pcre-devel expat-devel gcc gcc-c++ make

//Extract the zabbix source (the zabbix-6.2.2 tarball is assumed to already be in /usr/src)
[root@backup ~]# cd /usr/src/
[root@backup src]# tar xf zabbix-6.2.2.tar.gz 
[root@backup src]# cd zabbix-6.2.2/
[root@backup zabbix-6.2.2]# ./configure --enable-agent
[root@backup zabbix-6.2.2]# make install

//Edit the zabbix_agentd configuration file
[root@backup zabbix-6.2.2]# cd /usr/local/etc/
[root@backup etc]# vim zabbix_agentd.conf
Server=192.168.169.139		//point to the zabbix_server host
ServerActive=192.168.169.139	//point to the zabbix_server host
Hostname=zabbix_node1

//Start zabbix_agentd
[root@backup etc]# zabbix_agentd 

Open the zabbix web UI at the zabbix host's IP address and add the backup host to the hosts list.

For the procedure, see 《zabbix监控配置》 (zabbix monitoring configuration): create the monitored host and add it to a host group.

3.3.2 Write a script that checks whether the VIP is present on backup

[root@backup etc]# mkdir -p /scripts/zabbix
[root@backup etc]# cd /scripts/zabbix/
[root@backup zabbix]# touch check_keepalived.sh
[root@backup zabbix]# chmod +x check_keepalived.sh 
[root@backup zabbix]# vim check_keepalived.sh
#!/bin/bash

vip_num=$(ip a | grep 192.168.169.250 | wc -l)
if [ $vip_num -eq 0 ];then
    echo 0		# 0 means the VIP is not on the backup node: no problem
else
    echo 1		# 1 means the VIP is on the backup node: problem
fi

//Test the script
[root@backup zabbix]# ./check_keepalived.sh 
0		//result is 0: fine, since backup does not hold the VIP

3.3.3 Configure the monitoring item

//Edit /usr/local/etc/zabbix_agentd.conf
[root@backup zabbix]# vim /usr/local/etc/zabbix_agentd.conf
UnsafeUserParameters=1
UserParameter=check_keepalived,/bin/bash /scripts/zabbix/check_keepalived.sh

//Restart zabbix_agentd
[root@backup zabbix]# pkill zabbix_agentd 
[root@backup zabbix]# zabbix_agentd 
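
//Optionally test the item key locally first: zabbix_agentd -t evaluates one key and exits
//(output format as in zabbix 6.x; the value should be 0 while backup has no VIP)
[root@backup zabbix]# zabbix_agentd -t check_keepalived
check_keepalived              [t|0]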

//Verify on the zabbix_server host that the key can be queried
[root@localhost ~]# zabbix_get -s 192.168.169.140 -k check_keepalived
0
//OK, no problem; it can be used

Check the latest data in the zabbix web UI.

3.3.4 Add a trigger
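
The trigger itself is created in the zabbix web UI. As a sketch, using Zabbix 6.x trigger syntax and the Hostname=zabbix_node1 set above, an expression that fires when the VIP shows up on backup could look like:

last(/zabbix_node1/check_keepalived)=1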

3.3.5 Verify the result

Stop the nginx service on the master host to simulate a keepalived failover:

[root@master ~]# systemctl stop nginx.service 
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:17:39:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.142/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe17:398e/64 scope link 
       valid_lft forever preferred_lft forever

Wait for the alert to fire.

Start master's nginx service and keepalived again:

[root@master ~]# systemctl start nginx.service 
[root@master ~]# systemctl start keepalived.service 
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:17:39:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.142/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe17:398e/64 scope link 
       valid_lft forever preferred_lft forever

The alert is now resolved.

Next, edit master's keepalived configuration and change virtual_router_id to 52 (any value different from backup's will do) to simulate split-brain:

[root@master ~]# vim /etc/keepalived/keepalived.conf 
virtual_router_id 51  ->  virtual_router_id 52

//Restart keepalived
[root@master ~]# systemctl restart keepalived.service 

//Check the IPs: master holds the VIP
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:17:39:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.142/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe17:398e/64 scope link 
       valid_lft forever preferred_lft forever
//and the nginx service is running
[root@master ~]# ss -antl
State          Recv-Q         Send-Q                   Local Address:Port                   Peer Address:Port         Process         
LISTEN         0              128                            0.0.0.0:80                          0.0.0.0:*                            
LISTEN         0              128                            0.0.0.0:22                          0.0.0.0:*                            
LISTEN         0              128                               [::]:80                             [::]:*                            
LISTEN         0              128                               [::]:22                             [::]:*


//Check the IPs on backup: backup also holds the VIP
[root@backup ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:06:43:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.140/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe06:4316/64 scope link 
       valid_lft forever preferred_lft forever

//and its nginx service is running as well
[root@backup ~]# ss -antl
State          Recv-Q         Send-Q                  Local Address:Port                    Peer Address:Port         Process         
LISTEN         0              128                           0.0.0.0:111                          0.0.0.0:*                            
LISTEN         0              128                           0.0.0.0:80                           0.0.0.0:*                            
LISTEN         0              128                           0.0.0.0:22                           0.0.0.0:*                            
LISTEN         0              128                           0.0.0.0:10050                        0.0.0.0:*                            
LISTEN         0              128                              [::]:111                             [::]:*                            
LISTEN         0              128                              [::]:80                              [::]:*                            
LISTEN         0              128                              [::]:22                              [::]:* 

The situation above is split-brain: both nodes hold the VIP and answer for it at the same time.
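
To recover from the simulated split-brain, make virtual_router_id consistent again on master and restart keepalived:

[root@master ~]# sed -i 's/virtual_router_id 52/virtual_router_id 51/' /etc/keepalived/keepalived.conf
[root@master ~]# systemctl restart keepalived.service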

4. Keepalived high availability for a haproxy load balancer

OS       Hostname        IP
centos8  haproxy-master  192.168.169.139
centos8  haproxy-backup  192.168.169.140
centos8  RS1             192.168.169.142
centos8  RS2             192.168.169.145

The virtual IP (VIP) is 192.168.169.250.

All four hosts need firewalld and SELinux disabled:

systemctl disable --now firewalld.service
sed -ri '/^SELINUX=/s/(SELINUX=).*/\1disabled/g' /etc/selinux/config
reboot

4.1 Deploy the httpd service on RS1 and RS2

RS1

[root@RS1 ~]# dnf -y install httpd
[root@RS1 ~]# cd /var/www/html/
[root@RS1 html]# echo RS1 > index.html
[root@RS1 html]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@RS1 html]# curl http://127.0.0.1
RS1

RS2

[root@RS2 ~]# dnf -y install httpd
[root@RS2 ~]# cd /var/www/html/
[root@RS2 html]# echo RS2 > index.html
[root@RS2 html]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@RS2 html]# curl http://127.0.0.1
RS2

4.2 Deploy haproxy on master and backup

haproxy-master

[root@haproxy-master ~]# wget https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-2.6.0.tar.gz/sha512/7bb70bfb5606bbdac61d712bc510c5e8d5a5126ed8827d699b14a2f4562b3bd57f8f21344d955041cee0812c661350cca8082078afe2f277ff1399e461ddb7bb/haproxy-2.6.0.tar.gz

[root@haproxy-master ~]# vim /etc/sysctl.conf 
net.ipv4.ip_nonlocal_bind = 1		# allow binding to the VIP even when this node does not hold it
net.ipv4.ip_forward = 1			# allow packet forwarding
[root@haproxy-master ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

[root@haproxy-master ~]# dnf -y install make gcc pcre-devel bzip2-devel openssl-devel systemd-devel

[root@haproxy-master ~]# useradd -rMs /sbin/nologin haproxy
[root@haproxy-master ~]# tar xf haproxy-2.6.0.tar.gz 
[root@haproxy-master ~]# cd haproxy-2.6.0/
[root@haproxy-master haproxy-2.6.0]# make clean
[root@haproxy-master haproxy-2.6.0]# make -j $(grep 'processor' /proc/cpuinfo |wc -l) \
TARGET=linux-glibc \
USE_OPENSSL=1 \
USE_ZLIB=1 \
USE_PCRE=1 \
USE_SYSTEMD=1
[root@haproxy-master haproxy-2.6.0]# make install PREFIX=/usr/local/haproxy

[root@haproxy-master haproxy-2.6.0]# cd /usr/local/haproxy/
[root@haproxy-master haproxy]# ls
doc  sbin  share
[root@haproxy-master haproxy]# cp /root/haproxy-2.6.0/haproxy /usr/sbin

[root@haproxy-master haproxy]# mkdir -p /etc/haproxy
[root@haproxy-master haproxy]# vim /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 256
 
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
 
frontend http-in
    bind *:80
    default_backend zic
 
backend zic
    server web01 192.168.169.142:80
    server web02 192.168.169.145:80
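
Before wiring haproxy into systemd, the configuration can be sanity-checked with haproxy's check mode (-c parses the config and exits; paths as installed above):

[root@haproxy-master haproxy]# /usr/local/haproxy/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg
[root@haproxy-master haproxy]# echo $?
0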

[root@haproxy-master haproxy]# vim /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
 
[Service]
ExecStartPre=/usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg   -c -q
ExecStart=/usr/local/haproxy/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg  -p /var/run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
 
[Install]
WantedBy=multi-user.target
[root@haproxy-master haproxy]# systemctl daemon-reload 
[root@haproxy-master haproxy]# systemctl enable --now haproxy.service 
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.

[root@haproxy-master haproxy]# curl 192.168.169.139
RS1
[root@haproxy-master haproxy]# curl 192.168.169.139
RS2

haproxy-backup

[root@haproxy-backup ~]# wget https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-2.6.0.tar.gz/sha512/7bb70bfb5606bbdac61d712bc510c5e8d5a5126ed8827d699b14a2f4562b3bd57f8f21344d955041cee0812c661350cca8082078afe2f277ff1399e461ddb7bb/haproxy-2.6.0.tar.gz

[root@haproxy-backup ~]# vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
[root@haproxy-backup ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

[root@haproxy-backup ~]# dnf -y install make gcc pcre-devel bzip2-devel openssl-devel systemd-devel

[root@haproxy-backup ~]# tar xf haproxy-2.6.0.tar.gz 
[root@haproxy-backup ~]# cd haproxy-2.6.0/
[root@haproxy-backup haproxy-2.6.0]# make clean 
[root@haproxy-backup haproxy-2.6.0]# make -j $(grep 'processor' /proc/cpuinfo |wc -l) \
> TARGET=linux-glibc \
> USE_OPENSSL=1 \
> USE_ZLIB=1 \
> USE_PCRE=1 \
> USE_SYSTEMD=1
[root@haproxy-backup haproxy-2.6.0]# make install PREFIX=/usr/local/haproxy

[root@haproxy-backup haproxy-2.6.0]# cd /usr/local/haproxy/
[root@haproxy-backup haproxy]# ls
doc  sbin  share
[root@haproxy-backup haproxy]# cp /root/haproxy-2.6.0/haproxy /usr/sbin/

[root@haproxy-backup haproxy]# mkdir -p /etc/haproxy
[root@haproxy-backup haproxy]# vim /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 256
 
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
 
frontend http-in
    bind *:80
    default_backend zic
 
backend zic
    server web01 192.168.169.142:80
    server web02 192.168.169.145:80
    
[root@haproxy-backup haproxy]# vim /usr/lib/systemd/system/haproxy.service		//same unit file content as on haproxy-master
[root@haproxy-backup haproxy]# systemctl daemon-reload 
[root@haproxy-backup haproxy]# systemctl start haproxy.service 
[root@haproxy-backup haproxy]# curl 192.168.169.140
RS1
[root@haproxy-backup haproxy]# curl 192.168.169.140
RS2
[root@haproxy-backup haproxy]# systemctl stop haproxy.service 

4.3 Installing Keepalived

haproxy-master

//Install Keepalived
[root@haproxy-master haproxy]# dnf -y install keepalived

haproxy-backup

//Install Keepalived
[root@haproxy-backup haproxy]# dnf -y install keepalived

4.4 Configuring Keepalived

haproxy-master

[root@haproxy-master haproxy]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
 
global_defs {
   router_id lb01
}
 
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.169.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@haproxy-master haproxy]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

haproxy-backup

[root@haproxy-backup haproxy]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
 
global_defs {
   router_id lb02
}
 
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.169.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@haproxy-backup haproxy]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

Check where the VIP is

master

[root@haproxy-master haproxy]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f1:77:ce brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.139/24 brd 192.168.169.255 scope global dynamic noprefixroute ens33
       valid_lft 996sec preferred_lft 996sec
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ac0:aa7e:f1b9:248e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

backup

[root@haproxy-backup haproxy]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:06:43:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.140/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe06:4316/64 scope link 
       valid_lft forever preferred_lft forever

//The VIP is on haproxy-master

4.5 Have Keepalived monitor the haproxy load balancer

4.5.1 Write the scripts

haproxy-master

[root@haproxy-master haproxy]# mkdir /scripts
[root@haproxy-master haproxy]# cd /scripts/
[root@haproxy-master scripts]# touch check_haproxy.sh
[root@haproxy-master scripts]# chmod +x check_haproxy.sh
[root@haproxy-master scripts]# vim check_haproxy.sh 
#!/bin/bash
# Count running haproxy processes, excluding grep and this script itself;
# if none are left, stop keepalived so the VIP fails over to the backup.
haproxy_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bhaproxy\b'|wc -l)
if [ $haproxy_status -lt 1 ];then
    systemctl stop keepalived
fi

[root@haproxy-master scripts]# touch notify.sh
[root@haproxy-master scripts]# chmod +x notify.sh 
[root@haproxy-master scripts]# vim notify.sh 
#!/bin/bash
# Called by keepalived on state transitions: start haproxy when this node
# becomes master, stop it when the node drops back to backup.
VIP=$2
case "$1" in
  master)
        haproxy_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bhaproxy\b'|wc -l)
        if [ $haproxy_status -lt 1 ];then
            systemctl start haproxy
        fi
  ;;
  backup)
        haproxy_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bhaproxy\b'|wc -l)
        if [ $haproxy_status -gt 0 ];then
            systemctl stop haproxy
        fi
  ;;
  *)
        echo "Usage:$0 master|backup VIP"
  ;;
esac

haproxy-backup

[root@haproxy-backup haproxy]# mkdir /scripts
[root@haproxy-backup haproxy]# cd /scripts/
[root@haproxy-backup scripts]# scp -r root@192.168.169.139:/scripts/* .
[root@haproxy-backup scripts]# ll
total 8
-rwxr-xr-x 1 root root 148 Oct  9 23:46 check_haproxy.sh
-rwxr-xr-x 1 root root 450 Oct  9 23:46 notify.sh

4.5.2 Add the monitoring script to the Keepalived configuration

haproxy-master

[root@haproxy-master scripts]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb01
}

vrrp_script haproxy_check {
    script "/scripts/check_haproxy.sh"
    interval 1
    weight -20
}
 
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
    track_script {
        haproxy_check
    }
    notify_master "/scripts/notify.sh master 192.168.169.250"
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.169.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@haproxy-master scripts]# systemctl restart keepalived.service 

haproxy-backup

[root@haproxy-backup scripts]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb02
}
 
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zzd123...
    }
    virtual_ipaddress {
        192.168.169.250
    }
    notify_master "/scripts/notify.sh master 192.168.169.250"
    notify_backup "/scripts/notify.sh backup 192.168.169.250"
}
virtual_server 192.168.169.250 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.169.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.169.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
} 
[root@haproxy-backup scripts]# systemctl restart keepalived.service

4.5.3 Verify the result

While master's haproxy service is running normally

//The VIP is on the haproxy-master host
[root@haproxy-master scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f1:77:ce brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.139/24 brd 192.168.169.255 scope global dynamic noprefixroute ens33
       valid_lft 1426sec preferred_lft 1426sec
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ac0:aa7e:f1b9:248e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

[root@haproxy-master scripts]# curl http://192.168.169.250
RS1
[root@haproxy-master scripts]# curl http://192.168.169.250
RS2

When master's haproxy service fails

//Stop haproxy
[root@haproxy-master scripts]# systemctl stop haproxy.service 

//The VIP is no longer on haproxy-master
[root@haproxy-master scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f1:77:ce brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.139/24 brd 192.168.169.255 scope global dynamic noprefixroute ens33
       valid_lft 1142sec preferred_lft 1142sec
    inet6 fe80::ac0:aa7e:f1b9:248e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
//On haproxy-backup, the VIP is now present
[root@haproxy-backup scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:06:43:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.140/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe06:4316/64 scope link 
       valid_lft forever preferred_lft forever

//and the haproxy service was started automatically
[root@haproxy-backup scripts]# systemctl status haproxy.service 
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-10-09 23:59:23 CST; 1min 49s ago
  Process: 175161 ExecStartPre=/usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q (code=exited, status=0/SUCCESS)
 Main PID: 175164 (haproxy)
    Tasks: 2 (limit: 5770)
   Memory: 20.3M
   CGroup: /system.slice/haproxy.service
           ├─175164 /usr/local/haproxy/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
           └─175166 /usr/local/haproxy/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid

//Access the VIP
[root@haproxy-backup scripts]# curl http://192.168.169.250
RS1
[root@haproxy-backup scripts]# curl http://192.168.169.250
RS2

Start haproxy-master's haproxy service and Keepalived again

[root@haproxy-master scripts]# systemctl start haproxy.service 
[root@haproxy-master scripts]# systemctl start keepalived.service

//The VIP has returned to master, because Keepalived preempts by default: the higher-priority MASTER takes the VIP back
[root@haproxy-master scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:f1:77:ce brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.139/24 brd 192.168.169.255 scope global dynamic noprefixroute ens33
       valid_lft 1785sec preferred_lft 1785sec
    inet 192.168.169.250/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ac0:aa7e:f1b9:248e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
       
[root@haproxy-master scripts]# curl http://192.168.169.250
RS1
[root@haproxy-master scripts]# curl http://192.168.169.250
RS2
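
If automatic fail-back is not wanted, keepalived also supports non-preemptive mode. A sketch (nopreempt requires state BACKUP on both nodes; the priorities still differ):

vrrp_instance VI_1 {
    state BACKUP        # nopreempt only works with an initial state of BACKUP
    nopreempt           # do not take the VIP back after recovering
    interface ens33
    virtual_router_id 51
    priority 100        # keep distinct priorities on the two nodes
    advert_int 1
    ...
}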

Meanwhile, the haproxy service on backup has been stopped automatically, and its Keepalived has gone back to being the standby node

[root@haproxy-backup scripts]# systemctl status haproxy.service 
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
   
[root@haproxy-backup scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:06:43:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.169.140/24 brd 192.168.169.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe06:4316/64 scope link 
       valid_lft forever preferred_lft forever