Linux: high-availability load balancing with keepalived + LVS
LVS provides powerful load balancing, but it has no health checking for the director (DS) nodes themselves, nor for the backend real servers (RS). keepalived exists precisely to monitor the state of each service node in a high-availability cluster: when a node fails or misbehaves, keepalived detects it and removes it from the cluster, and when the failed node recovers it is automatically added back.
Deployment environment
LVS + keepalived master node  DIP: 172.30.100.111  VIP: 172.30.100.1
LVS + keepalived backup node  DIP: 172.30.100.125  VIP: 172.30.100.1
nginx1: 172.30.100.126
nginx2: 172.30.100.127
LVS master node
The keepalived configuration file has parameters for describing the LVS setup, so there is no need to add the virtual IP, the ipvs rules, and so on by hand.
! Configuration File for keepalived

# Global definitions
global_defs {
    notification_email {
        root@localhost.localdomain
    }
    notification_email_from keepalived@msfpay.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.17.17.17
}

# VRRP instance (defines the VIP)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 168
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3333
    }
    virtual_ipaddress {
        172.30.100.1
    }
}

# LVS rules (scheduler, forwarding mode) and real server definitions
virtual_server 172.30.100.1 80 {
    delay_loop 5
    lb_algo wrr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 172.30.100.105 80

    real_server 172.30.100.126 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 2
            delay_before_retry 1
        }
    }

    real_server 172.30.100.127 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 2
            delay_before_retry 1
        }
    }
}
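Once this file is in place, starting keepalived is enough to bring up both the VIP and the ipvs rules automatically. A quick way to confirm that (a sketch, assuming a systemd host with the ipvsadm tool installed):

# Start keepalived on the master; it creates the VIP and the ipvs rules itself
systemctl enable --now keepalived

# The VIP should now be bound to eth0
ip addr show eth0 | grep 172.30.100.1

# The virtual server should be listed with wrr/DR and both real servers
ipvsadm -Ln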
LVS backup node
The backup node's configuration mainly changes state to BACKUP and sets its priority lower than the master's.
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost.localdomain
    }
    notification_email_from keepalived@msfpay.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_2
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.17.17.17
}

vrrp_instance VI_1 {
    state BACKUP        # set as the backup node
    interface eth0
    virtual_router_id 168
    priority 50         # priority lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 3333
    }
    virtual_ipaddress {
        172.30.100.1
    }
}

virtual_server 172.30.100.1 80 {
    delay_loop 5
    lb_algo wrr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 172.30.100.105 80

    real_server 172.30.100.127 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 172.30.100.126 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
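With both nodes running, you can also watch the VRRP advertisements on the multicast group configured above to confirm which node is currently master. One possible check, assuming tcpdump is available on a director:

# VRRP advertisements go to the group set in vrrp_mcast_group4; only the
# current master should be sending them, roughly one per advert_int second
tcpdump -nn -i eth0 host 224.17.17.17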
nginx servers
This test uses DR mode, so each real server must have the VIP configured and the kernel ARP parameters adjusted. For a detailed DR-mode walkthrough, see https://www.cnblogs.com/houyongchong/p/10535993.html. The backend nginx services must be reachable before proceeding.
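As a rough sketch (the VIP comes from this environment; the article above covers the details), the usual DR-mode preparation on each nginx host looks like this:

#!/bin/bash
# DR-mode real server setup, to be run on nginx1 and nginx2
VIP=172.30.100.1

# Suppress ARP replies/announcements for the VIP so that on the LAN
# only the director answers ARP requests for 172.30.100.1
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2

# Bind the VIP to the loopback so the real server accepts packets
# addressed to the VIP that the director forwards to it
ip addr add ${VIP}/32 dev lo label lo:0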
Verification
1. Verifying keepalived's high availability for the LVS service
With the configuration complete and all services running normally, the keepalived master should be the 111 machine. Manually stop the keepalived service on the master to simulate a master crash: the VIP is transferred to the backup machine automatically, and the nginx service remains reachable through it.
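In practice the test can be as simple as the following (a sketch, assuming keepalived is managed by systemd):

# On the master (172.30.100.111): confirm it holds the VIP, then stop keepalived
ip addr show eth0 | grep 172.30.100.1
systemctl stop keepalived    # simulate a master crash

# On the backup (172.30.100.125): the VIP should appear within a few seconds
ip addr show eth0 | grep 172.30.100.1

# From a client: the service should still answer through the VIP
curl http://172.30.100.1/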
2. Verifying keepalived's health checks of the backend services
Restore normal operation and look at the ipvs rules that keepalived configured automatically; in the healthy state there are two real servers. Then manually stop the nginx1 service to simulate a real server crash: the ipvs rules now show that the crashed real server has been removed. Finally, restore the nginx1 service and check the ipvs rules again: the removed nginx1 has been added back.
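A sketch of this check with the ipvsadm tool (assuming nginx is managed by systemd on the real servers):

# On the director: both real servers are listed while everything is healthy
ipvsadm -Ln

# On nginx1 (172.30.100.126): stop nginx so the HTTP_GET check starts failing
systemctl stop nginx

# Back on the director: after delay_loop plus the retries, 172.30.100.126
# disappears from the rules
ipvsadm -Ln

# Bring nginx1 back up; keepalived re-adds it once the check passes again
systemctl start nginx
ipvsadm -Ln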