Install HAProxy
yum -y install gcc pcre-devel openssl-devel
cd /usr/local/src
tar zxf haproxy-1.6.8.tar.gz
mv haproxy-1.6.8 haproxy
cd haproxy
uname -r
make TARGET=linux2628 USE_OPENSSL=1 ADDLIB=-lz PREFIX=/usr/local/haproxy   # build for the chosen target
make install PREFIX=/usr/local/haproxy                                     # install HAProxy into the given directory
Build notes (from the HAProxy README):

1. To build haproxy, you will need:
   - GNU make. Neither Solaris nor OpenBSD's make work with the GNU Makefile. If you get many syntax errors when running "make", you may want to retry with "gmake", which is the name commonly used for GNU make on BSD systems.
   - GCC between 2.95 and 4.8. Others may work, but are not tested.
   - GNU ld
   Also, you might want to build with libpcre support, which will provide a very efficient regex implementation and will also fix some badness in Solaris' one.

2. To build haproxy, you have to choose your target OS amongst the following ones and assign it to the TARGET variable:
   - linux22 for Linux 2.2
   - linux24 for Linux 2.4 and above (default)
   - linux24e for Linux 2.4 with support for a working epoll (> 0.21)
   - linux26 for Linux 2.6 and above
   - linux2628 for Linux 2.6.28, 3.x, and above (enables splice and tproxy)
   - solaris for Solaris 8 or 10 (others untested)
   - freebsd for FreeBSD 5 to 10 (others untested)
   - netbsd for NetBSD
   - osx for Mac OS/X
   - openbsd for OpenBSD 3.1 and above
   - aix51 for AIX 5.1
   - aix52 for AIX 5.2
   - cygwin for Cygwin
   - generic for any other OS or version
   - custom to manually adjust every setting

3. You may also choose your CPU to benefit from some optimizations. This is particularly important on UltraSparc machines. For this, you can assign one of the following choices to the CPU variable:
   - i686 for Intel PentiumPro, Pentium 2 and above, AMD Athlon
   - i586 for Intel Pentium, AMD K6, VIA C3
   - ultrasparc: Sun UltraSparc I/II/III/IV processor
   - native: use the build machine's specific processor optimizations. Use with extreme care, and never in virtualized environments (known to break).
   - generic: any other processor or no CPU-specific optimization (default)
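The TARGET table above can be sketched as a small shell helper. This is only an illustration: the function name pick_target is ours, and the mapping covers just the Linux rows (everything else falls back to generic).

```shell
# Map a Linux kernel release string to a HAProxy build TARGET,
# following the table above. Order matters: the 2.6.28+ patterns
# must be tried before the generic 2.6.* pattern.
pick_target() {
  case "$1" in
    2.2.*)                             echo linux22 ;;
    2.6.2[89]*|2.6.3[0-9]*|[3-9].*)    echo linux2628 ;;
    2.6.*)                             echo linux26 ;;
    2.4.*)                             echo linux24 ;;
    *)                                 echo generic ;;
  esac
}

pick_target "$(uname -r)"
```

The result can then be passed straight to make, e.g. `make TARGET=$(pick_target "$(uname -r)") USE_OPENSSL=1`.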
Configuration
cd /usr/local/haproxy
mkdir conf
mkdir logs
vi conf/haproxy.cfg
Configuration file:
global
    maxconn 51200
    chroot /usr/local/haproxy
    uid 99
    gid 99
    daemon
    #quiet
    nbproc 1                          # number of worker processes
    pidfile /usr/local/haproxy/logs/haproxy.pid
    tune.ssl.default-dh-param 2048

defaults
    mode http                         # default mode { tcp|http|health }: tcp is layer 4, http is layer 7, health only returns OK
    #retries 2                        # consider a server unavailable after two failed connections; can also be set per server below
    option redispatch                 # when the server bound to a serverId goes down, redirect requests to other healthy servers
    option abortonclose               # under heavy load, drop requests that have been sitting in the queue for a long time
    timeout connect 5000ms            # connect timeout
    timeout client 30000ms            # client timeout
    timeout server 30000ms            # server timeout
    #timeout check 2000               # health-check timeout
    log 127.0.0.1 local0 err          # level: [err warning info debug]
    balance roundrobin                # load-balancing algorithm
    # option httplog                  # log format: use httplog
    # option httpclose                # actively close the HTTP connection after each request instead of keeping it alive
    # option dontlognull
    # option forwardfor               # needed if backend servers want the client's real IP; they can then read it from the HTTP headers

listen admin_stats
    bind 0.0.0.0:8888                 # listening port
    option httplog                    # use the HTTP log format
    stats refresh 30s                 # stats page auto-refresh interval
    stats uri /stats                  # stats page URL
    stats realm Haproxy\ Manager      # text shown in the stats page login prompt
    stats auth admin:123456           # stats page username and password
    #stats hide-version               # hide the HAProxy version on the stats page

listen sqlserver
    bind *:1433
    mode tcp
    balance roundrobin
    server WN4_1433 192.168.100.21:1433 weight 1 maxconn 6000 check port 1433 inter 2000 rise 2 fall 2
    server WN5_1433 192.168.100.22:1433 weight 1 maxconn 6000 check port 1433 inter 2000 rise 2 fall 2

frontend https_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/ssl.pem
    acl ssl hdr_reg(host) -i ^(login.cnblogs.com|login1.cnblogs.com|cloud1.cnblogs.com|upload1.cnblogs.com|download1.cnblogs.com)$
    redirect scheme https code 301 if !{ ssl_fc } ssl
    mode http
    option httpclose
    option forwardfor
    acl host_login1_cnblogs.com hdr_beg(host) -i login.cnblogs.com login1.cnblogs.com
    use_backend login1_cnblogs.com if host_login1_cnblogs.com
    acl host_cloud1_cnblogs.com hdr_beg(host) -i cloud1.cnblogs.com
    use_backend cloud1_cnblogs.com if host_cloud1_cnblogs.com
    acl host_upload1_cnblogs.com hdr_beg(host) -i upload1.cnblogs.com
    use_backend upload1_cnblogs.com if host_upload1_cnblogs.com
    acl host_download1_cnblogs.com hdr_beg(host) -i download1.cnblogs.com
    use_backend download1_cnblogs.com if host_download1_cnblogs.com

backend login1_cnblogs.com
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server WN1_8007 192.168.100.11:8007 cookie WN1_8007 weight 1 minconn 1 maxconn 3 check inter 40000
    server WN2_8007 192.168.100.12:8007 cookie WN2_8007 weight 1 minconn 1 maxconn 3 check inter 40000

backend cloud1_cnblogs.com
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server WN1_8059 192.168.100.11:8059 cookie WN1_8059 weight 1 minconn 1 maxconn 3 check inter 40000
    server WN2_8059 192.168.100.12:8059 cookie WN2_8059 weight 1 minconn 1 maxconn 3 check inter 40000

backend upload1_cnblogs.com
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server WN1_36003 192.168.100.11:36003 cookie WN1_36003 weight 1 minconn 1 maxconn 3 check inter 40000
    server WN2_36003 192.168.100.12:36003 cookie WN2_36003 weight 1 minconn 1 maxconn 3 check inter 40000

backend download1_cnblogs.com
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server LN1_36004 192.168.100.202:36004 cookie LN1_36004 weight 1 minconn 1 maxconn 3 check inter 40000

(Notes on fixes to the flattened original: the space in "stats realm" must be escaped; the mismatched "login1_100mubiao.com" backend name now matches "login1_cnblogs.com"; with "cookie SERVERID insert", each server line needs its own cookie value for persistence to work; the duplicate "check" keyword on server lines was dropped; and "option httpclose" was removed from the tcp-mode sqlserver listener, where it has no effect.)
Start the service
Start haproxy:
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg

Restart (graceful reload) haproxy:
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg -sf `cat /usr/local/haproxy/logs/haproxy.pid`

Stop haproxy:
ps aux | grep haproxy
kill -9 16795    # 16795 is the PID reported by ps on this machine; a plain "kill <pid>" (SIGTERM) is the cleaner choice
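Since the config already writes a pidfile, stopping can be driven from it instead of grepping ps and using kill -9. A small sketch (the function name stop_haproxy is ours; SIGTERM lets haproxy shut down cleanly, while -9 would drop live connections):

```shell
# Stop HAProxy via its pidfile rather than "ps | grep" + "kill -9".
stop_haproxy() {
  pidfile="$1"
  if [ -f "$pidfile" ]; then
    # SIGTERM requests a clean shutdown; remove the pidfile once delivered
    kill -TERM "$(cat "$pidfile")" && rm -f "$pidfile"
  else
    echo "no pidfile at $pidfile" >&2
    return 1
  fi
}

# Usage with the paths from this install:
# stop_haproxy /usr/local/haproxy/logs/haproxy.pid
```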
Add it as a system service
vi /usr/lib/systemd/system/haproxy.service

[Unit]
Description=Haproxy
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/haproxy/logs/haproxy.pid
ExecStart=/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
ExecReload=/bin/sh -c '/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg -sf $(cat /usr/local/haproxy/logs/haproxy.pid)'

[Install]
WantedBy=multi-user.target

Note: systemd does not run Exec lines through a shell, so the original backtick substitution in ExecReload would never be expanded; wrapping the reload in /bin/sh -c fixes that. PIDFile is needed so that, with Type=forking, systemd can track the daemonized main process.
Verification:
Browse to http://ip:8888/stats; the statistics page should appear.
Browse to http://ip:80 to confirm the proxying works.
Health checks:
1. Health check against the listening port
With this method, HAProxy only checks whether the backend server's port accepts connections; it does not guarantee that the service itself is actually usable.
listen http_proxy 0.0.0.0:80
    mode http
    cookie SERVERID
    balance roundrobin
    server web1 192.168.1.1:80 cookie server01 check
    server web2 192.168.1.2:80 cookie server02 check inter 500 rise 1 fall 2

(The flattened original also carried "option httpchk" here; it is dropped in this variant, since with it the check would be an HTTP request rather than the plain port check this section describes.)
2. Health check by fetching a URI
With this method, HAProxy issues a GET for a web page on the backend server, which is usually a good indicator of the backend service's availability.
listen http_proxy 0.0.0.0:80
    mode http
    cookie SERVERID
    balance roundrobin
    option httpchk GET /index.html
    server web1 192.168.1.1:80 cookie server01 check
    server web2 192.168.1.2:80 cookie server02 check inter 500 rise 1 fall 2
3. Health check by matching request header information
This method serves more advanced, fine-grained monitoring needs: the health-check request carries specific header information (here a Host header), so a particular virtual host on the backend is probed.
listen http_proxy 0.0.0.0:80
    mode http
    cookie SERVERID
    balance roundrobin
    option httpchk HEAD /index.jsp HTTP/1.1\r\nHost:\ www.xxx.com
    server web1 192.168.1.1:80 cookie server01 check
    server web2 192.168.1.2:80 cookie server02 check inter 500 rise 1 fall 2
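By default an httpchk check passes on any non-error HTTP response. When the requirement is stricter, the expected reply can be pinned down with http-check expect (available since HAProxy 1.5). A sketch, reusing the listener above and assuming an exact 200 is wanted:

```haproxy
listen http_proxy 0.0.0.0:80
    mode http
    balance roundrobin
    option httpchk HEAD /index.jsp HTTP/1.1\r\nHost:\ www.xxx.com
    http-check expect status 200     # mark the server up only on an exact 200 reply
    server web1 192.168.1.1:80 check
    server web2 192.168.1.2:80 check inter 500 rise 1 fall 2
```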
Persistent (sticky) connections with HAProxy:
1. The source scheduling algorithm
   HAProxy hashes the client IP and pins it to a fixed real server (similar to nginx's ip_hash directive).
   Directive: balance source

2. Cookie identification
   HAProxy inserts its own server-identifying cookie ID (or adds a prefix to an existing cookie) into the cookies the web server sends to the client.
   Example directive: cookie SESSION_COOKIE insert indirect nocache

3. Session identification
   HAProxy stores the session produced by the backend server, together with the backend server's identifier, in an internal table. On each client request it first looks the session up in this table, then dispatches to the matching backend server.
   Directive: appsession <cookie> len <length> timeout <holdtime>
   (Note: appsession was removed in HAProxy 1.6, the version built above; stick tables are its replacement.)
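A minimal sketch of the stick-table approach, which covers the same source-IP stickiness in current HAProxy versions (the backend and server names here are made up for illustration):

```haproxy
backend app_servers
    mode http
    balance roundrobin
    stick-table type ip size 200k expire 30m   # remember up to 200k client IPs for 30 minutes
    stick on src                               # route a returning client IP to the same server
    server app1 192.168.1.1:80 check
    server app2 192.168.1.2:80 check
```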
Load balancing a MySQL read cluster
Only read requests can be balanced this way; scheduling writes across servers like this is not appropriate.
frontend mysqlservers
    bind *:3306
    mode tcp
    default_backend myservs

backend myservs
    mode tcp
    balance leastconn
    option mysql-check user root
    server myserv1 172.16.100.11:3306 check
    server myserv2 172.16.100.12:3306 check

(Fixes to the flattened original: the missing space in "frontend mysqlservers", "mode tcp" so MySQL traffic is not parsed as HTTP, and the directive name "option mysql-check". Note that mysql-check logs in to the servers as the given user, so that account must exist on the MySQL servers and accept connections from the HAProxy host.)
Load balancing a SQL Server read cluster
Again, only read requests can be balanced this way; scheduling writes across servers like this is not appropriate.
listen sqlserver
    bind *:1433
    mode tcp
    balance roundrobin
    server WN4_1433 192.168.100.21:1433 weight 1 maxconn 6000 check port 1433 inter 2000 rise 2 fall 2
    server WN5_1433 192.168.100.22:1433 weight 1 maxconn 6000 check port 1433 inter 2000 rise 2 fall 2

(The original also listed "option httpclose"; it is dropped here because it has no effect in tcp mode.)
Cookie-based persistence
Whether the cookie directive appears in a listen or in a backend section, the server lines must reference that cookie: each client gets a session/server ID inserted (or prefixed) into the cookie, and subsequent requests from that client are then scheduled according to this ID.
listen webfarm
    bind 192.168.0.99:80
    mode http
    stats enable
    stats auth someuser:somepassword            # stats page username and password
    balance roundrobin                          # scheduling algorithm
    cookie JSESSIONID prefix                    # cookie-based persistence: prefix the application's JSESSIONID
    option httpclose
    option forwardfor                           # add the client IP to the request headers
    option httpchk HEAD /check.txt HTTP/1.0     # check with HEAD /check.txt over HTTP/1.0; no Host header, so the default host is probed rather than a virtual host
    server webA 192.168.0.102:80 cookie A check # cookie value "A" identifies this server for persistence
    server webB 192.168.0.103:80 cookie B check