Load Balancing with nginx

 

Background:


 

  Clustering is the usual way for websites to cope with high concurrency and massive data. When a single server runs out of processing power or storage, don't try to swap in a more powerful machine: for a large site, no server, however powerful, can keep up with continually growing business demand. The better approach is to add another server to share the existing server's traffic and storage load. A load-balancing server distributes incoming browser requests to any server in the application cluster; as users grow, you simply add more application servers, so application-server load never becomes the bottleneck of the whole site.
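The dispatching idea can be sketched in a few lines of Python (a toy illustration only; the backend addresses match the ones used later in this post):

```python
from itertools import cycle

# The two web backends used later in this post.
servers = ["192.168.11.57:5000", "192.168.11.98:5000"]

def make_dispatcher(pool):
    """Return a function that hands out backends round-robin,
    the simplest load-balancing policy."""
    backends = cycle(pool)
    return lambda: next(backends)

dispatch = make_dispatcher(servers)
# Four incoming requests are spread evenly across the pool.
assigned = [dispatch() for _ in range(4)]
```

Scaling out is then just appending another address to the pool.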

The example below is a simple demonstration of load balancing with nginx.

 

Environment Setup


 

192.168.11.25 nginx load-balancing server

192.168.11.200 nginx load-balancing server

192.168.11.57 web server

192.168.11.98 web server

 

 

Web Deployment: Method 1


 

The web tier uses the simplest possible Flask app:

# app.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date    : 2020-02-14 16:12:15
# @Author  : Your Name (you@example.org)
# @Link    : http://example.org
# @Version : $Id$

import socket

from flask import Flask

app = Flask(__name__)


def get_local_ip():
    """Discover this host's outbound IP by opening a UDP socket
    toward a public address (no packets are actually sent)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(('8.8.8.8', 53))
        return s.getsockname()[0]
    finally:
        s.close()


ip = get_local_ip()


@app.route('/')
def hello_world():
    # Return this server's IP so we can see which backend served us.
    return ip


if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True, port=5000)

Start the app on both web servers:

python3 app.py

* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 307-743-873

Web Deployment: Method 2


 

The web tier can also be deployed as a container, which is the most convenient option 😜

# Dockerfile
FROM python:3.6
LABEL maintainer="web demo"
RUN pip install flask -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
ADD app.py /app.py
CMD ["python", "/app.py"]

Build the web demo image:

docker build -t flask_demo .

 

Start the web demo:

docker run -itd --name flask_demo -p 5000:5000 --network host flask_demo:latest

(With --network host, the -p mapping is ignored; the container binds port 5000 on the host directly.)

 


 

 

Once it starts successfully, open the app in a browser; on success the page shows the serving backend's IP.

 

 

Setting Up Load Balancing with nginx


 

 Deploy nginx on the 192.168.11.25 and 192.168.11.200 servers:

docker run -itd --name nginx_for_ha -p 8000:80 nginx:latest

Enter the nginx container and edit nginx.conf:

root@ubuntu:/home/flask_app# cat nginx.conf 

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;
    upstream flask_pool 
    {
        server 192.168.11.98:5000 weight=4 max_fails=2 fail_timeout=30s;
        server 192.168.11.57:5000 weight=4 max_fails=2 fail_timeout=30s;
    }
    server {
        listen    80;
        server_name    localhost;
        location / {
            proxy_pass http://flask_pool;    # forward requests to the Flask pool
        }
    }
    include /etc/nginx/conf.d/*.conf;
}
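When no balancing method is named, nginx uses weighted round-robin for the upstream pool. The sketch below is a rough Python illustration of the smooth weighted round-robin idea (the function and its bookkeeping are illustrative, not nginx's actual code):

```python
def smooth_wrr(servers, n):
    """Pick n backends from servers (a dict of address -> weight):
    each round, every backend's running score grows by its weight;
    the highest score wins and is debited by the total weight."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        for name, weight in servers.items():
            current[name] += weight
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# With equal weights, as in the config above, picks simply alternate.
pool = {"192.168.11.98:5000": 4, "192.168.11.57:5000": 4}
picks = smooth_wrr(pool, 4)
```

Raising one server's weight skews the share of requests it receives accordingly.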

After editing, restart nginx:

docker restart nginx_for_ha

Open 192.168.11.25:8000 in a browser to verify that load balancing works; if it does, requests are forwarded evenly to .98 and .57.

 

 Open 192.168.11.200 as well (lazily reusing the nginx already there, on port 80 😄) to verify; requests should again be forwarded evenly to .98 and .57.

When one backend, say .57, is stopped, subsequent requests are forwarded only to .98 (to my mind, this is where the high availability shows).
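This failover comes from the `max_fails=2 fail_timeout=30s` settings in the upstream block: a server that fails twice within the window is skipped for 30 s. A toy Python model of this passive health check (nginx's real bookkeeping differs in detail):

```python
import time

class Upstream:
    """Toy model of nginx's passive health check: after max_fails
    failures, the server is skipped until fail_timeout elapses."""
    def __init__(self, addr, max_fails=2, fail_timeout=30):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_since = None

    def available(self, now=None):
        now = time.time() if now is None else now
        if self.down_since is None:
            return True
        if now - self.down_since >= self.fail_timeout:
            # Window elapsed: give the server another chance.
            self.down_since = None
            self.fails = 0
            return True
        return False

    def report_failure(self, now=None):
        now = time.time() if now is None else now
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_since = now

srv = Upstream("192.168.11.57:5000")
srv.report_failure(now=0)
srv.report_failure(now=1)  # second failure: marked down
```

After the timeout nginx retries the server, so a recovered backend rejoins the pool automatically.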

At this point both nginx servers work fine, but there is no master/backup relationship between them; they are peers of equal standing. Once keepalived is configured, a master/backup distinction appears.

 

Master/Backup with keepalived


 

Deploy keepalived on the same two machines (shown here via Docker):

192.168.11.25 nginx load-balancing server + keepalived (backup)

# docker-compose.yml for keepalived
version: '3'
services:
  keepalived:
    image: keepalived:x86
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/keepalived/check_ng.sh:/container/service/keepalived/assets/check_ng.sh
    environment:
      - KEEPALIVED_INTERFACE=eno1 # this host's NIC; find it with: ip route |awk '$2=="via" {print $5}' |head -1
      - KEEPALIVED_STATE=BACKUP  # this node starts as the backup
      - KEEPALIVED_PRIORITY=90   # lower than the master's priority
      - KEEPALIVED_VIRTUAL_IPS=192.168.11.58 # virtual IP
      - KEEPALIVED_UNICAST_PEERS=192.168.11.200 # host IP of the peer (master) node
      - KEEPALIVED_ROUTER_ID=25 # VRRP router id; must match on both nodes
    privileged: true
    restart: always
    container_name: keepalived
    network_mode: host

 

192.168.11.200 nginx load-balancing server + keepalived (master)

# docker-compose.yml for keepalived
version: '3'
services:
  keepalived:
    image: keepalived:x86
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/keepalived/check_ng.sh:/container/service/keepalived/assets/check_ng.sh
    environment:
      - KEEPALIVED_INTERFACE=ens160 # this host's NIC; find it with: ip route |awk '$2=="via" {print $5}' |head -1
      - KEEPALIVED_STATE=MASTER  # this node starts as the master
      - KEEPALIVED_PRIORITY=100  # higher than the backup's priority
      - KEEPALIVED_VIRTUAL_IPS=192.168.11.58 # virtual IP (same on both nodes)
      - KEEPALIVED_UNICAST_PEERS=192.168.11.25 # host IP of the peer (backup) node
      - KEEPALIVED_ROUTER_ID=25 # VRRP router id; must match on both nodes
    privileged: true
    restart: always
    container_name: keepalived
    network_mode: host

 Start keepalived on both the master and backup nodes:

docker-compose up -d  # run from the directory containing docker-compose.yml

 

Master node log:

# key excerpt
I'm the MASTER! Whup whup.
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: (VI_1) Sending/queueing gratuitous ARPs on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58
Mon Apr 13 15:32:59 2020: Sending gratuitous ARP on ens160 for 192.168.11.58

 

Backup node log:

Ok, i'm just a backup, great.
Mon Apr 13 15:42:09 2020: (VI_1) Backup received priority 0 advertisement
Mon Apr 13 15:42:09 2020: (VI_1) Receive advertisement timeout
Mon Apr 13 15:42:09 2020: (VI_1) Entering MASTER STATE
Mon Apr 13 15:42:09 2020: (VI_1) setting VIPs.
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: (VI_1) Sending/queueing gratuitous ARPs on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 15:42:09 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
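The election seen in these logs can be modeled as a toy VRRP sketch (priorities and the freshness timeout here are illustrative stand-ins for keepalived's advertisement logic):

```python
def elect_master(priorities, last_seen, now, timeout=3):
    """Toy VRRP election: among nodes whose advertisements are still
    fresh, the highest priority wins; when the master's advertisements
    stop, the surviving backup takes over."""
    alive = {n: p for n, p in priorities.items() if now - last_seen[n] <= timeout}
    return max(alive, key=alive.get) if alive else None

# Illustrative priorities: the master outranks the backup.
prio = {"192.168.11.200": 100, "192.168.11.25": 90}

both_alive = {"192.168.11.200": 9, "192.168.11.25": 9}
master = elect_master(prio, both_alive, now=10)       # master holds the VIP

master_silent = {"192.168.11.200": 0, "192.168.11.25": 9}
fallback = elect_master(prio, master_silent, now=10)  # backup takes over
```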

 

Next, request the web service we deployed earlier through the virtual IP:

 Checking the web servers' logs, all requests come from the master node, 11.200:

192.168.11.200 - - [13/Apr/2020 16:17:07] "GET / HTTP/1.0" 200 -

 

Stop keepalived on the master node and test again:

The backup node's keepalived log shows it promptly switching to master:

I'm the MASTER! Whup whup.
Mon Apr 13 16:03:46 2020: Sending gratuitous ARP on eno1 for 192.168.11.58
Mon Apr 13 16:03:46 2020: (VI_1) Sending/queueing gratuitous ARPs on eno1 for 192.168.11.58

Web access is meanwhile completely unaffected (on my setup the backup node's nginx is mapped to port 8000):

The web servers' logs now show all requests coming from the backup node, 11.25:

192.168.11.25 - - [13/Apr/2020 16:15:15] "GET / HTTP/1.0" 200 -

Now start the master node again.

The backup node's keepalived log shows it promptly switching back to backup:

Ok, i'm just a backup, great.

At this point, the service can no longer be reached through the backup node (the VIP has moved back to the master).

That wraps up these notes; I'll update them as I learn more 😁


posted @ 2020-04-09 17:52  cydit