envoy-02: Priority-Based Locality Scheduling and Weighting
1. Priority-Based Locality Scheduling
In an EDS configuration, the group of endpoints belonging to one particular location is called a LocalityLbEndpoints; its members share the same locality, load_balancing_weight, and priority:
◼ locality: identified hierarchically, from largest to smallest, by region, zone, and sub_zone;
◼ load_balancing_weight: an optional per-locality weight with a minimum value of 1; normally, a locality's weight divided by the sum of the weights of all localities at the same priority gives that locality's share of the traffic;
◆ this setting takes effect only when locality-weighted load balancing is enabled;
◼ priority: the priority of this LocalityLbEndpoints group; the default is 0, the highest priority;
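As a quick illustration of the weight rule above (my own sketch, not Envoy code; the locality names are made up for the example), a locality's traffic share within one priority is simply its weight over the sum of all locality weights at that priority:

```python
def locality_shares(weights):
    """Share of traffic per locality = weight / sum of weights at this priority."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical weights for three localities at the same priority:
print(locality_shares({"cn-north-1": 1, "cn-north-2": 2, "cn-north-3": 2}))
# cn-north-1 receives 1/5 of the traffic, the other two 2/5 each
```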
Normally, Envoy picks only the highest-priority group of endpoints when scheduling, and fails over to endpoints at the next priority only when the current priority's endpoints become unavailable.
That is, Envoy directs traffic only to the highest-priority group of endpoints (LocalityLbEndpoints):
◼ as endpoints at the highest priority become unhealthy, traffic shifts proportionally to the next priority; for example, if 20% of the endpoints in a priority are unhealthy, 20% of the traffic shifts to the next priority;
◼ Overprovisioning factor: a group of endpoints can also be given an overprovisioning factor, so that even when some of its endpoints fail, a larger share of the traffic is still directed to the group;
◆ formula: transferred traffic = 100% − healthy-endpoint ratio × overprovisioning factor; thus, with a factor of 1.4, all traffic still stays in the current group at a 20% failure rate; only when the healthy ratio drops below about 71.4% (100%/1.4) does some traffic shift to the next priority;
◆ a priority level's current capacity to handle traffic is also called its health score (healthy-host ratio × overprovisioning factor, capped at 100%);
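The formula above can be checked with a short sketch (my own illustration, not Envoy source; it assumes just two priority levels and the factor of 1.4 used in the lab config below):

```python
def health_score(healthy_ratio, factor=1.4):
    # A priority level's health score: healthy-host ratio times the
    # overprovisioning factor, capped at 100%.
    return min(healthy_ratio * factor, 1.0)

def priority_split(healthy_ratio, factor=1.4):
    # Fraction of traffic kept at priority 0 vs. spilled to priority 1.
    kept = health_score(healthy_ratio, factor)
    return kept, 1.0 - kept

print(priority_split(0.8))    # 20% failed: 0.8 * 1.4 > 1, so all traffic stays
print(priority_split(2 / 3))  # 1 of 3 down: ~93.3% kept, ~6.7% spilled (a 14:1 split)
```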
◼ If the sum of the health scores across all priorities (also called the normalized total health) is less than 100, Envoy assumes there are not enough healthy endpoints to handle all pending traffic; each level is then reassigned a share of 100% of the traffic in proportion to its health score. For example, two groups with health scores {20, 30} (a normalized total health of 50) are normalized to load shares of 40% and 60%.
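The {20, 30} example can be reproduced with a tiny sketch (again my own illustration of the normalization rule, not Envoy source):

```python
def normalize_load(scores):
    # scores: health score of each priority level, as fractions of 1.0.
    total = sum(scores)
    if total >= 1.0:
        return scores  # enough aggregate health; no renormalization (simplified)
    # Not enough healthy capacity: redistribute 100% of the traffic
    # in proportion to each priority's health score.
    return [s / total for s in scores]

print(normalize_load([0.20, 0.30]))  # normalized to a 40% / 60% split
```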
Priority scheduling also supports a DEGRADED mechanism for endpoints within the same priority, which works much like the distribution of traffic between two different priorities:
◼ when non-degraded healthy ratio × overprovisioning factor ≥ 100%, degraded endpoints receive no traffic;
◼ when non-degraded healthy ratio × overprovisioning factor < 100%, degraded endpoints receive the traffic making up the shortfall from 100%;
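The same arithmetic applies within one priority, between non-degraded and DEGRADED endpoints; a sketch under the same assumptions (factor 1.4, not Envoy source):

```python
def degraded_split(non_degraded_ratio, factor=1.4):
    # Load carried by the non-degraded endpoints, capped at 100%;
    # DEGRADED endpoints only receive the shortfall.
    covered = min(non_degraded_ratio * factor, 1.0)
    return covered, 1.0 - covered

print(degraded_split(0.9))  # non-degraded cover everything; degraded stay idle
print(degraded_split(0.5))  # non-degraded cover 70%; degraded take ~30%
```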
Panic threshold
During scheduling, Envoy normally considers only the available (healthy or degraded) endpoints in the upstream host list; but when the percentage of available endpoints falls too low, Envoy ignores the health status of all endpoints and routes traffic across all of them. This percentage is the panic threshold:
◼ the default panic threshold is 50%;
◼ the panic threshold helps avoid cascading host failures as traffic grows;
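A minimal sketch of the panic decision (assuming the default 50% threshold; the host names are borrowed from the lab below for illustration):

```python
def eligible_hosts(hosts, panic_threshold=0.5):
    # hosts: mapping of host name -> is it available (healthy or degraded)?
    available = [name for name, ok in hosts.items() if ok]
    if len(available) / len(hosts) < panic_threshold:
        # Panic mode: ignore health status and balance across all hosts.
        return list(hosts)
    return available  # normal mode: only available hosts

print(eligible_hosts({"red": True, "blue": True, "green": False}))   # 2/3 available: normal
print(eligible_hosts({"red": False, "blue": False, "green": True}))  # 1/3 < 50%: panic, all three
```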
The panic threshold can be used together with priorities:
◼ when the number of available endpoints in a given priority drops, Envoy shifts some traffic to endpoints at lower priorities;
◆ if the endpoints found at lower priorities can carry all the traffic, the panic threshold is ignored;
◆ otherwise, Envoy distributes traffic across all priorities and, for any priority whose availability is below the panic threshold, spreads that priority's share across all of its hosts;
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# cat front-envoy-v2.yaml
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    name: listener_http
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: webcluster1
          http_filters:
          - name: envoy.router

  clusters:
  - name: webcluster1
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: webcluster1
      policy:
        overprovisioning_factor: 140
      endpoints:
      - locality:
          region: cn-north-1
        priority: 0
        lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: webservice1
                port_value: 80
      - locality:
          region: cn-north-2
        priority: 1
        lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: webservice2
                port_value: 80
    health_checks:
    - timeout: 5s
      interval: 10s
      unhealthy_threshold: 2
      healthy_threshold: 1
      http_health_check:
        path: /livez
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# cat envoy-sidecar-proxy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.13-latest
    volumes:
    - ./front-envoy-v2.yaml:/etc/envoy/envoy.yaml
    networks:
      - envoymesh
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: red
    networks:
      envoymesh:
        ipv4_address: 172.31.29.11
        aliases:
        - webservice1
        - red

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver01-sidecar"
    depends_on:
    - webserver01-sidecar

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: blue
    networks:
      envoymesh:
        ipv4_address: 172.31.29.12
        aliases:
        - webservice1
        - blue

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver02-sidecar"
    depends_on:
    - webserver02-sidecar

  webserver03-sidecar:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: green
    networks:
      envoymesh:
        ipv4_address: 172.31.29.13
        aliases:
        - webservice1
        - green

  webserver03:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver03-sidecar"
    depends_on:
    - webserver03-sidecar

  webserver04-sidecar:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: gray
    networks:
      envoymesh:
        ipv4_address: 172.31.29.14
        aliases:
        - webservice2
        - gray

  webserver04:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver04-sidecar"
    depends_on:
    - webserver04-sidecar

  webserver05-sidecar:
    image: envoyproxy/envoy-alpine:v1.18-latest
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: black
    networks:
      envoymesh:
        ipv4_address: 172.31.29.15
        aliases:
        - webservice2
        - black

  webserver05:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver05-sidecar"
    depends_on:
    - webserver05-sidecar

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.29.0/24
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# while true; do curl 172.31.29.2; sleep 0.5; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: red, ServerIP: 172.31.29.11!
No responses from black or gray: the priority-1 endpoints receive no traffic while priority 0 is fully healthy.
Take one endpoint down. Priority 0 can now cover only 1.4 × 2 = 2.8 of 3 shares; 3 − 2.8 = 0.2 spills over, a 2.8:0.2 = 14:1 split, so about 1 request in 15 is scheduled to the next priority:
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# curl -X POST -d 'livez=FAIL' http://172.31.29.11/livez
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# while true; do curl 172.31.29.2; sleep .5; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: gray, ServerIP: 172.31.29.14!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: blue, ServerIP: 172.31.29.12!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: black, ServerIP: 172.31.29.15!
Take two endpoints down:
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# curl -X POST -d 'livez=FAIL' http://172.31.29.11/livez
root@user:/opt/servicemesh_in_practise/Cluster-Manager/priority-levels# curl -X POST -d 'livez=FAIL' http://172.31.29.12/livez
Now black and gray respond as well:
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: green, ServerIP: 172.31.29.13!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: black, ServerIP: 172.31.29.15!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: gray, ServerIP: 172.31.29.14!
2. Locality-Weighted Priority Scheduling