Cilium BGP Control Plane (repost)


1. Environment Information

Host        IP
ubuntu      172.16.94.141

Software    Version
docker      26.1.4
helm        v3.15.0-rc.2
kind        0.18.0
clab        0.54.2
kubernetes  1.23.4
ubuntu os   Ubuntu 20.04.6 LTS
kernel      5.11.5 (see the kernel upgrade guide)

2. Introduction to Cilium BGP

As Kubernetes adoption grows in the enterprise, users run traditional and cloud-native applications side by side. To connect the two, and to let networks outside the Kubernetes cluster dynamically learn routes to Pods over BGP, Cilium introduced support for the BGP protocol.

  • BGP beta approach (Cilium + MetalLB)
    • Starting with version 1.10, Cilium integrated BGP support by embedding MetalLB to expose Kubernetes to the outside world. Cilium can allocate IPs for LoadBalancer-type Services and announce them to BGP neighbors, so traffic from outside the cluster can reach services running inside k8s without any additional components.
    • BGP support arrived in Cilium 1.10 and was further enhanced in 1.11. It lets operators advertise Pod CIDRs and Service IPs and establish routing peerings between Cilium-managed Pods and the existing network infrastructure. These features are IPv4-only and depend on MetalLB.

img


  • Cilium BGP Control Plane approach
    • With IPv6 adoption growing, Cilium needed BGP IPv6 capability. MetalLB offers some limited IPv6 support via FRR, but it is still experimental. The Cilium team evaluated various options and decided to move to the more feature-rich GoBGP.
    • This approach is also the main evolution direction of the Cilium community and is the recommended one.
      img

3. Why Use Cilium BGP

  • As clusters grow, nodes from different data centers join the same Kubernetes cluster, so the cluster's nodes no longer share a single Layer 2 network plane (the same applies in hybrid-cloud scenarios).
  • Overlay tunneling (Cilium VXLAN) can bridge these network planes, but it introduces a new problem: the extra network performance cost of encapsulating and decapsulating every packet.
  • With the Cilium BGP Control Plane we can instead build a directly-routed underlay network. This approach connects node-node, node-pod, and pod-pod traffic, maintains high network performance, and scales to large clusters.
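The encapsulation cost mentioned above is easy to quantify: a VXLAN-over-IPv4 tunnel adds 50 bytes of headers to every packet. A quick illustrative calculation (not part of the original lab):

```python
# Per-packet VXLAN-over-IPv4 overhead: outer Ethernet + outer IPv4 + UDP + VXLAN
OUTER_ETH, OUTER_IPV4, OUTER_UDP, VXLAN_HDR = 14, 20, 8, 8
overhead = OUTER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN_HDR  # 50 bytes

# On a standard 1500-byte MTU link, the inner frame must shrink accordingly
link_mtu = 1500
inner_mtu = link_mtu - overhead
print(f"overhead={overhead} bytes, inner MTU={inner_mtu}")  # overhead=50 bytes, inner MTU=1450
```

Direct routing via BGP avoids this per-packet tax entirely, which is the performance argument for the underlay approach.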

4. Setting Up the Cilium BGP Control Plane Environment

Network Architecture

img

  • The architecture is shown in the diagram above
    • including IP addresses and ASN numbers
    • the k8s cluster has 4 nodes in total (1 master, 3 workers), and the nodes are not all on the same subnet
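To make the peering plan easier to follow, here is a small illustrative sketch (not part of the original lab; the ASNs and sessions are transcribed from the diagram and the VyOS configs below) that classifies each BGP session by comparing ASNs:

```python
# ASN plan for this lab: spine0=500, spine1=800,
# leaf0 and the rack0 nodes=65005, leaf1 and the rack1 nodes=65008
asn = {"spine0": 500, "spine1": 800, "leaf0": 65005, "leaf1": 65008,
       "rack0-node": 65005, "rack1-node": 65008}

sessions = [("leaf0", "spine0"), ("leaf0", "spine1"),
            ("leaf1", "spine0"), ("leaf1", "spine1"),
            ("leaf0", "rack0-node"), ("leaf1", "rack1-node")]

def session_kind(a, b):
    # same ASN -> iBGP (each leaf acts as route reflector for its rack),
    # different ASN -> eBGP (leaf-spine fabric)
    return "iBGP" if asn[a] == asn[b] else "eBGP"

for a, b in sessions:
    print(f"{a} <-> {b}: {session_kind(a, b)}")
```

The leaf-to-node sessions are iBGP with route reflection, while all leaf-spine sessions are eBGP, which matches the `route-reflector-client` and `remote-as` statements in the configs below.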

kind Configuration

root@kind:~# cat install.sh

#!/bin/bash
date
set -v

cat <<EOF | kind create cluster --image=kindest/node:v1.23.4 --config=-
kind: Cluster
name: clab-bgp
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # kind's default CNI (kindnet) is disabled; we will install our own CNI
  disableDefaultCNI: true
  # set the Pod subnet
  podSubnet: "10.98.0.0/16"
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-ip: 10.1.5.10
        # node label, used later by the BGP peering policy
        node-labels: "rack=rack0"

- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-ip: 10.1.5.11
        # node label, used later by the BGP peering policy
        node-labels: "rack=rack0"

- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-ip: 10.1.8.10
        # node label, used later by the BGP peering policy
        node-labels: "rack=rack1"

- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-ip: 10.1.8.11
        # node label, used later by the BGP peering policy
        node-labels: "rack=rack1"

containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.evescn.com"]
    endpoint = ["https://harbor.evescn.com"]
EOF

# 2.remove taints
controller_node=`kubectl get nodes --no-headers  -o custom-columns=NAME:.metadata.name| grep control-plane`
kubectl taint nodes $controller_node node-role.kubernetes.io/master:NoSchedule-
kubectl get nodes -o wide

# 3.install necessary tools
for i in $(docker ps -a --format "table {{.Names}}" | grep clab-bgp) 
do
    echo $i
    docker cp /usr/bin/ping $i:/usr/bin/ping
    docker exec -it $i bash -c "sed -i -e 's/jp.archive.ubuntu.com\|archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list"
    docker exec -it $i bash -c "apt-get -y update >/dev/null && apt-get -y install net-tools tcpdump lrzsz bridge-utils >/dev/null 2>&1"
done
  • Install the k8s cluster
root@kind:~# ./install.sh

Creating cluster "clab-bgp" ...
 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-clab-bgp"
You can now use your cluster with:

kubectl cluster-info --context kind-clab-bgp

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
root@kind:~# kubectl get node -o wide
NAME                     STATUS     ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION           CONTAINER-RUNTIME
clab-bgp-control-plane   NotReady   control-plane,master   2m41s   v1.23.4   <none>        <none>        Ubuntu 21.10   5.11.5-051105-generic    containerd://1.5.10
clab-bgp-worker          NotReady   <none>                 2m10s   v1.23.4   <none>        <none>        Ubuntu 21.10   5.11.5-051105-generic    containerd://1.5.10
clab-bgp-worker2         NotReady   <none>                 2m10s   v1.23.4   <none>        <none>        Ubuntu 21.10   5.11.5-051105-generic    containerd://1.5.10
clab-bgp-worker3         NotReady   <none>                 2m10s   v1.23.4   <none>        <none>        Ubuntu 21.10   5.11.5-051105-generic    containerd://1.5.10

After the cluster is created, the nodes are in NotReady state because no CNI plugin is installed yet, and they have no IP addresses. This is expected.

Creating the clab Container Environment

img

Create the bridges
root@kind:~# brctl addbr br-leaf0
root@kind:~# ifconfig br-leaf0 up
root@kind:~# brctl addbr br-leaf1
root@kind:~# ifconfig br-leaf1 up

root@kind:~# ip a l
13: br-leaf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 72:55:98:38:b1:04 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7055:98ff:fe38:b104/64 scope link 
       valid_lft forever preferred_lft forever
14: br-leaf1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 72:19:49:f9:71:83 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7019:49ff:fef9:7183/64 scope link 
       valid_lft forever preferred_lft forever

img

These two bridges exist so that the kind nodes can attach to the containerlab topology through a virtual switch. Why not connect to containerlab directly? If 10.1.5.10/24 were wired to containerlab with a veth pair, the leaf would have no spare port left to connect 10.1.5.11/24.

clab topology file
# cilium.bgp.clab.yml
name: bgp
topology:
  nodes:
    spine0:
      kind: linux
      image: vyos/vyos:1.2.8
      cmd: /sbin/init
      binds:
        - /lib/modules:/lib/modules
        - ./startup-conf/spine0-boot.cfg:/opt/vyatta/etc/config/config.boot


    spine1:
      kind: linux
      image: vyos/vyos:1.2.8
      cmd: /sbin/init
      binds:
        - /lib/modules:/lib/modules
        - ./startup-conf/spine1-boot.cfg:/opt/vyatta/etc/config/config.boot

    leaf0:
      kind: linux
      image: vyos/vyos:1.2.8
      cmd: /sbin/init
      binds:
        - /lib/modules:/lib/modules
        - ./startup-conf/leaf0-boot.cfg:/opt/vyatta/etc/config/config.boot

    leaf1:
      kind: linux
      image: vyos/vyos:1.2.8
      cmd: /sbin/init
      binds:
        - /lib/modules:/lib/modules
        - ./startup-conf/leaf1-boot.cfg:/opt/vyatta/etc/config/config.boot

    br-leaf0:
      kind: bridge
  
    br-leaf1:
      kind: bridge


    server1:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # reuse the kind node's network, sharing its network namespace
      network-mode: container:clab-bgp-control-plane
      # add the service NIC on the node and change the default gateway so this NIC is the egress interface
      exec:
      - ip addr add 10.1.5.10/24 dev net0
      - ip route replace default via 10.1.5.1

    server2:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # reuse the kind node's network, sharing its network namespace
      network-mode: container:clab-bgp-worker
      # add the service NIC on the node and change the default gateway so this NIC is the egress interface
      exec:
      - ip addr add 10.1.5.11/24 dev net0
      - ip route replace default via 10.1.5.1

    server3:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # reuse the kind node's network, sharing its network namespace
      network-mode: container:clab-bgp-worker2
      # add the service NIC on the node and change the default gateway so this NIC is the egress interface
      exec:
      - ip addr add 10.1.8.10/24 dev net0
      - ip route replace default via 10.1.8.1

    server4:
      kind: linux
      image: harbor.dayuan1997.com/devops/nettool:0.9
      # reuse the kind node's network, sharing its network namespace
      network-mode: container:clab-bgp-worker3
      # add the service NIC on the node and change the default gateway so this NIC is the egress interface
      exec:
      - ip addr add 10.1.8.11/24 dev net0
      - ip route replace default via 10.1.8.1


  links:
    - endpoints: ["br-leaf0:br-leaf0-net0", "server1:net0"]
    - endpoints: ["br-leaf0:br-leaf0-net1", "server2:net0"]

    - endpoints: ["br-leaf1:br-leaf1-net0", "server3:net0"]
    - endpoints: ["br-leaf1:br-leaf1-net1", "server4:net0"]

    - endpoints: ["leaf0:eth1", "spine0:eth1"]
    - endpoints: ["leaf0:eth2", "spine1:eth1"]
    - endpoints: ["leaf0:eth3", "br-leaf0:br-leaf0-net2"]

    - endpoints: ["leaf1:eth1", "spine0:eth2"]
    - endpoints: ["leaf1:eth2", "spine1:eth2"]
    - endpoints: ["leaf1:eth3", "br-leaf1:br-leaf1-net2"]
VyOS Configuration Files
  • spine0-boot.cfg
Config file
# ./startup-conf/spine0-boot.cfg
interfaces {
    ethernet eth1 {
        address 10.1.10.2/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    ethernet eth2 {
        address 10.1.34.2/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    loopback lo {
    }
}
protocols {
    # BGP configuration; local AS number 500
    bgp 500 {
        # neighbor in another AS, with that AS's number
        neighbor 10.1.10.1 {
            remote-as 65005
        }
        # neighbor in another AS, with that AS's number
        neighbor 10.1.34.1 {
            remote-as 65008
        }
        parameters {
            # router ID of this BGP speaker
            router-id 10.1.10.2
        }
    }
}
system {
    config-management {
        commit-revisions 100
    }
    console {
        device ttyS0 {
            speed 9600
        }
    }
    host-name spine0
    login {
        user vyos {
            authentication {
                encrypted-password $6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/
                plaintext-password ""
            }
            level admin
        }
    }
    ntp {
        server 0.pool.ntp.org {
        }
        server 1.pool.ntp.org {
        }
        server 2.pool.ntp.org {
        }
    }
    syslog {
        global {
            facility all {
                level info
            }
            facility protocols {
                level debug
            }
        }
    }
    time-zone UTC
}


/* Warning: Do not remove the following line. */
/* === vyatta-config-version: "qos@1:dhcp-server@5:webgui@1:pppoe-server@2:webproxy@2:firewall@5:pptp@1:dns-forwarding@1:mdns@1:quagga@7:webproxy@1:snmp@1:system@10:conntrack@1:l2tp@1:broadcast-relay@1:dhcp-relay@2:conntrack-sync@1:vrrp@2:ipsec@5:ntp@1:config-management@1:wanloadbalance@3:ssh@1:nat@4:zone-policy@1:cluster@1" === */
/* Release version: 1.2.8 */
  • spine1-boot.cfg
Config file
# ./startup-conf/spine1-boot.cfg
interfaces {
    ethernet eth1 {
        address 10.1.12.2/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    ethernet eth2 {
        address 10.1.11.2/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    loopback lo {
    }
}
protocols {
    bgp 800 {
        neighbor 10.1.11.1 {
            remote-as 65008
        }
        neighbor 10.1.12.1 {
            remote-as 65005
        }
        parameters {
            router-id 10.1.12.2
        }
    }
}
system {
    config-management {
        commit-revisions 100
    }
    console {
        device ttyS0 {
            speed 9600
        }
    }
    host-name spine1
    login {
        user vyos {
            authentication {
                encrypted-password $6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/
                plaintext-password ""
            }
            level admin
        }
    }
    ntp {
        server 0.pool.ntp.org {
        }
        server 1.pool.ntp.org {
        }
        server 2.pool.ntp.org {
        }
    }
    syslog {
        global {
            facility all {
                level info
            }
            facility protocols {
                level debug
            }
        }
    }
    time-zone UTC
}


/* Warning: Do not remove the following line. */
/* === vyatta-config-version: "qos@1:dhcp-server@5:webgui@1:pppoe-server@2:webproxy@2:firewall@5:pptp@1:dns-forwarding@1:mdns@1:quagga@7:webproxy@1:snmp@1:system@10:conntrack@1:l2tp@1:broadcast-relay@1:dhcp-relay@2:conntrack-sync@1:vrrp@2:ipsec@5:ntp@1:config-management@1:wanloadbalance@3:ssh@1:nat@4:zone-policy@1:cluster@1" === */
/* Release version: 1.2.8 */
  • leaf0-boot.cfg
Config file
# ./startup-conf/leaf0-boot.cfg
interfaces {
    ethernet eth1 {
        address 10.1.10.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    ethernet eth2 {
        address 10.1.12.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    ethernet eth3 {
        address 10.1.5.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    loopback lo {
    }
}
# NAT so that servers on the internal networks can reach the Internet
nat {
    source {
        rule 100 {
            outbound-interface eth0
            source {
                address 10.1.0.0/16
            }
            translation {
                address masquerade
            }
        }
    }
}
protocols {
    # BGP configuration; local AS number 65005
    bgp 65005 {
        # locally originated network
        address-family {
            ipv4-unicast {
                network 10.1.5.0/24 {
                }
            }
        }
        # route-reflector client and its AS number
        neighbor 10.1.5.10 {
            address-family {
                ipv4-unicast {
                    route-reflector-client
                }
            }
            remote-as 65005
        }
        # route-reflector client and its AS number
        neighbor 10.1.5.11 {
            address-family {
                ipv4-unicast {
                    route-reflector-client
                }
            }
            remote-as 65005
        }
        # neighbor in another AS, with that AS's number
        neighbor 10.1.10.2 {
            remote-as 500
        }
        # neighbor in another AS, with that AS's number
        neighbor 10.1.12.2 {
            remote-as 800
        }
        parameters {
            # allow ECMP across paths with different AS-PATHs
            bestpath {
                as-path {
                    multipath-relax
                }
            }
            # router ID of this BGP speaker
            router-id 10.1.5.1
        }
    }
}
system {
    config-management {
        commit-revisions 100
    }
    console {
        device ttyS0 {
            speed 9600
        }
    }
    host-name leaf0
    login {
        user vyos {
            authentication {
                encrypted-password $6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/
                plaintext-password ""
            }
            level admin
        }
    }
    ntp {
        server 0.pool.ntp.org {
        }
        server 1.pool.ntp.org {
        }
        server 2.pool.ntp.org {
        }
    }
    syslog {
        global {
            facility all {
                level info
            }
            facility protocols {
                level debug
            }
        }
    }
    time-zone UTC
}


/* Warning: Do not remove the following line. */
/* === vyatta-config-version: "qos@1:dhcp-server@5:webgui@1:pppoe-server@2:webproxy@2:firewall@5:pptp@1:dns-forwarding@1:mdns@1:quagga@7:webproxy@1:snmp@1:system@10:conntrack@1:l2tp@1:broadcast-relay@1:dhcp-relay@2:conntrack-sync@1:vrrp@2:ipsec@5:ntp@1:config-management@1:wanloadbalance@3:ssh@1:nat@4:zone-policy@1:cluster@1" === */
/* Release version: 1.2.8 */
  • leaf1-boot.cfg
Config file
# ./startup-conf/leaf1-boot.cfg
interfaces {
    ethernet eth1 {
        address 10.1.34.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    ethernet eth2 {
        address 10.1.11.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    ethernet eth3 {
        address 10.1.8.1/24
        duplex auto
        smp-affinity auto
        speed auto
    }
    loopback lo {
    }
}
nat {
    source {
        rule 100 {
            outbound-interface eth0
            source {
                address 10.1.0.0/16
            }
            translation {
                address masquerade
            }
        }
    }
}
protocols {
    bgp 65008 {
        address-family {
            ipv4-unicast {
                network 10.1.8.0/24 {
                }
            }
        }
        neighbor 10.1.8.10 {
            address-family {
                ipv4-unicast {
                    route-reflector-client
                }
            }
            remote-as 65008
        }
        neighbor 10.1.8.11 {
            address-family {
                ipv4-unicast {
                    route-reflector-client
                }
            }
            remote-as 65008
        }
        neighbor 10.1.11.2 {
            remote-as 800
        }
        neighbor 10.1.34.2 {
            remote-as 500
        }
        parameters {
            bestpath {
                as-path {
                    multipath-relax
                }
            }
            router-id 10.1.8.1
        }
    }
}
system {
    config-management {
        commit-revisions 100
    }
    console {
        device ttyS0 {
            speed 9600
        }
    }
    host-name leaf1
    login {
        user vyos {
            authentication {
                encrypted-password $6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/
                plaintext-password ""
            }
            level admin
        }
    }
    ntp {
        server 0.pool.ntp.org {
        }
        server 1.pool.ntp.org {
        }
        server 2.pool.ntp.org {
        }
    }
    syslog {
        global {
            facility all {
                level info
            }
            facility protocols {
                level debug
            }
        }
    }
    time-zone UTC
}


/* Warning: Do not remove the following line. */
/* === vyatta-config-version: "qos@1:dhcp-server@5:webgui@1:pppoe-server@2:webproxy@2:firewall@5:pptp@1:dns-forwarding@1:mdns@1:quagga@7:webproxy@1:snmp@1:system@10:conntrack@1:l2tp@1:broadcast-relay@1:dhcp-relay@2:conntrack-sync@1:vrrp@2:ipsec@5:ntp@1:config-management@1:wanloadbalance@3:ssh@1:nat@4:zone-policy@1:cluster@1" === */
/* Release version: 1.2.8 */
Deploy the topology
# tree -L 2 ./
./
├── cilium.bgp.clab.yml
└── startup-conf
    ├── spine0-boot.cfg
    ├── spine1-boot.cfg
    ├── leaf0-boot.cfg
    └── leaf1-boot.cfg

# clab deploy -t cilium.bgp.clab.yml
INFO[0000] Containerlab v0.54.2 started                 
INFO[0000] Parsing & checking topology file: clab.yaml  
INFO[0000] Creating docker network: Name="clab", IPv4Subnet="172.20.20.0/24", IPv6Subnet="2001:172:20:20::/64", MTU=1500 
INFO[0000] Creating lab directory: /root/wcni-kind/cilium/cilium_1.13.0-rc5/cilium-bgp-control-plane-lb-ipam/clab-bgp 
WARN[0000] node clab-bgp-control-plane referenced in namespace sharing not found in topology definition, considering it an external dependency. 
WARN[0000] node clab-bgp-worker referenced in namespace sharing not found in topology definition, considering it an external dependency. 
WARN[0000] node clab-bgp-worker2 referenced in namespace sharing not found in topology definition, considering it an external dependency. 
WARN[0000] node clab-bgp-worker3 referenced in namespace sharing not found in topology definition, considering it an external dependency. 
INFO[0000] Creating container: "leaf1"                  
INFO[0000] Creating container: "spine0"                 
INFO[0001] Created link: leaf1:eth1 <--> spine0:eth2    
INFO[0001] Creating container: "spine1"                 
INFO[0001] Created link: leaf1:eth3 <--> br-leaf1:br-leaf1-net2 
INFO[0001] Creating container: "leaf0"                  
INFO[0003] Created link: leaf1:eth2 <--> spine1:eth2    
INFO[0003] Creating container: "server3"                
INFO[0003] Created link: leaf0:eth1 <--> spine0:eth1    
INFO[0003] Created link: leaf0:eth2 <--> spine1:eth1    
INFO[0003] Created link: leaf0:eth3 <--> br-leaf0:br-leaf0-net2 
INFO[0003] Creating container: "server1"                
INFO[0004] Created link: br-leaf1:br-leaf1-net0 <--> server3:net0 
INFO[0005] Created link: br-leaf0:br-leaf0-net0 <--> server1:net0 
INFO[0005] Creating container: "server4"                
INFO[0006] Creating container: "server2"                
INFO[0006] Created link: br-leaf1:br-leaf1-net1 <--> server4:net0 
INFO[0007] Created link: br-leaf0:br-leaf0-net1 <--> server2:net0 
INFO[0007] Executed command "ip addr add 10.1.8.10/24 dev net0" on the node "server3". stdout: 
INFO[0007] Executed command "ip route replace default via 10.1.8.1" on the node "server3". stdout: 
INFO[0007] Executed command "ip addr add 10.1.5.10/24 dev net0" on the node "server1". stdout: 
INFO[0007] Executed command "ip route replace default via 10.1.5.1" on the node "server1". stdout: 
INFO[0007] Executed command "ip addr add 10.1.8.11/24 dev net0" on the node "server4". stdout: 
INFO[0007] Executed command "ip route replace default via 10.1.8.1" on the node "server4". stdout: 
INFO[0007] Executed command "ip addr add 10.1.5.11/24 dev net0" on the node "server2". stdout: 
INFO[0007] Executed command "ip route replace default via 10.1.5.1" on the node "server2". stdout: 
INFO[0007] Adding containerlab host entries to /etc/hosts file 
INFO[0007] Adding ssh config for containerlab nodes     
INFO[0007] 🎉 New containerlab version 0.56.0 is available! Release notes: https://containerlab.dev/rn/0.56/
Run 'containerlab version upgrade' to upgrade or go check other installation options at https://containerlab.dev/install/ 
+---+------------------+--------------+------------------------------------------+-------+---------+----------------+----------------------+
| # |       Name       | Container ID |                  Image                   | Kind  |  State  |  IPv4 Address  |     IPv6 Address     |
+---+------------------+--------------+------------------------------------------+-------+---------+----------------+----------------------+
| 1 | clab-bgp-leaf0   | 862dd78315dc | vyos/vyos:1.2.8                          | linux | running | 172.20.20.5/24 | 2001:172:20:20::5/64 |
| 2 | clab-bgp-leaf1   | 97c3a88de0e5 | vyos/vyos:1.2.8                          | linux | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
| 3 | clab-bgp-server1 | 4b857414c61d | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A            | N/A                  |
| 4 | clab-bgp-server2 | e8d5118ecf24 | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A            | N/A                  |
| 5 | clab-bgp-server3 | 6423690481c2 | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A            | N/A                  |
| 6 | clab-bgp-server4 | 6ce574829ad9 | harbor.dayuan1997.com/devops/nettool:0.9 | linux | running | N/A            | N/A                  |
| 7 | clab-bgp-spine0  | c653397be1ac | vyos/vyos:1.2.8                          | linux | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
| 8 | clab-bgp-spine1  | 5bf97a972034 | vyos/vyos:1.2.8                          | linux | running | 172.20.20.4/24 | 2001:172:20:20::4/64 |
+---+------------------+--------------+------------------------------------------+-------+---------+----------------+----------------------+
Check the k8s cluster
root@kind:~# kubectl get node -o wide
NAME                     STATUS     ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION          CONTAINER-RUNTIME
clab-bgp-control-plane   NotReady   control-plane,master   5m33s   v1.23.4   10.1.5.10     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10
clab-bgp-worker          NotReady   <none>                 4m55s   v1.23.4   10.1.5.11     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10
clab-bgp-worker2         NotReady   <none>                 4m54s   v1.23.4   10.1.8.10     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10
clab-bgp-worker3         NotReady   <none>                 5m7s    v1.23.4   10.1.8.11     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10

# check node IP addresses
root@kind:~# docker exec -it clab-bgp-control-plane ip a l
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::3/64 scope global nodad 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:3/64 scope link 
       valid_lft forever preferred_lft forever
34: net0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default 
    link/ether aa:c1:ab:f9:fe:fb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.5.10/24 scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::a8c1:abff:fef9:fefb/64 scope link 
       valid_lft forever preferred_lft forever

# check node routes
root@kind:~# docker exec -it clab-bgp-control-plane ip r s
default via 10.1.5.1 dev net0 
10.1.5.0/24 dev net0 proto kernel scope link src 10.1.5.10 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3 

The k8s nodes now have IP addresses assigned. Logging into a node container shows the new address on net0, and the default route now points out of net0 (default via 10.1.5.1 dev net0).

Install Cilium

root@kind:~# cat cilium.sh
#!/bin/bash
helm repo add cilium https://helm.cilium.io > /dev/null 2>&1
helm repo update > /dev/null 2>&1

helm install cilium cilium/cilium \
  --version 1.13.0-rc5 \
  --namespace kube-system \
  --set debug.enabled=true \
  --set debug.verbose=datapath \
  --set monitorAggregation=none \
  --set cluster.name=clab-bgp-cplane \
  --set ipam.mode=kubernetes \
  --set tunnel=disabled \
  --set ipv4NativeRoutingCIDR=10.0.0.0/8 \
  --set bgpControlPlane.enabled=true \
  --set k8s.requireIPv4PodCIDR=true

root@kind:~# bash cilium.sh

Explanation of the --set options

  1. --set ipam.mode=kubernetes

    • Meaning: use the Kubernetes IP address management mode.
    • Purpose: Cilium relies on the Pod CIDRs assigned by Kubernetes for IP address allocation.
  2. --set tunnel=disabled

    • Meaning: disable tunnel mode.
    • Purpose: with tunneling off, Cilium does not use VXLAN; packets are routed directly between hosts (direct-routing mode).
  3. --set ipv4NativeRoutingCIDR="10.0.0.0/8"

    • Meaning: the CIDR range that can be routed natively for IPv4, here 10.0.0.0/8.
    • Purpose: tells Cilium which destination ranges are reachable by native routing and therefore need no SNAT; by default Cilium masquerades traffic to addresses outside this range.
  4. --set bgpControlPlane.enabled=true

    • Meaning: enable the BGP control plane.
    • Purpose: Cilium will use BGP (Border Gateway Protocol) for route control and advertisement.
  5. --set k8s.requireIPv4PodCIDR=true:

    • Meaning: require Kubernetes to provide an IPv4 Pod CIDR for each node.
    • Purpose: Cilium waits for Kubernetes to assign IPv4 Pod CIDRs and uses them for IP allocation and routing.
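As a quick sanity check (illustrative, not part of the original post): the node subnets and the Pod subnet used in this lab all fall inside the 10.0.0.0/8 native-routing CIDR, so none of this traffic is masqueraded:

```python
import ipaddress

native = ipaddress.ip_network("10.0.0.0/8")   # ipv4NativeRoutingCIDR
lab_nets = ["10.1.5.0/24", "10.1.8.0/24",     # node subnets (rack0 / rack1)
            "10.98.0.0/16"]                   # kind podSubnet

for net in lab_nets:
    inside = ipaddress.ip_network(net).subnet_of(native)
    print(net, "natively routed:", inside)    # True for all three
```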

There is no need to set k8sServiceHost / k8sServicePort: the cluster was initialized using the 172.18.0.x addresses, and setting these options to 10.1.5.10 instead actually made the Cilium installation fail.

  • Check the installed services
root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          cilium-64jd5                                     1/1     Running   0          2m24s
kube-system          cilium-9hxns                                     1/1     Running   0          2m24s
kube-system          cilium-gcfdm                                     1/1     Running   0          2m24s
kube-system          cilium-operator-76564696fd-j6rcw                 1/1     Running   0          2m24s
kube-system          cilium-operator-76564696fd-tltxv                 1/1     Running   0          2m24s
kube-system          cilium-r7r5x                                     1/1     Running   0          2m24s
kube-system          coredns-64897985d-fp477                          1/1     Running   0          18m
kube-system          coredns-64897985d-qtcj7                          1/1     Running   0          18m
kube-system          etcd-clab-bgp-control-plane                      1/1     Running   0          18m
kube-system          kube-apiserver-clab-bgp-control-plane            1/1     Running   0          18m
kube-system          kube-controller-manager-clab-bgp-control-plane   1/1     Running   0          18m
kube-system          kube-proxy-cxd7j                                 1/1     Running   0          17m
kube-system          kube-proxy-jxm2g                                 1/1     Running   0          18m
kube-system          kube-proxy-n6jzx                                 1/1     Running   0          17m
kube-system          kube-proxy-xf9rs                                 1/1     Running   0          17m
kube-system          kube-scheduler-clab-bgp-control-plane            1/1     Running   0          18m
local-path-storage   local-path-provisioner-5ddd94ff66-9d9xn          1/1     Running   0          18m

Create the Cilium BGPPeeringPolicy Resources

root@kind:~# cat bgp_peering_policy.yaml
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack0
spec:
  nodeSelector:
    # per the kind config, this matches the clab-bgp-control-plane and clab-bgp-worker nodes
    matchLabels:
      rack: rack0
  virtualRouters:
  - localASN: 65005
    exportPodCIDR: true
    neighbors:
    - peerAddress: "10.1.5.1/24"
      peerASN: 65005
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack1
spec:
  nodeSelector:
    # per the kind config, this matches the clab-bgp-worker2 and clab-bgp-worker3 nodes
    matchLabels:
      rack: rack1
  virtualRouters:
  - localASN: 65008
    exportPodCIDR: true
    neighbors:
    - peerAddress: "10.1.8.1/24"
      peerASN: 65008


root@kind:~# kubectl apply -f bgp_peering_policy.yaml
ciliumbgppeeringpolicy.cilium.io/rack0 created
ciliumbgppeeringpolicy.cilium.io/rack1 created

root@kind:~# kubectl get ciliumbgppeeringpolicy
NAME    AGE
rack0   9s
rack1   9s
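The way the two policies partition the nodes can be sketched with plain matchLabels semantics (an illustrative Python sketch, not part of the original lab; the node labels are the ones set in the kind config above):

```python
# node labels as set in the kind config (rack=rack0 / rack=rack1)
nodes = {
    "clab-bgp-control-plane": {"rack": "rack0"},
    "clab-bgp-worker":        {"rack": "rack0"},
    "clab-bgp-worker2":       {"rack": "rack1"},
    "clab-bgp-worker3":       {"rack": "rack1"},
}

def select(match_labels):
    # matchLabels semantics: every key/value pair must be present on the node
    return sorted(n for n, labels in nodes.items()
                  if all(labels.get(k) == v for k, v in match_labels.items()))

print("rack0 policy:", select({"rack": "rack0"}))
print("rack1 policy:", select({"rack": "rack1"}))
```

Each matched node then runs a virtual router with the rack's local ASN and peers with its leaf at 10.1.5.1 or 10.1.8.1.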

Install a Pod and a Service in the k8s Cluster

root@kind:~# cat cni.yaml
---
apiVersion: apps/v1
kind: DaemonSet
#kind: Deployment
metadata:
  labels:
    app: cni
  name: cni
spec:
  #replicas: 1
  selector:
    matchLabels:
      app: cni
  template:
    metadata:
      labels:
        app: cni
    spec:
      containers:
      - image: harbor.dayuan1997.com/devops/nettool:0.9
        name: nettoolbox
        securityContext:
          privileged: true

---
apiVersion: v1
kind: Service
metadata:
  name: cni
  labels:
    color: red
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: LoadBalancer
  selector:
    app: cni
root@kind:~# kubectl apply -f cni.yaml
daemonset.apps/cni created
service/cni created
  • Check the deployed services
root@kind:~# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE                     NOMINATED NODE   READINESS GATES
cni-5nrdx   1/1     Running   0          18s   10.98.1.39    clab-bgp-worker3         <none>           <none>
cni-8cnm5   1/1     Running   0          18s   10.98.0.18    clab-bgp-control-plane   <none>           <none>
cni-lzwwq   1/1     Running   0          18s   10.98.2.249   clab-bgp-worker2         <none>           <none>
cni-rcbdx   1/1     Running   0          18s   10.98.3.166   clab-bgp-worker          <none>           <none>

root@kind:~# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
cni          LoadBalancer   10.96.10.182   <pending>     80:31354/TCP    29s
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP         19m

5. Verifying Cilium BGP Control Plane Features

img

root@kind:~# kubectl get node -o wide
NAME                     STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION          CONTAINER-RUNTIME
clab-bgp-control-plane   Ready    control-plane,master   145m   v1.23.4   10.1.5.10     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10
clab-bgp-worker          Ready    <none>                 144m   v1.23.4   10.1.5.11     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10
clab-bgp-worker2         Ready    <none>                 144m   v1.23.4   10.1.8.10     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10
clab-bgp-worker3         Ready    <none>                 144m   v1.23.4   10.1.8.11     <none>        Ubuntu 21.10   5.11.5-051105-generic   containerd://1.5.10

From the network diagram and the cluster info: clab-bgp-control-plane and clab-bgp-worker are on one Layer 2 network plane, while clab-bgp-worker2 and clab-bgp-worker3 are on another.
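This split can be checked mechanically: nodes whose INTERNAL-IPs fall in the same /24 share an L2 segment (an illustrative sketch using the node addresses from this lab, not part of the original post):

```python
import ipaddress

# INTERNAL-IPs from `kubectl get node -o wide`, with the /24 prefix used in this lab
nodes = {
    "clab-bgp-control-plane": "10.1.5.10/24",
    "clab-bgp-worker":        "10.1.5.11/24",
    "clab-bgp-worker2":       "10.1.8.10/24",
    "clab-bgp-worker3":       "10.1.8.11/24",
}

def same_l2(a, b):
    # same subnet -> same L2 plane in this topology
    return (ipaddress.ip_interface(nodes[a]).network
            == ipaddress.ip_interface(nodes[b]).network)

print(same_l2("clab-bgp-control-plane", "clab-bgp-worker"))   # same rack -> True
print(same_l2("clab-bgp-control-plane", "clab-bgp-worker2"))  # other rack -> False
```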

Cross-segment access

A Pod on clab-bgp-control-plane accesses a Pod on clab-bgp-worker2

  • Pod information
root@kind:~# kubectl exec -it cni-8cnm5 -- bash

## IP addresses
cni-8cnm5~$ ip a l
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:ad:f8:13:67:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.98.0.18/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::30ad:f8ff:fe13:675f/64 scope link 
       valid_lft forever preferred_lft forever

cni-8cnm5~$ ip r s
default via 10.98.0.124 dev eth0 mtu 9500 
10.98.0.124 dev eth0 scope link 


From the Pod's routing table, packets follow the default route out of eth0 and land on the host-side veth pair peer of the Pod's eth0.

  • Information on the Node hosting the Pod
root@kind:~# docker exec -it clab-bgp-control-plane bash

## IP addresses
root@clab-bgp-control-plane:/# ip a l 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:16:4a:39:ef:c5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:4aff:fe39:efc5/64 scope link 
       valid_lft forever preferred_lft forever
3: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:bf:db:b8:a8:57 brd ff:ff:ff:ff:ff:ff
    inet 10.98.0.124/32 scope link cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::14bf:dbff:feb8:a857/64 scope link 
       valid_lft forever preferred_lft forever
5: lxc_health@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default qlen 1000
    link/ether 7a:e4:32:21:70:79 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::78e4:32ff:fe21:7079/64 scope link 
       valid_lft forever preferred_lft forever
7: lxc191ea5f1c80c@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:05:bf:61:e5:dc brd ff:ff:ff:ff:ff:ff link-netns cni-f879b8ee-048d-56a7-3380-19a2d79db129
    inet6 fe80::c05:bfff:fe61:e5dc/64 scope link 
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::3/64 scope global nodad 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:3/64 scope link 
       valid_lft forever preferred_lft forever
34: net0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default 
    link/ether aa:c1:ab:f9:fe:fb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.5.10/24 scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::a8c1:abff:fef9:fefb/64 scope link 
       valid_lft forever preferred_lft forever

## routing table
root@clab-bgp-control-plane:/# ip r s
default via 10.1.5.1 dev net0 
10.1.5.0/24 dev net0 proto kernel scope link src 10.1.5.10 
10.98.0.0/24 via 10.98.0.124 dev cilium_host src 10.98.0.124 
10.98.0.124 dev cilium_host scope link 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3 
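The forwarding decision on this node can be reproduced as a longest-prefix-match over the route table above. A simplified sketch using Python's `ipaddress` module (an illustration, not the actual kernel FIB lookup):

```python
import ipaddress

# Simplified copy of the clab-bgp-control-plane route table (CIDR -> action)
routes = {
    "0.0.0.0/0":      "via 10.1.5.1 dev net0",
    "10.1.5.0/24":    "dev net0",
    "10.98.0.0/24":   "via 10.98.0.124 dev cilium_host",
    "10.98.0.124/32": "dev cilium_host",
    "172.18.0.0/16":  "dev eth0",
}

def lookup(dst: str) -> str:
    """Pick the most specific (longest-prefix) route covering dst."""
    ip = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(cidr) for cidr in routes if ip in ipaddress.ip_network(cidr)),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]

print(lookup("10.98.2.249"))  # only the default route matches -> out net0 toward leaf0
print(lookup("10.98.0.18"))   # the more specific 10.98.0.0/24 wins -> via cilium_host
```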
  • Ping test from the Pod
root@kind:~# kubectl exec -it cni-8cnm5 -- ping -c 1 10.98.2.249
PING 10.98.2.249 (10.98.2.249): 56 data bytes
64 bytes from 10.98.2.249: seq=0 ttl=57 time=1.230 ms

--- 10.98.2.249 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.230/1.230/1.230 ms
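The ttl=57 in the reply already hints that the path is routed rather than switched on one L2 segment. Assuming the common Linux initial TTL of 64, each L3 hop decremented it once:

```python
# Assumes the reply left the remote pod with the common Linux default TTL of 64.
initial_ttl = 64
received_ttl = 57  # observed in the ping output above
hops = initial_ttl - received_ttl
print(hops)  # 7 routed hops along the return path
```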
  • Packet capture on the Pod's eth0
cni-8cnm5~$ tcpdump -pne -i eth0
09:34:33.044132 32:ad:f8:13:67:5f > 0e:05:bf:61:e5:dc, ethertype IPv4 (0x0800), length 98: 10.98.0.18 > 10.98.2.249: ICMP echo request, id 49, seq 0, length 64
09:34:33.045041 0e:05:bf:61:e5:dc > 32:ad:f8:13:67:5f, ethertype IPv4 (0x0800), length 98: 10.98.2.249 > 10.98.0.18: ICMP echo reply, id 49, seq 0, length 64
  • Packet capture on lxc191ea5f1c80c, the veth pair peer of the Pod on clab-bgp-control-plane
root@clab-bgp-control-plane:/# tcpdump -pne -i lxc191ea5f1c80c 
09:36:13.329632 32:ad:f8:13:67:5f > 0e:05:bf:61:e5:dc, ethertype IPv4 (0x0800), length 98: 10.98.0.18 > 10.98.2.249: ICMP echo request, id 60, seq 0, length 64
09:36:13.330266 0e:05:bf:61:e5:dc > 32:ad:f8:13:67:5f, ethertype IPv4 (0x0800), length 98: 10.98.2.249 > 10.98.0.18: ICMP echo reply, id 60, seq 0, length 64

The capture on the Pod's eth0 matches the capture on lxc191ea5f1c80c on clab-bgp-control-plane; the two MAC addresses belong to the Pod's eth0 and to lxc191ea5f1c80c respectively.

  • Per the clab-bgp-control-plane routing table, the packet is forwarded by the default via 10.1.5.1 dev net0 route toward clab-bgp-leaf0, the node that owns IP 10.1.5.1

Packet capture on net0 of clab-bgp-control-plane

root@clab-bgp-control-plane:/# tcpdump -pne -i net0
09:41:43.232909 aa:c1:ab:f9:fe:fb > aa:c1:ab:83:50:0f, ethertype IPv4 (0x0800), length 98: 10.98.0.18 > 10.98.2.249: ICMP echo request, id 80, seq 0, length 64
09:41:43.233234 aa:c1:ab:83:50:0f > aa:c1:ab:f9:fe:fb, ethertype IPv4 (0x0800), length 98: 10.98.2.249 > 10.98.0.18: ICMP echo reply, id 80, seq 0, length 64

The ICMP request shows up on net0 on its way to the destination: the source MAC aa:c1:ab:f9:fe:fb is net0's own MAC, while the destination MAC aa:c1:ab:83:50:0f should belong to the interface holding 10.1.5.1, which the ARP table confirms:

root@clab-bgp-control-plane:/# arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
172.18.0.5               ether   02:42:ac:12:00:05   C                     eth0
172.18.0.1               ether   02:42:6a:31:74:79   C                     eth0
10.1.5.1                 ether   aa:c1:ab:83:50:0f   C                     net0
10.1.5.11                ether   aa:c1:ab:93:7a:7a   C                     net0
172.18.0.2               ether   02:42:ac:12:00:02   C                     eth0
172.18.0.4               ether   02:42:ac:12:00:04   C                     eth0
  • Network information on clab-bgp-leaf0
root@leaf0:/# ip a l
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:14:14:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.20.20.2/24 brd 172.20.20.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:172:20:20::2/64 scope global nodad 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe14:1402/64 scope link 
       valid_lft forever preferred_lft forever
20: eth2@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default 
    link/ether aa:c1:ab:69:b5:28 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.1.12.1/24 brd 10.1.12.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a8c1:abff:fe69:b528/64 scope link 
       valid_lft forever preferred_lft forever
25: eth1@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default 
    link/ether aa:c1:ab:16:3f:66 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 10.1.10.1/24 brd 10.1.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a8c1:abff:fe16:3f66/64 scope link 
       valid_lft forever preferred_lft forever
29: eth3@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default 
    link/ether aa:c1:ab:83:50:0f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.5.1/24 brd 10.1.5.255 scope global eth3
       valid_lft forever preferred_lft forever
    inet6 fe80::a8c1:abff:fe83:500f/64 scope link 
       valid_lft forever preferred_lft forever

root@leaf0:/# ip r  s
default via 172.20.20.1 dev eth0 
10.1.5.0/24 dev eth3 proto kernel scope link src 10.1.5.1 
10.1.8.0/24 proto bgp metric 20 
        nexthop via 10.1.10.2 dev eth1 weight 1 
        nexthop via 10.1.12.2 dev eth2 weight 1 
10.1.10.0/24 dev eth1 proto kernel scope link src 10.1.10.1 
10.1.12.0/24 dev eth2 proto kernel scope link src 10.1.12.1 
10.98.0.0/24 via 10.1.5.10 dev eth3 proto bgp metric 20 
10.98.1.0/24 proto bgp metric 20 
        nexthop via 10.1.10.2 dev eth1 weight 1 
        nexthop via 10.1.12.2 dev eth2 weight 1 
10.98.2.0/24 proto bgp metric 20 
        nexthop via 10.1.10.2 dev eth1 weight 1 
        nexthop via 10.1.12.2 dev eth2 weight 1 
10.98.3.0/24 via 10.1.5.11 dev eth3 proto bgp metric 20 
172.20.20.0/24 dev eth0 proto kernel scope link src 172.20.20.2 

In leaf0's routing table, the destination 10.98.2.0/24 has two next hops. This is BGP's ECMP feature, spreading traffic across both links.

  • Capture on leaf0's eth1 (10.1.10.1) and eth2 (10.1.12.1) to verify that forwarding matches ECMP behavior
root@leaf0:/# tcpdump -pne -i eth1
09:59:22.703188 aa:c1:ab:f0:a5:5d > aa:c1:ab:16:3f:66, ethertype IPv4 (0x0800), length 98: 10.98.2.249 > 10.98.0.18: ICMP echo reply, id 87, seq 0, length 64
root@leaf0:/# tcpdump -pne -i eth2
09:59:22.702800 aa:c1:ab:69:b5:28 > aa:c1:ab:db:64:c5, ethertype IPv4 (0x0800), length 98: 10.98.0.18 > 10.98.2.249: ICMP echo request, id 87, seq 0, length 64

The captures show the ICMP request and reply traveling over eth2 and eth1 respectively, consistent with ECMP. Likewise, on the upstream spine0 and spine1 nodes, each captures only one direction of the ICMP flow.
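Per-flow ECMP can be modeled as a hash over the flow tuple modulo the number of next hops: every packet of one flow is pinned to the same link, while different flows spread across links. A minimal sketch (the kernel's real flow hash differs; the tuple fields here are illustrative, the next-hop list mirrors leaf0's two BGP next hops):

```python
import hashlib

next_hops = ["10.1.10.2", "10.1.12.2"]  # leaf0's two ECMP next hops

def pick_next_hop(src_ip: str, dst_ip: str, proto: str,
                  sport: int = 0, dport: int = 0) -> str:
    """Hash the flow tuple so every packet of a flow picks the same link."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return next_hops[digest % len(next_hops)]

# The same flow always maps to the same next hop:
hop = pick_next_hop("10.98.0.18", "10.98.2.249", "icmp")
assert hop == pick_next_hop("10.98.0.18", "10.98.2.249", "icmp")
print(hop)
```

Note that the request and reply are independent flows hashed by independent routers, which is why they can legitimately take different links, as the captures show.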

  • BGP state on leaf0
root@leaf0:/# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* 0.0.0.0/0 [0/0] via 172.20.20.1, eth0, 00:48:16
C>* 10.1.5.0/24 is directly connected, eth3, 00:48:04
B>* 10.1.8.0/24 [20/0] via 10.1.10.2, eth1, 00:47:53
  *                    via 10.1.12.2, eth2, 00:47:53
C>* 10.1.10.0/24 is directly connected, eth1, 00:48:01
C>* 10.1.12.0/24 is directly connected, eth2, 00:48:02
B>* 10.98.0.0/24 [200/0] via 10.1.5.10, eth3, 00:44:28
B>* 10.98.1.0/24 [20/0] via 10.1.10.2, eth1, 00:44:30
  *                     via 10.1.12.2, eth2, 00:44:30
B>* 10.98.2.0/24 [20/0] via 10.1.10.2, eth1, 00:44:29
  *                     via 10.1.12.2, eth2, 00:44:29
B>* 10.98.3.0/24 [200/0] via 10.1.5.11, eth3, 00:44:29
C>* 172.20.20.0/24 is directly connected, eth0, 00:48:16

root@leaf0:/# show ip bgp
BGP table version is 7, local router ID is 10.1.5.1, vrf id 0
Default local pref 100, local AS 65005
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.1.5.0/24      0.0.0.0                  0         32768 i
*= 10.1.8.0/24      10.1.12.2                              0 800 65008 i
*>                  10.1.10.2                              0 500 65008 i
*>i10.98.0.0/24     10.1.5.10                     100      0 i
*= 10.98.1.0/24     10.1.12.2                              0 800 65008 i
*>                  10.1.10.2                              0 500 65008 i
*= 10.98.2.0/24     10.1.12.2                              0 800 65008 i
*>                  10.1.10.2                              0 500 65008 i
*>i10.98.3.0/24     10.1.5.11                     100      0 i
  • BGP state on spine0
root@spine0:/# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* 0.0.0.0/0 [0/0] via 172.20.20.1, eth0, 00:48:51
B>* 10.1.5.0/24 [20/0] via 10.1.10.1, eth1, 00:48:28
B>* 10.1.8.0/24 [20/0] via 10.1.34.1, eth2, 00:48:30
C>* 10.1.10.0/24 is directly connected, eth1, 00:48:41
C>* 10.1.34.0/24 is directly connected, eth2, 00:48:39
B>* 10.98.0.0/24 [20/0] via 10.1.10.1, eth1, 00:45:03
B>* 10.98.1.0/24 [20/0] via 10.1.34.1, eth2, 00:45:05
B>* 10.98.2.0/24 [20/0] via 10.1.34.1, eth2, 00:45:04
B>* 10.98.3.0/24 [20/0] via 10.1.10.1, eth1, 00:45:04
C>* 172.20.20.0/24 is directly connected, eth0, 00:48:51

root@spine0:/# show ip bgp
BGP table version is 6, local router ID is 10.1.10.2, vrf id 0
Default local pref 100, local AS 500
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.1.5.0/24      10.1.10.1                0             0 65005 i
*> 10.1.8.0/24      10.1.34.1                0             0 65008 i
*> 10.98.0.0/24     10.1.10.1                              0 65005 i
*> 10.98.1.0/24     10.1.34.1                              0 65008 i
*> 10.98.2.0/24     10.1.34.1                              0 65008 i
*> 10.98.3.0/24     10.1.10.1                              0 65005 i

六、BGP ControlPlane + MetalLB: Announcing SVC IPs

Services in the cluster eventually need to be exposed externally; in-cluster reachability alone rarely satisfies production requirements. Here MetalLB manages the LB IP allocation, and BGP announces the LoadBalancer service IPs.

Install MetalLB to provide the LoadBalancer capability

  • Install MetalLB
root@kind:~# kubectl apply -f https://gh.api.99988866.xyz/https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml

namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
  • Create the MetalLB Layer 2 IPAddressPool
root@kind:~# cat metallb-l2-ip-config.yaml
---
# address range MetalLB assigns to LoadBalancer services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ippool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.0.200-172.18.0.210

---
# L2Advertisement announces the IPAddressPool range
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ippool
  namespace: metallb-system
spec:
  ipAddressPools:
  - ippool


root@kind:~# kubectl apply -f metallb-l2-ip-config.yaml
ipaddresspool.metallb.io/ippool unchanged
l2advertisement.metallb.io/ippool created

root@kind:~# kubectl get ipaddresspool -n metallb-system
NAME     AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
ippool   true          false             ["172.18.0.200-172.18.0.210"]

root@kind:~# kubectl get l2advertisement -n metallb-system
NAME     IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
ippool   ["ippool"]  

Check the cni svc again

root@kind:~# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
cni          LoadBalancer   10.96.10.182   172.18.0.200   80:31354/TCP   78m
kubernetes   ClusterIP      10.96.0.1      <none>         443/TCP        97m

The cni svc has now obtained EXTERNAL-IP: 172.18.0.200. Next, check whether BGP announces this IP, and whether other nodes have a route to reach it.

Check leaf0's routing table

root@leaf0:/# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* 0.0.0.0/0 [0/0] via 172.20.20.1, eth0, 01:24:40
C>* 10.1.5.0/24 is directly connected, eth3, 01:24:28
B>* 10.1.8.0/24 [20/0] via 10.1.10.2, eth1, 01:24:17
  *                    via 10.1.12.2, eth2, 01:24:17
C>* 10.1.10.0/24 is directly connected, eth1, 01:24:25
C>* 10.1.12.0/24 is directly connected, eth2, 01:24:26
B>* 10.98.0.0/24 [200/0] via 10.1.5.10, eth3, 01:20:52
B>* 10.98.1.0/24 [20/0] via 10.1.10.2, eth1, 01:20:54
  *                     via 10.1.12.2, eth2, 01:20:54
B>* 10.98.2.0/24 [20/0] via 10.1.10.2, eth1, 01:20:53
  *                     via 10.1.12.2, eth2, 01:20:53
B>* 10.98.3.0/24 [200/0] via 10.1.5.11, eth3, 01:20:53
C>* 172.20.20.0/24 is directly connected, eth0, 01:24:40

The routing table shows this IP has not yet been announced. The k8s nodes can still reach it, but leaf0, leaf1, spine0 and spine1 cannot, since they have no route to 172.18.0.200.

Access test from leaf0

# the request times out
root@leaf0:/# curl 172.18.0.200

修改 Cilium BGPPeeringPolicy 规则,进行路由宣告

root@kind:~# cat bgp_peering_policy.yaml
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack0
spec:
  nodeSelector:
    # per the kind config, this selects clab-bgp-control-plane and clab-bgp-worker
    matchLabels:
      rack: rack0
  virtualRouters:
  - localASN: 65005
    serviceSelector:
      # newly added configuration
      matchExpressions:
        - {key: env, operator: NotIn, values: ["test"]}
    exportPodCIDR: true
    neighbors:
    - peerAddress: "10.1.5.1/24"
      peerASN: 65005
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack1
spec:
  nodeSelector:
    # per the kind config, this selects clab-bgp-worker2 and clab-bgp-worker3
    matchLabels:
      rack: rack1
  virtualRouters:
  - localASN: 65008
    serviceSelector:
      # newly added configuration
      matchExpressions:
        - {key: env, operator: NotIn, values: ["test"]}
    exportPodCIDR: true
    neighbors:
    - peerAddress: "10.1.8.1/24"
      peerASN: 65008

serviceSelector.matchExpressions=[xxx] was added: every service's EXTERNAL-IP will be published via BGP except those of services labelled env: test.
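The NotIn operator follows standard Kubernetes label-selector semantics: a service matches when it lacks the key entirely, or carries a value outside the listed set. A minimal sketch of that rule (illustrative, not Cilium's implementation):

```python
def matches_not_in(labels: dict, key: str, values: list) -> bool:
    """NotIn: true when the label is absent OR its value is outside `values`."""
    return labels.get(key) not in values

# A service without an env label is announced:
assert matches_not_in({"app": "cni"}, "env", ["test"]) is True
# A service labelled env=test is excluded from BGP announcement:
assert matches_not_in({"env": "test"}, "env", ["test"]) is False
```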

root@kind:~# kubectl delete ciliumbgppeeringpolicies rack0 rack1
ciliumbgppeeringpolicy.cilium.io "rack0" deleted
ciliumbgppeeringpolicy.cilium.io "rack1" deleted
root@kind:~# kubectl apply -f bgp_peering_policy.yaml
ciliumbgppeeringpolicy.cilium.io/rack0 created
ciliumbgppeeringpolicy.cilium.io/rack1 created

Check the leaf0 and spine0 routing tables again

root@leaf0:/# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* 0.0.0.0/0 [0/0] via 172.20.20.1, eth0, 01:32:30
C>* 10.1.5.0/24 is directly connected, eth3, 01:32:18
B>* 10.1.8.0/24 [20/0] via 10.1.10.2, eth1, 01:32:07
  *                    via 10.1.12.2, eth2, 01:32:07
C>* 10.1.10.0/24 is directly connected, eth1, 01:32:15
C>* 10.1.12.0/24 is directly connected, eth2, 01:32:16
B>* 10.98.0.0/24 [200/0] via 10.1.5.10, eth3, 00:00:00
B>* 10.98.1.0/24 [20/0] via 10.1.10.2, eth1, 00:00:01
  *                     via 10.1.12.2, eth2, 00:00:01
B>* 10.98.2.0/24 [20/0] via 10.1.10.2, eth1, 00:00:03
  *                     via 10.1.12.2, eth2, 00:00:03
B>* 10.98.3.0/24 [200/0] via 10.1.5.11, eth3, 00:00:00
B>* 172.18.0.200/32 [200/0] via 10.1.5.10, eth3, 00:00:00
  *                         via 10.1.5.11, eth3, 00:00:00
C>* 172.20.20.0/24 is directly connected, eth0, 01:32:30

root@leaf0:/# show ip bgp  
BGP table version is 28, local router ID is 10.1.5.1, vrf id 0
Default local pref 100, local AS 65005
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.1.5.0/24      0.0.0.0                  0         32768 i
*= 10.1.8.0/24      10.1.12.2                              0 800 65008 i
*>                  10.1.10.2                              0 500 65008 i
*>i10.98.0.0/24     10.1.5.10                     100      0 i
*= 10.98.1.0/24     10.1.12.2                              0 800 65008 i
*>                  10.1.10.2                              0 500 65008 i
*= 10.98.2.0/24     10.1.12.2                              0 800 65008 i
*>                  10.1.10.2                              0 500 65008 i
*>i10.98.3.0/24     10.1.5.11                     100      0 i
*>i172.18.0.200/32  10.1.5.10                     100      0 i
*=i                 10.1.5.11                     100      0 i
*                   10.1.12.2                              0 800 65008 i
*                   10.1.10.2                              0 500 65008 i
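A note on the output: the bracketed values in show ip route are [administrative distance/metric] (20 for eBGP, 200 for iBGP), and in the BGP table 172.18.0.200/32 has both an iBGP path learned from the local nodes and eBGP paths relayed by the spines; the iBGP path wins best-path selection because its AS path is empty. A much-simplified sketch of that comparison (the real algorithm has many more tie-breakers):

```python
def best_path(paths):
    """Prefer higher LOCAL_PREF, then the shorter AS path (simplified)."""
    return min(paths, key=lambda p: (-p["local_pref"], len(p["as_path"])))

paths = [
    {"next_hop": "10.1.5.10", "local_pref": 100, "as_path": []},            # iBGP, originated in AS 65005
    {"next_hop": "10.1.12.2", "local_pref": 100, "as_path": [800, 65008]},  # eBGP via spine1
]
print(best_path(paths)["next_hop"])  # 10.1.5.10 wins on the shorter AS path
```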
root@spine0:/# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* 0.0.0.0/0 [0/0] via 172.20.20.1, eth0, 01:32:50
B>* 10.1.5.0/24 [20/0] via 10.1.10.1, eth1, 01:32:27
B>* 10.1.8.0/24 [20/0] via 10.1.34.1, eth2, 01:32:29
C>* 10.1.10.0/24 is directly connected, eth1, 01:32:40
C>* 10.1.34.0/24 is directly connected, eth2, 01:32:38
B>* 10.98.0.0/24 [20/0] via 10.1.10.1, eth1, 00:00:20
B>* 10.98.1.0/24 [20/0] via 10.1.34.1, eth2, 00:00:21
B>* 10.98.2.0/24 [20/0] via 10.1.34.1, eth2, 00:00:23
B>* 10.98.3.0/24 [20/0] via 10.1.10.1, eth1, 00:00:20
B>* 172.18.0.200/32 [20/0] via 10.1.34.1, eth2, 00:00:23
C>* 172.20.20.0/24 is directly connected, eth0, 01:32:50

root@spine0:/# show ip bgp
BGP table version is 25, local router ID is 10.1.10.2, vrf id 0
Default local pref 100, local AS 500
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*> 10.1.5.0/24      10.1.10.1                0             0 65005 i
*> 10.1.8.0/24      10.1.34.1                0             0 65008 i
*> 10.98.0.0/24     10.1.10.1                              0 65005 i
*> 10.98.1.0/24     10.1.34.1                              0 65008 i
*> 10.98.2.0/24     10.1.34.1                              0 65008 i
*> 10.98.3.0/24     10.1.10.1                              0 65005 i
*  172.18.0.200/32  10.1.10.1                              0 65005 i
*>                  10.1.34.1                              0 65008 i

Both leaf0 and spine0 now have routes to 172.18.0.200, and the same holds for the other two nodes.

Test from spine0

# the request now succeeds
root@spine0:/# curl 172.18.0.200
PodName: cni-8cnm5 | PodIP: eth0 10.98.0.18/32

七、BGP ControlPlane + LB IPAM: Announcing SVC IPs

Cilium itself can also allocate and manage service EXTERNAL-IP addresses; besides the MetalLB approach, the LB IPAM feature can be used to announce services.

Clean up the MetalLB environment

root@kind:~# kubectl delete -f metallb-l2-ip-config.yaml
ipaddresspool.metallb.io "ippool" deleted
l2advertisement.metallb.io "ippool" deleted

root@kind:~# kubectl delete -f https://gh.api.99988866.xyz/https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml
namespace "metallb-system" deleted
customresourcedefinition.apiextensions.k8s.io "addresspools.metallb.io" deleted
customresourcedefinition.apiextensions.k8s.io "bfdprofiles.metallb.io" deleted
customresourcedefinition.apiextensions.k8s.io "bgpadvertisements.metallb.io" deleted
customresourcedefinition.apiextensions.k8s.io "bgppeers.metallb.io" deleted
customresourcedefinition.apiextensions.k8s.io "communities.metallb.io" deleted
customresourcedefinition.apiextensions.k8s.io "ipaddresspools.metallb.io" deleted
customresourcedefinition.apiextensions.k8s.io "l2advertisements.metallb.io" deleted
serviceaccount "controller" deleted
serviceaccount "speaker" deleted
role.rbac.authorization.k8s.io "controller" deleted
role.rbac.authorization.k8s.io "pod-lister" deleted
clusterrole.rbac.authorization.k8s.io "metallb-system:controller" deleted
clusterrole.rbac.authorization.k8s.io "metallb-system:speaker" deleted
rolebinding.rbac.authorization.k8s.io "controller" deleted
rolebinding.rbac.authorization.k8s.io "pod-lister" deleted
clusterrolebinding.rbac.authorization.k8s.io "metallb-system:controller" deleted
clusterrolebinding.rbac.authorization.k8s.io "metallb-system:speaker" deleted
configmap "metallb-excludel2" deleted
secret "webhook-server-cert" deleted
service "webhook-service" deleted
deployment.apps "controller" deleted
daemonset.apps "speaker" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "metallb-webhook-configuration" deleted

Redeploy the Pod and Svc

root@kind:~# kubectl delete -f cni.yaml 
daemonset.apps "cni" deleted
service "cni" deleted
root@kind:~# kubectl apply -f cni.yaml 
daemonset.apps/cni created
service/cni created
root@kind:~# kubectl get svc
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
cni          LoadBalancer   10.96.154.8   <pending>     80:31337/TCP   2s
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        117m

Create CiliumLoadBalancerIPPool resources to define the IP address pools

root@kind:~# cat lb-ipam.yaml
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "blue-pool"
spec:
  cidrs:
  - cidr: "20.0.10.0/24"
  serviceSelector:
    matchExpressions:
      - {key: color, operator: In, values: [blue, cyan]}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "red-pool"
spec:
  cidrs:
  - cidr: "30.0.10.0/24"
  serviceSelector:
    # this pool only applies to services labelled `color: red`
    matchLabels:
      color: red

root@kind:~# kubectl apply -f lb-ipam.yaml
ciliumloadbalancerippool.cilium.io/blue-pool created
ciliumloadbalancerippool.cilium.io/red-pool created

root@kind:~# kubectl get CiliumLoadBalancerIPPool
NAME        DISABLED   CONFLICTING   IPS AVAILABLE   AGE
blue-pool   false      False         254             3m17s
red-pool    false      False         253             3m17s

root@kind:~# kubectl get svc
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
cni          LoadBalancer   10.96.154.8   30.0.10.203   80:31337/TCP   5m13s
kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        122m

The cni svc has been assigned EXTERNAL-IP: 30.0.10.203.
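The IPS AVAILABLE counts reported above are consistent with excluding the network and broadcast addresses of each /24 and subtracting already-allocated IPs (red-pool dropped to 253 once 30.0.10.203 was handed to the cni service). A back-of-the-envelope check under that assumption:

```python
import ipaddress

def assignable(cidr: str, allocated: int = 0) -> int:
    """Usable LB IPs in a pool: all addresses in the CIDR minus the
    network and broadcast addresses, minus IPs already handed out."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses - 2 - allocated

print(assignable("20.0.10.0/24"))               # blue-pool: 254
print(assignable("30.0.10.0/24", allocated=1))  # red-pool: 253 after 30.0.10.203 is assigned
```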

Create the CiliumBGPPeeringPolicy

The CiliumBGPPeeringPolicy resources were already created in the previous steps, so they can simply be reused.

root@kind:~# cat bgp_peering_policy.yaml
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack0
spec:
  nodeSelector:
    # per the kind config, this selects clab-bgp-control-plane and clab-bgp-worker
    matchLabels:
      rack: rack0
  virtualRouters:
  - localASN: 65005
    serviceSelector:
      # newly added configuration
      matchExpressions:
        - {key: env, operator: NotIn, values: ["test"]}
    exportPodCIDR: true
    neighbors:
    - peerAddress: "10.1.5.1/24"
      peerASN: 65005
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack1
spec:
  nodeSelector:
    # per the kind config, this selects clab-bgp-worker2 and clab-bgp-worker3
    matchLabels:
      rack: rack1
  virtualRouters:
  - localASN: 65008
    serviceSelector:
      # newly added configuration
      matchExpressions:
        - {key: env, operator: NotIn, values: ["test"]}
    exportPodCIDR: true
    neighbors:
    - peerAddress: "10.1.8.1/24"
      peerASN: 65008

Check the leaf0 and spine0 routing tables

root@leaf0:/# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* 0.0.0.0/0 [0/0] via 172.20.20.1, eth0, 01:52:27
C>* 10.1.5.0/24 is directly connected, eth3, 01:52:15
B>* 10.1.8.0/24 [20/0] via 10.1.10.2, eth1, 01:52:04
  *                    via 10.1.12.2, eth2, 01:52:04
C>* 10.1.10.0/24 is directly connected, eth1, 01:52:12
C>* 10.1.12.0/24 is directly connected, eth2, 01:52:13
B>* 10.98.0.0/24 [200/0] via 10.1.5.10, eth3, 00:00:22
B>* 10.98.1.0/24 [20/0] via 10.1.10.2, eth1, 00:00:25
  *                     via 10.1.12.2, eth2, 00:00:25
B>* 10.98.2.0/24 [20/0] via 10.1.10.2, eth1, 00:00:23
  *                     via 10.1.12.2, eth2, 00:00:23
B>* 10.98.3.0/24 [200/0] via 10.1.5.11, eth3, 00:00:24
B>* 30.0.10.203/32 [200/0] via 10.1.5.10, eth3, 00:00:22
  *                        via 10.1.5.11, eth3, 00:00:22
C>* 172.20.20.0/24 is directly connected, eth0, 01:52:27
root@spine0:/# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

K>* 0.0.0.0/0 [0/0] via 172.20.20.1, eth0, 01:52:41
B>* 10.1.5.0/24 [20/0] via 10.1.10.1, eth1, 01:52:18
B>* 10.1.8.0/24 [20/0] via 10.1.34.1, eth2, 01:52:20
C>* 10.1.10.0/24 is directly connected, eth1, 01:52:31
C>* 10.1.34.0/24 is directly connected, eth2, 01:52:29
B>* 10.98.0.0/24 [20/0] via 10.1.10.1, eth1, 00:00:36
B>* 10.98.1.0/24 [20/0] via 10.1.34.1, eth2, 00:00:39
B>* 10.98.2.0/24 [20/0] via 10.1.34.1, eth2, 00:00:37
B>* 10.98.3.0/24 [20/0] via 10.1.10.1, eth1, 00:00:38
B>* 30.0.10.203/32 [20/0] via 10.1.34.1, eth2, 00:00:39
C>* 172.20.20.0/24 is directly connected, eth0, 01:52:41

Both leaf0 and spine0 have routes to 30.0.10.203, and the same holds for the other two nodes.

  • Test from spine0
# spine0 can reach the service
root@spine0:/# curl 30.0.10.203
PodName: cni-dzn57 | PodIP: eth0 10.98.0.104/32

# clab-bgp-control-plane can reach the service too
root@clab-bgp-control-plane:/# curl 30.0.10.203
PodName: cni-67gqw | PodIP: eth0 10.98.1.43/32

八、BGP Is Not a Silver Bullet

  • Underlay plus Layer 3 routing is very popular in traditional data centers because of its excellent performance. In public cloud VPCs, however, its use is restricted: not every cloud provider allows it, and each defines network protection differently. Calico's BGP works on AWS, but not on Azure, whose VPC blocks traffic from IPs outside its managed range.
  • Even where a BGP underlay is usable, it cannot span AZs. In public clouds, crossing an AZ generally means crossing subnets, which means crossing routers, and VPC vRouters generally do not speak BGP. So even when a BGP underlay works, it is limited to a single AZ.

九、vyos Gateway Configuration

spine0

set interfaces ethernet eth1 address '10.1.10.2/24'
set interfaces ethernet eth2 address '10.1.34.2/24'
set interfaces loopback lo
set protocols bgp 500 neighbor 10.1.10.1 remote-as '65005'
set protocols bgp 500 neighbor 10.1.34.1 remote-as '65008'
set protocols bgp 500 parameters router-id '10.1.10.2'
set system config-management commit-revisions '100'
set system console device ttyS0 speed '9600'
set system host-name 'spine0'
set system login user vyos authentication encrypted-password '$6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/'
set system login user vyos authentication plaintext-password ''
set system login user vyos level 'admin'
set system ntp server 0.pool.ntp.org
set system ntp server 1.pool.ntp.org
set system ntp server 2.pool.ntp.org
set system syslog global facility all level 'info'
set system syslog global facility protocols level 'debug'

spine1

set interfaces ethernet eth1 address '10.1.12.2/24'
set interfaces ethernet eth2 address '10.1.11.2/24'
set interfaces loopback lo
set protocols bgp 800 neighbor 10.1.11.1 remote-as '65008'
set protocols bgp 800 neighbor 10.1.12.1 remote-as '65005'
set protocols bgp 800 parameters router-id '10.1.12.2'
set system config-management commit-revisions '100'
set system console device ttyS0 speed '9600'
set system host-name 'spine1'
set system login user vyos authentication encrypted-password '$6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/'
set system login user vyos authentication plaintext-password ''
set system login user vyos level 'admin'
set system ntp server 0.pool.ntp.org
set system ntp server 1.pool.ntp.org
set system ntp server 2.pool.ntp.org
set system syslog global facility all level 'info'
set system syslog global facility protocols level 'debug'

leaf0

set interfaces ethernet eth1 address '10.1.10.1/24'
set interfaces ethernet eth2 address '10.1.12.1/24'
set interfaces ethernet eth3 address '10.1.5.1/24'
set interfaces loopback lo
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address '10.1.0.0/16'
set nat source rule 100 translation address 'masquerade'
set protocols bgp 65005 address-family ipv4-unicast network 10.1.5.0/24
set protocols bgp 65005 neighbor 10.1.5.10 address-family ipv4-unicast route-reflector-client
set protocols bgp 65005 neighbor 10.1.5.10 remote-as '65005'
set protocols bgp 65005 neighbor 10.1.5.11 address-family ipv4-unicast route-reflector-client
set protocols bgp 65005 neighbor 10.1.5.11 remote-as '65005'
set protocols bgp 65005 neighbor 10.1.10.2 remote-as '500'
set protocols bgp 65005 neighbor 10.1.12.2 remote-as '800'
set protocols bgp 65005 parameters bestpath as-path multipath-relax
set protocols bgp 65005 parameters router-id '10.1.5.1'
set system config-management commit-revisions '100'
set system console device ttyS0 speed '9600'
set system host-name 'leaf0'
set system login user vyos authentication encrypted-password '$6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/'
set system login user vyos authentication plaintext-password ''
set system login user vyos level 'admin'
set system ntp server 0.pool.ntp.org
set system ntp server 1.pool.ntp.org
set system ntp server 2.pool.ntp.org
set system syslog global facility all level 'info'
set system syslog global facility protocols level 'debug'

leaf1

set interfaces ethernet eth1 address '10.1.34.1/24'
set interfaces ethernet eth2 address '10.1.11.1/24'
set interfaces ethernet eth3 address '10.1.8.1/24'
set interfaces loopback lo
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address '10.1.0.0/16'
set nat source rule 100 translation address 'masquerade'
set protocols bgp 65008 address-family ipv4-unicast network 10.1.8.0/24
set protocols bgp 65008 neighbor 10.1.8.10 address-family ipv4-unicast route-reflector-client
set protocols bgp 65008 neighbor 10.1.8.10 remote-as '65008'
set protocols bgp 65008 neighbor 10.1.8.11 address-family ipv4-unicast route-reflector-client
set protocols bgp 65008 neighbor 10.1.8.11 remote-as '65008'
set protocols bgp 65008 neighbor 10.1.11.2 remote-as '800'
set protocols bgp 65008 neighbor 10.1.34.2 remote-as '500'
set protocols bgp 65008 parameters bestpath as-path multipath-relax
set protocols bgp 65008 parameters router-id '10.1.8.1'
set system config-management commit-revisions '100'
set system console device ttyS0 speed '9600'
set system host-name 'leaf1'
set system login user vyos authentication encrypted-password '$6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/'
set system login user vyos authentication plaintext-password ''
set system login user vyos level 'admin'
set system ntp server 0.pool.ntp.org
set system ntp server 1.pool.ntp.org
set system ntp server 2.pool.ntp.org
set system syslog global facility all level 'info'
set system syslog global facility protocols level 'debug'
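The `set` commands listed above take effect only after they are committed. On each of the four gateways (spine0, spine1, leaf0, leaf1) the workflow is the same; a minimal sketch:

```shell
vyos@leaf1:~$ configure              # enter configuration mode
# ... paste the `set` commands for this node ...
vyos@leaf1# commit                   # apply the candidate configuration
vyos@leaf1# save                     # persist it to /config/config.boot
vyos@leaf1# exit                     # return to operational mode
```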

Original source

https://github.com/HFfleming/k8s-network-learning/blob/main/cilium-cni/BGP/Cilium-BGP-ControlPlane.md

posted @ evescn