Cilium Cluster Mesh (reposted)

1. Environment Information

Host        IP
ubuntu      172.16.94.141

Software    Version
docker      26.1.4
helm        v3.15.0-rc.2
kind        0.18.0
clab        0.54.2
cilium CLI  0.13.0
kubernetes  1.23.4
ubuntu os   Ubuntu 20.04.6 LTS
kernel      5.11.5 (see the kernel upgrade doc)

The cilium CLI version used here is 0.13.0. Some CLI versions lack the --inherit-ca flag; without it, inheriting the cilium-ca certificate from kind-cluster1 fails and cilium cannot be installed into the kind-cluster2 cluster.
Alternatively, you can manually import the kind-cluster1 CA secret into kind-cluster2 and then install cilium there.

2. Cilium ClusterMesh Architecture Overview


  • Cilium's multi-cluster control plane is built on etcd and kept as simple as possible:
    • Each Kubernetes cluster maintains its own etcd cluster containing that cluster's state. State from multiple clusters is never mixed in a single etcd.
    • Each cluster exposes its own etcd through a set of etcd proxies. Cilium agents running in other clusters connect to the etcd proxies to watch cluster resource state and replicate multi-cluster-relevant state into their own cluster. Using etcd proxies keeps the number of etcd watchers scalable. Access is protected by TLS certificates.
    • Access from one cluster into another is always read-only. This keeps failure domains intact: a failure in one cluster never propagates to the others.
    • Configuration happens via a simple Kubernetes secret containing the remote etcd proxy addresses, the cluster name, and the certificates required to access the etcd proxies.
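The per-remote-cluster configuration carried in that secret is essentially an etcd client config. A sketch of one entry (the hostname and file paths below are illustrative; the real content is generated by `cilium clustermesh connect`):

```yaml
# One file per remote cluster, named after the cluster (e.g. "cluster2"):
endpoints:
- https://cluster2.mesh.cilium.io:2379
trusted-ca-file: /var/lib/cilium/clustermesh/cluster2-ca.crt
cert-file: /var/lib/cilium/clustermesh/cluster2.etcd-client.crt
key-file: /var/lib/cilium/clustermesh/cluster2.etcd-client.key
```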

Blog post

3. Why Use Cilium ClusterMesh

  • High Availability / Disaster Recovery

    • Cluster Mesh enhances service high availability and fault tolerance. It supports running Kubernetes clusters across multiple regions or availability zones. If resources become temporarily unavailable, a cluster is misconfigured, or it is taken offline for upgrades, traffic can fail over to other clusters, keeping your services reachable at all times.
  • Shared Services Across Clusters

    • Cluster Mesh allows services such as secrets management, logging, monitoring, or DNS to be shared between all clusters. This reduces operational overhead, simplifies management, and preserves isolation between tenant clusters.
  • Splitting Stateful and Stateless Services

    • Cluster Mesh supports running separate clusters for stateless and stateful workloads. This isolates dependency complexity to a smaller number of clusters and keeps the stateless clusters free of those dependencies.
  • Transparent Service Discovery

    • Cluster Mesh automatically discovers services across Kubernetes clusters. Using standard Kubernetes services, it merges services with the same name and namespace across clusters into a single global service. Applications can therefore discover and talk to a service regardless of which cluster it lives in, greatly simplifying cross-cluster communication.
  • Pod IP Routing

    • Cluster Mesh can route Pod IPs across multiple Kubernetes clusters at native performance. Using either tunneling or direct routing, it requires no gateways or proxies, letting your Pods communicate seamlessly across clusters and improving the overall efficiency of a microservices architecture.
    • native-routing mode
    • tunnel (encapsulation) mode
  • Uniform Network Policy Enforcement

    • Cluster Mesh extends Cilium's layer 3-7 network policy enforcement to all clusters in the mesh. It standardizes how network policies are applied, ensuring a consistent security posture across your entire Kubernetes deployment, no matter how many clusters are involved.

4. Setting Up a Cilium ClusterMesh Environment

Installation notes

  • All Kubernetes worker nodes must be assigned unique IP addresses, and all worker nodes must have IP connectivity to each other.
  • Each cluster must be assigned a unique PodCIDR range.
  • Cilium must be configured to use etcd as its kvstore.
  • The network between clusters must allow inter-cluster communication. The exact firewall requirements depend on whether Cilium runs in direct-routing or tunnel mode.
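To illustrate the unique-PodCIDR rule, the two kind configs below use 10.10.0.0/16 and 10.20.0.0/16. A tiny sketch (not official tooling) that flags colliding /16 pod subnets before you create the clusters:

```shell
#!/bin/sh
# Illustrative check only: for the /16 pod subnets used in this lab,
# comparing the first two octets is enough to detect a collision.
overlap() {
  [ "$(echo "$1" | cut -d. -f1-2)" = "$(echo "$2" | cut -d. -f1-2)" ]
}

overlap "10.10.0.0/16" "10.20.0.0/16" && echo "collision" || echo "ok"   # -> ok
overlap "10.10.0.0/16" "10.10.0.0/16" && echo "collision" || echo "ok"   # -> collision
```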

cilium CLI tool

Install the kind-cluster1 cluster

kind-cluster1 configuration:
#!/bin/bash
set -v
date

# 1. prep noCNI env
cat <<EOF | kind create cluster --name=cluster1 --image=kindest/node:v1.23.4 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # kind ships its own default CNI; disable it so we can install our own
  disableDefaultCNI: true
  # pod subnet
  podSubnet: "10.10.0.0/16"
  # service subnet
  serviceSubnet: "10.11.0.0/16"

nodes:
  - role: control-plane
  - role: worker

EOF

# 2. remove taints
kubectl taint nodes $(kubectl get nodes -o name | grep control-plane) node-role.kubernetes.io/control-plane:NoSchedule-
kubectl get nodes -o wide

# 3. install CNI
cilium install --context kind-cluster1 \
  --version v1.13.0-rc5  \
  --helm-set ipam.mode=kubernetes,cluster.name=cluster1,cluster.id=1 
cilium status --context kind-cluster1 --wait

# 4.install necessary tools
for i in $(docker ps -a --format "table {{.Names}}" | grep cluster1)
do
    echo $i
    docker cp /usr/bin/ping $i:/usr/bin/ping
    docker exec -it $i bash -c "sed -i -e 's/jp.archive.ubuntu.com\|archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list"
    docker exec -it $i bash -c "apt-get -y update >/dev/null && apt-get -y install net-tools tcpdump lrzsz bridge-utils >/dev/null 2>&1"
done
  • Install the kind-cluster1 cluster and the cilium service
root@kind:~# ./install.sh

Creating cluster "cluster1" ...
 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-cluster1"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster1

Have a nice day! 👋

# install cilium
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.18.0"
ℹ️  Using Cilium version 1.13.0-rc5
🔮 Auto-detected cluster name: kind-cluster1
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has been installed
ℹ️  helm template --namespace kube-system cilium cilium/cilium --version 1.13.0-rc5 --set cluster.id=1,cluster.name=cluster1,encryption.nodeEncryption=false,ipam.mode=kubernetes,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan
ℹ️  Storing helm values file in kube-system/cilium-cli-helm-values Secret
🔑 Created CA in secret cilium-ca
🔑 Generating certificates for Hubble...
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap for Cilium version 1.13.0-rc5...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed and ready...
✅ Cilium was successfully installed! Run 'cilium status' to view installation health

    /¯¯\
 /¯¯\__/¯¯\    Cilium:          OK
 \__/¯¯\__/    Operator:        OK
 /¯¯\__/¯¯\    Hubble Relay:    disabled
 \__/¯¯\__/    ClusterMesh:     disabled
    \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 2
                  cilium-operator    Running: 1
Cluster Pods:     3/3 managed by Cilium
Image versions    cilium-operator    quay.io/cilium/operator-generic:v1.13.0-rc5@sha256:74c05a1e27f6f7e4d410a4b9e765ab4bb33c36d19016060a7e82c8d305ff2d61: 1
                  cilium             quay.io/cilium/cilium:v1.13.0-rc5@sha256:143c6fb2f32cbd28bb3abb3e9885aab0db19fae2157d167f3bc56021c4fd1ad8: 2

Tunnel mode (vxlan) is used by default.

Check the services installed in the kind-cluster1 cluster:
root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          cilium-79vq6                                     1/1     Running   0          2m29s
kube-system          cilium-operator-76564696fd-jpcs8                 1/1     Running   0          2m29s
kube-system          cilium-thfjk                                     1/1     Running   0          2m29s
kube-system          coredns-64897985d-jtbmb                          1/1     Running   0          2m50s
kube-system          coredns-64897985d-n6ctz                          1/1     Running   0          2m50s
kube-system          etcd-cluster1-control-plane                      1/1     Running   0          3m4s
kube-system          kube-apiserver-cluster1-control-plane            1/1     Running   0          3m4s
kube-system          kube-controller-manager-cluster1-control-plane   1/1     Running   0          3m4s
kube-system          kube-proxy-8p985                                 1/1     Running   0          2m31s
kube-system          kube-proxy-b4lq4                                 1/1     Running   0          2m50s
kube-system          kube-scheduler-cluster1-control-plane            1/1     Running   0          3m4s
local-path-storage   local-path-provisioner-5ddd94ff66-dhpsg          1/1     Running   0          2m50s

root@kind:~# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.11.0.1    <none>        443/TCP                  2m41s
kube-system   kube-dns     ClusterIP   10.11.0.10   <none>        53/UDP,53/TCP,9153/TCP   2m40s
kind-cluster1 cilium configuration:
root@kind:~# kubectl -n kube-system exec -it ds/cilium -- cilium status

KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.23 (v1.23.4) [linux/amd64]
Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Disabled   
Host firewall:           Disabled
CNI Chaining:            none
CNI Config file:         CNI configuration file management disabled
Cilium:                  Ok   1.13.0-rc5 (v1.13.0-rc5-dc22a46f)
NodeMonitor:             Listening for events on 128 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 5/254 allocated from 10.10.0.0/24, 
ClusterMesh:             0/0 clusters ready, 0 global-services
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       30/30 healthy
Proxy Status:            OK, ip 10.10.0.170, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 171/4095 (4.18%), Flows/s: 2.81   Metrics: Disabled
Encryption:              Disabled
Cluster health:          1/1 reachable   (2024-07-20T02:40:13Z)

Install the kind-cluster2 cluster

kind-cluster2 configuration:

Install kind-cluster2: the key step is to inherit the cilium-ca certificate from kind-cluster1 during the install CNI step.

#!/bin/bash
set -v
date

# 1. prep noCNI env
cat <<EOF | kind create cluster --name=cluster2 --image=kindest/node:v1.23.4 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # kind ships its own default CNI; disable it so we can install our own
  disableDefaultCNI: true
  # pod subnet
  podSubnet: "10.20.0.0/16"
  # service subnet
  serviceSubnet: "10.21.0.0/16"

nodes:
  - role: control-plane
  - role: worker

EOF

# 2. remove taints
kubectl taint nodes $(kubectl get nodes -o name | grep control-plane) node-role.kubernetes.io/control-plane:NoSchedule-
kubectl get nodes -o wide

# 3. install CNI
# If your cilium CLI does not support --inherit-ca, import the CA secret manually:
# kubectl --context=kind-cluster1 get secret -n kube-system cilium-ca -o yaml | kubectl --context kind-cluster2 create -f -
cilium install --context kind-cluster2 \
  --version v1.13.0-rc5 \
  --helm-set ipam.mode=kubernetes,cluster.name=cluster2,cluster.id=2 --inherit-ca kind-cluster1
cilium status --context kind-cluster2 --wait

# 4.install necessary tools
for i in $(docker ps -a --format "table {{.Names}}" | grep cluster2)
do
    echo $i
    docker cp /usr/bin/ping $i:/usr/bin/ping
    docker exec -it $i bash -c "sed -i -e 's/jp.archive.ubuntu.com\|archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list"
    docker exec -it $i bash -c "apt-get -y update >/dev/null && apt-get -y install net-tools tcpdump lrzsz bridge-utils >/dev/null 2>&1"
done
  • Install the kind-cluster2 cluster and the cilium service
root@kind:~# ./install.sh

Creating cluster "cluster2" ...
 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-cluster2"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster2

Have a nice day! 👋

# install cilium
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.18.0"
ℹ️  Using Cilium version 1.13.0-rc5
🔮 Auto-detected cluster name: kind-cluster2
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has been installed
ℹ️  helm template --namespace kube-system cilium cilium/cilium --version 1.13.0-rc5 --set cluster.id=2,cluster.name=cluster2,encryption.nodeEncryption=false,ipam.mode=kubernetes,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan
ℹ️  Storing helm values file in kube-system/cilium-cli-helm-values Secret
🔑 Found CA in secret cilium-ca
🔑 Generating certificates for Hubble...
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap for Cilium version 1.13.0-rc5...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed and ready...
✅ Cilium was successfully installed! Run 'cilium status' to view installation health

    /¯¯\
 /¯¯\__/¯¯\    Cilium:          OK
 \__/¯¯\__/    Operator:        OK
 /¯¯\__/¯¯\    Hubble Relay:    disabled
 \__/¯¯\__/    ClusterMesh:     disabled
    \__/

DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 2
                  cilium-operator    Running: 1
Cluster Pods:     3/3 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.13.0-rc5@sha256:143c6fb2f32cbd28bb3abb3e9885aab0db19fae2157d167f3bc56021c4fd1ad8: 2
                  cilium-operator    quay.io/cilium/operator-generic:v1.13.0-rc5@sha256:74c05a1e27f6f7e4d410a4b9e765ab4bb33c36d19016060a7e82c8d305ff2d61: 1

Tunnel mode (vxlan) is used by default.

Check the services installed in the kind-cluster2 cluster:
root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          cilium-operator-76564696fd-qztwd                 1/1     Running   0          89s
kube-system          cilium-qcqgp                                     1/1     Running   0          89s
kube-system          cilium-vddf9                                     1/1     Running   0          89s
kube-system          coredns-64897985d-ncj25                          1/1     Running   0          107s
kube-system          coredns-64897985d-wxq2t                          1/1     Running   0          107s
kube-system          etcd-cluster2-control-plane                      1/1     Running   0          2m3s
kube-system          kube-apiserver-cluster2-control-plane            1/1     Running   0          119s
kube-system          kube-controller-manager-cluster2-control-plane   1/1     Running   0          119s
kube-system          kube-proxy-8cccw                                 1/1     Running   0          107s
kube-system          kube-proxy-f2lgj                                 1/1     Running   0          91s
kube-system          kube-scheduler-cluster2-control-plane            1/1     Running   0          119s
local-path-storage   local-path-provisioner-5ddd94ff66-28t24          1/1     Running   0          107s

root@kind:~# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.21.0.1    <none>        443/TCP                  2m15s
kube-system   kube-dns     ClusterIP   10.21.0.10   <none>        53/UDP,53/TCP,9153/TCP   2m13s
kind-cluster2 cilium configuration:
root@kind:~# kubectl -n kube-system exec -it ds/cilium -- cilium status

KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.23 (v1.23.4) [linux/amd64]
Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Disabled   
Host firewall:           Disabled
CNI Chaining:            none
CNI Config file:         CNI configuration file management disabled
Cilium:                  Ok   1.13.0-rc5 (v1.13.0-rc5-dc22a46f)
NodeMonitor:             Listening for events on 128 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 5/254 allocated from 10.20.0.0/24, 
ClusterMesh:             0/0 clusters ready, 0 global-services
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       30/30 healthy
Proxy Status:            OK, ip 10.20.0.92, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 293/4095 (7.16%), Flows/s: 3.24   Metrics: Disabled
Encryption:              Disabled
Cluster health:          2/2 reachable   (2024-07-20T02:56:22Z)

Both k8s clusters are now installed.

root@kind:~# kubectl config get-contexts
CURRENT   NAME            CLUSTER         AUTHINFO        NAMESPACE
          kind-cluster1   kind-cluster1   kind-cluster1   
*         kind-cluster2   kind-cluster2   kind-cluster2   

# switch to the kind-cluster1 cluster
root@kind:~# kubectl config use-context kind-cluster1
Switched to context "kind-cluster1".

root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          cilium-79vq6                                     1/1     Running   0          5m7s
kube-system          cilium-operator-76564696fd-jpcs8                 1/1     Running   0          5m7s
kube-system          cilium-thfjk                                     1/1     Running   0          5m7s
kube-system          coredns-64897985d-jtbmb                          1/1     Running   0          5m28s
kube-system          coredns-64897985d-n6ctz                          1/1     Running   0          5m28s
kube-system          etcd-cluster1-control-plane                      1/1     Running   0          5m42s
kube-system          kube-apiserver-cluster1-control-plane            1/1     Running   0          5m42s
kube-system          kube-controller-manager-cluster1-control-plane   1/1     Running   0          5m42s
kube-system          kube-proxy-8p985                                 1/1     Running   0          5m9s
kube-system          kube-proxy-b4lq4                                 1/1     Running   0          5m28s
kube-system          kube-scheduler-cluster1-control-plane            1/1     Running   0          5m42s
local-path-storage   local-path-provisioner-5ddd94ff66-dhpsg          1/1     Running   0          5m28s
root@kind:~# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.11.0.1    <none>        443/TCP                  5m46s
kube-system   kube-dns     ClusterIP   10.11.0.10   <none>        53/UDP,53/TCP,9153/TCP   5m45s

# switch to the kind-cluster2 cluster
root@kind:~# kubectl config use-context kind-cluster2
Switched to context "kind-cluster2".

root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          cilium-operator-76564696fd-qztwd                 1/1     Running   0          2m42s
kube-system          cilium-qcqgp                                     1/1     Running   0          2m42s
kube-system          cilium-vddf9                                     1/1     Running   0          2m42s
kube-system          coredns-64897985d-ncj25                          1/1     Running   0          3m
kube-system          coredns-64897985d-wxq2t                          1/1     Running   0          3m
kube-system          etcd-cluster2-control-plane                      1/1     Running   0          3m16s
kube-system          kube-apiserver-cluster2-control-plane            1/1     Running   0          3m12s
kube-system          kube-controller-manager-cluster2-control-plane   1/1     Running   0          3m12s
kube-system          kube-proxy-8cccw                                 1/1     Running   0          3m
kube-system          kube-proxy-f2lgj                                 1/1     Running   0          2m44s
kube-system          kube-scheduler-cluster2-control-plane            1/1     Running   0          3m12s
local-path-storage   local-path-provisioner-5ddd94ff66-28t24          1/1     Running   0          3m
root@kind:~# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.21.0.1    <none>        443/TCP                  3m16s
kube-system   kube-dns     ClusterIP   10.21.0.10   <none>        53/UDP,53/TCP,9153/TCP   3m14s

Connect kind-cluster1 and kind-cluster2 into a Cluster Mesh

ClusterMesh interconnection here is implemented via a NodePort service.

root@kind:~# cilium clustermesh enable --context kind-cluster1 --service-type NodePort
⚠️  Using service type NodePort may fail when nodes are removed from the cluster!
🔑 Found CA in secret cilium-ca
🔑 Generating certificates for ClusterMesh...
✨ Deploying clustermesh-apiserver from quay.io/cilium/clustermesh-apiserver:v1.13.0-rc5...
✅ ClusterMesh enabled!

root@kind:~# cilium clustermesh enable --context kind-cluster2 --service-type NodePort
⚠️  Using service type NodePort may fail when nodes are removed from the cluster!
🔑 Found CA in secret cilium-ca
🔑 Generating certificates for ClusterMesh...
✨ Deploying clustermesh-apiserver from quay.io/cilium/clustermesh-apiserver:v1.13.0-rc5...
✅ ClusterMesh enabled!

root@kind:~# cilium clustermesh connect --context kind-cluster1 --destination-context kind-cluster2
✨ Extracting access information of cluster cluster2...
🔑 Extracting secrets from cluster cluster2...
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
ℹ️  Found ClusterMesh service IPs: [172.18.0.5]
✨ Extracting access information of cluster cluster1...
🔑 Extracting secrets from cluster cluster1...
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
ℹ️  Found ClusterMesh service IPs: [172.18.0.3]
✨ Connecting cluster kind-cluster1 -> kind-cluster2...
🔑 Secret cilium-clustermesh does not exist yet, creating it...
🔑 Patching existing secret cilium-clustermesh...
✨ Patching DaemonSet with IP aliases cilium-clustermesh...
✨ Connecting cluster kind-cluster2 -> kind-cluster1...
🔑 Secret cilium-clustermesh does not exist yet, creating it...
🔑 Patching existing secret cilium-clustermesh...
✨ Patching DaemonSet with IP aliases cilium-clustermesh...
✅ Connected cluster kind-cluster1 and kind-cluster2!

root@kind:~# cilium clustermesh status  --context kind-cluster1 --wait
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
✅ Cluster access information is available:
  - 172.18.0.3:30029
✅ Service "clustermesh-apiserver" of type "NodePort" found
⌛ [kind-cluster1] Waiting for deployment clustermesh-apiserver to become ready...
⌛ Waiting (13s) for clusters to be connected: unable to determine status of cilium pod "cilium-bdnrw": unable to determine cilium status: command terminated with exit code 1
⌛ Waiting (29s) for clusters to be connected: 2 clusters have errors
⌛ Waiting (42s) for clusters to be connected: 2 clusters have errors
⌛ Waiting (53s) for clusters to be connected: 1 clusters have errors
✅ All 2 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]
🔌 Cluster Connections:
- cluster2: 2/2 configured, 2/2 connected
🔀 Global services: [ min:3 / avg:3.0 / max:3 ]

If a node has been restarted, the inter-cluster connection state may be lost, and the connect commands above must be re-run.

Check the kind-cluster1 cluster


root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          clustermesh-apiserver-857c87986d-zpv6s           2/2     Running   0          92s

root@kind:~# kubectl get svc -A
NAMESPACE     NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-system   clustermesh-apiserver   NodePort    10.11.24.85   <none>        2379:30029/TCP           95s

A new Pod (clustermesh-apiserver-857c87986d-zpv6s) and Service (clustermesh-apiserver) were created for Cluster Mesh communication.

Check the kind-cluster2 cluster

root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          clustermesh-apiserver-69fb7f6646-bscjg           2/2     Running   0          2m36s

root@kind:~# kubectl get svc -A
NAMESPACE     NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
kube-system   clustermesh-apiserver   NodePort    10.21.245.59   <none>        2379:30363/TCP           2m39s

A new Pod (clustermesh-apiserver-69fb7f6646-bscjg) and Service (clustermesh-apiserver) were created for Cluster Mesh communication.

5. Testing Cluster Mesh Features

Load-balancing with Global Services

Load balancing between clusters is established by defining a Kubernetes service with the same name and namespace in each cluster and declaring it global with the annotation io.cilium/global-service: "true". Cilium then automatically load-balances requests across the pods of both clusters.

---
apiVersion: v1
kind: Service
metadata:
  name: rebel-base
  annotations:
    io.cilium/global-service: "true"
    # defaults to "true"
    io.cilium/shared-service: "true"
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: rebel-base
  • Deploy the services
# download the test yaml files
root@kind:~# wget https://gh.api.99988866.xyz/https://raw.githubusercontent.com/cilium/cilium/1.11.2/examples/kubernetes/clustermesh/global-service-example/cluster1.yaml
root@kind:~# wget https://gh.api.99988866.xyz/https://raw.githubusercontent.com/cilium/cilium/1.11.2/examples/kubernetes/clustermesh/global-service-example/cluster2.yaml
# replace the image registry; docker.io cannot be pulled from this environment
root@kind:~# sed -i "s#image: docker.io#image: harbor.dayuan1997.com/devops#g" cluster1.yaml 
root@kind:~# sed -i "s#image: docker.io#image: harbor.dayuan1997.com/devops#g" cluster2.yaml 

# deploy the services to kind-cluster1 and kind-cluster2
root@kind:~# kubectl apply -f ./cluster1.yaml --context kind-cluster1
service/rebel-base created
deployment.apps/rebel-base created
configmap/rebel-base-response created
deployment.apps/x-wing created

root@kind:~# kubectl apply -f ./cluster2.yaml --context kind-cluster2
service/rebel-base created
deployment.apps/rebel-base created
configmap/rebel-base-response created
deployment.apps/x-wing created

# wait for the pods to become ready
root@kind:~# kubectl wait --for=condition=Ready=true pods --all --context kind-cluster1
root@kind:~# kubectl wait --for=condition=Ready=true pods --all --context kind-cluster2
  • Inspect the cilium service list
root@kind:~# kubectl  --context kind-cluster1 get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.11.0.1      <none>        443/TCP   14m
rebel-base   ClusterIP   10.11.153.54   <none>        80/TCP    8m56s

root@kind:~# kubectl -n kube-system --context kind-cluster1  exec -it ds/cilium -- cilium service list

ID   Frontend           Service Type   Backend                          
1    10.11.0.1:443      ClusterIP      1 => 172.18.0.3:6443 (active)    
2    10.11.0.10:53      ClusterIP      1 => 10.10.0.184:53 (active)     
                                       2 => 10.10.0.252:53 (active)     
3    10.11.0.10:9153    ClusterIP      1 => 10.10.0.184:9153 (active)   
                                       2 => 10.10.0.252:9153 (active)   
4    10.11.218.4:2379   ClusterIP      1 => 10.10.1.250:2379 (active)   
5    10.11.153.54:80    ClusterIP      1 => 10.10.1.59:80 (active)      
                                       2 => 10.10.1.39:80 (active)      
                                       3 => 10.20.1.204:80 (active)     
                                       4 => 10.20.1.166:80 (active)    

The rebel-base svc is backed by 4 Pods: 2 in the kind-cluster1 cluster and 2 in the kind-cluster2 cluster.
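Conceptually, the global service is just the union of each cluster's backend list. A trivial sketch using the backend IPs observed in the listing above:

```shell
#!/bin/sh
# Cilium merges same-name/namespace services from every connected cluster
# into one backend list; the IPs are those shown by `cilium service list`.
cluster1_backends="10.10.1.59 10.10.1.39"
cluster2_backends="10.20.1.204 10.20.1.166"

# word-split intentionally: one positional parameter per backend IP
set -- $cluster1_backends $cluster2_backends
echo "global backends: $#"   # -> global backends: 4
```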

  • Test
root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster1 exec -ti deployment/x-wing -- curl rebel-base; done

{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}

Requests were load-balanced across backends in both clusters: multi-cluster traffic management is working.

High Availability / Failover

# Cluster Failover
root@kind:~# kubectl --context kind-cluster1 scale deployment rebel-base --replicas=0 
root@kind:~# kubectl --context kind-cluster1  get pods
NAME                      READY   STATUS    RESTARTS      AGE
x-wing-7d5dc844c6-f2xlp   1/1     Running   1 (84m ago)   85m
x-wing-7d5dc844c6-zb6tl   1/1     Running   1 (85m ago)   85m

root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster1 exec -ti deployment/x-wing -- curl rebel-base; done

{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}

This simulates the kind-cluster1 backends becoming unavailable: all traffic is redirected to the kind-cluster2 backends.

Disabling global load balancing

Global sharing can be turned off per service with the io.cilium/shared-service: "false" annotation; it defaults to "true".

root@kind:~# kubectl --context kind-cluster1 annotate service rebel-base io.cilium/shared-service="false" --overwrite
  • Test
root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster1 exec -ti deployment/x-wing -- curl rebel-base; done
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}

root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster2 exec -ti deployment/x-wing -- curl rebel-base; done
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
  • Accessing the service from kind-cluster1 still load-balances across the backends of both k8s clusters.
  • Accessing the service from kind-cluster2 only reaches the kind-cluster2 backends, because the kind-cluster1 service is no longer globally shared.

Documentation

  • Remove the annotation
root@kind:~# kubectl --context kind-cluster1 annotate service rebel-base io.cilium/shared-service-

Service Affinity

  • In some cases, load balancing across all clusters may not be desirable. The annotation io.cilium/service-affinity: "local|remote|none" can be used to specify the preferred endpoint destination.

For example, with io.cilium/service-affinity: local, the global service load-balances across healthy local backends, and remote endpoints are used only when all local backends are unavailable or unhealthy.

apiVersion: v1
kind: Service
metadata:
  name: rebel-base
  annotations:
     io.cilium/global-service: "true"
     # Possible values:
     # - local
     #    preferred endpoints from local cluster if available
     # - remote
     #    preferred endpoints from remote cluster if available
     # - none (default)
     #    no preference. Default behavior if this annotation does not exist
     io.cilium/service-affinity: "local"
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: rebel-base
Add io.cilium/service-affinity: local to the rebel-base svc in kind-cluster1:
root@kind:~# kubectl --context kind-cluster1 annotate service rebel-base io.cilium/service-affinity=local --overwrite
service/rebel-base annotated
root@kind:~# kubectl --context kind-cluster1 describe svc rebel-base 
Name:              rebel-base
Namespace:         default
Labels:            <none>
Annotations:       io.cilium/global-service: true
                   io.cilium/service-affinity: local
  • Test
root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster1 exec -ti deployment/x-wing -- curl rebel-base; done
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}

root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster2 exec -ti deployment/x-wing -- curl rebel-base; done
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
  • Accessing the backend service from kind-cluster1 now reaches only kind-cluster1 backends, because local-cluster backends are preferred when healthy
  • Accessing the backend service from kind-cluster2 still load-balances across both k8s clusters
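To quantify the spread, the Cluster field of the responses can be tallied. A minimal sketch, run here against an inline sample in the same JSON shape as the curl output above (in practice the sample would be the captured output of the kubectl loop):

```shell
#!/bin/bash
# Tally responses per cluster; the sample mimics the curl output above.
sample='{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}'

printf '%s\n' "$sample" | grep -o 'Cluster-[0-9]*' | sort | uniq -c
```

On a live mesh, replacing the inline sample with the loop's captured output gives a per-cluster request count, making the local/remote skew obvious at a glance.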

  • Inspect the cilium service list
root@kind:~# kubectl -n kube-system --context kind-cluster1  exec -it ds/cilium -- cilium service list --clustermesh-affinity

ID   Frontend           Service Type   Backend                                   
1    10.11.0.1:443      ClusterIP      1 => 172.18.0.3:6443 (active)             
2    10.11.0.10:53      ClusterIP      1 => 10.10.0.184:53 (active)              
                                       2 => 10.10.0.252:53 (active)              
3    10.11.0.10:9153    ClusterIP      1 => 10.10.0.184:9153 (active)            
                                       2 => 10.10.0.252:9153 (active)            
4    10.11.218.4:2379   ClusterIP      1 => 10.10.1.250:2379 (active)            
5    10.11.153.54:80    ClusterIP      1 => 10.10.1.59:80 (active) (preferred)      # chosen first when the service is accessed
                                       2 => 10.10.1.39:80 (active) (preferred)      # chosen first when the service is accessed
                                       3 => 10.20.1.66:80 (active)               
                                       4 => 10.20.1.41:80 (active)  

root@kind:~# kubectl -n kube-system --context kind-cluster2  exec -it ds/cilium -- cilium service list --clustermesh-affinity

ID   Frontend             Service Type   Backend                          
1    10.21.0.1:443        ClusterIP      1 => 172.18.0.5:6443 (active)    
2    10.21.0.10:53        ClusterIP      1 => 10.20.0.3:53 (active)       
                                         2 => 10.20.0.244:53 (active)     
3    10.21.0.10:9153      ClusterIP      1 => 10.20.0.244:9153 (active)   
                                         2 => 10.20.0.3:9153 (active)     
4    10.21.236.203:2379   ClusterIP      1 => 10.20.1.254:2379 (active)   
5    10.21.255.13:80      ClusterIP      1 => 10.20.1.66:80 (active)      
                                         2 => 10.20.1.41:80 (active)      
                                         3 => 10.10.1.59:80 (active)      
                                         4 => 10.10.1.39:80 (active)          

Compared with the kind-cluster2 view, the rebel-base svc backends in kind-cluster1 carry an extra preferred marker, meaning they are chosen first when the service is accessed. The 10.10.X.X IPs are kind-cluster1 Pod IPs.

Documentation

Add io.cilium/service-affinity: remote to the rebel-base svc in kind-cluster1
root@kind:~# kubectl --context kind-cluster1 annotate service rebel-base io.cilium/service-affinity=remote --overwrite
service/rebel-base annotated
root@kind:~# kubectl --context kind-cluster1 describe svc rebel-base 
Name:              rebel-base
Namespace:         default
Labels:            <none>
Annotations:       io.cilium/global-service: true
                   io.cilium/service-affinity: remote
  • Test
root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster1 exec -ti deployment/x-wing -- curl rebel-base; done
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}

root@kind:~# for i in $(seq 1 10); do kubectl --context kind-cluster2 exec -ti deployment/x-wing -- curl rebel-base; done
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-1"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
{"Galaxy": "Alderaan", "Cluster": "Cluster-2"}
  • Accessing the backend service from kind-cluster1 now reaches only kind-cluster2 backends, because remote-cluster backends are preferred when healthy
  • Accessing the backend service from kind-cluster2 still load-balances across both k8s clusters

  • Inspect the cilium service list
root@kind:~# kubectl -n kube-system --context kind-cluster1  exec -it ds/cilium -- cilium service list --clustermesh-affinity

ID   Frontend           Service Type   Backend                                   
1    10.11.0.1:443      ClusterIP      1 => 172.18.0.3:6443 (active)             
2    10.11.0.10:53      ClusterIP      1 => 10.10.0.184:53 (active)              
                                       2 => 10.10.0.252:53 (active)              
3    10.11.0.10:9153    ClusterIP      1 => 10.10.0.184:9153 (active)            
                                       2 => 10.10.0.252:9153 (active)            
4    10.11.218.4:2379   ClusterIP      1 => 10.10.1.250:2379 (active)            
5    10.11.153.54:80    ClusterIP      1 => 10.10.1.59:80 (active)               
                                       2 => 10.10.1.39:80 (active)               
                                       3 => 10.20.1.66:80 (active) (preferred)   # chosen first when the service is accessed
                                       4 => 10.20.1.41:80 (active) (preferred)   # chosen first when the service is accessed

root@kind:~# kubectl -n kube-system --context kind-cluster2  exec -it ds/cilium -- cilium service list --clustermesh-affinity

ID   Frontend             Service Type   Backend                          
1    10.21.0.1:443        ClusterIP      1 => 172.18.0.5:6443 (active)    
2    10.21.0.10:53        ClusterIP      1 => 10.20.0.3:53 (active)       
                                         2 => 10.20.0.244:53 (active)     
3    10.21.0.10:9153      ClusterIP      1 => 10.20.0.244:9153 (active)   
                                         2 => 10.20.0.3:9153 (active)     
4    10.21.236.203:2379   ClusterIP      1 => 10.20.1.254:2379 (active)   
5    10.21.255.13:80      ClusterIP      1 => 10.20.1.66:80 (active)      
                                         2 => 10.20.1.41:80 (active)      
                                         3 => 10.10.1.59:80 (active)      
                                         4 => 10.10.1.39:80 (active)          

Compared with the kind-cluster2 view, the rebel-base svc backends in kind-cluster1 carry an extra preferred marker, meaning they are chosen first when the service is accessed. The 10.20.X.X IPs are kind-cluster2 Pod IPs.

Documentation

Service Affinity in Practice
  • echoserver-service.yaml
Config file
# echoserver-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-service-local
  annotations:
    io.cilium/global-service: "true"
    io.cilium/service-affinity: local
spec:
  type: ClusterIP
  selector:
    app: echoserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-service-remote
  annotations:
    io.cilium/global-service: "true"
    io.cilium/service-affinity: remote
spec:
  type: ClusterIP
  selector:
    app: echoserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-service-none
  annotations:
    io.cilium/global-service: "true"
    io.cilium/service-affinity: none
spec:
  type: ClusterIP
  selector:
    app: echoserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: echoserver-daemonset
  labels:
    app: echoserver
spec:
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: echoserver
        image: harbor.dayuan1997.com/devops/ealen/echo-server:0.9.2
        env:
        - name: NODE
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

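The three Service manifests above differ only in name and affinity value, so they could be emitted from a single template. A sketch, reusing the same names as the YAML above:

```shell
#!/bin/bash
# Emit the three echoserver Services from one template instead of
# repeating the manifest once per affinity value.
manifests=$(for affinity in local remote none; do
cat <<EOF
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-service-${affinity}
  annotations:
    io.cilium/global-service: "true"
    io.cilium/service-affinity: ${affinity}
spec:
  type: ClusterIP
  selector:
    app: echoserver
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
done)
printf '%s\n' "$manifests"
```

The generated output can be piped straight into kubectl apply -f - for each context, which keeps the three variants from drifting apart.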
  • netshoot-ds.yaml
Config file
# netshoot-ds.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: netshoot
spec:
  selector:
    matchLabels:
      app: netshoot
  template:
    metadata:
      labels:
        app: netshoot
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: netshoot
        image: harbor.dayuan1997.com/devops/nettool:0.9

  • Create the services in the kind-cluster1 and kind-cluster2 clusters
root@kind:~# cat service-affinity.sh
#!/bin/bash

NAME=cluster
NAMESPACE=service-affinity

kubectl --context kind-${NAME}1 create ns $NAMESPACE
kubectl --context kind-${NAME}2 create ns $NAMESPACE
kubectl --context kind-${NAME}1 -n $NAMESPACE apply -f netshoot-ds.yaml
kubectl --context kind-${NAME}2 -n $NAMESPACE apply -f netshoot-ds.yaml
kubectl --context kind-${NAME}1 -n $NAMESPACE apply -f echoserver-service.yaml
kubectl --context kind-${NAME}2 -n $NAMESPACE apply -f echoserver-service.yaml
cilium clustermesh status --context kind-${NAME}1 --wait
cilium clustermesh status --context kind-${NAME}2 --wait

kubectl -n$NAMESPACE wait --for=condition=Ready=true pods --all --context kind-${NAME}1
kubectl -n$NAMESPACE wait --for=condition=Ready=true pods --all --context kind-${NAME}2

root@kind:~# bash service-affinity.sh
  • Check the kind-cluster2 cluster
root@kind:~# kubectl -n service-affinity get pods --context kind-cluster2
NAME                         READY   STATUS    RESTARTS   AGE
echoserver-daemonset-5m6p6   1/1     Running   0          36s
echoserver-daemonset-lvfgk   1/1     Running   0          36s
netshoot-8nb8k               1/1     Running   0          38s
netshoot-mg4pt               1/1     Running   0          38s

root@kind:~# kubectl -n service-affinity get svc --context kind-cluster2
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
echoserver-service-local    ClusterIP   10.21.197.14    <none>        80/TCP    2m59s
echoserver-service-none     ClusterIP   10.21.196.200   <none>        80/TCP    2m59s
echoserver-service-remote   ClusterIP   10.21.226.141   <none>        80/TCP    2m59s
  • Test the services
root@kind:~# cat verify-service-affinity.sh
#!/bin/bash
set -v
# exec &>./verify-log-rec-2-verify-service-affinity.txt
NREQUESTS=10

echo "------------------------------------------------------"
echo Current_Context View:
echo "------------------------------------------------------"
kubectl config get-contexts

for affinity in local remote none; do
  echo "------------------------------------------------------"
  rm -f $affinity.txt
  echo "Sending $NREQUESTS requests to service-affinity=$affinity service"
  echo "------------------------------------------------------"
  for i in $(seq 1 $NREQUESTS); do
  Current_Cluster=`kubectl --context kind-cluster2 -n service-affinity exec -it ds/netshoot -- curl -q "http://echoserver-service-$affinity.service-affinity.svc.cluster.local?echo_env_body=NODE"` 
  echo -e Current_Rsp_From_Cluster: ${Current_Cluster}
  done
done
echo "------------------------------------------------------"

Run the test from the kind-cluster2 cluster

root@kind:~# bash verify-service-affinity.sh

------------------------------------------------------
Sending 10 requests to service-affinity=local service
------------------------------------------------------
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-control-plane"
Current_Rsp_From_Cluster: "cluster2-control-plane"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-control-plane"
------------------------------------------------------
Sending 10 requests to service-affinity=remote service
------------------------------------------------------
Current_Rsp_From_Cluster: "cluster1-worker"
Current_Rsp_From_Cluster: "cluster1-worker"
Current_Rsp_From_Cluster: "cluster1-control-plane"
Current_Rsp_From_Cluster: "cluster1-control-plane"
Current_Rsp_From_Cluster: "cluster1-worker"
Current_Rsp_From_Cluster: "cluster1-worker"
Current_Rsp_From_Cluster: "cluster1-worker"
Current_Rsp_From_Cluster: "cluster1-control-plane"
Current_Rsp_From_Cluster: "cluster1-worker"
Current_Rsp_From_Cluster: "cluster1-control-plane"
------------------------------------------------------
Sending 10 requests to service-affinity=none service
------------------------------------------------------
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster1-control-plane"
Current_Rsp_From_Cluster: "cluster1-worker"
Current_Rsp_From_Cluster: "cluster2-control-plane"
Current_Rsp_From_Cluster: "cluster1-control-plane"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster2-worker"
Current_Rsp_From_Cluster: "cluster1-control-plane"
Current_Rsp_From_Cluster: "cluster2-control-plane"
------------------------------------------------------
  • Requests to the different svcs reach backend pods from different clusters:
    • local svc: backend pods all come from the local kind-cluster2 cluster
    • remote svc: backend pods all come from the remote kind-cluster1 cluster
    • none svc: requests are load-balanced randomly across the kind-cluster1 and kind-cluster2 clusters

Original blog

https://github.com/HFfleming/k8s-network-learning/blob/main/cilium-cni/ClusterMesh/Cilium-ClusterMesh.md

posted @ evescn