Cilium WireGuard with kubeProxy Mode

1. Environment

Host    IP
ubuntu  172.16.94.141

Software    Version
docker      26.1.4
helm        v3.15.0-rc.2
kind        0.18.0
kubernetes  1.23.4
ubuntu os   Ubuntu 20.04.6 LTS
kernel      5.11.5 (see the kernel upgrade doc)

2. Installing the Services

kind configuration file

$ cat install.sh

#!/bin/bash
date
set -v

# 1.prep noCNI env
cat <<EOF | kind create cluster --name=cilium-wireguard --image=kindest/node:v1.23.4 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # kind installs a default CNI by default; disable it so we can install our own
  disableDefaultCNI: true
  #kubeProxyMode: "none" # left commented so kube-proxy stays enabled
nodes:
  - role: control-plane
  - role: worker
  - role: worker

containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.evescn.com"]
    endpoint = ["https://harbor.evescn.com"]
EOF

# 2.remove taints
controller_node_ip=`kubectl get node -o wide --no-headers | grep -E "control-plane|bpf1" | awk -F " " '{print $6}'`
# kubectl taint nodes $(kubectl get nodes -o name | grep control-plane) node-role.kubernetes.io/master:NoSchedule-
kubectl get nodes -o wide

# 3.install cni
helm repo add cilium https://helm.cilium.io > /dev/null 2>&1
helm repo update > /dev/null 2>&1

# create the WireGuard key secret
# (note: this uses the IPsec-style key format; Cilium's WireGuard mode generates its
#  own keys per node, so this secret appears to be left over from an IPsec setup)
kubectl create -n kube-system secret generic cilium-wireguard-keys \
	    --from-literal=keys="3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128"

# WireGuard options (--set tunnel=disabled --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR="10.0.0.0/8" --set encryption.enabled=true --set encryption.type=wireguard)
helm install cilium cilium/cilium \
    --set k8sServiceHost=$controller_node_ip \
    --set k8sServicePort=6443 \
    --version 1.13.0-rc5 \
    --namespace kube-system \
    --set debug.enabled=true \
    --set debug.verbose=datapath \
    --set monitorAggregation=none \
    --set ipam.mode=cluster-pool \
    --set cluster.name=cilium-wireguard \
    --set tunnel=disabled \
    --set autoDirectNodeRoutes=true \
    --set ipv4NativeRoutingCIDR="10.0.0.0/8" \
    --set encryption.enabled=true \
    --set encryption.type=wireguard \
    --set l7Proxy=false

# 4.install necessary tools
for i in $(docker ps -a --format "table {{.Names}}" | grep cilium) 
do
    echo $i
    docker cp /usr/bin/ping $i:/usr/bin/ping
    docker exec -it $i bash -c "sed -i -e 's/jp.archive.ubuntu.com\|archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list"
    docker exec -it $i bash -c "apt-get -y update >/dev/null && apt-get -y install net-tools tcpdump lrzsz bridge-utils wireguard-tools >/dev/null 2>&1"
done

Explanation of the --set options

  1. --set tunnel=disabled

    • Meaning: disable tunnel mode.
    • Purpose: with tunneling disabled, Cilium does not use VXLAN and routes packets directly between hosts, i.e. direct-routing mode.
  2. --set autoDirectNodeRoutes=true

    • Meaning: enable automatic direct node routes.
    • Purpose: Cilium automatically installs routes to the other nodes' Pod CIDRs, so cross-node traffic takes the direct path.
  3. --set ipv4NativeRoutingCIDR="10.0.0.0/8"

    • Meaning: the CIDR range treated as natively routable for IPv4, here 10.0.0.0/8.
    • Purpose: tells Cilium which destination ranges can be reached without SNAT; by default Cilium masquerades all traffic leaving the node.
  4. encryption.enabled / encryption.type:

    • --set encryption.enabled=true: enable transparent encryption.
    • --set encryption.type=wireguard: use WireGuard as the encryption method.
  5. --set l7Proxy=false:

    • Disables the Layer 7 proxy, meaning Cilium will not handle application-layer (L7) policy or proxying.
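
A quick way to confirm how these Helm values were rendered is to grep the cilium-config ConfigMap (a hedged check; the key names below are assumptions based on the usual cilium-config layout in Cilium 1.13):

root@kind:~# kubectl -n kube-system get configmap cilium-config -o yaml | \
    grep -E "tunnel|auto-direct-node-routes|ipv4-native-routing-cidr|enable-wireguard|enable-l7-proxy"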
  • Install the k8s cluster and the Cilium services
# ./install.sh

Creating cluster "cilium-wireguard" ...
 ✓ Ensuring node image (kindest/node:v1.23.4) 🖼
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-cilium-wireguard"
You can now use your cluster with:

kubectl cluster-info --context kind-cilium-wireguard

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
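
Before inspecting the pods, it helps to wait for the Cilium rollout to finish (a minimal sketch; the timeout value is arbitrary):

root@kind:~# kubectl -n kube-system rollout status ds/cilium --timeout=300s
root@kind:~# kubectl -n kube-system rollout status deploy/cilium-operator --timeout=300s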
  • Check the installed services
root@kind:~# kubectl get pods -A
NAMESPACE            NAME                                                     READY   STATUS    RESTARTS   AGE
kube-system          cilium-operator-68d8dcd5dc-6kltl                         1/1     Running   0          18m
kube-system          cilium-operator-68d8dcd5dc-n5mj6                         1/1     Running   0          18m
kube-system          cilium-pk9qv                                             1/1     Running   0          18m
kube-system          cilium-rw8c8                                             1/1     Running   0          18m
kube-system          cilium-scblb                                             1/1     Running   0          18m
kube-system          coredns-64897985d-mtcwd                                  1/1     Running   0          20m
kube-system          coredns-64897985d-vv8m5                                  1/1     Running   0          20m
kube-system          etcd-cilium-wireguard-control-plane                      1/1     Running   0          20m
kube-system          kube-apiserver-cilium-wireguard-control-plane            1/1     Running   0          20m
kube-system          kube-controller-manager-cilium-wireguard-control-plane   1/1     Running   0          20m
kube-system          kube-proxy-5lm5z                                         1/1     Running   0          20m
kube-system          kube-proxy-68prn                                         1/1     Running   0          20m
kube-system          kube-proxy-kdq96                                         1/1     Running   0          20m
kube-system          kube-scheduler-cilium-wireguard-control-plane            1/1     Running   0          20m
local-path-storage   local-path-provisioner-5ddd94ff66-qcbg4                  1/1     Running   0          20m

Cilium configuration

root@kind:~# kubectl -n kube-system exec -it ds/cilium -- cilium status

KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.23 (v1.23.4) [linux/amd64]
Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Disabled   
Host firewall:           Disabled
CNI Chaining:            none
CNI Config file:         CNI configuration file management disabled
Cilium:                  Ok   1.13.0-rc5 (v1.13.0-rc5-dc22a46f)
NodeMonitor:             Listening for events on 128 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, 
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       30/30 healthy
Proxy Status:            No managed proxy redirect
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 9.73   Metrics: Disabled
Encryption:              Wireguard       [cilium_wg0 (Pubkey: 47ZDP52maTXdv6BWAmy/wIhlFgHAvrhw4Zi0i9CnX2Y=, Port: 51871, Peers: 2)]
Cluster health:          3/3 reachable   (2024-07-06T08:50:28Z)
  • KubeProxyReplacement: Disabled
    • kube-proxy replacement is disabled; Cilium has not taken over kube-proxy's functionality, so the cluster keeps using the default kube-proxy for Service load balancing.
  • Host Routing: Legacy
    • legacy host routing via the kernel network stack is used (not eBPF host routing).
  • Masquerading: IPTables [IPv4: Enabled, IPv6: Disabled]
    • IP masquerading (SNAT) is done with iptables; enabled for IPv4, disabled for IPv6.
  • Encryption
    • WireGuard encryption is enabled on the cilium_wg0 interface; the status line shows its public key, listening port and peer count.
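
To confirm encryption is active on every agent, the Encryption line can be checked per Cilium pod (a hedged loop; the k8s-app=cilium label and the cilium-agent container name are assumptions based on the default Helm chart):

for p in $(kubectl -n kube-system get pods -l k8s-app=cilium -o name); do
    echo $p
    kubectl -n kube-system exec $p -c cilium-agent -- cilium status | grep Encryption
done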

Deploy test Pods in the k8s cluster

# cat cni.yaml

apiVersion: apps/v1
kind: DaemonSet
#kind: Deployment
metadata:
  labels:
    app: cni
  name: cni
spec:
  #replicas: 1
  selector:
    matchLabels:
      app: cni
  template:
    metadata:
      labels:
        app: cni
    spec:
      containers:
      - image: harbor.dayuan1997.com/devops/nettool:0.9
        name: nettoolbox
        securityContext:
          privileged: true

---
apiVersion: v1
kind: Service
metadata:
  name: serversvc
spec:
  type: NodePort
  selector:
    app: cni
  ports:
  - name: cni
    port: 80
    targetPort: 80
    nodePort: 32000
# kubectl apply -f cni.yaml
daemonset.apps/cni created
service/serversvc created

# kubectl run net --image=harbor.dayuan1997.com/devops/nettool:0.9
pod/net created
  • Check the deployed resources

# kubectl taint nodes $(kubectl get nodes -o name | grep control-plane) node-role.kubernetes.io/master:NoSchedule-
node/cilium-wireguard-control-plane untainted

# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE                       NOMINATED NODE   READINESS GATES
cni-7n2f9   1/1     Running   0          52s   10.0.2.194   cilium-wireguard-worker2   <none>           <none>
cni-vk8fx   1/1     Running   0          52s   10.0.1.54    cilium-wireguard-worker    <none>           <none>
net         1/1     Running   0          8s    10.0.1.203   cilium-wireguard-worker    <none>           <none>

# kubectl get svc 
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        25m
serversvc    NodePort    10.96.179.210   <none>        80:32000/TCP   62s
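
As a quick smoke test before the detailed analysis below, the NodePort can be hit straight from the kind host (hedged; 172.18.0.3 is the worker node's docker address shown later in this post):

root@kind:~# curl -s http://172.18.0.3:32000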

3. Testing the Network

Pod-to-Pod traffic on the same node

img
See the post Cilium Native Routing with kubeProxy mode: the packet-forwarding flow for same-node Pod communication is identical to what is described there.

Pod-to-Pod traffic across nodes

Topology

  • Pod information
## ip addresses
root@kind:~# kubectl exec -it net -- ip a l
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 42:bf:e8:df:0c:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.1.203/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::40bf:e8ff:fedf:c99/64 scope link 
       valid_lft forever preferred_lft forever

## routes
root@kind:~# kubectl exec -it net -- ip r s
default via 10.0.1.26 dev eth0 mtu 1420 
10.0.1.26 dev eth0 scope link
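
The Pod's default gateway 10.0.1.26 is the node's cilium_host address; a hedged way to see which interface answers for it from inside the Pod (assuming the neighbour entry has been populated by earlier traffic):

root@kind:~# kubectl exec -it net -- ip neigh show 10.0.1.26
# the MAC it resolves to (ee:cf:c0:71:b2:58 in the captures below) belongs to the
# node-side lxc peer of the Pod's veth pair, not to a physical gateway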
  • The Node hosting the Pod
root@kind:~# docker exec -it cilium-wireguard-worker bash

## ip addresses
root@cilium-wireguard-worker:/# ip a l 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: cilium_wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default 
    link/none 
3: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:0e:20:c2:f3:ba brd ff:ff:ff:ff:ff:ff
    inet6 fe80::480e:20ff:fec2:f3ba/64 scope link 
       valid_lft forever preferred_lft forever
4: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:61:54:d4:58:40 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.26/32 scope link cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::cc61:54ff:fed4:5840/64 scope link 
       valid_lft forever preferred_lft forever
6: lxc_health@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:6b:e3:06:81:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::186b:e3ff:fe06:81b8/64 scope link 
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::3/64 scope global nodad 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:3/64 scope link 
       valid_lft forever preferred_lft forever
13: lxcaac4b4192870@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:99:4c:b1:92:15 brd ff:ff:ff:ff:ff:ff link-netns cni-9788dc47-985e-a8c2-4464-6ef5f01faa7c
    inet6 fe80::d499:4cff:feb1:9215/64 scope link 
       valid_lft forever preferred_lft forever
15: lxc2d3008e4e496@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:cf:c0:71:b2:58 brd ff:ff:ff:ff:ff:ff link-netns cni-8ceef947-984e-3756-85df-270f5ebe2602
    inet6 fe80::eccf:c0ff:fe71:b258/64 scope link 
       valid_lft forever preferred_lft forever

## WireGuard info
root@cilium-wireguard-worker:/# ip -d link show
2: cilium_wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/none  promiscuity 0 minmtu 0 maxmtu 2147483552 
    wireguard addrgenmode none numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 

root@cilium-wireguard-worker:/# wg
interface: cilium_wg0
  public key: IBO/gvSTtNAvy9pVytKFWIYiaWVooOr78vwpwRtShXE=
  private key: (hidden)
  listening port: 51871

peer: IKwg++mojsgZ5mw3McPgcg5Cq9qe/dfNOz3det8U12o=
  endpoint: 172.18.0.4:51871
  allowed ips: 10.0.2.210/32, 10.0.2.136/32, 10.0.2.194/32
  latest handshake: 3 minutes, 41 seconds ago
  transfer: 476 B received, 628 B sent

peer: 47ZDP52maTXdv6BWAmy/wIhlFgHAvrhw4Zi0i9CnX2Y=
  endpoint: 172.18.0.2:51871
  allowed ips: 10.0.0.152/32, 10.0.0.14/32, 10.0.0.76/32, 10.0.0.177/32, 10.0.0.232/32


## routes
root@cilium-wireguard-worker:/# ip r s
default via 172.18.0.1 dev eth0 
10.0.0.0/24 via 172.18.0.2 dev eth0 
10.0.1.0/24 via 10.0.1.26 dev cilium_host src 10.0.1.26 
10.0.1.26 dev cilium_host scope link 
10.0.2.0/24 via 172.18.0.4 dev eth0 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3 

The wg output shows that only the cilium_wg0 interface operates in WireGuard mode, yet the routing table above has no route pointing at cilium_wg0. So how does traffic get encrypted? The mechanism is the same as in Cilium's IPsec mode: policy routing (ip rule) is used, and the kernel steers marked packets to cilium_wg0, where they are encrypted.

img

# policy routing rules
root@cilium-wireguard-worker:/# ip rule show
0:      from all lookup local
1:      from all fwmark 0xe00/0xf00 lookup 201
32766:  from all lookup main
32767:  from all lookup default

# The host routing table also contains routes for the 10.0.0.0/24 and 10.0.2.0/24 segments, but the
# policy-routing rule (priority 1, matching fwmark 0xe00/0xf00) is evaluated before the main table,
# so marked packets are looked up in table 201 and sent out via cilium_wg0
root@cilium-wireguard-worker:/# ip r s t 201
default dev cilium_wg0 

The route in table 201, default dev cilium_wg0, means that every packet looked up in this table (i.e. every packet carrying the fwmark) is handed to the cilium_wg0 interface.
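
The routing decision for a marked packet can be reproduced by hand (a hedged check; the fwmark value comes from the rule shown above):

root@cilium-wireguard-worker:/# ip route get 10.0.2.194 mark 0xe00   # marked: should resolve via cilium_wg0 (table 201)
root@cilium-wireguard-worker:/# ip route get 10.0.2.194              # unmarked: resolves via eth0 (main table)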

  • ping test from the Pod
root@kind:~# kubectl exec -it net -- ping -c 1 10.0.2.194
PING 10.0.2.194 (10.0.2.194): 56 data bytes
64 bytes from 10.0.2.194: seq=0 ttl=60 time=6.803 ms

--- 10.0.2.194 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 6.803/6.803/6.803 ms
  • Packet capture on the Pod's eth0 interface
net~$ tcpdump -pne -i eth0
09:11:17.849857 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 98: 10.0.1.203 > 10.0.2.194: ICMP echo request, id 84, seq 0, length 64
09:11:17.851644 ee:cf:c0:71:b2:58 > 42:bf:e8:df:0c:99, ethertype IPv4 (0x0800), length 98: 10.0.2.194 > 10.0.1.203: ICMP echo reply, id 84, seq 0, length 64
  • Packet capture on the cilium_wg0 interface of node cilium-wireguard-worker
root@cilium-wireguard-worker:/# tcpdump -pne -i cilium_wg0
listening on cilium_wg0, link-type RAW (Raw IP), snapshot length 262144 bytes
09:12:24.280618 ip: 10.0.1.203 > 10.0.2.194: ICMP echo request, id 93, seq 0, length 64
09:12:24.281380 ip: 10.0.2.194 > 10.0.1.203: ICMP echo reply, id 93, seq 0, length 64

Packets captured on cilium_wg0 have no MAC layer and no MAC addresses, only the IP layer and above, because the interface's link type is RAW (Raw IP), i.e. bare IP packets. The encrypted, encapsulated packets are not visible on cilium_wg0, so we continue capturing on eth0.
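To see only the encrypted traffic on eth0, the capture can be restricted to the WireGuard UDP port reported by wg above (a hedged filter; 51871 is the listening port in this setup):

root@cilium-wireguard-worker:/# tcpdump -pne -i eth0 udp port 51871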

  • Packet capture on the eth0 interface of node cilium-wireguard-worker, analyzed with Wireshark
root@cilium-wireguard-worker:/# tcpdump -pne -i eth0 -w /tmp/eth0.cap
root@cilium-wireguard-worker:/# sz /tmp/eth0.cap

img

Filtering for wg packets shows that, in WireGuard mode, the payload on the wire is encrypted WireGuard traffic and has to be decrypted before it can be inspected. The blog post "WireGuard: packet capture and live decryption" describes how to perform the decryption.
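
Decryption in Wireshark needs the interface's static WireGuard keys (plus the session keys for live decryption, as the linked post explains); a minimal sketch for dumping the static keys on the node:

root@cilium-wireguard-worker:/# wg show cilium_wg0 private-key
root@cilium-wireguard-worker:/# wg show cilium_wg0 public-key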

img

Topology

  • The packet leaves the net Pod and, per the Pod's routing table, is sent to the node. Route: default via 10.0.1.26 dev eth0 mtu 1420
  • On the node the packet is picked up on the lxc interface, where the eBPF program sets the encryption mark on the sk_buff, and it is then steered to the node's cilium_wg0 interface.
  • The packet reaches cilium_wg0 via the policy-routing rule 1: from all fwmark 0xe00/0xf00 lookup 201; cilium_wg0 encrypts the data and sends it out through eth0.
  • eth0 adds the outer IP and MAC headers and sends the encrypted packet to the peer node.
  • The peer node receives the packet, recognizes it as WireGuard traffic and hands it to the WireGuard kernel module for decryption.
  • After decapsulation, the inner packet's destination 10.0.2.194 is found to be in the local Pod range, so it is sent straight to lxceb24b483d559, the veth-pair peer of the target Pod's eth0.
  • The packet is finally delivered to the destination Pod.
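
The same flow can also be watched from Cilium's own datapath monitor while the ping runs (a hedged sketch; run it against the agent on the source node for the most relevant output):

# `ds/cilium` picks an arbitrary agent pod; substitute the pod running on
# cilium-wireguard-worker (see `kubectl -n kube-system get pods -o wide`) if needed
kubectl -n kube-system exec -it ds/cilium -- cilium monitor --type trace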

Service traffic

  • Check the Service information
root@kind:~# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        138m
serversvc    NodePort    10.96.142.173   <none>        80:32000/TCP   74m
  • From the net Pod, request port 32000 on the node hosting the Pod
root@kind:~# kubectl exec -ti net -- curl 172.18.0.3:32000
PodName: cni-vk8fx | PodIP: eth0 10.0.1.54/32

Capture on the net Pod's eth0 interface at the same time:

net~$ tcpdump -pne -i eth0

10:38:30.718136 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 74: 10.0.1.203.37004 > 172.18.0.3.32000: Flags [S], seq 3659711964, win 64860, options [mss 1380,sackOK,TS val 776473280 ecr 0,nop,wscale 7], length 0
10:38:30.719343 ee:cf:c0:71:b2:58 > 42:bf:e8:df:0c:99, ethertype IPv4 (0x0800), length 74: 172.18.0.3.32000 > 10.0.1.203.37004: Flags [S.], seq 3020717123, ack 3659711965, win 65160, options [mss 1460,sackOK,TS val 2156122710 ecr 776473280,nop,wscale 7], length 0
10:38:30.719366 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 66: 10.0.1.203.37004 > 172.18.0.3.32000: Flags [.], ack 1, win 507, options [nop,nop,TS val 776473281 ecr 2156122710], length 0
10:38:30.723796 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 146: 10.0.1.203.37004 > 172.18.0.3.32000: Flags [P.], seq 1:81, ack 1, win 507, options [nop,nop,TS val 776473286 ecr 2156122710], length 80
10:38:30.724324 ee:cf:c0:71:b2:58 > 42:bf:e8:df:0c:99, ethertype IPv4 (0x0800), length 66: 172.18.0.3.32000 > 10.0.1.203.37004: Flags [.], ack 81, win 509, options [nop,nop,TS val 2156122715 ecr 776473286], length 0
10:38:30.736025 ee:cf:c0:71:b2:58 > 42:bf:e8:df:0c:99, ethertype IPv4 (0x0800), length 302: 172.18.0.3.32000 > 10.0.1.203.37004: Flags [P.], seq 1:237, ack 81, win 509, options [nop,nop,TS val 2156122727 ecr 776473286], length 236
10:38:30.736035 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 66: 10.0.1.203.37004 > 172.18.0.3.32000: Flags [.], ack 237, win 506, options [nop,nop,TS val 776473298 ecr 2156122727], length 0
10:38:30.736740 ee:cf:c0:71:b2:58 > 42:bf:e8:df:0c:99, ethertype IPv4 (0x0800), length 112: 172.18.0.3.32000 > 10.0.1.203.37004: Flags [P.], seq 237:283, ack 81, win 509, options [nop,nop,TS val 2156122727 ecr 776473298], length 46
10:38:30.736746 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 66: 10.0.1.203.37004 > 172.18.0.3.32000: Flags [.], ack 283, win 506, options [nop,nop,TS val 776473299 ecr 2156122727], length 0
10:38:30.737556 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 66: 10.0.1.203.37004 > 172.18.0.3.32000: Flags [F.], seq 81, ack 283, win 506, options [nop,nop,TS val 776473300 ecr 2156122727], length 0
10:38:30.739557 ee:cf:c0:71:b2:58 > 42:bf:e8:df:0c:99, ethertype IPv4 (0x0800), length 66: 172.18.0.3.32000 > 10.0.1.203.37004: Flags [F.], seq 283, ack 82, win 509, options [nop,nop,TS val 2156122730 ecr 776473300], length 0
10:38:30.739572 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype IPv4 (0x0800), length 66: 10.0.1.203.37004 > 172.18.0.3.32000: Flags [.], ack 284, win 506, options [nop,nop,TS val 776473302 ecr 2156122730], length 0
10:38:35.838740 42:bf:e8:df:0c:99 > ee:cf:c0:71:b2:58, ethertype ARP (0x0806), length 42: Request who-has 10.0.1.26 tell 10.0.1.203, length 28
10:38:35.839309 ee:cf:c0:71:b2:58 > 42:bf:e8:df:0c:99, ethertype ARP (0x0806), length 42: Reply 10.0.1.26 is-at ee:cf:c0:71:b2:58, length 28

The capture shows the net Pod using an ephemeral source port to talk TCP directly to 172.18.0.3:32000.

* `KubeProxyReplacement:    Disabled`
  * kube-proxy replacement is disabled; Cilium has not taken over kube-proxy's functionality, so the cluster keeps using the default kube-proxy for Service load balancing.

Cilium reports KubeProxyReplacement: Disabled, which confirms that Cilium has not taken over kube-proxy's functionality. kube-proxy therefore forwards Service traffic with iptables or ipvs; in this kind cluster it uses iptables. We can verify this via the conntrack table and the iptables rules.

  • conntrack entries
root@cilium-wireguard-worker:/# conntrack -L | grep 32000
conntrack v1.4.6 (conntrack-tools): 44 flow entries have been shown.
tcp      6 66 TIME_WAIT src=10.0.1.203 dst=172.18.0.3 sport=37004 dport=32000 src=10.0.1.54 dst=10.0.1.26 sport=80 dport=37004 [ASSURED] mark=0 use=1
  • iptables rules
root@cilium-wireguard-worker:/# iptables-save | grep 32000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/serversvc:cni" -m tcp --dport 32000 -j KUBE-SVC-CU7F3MNN62CF4ANP
-A KUBE-SVC-CU7F3MNN62CF4ANP -p tcp -m comment --comment "default/serversvc:cni" -m tcp --dport 32000 -j KUBE-MARK-MASQ
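
The Service chain can be followed one step further to the per-endpoint DNAT rules (hedged; KUBE-SEP-* is kube-proxy's standard per-endpoint chain prefix, and the exact chain names will differ per cluster):

root@cilium-wireguard-worker:/# iptables-save -t nat | grep -E "KUBE-SVC-CU7F3MNN62CF4ANP|KUBE-SEP"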