k8s-calico

1. Calico Overview and Usage

Calico is a pure layer-3 networking solution that provides communication between pods across multiple nodes. Calico treats every node as a router:
the nodes use BGP (Border Gateway Protocol) to learn routes from each other and install the resulting routing rules locally, which connects pods on different nodes.

BGP is a decentralized protocol that keeps the network reachable by automatically learning and maintaining routing tables. Not every network supports BGP, however,
and to manage larger networks that span subnets Calico also supports an IP-in-IP overlay model, IPIP for short. IPIP can establish routed communication across different subnets, but it has some security drawbacks. The encapsulation is built into the kernel, and whether it is used can be toggled in Calico's configuration;
if the k8s nodes inside your company do not cross subnets, it is recommended to disable IPIP.

IPIP mode builds a tunnel between the nodes' routes and connects the two networks through it. When IPIP is enabled, Calico creates a virtual network interface named "tunl0" on every node.
BGP mode instead uses the physical host directly as the virtual router (vRouter) and creates no extra tunnel.

Calico core components:
Felix: Calico's agent, running on every node. It maintains the routing rules and reports the node's status to ensure cross-host pod communication.

BGP Client: runs on every node. It watches the routes generated by Felix on that node and advertises them to the remaining nodes over BGP, so the nodes learn each other's routes and pods can communicate.

Route Reflector: a centralized route reflector, supported since Calico v3.3. When the Calico BGP clients advertise routes from their FIB (Forwarding Information Base) to the Route Reflector,
the Route Reflector re-advertises those routes to the other nodes in the cluster. The Route Reflector only manages BGP routing; no pod traffic passes through it.

BIRD: the BGP protocol client. It loads the routes generated by Felix into the kernel and advertises them across the network.

etcd datastore: with etcd, Calico has a system with well-defined state that can easily be scaled out to absorb increased load, so the datastore does not become a bottleneck. etcd also acts as the communication bus between Calico's components.

Calico's default BGP mode is a node-to-node mesh; using a Route Reflector requires extra configuration.
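If a Route Reflector is wanted, the overall shape of the configuration is sketched below. This is only a minimal sketch assuming calicoctl v3.3+ against this cluster's datastore; the reflector node (192.168.80.170 here), AS number and cluster ID are placeholder values, not something this post prescribes.

#1) Nominate one node as the route reflector (placeholder node name and cluster ID)
calicoctl patch node 192.168.80.170 \
  --patch '{"spec": {"bgp": {"routeReflectorClusterID": "244.0.0.1"}}}'

#2) Turn off the default node-to-node full mesh
cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false
  asNumber: 64512
EOF

#3) Peer every node with the reflector instead of with each other
cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-rr
spec:
  peerIP: 192.168.80.170
  asNumber: 64512
EOF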

Calico images:
calico-cni:              runs on every node.
calico-node:             the agent Calico runs on every node of the k8s cluster; it hosts the Felix, bird, bird6 and confd daemons.
calico-kube-controllers: Calico's custom controller running on the k8s cluster, the plugin that integrates Calico with k8s; it runs on a master node.


Calico's deployment manifest uses the IPIP tunnel mode by default, and that default is kept here unchanged. To use pure BGP routing or the hybrid mode, change the CALICO_IPV4POOL_IPIP variable; the accepted values are listed below (a sketch for switching an existing pool follows the list):
    Always:      use only the IPIP tunnel network (the default)
    Never:       do not use the IPIP tunnel network; the manifest also accepts value: "off"
    CrossSubnet: enable the hybrid mode (encapsulate only when crossing subnets)
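The variable only takes effect when the IP pool is first created. To inspect or switch the mode of an existing pool, a rough sketch with calicoctl looks like this (the pool name default-ipv4-ippool is the usual default and an assumption here; the CIDR must match the pool that already exists):

#Check the current pool and its IPIP mode
calicoctl get ippool default-ipv4-ippool -o yaml

#Switch the existing pool to the hybrid (CrossSubnet) mode
cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.20.0.0/16        # must match the existing pool CIDR
  ipipMode: CrossSubnet     # Always / CrossSubnet / Never
  natOutgoing: true
EOF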

Comparison of the two modes

IPIP network:
Traffic:    the tunl0 device encapsulates the packets, forming a tunnel that carries the traffic.
Use case:   pods that need to reach each other sit in different subnets; the outer IP header solves the cross-subnet routing problem.
Efficiency: traffic has to be encapsulated by tunl0, so efficiency is slightly lower.

BGP network:
Traffic:    routing information steers the traffic directly.
Use case:   pods that need to reach each other sit in the same subnet; suitable for large networks.
Efficiency: equivalent to native host-gw forwarding, so efficiency is high.
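A quick way to tell which mode a node is actually running (a sketch, assuming a standard Linux node with calicoctl installed):

ip -d link show tunl0           #present and UP in IPIP mode, absent or DOWN in pure BGP mode
ip route show proto bird        #IPIP: pod routes point at tunl0; BGP: pod routes point at the physical NIC
calicoctl get ippool -o wide    #the IPIPMODE column shows Always / CrossSubnet / Never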

 

2. Testing

# Create test pods to verify that cross-host pod networking works (domain names cannot be pinged because DNS is not set up yet)
kubectl run net-test1 --image=alpine --replicas=4 sleep 360000 
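Once the pods are running, cross-host connectivity can be spot-checked by pinging a pod IP that lives on another node; the pod name and target IP below are taken from the listing further down, and IPs are used instead of hostnames because DNS is not configured yet:

kubectl exec -it net-test1-5fcc69db59-dzv57 -- ping -c 3 10.20.75.66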

[root@localhost7D ~]# calicoctl  node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+----------------+-------------------+-------+----------+-------------+
| 192.168.80.120 | node-to-node mesh | up    | 09:00:46 | Established |
| 192.168.80.140 | node-to-node mesh | up    | 09:00:44 | Established |
| 192.168.80.150 | node-to-node mesh | up    | 09:00:44 | Established |
| 192.168.80.160 | node-to-node mesh | up    | 09:00:44 | Established |
| 192.168.80.170 | node-to-node mesh | up    | 09:00:44 | Established |
+----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@localhost7D ~]# route  -n 

[root@localhost7D ~]# kubectl get pod   -A  -o wide
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
default                net-test1-5fcc69db59-dzv57                   1/1     Running            0          5m52s   10.20.245.1      192.168.80.160   <none>           <none>
default                net-test1-5fcc69db59-t5kzw                   1/1     Running            0          5m52s   10.20.75.66      192.168.80.170   <none>           <none>
default                net-test1-5fcc69db59-tn4bf                   1/1     Running            0          5m52s   10.20.41.129     192.168.80.150   <none>           <none>
default                net-test1-5fcc69db59-z8x44                   1/1     Running            0          5m52s   10.20.75.65      192.168.80.170   <none>           <none>
kube-system            calico-kube-controllers-6cf5b744d7-l645q     1/1     Running            0          13m     192.168.80.150   192.168.80.150   <none>           <none>
kube-system            calico-node-6ndtp                            1/1     Running            0          13m     192.168.80.130   192.168.80.130   <none>           <none>
kube-system            calico-node-94kfn                            1/1     Running            0          13m     192.168.80.120   192.168.80.120   <none>           <none>
kube-system            calico-node-cxm9r                            1/1     Running            0          13m     192.168.80.140   192.168.80.140   <none>           <none>
kube-system            calico-node-d4gl4                            1/1     Running            0          13m     192.168.80.170   192.168.80.170   <none>           <none>
kube-system            calico-node-g5jdz                            1/1     Running            0          13m     192.168.80.160   192.168.80.160   <none>           <none>
kube-system            calico-node-z8hck                            1/1     Running            0          13m     192.168.80.150   192.168.80.150   <none>           <none>
kube-system            kube-dns-6b65447b86-5zs4v                    3/3     Running            0          3m7s    10.20.245.2      192.168.80.160   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-74bbb59f48-426hz   0/1     CrashLoopBackOff   100        5d7h    10.20.5.4        192.168.80.170   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-bc4695695-tcvzl         0/1     CrashLoopBackOff   99         3d      10.20.3.2        192.168.80.160   <none>           <none>


[root@localhost7C k8s]# kubectl  exec  -it net-test1-5fcc69db59-dzv57  sh
/ # traceroute 10.20.75.66
traceroute to 10.20.75.66 (10.20.75.66), 30 hops max, 46 byte packets
 1  192.168.80.160 (192.168.80.160)  0.006 ms  0.004 ms  0.003 ms
 2  10.20.75.64 (10.20.75.64)  2.157 ms  0.550 ms  0.425 ms    # forwarded via the tunnel (tunl0) interface
 3  10.20.75.66 (10.20.75.66)  0.367 ms  0.518 ms  0.289 ms
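To confirm the encapsulation on the wire, a packet capture on the destination node can be used; this is a sketch that assumes the node's physical interface is eth0:

tcpdump -n -i eth0 ip proto 4    #protocol 4 = IP-in-IP, i.e. the outer node-to-node headers
tcpdump -n -i tunl0 icmp         #the inner pod-to-pod ICMP after decapsulation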


----------------------------------------
# Communication test with IPIP disabled


The environment must be wiped and the k8s cluster redeployed; all nodes need to be rebooted.
ansible-playbook  99.clean.yml 

# Select the network plugin
vim  /etc/ansible/hosts
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# Disable IPIP
vim  roles/calico/defaults/main.yml 
# Setting CALICO_IPV4POOL_IPIP: "off" can improve network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "off"


# Redeploy
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml 
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml 
ansible-playbook 06.network.yml 
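After the playbooks finish, a quick sanity check that the setting actually landed (a sketch, assuming calicoctl is installed on the node):

kubectl -n kube-system get ds calico-node -o yaml | grep -A1 CALICO_IPV4POOL_IPIP   #should show value: "off"
calicoctl get ippool -o wide                                                        #the IPIPMODE column should read Never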


# Create test pods to verify that cross-host pod networking works (domain names cannot be pinged because DNS is not set up yet)
kubectl run net-test1 --image=alpine --replicas=4 sleep 360000 



[root@localhost7C ~]# route  -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.80.2    0.0.0.0         UG    100    0        0 eth0
10.20.41.128    192.168.80.150  255.255.255.192 UG    0      0        0 eth0
10.20.75.64     192.168.80.170  255.255.255.192 UG    0      0        0 eth0
10.20.245.0     192.168.80.160  255.255.255.192 UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.10.0    0.0.0.0         255.255.255.0   U     101    0        0 eth1
192.168.80.0    0.0.0.0         255.255.255.0   U     100    0        0 eth0   # forwarding uses the host's physical interface (no tunl0 routes)
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
[root@localhost7C ~]# 
[root@localhost7C ~]# 
[root@localhost7C ~]# 
[root@localhost7C ~]# kubectl  get pod  -A  -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
default       net-test1-5fcc69db59-2p7ld                 1/1     Running   0          6m32s   10.20.75.64      192.168.80.170   <none>           <none>
default       net-test1-5fcc69db59-5fvs4                 1/1     Running   0          6m32s   10.20.245.1      192.168.80.160   <none>           <none>
default       net-test1-5fcc69db59-7xjs8                 1/1     Running   0          6m32s   10.20.245.0      192.168.80.160   <none>           <none>
default       net-test1-5fcc69db59-tnb8z                 1/1     Running   0          6m32s   10.20.41.128     192.168.80.150   <none>           <none>
kube-system   calico-kube-controllers-6cf5b744d7-sks5b   1/1     Running   0          8m7s    192.168.80.170   192.168.80.170   <none>           <none>
kube-system   calico-node-69wjq                          1/1     Running   0          8m7s    192.168.80.160   192.168.80.160   <none>           <none>
kube-system   calico-node-cctlm                          1/1     Running   0          8m7s    192.168.80.170   192.168.80.170   <none>           <none>
kube-system   calico-node-hppss                          1/1     Running   0          8m7s    192.168.80.130   192.168.80.130   <none>           <none>
kube-system   calico-node-rkrc5                          1/1     Running   0          8m7s    192.168.80.120   192.168.80.120   <none>           <none>
kube-system   calico-node-s2bz7                          1/1     Running   0          8m7s    192.168.80.140   192.168.80.140   <none>           <none>
kube-system   calico-node-wwjrz                          1/1     Running   0          8m7s    192.168.80.150   192.168.80.150   <none>           <none>
[root@localhost7C ~]# 
[root@localhost7C ~]# 
[root@localhost7C ~]#  kubectl exec  -it net-test1-5fcc69db59-2p7ld sh
/ # traceroute
traceroute   traceroute6
/ # traceroute 10.20.41.128
traceroute to 10.20.41.128 (10.20.41.128), 30 hops max, 46 byte packets
 1  192.168.80.170 (192.168.80.170)  0.006 ms  0.006 ms  0.004 ms
 2  192.168.80.150 (192.168.80.150)  0.261 ms  0.254 ms  0.308 ms      # forwarded via the host's physical interface
 3  10.20.41.128 (10.20.41.128)  0.357 ms  0.270 ms  0.236 ms
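In pure BGP mode the pod traffic rides the physical NIC without any encapsulation, which a capture on the destination node confirms (a sketch assuming the interface is eth0):

tcpdump -n -i eth0 icmp and host 10.20.41.128   #plain pod-to-pod ICMP directly on the wire
tcpdump -n -i eth0 ip proto 4                   #should now stay silent: no IP-in-IP packets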




 

3. Calico YAML Manifest

# Calico Version v3.4.4 
# https://docs.projectcalico.org/v3.4/releases#v3.4.4
# This manifest includes the following component versions:
#   calico/node:v3.4.4
#   calico/cni:v3.4.4
#   calico/kube-controllers:v3.4.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://192.168.80.200:2379,https://192.168.80.190:2379,https://192.168.80.180:2379"

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
  # Configure the Calico backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "warning",
          "etcd_endpoints": "https://192.168.80.200:2379,https://192.168.80.190:2379,https://192.168.80.180:2379",
          "etcd_key_file": "/etc/calico/ssl/calico-key.pem",
          "etcd_cert_file": "/etc/calico/ssl/calico.pem",
          "etcd_ca_cert_file": "/etc/kubernetes/ssl/ca.pem",
          "mtu": 1500,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "/root/.kube/config"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---

# The 'calico-etcd-secrets' secret is created from the command line (kubectl create);
# refer to 'roles/calico/tasks/main.yml' for details.

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      priorityClassName: system-cluster-critical
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      initContainers:
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.4.4
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.4.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "can-reach=192.168.80.140"
            # Enable or disable IPIP encapsulation (disabled here)
            - name: CALICO_IPV4POOL_IPIP
              value: "off"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.20.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging 
            - name: FELIX_LOGSEVERITYSCREEN
              value: "warning"
            - name: FELIX_HEALTHENABLED
              value: "true"
            # Set Kubernetes NodePorts: If services do use NodePorts outside Calico’s expected range,
            # Calico will treat traffic to those ports as host traffic instead of pod traffic.
            - name: FELIX_KUBENODEPORTRANGES
              value: "30000:40000"
            - name: FELIX_PROMETHEUSMETRICSENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /usr/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      priorityClassName: system-cluster-critical
      nodeSelector:
        beta.kubernetes.io/os: linux
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.4.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
rules:
  # Pods are monitored for changing labels.
  # The node controller monitors Kubernetes nodes.
  # Namespace and serviceaccount labels are used for policy.
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
---
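For reference, a minimal sketch of rolling this manifest out by hand (kubeasz normally does this for you; the manifest is assumed to be saved as calico.yaml, and the calico-etcd-secrets Secret must already exist, as noted above):

kubectl apply -f calico.yaml
kubectl -n kube-system rollout status ds/calico-node
kubectl -n kube-system rollout status deploy/calico-kube-controllers
calicoctl node status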

 
