Cluster-API Research

Overview

Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
Cluster API treats a Kubernetes cluster itself as a resource, so clusters can be created and managed declaratively. First, Cluster API is deployed onto an existing Kubernetes cluster, which is then called the management cluster; the clusters declared and created through it are called workload clusters, i.e. the clusters that actually run workloads. A management cluster is normally dedicated to managing workload clusters and runs no other applications.
It can be used in a variety of environments: AWS, Azure, DigitalOcean, Docker, Equinix Metal, GCP, Hetzner, Metal3, OpenStack, and vSphere.
The test below is based on the Docker environment, with the management cluster created separately using kind.
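
At a high level the flow is as follows; each step is detailed below.

# 1. Create the management cluster with kind
kind create cluster --config kind-cluster-with-extramounts.yaml
# 2. Install Cluster API and the Docker (CAPD) provider into it
clusterctl init --infrastructure docker
# 3. Declare and create the workload cluster
clusterctl generate cluster capi-quickstart --flavor development ... | kubectl apply -f -
# 4. Fetch the workload cluster's kubeconfig and deploy a CNI into it
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig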

Environment

  • RockyLinux 8.5
  • Docker 20.10
  • Kind 0.11.1
  • kubectl 1.21.9
  • clusterctl 1.1.2
[root@localhost ~]# hostnamectl set-hostname cluster-api.test
[root@localhost ~]# hostname
cluster-api.test
# Disable the firewall and SELinux (setenforce 0 lasts only until reboot)
[root@localhost ~]# systemctl disable firewalld --now
[root@localhost ~]# setenforce 0
# Kernel configuration
[root@localhost ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

[root@localhost ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@localhost ~]# sudo sysctl --system
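
# Note: modules-load.d only takes effect at the next boot. To load the module
# immediately and verify the sysctl values (a quick check, not from the original run):
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # expect "... = 1"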

[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@localhost ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@localhost ~]# sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
[root@localhost ~]# yum -y install docker-ce
[root@localhost ~]# systemctl enable docker --now
[root@localhost ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:22 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:44 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0


[root@cluster-api ~]# yum install vim wget -y
[root@cluster-api ~]# wget https://github.com/kubernetes-sigs/kind/releases/download/v0.11.1/kind-linux-amd64
[root@cluster-api ~]# chmod +x kind-linux-amd64
[root@cluster-api ~]# mv kind-linux-amd64 /usr/bin/kind
[root@cluster-api ~]# kind version
kind v0.11.1 go1.16.4 linux/amd64

[root@cluster-api ~]# wget http://rancher-mirror.rancher.cn/kubectl/v1.21.9/linux-amd64-v1.21.9-kubectl
[root@cluster-api ~]# chmod +x linux-amd64-v1.21.9-kubectl
[root@cluster-api ~]# mv linux-amd64-v1.21.9-kubectl /usr/bin/kubectl
[root@cluster-api ~]#

[root@cluster-api ~]# wget https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.2/clusterctl-linux-amd64
[root@cluster-api ~]# chmod +x clusterctl-linux-amd64
[root@cluster-api ~]# mv clusterctl-linux-amd64 /usr/bin/clusterctl
[root@cluster-api ~]# clusterctl version
clusterctl version: &version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3433f7b769b4e7f5cb899b2742a5a8a1a9f51b3e", GitTreeState:"clean", BuildDate:"2022-02-17T19:14:59Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}

Procedure

  1. Create the kind cluster configuration file and create the cluster
[root@cluster-api ~]# cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
[root@cluster-api ~]#
[root@cluster-api ~]# ls
anaconda-ks.cfg  kind-cluster-with-extramounts.yaml
[root@cluster-api ~]# kind create cluster --config kind-cluster-with-extramounts.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊

[root@cluster-api ~]# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:35695
CoreDNS is running at https://127.0.0.1:35695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@cluster-api ~]# kubectl get node -o wide
NAME                 STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION              CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane,master   3m10s   v1.21.1   172.18.0.2    <none>        Ubuntu 21.04   4.18.0-348.el8.0.2.x86_64   containerd://1.5.2
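
# The extraMounts entry above exposes the host Docker socket inside the kind
# node; the CAPD provider uses it to create the workload cluster's node
# containers on the host Docker daemon. An optional sanity check (not from
# the original run):
docker exec kind-control-plane ls -l /var/run/docker.sock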

  2. Initialize the management cluster
[root@cluster-api ~]# export CLUSTER_TOPOLOGY=true
[root@cluster-api ~]# clusterctl init --infrastructure docker
Fetching providers
Installing cert-manager Version="v1.5.3"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.1.2" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.1.2" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.1.2" TargetNamespace="capi-kubeadm-control-plane-system"
I0225 09:44:12.323072   22464 request.go:665] Waited for 1.025729779s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:35695/apis/bootstrap.cluster.x-k8s.io/v1beta1?timeout=30s
Installing Provider="infrastructure-docker" Version="v1.1.2" TargetNamespace="capd-system"

Your management cluster has been initialized successfully!

You can now create your first workload cluster by running the following:

  clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -

[root@cluster-api ~]# kubectl  get deploy -A -o wide
NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS               IMAGES                                                           SELECTOR
capd-system                         capd-controller-manager                         0/1     1            0           15m   manager                  gcr.io/k8s-staging-cluster-api/capd-manager:v1.1.2               cluster.x-k8s.io/provider=infrastructure-docker,control-plane=controller-manager
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       0/1     1            0           15m   manager                  k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2       cluster.x-k8s.io/provider=bootstrap-kubeadm,control-plane=controller-manager
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   0/1     1            0           15m   manager                  k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2   cluster.x-k8s.io/provider=control-plane-kubeadm,control-plane=controller-manager
capi-system                         capi-controller-manager                         0/1     1            0           15m   manager                  k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2             cluster.x-k8s.io/provider=cluster-api,control-plane=controller-manager
cert-manager                        cert-manager                                    1/1     1            1           16m   cert-manager             quay.io/jetstack/cert-manager-controller:v1.5.3                  app.kubernetes.io/component=controller,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cert-manager
cert-manager                        cert-manager-cainjector                         1/1     1            1           16m   cert-manager             quay.io/jetstack/cert-manager-cainjector:v1.5.3                  app.kubernetes.io/component=cainjector,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cainjector
cert-manager                        cert-manager-webhook                            1/1     1            1           16m   cert-manager             quay.io/jetstack/cert-manager-webhook:v1.5.3                     app.kubernetes.io/component=webhook,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=webhook
kube-system                         coredns                                         2/2     2            2           29m   coredns                  k8s.gcr.io/coredns/coredns:v1.8.0                                k8s-app=kube-dns
local-path-storage                  local-path-provisioner                          1/1     1            1           29m   local-path-provisioner   docker.io/rancher/local-path-provisioner:v0.0.14                 app=local-path-provisioner

# Some of the default images are pulled from Google registries; replace them with accessible mirrors
[root@cluster-api ~]# kubectl  set image deploy/capd-controller-manager -n capd-system manager=registry.cn-hangzhou.aliyuncs.com/k8gcrio/capd-manager:v1.1.2
deployment.apps/capd-controller-manager image updated
[root@cluster-api ~]# kubectl  set image deploy/capi-kubeadm-bootstrap-controller-manager -n capi-kubeadm-bootstrap-system manager=registry.cn-hangzhou.aliyuncs.com/k8gcrio/kubeadm-bootstrap-controller:v1.1.2
deployment.apps/capi-kubeadm-bootstrap-controller-manager image updated
[root@cluster-api ~]# kubectl  set image deploy/capi-kubeadm-control-plane-controller-manager -n capi-kubeadm-control-plane-system manager=registry.cn-hangzhou.aliyuncs.com/k8gcrio/kubeadm-control-plane-controller:v1.1.2
deployment.apps/capi-kubeadm-control-plane-controller-manager image updated
[root@cluster-api ~]# kubectl  set image deploy/capi-controller-manager -n capi-system manager=registry.cn-hangzhou.aliyuncs.com/k8gcrio/cluster-api-controller:v1.1.2   
deployment.apps/capi-controller-manager image updated
[root@cluster-api ~]#
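
# After swapping images, it can help to wait for each rollout to complete
# (a sketch, not from the original run):
kubectl -n capd-system rollout status deploy/capd-controller-manager
kubectl -n capi-kubeadm-bootstrap-system rollout status deploy/capi-kubeadm-bootstrap-controller-manager
kubectl -n capi-kubeadm-control-plane-system rollout status deploy/capi-kubeadm-control-plane-controller-manager
kubectl -n capi-system rollout status deploy/capi-controller-manager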

# Make sure all pods are Running
[root@cluster-api ~]# kubectl get po -A
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-786d464fb6-vth69                         1/1     Running   0          6m36s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-76844f4976-hm8qt       1/1     Running   0          4m16s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-667459965f-2bnvb   1/1     Running   0          2m38s
capi-system                         capi-controller-manager-65dd59549d-7hrl5                         1/1     Running   0          45s
cert-manager                        cert-manager-848f547974-zpjnp                                    1/1     Running   0          42m
cert-manager                        cert-manager-cainjector-54f4cc6b5-5k7gx                          1/1     Running   0          42m
cert-manager                        cert-manager-webhook-7c9588c76-rx2j6                             1/1     Running   0          42m
kube-system                         coredns-558bd4d5db-4nnbv                                         1/1     Running   0          55m
kube-system                         coredns-558bd4d5db-q2fq2                                         1/1     Running   0          55m
kube-system                         etcd-kind-control-plane                                          1/1     Running   0          55m
kube-system                         kindnet-ts7m9                                                    1/1     Running   0          55m
kube-system                         kube-apiserver-kind-control-plane                                1/1     Running   0          55m
kube-system                         kube-controller-manager-kind-control-plane                       1/1     Running   0          55m
kube-system                         kube-proxy-ndpq8                                                 1/1     Running   0          55m
kube-system                         kube-scheduler-kind-control-plane                                1/1     Running   0          55m
local-path-storage                  local-path-provisioner-547f784dff-rtxl7                          1/1     Running   0          55m
[root@cluster-api ~]#

  3. Create the workload cluster

# Set environment variables for the workload cluster configuration; unset variables fall back to defaults
[root@cluster-api ~]# # The list of service CIDR, default ["10.128.0.0/12"]
[root@cluster-api ~]# export SERVICE_CIDR=["10.96.0.0/12"]
[root@cluster-api ~]# # The list of pod CIDR, default ["192.168.0.0/16"]
[root@cluster-api ~]# export POD_CIDR=["192.168.0.0/16"]
[root@cluster-api ~]# # The service domain, default "cluster.local"
[root@cluster-api ~]# export SERVICE_DOMAIN="k8s.test"
[root@cluster-api ~]#
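
# Instead of per-shell exports, these variables can also be persisted in
# clusterctl's configuration file; a minimal sketch, assuming the default
# path ~/.cluster-api/clusterctl.yaml:
SERVICE_CIDR: ["10.96.0.0/12"]
POD_CIDR: ["192.168.0.0/16"]
SERVICE_DOMAIN: "k8s.test"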

# Generate the workload cluster manifest
[root@cluster-api ~]# clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.23.3 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > capi-quickstart.yaml
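
# Tip: to see which variables a flavor accepts before generating,
# clusterctl provides a --list-variables flag:
clusterctl generate cluster capi-quickstart --flavor development --list-variables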

[root@cluster-api ~]#
[root@cluster-api ~]# cat capi-quickstart.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: k8s.test
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: capi-quickstart-control-plane
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: capi-quickstart
    namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: capi-quickstart
  namespace: default
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
        - localhost
        - 127.0.0.1
        - 0.0.0.0
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
    initConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: capi-quickstart-control-plane
      namespace: default
  replicas: 3
  version: v1.23.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: /var/run/docker.sock
        hostPath: /var/run/docker.sock
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  clusterName: capi-quickstart
  replicas: 3
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: capi-quickstart-md-0
          namespace: default
      clusterName: capi-quickstart
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: capi-quickstart-md-0
        namespace: default
      version: v1.23.3
[root@cluster-api ~]#

# Create the cluster
[root@cluster-api ~]# kubectl  apply -f capi-quickstart.yaml
cluster.cluster.x-k8s.io/capi-quickstart created
dockercluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created
machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
[root@cluster-api ~]#
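
# Provisioning takes a few minutes; progress can be watched through the
# Machine objects in the management cluster, e.g.:
kubectl get machines
kubectl get dockermachines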

  4. Manage the workload cluster
# Check the workload cluster provisioning status
[root@cluster-api ~]# kubectl get cluster
NAME              PHASE         AGE     VERSION
capi-quickstart   Provisioned   2m11s
[root@cluster-api ~]# clusterctl describe cluster capi-quickstart
NAME                                                                READY  SEVERITY  REASON                           SINCE  MESSAGE                                
Cluster/capi-quickstart                                             False  Warning   ScalingUp                        116s   Scaling up control plane to 3 replicas (actual 1)
├─ClusterInfrastructure - DockerCluster/capi-quickstart             True                                              2m18s                                         
├─ControlPlane - KubeadmControlPlane/capi-quickstart-control-plane  False  Warning   ScalingUp                        116s   Scaling up control plane to 3 replicas (actual 1)
│ └─Machine/capi-quickstart-control-plane-7mg5j                     False  Info      Bootstrapping                    23s    1 of 2 completed                       
└─Workers                                                                                                                                                           
  └─MachineDeployment/capi-quickstart-md-0                          False  Warning   WaitingForAvailableMachines      2m33s  Minimum availability requires 3 replicas, current 0 available
    └─3 Machines...                                                 False  Info      WaitingForControlPlaneAvailable  2m18s  See capi-quickstart-md-0-58d47fc5f7-7hcbr, capi-quickstart-md-0-58d47fc5f7-hv7bl, ...
[root@cluster-api ~]#

# Check whether the workload cluster's control plane has finished initializing
[root@cluster-api ~]# kubectl get kubeadmcontrolplane
NAME                            CLUSTER           INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE     VERSION
capi-quickstart-control-plane   capi-quickstart   true                                 2                  2         2             3m47s   v1.23.3
[root@cluster-api ~]#

# Once the control plane is up, export the workload cluster's kubeconfig
[root@cluster-api ~]# clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
[root@cluster-api ~]#

# Use the exported kubeconfig to inspect the workload cluster
[root@cluster-api ~]# kubectl --kubeconfig capi-quickstart.kubeconfig get node
NAME                                    STATUS     ROLES                  AGE     VERSION
capi-quickstart-control-plane-7mg5j     NotReady   control-plane,master   4m17s   v1.23.3
capi-quickstart-control-plane-fkjdw     NotReady   control-plane,master   2m36s   v1.23.3
capi-quickstart-md-0-58d47fc5f7-7hcbr   NotReady   <none>                 2m59s   v1.23.3
capi-quickstart-md-0-58d47fc5f7-hv7bl   NotReady   <none>                 2m59s   v1.23.3
capi-quickstart-md-0-58d47fc5f7-lgx9d   NotReady   <none>                 2m59s   v1.23.3
[root@cluster-api ~]#
[root@cluster-api ~]# kubectl --kubeconfig capi-quickstart.kubeconfig get all -A
NAMESPACE     NAME                                                              READY   STATUS    RESTARTS        AGE
kube-system   pod/coredns-64897985d-2fd72                                       0/1     Pending   0               5m1s
kube-system   pod/coredns-64897985d-grmk2                                       0/1     Pending   0               5m1s
kube-system   pod/etcd-capi-quickstart-control-plane-7mg5j                      1/1     Running   0               5m1s
kube-system   pod/etcd-capi-quickstart-control-plane-fkjdw                      1/1     Running   0               3m24s
kube-system   pod/kube-apiserver-capi-quickstart-control-plane-7mg5j            1/1     Running   0               5m1s
kube-system   pod/kube-apiserver-capi-quickstart-control-plane-fkjdw            1/1     Running   1 (3m12s ago)   3m9s
kube-system   pod/kube-controller-manager-capi-quickstart-control-plane-7mg5j   1/1     Running   2 (2m18s ago)   5m1s
kube-system   pod/kube-controller-manager-capi-quickstart-control-plane-fkjdw   1/1     Running   0               3m9s
kube-system   pod/kube-proxy-5xz9f                                              1/1     Running   0               3m48s
kube-system   pod/kube-proxy-bj5md                                              1/1     Running   0               3m25s
kube-system   pod/kube-proxy-pd48w                                              1/1     Running   0               3m48s
kube-system   pod/kube-proxy-v4w4k                                              1/1     Running   0               3m48s
kube-system   pod/kube-proxy-w8hgn                                              1/1     Running   0               5m1s
kube-system   pod/kube-scheduler-capi-quickstart-control-plane-7mg5j            1/1     Running   2 (2m21s ago)   5m6s
kube-system   pod/kube-scheduler-capi-quickstart-control-plane-fkjdw            1/1     Running   0               3m9s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  5m18s
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   5m5s

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   5         5         5       5            5           kubernetes.io/os=linux   5m5s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           5m5s

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-64897985d   2         2         0       5m1s
[root@cluster-api ~]#

# List the current node containers (kind runs Kubernetes nodes as containers)
[root@cluster-api ~]# docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS                                  NAMES
b8ec18c847ab   kindest/node:v1.23.3                 "/usr/local/bin/entr…"   22 minutes ago   Up 21 minutes   38223/tcp, 127.0.0.1:38223->6443/tcp   capi-quickstart-control-plane-fkjdw
9f04dd630847   kindest/node:v1.23.3                 "/usr/local/bin/entr…"   22 minutes ago   Up 21 minutes                                          capi-quickstart-md-0-58d47fc5f7-hv7bl
010e83ddfa63   kindest/node:v1.23.3                 "/usr/local/bin/entr…"   22 minutes ago   Up 21 minutes                                          capi-quickstart-md-0-58d47fc5f7-lgx9d
155232d3cba1   kindest/node:v1.23.3                 "/usr/local/bin/entr…"   22 minutes ago   Up 21 minutes                                          capi-quickstart-md-0-58d47fc5f7-7hcbr
82166a42acd3   kindest/node:v1.23.3                 "/usr/local/bin/entr…"   24 minutes ago   Up 23 minutes   36755/tcp, 127.0.0.1:36755->6443/tcp   capi-quickstart-control-plane-7mg5j
fb44ec37abb4   kindest/haproxy:v20210715-a6da3463   "haproxy -sf 7 -W -d…"   25 minutes ago   Up 25 minutes   44679/tcp, 0.0.0.0:44679->6443/tcp     capi-quickstart-lb
2631ef09c129   kindest/node:v1.21.1                 "/usr/local/bin/entr…"   2 hours ago      Up 2 hours      127.0.0.1:35695->6443/tcp              kind-control-plane
[root@cluster-api ~]#
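
# Day-2 operations are driven from the management cluster as well. For
# example, workers could be scaled via the MachineDeployment, and control
# plane nodes via the KubeadmControlPlane (a sketch, not executed here):
kubectl scale machinedeployment capi-quickstart-md-0 --replicas=5
kubectl scale kubeadmcontrolplane capi-quickstart-control-plane --replicas=5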

  5. Deploy a CNI in the workload cluster
# Deploy the CNI components for the workload cluster
[root@cluster-api ~]# kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.21/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[root@cluster-api ~]#

[root@cluster-api ~]# kubectl --kubeconfig=./capi-quickstart.kubeconfig get po -A
NAMESPACE     NAME                                                          READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-85b5b5888d-9l89b                      1/1     Running   0             10m
kube-system   calico-node-2n8zv                                             1/1     Running   0             10m
kube-system   calico-node-72jll                                             1/1     Running   0             10m
kube-system   calico-node-9bfqm                                             1/1     Running   0             10m
kube-system   calico-node-rrf58                                             1/1     Running   0             10m
kube-system   calico-node-zfmwg                                             1/1     Running   0             10m
kube-system   coredns-64897985d-2fd72                                       1/1     Running   0             17m
kube-system   coredns-64897985d-grmk2                                       1/1     Running   0             17m
kube-system   etcd-capi-quickstart-control-plane-7mg5j                      1/1     Running   0             17m
kube-system   etcd-capi-quickstart-control-plane-fkjdw                      1/1     Running   0             16m
kube-system   kube-apiserver-capi-quickstart-control-plane-7mg5j            1/1     Running   0             17m
kube-system   kube-apiserver-capi-quickstart-control-plane-fkjdw            1/1     Running   1 (15m ago)   15m
kube-system   kube-controller-manager-capi-quickstart-control-plane-7mg5j   1/1     Running   2 (15m ago)   17m
kube-system   kube-controller-manager-capi-quickstart-control-plane-fkjdw   1/1     Running   0             15m
kube-system   kube-proxy-5xz9f                                              1/1     Running   0             16m
kube-system   kube-proxy-bj5md                                              1/1     Running   0             16m
kube-system   kube-proxy-pd48w                                              1/1     Running   0             16m
kube-system   kube-proxy-v4w4k                                              1/1     Running   0             16m
kube-system   kube-proxy-w8hgn                                              1/1     Running   0             17m
kube-system   kube-scheduler-capi-quickstart-control-plane-7mg5j            1/1     Running   2 (15m ago)   17m
kube-system   kube-scheduler-capi-quickstart-control-plane-fkjdw            1/1     Running   0             15m
[root@cluster-api ~]#

# Once the CNI is deployed, all cluster nodes become Ready
[root@cluster-api ~]# kubectl --kubeconfig capi-quickstart.kubeconfig get node
NAME                                    STATUS   ROLES                  AGE   VERSION
capi-quickstart-control-plane-7mg5j     Ready    control-plane,master   20m   v1.23.3
capi-quickstart-control-plane-fkjdw     Ready    control-plane,master   18m   v1.23.3
capi-quickstart-md-0-58d47fc5f7-7hcbr   Ready    <none>                 19m   v1.23.3
capi-quickstart-md-0-58d47fc5f7-hv7bl   Ready    <none>                 19m   v1.23.3
capi-quickstart-md-0-58d47fc5f7-lgx9d   Ready    <none>                 19m   v1.23.3
[root@cluster-api ~]#
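
# Upgrades are declarative too: bumping spec.version on the KubeadmControlPlane
# (and on the MachineDeployment) triggers a rolling replacement of machines.
# A sketch only, assuming a kindest/node image exists for the target version:
kubectl patch kubeadmcontrolplane capi-quickstart-control-plane --type merge -p '{"spec":{"version":"v1.23.4"}}'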

  6. Clean up the workload cluster
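# Delete the Cluster object (rather than removing node containers directly)
# so the management cluster tears down all dependent resources in order: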
[root@cluster-api ~]# kubectl delete cluster capi-quickstart
cluster.cluster.x-k8s.io "capi-quickstart" deleted

# Only the management cluster's node container is left
[root@cluster-api ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                       NAMES
2631ef09c129   kindest/node:v1.21.1   "/usr/local/bin/entr…"   2 hours ago   Up 2 hours   127.0.0.1:35695->6443/tcp   kind-control-plane
[root@cluster-api ~]#
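
# Finally, the management cluster itself can be removed with kind:
kind delete cluster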
