17_0_Kubernetes v1.25 Cluster Deployment and Architecture Overview

Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications.
Installation environment and component versions: CentOS 7.9, Kubernetes v1.25.3, Docker 20.10.21, cri-dockerd 0.2.6, Calico, Dashboard.
Official documentation: https://kubernetes.io/zh/docs/home

01 Why Use Kubernetes

Enterprise requirement: to improve concurrency and high availability, businesses run their services on multiple servers, which raises questions such as:

  • Multiple containers providing a service across hosts
  • Multiple containers distributed across nodes
  • How to upgrade a large number of containers
  • How to manage these containers efficiently, e.g. monitoring, deployment, and rollback

02 Basic Definition

Kubernetes is a container cluster management system open-sourced by Google in 2014, commonly abbreviated as K8s.

Kubernetes is used for deploying, scaling, and managing containerized applications; its goal is to make deploying containerized applications simple and efficient.

03 Cluster Architecture and Components

3.1 Master Components

kube-apiserver: the Kubernetes API server and the unified entry point of the cluster, coordinating all other components. It exposes a RESTful API; all create/update/delete and watch operations on resource objects go through the API server, which then persists them to etcd.

kube-controller-manager: handles routine background tasks in the cluster. Each resource type has a corresponding controller (e.g. for Deployments and Services), and the controller manager is responsible for running these controllers.

kube-scheduler: selects a Node for each newly created Pod according to the scheduling algorithm. It can be deployed anywhere, on the same node as other control-plane components or on a separate one.

etcd: a distributed key-value store used to persist cluster state, such as Pod and Service objects.

3.2 Node Components

kubelet: the agent of the Master on each Node. It manages the lifecycle of the containers running on the local machine, e.g. creating containers, mounting Pod volumes, downloading Secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.

kube-proxy: implements Pod network proxying on each Node, maintaining network rules and Layer 4 load balancing.

Third-party container engine (e.g. Docker, containerd, Podman): the container runtime that actually runs the containers.
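Once the cluster built in section 04 is running, these components can be observed directly from the master; a minimal sketch (assumes a working kubeconfig):

# control-plane components (apiserver, controller-manager, scheduler, etcd) and kube-proxy run as Pods in kube-system
kubectl get pods -n kube-system -o wide
# the kubelet runs as a systemd service on every node
systemctl status kubelet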

04 Cluster Deployment

4.1 Two Deployment Methods

(1) kubeadm

kubeadm is a tool that provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. Documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

(2) Binaries

Download the release binaries from the official site and deploy each component manually to form a Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases

Here we use kubeadm for the installation.

4.2 Preparation

CentOS 7.9, Kubernetes v1.25.3, Docker, cri-dockerd

(1) Configure the CentOS 7 yum repository

# This repo includes base, updates, and epel, and automatically matches the CentOS release version
# Reference (USTC): https://mirrors.ustc.edu.cn/help/centos.html

sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
         -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.ustc.edu.cn/centos|g' \
         -i.bak \
         /etc/yum.repos.d/CentOS-Base.repo
# Rebuild the yum cache
yum makecache

(2) yum updates

yum update -y
yum install -y lrzsz
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

(3) Disable SELinux, firewalld, and the swap partition

# Check SELinux status
sestatus
# Disable SELinux temporarily
setenforce 0
# Disable SELinux permanently
sed -i s#SELINUX=enforcing#SELINUX=disabled# /etc/selinux/config


# Stop and permanently disable firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

(4) Set hostnames and add hosts entries

# Set the hostname on each machine according to the plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.22.160 master
192.168.22.161 slave1
192.168.22.162 slave2
EOF

(5) Install Docker

Docker installed via yum uses the cgroupfs cgroup driver by default, while the Kubernetes kubelet defaults to systemd, so we need to change Docker's cgroup driver to systemd and configure registry mirrors at the same time:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
       "https://j16wttpi.mirror.aliyuncs.com",
       "https://hub-mirror.c.163.com",
       "https://mirror.baidubce.com"
  ]
}
EOF

Then install Docker:

# Install prerequisite packages
yum install -y yum-utils
# Add the Docker package repository
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE
yum install -y docker-ce

systemctl start docker
systemctl enable docker
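After Docker is up, it is worth confirming that the cgroup driver from daemon.json actually took effect; a quick check:

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd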

(6) Configure iptables ACCEPT rules

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

(7) Adjust kernel parameters

This makes bridged traffic visible to iptables. Kubernetes implements part of its networking and forwarding on top of iptables, which is itself part of the Linux netfilter framework; with iptables, Linux can control network access precisely, e.g. dropping packets from specific addresses. Kubernetes officially recommends enabling this setting.

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply
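Note that these bridge sysctls only exist once the br_netfilter kernel module is loaded; on a fresh host it may need to be loaded (and persisted across reboots) first, e.g.:

# load the module now and on every boot
modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF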

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

4.3 Deploy cri-dockerd

4.3.1 Why cri-dockerd is needed

Kubernetes removed the built-in dockershim in v1.24, so the kubelet can no longer talk to Docker Engine directly. cri-dockerd is a standalone CRI shim that sits between the kubelet and Docker Engine, which lets Docker keep serving as the container runtime on v1.25.
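The 0.2.6 binary tarball used below can be downloaded from the cri-dockerd GitHub releases page (the exact URL is assumed from the project's release naming convention), then extracted and installed:

# assumed release URL for v0.2.6
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz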

tar -xf cri-dockerd-0.2.6.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
chmod +x /usr/bin/cri-dockerd
  • Create the systemd service unit by running:
cat <<"EOF" > /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
  • Create the socket unit by running:
cat <<"EOF" > /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
  • Start cri-docker and enable it at boot
systemctl daemon-reload
systemctl enable cri-docker --now
systemctl is-active cri-docker

4.4 Install kubeadm, kubelet, and kubectl

(1) Add the Aliyun yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(2) Install on all three machines

yum install -y kubelet-1.25.3 kubeadm-1.25.3 kubectl-1.25.3
# Enable kubelet at boot
systemctl enable kubelet
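A quick sanity check that the expected 1.25.3 versions were installed, using the standard version flags of the three tools:

kubeadm version -o short
kubelet --version
kubectl version --client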

(3) Modify the kubelet configuration file

cat << EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

4.5 Deploy the Kubernetes Master

(1) Before initializing the master node, you can pull the required images in advance to shorten initialization time

# List the images that will be pulled
kubeadm config images list

# Pull the images
kubeadm config images pull

Since the default image registry registry.k8s.io is not reachable from within China,

we replace it with registry.cn-hangzhou.aliyuncs.com/google_containers,

and write a script that pulls the images and re-tags them. The script is as follows:

vi install.sh

#!/bin/bash
# length of the default registry prefix "registry.k8s.io/"
k8slen=$(echo "registry.k8s.io/" | wc -L)
for file in $(kubeadm config images list)
do
    # file example: registry.k8s.io/kube-apiserver:v1.25.4
    # ${file:${k8slen}} example: kube-apiserver:v1.25.4
    # pull the image from the Aliyun mirror
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${file:${k8slen}}
    # re-tag it with its original registry.k8s.io name
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/${file:${k8slen}} ${file}
    # remove the intermediate Aliyun tag
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${file:${k8slen}}
done
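Then run the script and confirm the images now exist locally under their registry.k8s.io names; a minimal usage sketch:

bash install.sh
docker images | grep -E 'kube-|etcd|coredns|pause'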

If initialization runs into problems, you can reset the node with kubeadm and then initialize again:

kubeadm reset

(2) Run the following on 192.168.22.160 (the Master)

kubeadm init \
--apiserver-advertise-address=192.168.22.160 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.3 \
--service-cidr=10.10.0.0/12 \
--pod-network-cidr=172.17.0.0/16 \
--cri-socket /var/run/cri-dockerd.sock \
--ignore-preflight-errors=all

# Sample output on success
[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.22.160 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.25.3 \
> --service-cidr=10.10.0.0/12 \
> --pod-network-cidr=172.17.0.0/16 \
> --cri-socket /var/run/cri-dockerd.sock \
> --ignore-preflight-errors=all
W1117 06:22:39.399808    2368 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.0.0.1 192.168.22.160]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.22.160 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.22.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.511013 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 22nhvq.mbtfh8vnfhp6onu2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.22.160:6443 --token 22nhvq.mbtfh8vnfhp6onu2 \
        --discovery-token-ca-cert-hash sha256:8728a0ea42a224922b893ff15a786aac4529dfe6674d0325e5e8eb5b9f62afed
  • Joining nodes after the token has expired

If you forgot to record the token, or enough time has passed that joining a node reports the token has expired, you can obtain the token and hash like this:

# Create a new bootstrap token and list existing tokens
kubeadm token create
kubeadm token list
# Compute the discovery-token-ca-cert-hash from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Or regenerate the full join command on the master, then copy the output and run it on the worker node
kubeadm token create --print-join-command

kubeadm join k8s-master:6443 --token zngem6.f06ys380p9yur1v1 \
    --discovery-token-ca-cert-hash sha256:26903a8256b69daba36e276c0d1e384e9e137bd348d2967209b7dfbd379c9185

• --apiserver-advertise-address: the address the cluster is advertised on

• --image-repository: the default image registry (registry.k8s.io, formerly k8s.gcr.io) is not reachable from within China, so the Aliyun registry is specified here

• --kubernetes-version: the K8s version, matching the packages installed above

• --service-cidr: the cluster-internal virtual network, the unified access entry for Pods (Service IPs)

• --pod-network-cidr: the Pod network; it must match the CIDR used in the CNI network plugin YAML deployed below


After initialization completes, a join command is printed at the end; note it down, it will be used below.

(3) Copy the kubeconfig file that kubectl uses to authenticate to the cluster into the default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

(4) On slave1 and slave2, run the join command printed by the initialization

Note: when joining the cluster, append " --cri-socket unix:///var/run/cri-dockerd.sock " to the join command, as in the sketch below.
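Combined with the token and hash printed by kubeadm init above, the full command on each worker looks like this (substitute your own token and hash):

kubeadm join 192.168.22.160:6443 --token 22nhvq.mbtfh8vnfhp6onu2 \
        --discovery-token-ca-cert-hash sha256:8728a0ea42a224922b893ff15a786aac4529dfe6674d0325e5e8eb5b9f62afed \
        --cri-socket unix:///var/run/cri-dockerd.sock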

(5) Check the worker nodes:
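On the master:

kubectl get node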

Note: the nodes show NotReady because the network plugin has not been deployed yet.

4.6 Install the Calico network plugin

Consider the following:
1. Each host's Docker bridge network is independent; the default subnet is 172.17.0.0/16.
2. Containers created on a Docker host are assigned IPs at random from that subnet.
3. How does a packet from container 1 reach container 2 on another host?
The usual answers are iptables (NAT) or static routes (treating each host as a virtual router), but for Kubernetes, wiring up communication between large numbers of containers this way is too complex, which is why network plugins such as Calico are introduced.

The purpose of deploying a network plugin is therefore to connect Pod-to-Pod and Node-to-Pod traffic so that packets can travel anywhere in the cluster, forming a flat network. The mainstream network plugins are Flannel, Calico, and others; CNI (Container Network Interface) is simply the interface through which Kubernetes plugs in these third-party network components.

(1) Download the network plugin YAML

wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate

(2) Download the Calico images and load them onto each machine manually

Link: https://pan.baidu.com/s/17aY17ikdtfkmbD_DOeQTvQ?pwd=3val
Extraction code: 3val
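Assuming the share contains the Calico v3.24.5 images exported as tar archives (the file names below are hypothetical), they can be loaded on every node with docker load:

# hypothetical archive names; adjust to whatever is actually in the share
docker load -i calico-cni-v3.24.5.tar
docker load -i calico-node-v3.24.5.tar
docker load -i calico-kube-controllers-v3.24.5.tar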

(3) Run the following on the master node to install the Calico plugin
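Before applying the manifest, it is worth making sure the Pod CIDR in calico.yaml matches the --pod-network-cidr used at init time (172.17.0.0/16). In the stock manifest the variable ships commented out; a minimal sketch of the edit:

vi calico.yaml
# uncomment CALICO_IPV4POOL_CIDR and set it to the cluster's Pod CIDR:
            - name: CALICO_IPV4POOL_CIDR
              value: "172.17.0.0/16"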

kubectl apply -f calico.yaml

(4) Run the following on the master node to check whether the cluster is healthy

kubectl get node -o wide

4.7 Deploy the Dashboard

The Dashboard is an official web UI that can be used for basic management of K8s resources.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml

Edit recommended.yaml and add the lines marked below:

...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  # added
  selector:
    k8s-app: kubernetes-dashboard
...

Deploy and verify the Dashboard:

kubectl apply -f recommended.yaml
kubectl -n kubernetes-dashboard get pods -o wide    # list the Pods in the kubernetes-dashboard namespace

Access URL: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

# (1) Get the token
kubectl -n kubernetes-dashboard describe secrets "token"
# Note: since v1.24.0, creating a ServiceAccount no longer auto-generates a Secret, so a token has to be created manually
# (2) Inspect the service account; its token list is empty
kubectl -n kubernetes-dashboard describe serviceaccounts kubernetes-dashboard
# (3) Create a token and use the printed value to log in to the Dashboard
kubectl -n kubernetes-dashboard create token kubernetes-dashboard

[root@master kubernetes]# kubectl -n kubernetes-dashboard create token kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6IlhXcVJCNlViZ2FNTHNrMUd1MFdqU3J0b3ZvMV9waWpaSm15dEYtLVdsM0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY4ODcyMjAzLCJpYXQiOjE2Njg4Njg2MDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInVpZCI6IjZmZjIzYzRlLWJhMDItNDQzOS1hNTRjLThjMGQ0NDFiZDZjOSJ9fSwibmJmIjoxNjY4ODY4NjAzLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQifQ.QPq7hTtPIHKQULrP8F08Z2ClaGMxailqlUaPraWCgL2avfGsE14p-0fBzpATHiNeM-NKU4zuHy8Meik3ThJy4UZHLtSwqla0KOU8Wcf8Vp47_TyFDR2sLjZZDToqHYtVVNaFSkmZpZHfeb5Rx3pT2Z07DY_RJ_xdl-CHKkaFDqmBydSf1vYIYzG7mK2MII5_OgPMrAHky4nynuCFjtLuiGvObK4uN-SxvtK5VGBNt1CBjijgnhYseux1B9PzAr31lVV1I6h8AALOiyh8IngFy04XSz7NDE-N2-pt0r4t7Aosm6Exva9BD3PIhpdYF_QNv0jFfEOeDDsliFUCNjCPfQ


# (4) Authorization: bind the kubernetes-dashboard account to the default cluster-admin cluster role
kubectl create clusterrolebinding kubernetes-dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
# Note: the binding name (kubernetes-dashboard-cluster-admin) is arbitrary
# (5) Check the cluster role bindings
kubectl get clusterrolebindings -o wide | grep dash

4.8 Switch to containerd

We switch the container runtime on the slave2 node to containerd.

Since containerd already ships as part of the Docker installation, there is no need to install it again; we only need to run it on its own and stop the Docker service (see the sketch below).
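A minimal sketch of that switch on slave2 (service names assumed from a standard docker-ce install plus the cri-docker unit created earlier):

# keep containerd running independently of Docker
systemctl enable --now containerd
# stop and disable the Docker-based runtime path
systemctl disable --now docker cri-docker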

(1) Generate the configuration file

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

(2) Modify the configuration file

• Set the pause (sandbox) image to the Aliyun registry address

• Set the cgroup driver to systemd

• Set the Docker Hub registry mirror (pull acceleration) to the Aliyun mirror address

# Edit the configuration file
vi /etc/containerd/config.toml
   [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"  
         ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
         SystemdCgroup = true
             ...
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
         endpoint = ["https://b9pmyelo.mirror.aliyuncs.com"]
# Restart containerd
systemctl restart containerd

(3) Configure the kubelet to use containerd

vi /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd

systemctl restart kubelet

(4) Verify the switch from the master
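The runtime in use appears in the CONTAINER-RUNTIME column; on the master:

kubectl get node -o wide
# slave2 should now report containerd://... instead of docker://... in CONTAINER-RUNTIME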

4.9 Bug Summary

Viewing error logs:

kubectl get pod -A
kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-658485d5c7-t7fw7

Bug 1

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Edit or create /etc/docker/daemon.json and add the following:

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

Bug 2

# problem
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
# solution
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

References:

https://kubernetes.io/zh-cn/docs/home/

https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart

https://github.com/projectcalico/calico/releases/tag/v3.24.5

https://icloudnative.io/posts/getting-started-with-containerd/
