Offline Deployment of Kubernetes 1.27.2 on CentOS 7.9

1. Nodes (minimal CentOS 7.9 installation)

HostName    vm8649         vm8648         vm8647
IP          10.17.86.49    10.17.86.48    10.17.86.47

2. Configure networking, disable the firewall, and disable SELinux
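
A minimal sketch of this step (assuming a stock minimal install with firewalld running and SELinux enforcing; kubeadm's preflight also expects swap to be off, so that is handled here too):

systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
swapoff -a                               # kubeadm preflight fails while swap is on
sed -ri '/\sswap\s/s/^/#/' /etc/fstab    # keep swap off after reboot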

3. Install required software

yum install vim gcc wget lrzsz bash-completion gperf

4. Install containerd

Download the relevant packages from https://github.com/containerd/:

cri-containerd-cni-1.7.2-linux-amd64.tar.gz
containerd-1.7.2-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.7.2-linux-amd64.tar.gz   # extract the binaries into /usr/local
mkdir tmp; tar -xvf cri-containerd-cni-1.7.2-linux-amd64.tar.gz -C tmp/

mkdir -p /usr/local/lib/systemd/system/
cp tmp/etc/systemd/system/containerd.service /usr/local/lib/systemd/system/   # manage containerd with systemd

systemctl daemon-reload
systemctl start containerd
mkdir -p /etc/containerd && containerd config default > /etc/containerd/config.toml

Configure containerd by editing /etc/containerd/config.toml:

[grpc]
  address = "/var/run/containerd/containerd.sock"

.......
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"
    .....
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
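
Because config.toml was generated after containerd had already been started, restart the service for the new settings to take effect, and enable it on boot; a quick sanity check follows (assuming the default socket path configured above):

systemctl restart containerd
systemctl enable containerd
ctr version    # client and server versions should both print, confirming the daemon answers on the socket
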
5. Add required system settings
# Run:
modprobe br_netfilter

# Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1

# Save, then apply with: sysctl -p

# Create /etc/modules-load.d/k8s.conf so the modules load on boot, with the content:
overlay
br_netfilter
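
A quick check that the modules and sysctls took effect:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward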

6. Install runc and the CNI plugins

Download libseccomp-2.5.4.tar.gz from https://github.com/opencontainers/runc/releases, then build and install it:

tar -xvf libseccomp-2.5.4.tar.gz
cd libseccomp-2.5.4
./configure && make -j8 && make install
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
cp tmp/usr/local/sbin/runc /usr/local/sbin/
./runc --version
runc version 1.1.7
commit: v1.1.7-0-g860f061b
spec: 1.0.2-dev
go: go1.20.4
libseccomp: 2.5.4
mkdir -p /opt/cni/bin
cp tmp/opt/cni/bin/* /opt/cni/bin
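
A quick look at what was installed (the expected plugin names assume the stock containernetworking bundle shipped inside cri-containerd-cni):

ls /opt/cni/bin    # expect bridge, host-local, loopback, portmap, and friends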

7. Install the kube components

Use the Aliyun mirror:

Create /etc/yum.repos.d/kubernetes.repo with the following content:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
yum makecache
yum install kubeadm-1.27.2-0.x86_64 kubectl-1.27.2-0.x86_64 kubelet-1.27.2-0.x86_64
systemctl start kubelet.service
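
Also enable kubelet so it survives reboots, and confirm the installed versions:

systemctl enable kubelet.service
kubeadm version
kubectl version --client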

8. Obtain the images

kubeadm config images list

registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1

Since registry.k8s.io cannot be reached directly from mainland China, pull the images from the Aliyun mirror on another machine that has internet access:

docker pull registry.aliyuncs.com/google_containers/coredns:v1.10.1
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.7-0
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.27.2
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.2
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.27.2
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.2
docker pull registry.aliyuncs.com/google_containers/pause:3.9

# Note: coredns must be retagged as coredns/coredns to match the name kubeadm expects (see the list above)
docker tag registry.aliyuncs.com/google_containers/coredns:v1.10.1                 registry.k8s.io/coredns/coredns:v1.10.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.7-0                    registry.k8s.io/etcd:3.5.7-0
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.27.2          registry.k8s.io/kube-apiserver:v1.27.2
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.2 registry.k8s.io/kube-controller-manager:v1.27.2
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.27.2              registry.k8s.io/kube-proxy:v1.27.2
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.2          registry.k8s.io/kube-scheduler:v1.27.2
docker tag registry.aliyuncs.com/google_containers/pause:3.9                       registry.k8s.io/pause:3.9

docker save -o coredns-v1.10.1.tar                 registry.k8s.io/coredns/coredns:v1.10.1
docker save -o etcd-3.5.7-0.tar                    registry.k8s.io/etcd:3.5.7-0
docker save -o kube-apiserver-v1.27.2.tar          registry.k8s.io/kube-apiserver:v1.27.2
docker save -o kube-controller-manager-v1.27.2.tar registry.k8s.io/kube-controller-manager:v1.27.2
docker save -o kube-proxy-v1.27.2.tar              registry.k8s.io/kube-proxy:v1.27.2
docker save -o kube-scheduler-v1.27.2.tar          registry.k8s.io/kube-scheduler:v1.27.2
docker save -o pause-3.9.tar                       registry.k8s.io/pause:3.9
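
One way to push the tarballs to the nodes (a sketch assuming root SSH access from the download machine to the node IPs from section 1):

for node in 10.17.86.49 10.17.86.48 10.17.86.47; do
    scp ./*.tar root@"$node":/root/
done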

Import the images on the K8s nodes:

ctr -n k8s.io image import coredns-v1.10.1.tar
ctr -n k8s.io image import etcd-3.5.7-0.tar
ctr -n k8s.io image import kube-apiserver-v1.27.2.tar
ctr -n k8s.io image import kube-controller-manager-v1.27.2.tar
ctr -n k8s.io image import kube-proxy-v1.27.2.tar
ctr -n k8s.io image import kube-scheduler-v1.27.2.tar
ctr -n k8s.io image import pause-3.9.tar
[root@vm8648 ~]# ctr -n k8s.io image list
REF                                                                     TYPE                                                 DIGEST                                                                  SIZE      PLATFORMS   LABELS
registry.k8s.io/coredns/coredns:v1.10.1                                 application/vnd.docker.distribution.manifest.v2+json sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378 15.4 MiB  linux/amd64 io.cri-containerd.image=managed
registry.k8s.io/etcd:3.5.7-0                                            application/vnd.docker.distribution.manifest.v2+json sha256:6e1676ae2e54aeeb1b4bdec90a4bd59c3850dca616d20dbb1fa8ea9c01f7c5be 96.9 MiB  linux/amd64 io.cri-containerd.image=managed
registry.k8s.io/kube-apiserver:v1.27.2                                  application/vnd.docker.distribution.manifest.v2+json sha256:71e76a381a92e85c26e58856532f414a6977e66145c92f925049f7c816ace6bc 31.7 MiB  linux/amd64 io.cri-containerd.image=managed
registry.k8s.io/kube-controller-manager:v1.27.2                         application/vnd.docker.distribution.manifest.v2+json sha256:5099b2191b973470df7eee7ebd8b37b605528bcee345d140bf24345bd37caa35 29.4 MiB  linux/amd64 io.cri-containerd.image=managed
registry.k8s.io/kube-proxy:v1.27.2                                      application/vnd.docker.distribution.manifest.v2+json sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83 22.8 MiB  linux/amd64 io.cri-containerd.image=managed
registry.k8s.io/kube-scheduler:v1.27.2                                  application/vnd.docker.distribution.manifest.v2+json sha256:ab7ad4514ca457a14e89291dd8883f37813e54510de12e2428f5ee57b29fe036 17.2 MiB  linux/amd64 io.cri-containerd.image=managed
registry.k8s.io/pause:3.9                                               application/vnd.docker.distribution.manifest.v2+json sha256:0fc1f3b764be56f7c881a69cbd553ae25a2b5523c6901fbacb8270307c29d0c4 311.6 KiB linux/amd64 io.cri-containerd.image=managed
sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681 application/vnd.docker.distribution.manifest.v2+json sha256:6e1676ae2e54aeeb1b4bdec90a4bd59c3850dca616d20dbb1fa8ea9c01f7c5be 96.9 MiB  linux/amd64 io.cri-containerd.image=managed
sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0 application/vnd.docker.distribution.manifest.v2+json sha256:ab7ad4514ca457a14e89291dd8883f37813e54510de12e2428f5ee57b29fe036 17.2 MiB  linux/amd64 io.cri-containerd.image=managed
sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12 application/vnd.docker.distribution.manifest.v2+json sha256:5099b2191b973470df7eee7ebd8b37b605528bcee345d140bf24345bd37caa35 29.4 MiB  linux/amd64 io.cri-containerd.image=managed
sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee application/vnd.docker.distribution.manifest.v2+json sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83 22.8 MiB  linux/amd64 io.cri-containerd.image=managed
sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370 application/vnd.docker.distribution.manifest.v2+json sha256:71e76a381a92e85c26e58856532f414a6977e66145c92f925049f7c816ace6bc 31.7 MiB  linux/amd64 io.cri-containerd.image=managed
sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c application/vnd.docker.distribution.manifest.v2+json sha256:0fc1f3b764be56f7c881a69cbd553ae25a2b5523c6901fbacb8270307c29d0c4 311.6 KiB linux/amd64 io.cri-containerd.image=managed
sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc application/vnd.docker.distribution.manifest.v2+json sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378 15.4 MiB  linux/amd64 io.cri-containerd.image=managed
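
Optionally verify the images from the CRI side as well. The cri-containerd bundle extracted earlier ships a crictl binary (assumed at tmp/usr/local/bin/crictl); point it at the socket configured in the [grpc] section:

cp tmp/usr/local/bin/crictl /usr/local/bin/
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
EOF
crictl images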

9. Initialize the cluster

Run on the first node:

kubeadm init --kubernetes-version=v1.27.2 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=10.17.86.49
[root@vm8649 ~]# ctr -n k8s.io image import scheduler
unpacking registry.k8s.io/kube-scheduler:v1.27.2 (sha256:ab7ad4514ca457a14e89291dd8883f37813e54510de12e2428f5ee57b29fe036)...done
[root@vm8649 ~]# kubeadm init --kubernetes-version=v1.27.2 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=10.17.86.49
[init] Using Kubernetes version: v1.27.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0617 10:57:44.232246   19799 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm8649] and IPs [10.96.0.1 10.17.86.49]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vm8649] and IPs [10.17.86.49 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vm8649] and IPs [10.17.86.49 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0617 10:57:48.187471   19799 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.502183 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vm8649 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vm8649 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: rn6jmw.5rb3lw3rkblzozyc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.17.86.49:6443 --token rn6jmw.5rb3lw3rkblzozyc \
    --discovery-token-ca-cert-hash sha256:6c1f56ddb17caa8056cedf27e1c76517ff72ed9f0ce1f265c4be06d4a4b28335

Configure kubectl access:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
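
Optional: since bash-completion was installed in step 3, kubectl tab completion can be wired up too:

echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc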

On the other nodes, run:

kubeadm join 10.17.86.49:6443 --token rn6jmw.5rb3lw3rkblzozyc \
    --discovery-token-ca-cert-hash sha256:6c1f56ddb17caa8056cedf27e1c76517ff72ed9f0ce1f265c4be06d4a4b28335
[root@vm8648 ~]# kubeadm join 10.17.86.49:6443 --token rn6jmw.5rb3lw3rkblzozyc \
> --discovery-token-ca-cert-hash sha256:6c1f56ddb17caa8056cedf27e1c76517ff72ed9f0ce1f265c4be06d4a4b28335
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


[root@vm8647 ~]# kubeadm join 10.17.86.49:6443 --token rn6jmw.5rb3lw3rkblzozyc \
> --discovery-token-ca-cert-hash sha256:6c1f56ddb17caa8056cedf27e1c76517ff72ed9f0ce1f265c4be06d4a4b28335
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
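
If the token has expired by the time a node joins (bootstrap tokens last 24 hours by default), generate a fresh join command on the control plane:

kubeadm token create --print-join-command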

On the control-plane node, run kubectl get nodes:

[root@vm8649 ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE    VERSION
vm8647   NotReady   <none>          11s    v1.27.2
vm8648   NotReady   <none>          12m    v1.27.2
vm8649   NotReady   control-plane   175m   v1.27.2

On the control-plane node, run kubectl get pod -n kube-system -o wide. The coredns pods stay Pending (and the nodes NotReady) because no pod-network add-on has been deployed yet:

[root@vm8649 ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
coredns-5d78c9869d-bztx7         0/1     Pending   0          175m   <none>        <none>   <none>           <none>
coredns-5d78c9869d-xtdqq         0/1     Pending   0          175m   <none>        <none>   <none>           <none>
etcd-vm8649                      1/1     Running   0          175m   10.17.86.49   vm8649   <none>           <none>
kube-apiserver-vm8649            1/1     Running   0          175m   10.17.86.49   vm8649   <none>           <none>
kube-controller-manager-vm8649   1/1     Running   0          175m   10.17.86.49   vm8649   <none>           <none>
kube-proxy-26rpx                 1/1     Running   0          25s    10.17.86.47   vm8647   <none>           <none>
kube-proxy-kmhjf                 1/1     Running   0          175m   10.17.86.49   vm8649   <none>           <none>
kube-proxy-q9zr6                 1/1     Running   0          12m    10.17.86.48   vm8648   <none>           <none>
kube-scheduler-vm8649            1/1     Running   0          175m   10.17.86.49   vm8649   <none>           <none>

10. Configure networking with Calico

Download from GitHub: https://github.com/projectcalico/calico/releases/download/v3.26.0/release-v3.26.0.tgz

Upload it to every node, and make sure each node's DNS server is not set to 127.0.0.1.
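
One way to set a node's DNS via NetworkManager (a sketch; the connection name ens192 and resolver address 10.17.86.1 are placeholders for your environment):

nmcli connection modify ens192 ipv4.dns "10.17.86.1"   # placeholder connection name and DNS IP
nmcli connection up ens192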

# Run on every node:
tar -xvf release-v3.26.0.tgz
cd release-v3.26.0/images
ctr -n k8s.io image import calico-cni.tar
ctr -n k8s.io image import calico-node.tar
ctr -n k8s.io image import calico-kube-controllers.tar
# On the first node:
cd release-v3.26.0/manifests
kubectl apply -f calico.yaml
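
To watch the rollout until every calico-node pod is ready:

kubectl -n kube-system rollout status daemonset/calico-node
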
[root@vm8649 manifests]# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS        AGE     IP              NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-786b679988-8dvrp   1/1     Running   1 (5m15s ago)   16m     10.224.225.67   vm8647   <none>           <none>
calico-node-4tjgm                          1/1     Running   1 (5m16s ago)   16m     10.17.86.48     vm8648   <none>           <none>
calico-node-9txm9                          1/1     Running   1 (5m15s ago)   16m     10.17.86.47     vm8647   <none>           <none>
calico-node-qvpv9                          1/1     Running   1 (7m7s ago)    16m     10.17.86.49     vm8649   <none>           <none>
coredns-5d78c9869d-bztx7                   1/1     Running   7 (10m ago)     3h41m   10.224.57.194   vm8648   <none>           <none>
coredns-5d78c9869d-xtdqq                   1/1     Running   8 (5m30s ago)   3h41m   10.224.225.68   vm8647   <none>           <none>
etcd-vm8649                                1/1     Running   1 (7m7s ago)    3h42m   10.17.86.49     vm8649   <none>           <none>
kube-apiserver-vm8649                      1/1     Running   1 (7m7s ago)    3h42m   10.17.86.49     vm8649   <none>           <none>
kube-controller-manager-vm8649             1/1     Running   1 (7m7s ago)    3h42m   10.17.86.49     vm8649   <none>           <none>
kube-proxy-26rpx                           1/1     Running   1 (5m15s ago)   47m     10.17.86.47     vm8647   <none>           <none>
kube-proxy-kmhjf                           1/1     Running   1 (7m7s ago)    3h41m   10.17.86.49     vm8649   <none>           <none>
kube-proxy-q9zr6                           1/1     Running   1 (5m16s ago)   59m     10.17.86.48     vm8648   <none>           <none>
kube-scheduler-vm8649                      1/1     Running   1 (7m7s ago)    3h42m   10.17.86.49     vm8649   <none>           <none>
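
With Calico running, the nodes should flip to Ready:

kubectl get nodes    # all three nodes should now report STATUS Ready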

The cluster is now fully deployed.
