Resetting a k8s cluster (back up the primary master node first)

1. Run the reset on every master and node

swapoff -a                  # turn off swap

# reset the node to its pre-kubeadm state
kubeadm reset

# reload systemd units and restart the kubelet service
systemctl daemon-reload
systemctl restart kubelet

iptables -F                 # flush iptables rules

rm -rf $HOME/.kube          # remove the old kubeconfig directory

To check for errors during the installation, inspect the kubelet logs:

journalctl -xeu kubelet
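
Note that swapoff -a only disables swap until the next reboot; to keep it off permanently you would also comment the swap entry out of /etc/fstab. A minimal sketch, assuming the stock fstab layout:

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every swap entry so swap stays disabled after a reboot

Also keep in mind that kubeadm reset does not flush iptables rules, which is why iptables -F is run above; if a CNI plugin such as flannel had already been installed, removing /etc/cni/net.d (the usual CNI config directory, assumed here) is a common extra cleanup step.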

 

2. Create the cluster on the master nodes

master1

kubeadm init --apiserver-advertise-address=192.168.43.100 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.17.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 
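
The log below reports "Using Kubernetes version: v1.17.0" rather than v1.17.4, and the join commands it prints point at 192.168.43.200:7443 (with controlPlaneEndpoint warnings), so the init that actually produced this output was almost certainly run against a shared control-plane endpoint (a VIP or load balancer at 192.168.43.200:7443). A hedged sketch of such an init, with the endpoint taken from the log and the remaining flags from the command above (add --upload-certs if you want kubeadm to distribute the control-plane certificates instead of copying them by hand):

kubeadm init \
  --control-plane-endpoint=192.168.43.200:7443 \
  --apiserver-advertise-address=192.168.43.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.17.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16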

 

The log output follows:

master1

W0208 15:31:43.911132 13048 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0208 15:31:43.911189 13048 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.43.100 192.168.43.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.43.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.43.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0208 15:31:49.057218 13048 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0208 15:31:49.058180 13048 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.537188 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.43.200:7443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:8ae071e7e8252901844ce34928d415b10ebfd01daafc2a07329eae6ea69689c7 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.43.200:7443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:8ae071e7e8252901844ce34928d415b10ebfd01daafc2a07329eae6ea69689c7
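
Because --upload-certs was not used ("[upload-certs] Skipping phase" above), the shared control-plane certificates had to be placed on master2 and master3 by hand before running the --control-plane join; their logs below confirm this ("[download-certs] Skipping certs download", "Using the existing \"sa\" key", and an admin.conf already present). A rough sketch of that copy step, assuming the default kubeadm paths and root SSH access to master2 (repeat for master3 at 192.168.43.102):

scp /etc/kubernetes/admin.conf root@192.168.43.101:/etc/kubernetes/
scp /etc/kubernetes/pki/ca.{crt,key} /etc/kubernetes/pki/sa.{pub,key} \
    /etc/kubernetes/pki/front-proxy-ca.{crt,key} root@192.168.43.101:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.{crt,key} root@192.168.43.101:/etc/kubernetes/pki/etcd/   # /etc/kubernetes/pki/etcd must already exist on the target

The verbose I.../W... lines in the master2 and master3 logs suggest their joins were run with an extra verbosity flag (e.g. -v=5, an assumption); the command itself is the --control-plane join printed above.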

 

master2

I0208 02:52:27.091907 83088 join.go:357] [preflight] found /etc/kubernetes/admin.conf. Use it for skipping discovery
I0208 02:52:27.093038 83088 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I0208 02:52:27.093051 83088 join.go:375] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress
I0208 02:52:27.096736 83088 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
I0208 02:52:27.099312 83088 interface.go:400] Looking for default routes with IPv4 addresses
I0208 02:52:27.099325 83088 interface.go:405] Default route transits interface "ens33"
I0208 02:52:27.100010 83088 interface.go:208] Interface ens33 is up
I0208 02:52:27.100054 83088 interface.go:256] Interface "ens33" has 2 addresses :[192.168.43.101/24 fe80::20c:29ff:feb3:3d66/64].
I0208 02:52:27.100084 83088 interface.go:223] Checking addr 192.168.43.101/24.
I0208 02:52:27.100091 83088 interface.go:230] IP found 192.168.43.101
I0208 02:52:27.100100 83088 interface.go:262] Found valid IPv4 address 192.168.43.101 for interface "ens33".
I0208 02:52:27.100105 83088 interface.go:411] Found active IP 192.168.43.101
[preflight] Running pre-flight checks
I0208 02:52:27.100188 83088 preflight.go:90] [preflight] Running general checks
I0208 02:52:27.100253 83088 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0208 02:52:27.100302 83088 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0208 02:52:27.100311 83088 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0208 02:52:27.100318 83088 checks.go:102] validating the container runtime
I0208 02:52:27.197733 83088 checks.go:128] validating if the service is enabled and active
I0208 02:52:27.302021 83088 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0208 02:52:27.302069 83088 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0208 02:52:27.302090 83088 checks.go:649] validating whether swap is enabled or not
I0208 02:52:27.302117 83088 checks.go:376] validating the presence of executable ip
I0208 02:52:27.303412 83088 checks.go:376] validating the presence of executable iptables
I0208 02:52:27.303434 83088 checks.go:376] validating the presence of executable mount
I0208 02:52:27.303469 83088 checks.go:376] validating the presence of executable nsenter
I0208 02:52:27.303860 83088 checks.go:376] validating the presence of executable ebtables
I0208 02:52:27.305201 83088 checks.go:376] validating the presence of executable ethtool
I0208 02:52:27.305686 83088 checks.go:376] validating the presence of executable socat
I0208 02:52:27.305721 83088 checks.go:376] validating the presence of executable tc
I0208 02:52:27.305742 83088 checks.go:376] validating the presence of executable touch
I0208 02:52:27.306154 83088 checks.go:520] running all checks
I0208 02:52:27.392301 83088 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0208 02:52:27.392602 83088 checks.go:618] validating kubelet version
I0208 02:52:27.470303 83088 checks.go:128] validating if the service is enabled and active
I0208 02:52:27.480657 83088 checks.go:201] validating availability of port 10250
I0208 02:52:27.480837 83088 checks.go:432] validating if the connectivity type is via proxy or direct
I0208 02:52:27.480880 83088 join.go:455] [preflight] Fetching init configuration
I0208 02:52:27.480886 83088 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0208 02:52:27.502258 83088 interface.go:400] Looking for default routes with IPv4 addresses
I0208 02:52:27.502283 83088 interface.go:405] Default route transits interface "ens33"
I0208 02:52:27.503136 83088 interface.go:208] Interface ens33 is up
I0208 02:52:27.503188 83088 interface.go:256] Interface "ens33" has 2 addresses :[192.168.43.101/24 fe80::20c:29ff:feb3:3d66/64].
I0208 02:52:27.503205 83088 interface.go:223] Checking addr 192.168.43.101/24.
I0208 02:52:27.503212 83088 interface.go:230] IP found 192.168.43.101
I0208 02:52:27.503218 83088 interface.go:262] Found valid IPv4 address 192.168.43.101 for interface "ens33".
I0208 02:52:27.503223 83088 interface.go:411] Found active IP 192.168.43.101
I0208 02:52:27.503269 83088 preflight.go:101] [preflight] Running configuration dependant checks
[preflight] Running pre-flight checks before initializing the new control plane instance
I0208 02:52:27.503868 83088 checks.go:577] validating Kubernetes and kubeadm version
I0208 02:52:27.503885 83088 checks.go:166] validating if the firewall is enabled and active
I0208 02:52:27.517097 83088 checks.go:201] validating availability of port 6443
I0208 02:52:27.517176 83088 checks.go:201] validating availability of port 10259
I0208 02:52:27.517201 83088 checks.go:201] validating availability of port 10257
I0208 02:52:27.517227 83088 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0208 02:52:27.517240 83088 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0208 02:52:27.517246 83088 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0208 02:52:27.517252 83088 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0208 02:52:27.517259 83088 checks.go:432] validating if the connectivity type is via proxy or direct
I0208 02:52:27.517366 83088 checks.go:471] validating http connectivity to first IP address in the CIDR
I0208 02:52:27.517383 83088 checks.go:471] validating http connectivity to first IP address in the CIDR
I0208 02:52:27.517389 83088 checks.go:201] validating availability of port 2379
I0208 02:52:27.517436 83088 checks.go:201] validating availability of port 2380
I0208 02:52:27.517468 83088 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0208 02:52:27.588691 83088 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.0
I0208 02:52:27.653453 83088 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.17.0
I0208 02:52:27.718074 83088 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.17.0
I0208 02:52:27.801390 83088 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0
I0208 02:52:27.880141 83088 checks.go:838] image exists: registry.aliyuncs.com/google_containers/pause:3.1
I0208 02:52:27.953927 83088 checks.go:838] image exists: registry.aliyuncs.com/google_containers/etcd:3.4.3-0
I0208 02:52:28.020082 83088 checks.go:838] image exists: registry.aliyuncs.com/google_containers/coredns:1.6.5
I0208 02:52:28.020116 83088 controlplaneprepare.go:211] [download-certs] Skipping certs download
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0208 02:52:28.020137 83088 certs.go:39] creating PKI assets
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master2 localhost] and IPs [192.168.43.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master2 localhost] and IPs [192.168.43.101 127.0.0.1 ::1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.43.101 192.168.43.200]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I0208 02:52:29.742631 83088 certs.go:70] creating a new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0208 02:52:30.559861 83088 manifests.go:90] [control-plane] getting StaticPodSpecs
W0208 02:52:30.561119 83088 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0208 02:52:30.567688 83088 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0208 02:52:30.567705 83088 manifests.go:90] [control-plane] getting StaticPodSpecs
W0208 02:52:30.567770 83088 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0208 02:52:30.569219 83088 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0208 02:52:30.569236 83088 manifests.go:90] [control-plane] getting StaticPodSpecs
W0208 02:52:30.569281 83088 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0208 02:52:30.569796 83088 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0208 02:52:30.570817 83088 local.go:75] [etcd] Checking etcd cluster health
I0208 02:52:30.570830 83088 local.go:78] creating etcd client that connects to etcd pods
I0208 02:52:30.573901 83088 etcd.go:107] etcd endpoints read from pods: https://192.168.43.100:2379
I0208 02:52:30.595428 83088 etcd.go:166] etcd endpoints read from etcd: https://192.168.43.100:2379
I0208 02:52:30.595465 83088 etcd.go:125] update etcd endpoints: https://192.168.43.100:2379
I0208 02:52:30.618028 83088 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0208 02:52:30.619642 83088 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0208 02:52:31.809602 83088 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node
I0208 02:52:31.809629 83088 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master2" as an annotation
I0208 02:52:52.324342 83088 local.go:127] creating etcd client that connects to etcd pods
I0208 02:52:52.330861 83088 etcd.go:107] etcd endpoints read from pods: https://192.168.43.100:2379
I0208 02:52:52.351963 83088 etcd.go:166] etcd endpoints read from etcd: https://192.168.43.100:2379
I0208 02:52:52.351994 83088 etcd.go:125] update etcd endpoints: https://192.168.43.100:2379
I0208 02:52:52.352002 83088 local.go:136] Adding etcd member: https://192.168.43.101:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I0208 02:52:52.389656 83088 local.go:142] Updated etcd member list: [{master2 https://192.168.43.101:2380} {master1 https://192.168.43.100:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0208 02:52:52.391625 83088 etcd.go:408] [etcd] attempting to see if all cluster endpoints ([https://192.168.43.100:2379 https://192.168.43.101:2379]) are available 1/8
{"level":"warn","ts":"2023-02-08T02:53:08.914-0500","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.43.101:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0208 02:53:08.914200 83088 etcd.go:388] Failed to get etcd status for https://192.168.43.101:2379: context deadline exceeded
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

 

 

master3

I0208 02:55:34.585198 39275 join.go:357] [preflight] found /etc/kubernetes/admin.conf. Use it for skipping discovery
I0208 02:55:34.586266 39275 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I0208 02:55:34.586278 39275 join.go:375] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress
I0208 02:55:34.590256 39275 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
I0208 02:55:34.590669 39275 interface.go:400] Looking for default routes with IPv4 addresses
I0208 02:55:34.590679 39275 interface.go:405] Default route transits interface "ens33"
I0208 02:55:34.591147 39275 interface.go:208] Interface ens33 is up
I0208 02:55:34.591299 39275 interface.go:256] Interface "ens33" has 2 addresses :[192.168.43.102/24 fe80::20c:29ff:fe4d:c036/64].
I0208 02:55:34.591335 39275 interface.go:223] Checking addr 192.168.43.102/24.
I0208 02:55:34.591342 39275 interface.go:230] IP found 192.168.43.102
I0208 02:55:34.591355 39275 interface.go:262] Found valid IPv4 address 192.168.43.102 for interface "ens33".
I0208 02:55:34.591361 39275 interface.go:411] Found active IP 192.168.43.102
[preflight] Running pre-flight checks
I0208 02:55:34.591450 39275 preflight.go:90] [preflight] Running general checks
I0208 02:55:34.591500 39275 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0208 02:55:34.591553 39275 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0208 02:55:34.591567 39275 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0208 02:55:34.591576 39275 checks.go:102] validating the container runtime
I0208 02:55:34.747034 39275 checks.go:128] validating if the service is enabled and active
I0208 02:55:34.883067 39275 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0208 02:55:34.883135 39275 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0208 02:55:34.883161 39275 checks.go:649] validating whether swap is enabled or not
I0208 02:55:34.883201 39275 checks.go:376] validating the presence of executable ip
I0208 02:55:34.883236 39275 checks.go:376] validating the presence of executable iptables
I0208 02:55:34.883255 39275 checks.go:376] validating the presence of executable mount
I0208 02:55:34.883266 39275 checks.go:376] validating the presence of executable nsenter
I0208 02:55:34.883275 39275 checks.go:376] validating the presence of executable ebtables
I0208 02:55:34.883284 39275 checks.go:376] validating the presence of executable ethtool
I0208 02:55:34.883291 39275 checks.go:376] validating the presence of executable socat
I0208 02:55:34.883301 39275 checks.go:376] validating the presence of executable tc
I0208 02:55:34.883309 39275 checks.go:376] validating the presence of executable touch
I0208 02:55:34.883388 39275 checks.go:520] running all checks
I0208 02:55:34.993643 39275 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0208 02:55:34.993966 39275 checks.go:618] validating kubelet version
I0208 02:55:35.069842 39275 checks.go:128] validating if the service is enabled and active
I0208 02:55:35.078533 39275 checks.go:201] validating availability of port 10250
I0208 02:55:35.080281 39275 checks.go:432] validating if the connectivity type is via proxy or direct
I0208 02:55:35.080352 39275 join.go:455] [preflight] Fetching init configuration
I0208 02:55:35.080360 39275 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0208 02:55:35.129272 39275 interface.go:400] Looking for default routes with IPv4 addresses
I0208 02:55:35.129294 39275 interface.go:405] Default route transits interface "ens33"
I0208 02:55:35.129881 39275 interface.go:208] Interface ens33 is up
I0208 02:55:35.129927 39275 interface.go:256] Interface "ens33" has 2 addresses :[192.168.43.102/24 fe80::20c:29ff:fe4d:c036/64].
I0208 02:55:35.129942 39275 interface.go:223] Checking addr 192.168.43.102/24.
I0208 02:55:35.129949 39275 interface.go:230] IP found 192.168.43.102
I0208 02:55:35.129954 39275 interface.go:262] Found valid IPv4 address 192.168.43.102 for interface "ens33".
I0208 02:55:35.129959 39275 interface.go:411] Found active IP 192.168.43.102
I0208 02:55:35.130001 39275 preflight.go:101] [preflight] Running configuration dependant checks
[preflight] Running pre-flight checks before initializing the new control plane instance
I0208 02:55:35.130588 39275 checks.go:577] validating Kubernetes and kubeadm version
I0208 02:55:35.130607 39275 checks.go:166] validating if the firewall is enabled and active
I0208 02:55:35.139050 39275 checks.go:201] validating availability of port 6443
I0208 02:55:35.139131 39275 checks.go:201] validating availability of port 10259
I0208 02:55:35.139254 39275 checks.go:201] validating availability of port 10257
I0208 02:55:35.139279 39275 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0208 02:55:35.139298 39275 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0208 02:55:35.139311 39275 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0208 02:55:35.139317 39275 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0208 02:55:35.139324 39275 checks.go:432] validating if the connectivity type is via proxy or direct
I0208 02:55:35.139343 39275 checks.go:471] validating http connectivity to first IP address in the CIDR
I0208 02:55:35.139352 39275 checks.go:471] validating http connectivity to first IP address in the CIDR
I0208 02:55:35.139356 39275 checks.go:201] validating availability of port 2379
I0208 02:55:35.139375 39275 checks.go:201] validating availability of port 2380
I0208 02:55:35.139392 39275 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0208 02:55:35.212275 39275 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.0
I0208 02:55:35.282266 39275 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.17.0
I0208 02:55:35.368581 39275 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.17.0
I0208 02:55:35.435968 39275 checks.go:838] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.17.0
I0208 02:55:35.511278 39275 checks.go:838] image exists: registry.aliyuncs.com/google_containers/pause:3.1
I0208 02:55:35.582165 39275 checks.go:838] image exists: registry.aliyuncs.com/google_containers/etcd:3.4.3-0
I0208 02:55:35.647562 39275 checks.go:838] image exists: registry.aliyuncs.com/google_containers/coredns:1.6.5
I0208 02:55:35.647600 39275 controlplaneprepare.go:211] [download-certs] Skipping certs download
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0208 02:55:35.647634 39275 certs.go:39] creating PKI assets
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master3 localhost] and IPs [192.168.43.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master3 localhost] and IPs [192.168.43.102 127.0.0.1 ::1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.43.102 192.168.43.200]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I0208 02:55:37.681589 39275 certs.go:70] creating a new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0208 02:55:38.630966 39275 manifests.go:90] [control-plane] getting StaticPodSpecs
W0208 02:55:38.631127 39275 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0208 02:55:38.636639 39275 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0208 02:55:38.636658 39275 manifests.go:90] [control-plane] getting StaticPodSpecs
W0208 02:55:38.636715 39275 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0208 02:55:38.637436 39275 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0208 02:55:38.637448 39275 manifests.go:90] [control-plane] getting StaticPodSpecs
W0208 02:55:38.637484 39275 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0208 02:55:38.638763 39275 manifests.go:115] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0208 02:55:38.639932 39275 local.go:75] [etcd] Checking etcd cluster health
I0208 02:55:38.639944 39275 local.go:78] creating etcd client that connects to etcd pods
I0208 02:55:38.646363 39275 etcd.go:107] etcd endpoints read from pods: https://192.168.43.100:2379,https://192.168.43.101:2379
I0208 02:55:38.657399 39275 etcd.go:166] etcd endpoints read from etcd: https://192.168.43.101:2379,https://192.168.43.100:2379
I0208 02:55:38.657431 39275 etcd.go:125] update etcd endpoints: https://192.168.43.101:2379,https://192.168.43.100:2379
I0208 02:55:38.697589 39275 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0208 02:55:38.699096 39275 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0208 02:55:39.937734 39275 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node
I0208 02:55:39.937758 39275 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master3" as an annotation
I0208 02:56:00.469020 39275 local.go:127] creating etcd client that connects to etcd pods
I0208 02:56:00.481185 39275 etcd.go:107] etcd endpoints read from pods: https://192.168.43.100:2379,https://192.168.43.101:2379
I0208 02:56:00.505720 39275 etcd.go:166] etcd endpoints read from etcd: https://192.168.43.101:2379,https://192.168.43.100:2379
I0208 02:56:00.505747 39275 etcd.go:125] update etcd endpoints: https://192.168.43.101:2379,https://192.168.43.100:2379
I0208 02:56:00.505755 39275 local.go:136] Adding etcd member: https://192.168.43.102:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I0208 02:56:00.567804 39275 local.go:142] Updated etcd member list: [{master2 https://192.168.43.101:2380} {master3 https://192.168.43.102:2380} {master1 https://192.168.43.100:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0208 02:56:00.568564 39275 etcd.go:408] [etcd] attempting to see if all cluster endpoints ([https://192.168.43.101:2379 https://192.168.43.100:2379 https://192.168.43.102:2379]) are available 1/8
{"level":"warn","ts":"2023-02-08T02:56:08.612-0500","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.43.102:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I0208 02:56:08.615537 39275 etcd.go:388] Failed to get etcd status for https://192.168.43.102:2379: context deadline exceeded
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master3 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

 

node1

W0208 16:02:58.047534 22552 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

 

node2

W0208 16:03:05.444472 22336 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

 

3. Install the network add-on on the master

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml
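
The --pod-network-cidr=10.244.0.0/16 passed to kubeadm init matches flannel's default Network setting, so the manifest can be applied unmodified. Once the flannel DaemonSet is running, the nodes should move from NotReady to Ready; a quick sanity check (kubectl assumed to be configured via the admin.conf set up earlier):

kubectl get pods -A -o wide | grep -E 'flannel|coredns'   # flannel and coredns pods should reach Running
kubectl get nodes                                          # all masters and workers should report Ready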
