When setting up a highly available Kubernetes cluster and joining a second master to the existing control plane, kubeadm join fails with:

failure loading certificate for CA: couldn't load the certificate file

Solution: create the PKI directories on the new master, then copy the shared CA certificates and keys over from the existing master (k8s-master1):

[root@k8s-master2 ~]# mkdir -pv /etc/kubernetes/pki
mkdir: created directory ‘/etc/kubernetes/pki’
[root@k8s-master2 ~]# mkdir -pv /etc/kubernetes/pki/etcd
mkdir: created directory ‘/etc/kubernetes/pki/etcd’

scp -rp /etc/kubernetes/pki/ca.* master02:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/sa.* master02:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/front-proxy-ca.* master02:/etc/kubernetes/pki
scp -rp /etc/kubernetes/pki/etcd/ca.* master02:/etc/kubernetes/pki/etcd
scp -rp /etc/kubernetes/admin.conf master02:/etc/kubernetes
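
Before retrying the join, it is worth confirming on the new master that everything arrived (a quick sanity check, assuming the default kubeadm PKI layout):

[root@k8s-master2 ~]# ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd

The listing should show ca.crt, ca.key, sa.key, sa.pub, front-proxy-ca.crt and front-proxy-ca.key under pki/, plus ca.crt and ca.key under pki/etcd/, matching the scp commands above.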

After copying the CA certificate files, kubeadm join completes without errors on the retry:

[root@k8s-master2 ~]# kubeadm join 192.168.0.100:16443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:8e859b3f8367c4b454f55a8ebff70e08516f9546dc306392e0053a783a78182c --control-plane --cri-socket unix:///var/run/cri-dockerd.sock --v=5
I0309 17:35:46.670097 5499 join.go:406] [preflight] found /etc/kubernetes/admin.conf. Use it for skipping discovery
I0309 17:35:46.670539 5499 join.go:416] [preflight] found NodeName empty; using OS hostname as NodeName
I0309 17:35:46.670546 5499 join.go:420] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress
I0309 17:35:46.670638 5499 interface.go:432] Looking for default routes with IPv4 addresses
I0309 17:35:46.670643 5499 interface.go:437] Default route transits interface "eth0"
I0309 17:35:46.670699 5499 interface.go:209] Interface eth0 is up
I0309 17:35:46.670722 5499 interface.go:257] Interface "eth0" has 2 addresses :[192.168.0.23/24 fe80::f816:3eff:feaf:71e/64].
I0309 17:35:46.670730 5499 interface.go:224] Checking addr 192.168.0.23/24.
I0309 17:35:46.670736 5499 interface.go:231] IP found 192.168.0.23
I0309 17:35:46.670762 5499 interface.go:263] Found valid IPv4 address 192.168.0.23 for interface "eth0".
I0309 17:35:46.670766 5499 interface.go:443] Found active IP 192.168.0.23
[preflight] Running pre-flight checks
I0309 17:35:46.670816 5499 preflight.go:92] [preflight] Running general checks
I0309 17:35:46.670845 5499 checks.go:280] validating the existence of file /etc/kubernetes/kubelet.conf
I0309 17:35:46.670851 5499 checks.go:280] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0309 17:35:46.670858 5499 checks.go:104] validating the container runtime
I0309 17:35:46.695722 5499 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0309 17:35:46.695766 5499 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0309 17:35:46.695785 5499 checks.go:644] validating whether swap is enabled or not
I0309 17:35:46.695806 5499 checks.go:370] validating the presence of executable crictl
I0309 17:35:46.695820 5499 checks.go:370] validating the presence of executable conntrack
I0309 17:35:46.695829 5499 checks.go:370] validating the presence of executable ip
I0309 17:35:46.695836 5499 checks.go:370] validating the presence of executable iptables
I0309 17:35:46.695845 5499 checks.go:370] validating the presence of executable mount
I0309 17:35:46.695854 5499 checks.go:370] validating the presence of executable nsenter
I0309 17:35:46.695862 5499 checks.go:370] validating the presence of executable ebtables
I0309 17:35:46.695869 5499 checks.go:370] validating the presence of executable ethtool
I0309 17:35:46.695877 5499 checks.go:370] validating the presence of executable socat
I0309 17:35:46.695885 5499 checks.go:370] validating the presence of executable tc
I0309 17:35:46.695893 5499 checks.go:370] validating the presence of executable touch
I0309 17:35:46.695903 5499 checks.go:516] running all checks
I0309 17:35:46.700932 5499 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0309 17:35:46.701022 5499 checks.go:610] validating kubelet version
I0309 17:35:46.734400 5499 checks.go:130] validating if the "kubelet" service is enabled and active
I0309 17:35:46.738940 5499 checks.go:203] validating availability of port 10250
I0309 17:35:46.739047 5499 checks.go:430] validating if the connectivity type is via proxy or direct
I0309 17:35:46.739076 5499 join.go:547] [preflight] Fetching init configuration
I0309 17:35:46.739081 5499 join.go:593] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0309 17:35:46.748667 5499 kubelet.go:74] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I0309 17:35:46.750525 5499 interface.go:432] Looking for default routes with IPv4 addresses
I0309 17:35:46.750532 5499 interface.go:437] Default route transits interface "eth0"
I0309 17:35:46.750583 5499 interface.go:209] Interface eth0 is up
I0309 17:35:46.750604 5499 interface.go:257] Interface "eth0" has 2 addresses :[192.168.0.23/24 fe80::f816:3eff:feaf:71e/64].
I0309 17:35:46.750612 5499 interface.go:224] Checking addr 192.168.0.23/24.
I0309 17:35:46.750618 5499 interface.go:231] IP found 192.168.0.23
I0309 17:35:46.750622 5499 interface.go:263] Found valid IPv4 address 192.168.0.23 for interface "eth0".
I0309 17:35:46.750626 5499 interface.go:443] Found active IP 192.168.0.23
I0309 17:35:46.752417 5499 preflight.go:103] [preflight] Running configuration dependant checks
I0309 17:35:46.752478 5499 certs.go:522] validating certificate period for CA certificate
I0309 17:35:46.752641 5499 certs.go:522] validating certificate period for front-proxy CA certificate
I0309 17:35:46.752702 5499 certs.go:522] validating certificate period for etcd CA certificate
[preflight] Running pre-flight checks before initializing the new control plane instance
I0309 17:35:46.752777 5499 checks.go:568] validating Kubernetes and kubeadm version
I0309 17:35:46.752787 5499 checks.go:168] validating if the firewall is enabled and active
I0309 17:35:46.756900 5499 checks.go:203] validating availability of port 6443
I0309 17:35:46.756934 5499 checks.go:203] validating availability of port 10259
I0309 17:35:46.756947 5499 checks.go:203] validating availability of port 10257
I0309 17:35:46.756961 5499 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0309 17:35:46.756972 5499 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0309 17:35:46.756989 5499 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0309 17:35:46.756994 5499 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0309 17:35:46.757000 5499 checks.go:430] validating if the connectivity type is via proxy or direct
I0309 17:35:46.757009 5499 checks.go:469] validating http connectivity to first IP address in the CIDR
I0309 17:35:46.757020 5499 checks.go:469] validating http connectivity to first IP address in the CIDR
I0309 17:35:46.757028 5499 checks.go:203] validating availability of port 2379
I0309 17:35:46.757044 5499 checks.go:203] validating availability of port 2380
I0309 17:35:46.757057 5499 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0309 17:35:46.757109 5499 checks.go:832] using image pull policy: IfNotPresent
I0309 17:35:46.778190 5499 checks.go:849] pulling: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.25.3
I0309 17:36:13.396202 5499 checks.go:849] pulling: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.25.3
I0309 17:36:36.421312 5499 checks.go:849] pulling: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.25.3
I0309 17:36:44.843800 5499 checks.go:849] pulling: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.25.3
I0309 17:36:56.163430 5499 checks.go:849] pulling: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
I0309 17:36:57.306803 5499 checks.go:849] pulling: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0
I0309 17:38:16.540236 5499 checks.go:849] pulling: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
I0309 17:38:21.455582 5499 controlplaneprepare.go:220] [download-certs] Skipping certs download
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0309 17:38:21.455608 5499 certs.go:47] creating PKI assets
I0309 17:38:21.455671 5499 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.23 192.168.0.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0309 17:38:21.617358 5499 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0309 17:38:21.708890 5499 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.0.23 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.0.23 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I0309 17:38:22.545443 5499 certs.go:78] creating new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0309 17:38:22.545597 5499 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
W0309 17:38:22.599179 5499 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0309 17:38:22.737984 5499 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0309 17:38:22.855581 5499 manifests.go:99] [control-plane] getting StaticPodSpecs
I0309 17:38:22.855801 5499 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0309 17:38:22.855817 5499 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0309 17:38:22.855823 5499 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0309 17:38:22.857349 5499 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0309 17:38:22.857360 5499 manifests.go:99] [control-plane] getting StaticPodSpecs
I0309 17:38:22.857487 5499 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0309 17:38:22.857493 5499 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0309 17:38:22.857498 5499 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0309 17:38:22.857502 5499 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0309 17:38:22.857507 5499 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0309 17:38:22.857931 5499 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0309 17:38:22.857940 5499 manifests.go:99] [control-plane] getting StaticPodSpecs
I0309 17:38:22.858047 5499 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0309 17:38:22.858304 5499 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0309 17:38:22.859148 5499 local.go:71] [etcd] Checking etcd cluster health
I0309 17:38:22.859157 5499 local.go:74] creating etcd client that connects to etcd pods
I0309 17:38:22.859166 5499 etcd.go:168] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0309 17:38:22.867726 5499 etcd.go:104] etcd endpoints read from pods: https://192.168.0.126:2379
I0309 17:38:22.873509 5499 etcd.go:224] etcd endpoints read from etcd: https://192.168.0.126:2379
I0309 17:38:22.873517 5499 etcd.go:122] update etcd endpoints: https://192.168.0.126:2379
I0309 17:38:22.882444 5499 kubelet.go:120] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0309 17:38:22.883512 5499 kubelet.go:156] [kubelet-start] Checking for an existing Node in the cluster with name "k8s-master2" and status "Ready"
I0309 17:38:22.885401 5499 kubelet.go:171] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0309 17:38:27.945527 5499 kubelet.go:219] [kubelet-start] preserving the crisocket information for the node
I0309 17:38:27.945546 5499 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/cri-dockerd.sock" to the Node API object "k8s-master2" as an annotation
I0309 17:38:27.945571 5499 cert_rotation.go:137] Starting client certificate rotation controller
I0309 17:38:47.180809 5499 with_retry.go:242] Got a Retry-After 1s response for attempt 1 to https://192.168.0.100:16443/api/v1/nodes/k8s-master2?timeout=10s
I0309 17:38:49.453987 5499 local.go:139] creating etcd client that connects to etcd pods
I0309 17:38:49.454001 5499 etcd.go:168] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0309 17:38:49.456263 5499 etcd.go:104] etcd endpoints read from pods: https://192.168.0.126:2379
I0309 17:38:49.461486 5499 etcd.go:224] etcd endpoints read from etcd: https://192.168.0.126:2379
I0309 17:38:49.461495 5499 etcd.go:122] update etcd endpoints: https://192.168.0.126:2379
I0309 17:38:49.461501 5499 local.go:151] [etcd] Adding etcd member: https://192.168.0.23:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I0309 17:38:49.467213 5499 local.go:157] Updated etcd member list: [{k8s-master1 https://192.168.0.126:2380} {k8s-master2 https://192.168.0.23:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0309 17:38:49.467799 5499 etcd.go:484] [etcd] attempting to see if all cluster endpoints ([https://192.168.0.126:2379 https://192.168.0.23:2379]) are available 1/8
I0309 17:38:51.484359 5499 etcd.go:464] Failed to get etcd status for https://192.168.0.23:2379: failed to dial endpoint https://192.168.0.23:2379 with maintenance client: context deadline exceeded
I0309 17:38:53.596350 5499 etcd.go:464] Failed to get etcd status for https://192.168.0.23:2379: failed to dial endpoint https://192.168.0.23:2379 with maintenance client: context deadline exceeded
I0309 17:38:55.766509 5499 etcd.go:464] Failed to get etcd status for https://192.168.0.23:2379: failed to dial endpoint https://192.168.0.23:2379 with maintenance client: context deadline exceeded
I0309 17:38:58.013875 5499 etcd.go:464] Failed to get etcd status for https://192.168.0.23:2379: failed to dial endpoint https://192.168.0.23:2379 with maintenance client: context deadline exceeded
I0309 17:39:00.374770 5499 etcd.go:464] Failed to get etcd status for https://192.168.0.23:2379: failed to dial endpoint https://192.168.0.23:2379 with maintenance client: context deadline exceeded
I0309 17:39:02.935948 5499 etcd.go:464] Failed to get etcd status for https://192.168.0.23:2379: failed to dial endpoint https://192.168.0.23:2379 with maintenance client: context deadline exceeded
[kubelet-check] Initial timeout of 40s passed.
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
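
Once the kubeconfig is in place, two quick checks confirm the result (a sketch; the pod name etcd-k8s-master1 follows kubeadm's etcd-<nodeName> naming and is assumed here):

[root@k8s-master2 ~]# kubectl get nodes
[root@k8s-master2 ~]# kubectl -n kube-system exec etcd-k8s-master1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list

Both masters should be listed as control-plane nodes, and etcdctl should report the two members seen in the join log (k8s-master1 and k8s-master2). As an alternative to copying the certificates by hand, kubeadm can also distribute them itself: run kubeadm init phase upload-certs --upload-certs on the existing master and pass the printed key to kubeadm join via --certificate-key.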
