api-server pod fails to restart


Error logs

kubelet

```
apiserver.go:42] "Waiting for node sync before watching apiserver pods"
11月 05 19:22:18 k8s.master kubelet[845]: E1105 19:22:18.852473     845 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.191.9.21:6443: connect: connection refused
11月 05 19:22:18 k8s.master kubelet[845]: E1105 19:22:18.852493     845 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s.master&limit=500&resourceVersion=0": dial tcp 10.191.9.21:6443
11月 05 19:22:18 k8s.master kubelet[845]: I1105 19:22:18.871794     845 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="docker" version="24.0.6" apiVersion="1.43.0"
11月 05 19:22:19 k8s.master kubelet[845]: E1105 19:22:19.156476     845 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
11月 05 19:22:19 k8s.master kubelet[845]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
11月 05 19:22:19 k8s.master kubelet[845]: I1105 19:22:19.164642     845 server.go:1190] "Started kubelet"
11月 05 19:22:19 k8s.master kubelet[845]: E1105 19:22:19.165838     845 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
11月 05 19:22:19 k8s.master kubelet[845]: I1105 19:22:19.168787     845 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
11月 05 19:22:19 k8s.master kubelet[845]: I1105 19:22:19.169208     845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
11月 05 19:22:19 k8s.master kubelet[845]: I1105 19:22:19.171173     845 volume_manager.go:279] "Starting Kubelet Volume Manager"
11月 05 19:22:19 k8s.master kubelet[845]: I1105 19:22:19.171983     845 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
11月 05 19:22:19 k8s.master kubelet[845]: E1105 19:22:19.175230     845 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s.master.18050ea64a1af988", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTim
11月 05 19:22:19 k8s.master kubelet[845]: E1105 19:22:19.176214     845 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s.master?timeout=10s": dial tcp 10.191.9.21:6443: connect: connection refused
11月 05 19:22:19 k8s.master kubelet[845]: E1105 19:22:19.175392     845 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://lb.kubesphere.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.191.9.21:6443: connect:
11月 05 19:22:19 k8s.master kubelet[845]: I1105 19:22:19.210838     845 server.go:409] "Adding debug handlers to kubelet server"
```

api-server pod

```
[root@k8s manifests]# docker logs k8s_POD_kube-apiserver-k8s.master_kube-system_2f3af35c125801725e924d2739d691fd_6 -f
Shutting down, got signal: Terminated
Shutting down, got signal: Terminated
```

Missing network interfaces

Before the cluster was recovered, none of the network interfaces shown in the screenshot had come up properly.

(screenshot: image-20241106081520177, network interfaces not yet up)

Root cause analysis

The kubelet logs show that it cannot connect to the api-server: every request to lb.kubesphere.local:6443 fails with "connection refused".
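A quick way to confirm this kind of failure is to probe the control-plane endpoint directly. A minimal sketch, using the host and port taken from the logs above (substitute your own cluster's endpoint):

```shell
# Probe the control-plane endpoint the kubelet is failing to reach.
# Host and port come from the kubelet log lines above.
endpoint_host=lb.kubesphere.local
endpoint_port=6443

# /dev/tcp is a bash built-in pseudo-device: opening it attempts a TCP connect.
if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${endpoint_host}/${endpoint_port}" 2>/dev/null; then
  result="apiserver port reachable"
else
  result="apiserver port unreachable"
fi
echo "$result"
```

"connection refused" (as opposed to a timeout) means the host is up but nothing is listening on 6443, which points at the api-server process itself rather than the network path.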

The api-server pod logs show that the container received a termination signal (SIGTERM), which is why the service failed to come up.
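The "got signal: Terminated" message corresponds to SIGTERM. A small illustration of what that looks like at the process level: a process killed by SIGTERM exits with status 128 + 15 = 143, which is the same exit code Kubernetes reports for terminated containers.

```shell
# Start a long-running process in the background, then terminate it.
sleep 30 &
pid=$!

kill -TERM "$pid"   # deliver SIGTERM, signal number 15
wait "$pid"         # reap the process; wait's status reflects the signal
code=$?

echo "exit status: $code"   # 128 + 15 = 143 for a SIGTERM death
```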

My machine had also frozen, and I forced a power-off before rebooting it, so my conclusion is that the forced shutdown is what left the pod unable to restart normally.

Solution:

Use KubeKey to delete the entire cluster, then reinstall it from scratch.
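A sketch of the rebuild with the KubeKey CLI, assuming the cluster was originally created from a KubeKey configuration file (`config-sample.yaml` here is a placeholder name; use the file your cluster was built from). Note that `kk delete cluster` is destructive and wipes the existing cluster:

```shell
# Tear down the broken cluster (destructive: removes all cluster state).
./kk delete cluster -f config-sample.yaml

# Recreate the cluster from the same configuration file.
./kk create cluster -f config-sample.yaml
```

See the two linked guides below for the full installation steps.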

Installing a K8s cluster with KubeKey

Installing KubeSphere on K8s

posted @ 2024-11-06 08:19 菜阿