k8s cluster failure 2: nodes stuck in NotReady state

After deploying all the Kubernetes nodes according to the tutorial, listing the nodes shows that only the machine acting as both master and node is in the expected state; the pure worker nodes are NotReady:

[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      <none>   7d22h   v1.18.6
k8s1         NotReady   <none>   7d21h   v1.18.6
k8s2         NotReady   <none>   7d21h   v1.18.6
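Before logging in to the workers, the cause is often visible from the master itself; kubectl describe shows the node's conditions (how much detail appears depends on the cluster):

[root@k8s-master ~]# kubectl describe node k8s1
## look at the Conditions section: the Ready condition is False and its message names the reason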

Check the kubelet logs on the worker node:

Jul 31 17:22:43 k8s1 kubelet[29033]: E0731 17:22:43.663415   29033 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 31 17:22:48 k8s1 kubelet[29033]: E0731 17:22:48.674388   29033 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 31 17:22:53 k8s1 kubelet[29033]: E0731 17:22:53.689125   29033 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
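The entries above come from the kubelet log; assuming kubelet runs as a systemd unit on the node, they can be followed live with:

[root@k8s1 ~]# journalctl -u kubelet -f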

This points to a network plugin problem. After several rounds of troubleshooting, the cause turned out to be that the node components had been copied directly from the k8s-master host to the other machines, so two places in the copied configuration files still held the old values and must be changed:

[root@k8s1 ~]# vim /opt/kubernetes/cfg/kube-proxy-config.yml

hostnameOverride: k8s1     ## this node's hostname

[root@k8s1 ~]# vim /opt/kubernetes/cfg/kubelet.conf

--hostname-override=k8s1     ## must also match this node's hostname
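Since the files were copied over from the master, it is worth confirming what the overrides still contain and fixing both in one pass; the old value k8s-master in the sed command is only an assumption, substitute whatever is actually in the files:

[root@k8s1 ~]# grep -i hostname /opt/kubernetes/cfg/kube-proxy-config.yml /opt/kubernetes/cfg/kubelet.conf
[root@k8s1 ~]# sed -i 's/k8s-master/k8s1/g' /opt/kubernetes/cfg/kube-proxy-config.yml /opt/kubernetes/cfg/kubelet.conf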

Then delete the node's old kubeconfig and kubelet certificate files so that it can re-register:

[root@k8s1 ~]# rm /opt/kubernetes/cfg/kubelet.kubeconfig

[root@k8s1 ~]# rm -f /opt/kubernetes/ssl/kubelet*

Finally, restart the server so the new configuration takes effect.
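In practice, restarting just the two node services is usually enough; the unit names and the TLS-bootstrap approval step below are assumptions based on a typical binary installation:

[root@k8s1 ~]# systemctl restart kubelet kube-proxy
## with TLS bootstrapping, the node re-registers via a new CSR that must be approved on the master
[root@k8s-master ~]# kubectl get csr
[root@k8s-master ~]# kubectl certificate approve <csr-name>
[root@k8s-master ~]# kubectl get node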


posted @ 2020-07-31 18:27  树运维