Removing a node from a Kubernetes cluster and adding it back

Removing the node

First mark the node unschedulable and evict the pods running on it.

# Cordon the node, then evict its pods
$ kubectl drain centos7909 --delete-emptydir-data --force --ignore-daemonsets
node/centos7909 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-hnmmg, kube-system/kube-proxy-87gpm
node/centos7909 drained

# Check node status
$ kubectl get nodes
NAME         STATUS                     ROLES           AGE     VERSION
centos7906   Ready                      control-plane   28m     v1.27.4
centos7909   Ready,SchedulingDisabled   <none>          8m39s   v1.27.4
centos7910   Ready                      <none>          8m30s   v1.27.4

# The node is now marked SchedulingDisabled
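After a drain, only DaemonSet-managed pods should remain on the node. A small sketch to verify this, assuming kubectl access to the cluster; the helper name count_non_daemonset is made up for this example:

```shell
# Counts pods NOT owned by a DaemonSet, given one owner kind per line on
# stdin; 0 means the drain fully evacuated the node.
# (Sketch; the helper name is made up for this example.)
count_non_daemonset() {
  grep -cv 'DaemonSet$' || true
}

# Example usage against the cluster (node name from the example above):
#   kubectl get pods -A --field-selector spec.nodeName=centos7909 \
#     -o jsonpath='{range .items[*]}{.metadata.ownerReferences[0].kind}{"\n"}{end}' \
#     | count_non_daemonset
```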

Delete the node object

$ kubectl delete node centos7909
node "centos7909" deleted

# Check again; the node is no longer listed
$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
centos7906   Ready    control-plane,master   65m   v1.27.4
centos7910   Ready    <none>                 19m   v1.27.4

At this point the node has been removed from the cluster.
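The two removal steps (drain, then delete) can be combined into one small helper; a sketch assuming admin kubectl access, with remove_node being a made-up name for this example:

```shell
# Drain a node and, only if the drain succeeds, delete it from the cluster.
# (Sketch; remove_node is a made-up helper name for this example.)
remove_node() {
  local node="$1"
  kubectl drain "$node" --delete-emptydir-data --force --ignore-daemonsets \
    && kubectl delete node "$node"
}

# Usage:
#   remove_node centos7909
```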

What if you later want the old node to rejoin the cluster?

Stop kubelet

(Run on the node that is to be re-added.)

systemctl stop kubelet

Reset

Under normal circumstances:

kubeadm reset

[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0728 15:54:54.944853   10725 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

If you are on v1.24+ and still using Docker (via cri-dockerd):

kubeadm reset --cri-socket unix:///run/cri-dockerd.sock
# the Docker CRI socket must be specified explicitly


(The output is identical to that of the plain kubeadm reset above.)

Clean up old files

Following the hints in the reset output, remove the leftover files and clear the old network rules:

rm -rf /etc/cni/net.d

Clean up network rules

If using iptables:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If using IPVS:

ipvsadm --clear

If the ipvsadm command is missing, install it:

yum install ipvsadm -y
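The cleanup steps above can be gathered into one script. This is a sketch: it is destructive, must run as root on the node being reset, and the DRY_RUN switch and cleanup_node name are made up for this example:

```shell
# Remove leftover CNI config and flush iptables/IPVS state after a reset.
# DRY_RUN=1 only prints the commands instead of executing them
# (a made-up convenience switch for this sketch).
cleanup_node() {
  local run="eval"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    run="echo"
  fi
  $run "rm -rf /etc/cni/net.d"
  $run "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
  if command -v ipvsadm >/dev/null 2>&1; then
    $run "ipvsadm --clear"
  fi
  return 0
}

# Usage (preview first, then run for real as root):
#   DRY_RUN=1 cleanup_node
#   cleanup_node
```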

Get the token and CA cert hash

Run on the master node. Note that --ttl 0 creates a token that never expires; omit the flag to get the default 24-hour lifetime.

kubeadm token create --ttl 0 --print-join-command
kubeadm join 10.0.0.3:6443 --token vxzs52.lx25mhodqlimerqg --discovery-token-ca-cert-hash sha256:ed83dba16c0cbe603ff7a7f4fc4f12f81e922739505e80f81f45edc96b92caed
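If you only need the hash (say the original join command was lost), it can also be recomputed from the cluster CA certificate with the standard openssl pipeline from the kubeadm docs. A sketch: the ca_cert_hash wrapper name is made up here, and the path shown is kubeadm's default CA location:

```shell
# Recompute the --discovery-token-ca-cert-hash value from the CA cert.
# Default kubeadm CA path is /etc/kubernetes/pki/ca.crt; works for RSA keys.
# (Sketch; ca_cert_hash is a made-up wrapper name.)
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# Usage on the master:
#   echo "sha256:$(ca_cert_hash)"
```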

Join the cluster

Run on the node to be added.

Under normal circumstances:

kubeadm join 10.0.0.3:6443 --token vxzs52.lx25mhodqlimerqg --discovery-token-ca-cert-hash sha256:ed83dba16c0cbe603ff7a7f4fc4f12f81e922739505e80f81f45edc96b92caed

If you are on v1.24+ and still using Docker:

kubeadm join 10.0.0.3:6443 --token vxzs52.lx25mhodqlimerqg --discovery-token-ca-cert-hash sha256:ed83dba16c0cbe603ff7a7f4fc4f12f81e922739505e80f81f45edc96b92caed  --cri-socket unix:///run/cri-dockerd.sock

The node has now rejoined the cluster.
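Back on the master you can confirm the node has come back and gone Ready. A small sketch; the node_ready helper name is made up for this example:

```shell
# Succeeds (exit 0) if the named node reports STATUS "Ready" in
# "kubectl get nodes --no-headers" output read from stdin.
# (Sketch; node_ready is a made-up helper name.)
node_ready() {
  awk -v n="$1" '$1 == n && $2 == "Ready" { found = 1 } END { exit !found }'
}

# Usage:
#   kubectl get nodes --no-headers | node_ready centos7909 && echo rejoined
```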

posted @ 2023-07-24 21:53  厚礼蝎