k8s: adding masters (remove m1 and m2, add m4 and m5)
kubeadm-built k8s, 3-master HA with keepalived + haproxy
In this 3+2-node layout, m1 and m2 run on the 8 GB machine and m3, n1, n2 on the 16 GB one, so both machines have to be powered on every time. The plan is a master migration: kick m1 and m2 out of the cluster and add m4 and m5 on the 16 GB machine. (192.168.1.123:6443 in the join commands below appears to be the keepalived VIP in front of the apiservers.)
First: remove m2, then add m4.
Clone m3 and change its IP to 192.168.1.221 to act as m4 (a rename sketch follows the host list below).
192.168.1.222 k8s-master11 m1
192.168.1.223 k8s-master12 m2
192.168.1.224 k8s-master13 m3
192.168.1.221 k8s-master14 m4
192.168.1.225 k8s-node01 n1
192.168.1.226 k8s-node02 n2
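Renaming the clone follows the same pattern used later for m2; a minimal sketch, assuming the NIC is ens33 and the clone still carries m3's address 192.168.1.224:

hostnamectl set-hostname k8s-master14
# swap the inherited IP for m4's; the old value is assumed to be m3's
sed -i s#192.168.1.224#192.168.1.221#g /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network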
On m4: reset
kubeadm reset   # answer y to confirm the teardown
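kubeadm reset does not clean everything; a minimal cleanup sketch for the leftovers, using the conventional default paths (verify them on your node before deleting):

kubeadm reset -f                    # non-interactive, same as answering y
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # flush rules left by kube-proxy/CNI
rm -rf /etc/cni/net.d $HOME/.kube/config    # stale CNI config and old kubeconfig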
On m1, print the join command and upload the certificates to get a certificate key:
kubeadm token create --print-join-command
kubeadm init phase upload-certs --experimental-upload-certs
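The first command prints a worker join line; the second prints a certificate key. For a control-plane join, append the control-plane flags to the worker line (shape only; substitute values from your own output):

# kubeadm join <VIP>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key-from-upload-certs>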
m4 joins the cluster (this failed: m2 is powered off, so its dead etcd member has to be removed first)
kubeadm join 192.168.1.123:6443 --token f6mqyw.ulqtzivtfas2qzna --discovery-token-ca-cert-hash sha256:dd05586525feb49ccd6e19b894b6a30d1f7d0ee5ef8cc9345ec63665f8011fd5 --control-plane --certificate-key ef7d743f7db39b26fa172e41b7071026fdeba89cff7622bb75ad078d6bfc69a9
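Why the join is refused: kubeadm health-checks the etcd cluster before adding a member, and m2's member is down. A sketch of checking every member endpoint from inside a surviving etcd pod (same certificate paths as the alias used below; member IPs from the host list above):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.222:2379,https://192.168.1.223:2379,https://192.168.1.224:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health   # the m2 endpoint (192.168.1.223) should report unhealthy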
On m1: remove the m2 node
# first drain the node so it is unschedulable and its pods are evicted
[root@k8s-master11 ~]# kubectl drain k8s-master12 --delete-local-data --force --ignore-daemonsets
node/k8s-master12 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-g2gwg, kube-system/kube-flannel-ds-fkl6h, kube-system/kube-proxy-72d7c
node/k8s-master12 drained
# then delete it
[root@k8s-master11 ~]# kubectl delete node k8s-master12
node "k8s-master12" deleted
[root@k8s-master11 ~]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master11   Ready    master   13d   v1.15.1
k8s-master13   Ready    <none>   13d   v1.15.1
k8s-node01     Ready    <none>   13d   v1.15.1
k8s-node02     Ready    <none>   13d   v1.15.1
Exec into an etcd pod and remove the m2 member (find its ID in the member list output; it is the entry whose peer URL points at 192.168.1.223):
# kubectl exec -it etcd-k8s-master11 -n kube-system sh
# export ETCDCTL_API=3
# alias etcdctl='etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
# etcdctl member list
# etcdctl member remove 63bfe05c4446fb08
# etcdctl member list
m4 joins the cluster again with kubeadm join; this time the join goes through.
flannel fails to start on m4 (pod stuck in Init:Blocked); inspect the m4 node:
[root@k8s-master14 ~]# kubectl describe nodes k8s-master14
Name:               k8s-master14
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master14
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 09 Apr 2022 14:23:17 +0800
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 09 Apr 2022 14:34:34 +0800   Sat, 09 Apr 2022 14:28:08 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 09 Apr 2022 14:34:34 +0800   Sat, 09 Apr 2022 14:28:08 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 09 Apr 2022 14:34:34 +0800   Sat, 09 Apr 2022 14:28:08 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sat, 09 Apr 2022 14:34:34 +0800   Sat, 09 Apr 2022 14:28:08 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.1.221
  Hostname:    k8s-master14
Capacity:
 cpu:                4
 ephemeral-storage:  18121Mi
 hugepages-2Mi:      0
 memory:             4028736Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  17101121099
 hugepages-2Mi:      0
 memory:             3926336Ki
 pods:               110
System Info:
 Machine ID:                 efff03ecd86c427db378e55aaeee3e05
 System UUID:                564D9218-40AD-BFEF-BA33-B75245DAA244
 Boot ID:                    5d8692e3-68c8-4a92-b624-f2587411ff7b
 Kernel Version:             4.4.222-1.el7.elrepo.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://20.10.13
 Kubelet Version:            v1.15.1
 Kube-Proxy Version:         v1.15.1
PodCIDR:                     10.244.5.0/24
Non-terminated Pods:         (5 in total)
  Namespace    Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                                    ------------  ----------  ---------------  -------------  ---
  kube-system  etcd-k8s-master14                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
  kube-system  kube-apiserver-k8s-master14             250m (6%)     0 (0%)      0 (0%)           0 (0%)         6m55s
  kube-system  kube-controller-manager-k8s-master14    200m (5%)     0 (0%)      0 (0%)           0 (0%)         6m55s
  kube-system  kube-flannel-ds-amd64-hfkfc             100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      11m
  kube-system  kube-scheduler-k8s-master14             100m (2%)     0 (0%)      0 (0%)           0 (0%)         6m55s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                650m (16%)   100m (2%)
  memory             50Mi (1%)    50Mi (1%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type     Reason                   Age                    From                   Message
  ----     ------                   ----                   ----                   -------
  Normal   Starting                 11m                    kubelet, k8s-master14  Starting kubelet.
  Normal   NodeHasSufficientMemory  11m                    kubelet, k8s-master14  Node k8s-master14 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m                    kubelet, k8s-master14  Node k8s-master14 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m                    kubelet, k8s-master14  Node k8s-master14 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  11m                    kubelet, k8s-master14  Updated Node Allocatable limit across pods
  Normal   Starting                 6m58s                  kubelet, k8s-master14  Starting kubelet.
  Warning  Rebooted                 6m57s                  kubelet, k8s-master14  Node k8s-master14 has been rebooted, boot id: 5d8692e3-68c8-4a92-b624-f2587411ff7b
  Normal   NodeHasSufficientMemory  6m56s (x2 over 6m57s)  kubelet, k8s-master14  Node k8s-master14 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    6m56s (x2 over 6m57s)  kubelet, k8s-master14  Node k8s-master14 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     6m56s (x2 over 6m57s)  kubelet, k8s-master14  Node k8s-master14 status is now: NodeHasSufficientPID
  Normal   NodeNotReady             6m56s                  kubelet, k8s-master14  Node k8s-master14 status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  6m56s                  kubelet, k8s-master14  Updated Node Allocatable limit across pods
The node reports: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The cause is a missing image: the flannel image is not present on the new node, and pulling it fixes the problem. There is no need to re-apply the flannel manifest, and above all never kubectl delete it; that would take flannel down across the whole cluster.
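To reproduce the fix, check which image tag the flannel DaemonSet expects, then pull exactly that tag on the new node; a sketch (DaemonSet name taken from the pod names in the logs above):

# on any working master: which image does flannel run?
kubectl -n kube-system get ds kube-flannel-ds-amd64 -o jsonpath='{.spec.template.spec.containers[0].image}'
# on the new node: pre-pull it so the init container can start
docker pull quay.io/coreos/flannel:v0.12.0-amd64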
[root@k8s-master13 ~]# kubectl describe po/kube-flannel-ds-amd64-xx52q -n kube-system|grep ima
  Normal  Pulled  49m  kubelet, k8s-master13  Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
  Normal  Pulled  48m  kubelet, k8s-master13  Container image "quay.io/coreos/flannel:v0.12.0-amd64" already present on machine
[root@k8s-master12 ~]# docker pull quay.io/coreos/flannel:v0.12.0-amd64
v0.12.0-amd64: Pulling from coreos/flannel
921b31ab772b: Pull complete
4882ae1d65d3: Pull complete
ac6ef98d5d6d: Pull complete
8ba0f465eea4: Pull complete
fd2c2618e30c: Pull complete
Digest: sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
Status: Downloaded newer image for quay.io/coreos/flannel:v0.12.0-amd64
quay.io/coreos/flannel:v0.12.0-amd64
[root@k8s-master12 ~]# docker images|grep fla
rancher/mirrored-flannelcni-flannel              v0.17.0         9247abf08677   5 weeks ago    59.8MB
rancher/mirrored-flannelcni-flannel-cni-plugin   v1.0.1          ac40ce625740   2 months ago   8.1MB
quay.io/coreos/flannel                           v0.12.0-amd64   4e9f801d2217   2 years ago    52.8MB
[root@k8s-master13 ~]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master11   NotReady   master   13d     v1.15.1
k8s-master13   Ready      <none>   13d     v1.15.1
k8s-master14   Ready      master   3h33m   v1.15.1
k8s-node01     Ready      <none>   13d     v1.15.1
k8s-node02     Ready      <none>   13d     v1.15.1
Cloned a pre-initialized node to act as m2 and joined it to the cluster again:
hostnamectl set-hostname k8s-master12
sed -i s#192.168.1.110#192.168.1.223#g /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network
kubeadm join 192.168.1.123:6443 --token f6mqyw.ulqtzivtfas2qzna --discovery-token-ca-cert-hash sha256:dd05586525feb49ccd6e19b894b6a30d1f7d0ee5ef8cc9345ec63665f8011fd5 --control-plane --certificate-key c1ee219deaece497e3f7d9f9b8415a4103cc6aa6c58500cfee032fccc3854629
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
docker pull quay.io/coreos/flannel:v0.12.0-amd64
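Note the --certificate-key here differs from the one m4 used: kubeadm's uploaded certificates expire after two hours, so regenerate them before each later control-plane join. (The docker pull above pre-fetches the flannel image to avoid the Init problem m4 hit.)

kubeadm init phase upload-certs --experimental-upload-certs   # fresh certificate key
kubeadm token create --print-join-command                     # fresh token, in case the old one expired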
Shut down m1:
[root@k8s-master13 ~]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master11   NotReady   master   13d     v1.15.1
k8s-master12   Ready      master   91m     v1.15.1
k8s-master13   Ready      <none>   13d     v1.15.1
k8s-master14   Ready      master   3h33m   v1.15.1
k8s-node01     Ready      <none>   13d     v1.15.1
k8s-node02     Ready      <none>   13d     v1.15.1
Kick m1 out of the cluster:
[root@k8s-master13 ~]# kubectl drain k8s-master11 --delete-local-data --force --ignore-daemonsets
node/k8s-master11 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-gqmxb, kube-system/kube-flannel-ds-amd64-xwrx6, kube-system/kube-flannel-ds-cvwsw, kube-system/kube-proxy-kdpns
node/k8s-master11 drained
[root@k8s-master12 ~]# kubectl get node
NAME           STATUS                        ROLES    AGE     VERSION
k8s-master11   NotReady,SchedulingDisabled   master   13d     v1.15.1
k8s-master12   Ready                         master   96m     v1.15.1
k8s-master13   Ready                         <none>   13d     v1.15.1
k8s-master14   Ready                         master   3h37m   v1.15.1
k8s-node01     Ready                         <none>   13d     v1.15.1
k8s-node02     Ready                         <none>   13d     v1.15.1
[root@k8s-master13 ~]# kubectl delete node k8s-master11
node "k8s-master11" deleted
[root@k8s-master13 ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master12   Ready    master   97m     v1.15.1
k8s-master13   Ready    <none>   13d     v1.15.1
k8s-master14   Ready    master   3h39m   v1.15.1
k8s-node01     Ready    <none>   13d     v1.15.1
k8s-node02     Ready    <none>   13d     v1.15.1
Remove m1 from etcd:
[root@k8s-master13 ~]# kubectl exec -it etcd-k8s-master12 -n kube-system sh
/ # export ETCDCTL_API=3
/ # alias etcdctl='etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
/ # etcdctl member list
d4df797f30246bbb, started, k8s-master12, https://192.168.1.223:2380, https://192.168.1.223:2379
e01fc7b792ca9ffb, started, k8s-master13, https://192.168.1.224:2380, https://192.168.1.224:2379
e7cc3bc2b77f3119, started, k8s-master14, https://192.168.1.221:2380, https://192.168.1.221:2379
ebf530c716671c0c, started, k8s-master11, https://192.168.1.222:2380, https://192.168.1.222:2379
/ # etcdctl member remove ebf530c716671c0c
Member ebf530c716671c0c removed from cluster ced802635dc18ab0
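That brings the cluster back to three masters (m2, m3, m4), all on the 16 GB machine. A final sanity-check sketch, first inside the etcd pod and then from any master:

/ # etcdctl member list    # expect exactly three members: m2, m3, m4
kubectl get node           # expect every remaining node Ready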