Scaling out k8s nodes

New hosts to be added to the cluster:
192.168.10.45 fei-test-k8snode12.idc2.test.cn
192.168.10.159 fei-test-k8snode13.idc2.test.cn
192.168.10.58 fei-test-k8snode14.idc2.test.cn
 
ssh 192.168.10.19
Add the hosts for ansible (performed on master1)

Preparation

  • Set up SSH trust
With the help of the control host, copy id_rsa.pub into authorized_keys on each new machine and test it; log in to the op-admin1.idc1 host to set up the trust.
Copy master1's id_rsa.pub file to the new nodes:
ssh-copy-id -i /home/tengfei/florence/id_rsa.pub $ip   # $ip is the IP of a newly added host; not repeated below
=>
root@op-admin1:/home/tengfei# ssh-copy-id -i /home/tengfei/florence/id_rsa.pub 192.168.10.45
root@op-admin1:/home/tengfei# ssh-copy-id -i /home/tengfei/florence/id_rsa.pub 192.168.10.159
root@op-admin1:/home/tengfei# ssh-copy-id -i /home/tengfei/florence/id_rsa.pub 192.168.10.58
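To confirm the trust works before moving on (a quick sketch; BatchMode makes ssh fail instead of prompting for a password):
root@op-admin1:/home/tengfei# for ip in 192.168.10.45 192.168.10.159 192.168.10.58; do ssh -o BatchMode=yes $ip hostname; done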
  • Add the new workers to /etc/hosts
vim /etc/hosts
Append:
#new worker 2021/05/24
192.168.10.45 fei-test-k8snode12.idc2.test.cn n12
192.168.10.159 fei-test-k8snode13.idc2.test.cn n13
192.168.10.58 fei-test-k8snode14.idc2.test.cn n14
  • Write the new nodes into a temporary hosts file, tmp.hosts:
cd /etc/ansible
cat tmp.hosts
192.168.10.45
192.168.10.159
192.168.10.58
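A quick connectivity check through ansible, using the SSH trust set up above:
ansible -i tmp.hosts all -m ping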
  • Sync time on the new nodes and on the existing k8s nodes against the NTP servers:
cd /etc/ansible
ansible -i tmp.hosts all -m shell -a  "ntpdate 192.168.4.12 192.168.4.21"
ansible -i all.hosts all_k8s_node -m shell -a "ntpdate 192.168.4.12 192.168.4.21"
  • Check the time:
ansible -i tmp.hosts all -m shell -a  "w"
ansible -i all.hosts all_k8s_node -m shell -a "w"
The periodic sync comes from cron:
cat /etc/cron.d/sys_init_cron
=>
ntpdate 192.168.4.12 192.168.4.21
  • Copy /etc/hosts from master1 to the new nodes
ansible -i tmp.hosts all -m copy -a "src=/etc/hosts dest=/etc/hosts"
  • Check local DNS resolution:
Verify the DNS server configuration in /etc/resolv.conf (on the new servers),
then test with curl http://nexus.intra.test.cn
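Both checks can be run from master1 in one pass, following the same ansible pattern as the other steps:
ansible -i tmp.hosts all -m shell -a 'cat /etc/resolv.conf'
ansible -i tmp.hosts all -m shell -a 'curl -sI http://nexus.intra.test.cn'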
  • Upgrade the kernel (not a mandatory step)
ansible -i tmp.hosts all -m shell -a "yum update kernel -y"
ansible -i tmp.hosts all -m shell -a "reboot"

Remove legacy files

e.g. leftovers from services such as mesos/marathon/docker
[root@fei-test-k8smaster1 ansible]# ansible-playbook -i tmp.hosts 危险操作.yml --list-hosts   # confirm the target host IPs first
[root@fei-test-k8smaster1 ansible]# ansible-playbook -i tmp.hosts 危险操作.yml
[root@fei-test-k8smaster1 ansible]# kubectl get pod --all-namespaces | grep -v Run   # sanity check: no pods left outside Running
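To double-check that the old services are really gone (the unit names below are illustrative; match them to whatever 危险操作.yml actually removes):
ansible -i tmp.hosts all -m shell -a 'systemctl is-active docker mesos-slave marathon; true'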

Install software and create directories

  • Install the required software
[root@fei-test-k8smaster1 ansible]# ansible-playbook -i tmp.hosts 000.docker-kubelet-dir.yml
  • Check the kubelet configuration
Check whether /var/lib/kubelet/config.json exists on the other nodes;
if it does, sync it to the new nodes and restart their kubelet to pick up the configuration, as sketched below.
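A minimal sketch, assuming master1 (the ansible control node) holds a current copy at the same path:
ansible -i tmp.hosts all -m copy -a "src=/var/lib/kubelet/config.json dest=/var/lib/kubelet/config.json"
ansible -i tmp.hosts all -m shell -a 'systemctl restart kubelet'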
  • Create the extra directories needed on each node
[root@fei-test-k8smaster1 ansible]# ansible -i tmp.hosts all -m shell -a 'mkdir -p /home/work/eventlog/statslog'
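Then verify the directories exist:
ansible -i tmp.hosts all -m shell -a 'ls -ld /home/work/eventlog/statslog'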

Join the new nodes

  • On master1
Run ./easzctl add-node $ip to join each node to the cluster; for multiple nodes, run it once per node ($ip is the node's IP address). Cordoning each node as it joins keeps workloads off it until validation is complete.
cd /etc/ansible/tools
for x in $(cat ../tmp.hosts); do bash easzctl add-node $x && kubectl cordon $x; done
  • Load images
On master1:
ansible-playbook -i tmp.hosts  03.load-images.yml
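Afterwards, spot-check that the images landed on the new nodes:
ansible -i tmp.hosts all -m shell -a 'docker images | head'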
  • Sync the docker configuration file
Note: SIGHUP reloads only a subset of daemon.json options (registry mirrors and insecure registries among them); changes to options such as the storage or cgroup driver require a full docker restart.
ansible -i tmp.hosts all -m copy -a "src=/etc/docker/daemon.json dest=/etc/docker/daemon.json"
ansible -i tmp.hosts all -m shell -a 'kill -HUP `pidof dockerd`'
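To spot-check that the reload took effect (adjust the grep to whatever keys your daemon.json actually sets):
ansible -i tmp.hosts all -m shell -a "docker info 2>/dev/null | grep -iE 'mirror|insecure'"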
  • Delete the leftover network interface
add-node initializes a mynet0 interface on each newly joined node by default; it has to be deleted manually before calico can start properly.
ip link list
ip link del mynet0
Or, from master1, for all new nodes at once:
ansible -i tmp.hosts all -m shell -a 'ip link list | grep mynet0'
ansible -i tmp.hosts all -m shell -a 'ip link del mynet0'
  • Remove the kubectl config file /root/.kube/config from the new nodes
ansible -i tmp.hosts all -m shell -a "ls /root/.kube/config && rm -rf  /root/.kube/config"
  • Reload journald and docker (this step can be skipped)
ansible-playbook -i tmp.hosts 08.reload-journald.yml

Validate the new nodes

  • All pods healthy
[root@fei-test-k8smaster1 tools]# kubectl get po --all-namespaces | egrep -v "Run|es-index"
[root@fei-test-k8smaster1 ansible]# kubectl get po -A -owide | egrep "10.159|10.58|10.45"
  • All nodes Ready (Ready,SchedulingDisabled is expected here, since the nodes were cordoned as they joined)
[root@fei-test-k8smaster1 ansible]# kubectl get node | egrep "10.159|10.58|10.45"
192.168.10.159   Ready,SchedulingDisabled   node     52m    v1.14.8
192.168.10.45    Ready,SchedulingDisabled   node     45m    v1.14.8
192.168.10.58    Ready,SchedulingDisabled   node     37m    v1.14.8
  • Verify network connectivity for the scaled-out nodes; every BGP peer should report an Established state
[root@fei-test-k8smaster1 ~]# calicoctl node status
  • Set taints
[root@fei-test-k8smaster1 ~]# kubectl taint nodes 192.168.10.45 key1=v1:NoSchedule
[root@fei-test-k8smaster1 ~]# kubectl taint nodes 192.168.10.159 key1=v1:NoSchedule
[root@fei-test-k8smaster1 ~]# kubectl taint nodes 192.168.10.58 key1=v1:NoSchedule
  • View the taint configuration
[root@fei-test-k8smaster1 ~]# kubectl get nodes 192.168.10.45 -o go-template={{.spec.taints}}
[map[effect:NoSchedule key:key1 value:v1]]
[root@fei-test-k8smaster1 ~]# kubectl get nodes 192.168.10.159 -o go-template={{.spec.taints}}
[map[effect:NoSchedule key:key1 value:v1]]
[root@fei-test-k8smaster1 ~]# kubectl get nodes 192.168.10.58 -o go-template={{.spec.taints}}
[map[effect:NoSchedule key:key1 value:v1]]
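All three can also be checked in one shot:
[root@fei-test-k8smaster1 ~]# kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints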
  • Remove the taints
[root@fei-test-k8smaster1 ~]# kubectl taint nodes 192.168.10.45 key1:-
node/192.168.10.45 untainted
[root@fei-test-k8smaster1 ~]# kubectl taint nodes 192.168.10.159 key1:-
node/192.168.10.159 untainted
[root@fei-test-k8smaster1 ~]# kubectl taint nodes 192.168.10.58 key1:-
node/192.168.10.58 untainted
  • Lift the scheduling restriction
[root@fei-test-k8smaster1 ~]# kubectl uncordon 192.168.10.45
[root@fei-test-k8smaster1 ~]# kubectl uncordon 192.168.10.159
[root@fei-test-k8smaster1 ~]# kubectl uncordon 192.168.10.58
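The three nodes should now show plain Ready:
[root@fei-test-k8smaster1 ~]# kubectl get node | egrep "10.159|10.58|10.45"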
  • Confirm mynet0 has been deleted
ip link show | grep mynet0
  • Confirm image loading works on the new nodes
[root@fei-test-k8smaster1 ansible]# docker save quay.io/prometheus/node-exporter:v0.18.1 -o /tmp/node-exporter.tar
[root@fei-test-k8smaster1 ansible]# docker save quay.io/coreos/kube-rbac-proxy:v0.4.1 -o /tmp/kube-rbac-proxy.tar
The tars are saved on master1, so push them to the new nodes before loading:
[root@fei-test-k8smaster1 ansible]# ansible -i tmp.hosts all -m copy -a "src=/tmp/node-exporter.tar dest=/tmp/node-exporter.tar"
[root@fei-test-k8smaster1 ansible]# ansible -i tmp.hosts all -m copy -a "src=/tmp/kube-rbac-proxy.tar dest=/tmp/kube-rbac-proxy.tar"
[root@fei-test-k8smaster1 ansible]# ansible -i tmp.hosts all -m shell -a "docker load < /tmp/kube-rbac-proxy.tar"
[root@fei-test-k8smaster1 ansible]# ansible -i tmp.hosts all -m shell -a "docker load < /tmp/node-exporter.tar"