Upgrading a Kubernetes Cluster with Kubeadm
A Kubernetes upgrade can jump across patch releases, but it cannot skip minor releases: you must upgrade one minor version at a time!
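For example, going from 1.16.x to 1.18.x requires passing through 1.17.x. The version numbers below are illustrative, and kubeadm itself must be upgraded to the matching version before each hop:
$ kubeadm upgrade apply v1.17.4   # first hop: 1.16.x -> 1.17.x
$ kubeadm upgrade apply v1.18.16  # second hop: 1.17.x -> 1.18.x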
Lab environment:
- Kubernetes version before the upgrade: 1.17.4
- Kubernetes version after the upgrade: 1.18.16
Unless otherwise noted, all of the following commands are executed on the k8s-master node!
1. View the current cluster component image list
$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
2. Configure a China-mirror Kubernetes yum repository
If the cluster was originally installed from this mirror, the repository will already exist; if not, create it with the following commands:
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum clean all
$ yum makecache fast
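A quick sanity check that the repository is usable before upgrading (output varies with the mirror's state):
$ yum repolist | grep -i kubernetes                             # the repo should appear in the list
$ yum list available kubeadm --showduplicates | grep 1.18.16    # the target version should be visible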
3. Upgrade the kubeadm tool
3.1 Upgrade kubeadm, then run the pre-upgrade check
$ yum list kubeadm --showduplicates | sort -r # list all installable kubeadm versions
$ yum update -y kubeadm-1.18.16-0
$ kubeadm upgrade plan # run the pre-upgrade check before applying
........
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.17.4   v1.18.16

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.17.4   v1.18.16
Controller Manager   v1.17.4   v1.18.16
Scheduler            v1.17.4   v1.18.16
Kube Proxy           v1.17.4   v1.18.16
CoreDNS              1.6.5     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.18.16
3.2 Handling a failed pre-upgrade check
When this cluster was previously upgraded from 1.16.3 to 1.17.4, the kubeadm pre-upgrade check produced the following warning:
[preflight] Running pre-flight checks.
[WARNING CoreDNSUnsupportedPlugins]: there are unsupported plugins in the CoreDNS Corefile
The related GitHub issue indicates the warning has no real impact and can safely be ignored:
use the --ignore-preflight-errors=CoreDNSUnsupportedPlugins while upgrading. The proxy plugin will be replaced to use forward automatically.
Once the pre-upgrade preparation is complete, run the upgrade command directly, ignoring that warning:
$ kubeadm upgrade apply v1.17.4 --ignore-preflight-errors=CoreDNSUnsupportedPlugins
The upgrade succeeded:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.4". Enjoy!
Tested and confirmed working!
4. Upgrade the cluster
4.1 View the image list used by the target Kubernetes version
$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.16
k8s.gcr.io/kube-controller-manager:v1.18.16
k8s.gcr.io/kube-scheduler:v1.18.16
k8s.gcr.io/kube-proxy:v1.18.16
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
4.2 Write a script that pulls the images required for the upgrade
$ vim pull-image.sh
#!/bin/bash
## Mirror registry address
MY_REGISTRY=registry.aliyuncs.com/google_containers
## Pull the images from the mirror registry
docker pull ${MY_REGISTRY}/kube-apiserver:v1.18.16
docker pull ${MY_REGISTRY}/kube-controller-manager:v1.18.16
docker pull ${MY_REGISTRY}/kube-scheduler:v1.18.16
docker pull ${MY_REGISTRY}/kube-proxy:v1.18.16
docker pull ${MY_REGISTRY}/etcd:3.4.3-0
docker pull ${MY_REGISTRY}/pause:3.2
docker pull ${MY_REGISTRY}/coredns:1.6.7
## Re-tag them with the k8s.gcr.io names that kubeadm expects
docker tag ${MY_REGISTRY}/kube-apiserver:v1.18.16 k8s.gcr.io/kube-apiserver:v1.18.16
docker tag ${MY_REGISTRY}/kube-scheduler:v1.18.16 k8s.gcr.io/kube-scheduler:v1.18.16
docker tag ${MY_REGISTRY}/kube-controller-manager:v1.18.16 k8s.gcr.io/kube-controller-manager:v1.18.16
docker tag ${MY_REGISTRY}/kube-proxy:v1.18.16 k8s.gcr.io/kube-proxy:v1.18.16
docker tag ${MY_REGISTRY}/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag ${MY_REGISTRY}/pause:3.2 k8s.gcr.io/pause:3.2
docker tag ${MY_REGISTRY}/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
$ bash pull-image.sh
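Once the script finishes, you can confirm that all re-tagged images are present locally (an illustrative verification, not part of the original procedure):
$ docker images | grep v1.18.16              # kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy
$ docker images | grep -E 'etcd|pause|coredns'   # the supporting images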
4.3 Apply the cluster upgrade
$ kubeadm upgrade apply v1.18.16
.......
[addons] Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.16". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so
There is no need to back up the current node's etcd data or Kubernetes manifests before upgrading; kubeadm automatically backs them up under the /etc/kubernetes/tmp directory.
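The backups can be inspected directly; kubeadm names the directories with a timestamp, so the exact names below are illustrative:
$ ls /etc/kubernetes/tmp
kubeadm-backup-etcd-2021-03-05-10-52-06  kubeadm-backup-manifests-2021-03-05-10-52-06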
4.4 Upgrade the kubelet and kubectl tools
$ yum update -y kubectl-1.18.16-0 kubelet-1.18.16-0
$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl status kubelet
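A quick way to confirm that the kubelet binary on the master was actually replaced:
$ kubelet --version
Kubernetes v1.18.16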
5. Upgrade the kubeadm and kubelet versions on the worker nodes
$ kubectl drain k8s-node01 --ignore-daemonsets --delete-local-data --force
# put the node into maintenance mode so the kubelet upgrade is safe (run from k8s-master)
# the yum and systemctl commands below are executed on the worker node itself
$ yum update -y kubectl-1.18.16-0 kubelet-1.18.16-0
$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl status kubelet
$ kubectl uncordon k8s-node01
Apply exactly the same steps to k8s-node02!
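While a node is drained, you can confirm it is cordoned before working on it. Note also that the official kubeadm flow additionally runs kubeadm upgrade node on each worker to refresh the local kubelet configuration; the sketch below assumes the kubeadm package was upgraded on the worker first:
$ kubectl get nodes      # the drained node shows Ready,SchedulingDisabled
$ kubeadm upgrade node   # run on the worker itself, after upgrading the kubeadm package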
6. Confirm the upgrade succeeded
After the kubelet on every node in the cluster has been upgraded, run the following command to confirm that all nodes are back in the Ready state:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 143d v1.18.16
k8s-node01 Ready <none> 143d v1.18.16
k8s-node02 Ready <none> 143d v1.18.16
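You can also cross-check the client and server versions directly (output abbreviated):
$ kubectl version --short
Client Version: v1.18.16
Server Version: v1.18.16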
7. Upgrade the network plugin (optional)
Kubernetes has many network plugins, and the plugin is chosen at installation time, so kubeadm does not maintain upgrades of the plugin images. Update the plugin yourself based on what you installed; reference upgrade documentation for the common Flannel and Calico plugins is given below.
Note: when upgrading the network plugin, make sure its Pod subnet stays consistent with the podSubnet value in the kubeadm configuration, which can be inspected with the command kubectl describe configmaps kubeadm-config -n kube-system.
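For example (the grep pattern and the 10.244.0.0/16 value, the common Flannel default, are illustrative):
$ kubectl describe configmaps kubeadm-config -n kube-system | grep podSubnet
podSubnet: 10.244.0.0/16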
- Calico upgrade guide: if the Kubernetes cluster uses the Calico network plugin, see:
https://docs.projectcalico.org/maintenance/kubernetes-upgrade
- Flannel upgrade guide: if the Kubernetes cluster uses the Flannel network plugin, see:
https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md
8. Upgrade Docker (optional)
Kubernetes only supports certain Docker versions, so upgrading Kubernetes may also require upgrading Docker:
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum list docker-ce --showduplicates | sort -r # list all installable docker-ce versions
$ yum update -y docker-ce-19.03.13 docker-ce-cli-19.03.13
$ systemctl daemon-reload && systemctl restart docker
$ systemctl restart kubelet
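After the restart, confirm the Docker daemon is on the expected version and that kubelet came back up (an illustrative check):
$ docker version --format '{{.Server.Version}}'
19.03.13
$ systemctl is-active kubelet
active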
9. After the upgrade, certificates are renewed automatically (verify it)
If you only need to renew the certificates, see the earlier post on fixing expired certificates in a kubeadm-deployed Kubernetes 1.18.3 cluster.
$ kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 05, 2022 02:53 UTC   364d                                    no
apiserver                  Mar 05, 2022 02:52 UTC   364d            ca                      no
apiserver-etcd-client      Mar 05, 2022 02:52 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Mar 05, 2022 02:52 UTC   364d            ca                      no
controller-manager.conf    Mar 05, 2022 02:52 UTC   364d                                    no
etcd-healthcheck-client    Mar 04, 2022 10:24 UTC   364d            etcd-ca                 no
etcd-peer                  Mar 04, 2022 10:24 UTC   364d            etcd-ca                 no
etcd-server                Mar 04, 2022 10:24 UTC   364d            etcd-ca                 no
front-proxy-client         Mar 05, 2022 02:52 UTC   364d            front-proxy-ca          no
scheduler.conf             Mar 05, 2022 02:52 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 10, 2030 15:00 UTC   9y              no
etcd-ca                 Oct 10, 2030 15:00 UTC   9y              no
front-proxy-ca          Oct 10, 2030 15:00 UTC   9y              no
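If any certificate were found close to expiry, the 1.18-era kubeadm can also renew them all manually; the control-plane static pods must then be restarted to pick up the new certificates:
$ kubeadm alpha certs renew all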