
Upgrading the Kubernetes Version with Kubeadm

Upgrade kubelet, kubeadm, and kubectl to the latest version (Aliyun mirror)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
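Optionally, verify which versions were actually installed (a quick sanity check, not part of the original steps):

$ kubeadm version
$ kubectl version --client
$ kubelet --version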

Check the container image versions required by this kubeadm version

$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

Query the available package versions

$ yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'
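To upgrade to a specific release instead of the latest one in the repo, the packages can be pinned by version. The version strings below are only an example; use a version reported by the query above:

$ yum install -y kubelet-1.12.2-0 kubeadm-1.12.2-0 kubectl-1.12.2-0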

Pull the container images

$ touch pull_k8s_images.sh   # create the script, then fill it with the content below

#!/bin/bash
# Pull the v1.12.2 images from the anjia0532 Docker Hub mirror, retag them as
# k8s.gcr.io/<image>, then delete the mirror tags.
images=(kube-proxy:v1.12.2 kube-scheduler:v1.12.2 kube-controller-manager:v1.12.2
        kube-apiserver:v1.12.2 kubernetes-dashboard-amd64:v1.10.0
        etcd:3.2.24 coredns:1.2.2 pause:3.1)
for imageName in "${images[@]}"; do
    docker pull anjia0532/google-containers.$imageName
    docker tag  anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi  anjia0532/google-containers.$imageName
done

$ sh pull_k8s_images.sh
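After the script finishes, it may be worth confirming that the retagged images are present locally (an optional check, not in the original post):

$ docker images | grep 'k8s.gcr.io'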

# The same approach works with the gcrxio mirror on Docker Hub, for example:
# docker pull gcrxio/kubernetes-dashboard-amd64:v1.10.0

#!/bin/bash
# Pull the v1.13.0 images from the gcrxio mirror and retag them as k8s.gcr.io/<image>.
images=(kube-proxy:v1.13.0 kube-scheduler:v1.13.0 kube-controller-manager:v1.13.0
        kube-apiserver:v1.13.0 kubernetes-dashboard-amd64:v1.10.0
        etcd:3.2.24 coredns:1.2.6 pause:3.1)
for imageName in "${images[@]}"; do
    docker pull gcrxio/$imageName
    docker tag  gcrxio/$imageName k8s.gcr.io/$imageName
    docker rmi  gcrxio/$imageName
done

Other mirror registries that sync these images:

Check what needs to be upgraded

$ kubeadm upgrade plan

Upgrading worker nodes

On each node, upgrade kubelet, kubeadm, and kubectl to the matching version and pull the images for that version.
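A typical per-node sequence might look like the following; this is a sketch of common practice rather than a command list from the original post, and <node-name> plus the version strings are placeholders:

$ kubectl drain <node-name> --ignore-daemonsets       # run from a machine with kubectl access to the cluster
$ yum install -y kubelet-1.12.2-0 kubeadm-1.12.2-0 kubectl-1.12.2-0   # run on the node itself
$ systemctl daemon-reload && systemctl restart kubelet
$ kubectl uncordon <node-name>                         # allow Pods to be scheduled on the node again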

Upgrading the master node

$ kubeadm upgrade apply v1.12.2
[root@kubernetes-master opt]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.12.1
[upgrade/versions] kubeadm version: v1.12.2
[upgrade/versions] Latest stable version: v1.12.2
[upgrade/versions] Latest version in the v1.12 series: v1.12.2

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.12.1   v1.12.2

Upgrade to the latest version in the v1.12 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.12.1   v1.12.2
Controller Manager   v1.12.1   v1.12.2
Scheduler            v1.12.1   v1.12.2
Kube Proxy           v1.12.1   v1.12.2
CoreDNS              1.2.2     1.2.2
Etcd                 3.2.24    3.2.24

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.12.2

_____________________________________________________________________

[root@kubernetes-master opt]# kubeadm upgrade apply v1.12.2
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.2"
[upgrade/versions] Cluster version: v1.12.1
[upgrade/versions] kubeadm version: v1.12.2
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.2"...
Static pod: kube-apiserver-021rjsh216048s hash: 89c5b90341989424b0e19818b3f6e2be
Static pod: kube-controller-manager-021rjsh216048s hash: 8813655e72b0b808698db6f851c08ab4
Static pod: kube-scheduler-021rjsh216048s hash: 3c7cced3664379c67e8177757da3fa42
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests977991704/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-apiserver-021rjsh216048s hash: 89c5b90341989424b0e19818b3f6e2be
Static pod: kube-apiserver-021rjsh216048s hash: 37239b196c489a9b62de0bd5d3fa7fab
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-controller-manager-021rjsh216048s hash: 8813655e72b0b808698db6f851c08ab4
Static pod: kube-controller-manager-021rjsh216048s hash: c30d65a55ad9ffe79d5d36185a605db2
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-10-30-13-43-20/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-scheduler-021rjsh216048s hash: 3c7cced3664379c67e8177757da3fa42
Static pod: kube-scheduler-021rjsh216048s hash: ee7b1077c61516320f4273309e9b4690
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "021rjsh216048s" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Check node and Pod status

$ systemctl enable kubelet && systemctl restart kubelet   # restart the kubelet on each node in turn
$ kubectl get nodes
$ kubectl get pod --all-namespaces -o wide

REFER:
https://my.oschina.net/u/2306127/blog/2231184
https://github.com/kubernetes/kubeadm/issues/1054
