Container orchestration with Kubernetes: the ReplicaSet and Deployment controllers
In the previous post we covered the Pod lifecycle, liveness and readiness probes, and resource limits on Kubernetes; for a refresher, see https://www.cnblogs.com/qiuhom-1874/p/14143610.html. Today we turn to Pod controllers.
Controllers are the "brain" of Kubernetes. As mentioned at the start of this series, controllers are responsible for creating and managing resources on the cluster: whenever a resource's actual state drifts from the state the user declared, the controller restarts or recreates objects until the two match again. Kubernetes ships many kinds of controllers, such as Pod controllers, the Service controller, the Endpoint controller, and so on, each with its own role; a Pod controller is the kind that manages Pod resources. Pod controllers themselves come in several flavors. Classified by the application running in the Pod's containers, there are controllers for stateless and for stateful applications; classified by whether the application runs as a daemon, there are daemon and non-daemon controllers. The most common stateless controllers are ReplicaSet and Deployment; the common stateful controller is StatefulSet; the common daemon controller is DaemonSet; the non-daemon, run-to-completion controller is Job, and for Job-style workloads that must run periodically there is CronJob.
1. The ReplicaSet controller
The main job of a ReplicaSet controller is to ensure that the number of Pod replicas exactly matches the user's desired count at all times. Once started, it first looks up the Pods in the cluster that match its label selector; whenever the number of live Pods differs from the desired number, it deletes the surplus or creates the shortfall, building new Pods from the Pod template defined in the manifest.
Example: defining a ReplicaSet
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
Tip: for a ReplicaSet, the apiVersion field is apps/v1 and kind is ReplicaSet; both values are fixed. The metadata section mainly defines the name and namespace. The spec mainly defines replicas, selector, and template. replicas takes an integer, the desired number of Pod replicas. selector defines the label selector and takes an object: its matchLabels field does exact label matching and takes a map of one or more key/value pairs. Besides exact matching there is also matchExpressions, which matches by expression; each expression object defines a key field (the label key name), an operator, and values. key and operator are strings, and operator can be In, NotIn, Exists, or DoesNotExist; values is a list of strings. Finally, the Pod template is defined with the template field, whose value is an object: its metadata defines the template's metadata and must include labels, which usually match the labels in the selector; its spec defines the desired state of the templated Pod, most importantly the names and images of the Pod's containers.
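To make the matchExpressions form concrete, here is a sketch of an equivalent selector fragment (the second expression and its `tier` label key are made up for illustration; they are not part of the demo manifest above):

```yaml
# Hypothetical selector fragment: matches pods whose app label value is
# in the given list, and which carry a tier label with any value.
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx-pod
  - key: tier
    operator: Exists
```

Exists and DoesNotExist ignore the values list entirely; In and NotIn require it to be non-empty.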
Apply the resource manifest
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo created
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   3         3         3       9s
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   3         3         3       17s   nginx        nginx:1.14-alpine   app=nginx-pod
[root@master01 ~]#
Tip: rs is the short name for ReplicaSet. The output shows that the controller has been created; the current Pod replica count is 3, the desired count is 3, and 3 replicas are ready.
Check the Pods
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          2m57s   nginx-pod
[root@master01 ~]#
Tip: three Pods were created in the default namespace, each carrying the label app=nginx-pod.
Test: change one Pod's label to ngx; will the controller create a new Pod labeled nginx-pod?
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          5m48s   nginx-pod
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=ngx --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE    APP
replicaset-demo-qv8tp   1/1     Running   0          4s     nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          6m2s   ngx
[root@master01 ~]#
Tip: as soon as we changed one Pod's label to app=ngx, the controller created a new Pod from the Pod template.
Test: change that Pod's label back to app=nginx-pod; will the controller delete a Pod?
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-qv8tp   1/1     Running   0          2m35s   nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m33s   ngx
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=nginx-pod --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS        RESTARTS   AGE     APP
replicaset-demo-qv8tp   0/1     Terminating   0          2m50s   nginx-pod
replicaset-demo-rsl7q   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-twknl   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running       0          8m48s   nginx-pod
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m57s   nginx-pod
[root@master01 ~]#
Tip: when the cluster holds more Pods with the matching label than the user desires, the controller deletes the surplus. These tests show that a ReplicaSet relies on its label selector to decide whether the live Pod count matches the user-defined count, deleting or creating Pods until the count exactly equals the desired number.
View the ReplicaSet's details
[root@master01 ~]# kubectl describe rs replicaset-demo
Name:         replicaset-demo
Namespace:    default
Selector:     app=nginx-pod
Labels:       <none>
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx-pod
  Containers:
   nginx:
    Image:        nginx:1.14-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-twknl
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-vzdbb
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-rsl7q
  Normal  SuccessfulCreate  15m   replicaset-controller  Created pod: replicaset-demo-qv8tp
  Normal  SuccessfulDelete  12m   replicaset-controller  Deleted pod: replicaset-demo-qv8tp
[root@master01 ~]#
Scaling the ReplicaSet's Pod replica count up and down
[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=6
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   6         6         6       32m
[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=4
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   4         4         4       32m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS        RESTARTS   AGE
replicaset-demo-5t9tt   0/1     Terminating   0          33s
replicaset-demo-j75hk   1/1     Running       0          33s
replicaset-demo-rsl7q   1/1     Running       0          33m
replicaset-demo-twknl   1/1     Running       0          33m
replicaset-demo-vvqfw   0/1     Terminating   0          33s
replicaset-demo-vzdbb   1/1     Running       0          33m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          41s
replicaset-demo-rsl7q   1/1     Running   0          33m
replicaset-demo-twknl   1/1     Running   0          33m
replicaset-demo-vzdbb   1/1     Running   0          33m
[root@master01 ~]#
Tip: kubectl scale can grow or shrink the Pod replica count of a controller. Besides this command-line approach, you can also edit the replicas field in the manifest and apply it again with kubectl apply.
Scaling out by changing the replicas field in the manifest
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   7         7         7       35m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          3m33s
replicaset-demo-k2n9g   1/1     Running   0          9s
replicaset-demo-n7fmk   1/1     Running   0          9s
replicaset-demo-q4dc6   1/1     Running   0          9s
replicaset-demo-rsl7q   1/1     Running   0          36m
replicaset-demo-twknl   1/1     Running   0          36m
replicaset-demo-vzdbb   1/1     Running   0          36m
[root@master01 ~]#
Updating the Pod version
Method 1: change the image version in the manifest's Pod template, then apply the manifest again
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   7         7         7       55m   nginx        nginx:1.16-alpine   app=nginx-pod
[root@master01 ~]#
Tip: the output now shows the image version as 1.16.
Verify: check the Pods; has the container image inside them actually changed to 1.16?
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          25m
replicaset-demo-k2n9g   1/1     Running   0          21m
replicaset-demo-n7fmk   1/1     Running   0          21m
replicaset-demo-q4dc6   1/1     Running   0          21m
replicaset-demo-rsl7q   1/1     Running   0          57m
replicaset-demo-twknl   1/1     Running   0          57m
replicaset-demo-vzdbb   1/1     Running   0          57m
[root@master01 ~]#
Tip: judging from the Pods' creation times, none of them was recreated; the Pods were not updated.
Test: delete one Pod; will the replacement run the new image?
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          25m
replicaset-demo-k2n9g   1/1     Running   0          21m
replicaset-demo-n7fmk   1/1     Running   0          21m
replicaset-demo-q4dc6   1/1     Running   0          21m
replicaset-demo-rsl7q   1/1     Running   0          57m
replicaset-demo-twknl   1/1     Running   0          57m
replicaset-demo-vzdbb   1/1     Running   0          57m
[root@master01 ~]# kubectl delete pod/replicaset-demo-vzdbb
pod "replicaset-demo-vzdbb" deleted
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
replicaset-demo-9wqj9   0/1     ContainerCreating   0          10s
replicaset-demo-j75hk   1/1     Running             0          26m
replicaset-demo-k2n9g   1/1     Running             0          23m
replicaset-demo-n7fmk   1/1     Running             0          23m
replicaset-demo-q4dc6   1/1     Running             0          23m
replicaset-demo-rsl7q   1/1     Running             0          58m
replicaset-demo-twknl   1/1     Running             0          58m
[root@master01 ~]# kubectl describe pod/replicaset-demo-9wqj9 |grep Image
    Image:          nginx:1.16-alpine
    Image ID:       docker-pullable://nginx@sha256:5057451e461dda671da5e951019ddbff9d96a751fc7d548053523ca1f848c1ad
[root@master01 ~]#
Tip: after we deleted one Pod, the controller created a new one, and the new Pod runs the new image version. So for a ReplicaSet, changing the image in the Pod template does not touch existing Pods as long as the live count matches the desired count; only newly created Pods pick up the new version. In other words, to roll out a new version with a bare ReplicaSet you must delete the old Pods yourself.
Method 2: update the Pod version with a command
[root@master01 ~]# kubectl set image rs replicaset-demo nginx=nginx:1.18-alpine
replicaset.apps/replicaset-demo image updated
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   7         7         7       72m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-9wqj9   1/1     Running   0          13m
replicaset-demo-j75hk   1/1     Running   0          40m
replicaset-demo-k2n9g   1/1     Running   0          36m
replicaset-demo-n7fmk   1/1     Running   0          36m
replicaset-demo-q4dc6   1/1     Running   0          36m
replicaset-demo-rsl7q   1/1     Running   0          72m
replicaset-demo-twknl   1/1     Running   0          72m
[root@master01 ~]#
Tip: whether you use a command or edit the image in the manifest's Pod template, a ReplicaSet will not update running Pods as long as the desired number of Pods exists; new-version Pods appear only after old ones are deleted by hand.
2. The Deployment controller
A Deployment is defined much like a ReplicaSet, but it is more powerful: it supports rolling updates with a user-defined update strategy. Under the hood, a Deployment manages Pods through ReplicaSets; creating a Deployment automatically creates a ReplicaSet. A Pod created through a Deployment is named <deployment name>-<pod-template hash>-<random string>, while the underlying ReplicaSet is named <deployment name>-<pod-template hash>; the Pod name is therefore simply the ReplicaSet name plus "-" plus a random string.
Example: creating a Deployment
[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
Apply the manifest
[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo created
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           10s   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]#
Verify: was a ReplicaSet created?
[root@master01 ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-6d795f958b   3         3         3       57s
replicaset-demo          7         7         7       84m
[root@master01 ~]#
Tip: a ReplicaSet named deploy-demo-6d795f958b was created.
Verify: check the Pods; is each Pod name the ReplicaSet name plus "-" plus a random string?
[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-bppjr   1/1     Running   0          2m16s
deploy-demo-6d795f958b-mxwkn   1/1     Running   0          2m16s
deploy-demo-6d795f958b-sh76g   1/1     Running   0          2m16s
replicaset-demo-9wqj9          1/1     Running   0          26m
replicaset-demo-j75hk          1/1     Running   0          52m
replicaset-demo-k2n9g          1/1     Running   0          49m
replicaset-demo-n7fmk          1/1     Running   0          49m
replicaset-demo-q4dc6          1/1     Running   0          49m
replicaset-demo-rsl7q          1/1     Running   0          85m
replicaset-demo-twknl          1/1     Running   0          85m
[root@master01 ~]#
Tip: there are three Pods named deploy-demo-6d795f958b- plus a random string.
Updating the Pod version
[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           5m45s   nginx        nginx:1.16-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
deploy-demo-95cc58f4d-45l5c   1/1     Running   0          43s
deploy-demo-95cc58f4d-6bmb6   1/1     Running   0          45s
deploy-demo-95cc58f4d-7d5r5   1/1     Running   0          29s
replicaset-demo-9wqj9         1/1     Running   0          30m
replicaset-demo-j75hk         1/1     Running   0          56m
replicaset-demo-k2n9g         1/1     Running   0          53m
replicaset-demo-n7fmk         1/1     Running   0          53m
replicaset-demo-q4dc6         1/1     Running   0          53m
replicaset-demo-rsl7q         1/1     Running   0          89m
replicaset-demo-twknl         1/1     Running   0          89m
[root@master01 ~]#
Tip: with a Deployment, merely changing the image version in the Pod template makes the Pods update automatically.
Updating the Pod version with a command
[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.18-alpine
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl get deploy
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   3/3     1            3           9m5s
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     1            3           9m11s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           9m38s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
deploy-demo-567b54cd6-6h97c   1/1     Running   0          28s
deploy-demo-567b54cd6-j74t4   1/1     Running   0          27s
deploy-demo-567b54cd6-wcccx   1/1     Running   0          49s
replicaset-demo-9wqj9         1/1     Running   0          34m
replicaset-demo-j75hk         1/1     Running   0          60m
replicaset-demo-k2n9g         1/1     Running   0          56m
replicaset-demo-n7fmk         1/1     Running   0          56m
replicaset-demo-q4dc6         1/1     Running   0          56m
replicaset-demo-rsl7q         1/1     Running   0          92m
replicaset-demo-twknl         1/1     Running   0          92m
[root@master01 ~]#
Tip: with a Deployment, as soon as the image version in the Pod template changes, the Pods are rolled over to the specified version.
View the historical ReplicaSet versions
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       3m50s   nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       12m     nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       7m27s   nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       95m     nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
Tip: when a Deployment updates the Pod version it keeps the historical ReplicaSets: whenever the pod-template hash changes, a new ReplicaSet is created. Unlike a standalone ReplicaSet, the historical ReplicaSets run no Pods; only the current-version ReplicaSet does.
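The number of old ReplicaSets a Deployment keeps for rollback is capped by the spec.revisionHistoryLimit field of the apps/v1 Deployment API (it defaults to 10 if unset). A minimal sketch, reusing the names from the demo above:

```yaml
# Sketch: keep only the 5 most recent old ReplicaSets for rollback;
# older ones are garbage-collected by the Deployment controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  revisionHistoryLimit: 5
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
```

Setting it to 0 disables rollback entirely, since no old ReplicaSets are retained.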
View the rollout history
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
[root@master01 ~]#
Tip: there are three revisions here, but no change cause is recorded, because we did not record anything when updating. To record the reason for an update, append the --record option to the update command.
Example: recording the update command in the rollout history
[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.16.yaml --record
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]#
Tip: with --record added to the update command, the rollout history now shows the command that produced each revision.
Rolling back to the previous version
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           33m   nginx        nginx:1.16-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       24m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       33m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    3         3         3       28m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       116m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           34m   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       26m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   3         3         3       35m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       29m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       118m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
Tip: after running kubectl rollout undo deploy/deploy-demo, the image rolled back from 1.16 to 1.14, and the 1.14 revision moved to the newest entry in the rollout history.
Rolling back to a specific revision
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo --to-revision=3
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
7         <none>
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           42m   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       33m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       42m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       36m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       125m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
Tip: to roll back to a particular revision, pass its number with the --to-revision option.
View the Deployment's details
[root@master01 ~]# kubectl describe deploy deploy-demo
Name:                   deploy-demo
Namespace:              default
CreationTimestamp:      Thu, 17 Dec 2020 23:40:11 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 7
Selector:               app=ngx-dep-pod
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=ngx-dep-pod
  Containers:
   nginx:
    Image:        nginx:1.18-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deploy-demo-567b54cd6 (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 3
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 0
  Normal  ScalingReplicaSet  55m                 deployment-controller  Scaled up replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  54m                 deployment-controller  Scaled down replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 2
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  37m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  37m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 0
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 1
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  29m (x3 over 64m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 3
  Normal  ScalingReplicaSet  22m (x14 over 54m)  deployment-controller  (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
[root@master01 ~]#
Tip: describing the Deployment shows the Pod template, the rollback process, the default update strategy, and more.
Customizing the rolling update strategy
[root@master01 ~]# cat deploy-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  minReadySeconds: 5
[root@master01 ~]#
Tip: the update strategy is defined with the strategy field, whose value is an object. Its type field selects the strategy, of which there are two. The first, Recreate, terminates all old-version Pods before any new-version Pods are created, so the application is briefly unavailable during the update. The second, RollingUpdate, is the default and is the one we tune by hand: maxSurge is the maximum number of Pods allowed above the desired count (how many extra Pods may be created during the update), and maxUnavailable is the maximum number allowed below the desired count (how many old Pods may be removed at once). The minReadySeconds field is not part of the strategy; it sits directly under spec and sets the minimum time a new Pod must be Ready before it counts as available. The strategy above therefore says: use RollingUpdate, allow at most 2 Pods above the desired count and at most 1 below it, and require a Pod to be ready for at least 5 seconds.
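For comparison, a Recreate strategy (useful when old and new versions must never run side by side, e.g. with an incompatible schema change) would be declared like this minimal sketch; the Deployment skeleton is abbreviated to the fields that matter:

```yaml
# Sketch: with type Recreate, the controller scales the old ReplicaSet
# to 0 before scaling the new one up, so there is a window of downtime.
# rollingUpdate/maxSurge/maxUnavailable must not be set with this type.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
```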
Apply the manifest
[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.14.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl describe deploy/deploy-demo
Name:                   deploy-demo
Namespace:              default
CreationTimestamp:      Thu, 17 Dec 2020 23:40:11 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 8
Selector:               app=ngx-dep-pod
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        5
RollingUpdateStrategy:  1 max unavailable, 2 max surge
Pod Template:
  Labels:  app=ngx-dep-pod
  Containers:
   nginx:
    Image:        nginx:1.14-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deploy-demo-6d795f958b (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  47m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  47m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 1
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  31m (x14 over 64m)  deployment-controller  (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  41s (x4 over 73m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 3
  Normal  ScalingReplicaSet  41s (x2 over 47m)   deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 2
  Normal  ScalingReplicaSet  41s (x2 over 47m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  34s (x2 over 47m)   deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 0
[root@master01 ~]#
Tip: the Deployment's update strategy has been changed to the one we defined. To make the update behavior easier to observe, we first scale the Pod count up to 10.
Scale out the Pod replicas
[root@master01 ~]# kubectl scale deploy/deploy-demo --replicas=10
deployment.apps/deploy-demo scaled
[root@master01 ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          3m33s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          8s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          8s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          3m33s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          8s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          3m33s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          8s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          8s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          8s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          8s
replicaset-demo-9wqj9          1/1     Running   0          100m
replicaset-demo-j75hk          1/1     Running   0          126m
replicaset-demo-k2n9g          1/1     Running   0          123m
replicaset-demo-n7fmk          1/1     Running   0          123m
replicaset-demo-q4dc6          1/1     Running   0          123m
replicaset-demo-rsl7q          1/1     Running   0          159m
replicaset-demo-twknl          1/1     Running   0          159m
[root@master01 ~]#
Watch the update process
[root@master01 ~]# kubectl get pod -w
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          5m18s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          113s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          113s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          5m18s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          113s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          5m18s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          113s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          113s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          113s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          113s
replicaset-demo-9wqj9          1/1     Running   0          102m
replicaset-demo-j75hk          1/1     Running   0          128m
replicaset-demo-k2n9g          1/1     Running   0          125m
replicaset-demo-n7fmk          1/1     Running   0          125m
replicaset-demo-q4dc6          1/1     Running   0          125m
replicaset-demo-rsl7q          1/1     Running   0          161m
replicaset-demo-twknl          1/1     Running   0          161m
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0          0s
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0          0s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0          0s
deploy-demo-6d795f958b-mbrlw   1/1     Terminating         0          4m16s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0          0s
deploy-demo-578d6b6f94-qhc9j   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-95srs   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0          0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0          0s
deploy-demo-578d6b6f94-bht84   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m17s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m24s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m24s
deploy-demo-578d6b6f94-qhc9j   1/1     Running             0          15s
deploy-demo-578d6b6f94-95srs   1/1     Running             0          16s
deploy-demo-578d6b6f94-bht84   1/1     Running             0          18s
deploy-demo-6d795f958b-ph99t   1/1     Terminating         0          4m38s
deploy-demo-6d795f958b-jfrnc   1/1     Terminating         0          4m38s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0          0s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0          0s
deploy-demo-578d6b6f94-lg6vk   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m38s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m38s
deploy-demo-6d795f958b-5zr7r   1/1     Terminating         0          4m43s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4rpx9   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m43s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m44s
deploy-demo-578d6b6f94-g9c8x   1/1     Running             0          12s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m51s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m51s
deploy-demo-578d6b6f94-lg6vk   1/1     Running             0          15s
deploy-demo-6d795f958b-9mc7k   1/1     Terminating         0          4m56s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4lbwg   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-4rpx9   1/1     Running             0          13s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          4m57s
deploy-demo-578d6b6f94-4lbwg   1/1     Running             0          2s
deploy-demo-6d795f958b-wzscg   1/1     Terminating         0          4m58s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-fhkk9   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          4m59s
deploy-demo-578d6b6f94-fhkk9   1/1     Running             0          2s
deploy-demo-6d795f958b-z5mnf   1/1     Terminating         0          5m2s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0          1s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0          1s
deploy-demo-6d795f958b-czwdp   1/1     Terminating         0          8m28s
deploy-demo-578d6b6f94-sfpz4   0/1     ContainerCreating   0          1s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0          0s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0          0s
deploy-demo-578d6b6f94-5bs6z   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m28s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m4s
deploy-demo-578d6b6f94-sfpz4   1/1     Running             0          2s
deploy-demo-6d795f958b-5bdfw   1/1     Terminating         0          8m29s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          5m4s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          5m4s
deploy-demo-578d6b6f94-5bs6z   1/1     Running             0          1s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m30s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          5m11s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          5m11s
deploy-demo-6d795f958b-jw9n8   1/1     Terminating         0          8m38s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m38s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m14s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m14s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m46s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m46s
Tip: the -w option keeps watching and prints Pod changes as they happen. The watch output shows the rollout proceeding in turn: new Pods go Pending and start creating while old Pods terminate, e.g. one old Pod deleted, a couple of new Pods created, then more creates and deletes interleaved. Throughout, with replicas=10, maxUnavailable=1, and maxSurge=2, the total number of new plus old Pods never drops below 9 and never exceeds 12.
Using pause to implement a canary release
[root@master01 ~]# kubectl set image deploy/deploy-demo nginx=nginx:1.14-alpine && kubectl rollout pause deploy/deploy-demo
deployment.apps/deploy-demo image updated
deployment.apps/deploy-demo paused
[root@master01 ~]#
Tip: following our update strategy, the command above deletes one old Pod and creates three new-version Pods (one replacement plus the two extras allowed by maxSurge), and then the rollout pauses. At that point only one old Pod has been replaced, two extra new Pods exist, and there are 12 Pods in total.
Check the Pods
[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-df77k   1/1     Running   0          87s
deploy-demo-6d795f958b-tll8b   1/1     Running   0          87s
deploy-demo-6d795f958b-zbhwp   1/1     Running   0          87s
deploy-demo-fb957b9b-44l6g     1/1     Running   0          3m21s
deploy-demo-fb957b9b-7q6wh     1/1     Running   0          3m38s
deploy-demo-fb957b9b-d45rg     1/1     Running   0          3m27s
deploy-demo-fb957b9b-j7p2j     1/1     Running   0          3m38s
deploy-demo-fb957b9b-mkpz6     1/1     Running   0          3m38s
deploy-demo-fb957b9b-qctnv     1/1     Running   0          3m21s
deploy-demo-fb957b9b-rvrtf     1/1     Running   0          3m27s
deploy-demo-fb957b9b-wf254     1/1     Running   0          3m12s
deploy-demo-fb957b9b-xclhz     1/1     Running   0          3m22s
replicaset-demo-9wqj9          1/1     Running   0          135m
replicaset-demo-j75hk          1/1     Running   0          161m
replicaset-demo-k2n9g          1/1     Running   0          158m
replicaset-demo-n7fmk          1/1     Running   0          158m
replicaset-demo-q4dc6          1/1     Running   0          158m
replicaset-demo-rsl7q          1/1     Running   0          3h14m
replicaset-demo-twknl          1/1     Running   0          3h14m
[root@master01 ~]# kubectl get pod|grep "^deploy.*" |wc -l
12
[root@master01 ~]#
Tip: the two extra Pods exist because our update strategy allows at most 2 Pods above the desired count (maxSurge: 2).
Resume the update
[root@master01 ~]# kubectl rollout resume deploy/deploy-demo && kubectl rollout status deploy/deploy-demo
deployment.apps/deploy-demo resumed
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
deployment "deploy-demo" successfully rolled out
[root@master01 ~]#
Tip: resume continues the previously paused rollout, and kubectl rollout status follows the progress of the update until it completes.