kubernetes (7): RC replication controller - rolling update and one-command rollback
http://www.bubuko.com/infodetail-2612308.html
https://www.cnblogs.com/luoahong/p/10300314.html
1 Rolling update
A rolling update is a smooth way to upgrade: by replacing instances step by step it keeps the overall system stable, and problems can be spotted and corrected right at the start of the upgrade, before their impact spreads. The Kubernetes rolling-update command is:
kubectl rolling-update my-rcName-v1 -f my-rcName-v2-rc.yaml --update-period=10s
Once the upgrade starts, an RC for version V2 is first created from the supplied definition file; then, every 10 seconds (--update-period=10s), the number of V2 pod replicas is gradually increased while the number of V1 pod replicas is gradually decreased. When the upgrade finishes, the V1 RC is deleted and the V2 RC is kept, completing the rolling update.
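The command above does not show what my-rcName-v2-rc.yaml contains, so the manifest below is only a hypothetical sketch; the names, labels and image are assumptions, not taken from the original post. kubectl rolling-update -f expects the new RC to have a different metadata.name, and its selector should differ from the old RC's in at least one label value so the two RCs do not claim each other's pods (the walkthrough in section 3 satisfies this by switching from app: myweb to app: myweb2).

# Hypothetical my-rcName-v2-rc.yaml for the command above; all names, labels
# and the image are illustrative assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rcName-v2            # must differ from the old RC name (my-rcName-v1)
spec:
  replicas: 3
  selector:
    app: myapp
    version: v2                 # at least one selector label value differs from v1
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:v2   # the new image version to roll out
        ports:
        - containerPort: 80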
If the upgrade hits an error and stops partway through, it can simply be resumed: Kubernetes works out what state the interrupted update left behind and continues from there. You can also roll back instead, with the following command:
kubectl rolling-update my-rcName-v1 -f my-rcName-v2-rc.yaml --update-period=10s --rollback
2 ReplicaSet, the next-generation replica controller
A ReplicaSet can be thought of as an upgraded Replication Controller. Like an RC, it keeps the number of pods matching its label selector at the desired count. The difference is that a ReplicaSet supports set-based selector expressions, whereas a Replication Controller only supports equality-based selectors. ReplicaSets are rarely used on their own; today they are mostly created by Deployments, which use them to orchestrate the creation, update and deletion of pods.
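As a short sketch of what a set-based selector looks like (this example is not from the original post; apiVersion apps/v1 assumes a reasonably recent cluster, and the name, labels and image are illustrative):

# Minimal ReplicaSet sketch showing a set-based selector via matchExpressions.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myweb-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
    matchExpressions:
      # set-based condition: the "tier" label must take one of the listed values
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: myweb
        tier: frontend
    spec:
      containers:
      - name: myweb
        image: 192.168.0.136:5000/nginx:1.15
        ports:
        - containerPort: 80

An RC selector can only express exact matches such as app = myweb; operators like In, NotIn and Exists are what the set-based form adds.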
2.1 Features and roles of the RC (ReplicaSet)
- Defining an RC automates the creation of pods and keeps their replica count under automatic control (see the annotated manifest sketch after this list).
- An RC contains a complete pod definition template.
- An RC uses the Label Selector mechanism to control its pod replicas automatically.
- Changing the RC's replica count scales the pods out or in.
- Changing the image version in the RC's pod template triggers a rolling upgrade of the pods.
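To show where each of these features lives in a manifest, here is a minimal RC annotated field by field; the name and image match the walkthrough below, so treat it as a sketch rather than a file from the original post.

apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 3                 # change this to scale the pods out or in
  selector:
    app: myweb                # Label Selector that ties the pods to this RC
  template:                   # complete pod definition template
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.0.136:5000/nginx:1.15   # bump this tag to roll out a new version
        ports:
        - containerPort: 80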
3 Nginx rolling update
3.1 Check which nginx version the containers are currently running
[root@k8s-master ~]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
myweb-q7cn1   1/1       Running   0          26m       172.16.14.2   k8s-node-2
myweb-xjrsn   1/1       Running   0          1h        172.16.73.3   k8s-node-1
test          1/1       Running   0          1h        172.16.73.2   k8s-node-1
[root@k8s-master ~]# kubectl describe rc myweb | grep Image
Image(s):       192.168.0.136:5000/nginx:latest
[root@k8s-master ~]#
3.2 Pull another version and push it to the private registry
[root@k8s-master ~]# docker pull nginx:1.15
Trying to pull repository docker.io/library/nginx ...
1.15: Pulling from docker.io/library/nginx
743f2d6c1f65: Pull complete
6bfc4ec4420a: Pull complete
688a776db95f: Pull complete
Digest: sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68
[root@k8s-master ~]# docker images | grep nginx
192.168.0.136:5000/nginx   latest    5a3221f0137b   6 days ago     125.9 MB
docker.io/nginx            latest    5a3221f0137b   6 days ago     125.9 MB
docker.io/nginx            1.15      53f3fd8007f7   3 months ago   109.3 MB
[root@k8s-master ~]# docker tag docker.io/nginx:1.15 192.168.0.136:5000/nginx:1.15
[root@k8s-master ~]# docker push 192.168.0.136:5000/nginx:1.15
The push refers to a repository [192.168.0.136:5000/nginx]
332fa54c5886: Pushed
6ba094226eea: Pushed
6270adb5794c: Pushed
1.15: digest: sha256:e770165fef9e36b990882a4083d8ccf5e29e469a8609bb6b2e3b47d9510e2c8d size: 948
[root@k8s-master ~]# docker images | grep nginx
192.168.0.136:5000/nginx   latest    5a3221f0137b   6 days ago     125.9 MB
docker.io/nginx            latest    5a3221f0137b   6 days ago     125.9 MB
192.168.0.136:5000/nginx   1.15      53f3fd8007f7   3 months ago   109.3 MB
docker.io/nginx            1.15      53f3fd8007f7   3 months ago   109.3 MB
[root@k8s-master ~]# ls /opt/myregistry/docker/registry/v2/repositories/nginx/_manifests/tags/
1.15  latest
[root@k8s-master ~]#
3.3 Delete all pods and RCs
[root@k8s-master ~]# kubectl get pods
No resources found.
[root@k8s-master ~]# kubectl get rc
No resources found.
[root@k8s-master ~]#
3.4 Create and start an RC for the old version
[root@k8s-master k8s]# cat myweb-rcv1.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 3
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.0.136:5000/nginx:1.15
        ports:
        - containerPort: 80
[root@k8s-master k8s]#
[root@k8s-master k8s]# kubectl create -f myweb-rcv1.yaml
replicationcontroller "myweb" created
[root@k8s-master k8s]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
myweb-94vm9   1/1       Running   0          4m
myweb-tgcvn   1/1       Running   0          4m
myweb-zjn61   1/1       Running   0          4m
[root@k8s-master k8s]#
3.5 Create the RC for the new version
[root@k8s-master k8s]# cat myweb-rcv2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb2
spec:
  replicas: 3
  selector:
    app: myweb2
  template:
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
      - name: myweb2
        image: 192.168.0.136:5000/nginx:latest
        ports:
        - containerPort: 80
3.6 Run the upgrade
[root@k8s-master k8s]# kubectl rolling-update myweb -f myweb-rcv2.yaml --update-period=20s
Created myweb2
Scaling up myweb2 from 0 to 3, scaling down myweb from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling myweb2 up to 1
Scaling myweb down to 2
Scaling myweb2 up to 2
Scaling myweb down to 1
Scaling myweb2 up to 3
Scaling myweb down to 0
Update succeeded. Deleting myweb
replicationcontroller "myweb" rolling updated to "myweb2"
[root@k8s-master k8s]#
3.7 Check the service and image after the upgrade
[root@k8s-master k8s]# kubectl get pods -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP            NODE
myweb2-27ss0   1/1       Running   1          14m       172.16.47.3   k8s-node-2
myweb2-5xq3j   1/1       Running   1          15m       172.16.47.2   k8s-node-2
myweb2-s23qz   1/1       Running   1          15m       172.16.73.2   k8s-node-1
[root@k8s-master k8s]# curl -I 172.16.47.2
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Thu, 22 Aug 2019 05:44:46 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
Connection: keep-alive
ETag: "5d5279b8-264"
Accept-Ranges: bytes

[root@k8s-master k8s]# kubectl describe pod myweb2-s23qz | grep -i image
Image:        192.168.0.136:5000/nginx:latest
Image ID:     docker-pullable://192.168.0.136:5000/nginx@sha256:099019968725f0fc12c4b69b289a347ae74cc56da0f0ef56e8eb8e0134fc7911
  16m   16m   1   {kubelet k8s-node-1}   spec.containers{myweb2}   Normal   Pulling   pulling image "192.168.0.136:5000/nginx:latest"
  16m   16m   1   {kubelet k8s-node-1}   spec.containers{myweb2}   Normal   Pulled    Successfully pulled image "192.168.0.136:5000/nginx:latest"
  2m    2m    1   {kubelet k8s-node-1}   spec.containers{myweb2}   Normal   Pulling   pulling image "192.168.0.136:5000/nginx:latest"
  2m    2m    1   {kubelet k8s-node-1}   spec.containers{myweb2}   Normal   Pulled    Successfully pulled image "192.168.0.136:5000/nginx:latest"
4 Rollback
[root@k8s-master k8s]# kubectl get pods -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP            NODE
myweb2-27ss0   1/1       Running   1          19m       172.16.47.3   k8s-node-2
myweb2-5xq3j   1/1       Running   1          19m       172.16.47.2   k8s-node-2
myweb2-s23qz   1/1       Running   1          19m       172.16.73.2   k8s-node-1
[root@k8s-master k8s]# kubectl rolling-update myweb2 -f myweb-rcv1.yaml --update-period=10s
Created myweb
Scaling up myweb from 0 to 3, scaling down myweb2 from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling myweb up to 1
Scaling myweb2 down to 2
Scaling myweb up to 2
Scaling myweb2 down to 1
Scaling myweb up to 3
Scaling myweb2 down to 0
Update succeeded. Deleting myweb2
replicationcontroller "myweb2" rolling updated to "myweb"
[root@k8s-master k8s]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     3         3         3         2m
[root@k8s-master k8s]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
myweb-b9231   1/1       Running   0          57s       172.16.47.2   k8s-node-2
myweb-csz1w   1/1       Running   0          45s       172.16.73.4   k8s-node-1
myweb-h3vtz   1/1       Running   0          2m        172.16.73.3   k8s-node-1
[root@k8s-master k8s]#
4.1 Check the service and image after the rollback
[root@k8s-master ~]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     3         3         3         1h
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
myweb-3gkq7   1/1       Running   0          1m        172.16.47.2   k8s-node-2
myweb-csz1w   1/1       Running   2          1h        172.16.73.2   k8s-node-1
myweb-h3vtz   1/1       Running   2          1h        172.16.73.3   k8s-node-1
[root@k8s-master ~]#
[root@k8s-master ~]# curl -I 172.16.73.2
HTTP/1.1 200 OK
Server: nginx/1.15.12
Date: Thu, 22 Aug 2019 07:28:10 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 16 Apr 2019 13:08:19 GMT
Connection: keep-alive
ETag: "5cb5d3c3-264"
Accept-Ranges: bytes

[root@k8s-master ~]# kubectl describe pod myweb-csz1w | grep -i image:
Image:        192.168.0.136:5000/nginx:1.15