Four Pod Update Strategies in Kubernetes

Kubernetes update strategies

Four deployment approaches

Rolling update: bring up v2 while gradually removing v1 (each time a v2 Pod reaches Running, one v1 Pod is terminated).
Pros: there is no window in which the service is unavailable.
Cons: old and new Pod versions coexist during the switch (mitigation: the v2 code must stay backward compatible with v1).
Note: RollingUpdate is the default strategy.

Recreate: terminate every v1 Pod first, then bring up v2.
Pros: old and new versions never coexist.
Cons: the service may be unavailable for a period of time; with this strategy that is unavoidable.
Effect (with 4 replicas): all four Pods go Terminating, then all four are Pending, then ContainerCreating, then Running.

Blue-green deployment: not a built-in strategy type. Two complete environments run side by side, and traffic is switched in a single step by changing the Service selector, which avoids old and new Pod versions serving traffic at the same time.

Canary release: also called gray release or A/B testing; the share of traffic each version receives is controlled through labels and selectors.

Original article: https://blog.csdn.net/qq_36963950/article/details/125128594

Blue-Green Deployment

What is blue-green deployment?

In blue-green deployment there are two complete systems: the one currently serving traffic, marked "green", and the one about to be released, marked "blue". Both are fully functional, running systems; they differ only in version and in whether they receive user traffic. To release a new version, a brand-new system running the new code is built alongside the live one. At that point two systems are running: the old one serving users is the green system, and the newly deployed one is the blue system.


What is the blue system for, if it serves no traffic? It is used for pre-release testing. Any problem found during testing can be fixed directly on the blue system without disturbing the system users are on (note: this is only fully guaranteed when the two systems share no coupling). After repeated testing, fixing, and verification, once the blue system meets the release bar, users are switched over to it:


For a while after the switch, both systems still exist, but users are now hitting the blue system. During this window the blue (new) system is observed; if anything goes wrong, traffic is switched straight back to the green system.

Once the serving blue system is confirmed healthy and the idle green system is no longer needed, the blue system officially becomes the serving system, i.e. the new green. The old green system can then be destroyed and its resources freed for the next blue deployment.

Pros and cons of blue-green deployment

Pros:

1. The update requires no downtime and carries relatively little risk.

2. Rollback is easy: just change the route or switch the DNS record, which is fast.

Cons:

1. High cost: two full environments (and two sets of machines) must be deployed, so the expense is significant.

2. If a base service in the new version has a problem, the moment traffic switches it can affect all users at once.

3. On non-isolated infrastructure (Docker containers, VMs), there is a risk of the blue or green environment being destroyed by mistake.

4. If the load balancer / reverse proxy / routing / DNS switch is mishandled, traffic may fail to move over.

Deploying a blue-green setup

Kubernetes has no built-in blue-green deployment. The usual approach today is to create a new Deployment and then update the application's Service to point at the Pods of the new Deployment.

Create the green environment

[root@k8s-master huidu]# kubectl create ns blue-green
namespace/blue-green created
[root@k8s-master huidu]# vim lv.yaml
[root@k8s-master huidu]# cat lv.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: 192.168.199.88/ae/janakiramm/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80


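Before creating the Service, it is worth confirming the green Pods are Running (a typical check; not shown in the original post):

kubectl apply -f lv.yaml
kubectl -n blue-green get pods -l app=myapp,version=v2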
Create the Service

[root@k8s-master huidu]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-lan-lv
  namespace: blue-green
  labels:
    app: myapp
    version: v2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30062
    name: http
  selector:
    app: myapp
    version: v2

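The screenshot here showed the Service being created and verified; a comparable sequence (the node IP is a placeholder):

kubectl apply -f svc.yaml
kubectl -n blue-green get svc myapp-lan-lv
# the selector currently targets version: v2 (green); any node's IP works for the NodePort
curl http://<node-ip>:30062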

Deploy the blue environment

[root@k8s-master huidu]# cat lan.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
  namespace: blue-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: 192.168.199.88/ae/janakiramm/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80



Update the Service

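The screenshots here showed the Service's selector being edited so that traffic flips from one environment to the other. The same one-step switch can be done with a patch (a sketch; which version goes live depends on which color you are promoting):

# point the Service at the v1 Pods instead of v2
kubectl -n blue-green patch svc myapp-lan-lv -p '{"spec":{"selector":{"app":"myapp","version":"v1"}}}'

Patching the selector back to version: v2 switches traffic back just as quickly, which is what makes rollback in blue-green so fast.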

Rolling updates with k8s: update flow and strategies

Rolling update overview

A rolling update is a highly automated release method with a smooth user experience, and it is the mainstream approach in mature engineering organizations. A rolling release usually consists of several batches, and the batch sizes are typically configurable (for example via a release template): batch one might be 1 machine, batch two 10%, batch three 50%, batch four 100%. Between batches there is an observation window, and the next batch only proceeds after manual verification or monitoring feedback confirms everything is healthy, so the overall rollout is fairly slow.

Implementing rolling updates in k8s

[root@k8s-master ~]# kubectl explain deployment.spec
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.

FIELDS:
   minReadySeconds	<integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused	<boolean>
     Indicates that the deployment is paused.
     
  `# paused: when set, the rollout is held and pods are not updated immediately`

   progressDeadlineSeconds	<integer>
     The maximum time in seconds for a deployment to make progress before it is
     considered to be failed. The deployment controller will continue to process
     failed deployments and a condition with a ProgressDeadlineExceeded reason
     will be surfaced in the deployment status. Note that progress will not be
     estimated during the time a deployment is paused. Defaults to 600s.

   replicas	<integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

   revisionHistoryLimit	<integer>
   `# number of old ReplicaSets (revisions) kept for rollback; defaults to 10`
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified. Defaults to
     10.

   selector	<Object> -required-
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment. It must match the pod
     template's labels.

   strategy	<Object>
   `# update strategy; configures how pods are replaced, including rolling update parameters`
     The deployment strategy to use to replace existing pods with new ones.

   template	<Object> -required-
     Template describes the pods that will be created.

[root@k8s-master ~]# kubectl explain deployment.spec.strategy
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: strategy <Object>

DESCRIPTION:
     The deployment strategy to use to replace existing pods with new ones.

     DeploymentStrategy describes how to replace existing pods with new ones.

FIELDS:
   rollingUpdate	<Object>
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

   type	<string>
     Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
     RollingUpdate.
`# two types are supported: "Recreate" and "RollingUpdate"`
`# Recreate: kill all existing pods first, then create the new ones`
`# RollingUpdate: gradual replacement; the fields below control how many pods may be added or removed at a time, i.e. the update granularity`

     Possible enum values:
     - `"Recreate"` Kill all existing pods before creating new ones.
     - `"RollingUpdate"` Replace the old ReplicaSets by new one using rolling
     update i.e gradually scale down the old ReplicaSets and scale up the new
     one.


[root@k8s-master ~]# kubectl explain deploy.spec.strategy.rollingUpdate
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: rollingUpdate <Object>

DESCRIPTION:
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

     Spec to control the desired behavior of rolling update.

FIELDS:
   maxSurge	<string>
     The maximum number of pods that can be scheduled above the desired number
     of pods. Value can be an absolute number (ex: 5) or a percentage of desired
     pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number
     is calculated from percentage by rounding up. Defaults to 25%. Example:
     when this is set to 30%, the new ReplicaSet can be scaled up immediately
     when the rolling update starts, such that the total number of old and new
     pods do not exceed 130% of desired pods. Once old pods have been killed,
     new ReplicaSet can be scaled up further, ensuring that total number of pods
     running at any time during the update is at most 130% of desired pods.
`# maxSurge: how many pods above the desired replica count are allowed during the update.`
`# It takes an absolute number or a percentage: with 5 replicas, 20% allows 1 extra pod and 40% allows 2.`

   maxUnavailable	<string>
     The maximum number of pods that can be unavailable during the update. Value
     can be an absolute number (ex: 5) or a percentage of desired pods (ex:
     10%). Absolute number is calculated from percentage by rounding down. This
     can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set
     to 30%, the old ReplicaSet can be scaled down to 70% of desired pods
     immediately when the rolling update starts. Once new pods are ready, old
     ReplicaSet can be scaled down further, followed by scaling up the new
     ReplicaSet, ensuring that the total number of pods available at all times
     during the update is at least 70% of desired pods.
`# maxUnavailable: how many pods may be unavailable. With 5 replicas and maxUnavailable 1, at least 4 stay available.`

Deployment example: rolling update

[root@k8s-master ~]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
  annotations: 
     image.version.history: "v1: janakiramm/myapp:v1, v2: janakiramm/myapp:v2, v3: janakiramm/myapp:v3"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

  template:
    metadata:
      labels:
         app: myapp
         version: v1
    spec:
      containers:
      - name: myapp
        image: nginx:1.9.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        startupProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        livenessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        readinessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /



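The screenshots here walked through applying the manifest and watching the update; a typical way to reproduce what they showed:

kubectl apply -f deploy-demo.yaml
kubectl get pods -l app=myapp -w          # pods are replaced a batch at a time
kubectl rollout status deployment/myapp-v1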

[root@k8s-master ~]# kubectl set image deployment/myapp-v1 myapp=192.168.199.88/ae/janakiramm/myapp:v2 --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/myapp-v1 image updated
[root@k8s-master ~]# kubectl rollout history deployment myapp-v1 
deployment.apps/myapp-v1 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         kubectl set image deployment/myapp-v1 myapp=192.168.199.88/ae/janakiramm/myapp:v2 --record=true



This imperative approach is not recommended; the better way is to modify the YAML manifest and then apply the update from it.

Note: a common production configuration:

strategy:
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0

Set maxSurge to a concrete number based on the pod count to control update speed; set maxUnavailable to 0 so that the number of available pods always matches the desired replica count during the update.

Customizing the rolling update strategy

maxSurge and maxUnavailable control the behavior of a rolling update.

Value ranges

As absolute numbers:

  1. maxUnavailable: [0, replicas]

  2. maxSurge: [0, replicas]

  Note: the two cannot both be 0.

As percentages:

  1. maxUnavailable: [0%, 100%], rounded down. For example, with 10 replicas, 5% = 0.5 pods, which counts as 0.

  2. maxSurge: [0%, 100%], rounded up. For example, with 10 replicas, 5% = 0.5 pods, which counts as 1.

Recommended configuration

  maxUnavailable == 0 and maxSurge == 1.

  This is the default configuration we provide to users in production. It follows the smoothest "one up, one down; up before down" principle: an old-version pod is destroyed only after one new-version pod is ready (as judged by its readiness probe). It suits updates that must stay smooth and keep the service stable; the drawback is that it is slow.

  Summary:

  maxUnavailable: the maximum number (or proportion) of unavailable pods relative to the desired replica count. The smaller the value, the more stable the service and the smoother the update.

  maxSurge: the maximum number (or proportion) of pods above the desired replica count. The larger the value, the faster the update.
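For example, with 10 replicas the following (illustrative) percentage settings allow one extra pod and zero unavailable pods at any moment during the update:

strategy:
  rollingUpdate:
    maxSurge: 10%        # ceil(10 x 0.10) = 1 extra pod allowed
    maxUnavailable: 5%   # floor(10 x 0.05) = 0 pods may be unavailable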

Recreate

Delete all the old pods first, then create the new ones.

[root@k8s-master ~]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
  annotations: 
     image.version.history: "v1: janakiramm/myapp:v1, v2: janakiramm/myapp:v2, v3: janakiramm/myapp:v3"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v1
  strategy:
    type: Recreate

  template:
    metadata:
      labels:
         app: myapp
         version: v1
    spec:
      containers:
      - name: myapp
        image: nginx:1.9.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        startupProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        livenessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        readinessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /



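The screenshots here showed the all-at-once replacement. Watching the pods during a Recreate update makes the behavior obvious (a typical observation; commands not from the original):

kubectl apply -f deploy-demo.yaml
kubectl get pods -l app=myapp -w
# all three pods go Terminating together, then the new set goes Pending -> ContainerCreating -> Running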

Canary Releases

Canary release overview

Origin of the term: in the 17th century, British miners discovered that canaries are extremely sensitive to firedamp. Even a trace of the gas in the air makes a canary stop singing, and once the concentration passes a certain level the canary dies long before humans notice anything. With the crude mining equipment of the time, miners took a canary down the shaft on every trip as a gas detector, so they could evacuate in time when conditions turned dangerous.

Canary release (also called gray release or gray update): a canary release usually starts by releasing to 1 machine or a small proportion, e.g. 2% of servers, mainly for traffic validation; this is the canary test (often called a gray test in China). Simple canary tests are verified manually; complex ones need fairly complete monitoring infrastructure, so that metric feedback on the canary's health can drive the decision to continue or roll back. If the canary test passes, the remaining v1 instances are all upgraded to v2. If it fails, the canary is rolled back and the release is aborted.


Pros: flexible, with customizable policies; the rollout can be gated by traffic share or by content (for example specific accounts or request parameters), and a problem never hits all users at once.

Cons: since the canary does not cover all users, problems that do occur can be harder to troubleshoot.
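Besides the pause-based rollout demonstrated below, the selector-based ratio mentioned at the top of this post is also common: run a full-size v1 Deployment and a small v2 Deployment whose Pods share the app label, and let a single Service select both so traffic splits roughly by replica count. A sketch of such a Service (the name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: myapp-canary
spec:
  type: NodePort
  ports:
  - port: 80
  # no version label in the selector, so it matches both v1 and v2 Pods;
  # with 9 v1 replicas and 1 v2 replica, roughly 10% of requests hit v2
  selector:
    app: myapp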

Deployment example

[root@k8s-master ~]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
  annotations: 
     image.version.history: "v1: janakiramm/myapp:v1, v2: janakiramm/myapp:v2, v3: janakiramm/myapp:v3"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

  template:
    metadata:
      labels:
         app: myapp
         version: v1
    spec:
      containers:
      - name: myapp
        image: nginx:1.9.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        startupProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        livenessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /
        readinessProbe:
           periodSeconds: 5
           initialDelaySeconds: 20
           timeoutSeconds: 10
           httpGet:
             scheme: HTTP
             port: 80
             path: /


Update the image, then immediately pause the rollout

kubectl set image deployment myapp-v1 myapp=192.168.199.88/ae/janakiramm/myapp:v1  && kubectl rollout pause deployment myapp-v1  

kubectl set image deployment myapp-v1 myapp=192.168.199.88/ae/janakiramm/myapp:v2 --record=true  && kubectl rollout pause deployment myapp-v1  

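While the rollout is paused, only the pod(s) created before the pause run the new image (with maxSurge: 1 and maxUnavailable: 0 that is typically a single canary pod). A quick way to see which image each pod runs before deciding to continue:

kubectl get pods -l app=myapp -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image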

Resume the rollout:

kubectl rollout resume deployment myapp-v1 
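Once resumed, the controller replaces the remaining old pods; rollout status blocks until the update finishes:

kubectl rollout status deployment/myapp-v1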


Roll back a revision

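The screenshots here showed the rollback. The standard commands are to inspect the revision history and undo to an earlier revision (the revision number below is illustrative):

kubectl rollout history deployment myapp-v1
kubectl rollout undo deployment myapp-v1 --to-revision=2
kubectl rollout status deployment/myapp-v1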
