Kubernetes: StatefulSet Resource Upgrades (Rolling Updates and Staged Updates)
Since Kubernetes 1.7, the StatefulSet resource has supported an automated update mechanism. Its update strategy is defined by the spec.updateStrategy field and defaults to RollingUpdate, i.e., rolling updates.
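A minimal sketch of the relevant stanza (for apps/v1 the alternative strategy type is OnDelete, under which Pods pick up template changes only when manually deleted):

```yaml
spec:
  replicas: 3
  updateStrategy:
    type: RollingUpdate      # default; the alternative is OnDelete
    rollingUpdate:
      partition: 0           # default; covered in the staged-update section below
```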
1. Rolling Update Operations
A rolling update walks through the StatefulSet controller's Pods one at a time in reverse ordinal order, starting from the Pod with the largest index. The controller terminates a Pod, applies the update, waits for the Pod to become Ready, and only then starts updating the next one, i.e., the Pod whose ordinal is one less than the current one. For master/slave replicated cluster applications, this also guarantees that the Pod acting as the master node is updated last, which helps preserve compatibility during the update.
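To observe this ordering, you can watch the Pods from a second terminal while an update runs; a sketch, assuming a hypothetical StatefulSet labeled app=myapp with 3 replicas:

```
# Pods terminate and get recreated one at a time, highest ordinal first:
# myapp-2, then myapp-1, then myapp-0
kubectl get pods -l app=myapp -w
```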
RollingUpdate is the StatefulSet default, as can be seen in the output of "kubectl get statefulset NAME -o yaml":
```
[root@k8s-master01-test-2-26 ~]# kubectl get statefulset alertmanager-main -n kubesphere-monitoring-system
NAME                READY   AGE
alertmanager-main   1/1     66m
[root@k8s-master01-test-2-26 ~]# kubectl get statefulset alertmanager-main -n kubesphere-monitoring-system -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    prometheus-operator-input-hash: "10566271792385409484"
  creationTimestamp: "2022-06-29T05:31:54Z"
  generation: 1
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.23.0
  name: alertmanager-main
  namespace: kubesphere-monitoring-system
  ownerReferences:
  - apiVersion: monitoring.coreos.com/v1
    blockOwnerDeletion: true
    controller: true
    kind: Alertmanager
    name: main
    uid: c054f193-4d87-49b8-b8e7-9275c753460c
  resourceVersion: "6347"
  uid: dbd3d926-fefd-4b11-94a6-079b08b7a293
spec:
  podManagementPolicy: Parallel
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      alertmanager: main
      app.kubernetes.io/instance: main
      app.kubernetes.io/managed-by: prometheus-operator
      app.kubernetes.io/name: alertmanager
  serviceName: alertmanager-operated
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: alertmanager
      creationTimestamp: null
      labels:
        alertmanager: main
        app.kubernetes.io/component: alert-router
        app.kubernetes.io/instance: main
        app.kubernetes.io/managed-by: prometheus-operator
        app.kubernetes.io/name: alertmanager
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 0.23.0
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: alertmanager
                  operator: In
                  values:
                  - main
              namespaces:
              - kubesphere-monitoring-system
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - args:
        - --config.file=/etc/alertmanager/config/alertmanager.yaml
        - --storage.path=/alertmanager
        - --data.retention=120h
        - --cluster.listen-address=
        - --web.listen-address=:9093
        - --web.route-prefix=/
        - --cluster.peer=alertmanager-main-0.alertmanager-operated:9094
        - --cluster.reconnect-timeout=5m
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        image: registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 10
          httpGet:
            path: /-/healthy
            port: web
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        name: alertmanager
        ports:
        - containerPort: 9093
          name: web
          protocol: TCP
        - containerPort: 9094
          name: mesh-tcp
          protocol: TCP
        - containerPort: 9094
          name: mesh-udp
          protocol: UDP
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /-/ready
            port: web
            scheme: HTTP
          initialDelaySeconds: 3
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 20m
            memory: 30Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /etc/alertmanager/config
          name: config-volume
        - mountPath: /etc/alertmanager/certs
          name: tls-assets
          readOnly: true
        - mountPath: /alertmanager
          name: alertmanager-main-db
      - args:
        - --listen-address=:8080
        - --reload-url=http://localhost:9093/-/reload
        - --watched-dir=/etc/alertmanager/config
        command:
        - /bin/prometheus-config-reloader
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: SHARD
          value: "-1"
        image: registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
        imagePullPolicy: IfNotPresent
        name: config-reloader
        ports:
        - containerPort: 8080
          name: reloader-web
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /etc/alertmanager/config
          name: config-volume
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 1000
      serviceAccount: alertmanager-main
      serviceAccountName: alertmanager-main
      terminationGracePeriodSeconds: 120
      volumes:
      - name: config-volume
        secret:
          defaultMode: 420
          secretName: alertmanager-main-generated
      - name: tls-assets
        projected:
          defaultMode: 420
          sources:
          - secret:
              name: alertmanager-main-tls-assets-0
      - emptyDir: {}
        name: alertmanager-main-db
  updateStrategy:
    type: RollingUpdate
status:
  availableReplicas: 1
  collisionCount: 0
  currentReplicas: 1
  currentRevision: alertmanager-main-cd5bc8fdc
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updateRevision: alertmanager-main-cd5bc8fdc
  updatedReplicas: 1
[root@k8s-master01-test-2-26 ~]#
```
"kubectl rollout status" 命令跟踪 StatefulSet 资源滚动更新过程中的状态信息。
2. Staged Update Operations
When you need to define an update but do not want it to execute immediately, the update can be "staged" and then triggered manually later, once the conditions for it are met.
The StatefulSet partitioned-update mechanism implements exactly this. Before defining the update, set the .spec.updateStrategy.rollingUpdate.partition field to the number of Pod replicas, i.e., one greater than the largest Pod ordinal. For example, with 3 replicas (ordinals 0 through 2), a partition value of 3 leaves every Pod outside the directly updatable partition, so any update defined afterwards is not actually carried out until the partition number is lowered back into the range of existing Pod ordinals.
```
[root@k8s-master01-test-2-26 ~]# kubectl explain statefulset.spec.updateStrategy.rollingUpdate.partition
KIND:     StatefulSet
VERSION:  apps/v1

FIELD:    partition <integer>

DESCRIPTION:
     Partition indicates the ordinal at which the StatefulSet should be
     partitioned for updates. During a rolling update, all pods from ordinal
     Replicas-1 to Partition are updated. All pods from ordinal Partition-1 to
     0 remain untouched. This is helpful in being able to do a canary based
     deployment. The default value is 0.
[root@k8s-master01-test-2-26 ~]#
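As the description notes, partition also serves as a canary boundary: during a rolling update, only Pods with an ordinal greater than or equal to the partition value are updated. With 3 replicas (ordinals 0 to 2), for instance, setting partition to 2 would update only the highest-ordinal Pod; a sketch, again with a hypothetical myapp StatefulSet:

```
# Canary: only Pods with ordinal >= 2 (here just myapp-2) receive new template changes
kubectl patch statefulset myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
```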
The following tests the staged-update operation. First, set the StatefulSet's rolling-update partition value to 3:
```
kubectl patch statefulset myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
```
A staged update has no effect on any of the existing Pod resources: even if a Pod is deleted, it is still recreated from the old image version.
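To release a staged update, lower the partition back into the range of existing ordinals (typically to 0, to roll it out to every Pod). A sketch of the full stage-and-release cycle, still assuming the hypothetical myapp StatefulSet from above:

```
# With partition=3 (set earlier), this image change is recorded but not applied
kubectl set image statefulset/myapp myapp=ikubernetes/myapp:v2
# Deleting a Pod now still recreates it from the old revision
kubectl delete pod myapp-0
# Release the staged update: Pods roll to the new revision in reverse ordinal order
kubectl patch statefulset myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
```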