K8s Basics - 05 Pod Controllers
1. Overview
1.1 Pod controllers
Controllers for stateless workloads:
- ReplicationController (rc)
- ReplicaSet (rs)
- Deployment (deploy)
- DaemonSet (ds)
- Job
- CronJob
Controller for stateful workloads:
- StatefulSet (sts)
Typical stateful applications: redis, mysql, zookeeper
1.2 Key spec fields: replica count, label selector, Pod template
1.3 Extending the resource model
TPR: ThirdPartyResource, added in 1.2, deprecated in 1.7
CRD: CustomResourceDefinition, the successor to TPR, available since 1.7 (TPR removed in 1.8)
helm: the Kubernetes package manager, for installing packaged applications
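As an illustration of the CRD mechanism mentioned above, a minimal CustomResourceDefinition is sketched below; the group, kind, and field names are hypothetical examples, not resources from these notes.

```yaml
# Hypothetical CRD sketch: registers a new resource type
# "crontabs.stable.example.com" with the API server.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer
```

Once applied, custom objects of kind CronTab can be created and listed with kubectl like any built-in resource.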
2. ReplicaSet
2.1 Creating a ReplicaSet
[root@k8s-master pod-k8s]# cat rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-mynginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rs-mynginx
      release: canary
  template:            # Pod template
    metadata:
      name: rs-mynginx-pod
      labels:
        app: rs-mynginx
        release: canary
        enviroment: qa
    spec:
      containers:
      - name: rs-mynginx-pod-container
        image: sun2010wg/my-nginx:v1
        ports:
        - name: http
          containerPort: 80
[root@k8s-master ~]# kubectl apply -f rs-demo.yaml
replicaset.apps/rs-mynginx created
[root@k8s-master ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
rs-mynginx 2 2 2 7s
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
rs-mynginx-knqsl 1/1 Running 0 105s app=rs-mynginx,enviroment=qa,release=canary
rs-mynginx-qswct 1/1 Running 0 105s app=rs-mynginx,enviroment=qa,release=canary
2.2 Blue-green deployment and rolling updates
In 1.19.3, changing the image on a ReplicaSet does not update running Pods; the old Pods must be deleted so the ReplicaSet recreates them with the new image.
[root@k8s-master ~]# kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
rs-mynginx 2 2 2 2m42s rs-mynginx-pod-container sun2010wg/my-nginx:v1 app=rs-mynginx,release=canary
[root@k8s-master ~]# kubectl edit rs rs-mynginx ### change the image tag from v1 to v2
replicaset.apps/rs-mynginx edited
[root@k8s-master ~]# kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
rs-mynginx 2 2 2 4m27s rs-mynginx-pod-container sun2010wg/my-nginx:v2 app=rs-mynginx,release=canary
3. Deployment
3.1 Creating a Deployment
deployment -> replicaset -> pod
[root@k8s-master pod-k8s]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-mynginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-mynginx-pod
      release: canary
  template:
    metadata:
      labels:
        app: deploy-mynginx-pod
        release: canary
    spec:
      containers:
      - name: deploy-mynginx-pod-container
        image: sun2010wg/my-nginx:v1
        ports:
        - name: http
          containerPort: 80
[root@k8s-master ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-mynginx created
[root@k8s-master ~]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
deploy-mynginx 2/2 2 2 9s
[root@k8s-master ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
deploy-mynginx-68dd874d86 2 2 2 26s
rs-mynginx 2 2 2 19m
3.2 Update strategy
[root@k8s-master ~]# kubectl describe deployment deploy-mynginx
Name:                   deploy-mynginx
Namespace:              default
CreationTimestamp:      Sat, 23 Oct 2021 19:55:38 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=deploy-mynginx-pod,release=canary
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=deploy-mynginx-pod
           release=canary
  Containers:
   deploy-mynginx-pod-container:
    Image:        sun2010wg/my-nginx:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deploy-mynginx-68dd874d86 (2/2 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m47s  deployment-controller  Scaled up replica set deploy-mynginx-68dd874d86 to 2
[root@k8s-master ~]# kubectl edit deploy deploy-mynginx ### change the image tag to trigger a rolling update
deployment.apps/deploy-mynginx edited
[root@k8s-master ~]# kubectl get pods -l app=deploy-mynginx-pod -w
NAME READY STATUS RESTARTS AGE
deploy-mynginx-68dd874d86-227ht 1/1 Running 0 10m
deploy-mynginx-68dd874d86-7dz2w 1/1 Running 0 64s
deploy-mynginx-68dd874d86-dcm5g 1/1 Running 0 64s
deploy-mynginx-68dd874d86-q9c5t 1/1 Running 0 10m
deploy-mynginx-6f8b85cd4c-nhmjq 0/1 Pending 0 0s ### new Pod nhmjq starting
deploy-mynginx-6f8b85cd4c-nhmjq 0/1 Pending 0 0s
deploy-mynginx-68dd874d86-7dz2w 1/1 Terminating 0 67s ### old Pod 7dz2w terminating
deploy-mynginx-6f8b85cd4c-nhmjq 0/1 ContainerCreating 0 0s
deploy-mynginx-6f8b85cd4c-7qcnf 0/1 Pending 0 0s
deploy-mynginx-6f8b85cd4c-7qcnf 0/1 Pending 0 0s
deploy-mynginx-6f8b85cd4c-7qcnf 0/1 ContainerCreating 0 0s
deploy-mynginx-68dd874d86-7dz2w 0/1 Terminating 0 67s ### 7dz2w terminated
deploy-mynginx-6f8b85cd4c-nhmjq 1/1 Running 0 1s ### nhmjq now running
deploy-mynginx-68dd874d86-q9c5t 1/1 Terminating 0 10m ### old Pod q9c5t terminating
deploy-mynginx-6f8b85cd4c-6fccc 0/1 Pending 0 0s ### new Pod 6fccc starting
deploy-mynginx-6f8b85cd4c-6fccc 0/1 Pending 0 0s
deploy-mynginx-6f8b85cd4c-6fccc 0/1 ContainerCreating 0 0s
deploy-mynginx-6f8b85cd4c-7qcnf 1/1 Running 0 1s
deploy-mynginx-68dd874d86-q9c5t 0/1 Terminating 0 10m
deploy-mynginx-68dd874d86-dcm5g 1/1 Terminating 0 68s
deploy-mynginx-68dd874d86-7dz2w 0/1 Terminating 0 68s
[root@k8s-master ~]# kubectl get pods -l app=deploy-mynginx-pod
NAME READY STATUS RESTARTS AGE
deploy-mynginx-6f8b85cd4c-5v7wr 1/1 Running 0 79s
deploy-mynginx-6f8b85cd4c-6fccc 1/1 Running 0 79s
deploy-mynginx-6f8b85cd4c-7qcnf 1/1 Running 0 80s
deploy-mynginx-6f8b85cd4c-nhmjq 1/1 Running 0 80s
3.3 Viewing rollout history
[root@k8s-master ~]# kubectl rollout history deployment deploy-mynginx
deployment.apps/deploy-mynginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
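CHANGE-CAUSE shows <none> because no change cause was recorded for these revisions. One way to populate it is the kubernetes.io/change-cause annotation; a sketch against this document's Deployment (the annotation value here is an illustrative example):

```yaml
# Sketch: the annotation value is recorded as CHANGE-CAUSE for the
# revision created by the next rollout of this Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-mynginx
  namespace: default
  annotations:
    kubernetes.io/change-cause: "update my-nginx image v1 -> v2"
spec:
  # ... remainder of the spec as in deploy-demo.yaml
```

After applying, `kubectl rollout history deployment deploy-mynginx` shows the annotation text in the CHANGE-CAUSE column for the new revision.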
3.4 Modifying the spec with kubectl patch
[root@k8s-master ~]# kubectl patch deployment deploy-mynginx -p '{"spec":{"replicas":5}}'
deployment.apps/deploy-mynginx patched
[root@k8s-master ~]# kubectl patch deployment deploy-mynginx -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.apps/deploy-mynginx patched
[root@k8s-master ~]# kubectl describe deployment deploy-mynginx
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 1 max surge
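The two patches above can also be expressed declaratively in deploy-demo.yaml; a sketch of the equivalent stanza:

```yaml
# Declarative equivalent of the patches: 5 replicas, at most 1 extra Pod
# during an update (maxSurge), and never fewer than 5 ready Pods
# (maxUnavailable: 0).
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```

With maxUnavailable: 0, the Deployment always creates a new Pod first and only then terminates an old one, matching the surge-then-terminate order in the watch output above.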
3.5 Controlling the rollout
[root@k8s-master ~]# kubectl set image deployment deploy-mynginx deploy-mynginx-pod-container=sun2010wg/my-nginx:v3 && kubectl rollout pause deployment deploy-mynginx
deployment.apps/deploy-mynginx image updated
deployment.apps/deploy-mynginx paused
Check the rollout status; it stays paused after one replica is updated:
[root@k8s-master ~]# kubectl rollout status deployment deploy-mynginx
Waiting for deployment "deploy-mynginx" rollout to finish: 1 out of 5 new replicas have been updated...
One new Pod with image v3 has been added:
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-mynginx-698f66c576-tjsbw 1/1 Running 0 2m9s # updated to v3; the rest are the old version
deploy-mynginx-6f8b85cd4c-5v7wr 1/1 Running 0 98m
deploy-mynginx-6f8b85cd4c-6fccc 1/1 Running 0 98m
deploy-mynginx-6f8b85cd4c-7qcnf 1/1 Running 0 98m
deploy-mynginx-6f8b85cd4c-nhmjq 1/1 Running 0 98m
deploy-mynginx-6f8b85cd4c-znlhc 1/1 Running 0 93m
Resume the rollout and watch it progress:
[root@k8s-master ~]# kubectl rollout resume deployment deploy-mynginx
deployment.apps/deploy-mynginx resumed
[root@k8s-master ~]# kubectl rollout status deployment deploy-mynginx
Waiting for deployment "deploy-mynginx" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "deploy-mynginx" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-mynginx" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "deploy-mynginx" rollout to finish: 1 old replicas are pending termination...
deployment "deploy-mynginx" successfully rolled out
All five Pods have been replaced with new ones:
[root@k8s-master ~]# kubectl get pods -l app=deploy-mynginx-pod
NAME READY STATUS RESTARTS AGE
deploy-mynginx-698f66c576-7ll24 1/1 Running 0 38s
deploy-mynginx-698f66c576-msmjb 1/1 Running 0 37s
deploy-mynginx-698f66c576-tjsbw 1/1 Running 0 7m4s
deploy-mynginx-698f66c576-vrbl5 1/1 Running 0 34s
deploy-mynginx-698f66c576-zl4rb 1/1 Running 0 35s
3.6 Rolling back
[root@k8s-master ~]# kubectl rollout undo deployment deploy-mynginx --to-revision=1
deployment.apps/deploy-mynginx rolled back
[root@k8s-master ~]# kubectl rollout history deployment deploy-mynginx
deployment.apps/deploy-mynginx
REVISION CHANGE-CAUSE
2 <none>
3 <none>
4 <none>
4. StatefulSet
CoreOS: the Operator pattern, for running complex stateful applications
StatefulSet:
Cattle vs. pets: stateless Pods are cattle (interchangeable); stateful Pods are pets (each one is unique and named)
PetSet -> StatefulSet (renamed in Kubernetes 1.5)
1. Stable, unique network identifiers
2. Stable, persistent storage
3. Ordered, graceful deployment and scaling
4. Ordered, graceful deletion and termination
5. Ordered rolling updates
Three components: a headless Service, the StatefulSet itself, and a volumeClaimTemplate
Pod DNS name: pod_name.service_name.ns_name.svc.cluster.local
4.1 Creating a StatefulSet
[root@k8s-master ~]# cat stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: stateful-svc
  labels:
    app: stateful-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: stateful-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stateful-demo
spec:
  serviceName: stateful-svc
  replicas: 2
  selector:
    matchLabels:
      app: stateful-pod
  template:
    metadata:
      labels:
        app: stateful-pod
    spec:
      containers:
      - name: statefule-pod-nginx
        image: sun2010wg/my-nginx:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: vol-data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: vol-data
    spec:
      accessModes: ["ReadWriteOnce"]
      # storageClassName: "gluster-dynamic"
      resources:
        requests:
          storage: 10Gi
[root@k8s-master ~]# kubectl apply -f stateful-demo.yaml
service/stateful-svc created
statefulset.apps/stateful-demo created
[root@k8s-master ~]# kubectl get sts
NAME READY AGE
stateful-demo 2/2 5h10m
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d
stateful-svc ClusterIP None <none> 80/TCP 21s
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv01 2Gi RWO,RWX Retain Available 15d
pv02 5Gi RWO Retain Available 15d
pv03 20Gi RWO,RWX Retain Available 15d
pv04 10Gi RWO,RWX Retain Bound default/mypvc 15d
pv05 10Gi RWO,RWX Retain Bound default/vol-data-stateful-demo-1 15d
pv06 5Gi RWO,RWX Retain Available 5h45m
pv08 10Gi RWO,RWX Retain Bound default/vol-data-stateful-demo-0 5h24m
[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv04 10Gi RWO,RWX 15d
vol-data-stateful-demo-0 Bound pv08 10Gi RWO,RWX 5h11m
vol-data-stateful-demo-1 Bound pv05 10Gi RWO,RWX 5h11m
[root@k8s-master ~]# kubectl get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP            NODE                    NOMINATED NODE   READINESS GATES
stateful-demo-0   1/1     Running   0          5h16m   10.244.1.56   k8s-node31.bearpx.com   <none>           <none>
stateful-demo-1   1/1     Running   0          5h16m   10.244.3.67   k8s-node32.bearpx.com   <none>           <none>
[root@k8s-master ~]# kubectl exec -it stateful-demo-0 -- /bin/sh
/ # nslookup stateful-demo-0
Server:    10.96.0.10
Address:   10.96.0.10:53
/ # nslookup stateful-demo-0.stateful-svc.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10:53
*** Can't find stateful-demo-0.stateful-svc.default.svc.cluster.local: No answer
Name:    stateful-demo-0.stateful-svc.default.svc.cluster.local
Address: 10.244.1.56
/ # nslookup stateful-demo-1.stateful-svc.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10:53
Name:    stateful-demo-1.stateful-svc.default.svc.cluster.local
Address: 10.244.3.67
*** Can't find stateful-demo-1.stateful-svc.default.svc.cluster.local: No answer
4.2 Scaling and updating a StatefulSet
[root@k8s-master ~]# kubectl scale sts stateful-demo --replicas=4
statefulset.apps/stateful-demo scaled
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
stateful-demo-0 1/1 Running 0 5h19m
stateful-demo-1 1/1 Running 0 5h19m
stateful-demo-2 1/1 Running 0 5s
stateful-demo-3 1/1 Running 0 3s
[root@k8s-master ~]# kubectl patch sts stateful-demo -p '{"spec":{"replicas":3}}'
statefulset.apps/stateful-demo patched
[root@k8s-master ~]# kubectl patch sts stateful-demo -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/stateful-demo patched
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
stateful-demo-0 1/1 Running 0 5h59m
stateful-demo-1 1/1 Running 0 5h59m
stateful-demo-2 1/1 Running 0 39m
stateful-demo-3 1/1 Running 0 28s
stateful-demo-4 0/1 Pending 0 26s
[root@k8s-master ~]# kubectl patch sts stateful-demo -p '{"spec":{"template":{"containers[0]":{"image":"sun2010wg/my-nginx:v2"}}}}' ### "containers[0]" is not a valid merge-patch key, so the patch changes nothing
statefulset.apps/stateful-demo patched (no change)
[root@k8s-master ~]# kubectl set image sts/stateful-demo statefule-pod-nginx=sun2010wg/my-nginx:v2
statefulset.apps/stateful-demo image updated
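The partition value patched earlier gates the rolling update: only Pods whose ordinal is greater than or equal to the partition receive the new template, which enables canary-style updates. A sketch of the declarative form:

```yaml
# StatefulSet canary update: with partition: 4, only Pods with ordinal >= 4
# (e.g. stateful-demo-4) are updated to the new image; lowering the
# partition toward 0 rolls the change out to the remaining Pods.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 4
```

This explains why the `kubectl set image` above does not immediately replace stateful-demo-0 through stateful-demo-3 while the partition stays at 4.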
5. DaemonSet
5.1 Creating a DaemonSet
[root@k8s-master ~]# cat ds-filebeat.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        ports:
        - name: http
          containerPort: 80
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@k8s-master ~]# kubectl apply -f ds-filebeat.yml
deployment.apps/redis unchanged
daemonset.apps/filebeat-ds created
[root@k8s-master ~]# kubectl expose deployment redis --port=6379
service/redis exposed
[root@k8s-node31 ~]# kubectl exec -it redis-788d97689d-g8cnb -- /bin/sh
/data # netstat -tul
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State
tcp   0      0      0.0.0.0:6379     0.0.0.0:*          LISTEN
tcp   0      0      :::6379          :::*               LISTEN
/data # nslookup redis.default.svc.cluster.local
Server:    10.96.0.10
Address:   10.96.0.10:53
Name:    redis.default.svc.cluster.local
Address: 10.99.58.88
/data # kill 1 /data # command terminated with exit code 137
[root@k8s-master ~]# kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
filebeat-ds   3         3         3       3            3           <none>          2d23h
[root@k8s-master ~]# kubectl get svc
NAME    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
redis   ClusterIP   10.99.58.88   <none>        6379/TCP   2d23h
[root@k8s-master ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
redis   1/1     1            1           2d23h
5.2 Rolling updates
[root@k8s-master ~]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
daemonset.apps/filebeat-ds image updated
[root@k8s-master ~]# kubectl get ds -w
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
filebeat-ds   3         3         3       3            3           <none>          2d23h
filebeat-ds   3         3         3       3            3           <none>          2d23h
filebeat-ds   3         3         3       0            3           <none>          2d23h
filebeat-ds   3         3         2       0            2           <none>          2d23h
filebeat-ds   3         3         2       1            2           <none>          2d23h
filebeat-ds   3         3         3       1            3           <none>          2d23h
filebeat-ds   3         3         2       1            2           <none>          2d23h
filebeat-ds   3         3         2       2            2           <none>          2d23h
filebeat-ds   3         3         3       2            3           <none>          2d23h
filebeat-ds   3         3         2       2            2           <none>          2d23h
filebeat-ds   3         3         2       3            2           <none>          2d23h
filebeat-ds   3         3         3       3            3           <none>          2d23h
[root@k8s-master ~]# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
filebeat-ds-2gxg7   1/1     Running   1          2d23h
filebeat-ds-87nkp   1/1     Running   1          2d23h
filebeat-ds-jjhbc   1/1     Running   1          2d23h
### after the update
[root@k8s-master ~]# kubectl get pods | grep filebeat
filebeat-ds-59kvk   1/1     Running   0          5m13s
filebeat-ds-9xkrk   1/1     Running   0          6m47s
filebeat-ds-bnm95   1/1     Running   0          4m44s
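The node-by-node replacement observed above is driven by the DaemonSet's updateStrategy, which defaults to RollingUpdate; a sketch of the stanza as it could be made explicit in ds-filebeat.yml:

```yaml
# Default DaemonSet update strategy made explicit: Pods are replaced
# node by node, with at most one node's Pod unavailable at a time.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```

The alternative type, OnDelete, updates a node's Pod only after the old Pod on that node is manually deleted.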