kubernetes(15): K8s Stateful Services with StatefulSet
https://blog.51cto.com/newfly/2140004
https://www.cnblogs.com/cocowool/p/kubernetes_statefulset.html
1 Stateful Services
RC, Deployment, and DaemonSet are all designed for stateless services: the IPs, names, and start/stop order of the Pods they manage are random. So what is a StatefulSet? As the name suggests, it is a set with state, used to manage stateful services such as MySQL or MongoDB clusters. (Whether components like MySQL or caches should be run on K8s at all is a separate question.)
2 StatefulSet
A StatefulSet is essentially a variant of Deployment (it became GA in v1.9). It exists to solve the problem of stateful services: the Pods it manages have fixed Pod names and a fixed start/stop order. In a StatefulSet, the Pod name serves as the network identity (hostname), and shared persistent storage is also required.
In a Deployment, the corresponding service is a regular Service; in a StatefulSet it is a headless Service. A headless Service differs from a normal Service in that it has no Cluster IP: resolving its name returns the Endpoint list of all Pods backing that headless Service.
In addition, on top of the headless Service, the StatefulSet creates a DNS domain name for each Pod replica it controls, in the following format:
$(podname).$(headless service name)
FQDN: $(podname).$(headless service name).$(namespace).svc.cluster.local
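For example, with the StatefulSet from section 3 below (StatefulSet web, headless Service nginx, namespace default), each replica gets a stable per-Pod name, and the Service name itself resolves to all Pod endpoints. The lookup commands are only a sketch (the throwaway busybox Pod name dns-test and its image are illustrative, not from the original post):

# Per-Pod FQDNs for the example in section 3:
#   web-0.nginx.default.svc.cluster.local
#   web-1.nginx.default.svc.cluster.local
#   web-2.nginx.default.svc.cluster.local

# Resolve one Pod's stable name from inside the cluster:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup web-0.nginx.default.svc.cluster.local
# Resolving the headless Service name returns the endpoints of all Pods:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nginx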
A StatefulSet is suitable for applications with the following characteristics:
- Stable network identity (hostname)
- Persistent storage
- Ordered deployment and scaling
- Ordered termination and deletion
- Ordered rolling updates
3 StatefulSet Example
3.1 Create the StorageClass
See kubernetes(14): deploying an NFS-based StorageClass on k8s for automatic PV provisioning.
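As a reminder, the StorageClass used below is assumed to look roughly like the sketch here (not the full manifest from that post); the name and provisioner match the nfs-client-provisioner output further down, while the archiveOnDelete parameter is an assumption:

# Assumed StorageClass from the previous post (sketch only)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs      # must match the provisioner name of the nfs-client-provisioner
parameters:
  archiveOnDelete: "false"       # assumed setting, adjust to your deployment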
First verify that dynamic provisioning works with a small test PVC:

# test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

[root@k8s-master storageclass]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          24s
[root@k8s-master storageclass]# kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim unchanged
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   13m
[root@k8s-master storageclass]#
3.2 Create the Nginx StatefulSet

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi

[root@k8s-master statefulset]# kubectl create -f nginx_statefulSet.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-master statefulset]#
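To watch the Pods being created one at a time (not in the original post, but a handy way to observe the ordered startup shown in the next section), watch the Pods matching the label from the manifest above:

# Watch the StatefulSet Pods come up in order (Ctrl-C to stop)
kubectl get pods -w -l app=nginx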
3.3 Check sts/pod/pv/pvc/svc

[root@k8s-master v1]# kubectl get sts
NAME   READY   AGE
web    3/3     13m
[root@k8s-master v1]#
[root@k8s-master statefulset]# kubectl get pvc
NAME         STATUS   VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb   1Mi        RWX            managed-nfs-storage   24m
www-web-0    Bound    default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8    1Gi        RWO            managed-nfs-storage   66s
www-web-1    Bound    default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad    1Gi        RWO            managed-nfs-storage   55s
www-web-2    Bound    default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc    1Gi        RWO            managed-nfs-storage   43s
[root@k8s-master statefulset]# kubectl get pv
NAME                                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            19m
default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8    1Gi        RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            68s
default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad    1Gi        RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            57s
default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc    1Gi        RWO            Delete           Bound    default/www-web-2    managed-nfs-storage            45s
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          21m
web-0                                     1/1     Running   0          2m45s
web-1                                     1/1     Running   0          2m34s
web-2                                     1/1     Running   0          2m22s
The Pods are created in order, from web-0 through web-2.
A StatefulSet creates Pods in order from 0 to N-1 and terminates them in the reverse order. When scaling a StatefulSet up, the preceding N Pods must already exist; when terminating a Pod, all Pods after it must already have been terminated.

[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          18m
web-0                                     1/1     Running             0          18s
web-1                                     0/1     ContainerCreating   0          7s
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          18m
web-0                                     1/1     Running   0          24s
web-1                                     1/1     Running   0          13s
web-2                                     0/1     Pending   0          1s
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          19m
web-0                                     1/1     Running             0          33s
web-1                                     1/1     Running             0          22s
web-2                                     0/1     ContainerCreating   0          10s
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          19m
web-0                                     1/1     Running   0          52s
web-1                                     1/1     Running   0          41s
web-2                                     1/1     Running   0          29s
[root@k8s-master statefulset]#
PVCs created automatically from volumeClaimTemplates:

[root@k8s-master statefulset]# kubectl get pvc
NAME         STATUS   VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb   1Mi        RWX            managed-nfs-storage   28m
www-web-0    Bound    default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8    1Gi        RWO            managed-nfs-storage   5m15s
www-web-1    Bound    default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad    1Gi        RWO            managed-nfs-storage   5m4s
www-web-2    Bound    default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc    1Gi        RWO            managed-nfs-storage   4m52s
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# cd /data/volumes/
[root@k8s-master volumes]# ls
v1  v2  v3
[root@k8s-master volumes]# tree
.
├── v1
│   ├── default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb
│   ├── default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8
│   ├── default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad
│   └── default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc
├── v2
└── v3

7 directories, 0 files
3.4 Test StatefulSet persistent storage and self-healing
Write a test file into web-0's volume on the NFS server:

[root@k8s-master v1]# ll
total 0
drwxrwxrwx 2 root root  6 Sep  5 14:35 default-test-claim-pvc-0ece5a66-dab2-4a13-be51-1a2acdbc45eb
drwxrwxrwx 2 root root 23 Sep  5 15:01 default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8
drwxrwxrwx 2 root root  6 Sep  5 14:53 default-www-web-1-pvc-39123d42-6712-4a66-b7fe-beae48708aad
drwxrwxrwx 2 root root  6 Sep  5 14:54 default-www-web-2-pvc-e395f225-ef4a-4645-a2a9-d139590346dc
[root@k8s-master v1]# echo "<h1>test Server</h1>" > default-www-web-0-pvc-fb47aba4-6d37-4f3e-b118-a35d78f4bca8/index.html
[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     1/1     Running   0          10m   10.254.1.104   k8s-node-1   <none>           <none>
web-1                                     1/1     Running   0          10m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running   0          10m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# curl 10.254.1.104
<h1>test Server</h1>
[root@k8s-master v1]#
Now delete the Pod: it self-heals with the same name, only the IP changes, and the data written above is still served.

[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     1/1     Running   0          10m   10.254.1.104   k8s-node-1   <none>           <none>
web-1                                     1/1     Running   0          10m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running   0          10m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# kubectl delete pod web-0
pod "web-0" deleted
[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS              RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     0/1     ContainerCreating   0          7s    <none>         k8s-node-2   <none>           <none>
web-1                                     1/1     Running             0          11m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running             0          10m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          29m   10.254.1.103   k8s-node-1   <none>           <none>
web-0                                     1/1     Running   0          10s   10.254.2.78    k8s-node-2   <none>           <none>
web-1                                     1/1     Running   0          11m   10.254.2.77    k8s-node-2   <none>           <none>
web-2                                     1/1     Running   0          11m   10.254.1.105   k8s-node-1   <none>           <none>
[root@k8s-master v1]# curl 10.254.2.78
<h1>test Server</h1>
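Because the Pod name is a stable network identity, the per-Pod DNS name from section 2 keeps resolving to the recreated Pod's new IP. A hedged check (not in the original post; assumes the default namespace and an illustrative busybox Pod named dns-check):

# web-0.nginx should now resolve to the new Pod IP (10.254.2.78 in the output above)
kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- nslookup web-0.nginx.default.svc.cluster.local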
3.5 Scale up in order, scale down in reverse order

[root@k8s-master v1]# kubectl get sts
NAME   READY   AGE
web    3/3     13m
[root@k8s-master v1]# kubectl scale statefulset web --replicas=6
statefulset.apps/web scaled
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          33m
web-0                                     1/1     Running             0          3m30s
web-1                                     1/1     Running             0          14m
web-2                                     1/1     Running             0          14m
web-3                                     0/1     ContainerCreating   0          6s
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running             0          33m
web-0                                     1/1     Running             0          4m
web-1                                     1/1     Running             0          15m
web-2                                     1/1     Running             0          14m
web-3                                     1/1     Running             0          36s
web-4                                     1/1     Running             0          23s
web-5                                     0/1     ContainerCreating   0          7s
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          34m
web-0                                     1/1     Running   0          4m23s
web-1                                     1/1     Running   0          15m
web-2                                     1/1     Running   0          15m
web-3                                     1/1     Running   0          59s
web-4                                     1/1     Running   0          46s
web-5                                     1/1     Running   0          30s
[root@k8s-master v1]#
[root@k8s-master v1]# kubectl scale statefulset web --replicas=2
statefulset.apps/web scaled
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running       0          34m
web-0                                     1/1     Running       0          4m35s
web-1                                     1/1     Running       0          15m
web-2                                     1/1     Running       0          15m
web-3                                     1/1     Running       0          71s
web-4                                     1/1     Running       0          58s
web-5                                     1/1     Terminating   0          42s
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS        RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running       0          34m
web-0                                     1/1     Running       0          4m49s
web-1                                     1/1     Running       0          15m
web-2                                     0/1     Terminating   0          15m
[root@k8s-master v1]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5558488b74-mkhbs   1/1     Running   0          34m
web-0                                     1/1     Running   0          4m57s
web-1                                     1/1     Running   0          16m
[root@k8s-master v1]#
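Note that scaling down does not delete the PVCs created from volumeClaimTemplates; they stay Bound, so a later scale-up reattaches the same data. You can verify this after the scale-down above (command not in the original post):

# All six PVCs (www-web-0 .. www-web-5) remain Bound even though only web-0 and web-1 are still running
kubectl get pvc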
3.6 Rolling update (in reverse order)
We are currently running the latest Nginx image; now let's switch to a 1.15 image.
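The change re-applied below amounts to pinning the container image in nginx_statefulSet.yaml; the exact tag 1.15 is assumed from the nginx/1.15.12 version seen in the output. Equivalently, kubectl set image would trigger the same rolling update:

# In nginx_statefulSet.yaml, change the container image:
#       containers:
#       - name: nginx
#         image: nginx:1.15     # was "image: nginx" (latest)

# Or patch it directly without editing the file:
kubectl set image statefulset/web nginx=nginx:1.15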
[root@k8s-master statefulset]# kubectl get pods -o wide |grep web
web-0   1/1   Running   0   40s   10.254.1.111   k8s-node-1   <none>   <none>
web-1   1/1   Running   0   56s   10.254.2.81    k8s-node-2   <none>   <none>
web-2   1/1   Running   0   76s   10.254.1.110   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# curl -s -I 10.254.1.111 |grep Server:
Server: nginx/1.17.3
[root@k8s-master statefulset]# curl 10.254.1.111
<h1>test Server</h1>
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# kubectl apply -f nginx_statefulSet.yaml
service/nginx unchanged
statefulset.apps/web configured
[root@k8s-master statefulset]# kubectl get pods -o wide |grep web
web-0   1/1   Running       0   2m22s   10.254.1.111   k8s-node-1   <none>   <none>
web-1   1/1   Running       0   2m38s   10.254.2.81    k8s-node-2   <none>   <none>
web-2   0/1   Terminating   0   2m58s   10.254.1.110   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# kubectl get pods -o wide |grep web
web-0   1/1   Running             0   2m34s   10.254.1.111   k8s-node-1   <none>   <none>
web-1   1/1   Running             0   2m50s   10.254.2.81    k8s-node-2   <none>   <none>
web-2   0/1   ContainerCreating   0   10s     <none>         k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# kubectl get pods -o wide |grep web
web-0   0/1   Terminating   0   2m53s   <none>         k8s-node-1   <none>   <none>
web-1   1/1   Running       0   13s     10.254.2.82    k8s-node-2   <none>   <none>
web-2   1/1   Running       0   29s     10.254.1.112   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# kubectl get pods -o wide |grep web
web-0   1/1   Running   0   23s   10.254.2.83    k8s-node-2   <none>   <none>
web-1   1/1   Running   0   37s   10.254.2.82    k8s-node-2   <none>   <none>
web-2   1/1   Running   0   53s   10.254.1.112   k8s-node-1   <none>   <none>
[root@k8s-master statefulset]# curl -s -I 10.254.2.83 |grep Server
Server: nginx/1.15.12
[root@k8s-master statefulset]# curl 10.254.2.83
<h1>test Server</h1>
[root@k8s-master statefulset]#