8. k8s Beginner Series: Node Affinity
Kubernetes has three concepts related to Pod scheduling: Node Affinity, Taints, and Tolerations. This article covers the first one, Node Affinity.
Node Affinity
Similar to nodeSelector, node affinity uses labels on nodes to control which nodes a Pod can be scheduled on.
nodeAffinity rules come in two kinds, a hard policy and a soft policy:
- Hard policy (requiredDuringSchedulingIgnoredDuringExecution)
The Pod must be scheduled onto a node that satisfies the conditions; if no such node exists, the scheduler keeps retrying. The IgnoredDuringExecution suffix means that once the Pod is running, it keeps running even if the node's labels change and no longer satisfy the Pod's conditions.
- Soft policy (preferredDuringSchedulingIgnoredDuringExecution)
The Pod is preferentially scheduled onto a node that satisfies the conditions; if no such node exists, the conditions are ignored and the Pod is scheduled normally.
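The two policies can also be combined in a single Pod spec: the required term filters out candidate nodes first, then the preferred terms rank the nodes that remain. A minimal sketch, where the label keys `disktype` and `zone` are made up for illustration:

```
# Illustrative only: requires a node labeled disktype=ssd, and
# among those prefers nodes labeled zone=zone-a.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: zone
          operator: In
          values:
          - zone-a
```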
When matching labels, nodeAffinity supports the following operators:
- In: the label's value is in the given list
- NotIn: the label's value is not in the given list
- Exists: the label exists on the node
- DoesNotExist: the label does not exist on the node
- Gt: the label's value is greater than the given value (values are parsed as integers)
- Lt: the label's value is less than the given value (values are parsed as integers)
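As a sketch of the less common operators (the label keys `gpu` and `cpu-cores` below are hypothetical): `Exists` and `DoesNotExist` take no `values` field, while `Gt` and `Lt` take a single value that is compared as an integer.

```
# Hypothetical labels, for illustration only.
requiredDuringSchedulingIgnoredDuringExecution:
  nodeSelectorTerms:
  - matchExpressions:
    - key: gpu            # only nodes that carry a "gpu" label at all
      operator: Exists    # Exists/DoesNotExist take no values field
    - key: cpu-cores
      operator: Gt        # value is parsed as an integer
      values:
      - "8"
```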
Create the Deployment resource config file:
```
[root@ylserver10686071 ~]# cat affinity001.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: affinity001
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: affinity001
  template:
    metadata:
      labels:
        k8s-app: affinity001
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: web
                operator: In
                values:
                - tomcat
                - nginx
      containers:
      - name: tomcat
        image: tomcat:8.0
```
Create the Deployment and check how the Pod is scheduled:
```
[root@ylserver10686071 ~]# kubectl apply -f affinity001.yml
deployment.apps/affinity001 created
[root@ylserver10686071 ~]# kubectl get pods -n prod -o wide|grep affinity001
affinity001-59c9d9c4f-9fj8t   0/1   Pending   0   2m48s   <none>   <none>   <none>   <none>
[root@ylserver10686071 ~]#
```
The Pod stays in the Pending state because no node satisfies the condition. Label one of the nodes:
```
[root@ylserver10686071 ~]# kubectl label node ylserver10686073 web=tomcat
node/ylserver10686073 labeled
[root@ylserver10686071 ~]# kubectl get pods -n prod -o wide|grep affinity001
affinity001-59c9d9c4f-9fj8t   1/1   Running   0   3m59s   10.233.72.50   ylserver10686073   <none>   <none>
[root@ylserver10686071 ~]#
```
The Pod is now created on the labeled node. Remove the label and check whether the Pod keeps running:
```
[root@ylserver10686071 ~]# kubectl label node ylserver10686073 web-
node/ylserver10686073 labeled
[root@ylserver10686071 ~]# kubectl get pods -n prod -o wide|grep affinity001
affinity001-59c9d9c4f-9fj8t   1/1   Running   0   5m2s   10.233.72.50   ylserver10686073   <none>   <none>
[root@ylserver10686071 ~]#
```
After the label is removed, the Pod keeps running normally, as IgnoredDuringExecution promises. Now label a different node and restart the Deployment to see whether the Pod is redeployed there:
```
[root@ylserver10686071 ~]# kubectl label node ylserver10686072 web=nginx
node/ylserver10686072 labeled
[root@ylserver10686071 ~]# kubectl rollout restart deployment affinity001 -n prod
deployment.apps/affinity001 restarted
[root@ylserver10686071 ~]# kubectl get pods -n prod -o wide|grep affinity001
affinity001-d9d8d9b8d-j87z9   1/1   Running   0   37s   10.233.67.42   ylserver10686072   <none>   <none>
[root@ylserver10686071 ~]#
```
The Pod has been redeployed onto the other node. Note that `kubectl rollout restart` can restart Deployment, DaemonSet, StatefulSet, and similar resources. These label-removal steps are not repeated for the soft policy below.
Write the soft-policy Deployment resource:
```
[root@ylserver10686071 ~]# cat affinity002.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: affinity002
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: affinity002
  template:
    metadata:
      labels:
        k8s-app: affinity002
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: web
                operator: In
                values:
                - apache
                - haproxy
      containers:
      - name: tomcat
        image: tomcat:8.0
```
Create the soft-policy Deployment and check where the Pod lands; at this point no node carries a matching label:
```
[root@ylserver10686071 ~]# kubectl apply -f affinity002.yml
deployment.apps/affinity002 created
[root@ylserver10686071 ~]# kubectl get pods -n prod -o wide|grep affinity002
affinity002-58cb4db98-72nj8   1/1   Running   0   32s   10.233.75.71   ylserver10686071   <none>   <none>
[root@ylserver10686071 ~]# kubectl get nodes --show-labels|grep web
[root@ylserver10686071 ~]#
```
Label one of the nodes, restart the Deployment, and check where the Pod is scheduled:
```
[root@ylserver10686071 ~]# kubectl label node ylserver10686073 web=apache
node/ylserver10686073 labeled
[root@ylserver10686071 ~]# kubectl rollout restart deployment affinity002 -n prod
deployment.apps/affinity002 restarted
[root@ylserver10686071 ~]# kubectl get pods -n prod -o wide|grep affinity002
affinity002-59b9b4cfcd-8ph9d   1/1   Running   0   66s   10.233.72.51   ylserver10686073   <none>   <none>
[root@ylserver10686071 ~]#
```
This time the Pod is scheduled onto the labeled node. Taken together, the two runs show how a soft policy behaves: when a matching node exists it is preferred, and when none exists the Pod is simply scheduled through the normal process.
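The `weight` field (an integer from 1 to 100) used in the soft policy above becomes meaningful once several preferences are listed: for each candidate node, the scheduler adds the weight of every preference that node matches to its score, so higher-weight terms pull harder. A sketch with two weighted preferences, reusing the `web` label from this article:

```
# Nodes labeled web=apache score 80, nodes labeled web=haproxy
# score 20; all else being equal, an apache node wins.
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 80
  preference:
    matchExpressions:
    - key: web
      operator: In
      values:
      - apache
- weight: 20
  preference:
    matchExpressions:
    - key: web
      operator: In
      values:
      - haproxy
```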