Directed scheduling means declaring `nodeName` or `nodeSelector` on a Pod to place it on a desired node. Note that this kind of scheduling is mandatory: even if the target node does not exist, the Pod will still be assigned to it; it will simply fail to run.
NodeName
NodeName forces a Pod onto the node with the specified name. This approach bypasses the Scheduler's logic entirely and binds the Pod directly to the named node.
Let's try it out. Create a file named pod-nodename.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: dev-k8s-master1   # schedule the Pod onto the node named dev-k8s-master1

# create the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodename.yaml
pod/pod-nodename created

# check the Pod: it was scheduled onto dev-k8s-master1
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
pod-nodename   1/1     Running   0          3m21s   10.244.6.209   dev-k8s-master1   <none>           <none>

# delete the Pod, change nodeName in the yaml to dev-k8s-master5 (a node that does not exist), and recreate it
[root@dev-k8s-master1 ~]# kubectl delete -f pod-nodename.yaml
pod "pod-nodename" deleted
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodename.yaml
pod/pod-nodename created

# the Pod is still assigned to dev-k8s-master5, but stays Pending because that node does not exist
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP       NODE              NOMINATED NODE   READINESS GATES
pod-nodename   0/1     Pending   0          4s    <none>   dev-k8s-master5   <none>           <none>
NodeSelector
NodeSelector schedules a Pod onto nodes that carry the specified labels. It is implemented via Kubernetes's label-selector mechanism: before the Pod is created, the scheduler uses the MatchNodeSelector predicate to match labels and find the target node, then schedules the Pod onto it. This matching rule is a hard constraint.
Let's try it out:
1. First, add labels to the nodes:
# label dev-k8s-master1 with nodeenv=pro
[root@dev-k8s-master1 ~]# kubectl label nodes dev-k8s-master1 nodeenv=pro
node/dev-k8s-master1 labeled
# label dev-k8s-master2 with a different value (test is used here; any value other than pro works)
[root@dev-k8s-master1 ~]# kubectl label nodes dev-k8s-master2 nodeenv=test
node/dev-k8s-master2 labeled
2. Create a pod-nodeselector.yaml file and use it to create a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: pro   # schedule onto nodes labeled nodeenv=pro

# create the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeselector.yaml
pod/pod-nodeselector created

# check the Pod: it landed on dev-k8s-master1, the node labeled nodeenv=pro
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeselector -n dev -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod-nodeselector   1/1     Running   0          11m   10.244.6.210   dev-k8s-master1   <none>           <none>
# next, delete the Pod and change nodeSelector to nodeenv: pro1 (no node carries this label)
[root@dev-k8s-master1 ~]# kubectl delete -f pod-nodeselector.yaml
pod "pod-nodeselector" deleted

# the modified yaml
[root@dev-k8s-master1 ~]# cat pod-nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: pro1   # no node carries this label

# recreate the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeselector.yaml
pod/pod-nodeselector created

# the Pod stays Pending because no node matches
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeselector -n dev -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod-nodeselector   0/1     Pending   0          78s   <none>   <none>   <none>           <none>
# kubectl describe shows why scheduling failed
[root@dev-k8s-master1 ~]# kubectl describe pod pod-nodeselector -n dev
Name:         pod-nodeselector
Namespace:    dev
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
Containers:
  nginx:
    Image:        nginx:1.17.1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6s48d (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-6s48d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6s48d
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  nodeenv=pro1
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  14s (x3 over 2m48s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
Affinity Scheduling
The previous section introduced two directed-scheduling methods. They are convenient, but they have a limitation: if no node satisfies the conditions, the Pod will not run, even when usable nodes remain in the cluster. This restricts their use cases.
To address this, Kubernetes also provides affinity scheduling (Affinity). It extends nodeSelector: through configuration, the scheduler prefers nodes that satisfy the rules, but if none exist it can still fall back to nodes that do not, making scheduling more flexible.
Affinity falls into three categories:
- nodeAffinity (node affinity): targets nodes; decides which nodes a Pod can be scheduled onto
- podAffinity (pod affinity): targets Pods; decides which existing Pods a new Pod can be placed with in the same topology domain
- podAntiAffinity (pod anti-affinity): targets Pods; decides which existing Pods a new Pod must not be placed with in the same topology domain
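To see where each of the three categories lives in a Pod manifest, here is a minimal skeleton; the label keys and values (zone, app, frontend) are placeholders for illustration, not labels from this cluster:

```yaml
# Skeleton of pod.spec.affinity showing all three categories side by side.
apiVersion: v1
kind: Pod
metadata:
  name: affinity-skeleton
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:                 # which nodes the Pod may land on
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values: ["zone-a"]
    podAffinity:                  # co-locate with matching Pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["frontend"]
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:              # avoid nodes already running matching Pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["frontend"]
        topologyKey: kubernetes.io/hostname
```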
When to use affinity versus anti-affinity:
Affinity: if two applications interact frequently, it pays to use affinity to place them as close together as possible, reducing the performance cost of network communication.
Anti-affinity: when an application is deployed with multiple replicas, use anti-affinity to spread the instances across nodes, improving the service's availability.
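The anti-affinity use case above can be sketched as a Deployment whose replicas repel one another; `app: web` is a placeholder label, and the manifest assumes the default kubernetes.io/hostname label is present on every node (it is in standard clusters):

```yaml
# Hypothetical Deployment: each replica carries app=web and refuses to share a
# node with another app=web Pod, so 3 replicas spread across 3 different nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname   # at most one replica per node
```

With the hard (required) form, a fourth replica would stay Pending once every node holds one; the preferred form trades that guarantee for best-effort spreading.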
pod.spec.affinity.nodeAffinity
  requiredDuringSchedulingIgnoredDuringExecution   # hard constraint: the node must satisfy all specified rules
    nodeSelectorTerms        # list of node selector terms
      matchFields            # selector requirements by node field
      matchExpressions       # selector requirements by node label (recommended)
        key                  # label key
        values               # label values
        operator             # operator: Exists, DoesNotExist, In, NotIn, Gt, Lt
  preferredDuringSchedulingIgnoredDuringExecution  # soft constraint (preference): schedule to matching nodes first
    preference               # a node selector term, associated with a weight
      matchFields            # selector requirements by node field
      matchExpressions       # selector requirements by node label (recommended)
        key                  # label key
        values               # label values
        operator             # operator: Exists, DoesNotExist, In, NotIn, Gt, Lt
    weight                   # preference weight, in the range 1-100
Notes on operators:
- matchExpressions:
  - key: nodeenv              # matches nodes that have a label with key nodeenv
    operator: Exists
  - key: nodeenv              # matches nodes whose nodeenv label value is "xxx" or "yyy"
    operator: In
    values: ["xxx", "yyy"]
  - key: nodeenv              # matches nodes whose nodeenv label value is greater than "xxx"
    operator: Gt              # Gt/Lt take a single integer-like string in the list
    values: ["xxx"]
Let's first demonstrate requiredDuringSchedulingIgnoredDuringExecution.
Create pod-nodeaffinity-required.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:   # match nodes whose nodeenv label is "xxx" or "yyy"
          - key: nodeenv
            operator: In
            values: ["xxx", "yyy"]

# create the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created

# check the Pod: no node carries nodeenv=xxx or nodeenv=yyy, so it stays Pending
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeaffinity-required -n dev -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod-nodeaffinity-required   0/1     Pending   0          23s   <none>   <none>   <none>           <none>

# delete the Pod
[root@dev-k8s-master1 ~]# kubectl delete -f pod-nodeaffinity-required.yaml
pod "pod-nodeaffinity-required" deleted
# change values to ["pro", "yyy"], i.e. include a label an existing node carries
[root@dev-k8s-master1 ~]# cat pod-nodeaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:   # dev-k8s-master1 carries nodeenv=pro, so it matches
          - key: nodeenv
            operator: In
            values: ["pro", "yyy"]

# recreate the Pod and check: it now runs on dev-k8s-master1
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeaffinity-required -n dev -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod-nodeaffinity-required   1/1     Running   0          6s    10.244.6.211   dev-k8s-master1   <none>           <none>
Next, demonstrate preferredDuringSchedulingIgnoredDuringExecution.
Create pod-nodeaffinity-preferred.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:   # prefer nodes labeled nodeenv=xxx or nodeenv=yyy (none exist in this cluster)
          - key: nodeenv
            operator: In
            values: ["xxx", "yyy"]

# create the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeaffinity-preferred.yaml
pod/pod-nodeaffinity-preferred created

# check the Pod: even though no node matches the preference, the Pod is still scheduled and runs
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeaffinity-preferred -n dev -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod-nodeaffinity-preferred   1/1     Running   0          16s   10.244.6.212   dev-k8s-master1   <none>           <none>
Notes on NodeAffinity rules:
1. If both nodeSelector and nodeAffinity are defined, both conditions must be satisfied for the Pod to run on a node.
2. If nodeAffinity specifies multiple nodeSelectorTerms, matching any one of them is sufficient.
3. If a nodeSelectorTerms entry contains multiple matchExpressions, a node must satisfy all of them to match.
4. If a node's labels change while a Pod is running on it, so that the Pod's node-affinity rules are no longer met, the system ignores the change.
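Rule 1 can be illustrated with a sketch that combines nodeSelector and nodeAffinity in one Pod; with the labels used in this chapter, both conditions point at the nodeenv=pro node, so both hold and the Pod can be scheduled (if either condition named a label no node carries, the Pod would stay Pending):

```yaml
# Sketch: nodeSelector AND nodeAffinity must both match (rule 1).
apiVersion: v1
kind: Pod
metadata:
  name: pod-both-constraints
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: pro                     # condition 1 (hard)
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:           # condition 2 (hard); terms are OR-ed (rule 2),
        - matchExpressions:          # expressions within one term are AND-ed (rule 3)
          - key: nodeenv
            operator: In
            values: ["pro"]
```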
PodAffinity
PodAffinity schedules a new Pod into the same topology domain as a specified running Pod, using that running Pod as the reference.
First, the configurable options of PodAffinity:
pod.spec.affinity.podAffinity
  requiredDuringSchedulingIgnoredDuringExecution   # hard constraint
    namespaces         # namespace(s) of the reference Pods
    topologyKey        # the scheduling scope (topology domain)
    labelSelector      # label selector
      matchExpressions # selector requirements by label (recommended)
        key            # label key
        values         # label values
        operator       # operator: In, NotIn, Exists, DoesNotExist
      matchLabels      # shorthand mapping equivalent to multiple matchExpressions
  preferredDuringSchedulingIgnoredDuringExecution  # soft constraint
    podAffinityTerm    # the affinity term
      namespaces
      topologyKey
      labelSelector
        matchExpressions
          key          # label key
          values       # label values
          operator
        matchLabels
    weight             # preference weight, in the range 1-100

topologyKey defines the scope used at scheduling time, for example:
  if set to kubernetes.io/hostname, the topology domain is an individual node
  if set to beta.kubernetes.io/os, nodes are grouped by operating system type
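As a sketch of how topologyKey changes the meaning of "same domain", the Pod below uses the same labelSelector as the examples in this section but a zone-level key; topology.kubernetes.io/zone is an assumption here: cloud providers usually set it on nodes, bare-metal clusters may not:

```yaml
# With kubernetes.io/hostname the new Pod would have to share a *node* with a
# podenv=pro Pod; with topology.kubernetes.io/zone it only has to share a *zone*.
apiVersion: v1
kind: Pod
metadata:
  name: pod-same-zone
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            podenv: pro
        topologyKey: topology.kubernetes.io/zone   # zone-level co-location
```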
Let's demonstrate requiredDuringSchedulingIgnoredDuringExecution.
1) First create a reference Pod, pod-podaffinity-target.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: dev
  labels:
    podenv: pro         # the label the new Pod will match against
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: dev-k8s-master1   # pin the reference Pod to dev-k8s-master1

# create the reference Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podaffinity-target.yaml
pod/pod-podaffinity-target created

# check: the reference Pod runs on dev-k8s-master1
[root@dev-k8s-master1 ~]# kubectl get pods pod-podaffinity-target -n dev -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod-podaffinity-target   1/1     Running   0          14s   10.244.6.213   dev-k8s-master1   <none>           <none>
2) Create pod-podaffinity-required.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:   # match Pods labeled podenv=xxx or podenv=yyy
          - key: podenv
            operator: In
            values: ["xxx", "yyy"]
        topologyKey: kubernetes.io/hostname
This configuration says: the new Pod must land on the same node as a Pod labeled podenv=xxx or podenv=yyy. Clearly no such Pod exists yet, so let's run it and see.
# create the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created

# check: the Pod stays Pending, since no running Pod carries podenv=xxx or podenv=yyy
[root@dev-k8s-master1 ~]# kubectl get pods pod-podaffinity-required -n dev -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod-podaffinity-required   0/1     Pending   0          12s   <none>   <none>   <none>           <none>

# delete the Pod, then change values to ["pro", "yyy"]
[root@dev-k8s-master1 ~]# kubectl delete -f pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted

[root@dev-k8s-master1 ~]# cat pod-podaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:   # the reference Pod carries podenv=pro, so it matches
          - key: podenv
            operator: In
            values: ["pro", "yyy"]
        topologyKey: kubernetes.io/hostname

# recreate the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created

# check: the reference Pod is still on dev-k8s-master1 (the listing in the next
# section confirms pod-podaffinity-required running there alongside it)
[root@dev-k8s-master1 ~]# kubectl get pods -n dev -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
pod-podaffinity-target     1/1     Running   0          16m   10.244.6.213   dev-k8s-master1   <none>           <none>
PodAntiAffinity
PodAntiAffinity schedules a new Pod away from the topology domain of a specified running Pod, using that running Pod as the reference.
Its configuration options are identical to PodAffinity's, so we skip the details and go straight to a test case.
1) Reuse the target Pod from the previous case:
[root@dev-k8s-master1 ~]# kubectl get pods -n dev -o wide --show-labels
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE              NOMINATED NODE   READINESS GATES   LABELS
pod-podaffinity-required   1/1     Running   0          9m5s   10.244.6.214   dev-k8s-master1   <none>           <none>            <none>
pod-podaffinity-target     1/1     Running   0          25m    10.244.6.213   dev-k8s-master1   <none>           <none>            podenv=pro
2) Create pod-podantiaffinity-required.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:   # repel Pods labeled podenv=pro
          - key: podenv
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname
This configuration says: the new Pod must not land on the same node as any Pod labeled podenv=pro. Let's run it and see.
# create the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podantiaffinity-required.yaml
pod/pod-podantiaffinity-required created
# check: the new Pod avoids dev-k8s-master1, where the podenv=pro Pod runs, and lands on dev-k8s-master2
[root@dev-k8s-master1 ~]# kubectl get pods -n dev -o wide --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES   LABELS
pod-podaffinity-required       1/1     Running   0          18m   10.244.6.214   dev-k8s-master1   <none>           <none>            <none>
pod-podaffinity-target         1/1     Running   0          34m   10.244.6.213   dev-k8s-master1   <none>           <none>            podenv=pro
pod-podantiaffinity-required   1/1     Running   0          9s    10.244.1.45    dev-k8s-master2   <none>           <none>            <none>
Taints and Tolerations
Taints
The scheduling approaches so far all take the Pod's point of view: attributes on the Pod decide whether it is scheduled to a given node. We can also take the node's point of view: by adding taints to a node, we decide whether Pods are allowed to be scheduled onto it.
Once a node is tainted, a repelling relationship exists between it and Pods: the node can refuse new Pods and can even evict Pods already running on it.
A taint has the format key=value:effect, where key and value label the taint and effect describes what it does. Three effects are supported:
- PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto the tainted node, unless no other node is available
- NoSchedule: Kubernetes will not schedule new Pods onto the tainted node, but Pods already on it are unaffected
- NoExecute: Kubernetes will not schedule new Pods onto the tainted node, and will also evict Pods already running on it
Examples of setting and removing taints with kubectl:
# set a taint
kubectl taint nodes node1 key=value:effect
# remove a specific taint
kubectl taint nodes node1 key:effect-
# remove all taints with the given key
kubectl taint nodes node1 key-
Next, demonstrate the effects of taints:
- Prepare node node1 (to make the effect clearer, temporarily stop node node2)
- Add a taint tag=heima:PreferNoSchedule to node1, then create pod1
- Change node1's taint to tag=heima:NoSchedule, then create pod2
- Change node1's taint to tag=heima:NoExecute, then create pod3
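To let a Pod land on node1 despite a taint such as tag=heima:NoSchedule, the Pod needs a matching toleration. A minimal sketch (the key, value, and effect follow the taints set in the steps above):

```yaml
# Sketch: a toleration matching the taint tag=heima:NoSchedule.
# With this toleration the Pod may be scheduled onto the tainted node1.
apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  tolerations:
  - key: "tag"            # key of the taint to tolerate
    operator: "Equal"     # Equal requires the value to match; Exists ignores value
    value: "heima"
    effect: "NoSchedule"  # must match the taint's effect
```

A toleration only permits scheduling onto the tainted node; it does not force the Pod there.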