Directed scheduling means declaring nodeName or nodeSelector on a Pod in order to place the Pod on a desired node. Note that this kind of scheduling is mandatory: even if the target node does not exist, the Pod is still bound to it; the Pod simply never runs.
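This forced behavior can be sketched in a few lines of Python. This is a toy model only, not kube-scheduler code, and the node names are illustrative:

```python
# Toy model of nodeName binding; illustrative only, not kube-scheduler code.

def bind_pod(pod: dict, known_nodes: set) -> str:
    """Return the node name the pod ends up bound to."""
    target = pod.get("nodeName")
    if target is not None:
        # Forced placement: the scheduler is bypassed. The pod is bound even
        # if the node is unknown; it then just stays Pending and never runs.
        return target
    # Stand-in for the normal path: pick the first node alphabetically.
    return sorted(known_nodes)[0]

nodes = {"dev-k8s-master1", "dev-k8s-master2"}
print(bind_pod({"nodeName": "node3"}, nodes))  # node3 (bound despite not existing)
print(bind_pod({}, nodes))                     # dev-k8s-master1
```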

NodeName

NodeName forces a Pod onto the node with the specified name. This approach skips the scheduler's logic entirely and binds the Pod directly to the named node.
Let's try it out: create a file named pod-nodename.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: dev-k8s-master1 # pin the pod to this node
# Create the Pod
[root@dev-k8s-master1 ~]# kubectl apply -f pod-nodename.yaml
pod/pod-nodename created
# Check which node the pod was scheduled to
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodename -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 1/1 Running 0 3m21s 10.244.6.209 dev-k8s-master1 <none> <none>
# Now delete the pod and change nodeName to dev-k8s-master5 (a node that does not exist)
[root@dev-k8s-master1 ~]# kubectl delete -f pod-nodename.yaml
pod "pod-nodename" deleted
# Create the pod again
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodename.yaml
pod/pod-nodename created
# Check the result: the pod is bound to the nonexistent node and stays Pending
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodename -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 0/1 Pending 0 4s <none> dev-k8s-master5 <none> <none>

NodeSelector

NodeSelector schedules a Pod onto nodes that carry a specified set of labels. It is built on the Kubernetes label-selector mechanism: before the Pod is created, the scheduler uses its MatchNodeSelector policy to match labels and find the target node, then binds the Pod there. The match is a hard constraint.
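The matching rule is plain AND semantics over label key/value pairs; a minimal Python sketch (node names and labels below are illustrative):

```python
# Sketch of the nodeSelector rule: every key/value pair in the pod's
# nodeSelector must appear, exactly, in the node's labels (AND semantics).

def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    return all(node_labels.get(k) == v for k, v in node_selector.items())

nodes = {
    "dev-k8s-master1": {"nodeenv": "pro"},
    "dev-k8s-master2": {"nodeenv": "test"},
}
selector = {"nodeenv": "pro"}
eligible = [n for n, labels in nodes.items() if matches_node_selector(labels, selector)]
print(eligible)  # ['dev-k8s-master1']
```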
Let's try it out:
1. First, add labels to the nodes

[root@dev-k8s-master1 ~]# kubectl label nodes dev-k8s-master1 nodeenv=pro
node/dev-k8s-master1 labeled
[root@dev-k8s-master1 ~]# kubectl label nodes dev-k8s-master2 nodeenv=test
node/dev-k8s-master2 labeled

2. Create a file named pod-nodeselector.yaml and use it to create the Pod

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: pro # schedule onto a node carrying the nodeenv=pro label
# Create the Pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeselector.yaml
pod/pod-nodeselector created
# Check the created pod
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeselector -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 1/1 Running 0 11m 10.244.6.210 dev-k8s-master1 <none> <none>
# Delete the pod
[root@dev-k8s-master1 ~]# kubectl delete -f pod-nodeselector.yaml
pod "pod-nodeselector" deleted
# Change the selector to a label no node carries
[root@dev-k8s-master1 ~]# cat pod-nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: pro1
# Create the pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeselector.yaml
pod/pod-nodeselector created
# Check the created pod: it stays Pending
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeselector -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 0/1 Pending 0 78s <none> <none> <none> <none>
# Inspect the pod's details and events
[root@dev-k8s-master1 ~]# kubectl describe pods pod-nodeselector -n dev
Name:         pod-nodeselector
Namespace:    dev
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
Containers:
  nginx:
    Image:        nginx:1.17.1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6s48d (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-6s48d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6s48d
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  nodeenv=pro1
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  14s (x3 over 2m48s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

Affinity Scheduling

The previous section covered two forms of directed scheduling. They are very convenient, but they share a limitation: if no node satisfies the condition, the Pod will never run, even when usable nodes remain in the cluster. This restricts where they can be applied.
To address this, Kubernetes also provides affinity scheduling (Affinity). It extends nodeSelector so that, through configuration, the scheduler can prefer nodes that satisfy the rules but still fall back to nodes that do not, making scheduling far more flexible.
Affinity comes in three kinds:

  • nodeAffinity (node affinity): targets nodes; decides which nodes a Pod may be scheduled onto
  • podAffinity (pod affinity): targets pods; decides which existing pods a new Pod should share a topology domain with
  • podAntiAffinity (pod anti-affinity): targets pods; decides which existing pods a new Pod must not share a topology domain with

When to use affinity versus anti-affinity:
Affinity: if two applications interact frequently, affinity keeps them as close together as possible, reducing the performance cost of network communication.
Anti-affinity: when an application runs multiple replicas, anti-affinity spreads the instances across nodes, improving the service's availability.

pod.spec.affinity.nodeAffinity
  requiredDuringSchedulingIgnoredDuringExecution  # the node must satisfy all of the rules (hard constraint)
    nodeSelectorTerms     # list of node selector terms
      matchFields         # selector requirements by node field
      matchExpressions    # selector requirements by node label (recommended)
        key               # label key
        values            # label values
        operator          # operator: Exists, DoesNotExist, In, NotIn, Gt, Lt
  preferredDuringSchedulingIgnoredDuringExecution # prefer nodes that satisfy the rules (soft constraint)
    preference            # a node selector term, associated with a weight
      matchFields         # selector requirements by node field
      matchExpressions    # selector requirements by node label (recommended)
        key               # label key
        values            # label values
        operator          # operator: Exists, DoesNotExist, In, NotIn, Gt, Lt
    weight                # preference weight, in the range 1-100
How the operators are used:
- matchExpressions:
  - key: nodeenv           # match nodes that have a label with key nodeenv
    operator: Exists
  - key: nodeenv           # match nodes whose nodeenv label value is "xxx" or "yyy"
    operator: In
    values: ["xxx", "yyy"]
  - key: nodeenv           # match nodes whose nodeenv label value, read as an integer, is greater than "xxx"
    operator: Gt
    values: ["xxx"]
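How these operators behave can be sketched in Python. This is a simplified model of the matching rules, not the scheduler's actual code; Gt and Lt compare the label value as an integer:

```python
# Simplified model of node-affinity operators; not kube-scheduler code.

def match_expression(labels: dict, key: str, operator: str, values=None) -> bool:
    present = key in labels
    if operator == "Exists":
        return present
    if operator == "DoesNotExist":
        return not present
    if operator == "In":
        return present and labels[key] in values
    if operator == "NotIn":
        # a node without the label also satisfies NotIn
        return labels.get(key) not in values
    if operator == "Gt":
        return present and int(labels[key]) > int(values[0])
    if operator == "Lt":
        return present and int(labels[key]) < int(values[0])
    raise ValueError(f"unknown operator: {operator}")

labels = {"nodeenv": "pro", "cpu-count": "8"}
print(match_expression(labels, "nodeenv", "In", ["pro", "test"]))  # True
print(match_expression(labels, "cpu-count", "Gt", ["4"]))          # True
```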

Let's first demonstrate requiredDuringSchedulingIgnoredDuringExecution.
Create pod-nodeaffinity-required.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                 # affinity settings
    nodeAffinity:           # node affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
        nodeSelectorTerms:
        - matchExpressions: # match a nodeenv value in ["xxx", "yyy"] (no such node in this environment)
          - key: nodeenv
            operator: In
            values: ["xxx", "yyy"]
# Create the pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created
# Check pod status: scheduling fails
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeaffinity-required -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-required 0/1 Pending 0 23s <none> <none> <none> <none>
# Delete the pod
[root@dev-k8s-master1 ~]# kubectl delete -f pod-nodeaffinity-required.yaml
pod "pod-nodeaffinity-required" deleted
# Change the matched values
[root@dev-k8s-master1 ~]# cat pod-nodeaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                 # affinity settings
    nodeAffinity:           # node affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
        nodeSelectorTerms:
        - matchExpressions: # match a nodeenv value in ["pro", "yyy"] (this environment has one)
          - key: nodeenv
            operator: In
            values: ["pro", "yyy"]
# Check pod status after re-creating it: scheduling succeeds
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeaffinity-required -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-required 1/1 Running 0 6s 10.244.6.211 dev-k8s-master1 <none> <none>

Next, a demonstration of preferredDuringSchedulingIgnoredDuringExecution.
Create pod-nodeaffinity-preferred.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                 # affinity settings
    nodeAffinity:           # node affinity
      preferredDuringSchedulingIgnoredDuringExecution: # soft constraint
      - weight: 1
        preference:
          matchExpressions: # prefer a nodeenv value in ["xxx", "yyy"] (no such node in this environment)
          - key: nodeenv
            operator: In
            values: ["xxx", "yyy"]
# Create the pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-nodeaffinity-preferred.yaml
pod/pod-nodeaffinity-preferred created
# Check pod status: it runs even though no node matched the preference
[root@dev-k8s-master1 ~]# kubectl get pods pod-nodeaffinity-preferred -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-preferred 1/1 Running 0 16s 10.244.6.212 dev-k8s-master1 <none> <none>
Notes on NodeAffinity rules:
1. If nodeSelector and nodeAffinity are both defined, both conditions must be satisfied for the Pod to run on a given node.
2. If nodeAffinity specifies multiple nodeSelectorTerms, matching any one of them is enough.
3. If a single nodeSelectorTerms entry contains multiple matchExpressions, a node must satisfy all of them to match.
4. If a node's labels change while a Pod is running on it and no longer satisfy the Pod's node affinity, the change is ignored.
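Rules 2 and 3 above (OR across nodeSelectorTerms, AND within a term's matchExpressions) can be sketched directly. Expression matching is simplified to the In operator here, and all labels are illustrative:

```python
# Sketch of nodeSelectorTerms semantics: OR across terms, AND within a term.

def expr_matches(labels: dict, expr: dict) -> bool:
    return labels.get(expr["key"]) in expr["values"]  # In operator only

def node_matches(labels: dict, node_selector_terms: list) -> bool:
    # ANY term may match (OR) ...
    return any(
        # ... but ALL expressions inside one term must match (AND).
        all(expr_matches(labels, e) for e in term["matchExpressions"])
        for term in node_selector_terms
    )

terms = [
    {"matchExpressions": [{"key": "nodeenv", "values": ["pro"]},
                          {"key": "disk", "values": ["ssd"]}]},
    {"matchExpressions": [{"key": "nodeenv", "values": ["test"]}]},
]
print(node_matches({"nodeenv": "pro", "disk": "ssd"}, terms))  # True: first term fully satisfied
print(node_matches({"nodeenv": "pro"}, terms))                 # False: no term fully satisfied
```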

PodAffinity

PodAffinity takes a running Pod as the reference and schedules a new Pod into the same topology domain as that reference pod.
First, a look at PodAffinity's configurable fields:

pod.spec.affinity.podAffinity
  requiredDuringSchedulingIgnoredDuringExecution  # hard constraint
    namespaces          # namespace(s) of the reference pods
    topologyKey         # the scheduling scope
    labelSelector       # label selector
      matchExpressions  # selector requirements by label (recommended)
        key             # label key
        values          # label values
        operator        # operator: In, NotIn, Exists, DoesNotExist
      matchLabels       # shorthand for a set of equality matchExpressions
  preferredDuringSchedulingIgnoredDuringExecution # soft constraint
    podAffinityTerm     # the affinity term
      namespaces
      topologyKey
      labelSelector
        matchExpressions
          key           # label key
          values        # label values
          operator
        matchLabels
    weight              # preference weight, in the range 1-100
topologyKey sets the scope the rule applies within, for example:
If set to kubernetes.io/hostname, each node is its own scope.
If set to beta.kubernetes.io/os, nodes are grouped by operating system type.
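What topologyKey does can be sketched by grouping nodes on a label value. The "zone" label and node names below are illustrative:

```python
# Sketch of topology domains: nodes sharing the same value for the topology
# key's label belong to the same domain.
from collections import defaultdict

def topology_domains(nodes: dict, topology_key: str) -> dict:
    domains = defaultdict(list)
    for name, labels in nodes.items():
        domains[labels[topology_key]].append(name)
    return dict(domains)

nodes = {
    "node1": {"kubernetes.io/hostname": "node1", "zone": "az-1"},
    "node2": {"kubernetes.io/hostname": "node2", "zone": "az-1"},
    "node3": {"kubernetes.io/hostname": "node3", "zone": "az-2"},
}
# With kubernetes.io/hostname every node is its own domain:
print(topology_domains(nodes, "kubernetes.io/hostname"))
# With a broader label, several nodes collapse into one domain:
print(topology_domains(nodes, "zone"))  # {'az-1': ['node1', 'node2'], 'az-2': ['node3']}
```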

Next, a demonstration of requiredDuringSchedulingIgnoredDuringExecution.
1) First create a reference Pod, pod-podaffinity-target.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: dev
  labels:
    podenv: pro # set a label
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: dev-k8s-master1 # pin the reference pod explicitly to dev-k8s-master1
# Start the target pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podaffinity-target.yaml
pod/pod-podaffinity-target created
# Check pod status
[root@dev-k8s-master1 ~]# kubectl get pods pod-podaffinity-target -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaffinity-target 1/1 Running 0 14s 10.244.6.213 dev-k8s-master1 <none> <none>

2) Create pod-podaffinity-required.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                 # affinity settings
    podAffinity:            # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
      - labelSelector:
          matchExpressions: # match pods whose podenv value is in ["xxx", "yyy"]
          - key: podenv
            operator: In
            values: ["xxx", "yyy"]
        topologyKey: kubernetes.io/hostname

This configuration means: the new Pod must land on the same node as a pod labeled podenv=xxx or podenv=yyy. No such pod exists yet, so let's run it and see.

# Create the pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
# Check pod status: it is not running
[root@dev-k8s-master1 ~]# kubectl get pods pod-podaffinity-required -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaffinity-required 0/1 Pending 0 12s <none> <none> <none> <none>
# Delete the pod
[root@dev-k8s-master1 ~]# kubectl delete -f pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted
# Change the labels the pod matches
[root@dev-k8s-master1 ~]# cat pod-podaffinity-required.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                 # affinity settings
    podAffinity:            # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
      - labelSelector:
          matchExpressions: # match pods whose podenv value is in ["pro", "yyy"]
          - key: podenv
            operator: In
            values: ["pro", "yyy"]
        topologyKey: kubernetes.io/hostname
# Create the pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
# Check pod status: now running normally
[root@dev-k8s-master1 ~]# kubectl get pods pod-podaffinity-target -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaffinity-target 1/1 Running 0 16m 10.244.6.213 dev-k8s-master1 <none> <none>

PodAntiAffinity

PodAntiAffinity takes a running Pod as the reference and keeps a new Pod out of the reference Pod's topology domain.
Its configuration fields are identical to PodAffinity's, so rather than repeating them, let's go straight to a test case.
1) Reuse the target pod from the previous example

[root@dev-k8s-master1 ~]# kubectl get pods -n dev --show-labels -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod-podaffinity-required 1/1 Running 0 9m5s 10.244.6.214 dev-k8s-master1 <none> <none> <none>
pod-podaffinity-target 1/1 Running 0 25m 10.244.6.213 dev-k8s-master1 <none> <none> podenv=pro

2) Create pod-podantiaffinity-required.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:                 # affinity settings
    podAntiAffinity:        # pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard constraint
      - labelSelector:
          matchExpressions: # match pods whose podenv value is in ["pro"]
          - key: podenv
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname

This configuration means: the new Pod must not share a node with any pod labeled podenv=pro. Let's run it.

# Create the pod
[root@dev-k8s-master1 ~]# kubectl create -f pod-podantiaffinity-required.yaml
pod/pod-podantiaffinity-required created
# Check pod status: the new pod lands on dev-k8s-master2, away from the podenv=pro pod
[root@dev-k8s-master1 ~]# kubectl get pods -n dev --show-labels -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod-podaffinity-required 1/1 Running 0 18m 10.244.6.214 dev-k8s-master1 <none> <none> <none>
pod-podaffinity-target 1/1 Running 0 34m 10.244.6.213 dev-k8s-master1 <none> <none> podenv=pro
pod-podantiaffinity-required 1/1 Running 0 9s 10.244.1.45 dev-k8s-master2 <none> <none> <none>

Taints and Tolerations

Taints

All the scheduling approaches so far take the Pod's point of view: attributes added to the Pod decide whether it is scheduled onto a given node. We can also take the node's point of view and add taints to a node to decide whether Pods are allowed to be scheduled there.
Once a node is tainted, a repelling relationship exists between it and Pods: the node can refuse to accept new Pods and can even evict Pods that are already running on it.
A taint has the form key=value:effect, where key and value label the taint and effect describes what it does. Three effects are supported:

  • PreferNoSchedule: Kubernetes tries to avoid placing Pods on the tainted node, unless no other node is schedulable
  • NoSchedule: Kubernetes will not place new Pods on the tainted node, but Pods already running there are unaffected
  • NoExecute: Kubernetes will not place new Pods on the tainted node and also evicts the Pods already running there
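The three effects above can be summed up in a simplified model, here for a pod that does or does not tolerate the taint (tolerations are covered separately):

```python
# Simplified model of taint effects; not kube-scheduler code.

def admit(pod_tolerated: bool, taint_effect: str, already_running: bool) -> str:
    """Return what happens to a pod relative to a node carrying one taint."""
    if pod_tolerated:
        return "schedulable"
    if taint_effect == "PreferNoSchedule":
        return "avoided-if-possible"  # soft: the node is used only as a last resort
    if taint_effect == "NoSchedule":
        # hard for new pods, but pods already running are left alone
        return "running" if already_running else "rejected"
    if taint_effect == "NoExecute":
        # hard for new pods AND evicts running ones
        return "evicted" if already_running else "rejected"
    raise ValueError(taint_effect)

print(admit(False, "NoSchedule", already_running=True))  # running
print(admit(False, "NoExecute", already_running=True))   # evicted
```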

Example kubectl commands for setting and removing taints:

# Set a taint
kubectl taint nodes node1 key=value:effect
# Remove a specific taint
kubectl taint nodes node1 key:effect-
# Remove all taints with the given key
kubectl taint nodes node1 key-
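The key=value:effect format accepted by these commands can be parsed mechanically; a small sketch:

```python
# Parse a taint spec of the form key=value:effect (value may be empty,
# as in "key:effect").

def parse_taint(spec: str) -> dict:
    kv, _, effect = spec.rpartition(":")
    key, _, value = kv.partition("=")
    return {"key": key, "value": value, "effect": effect}

print(parse_taint("tag=heima:PreferNoSchedule"))
# {'key': 'tag', 'value': 'heima', 'effect': 'PreferNoSchedule'}
```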

Next, a demonstration of taints in action:

  1. Prepare node node1 (to make the effect easier to see, temporarily stop node node2)
  2. Give node1 the taint tag=heima:PreferNoSchedule, then create pod1
  3. Change node1's taint to tag=heima:NoSchedule, then create pod2
  4. Change node1's taint to tag=heima:NoExecute, then create pod3
# Taint node1 (PreferNoSchedule)
posted by 周小百