K8S - Node Affinity Scheduling

1. Node affinity: declared in the Pod's own resource definition, it expresses the Pod's preference for the kind of node it should run on; the opposite is anti-affinity.

1.1 Node selector

When defining a Pod resource, you can use spec.nodeSelector to select the node(s) the Pod is allowed to run on.
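A minimal sketch of what that looks like, assuming a node label disktype=ssd exists in the cluster (the Pod name and the label are placeholders; the image is the one used by the later examples):

apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo            # hypothetical name
spec:
  containers:
  - name: demoapp
    image: ikubernetes/demoapp:v1.0
  nodeSelector:
    disktype: ssd                    # the Pod is only scheduled onto nodes carrying this label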

1.2 Required node affinity

When defining the resource, node conditions can also be expressed as below. There can be multiple nodeSelectorTerms, and a node is considered eligible as long as it satisfies any one of them.

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-required
spec:
  containers:
  - name: demoapp
    image: ikubernetes/demoapp:v1.0
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:          # a list of terms; matching any one term is enough
        - matchExpressions:         # every expression inside a term must match
          - key: zone               # example node label key
            operator: In            # In, NotIn, Exists, DoesNotExist, Gt, Lt
            values: ["zoneA"]       # example value
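The zone and disktype labels used in these sketches are custom node labels, not built-ins; they would have to be attached to the nodes first, for example with something like:

kubectl label nodes kube-node03 zone=zoneB      # attach the custom zone label referenced by the examples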

1.3 Preferred node affinity

The scheduler tries to place the Pod on a node that satisfies the defined conditions, but when no node qualifies, the Pod can still be scheduled onto other nodes.

As below, nodes are preferred in this order: nodes satisfying both conditions > nodes satisfying only the higher-weight condition > nodes satisfying only the lower-weight condition > nodes satisfying neither.

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-preferred
spec:
  containers:
  - name: demoapp
    image: ikubernetes/demoapp:v1.0
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 60                  # weight of this condition, 1-100; the larger the number, the higher the weight
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values: ["zoneA"]
      - weight: 30
        preference:
          matchExpressions:
          - {key: disktype, operator: In, values: ["ssd"]}

2. Pod affinity scheduling

2.1 Required affinity: related Pods need to run in the same location (the same node, the same rack, the same zone, and so on); for example, an application's Pod and the Pod that provides its data service have an affinity relationship.

 

 

For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      ctrl: redis
  template:
    metadata:
      labels:
        app: redis
        ctrl: redis
    spec:
      containers:
      - name: redis
        image: redis:6.0-alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-affinity-required
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demoapp
      ctrl: pod-affinity-required
  template:
    metadata:
      labels:
        app: demoapp
        ctrl: pod-affinity-required
    spec:
      containers:
        - name: demoapp
          image: ikubernetes/demoapp:v1.0
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:          # selects the Pods to co-locate with (the redis Pods above)
              matchExpressions:
              - {key: app, operator: In, values: ["redis"]}
              - {key: ctrl, operator: In, values: ["redis"]}
            topologyKey: zone       # nodes sharing the same value of the zone label count as one location

Here kube-node03 and kube-node04 both carry the zone label with value zoneB. When redis is scheduled onto kube-node03, the Pods declaring affinity to it are scheduled onto kube-node03 and kube-node04, i.e. the nodes whose zone label is zoneB.
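To verify the placement described above, something along these lines can be used (the -L flag simply prints each node's zone label as an extra column):

kubectl get nodes -L zone        # show every node together with its zone label
kubectl get pods -o wide         # show which node each Pod was scheduled onto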

Note: while the Pods are running, if node labels change, or the nodeSelector of the Pod being targeted (redis in this example) changes so that it is rescheduled to another node or another topology value, the Pods that declared affinity to it are not rescheduled along with it; they keep running on their original nodes.

2.2 Preferred inter-pod affinity

Required affinity leaves a Pod in Pending when its conditions cannot be satisfied; preferred inter-pod affinity tries to honor the constraints as far as possible, but the Pod can still be created even when none of them can be met.

Priority: all constraints satisfied > only the higher-weight constraint satisfied > only the lower-weight constraint satisfied > no constraint satisfied.
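A minimal sketch of the preferred form, reusing the redis labels from the example above; the Deployment name and the two weights are made-up placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-affinity-preferred           # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demoapp
      ctrl: pod-affinity-preferred
  template:
    metadata:
      labels:
        app: demoapp
        ctrl: pod-affinity-preferred
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100                                   # higher-weight constraint: same zone as redis
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["redis"]}
              topologyKey: zone
          - weight: 50                                    # lower-weight constraint: ideally the very same node
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["redis"]}
              topologyKey: kubernetes.io/hostname

If no node can satisfy either term, the Pods are still scheduled; the terms only influence node scoring.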

 

2.3 Anti-affinity: related Pods must not run in the same location (not on the same node, the same rack, or the same zone). For security or distributed disaster-recovery reasons the Pods need to be kept apart; for example, the Pods of a highly available application generally should not run on the same node.

For example:

All nodes in the cluster carry only two distinct values for the zone label. If we use zone as the topologyKey for anti-affinity, then among all nodes sharing a given zone value only one Pod matching the matchExpressions can run; any extra Pods will stay Pending.

For example, set replicas to 3:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-antiaffinity-required
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demoapp
      ctrl: pod-antiaffinity-required
  template:
    metadata:
      labels:
        app: demoapp
        ctrl: pod-antiaffinity-required
    spec:
      containers:
        - name: demoapp
          image: ikubernetes/demoapp:v1.0
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["demoapp"]}
              - {key: ctrl, operator: In, values: ["pod-antiaffinity-required"]}
            topologyKey: zone       # at most one matching Pod can run per distinct zone value

One Pod stays in Pending the whole time.
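To confirm why, describing the Pending Pod should show a FailedScheduling event that cites the unsatisfied anti-affinity rule (the Pod name below is a placeholder):

kubectl get pods -l ctrl=pod-antiaffinity-required -o wide   # two Pods Running in different zones, one Pending
kubectl describe pod <pending-pod-name>                      # the Events section should mention the pod anti-affinity rule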

