49、K8S Scheduling - Topology Spread - topologySpreadConstraints


1、Basics

1.1、Recap

As we know, pod affinity and anti-affinity definitions already have a topologyKey attribute. By default, though,
affinity scheduling only considers a single topology domain: either all of the pods end up there, or none of them do.
This over-concentrates the application and, paradoxically, wastes physical resources.
What we would like instead is for scheduling to recognize multiple candidate topology domains automatically and spread the pods evenly across them, so resources are used effectively. That is exactly what topology spread scheduling does. Note: this feature became stable in Kubernetes
v1.19; in versions before v1.18, using Pod topology spread required enabling the EvenPodsSpread feature gate on both the API server and the scheduler.

1.2、Field reference

kubectl explain pod.spec.topologySpreadConstraints

labelSelector      # selects the set of matching pods whose distribution is measured
maxSkew            # the degree to which pods may be unevenly distributed; the maximum allowed difference in matching-pod counts between topology domains
topologyKey        # nodes that carry a label with this key (and the same value) belong to the same topology domain
whenUnsatisfiable  # how to handle a pod that cannot satisfy the spread constraint; the default is DoNotSchedule

maxSkew defaults to 1, i.e. the matching-pod counts of any two topology domains may differ by at most one (roughly a 1:1 spread); a larger value allows a correspondingly more uneven distribution.
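
Putting the four fields together, a minimal constraint stanza looks like the sketch below; the app=web label and the zone node-label are illustrative assumptions, not taken from this article's manifests:

# Sketch: spread pods labeled app=web across zones, allowing at most
# a 1-pod difference between any two zones.
topologySpreadConstraints:
- maxSkew: 1                        # any two zones may differ by at most 1 matching pod
  topologyKey: zone                 # nodes sharing a "zone" label value form one domain
  whenUnsatisfiable: DoNotSchedule  # keep the pod Pending rather than violate the limit
  labelSelector:
    matchLabels:
      app: web                      # only pods matching this selector are counted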

2、Topology spread - topologySpreadConstraints - practice

2.1、Label the nodes to plan the topology

kubectl label nodes master1 node=master1 zone=zoneA
kubectl label nodes node1 node=node1 zone=zoneA
kubectl label nodes node2 node=node2 zone=zoneB

Topology layout:
  Zone A: master1, node1
  Zone B: node2
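
To confirm the labels took effect, one way is to list the nodes with both labels rendered as columns, using kubectl's standard -L (--label-columns) flag:

kubectl get nodes -L zone,node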

2.2、Requirement

Spread the pods evenly across multiple nodes.

2.3、Define the resource manifest and apply it

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-affinity-preferred
spec:
  replicas: 17
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      containers:
      - name: pod-test
        image: 192.168.10.33:80/k8s/pod_test:v0.1
        imagePullPolicy: IfNotPresent
      topologySpreadConstraints:
      - maxSkew: 7                        # zones may differ by at most 7 matching pods
        topologyKey: zone                 # the zone label assigned in 2.1 defines the domains
        whenUnsatisfiable: DoNotSchedule  # leave pods Pending rather than exceed the skew
        labelSelector:
          matchLabels:
            foo: bar                      # count the pods created by this Deployment
EOF
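
After applying, you can wait for all 17 replicas to come up before inspecting the spread, using the standard rollout command:

kubectl rollout status deployment/pod-affinity-preferred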

2.4、Verification

2.4.1、Check the running state

]# kubectl  get pods -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
pod-affinity-preferred-5ffdc5975b-2qfhx   1/1     Running   0          2m25s   10.244.3.71    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-4w9vp   1/1     Running   0          2m25s   10.244.3.72    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-6b7x4   1/1     Running   0          2m25s   10.244.3.70    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-6bsz8   1/1     Running   0          2m25s   10.244.3.73    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-8rb7h   1/1     Running   0          2m25s   10.244.4.103   node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-9nxmh   1/1     Running   0          2m25s   10.244.4.104   node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-9x5mr   1/1     Running   0          2m25s   10.244.4.99    node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-9xkp6   1/1     Running   0          2m25s   10.244.4.101   node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-hkf9q   1/1     Running   0          2m25s   10.244.4.102   node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-nxnc2   1/1     Running   0          2m25s   10.244.4.100   node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-qs4qc   1/1     Running   0          2m25s   10.244.3.66    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-s6pfw   1/1     Running   0          2m25s   10.244.3.68    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-sn2ks   1/1     Running   0          2m25s   10.244.4.96    node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-ss29v   1/1     Running   0          2m25s   10.244.3.67    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-t5fkn   1/1     Running   0          2m25s   10.244.4.97    node2   <none>           <none>
pod-affinity-preferred-5ffdc5975b-vjd6x   1/1     Running   0          2m25s   10.244.3.69    node1   <none>           <none>
pod-affinity-preferred-5ffdc5975b-z9rsf   1/1     Running   0          2m25s   10.244.4.98    node2   <none>           <none>

2.4.2、Why nothing was scheduled to master1

Reason: the node carries a NoSchedule taint, so the scheduler will not place pods there:

master2 ~]# kubectl get nodes master1 -o jsonpath='{.spec.taints}'
[{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"}]

2.4.3、Distribution

The number of pods scheduled to each node is fairly even:
master2 ~]# kubectl  get pods -o wide | grep node1| wc -l
8

master2 ~]# kubectl  get pods -o wide | grep node2| wc -l
9
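
The per-node counts can also be produced in one pass instead of one grep per node; a small shell sketch, relying on NODE being the seventh column of the wide output:

master2 ~]# kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c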

2.5、Set maxSkew=3 and replicas=7 and observe the distribution

2.5.1、Modify the manifest and apply it

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-affinity-preferred
spec:
  replicas: 7
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      containers:
      - name: pod-test
        image: 192.168.10.33:80/k8s/pod_test:v0.1
        imagePullPolicy: IfNotPresent
      topologySpreadConstraints:
      - maxSkew: 3                        # tighter limit: zones may differ by at most 3 matching pods
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
EOF

2.5.2、Check the distribution

master2 ~]# kubectl  get pods -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
pod-affinity-preferred-7799cc85f6-2m2hw   1/1     Running   0          2m21s   10.244.4.107   node2   <none>           <none>
pod-affinity-preferred-7799cc85f6-8gc6w   1/1     Running   0          2m18s   10.244.3.76    node1   <none>           <none>
pod-affinity-preferred-7799cc85f6-8jbl7   1/1     Running   0          2m19s   10.244.3.74    node1   <none>           <none>
pod-affinity-preferred-7799cc85f6-fmw4j   1/1     Running   0          2m18s   10.244.3.77    node1   <none>           <none>
pod-affinity-preferred-7799cc85f6-g9bqq   1/1     Running   0          2m19s   10.244.3.75    node1   <none>           <none>
pod-affinity-preferred-7799cc85f6-h9xjn   1/1     Running   0          2m21s   10.244.4.105   node2   <none>           <none>
pod-affinity-preferred-7799cc85f6-jzt9f   1/1     Running   0          2m21s   10.244.4.106   node2   <none>           <none>

# node1: 4 pods, node2: 3 pods, a 4:3 split; the skew is 1, well within maxSkew=3.
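
With whenUnsatisfiable: DoNotSchedule, any pod that would push the skew past the limit stays Pending. If you would rather have it scheduled anyway, with the constraint treated as a soft preference, the other accepted value is ScheduleAnyway; a sketch of the changed stanza:

      topologySpreadConstraints:
      - maxSkew: 3
        topologyKey: zone
        whenUnsatisfiable: ScheduleAnyway   # soft constraint: favored, not enforced
        labelSelector:
          matchLabels:
            foo: bar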

 
