Kubernetes: What Can Be Used to Schedule Pods?

Author: Xue Qinghua
Original work; reproduction is strictly prohibited!

Table of Contents

  1. What is Pod scheduling, and why is it needed?
  2. Scheduling Pods with nodeName
  3. Scheduling Pods with hostNetwork
  4. Scheduling Pods with hostPort
  5. Scheduling Pods with resources
  6. Scheduling Pods with nodeSelector
  7. Scheduling Pods with taints
  8. Scheduling Pods with cordon
  9. Scheduling Pods with drain
  10. Scheduling Pods with Affinity

1. What is Pod scheduling, and why is it needed?

Pod scheduling is the key mechanism by which Kubernetes implements resource management, high availability, load balancing, and failure recovery. With sensible scheduling policies, Kubernetes ensures that Pods run on suitable nodes while satisfying each application's resource requirements and placement constraints. The scheduler is what lets Kubernetes manage large fleets of containerized applications efficiently, and it is one of the core functions of cluster management.
In plain terms: scheduling binds a Pod to a suitable node.

2. Scheduling Pods with nodeName

nodeName pins a Pod to the named node; the default scheduler is bypassed entirely.

Case study:

  1. Write the manifest
[root@master231 scheduler]# cat  01-scheduler-nodeName.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-nodename
  labels:
    school: huazai007
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: v1
    spec:
      nodeName: worker233
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
[root@master231 scheduler]# 

  2. Verify
[root@master231 scheduler]# kubectl apply  -f 01-scheduler-nodeName.yaml 
deployment.apps/deploy-xiuxian-nodename created
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-nodename-966844f7d-cttm9   1/1     Running   0          39s   10.100.2.249   worker233   <none>           <none>
deploy-xiuxian-nodename-966844f7d-jzzvz   1/1     Running   0          39s   10.100.2.247   worker233   <none>           <none>
deploy-xiuxian-nodename-966844f7d-knqjs   1/1     Running   0          39s   10.100.2.246   worker233   <none>           <none>
deploy-xiuxian-nodename-966844f7d-q5td4   1/1     Running   0          39s   10.100.2.250   worker233   <none>           <none>
deploy-xiuxian-nodename-966844f7d-qcmzn   1/1     Running   0          39s   10.100.2.248   worker233   <none>           <none>
[root@master231 scheduler]#  

3. Scheduling Pods with hostNetwork

Setting hostNetwork: true lets the Pod use the host's network namespace.

Case study:

  1. Write the manifest
[root@master231 scheduler]# cat 03-scheduler-hostNetwork.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-hostnetwork
  labels:
    school: huazai007
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: v1
    spec:
      hostNetwork: true
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
        ports:
        - containerPort: 80
[root@master231 scheduler]# 
  
  2. Verify
[root@master231 scheduler]# kubectl apply -f 03-scheduler-hostNetwork.yaml
deployment.apps/deploy-xiuxian-hostnetwork created
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get deploy,rs,pods -o wide
NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
deployment.apps/deploy-xiuxian-hostnetwork   2/5     5            2           6s    c1           registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1   apps=xiuxian

NAME                                                   DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                      SELECTOR
replicaset.apps/deploy-xiuxian-hostnetwork-bf5c7c4f6   5         5         2       6s    c1           registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1   apps=xiuxian,pod-template-hash=bf5c7c4f6

NAME                                             READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
pod/deploy-xiuxian-hostnetwork-bf5c7c4f6-g4bs8   0/1     Pending   0          6s    <none>       <none>      <none>           <none>
pod/deploy-xiuxian-hostnetwork-bf5c7c4f6-gd9c5   1/1     Running   0          6s    10.0.0.233   worker233   <none>           <none>
pod/deploy-xiuxian-hostnetwork-bf5c7c4f6-jgztg   1/1     Running   0          6s    10.0.0.232   worker232   <none>           <none>
pod/deploy-xiuxian-hostnetwork-bf5c7c4f6-pkns2   0/1     Pending   0          6s    <none>       <none>      <none>           <none>
pod/deploy-xiuxian-hostnetwork-bf5c7c4f6-s5qd4   0/1     Pending   0          6s    <none>       <none>      <none>           <none>
[root@master231 scheduler]# 

With hostNetwork, each Pod binds port 80 directly on its node, so at most one such Pod fits per node. master231 is tainted NoSchedule, which leaves only worker232 and worker233: two Pods run and the other three stay Pending.

4. Scheduling Pods with hostPort

hostPort lets a container publish directly on a port of its host, so external clients can reach the service via the node's IP address.

Case study:

  1. Write the manifest
[root@master231 scheduler]# cat 02-scheduler-ports.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-ports
  labels:
    school: huazai007
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
        ports:
        - containerPort: 80
          hostPort: 81
[root@master231 scheduler]# 

  2. Verify
[root@master231 scheduler]# kubectl apply -f 02-scheduler-ports.yaml
deployment.apps/deploy-xiuxian-ports created
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get deploy,rs,pods -o wide
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                      SELECTOR
deployment.apps/deploy-xiuxian-ports   2/5     5            2           41s   c1           registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1   apps=xiuxian

NAME                                            DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                      SELECTOR
replicaset.apps/deploy-xiuxian-ports-f7fc745d   5         5         2       41s   c1           registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1   apps=xiuxian,pod-template-hash=f7fc745d

NAME                                      READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
pod/deploy-xiuxian-ports-f7fc745d-26jmf   1/1     Running   0          41s   10.100.2.6    worker233   <none>           <none>
pod/deploy-xiuxian-ports-f7fc745d-dfvq7   1/1     Running   0          41s   10.100.1.54   worker232   <none>           <none>
pod/deploy-xiuxian-ports-f7fc745d-xw2d5   0/1     Pending   0          41s   <none>        <none>      <none>           <none>
pod/deploy-xiuxian-ports-f7fc745d-z2mts   0/1     Pending   0          41s   <none>        <none>      <none>           <none>
pod/deploy-xiuxian-ports-f7fc745d-zww84   0/1     Pending   0          41s   <none>        <none>      <none>           <none>
[root@master231 scheduler]#  

Pods were scheduled onto only two nodes while the rest stay Pending: a given hostPort can be bound by just one Pod per node (and master231 is tainted NoSchedule), so scheduling is constrained indirectly.
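
To verify, hit the hostPort via the node IP (a sketch; 10.0.0.233 is assumed to be worker233's address, as seen in the hostNetwork example):
curl http://10.0.0.233:81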

5. Scheduling Pods with resources

resources configures resource constraints for a container, which drives scheduling indirectly:
nodes that cannot satisfy the requested resources are not considered.
Available fields: requests (desired resources) and limits (upper bound).

requests case study:

[root@master231 scheduler]# cat 04-scheduler-resources-requests.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-resources
  labels:
    school: huazai007
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      containers:
      - name: c1
        image: jasonyin2020/huazai007-linux-tools:v0.1
        # Allocate stdin for the container, similar to docker run -i 
        stdin: true
        # Configure resource constraints for the container
        resources:
          # The desired resources; the Pod is scheduled only onto a node that can satisfy them
          requests:
            cpu: 0.5
            # memory: 1G
            memory: 10G
[root@master231 scheduler]# 

limits case study:

  1. Write the manifest
[root@master231 scheduler]# cat 05-scheduler-resources-limits.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-resources-limits
  labels:
    school: huazai007
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: v1
    spec:
      containers:
      - name: c1
        image: jasonyin2020/huazai007-linux-tools:v0.1
        stdin: true
        # If no limits are configured, the container may consume all resources of the node it lands on
        resources:
          # Upper bound on resource usage
          limits:
            cpu: 0.5
            memory: 2G
          # The container's requested resources; if omitted, requests defaults to limits.
          # requests <= limits
          requests:
            # 1 core = 1000m
            cpu: 200m
            memory: 1G
[root@master231 scheduler]# 

Stress-test command:
  stress  -m 5 --vm-bytes 200000000 --vm-keep --verbose
You can exec into a container and run it to verify that the limit is enforced;
if usage exceeds the limit, the stress test is forcibly killed.
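
For example (a sketch; the Pod name is a placeholder, substitute one from kubectl get pods):
kubectl exec -it deploy-xiuxian-resources-limits-xxxxx -- stress -m 5 --vm-bytes 200000000 --vm-keep --verbose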

6. Scheduling Pods with nodeSelector

nodeSelector schedules Pods onto matching worker nodes based on node labels.
Usage: first label the nodes, then match those labels with nodeSelector.

Case study:

    1. Prepare the environment
[root@master231 scheduler]# kubectl label nodes worker232 school=huazai007
node/worker232 labeled
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl label nodes worker233 school=laonanhai
node/worker233 labeled
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes -l school --show-labels
NAME        STATUS   ROLES    AGE    VERSION    LABELS
worker232   Ready    <none>   2d3h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker232,kubernetes.io/os=linux,school=huazai007
worker233   Ready    <none>   2d3h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker233,kubernetes.io/os=linux,school=laonanhai
[root@master231 scheduler]# 


    2. Write the manifest
[root@master231 scheduler]# cat 06-scheduler-nodeSelector.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-nodeselector
  labels:
    school: huazai007
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        class: beibei
    spec:
      # Schedule the Pod based on node labels
      nodeSelector:
        school: laonanhai
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
[root@master231 scheduler]# 

    3. Verify
[root@master231 scheduler]# kubectl apply -f 06-scheduler-nodeSelector.yaml
deployment.apps/deploy-xiuxian-nodeselector created
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                           READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-nodeselector-7f78d9d78b-2gfcj   1/1     Running   0          4s    10.100.2.35   worker233   <none>           <none>
deploy-xiuxian-nodeselector-7f78d9d78b-fb88d   1/1     Running   0          4s    10.100.2.36   worker233   <none>           <none>
deploy-xiuxian-nodeselector-7f78d9d78b-rn96d   1/1     Running   0          4s    10.100.2.38   worker233   <none>           <none>
deploy-xiuxian-nodeselector-7f78d9d78b-s4jvf   1/1     Running   0          4s    10.100.2.34   worker233   <none>           <none>
deploy-xiuxian-nodeselector-7f78d9d78b-sfv7h   1/1     Running   0          4s    10.100.2.37   worker233   <none>           <none>
[root@master231 scheduler]# 

7. Scheduling Pods with taints

A taint is a mark applied to a node that influences which Pods may be scheduled onto it.
Format: "key[=value]:effect", e.g. class=linux:NoSchedule.
The effect has three types:
NoSchedule:
	No new Pods are accepted; Pods already scheduled on the node are not evicted.
PreferNoSchedule:
	Prefer other nodes; schedule here only when no other node qualifies.
NoExecute:
	No new Pods are accepted, and all Pods already on the node are evicted. Not recommended.

7.1 Basic taint usage

1. View taints

[root@master231 scheduler]# kubectl describe nodes master231  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@master231 scheduler]# 
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes worker232  | grep Taints
Taints:             <none>
[root@master231 scheduler]# 
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes worker233  | grep Taints
Taints:             <none>
[root@master231 scheduler]# 
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>

2. Apply a taint

kubectl taint node worker233 class=linux:NoSchedule

kubectl describe nodes  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             class=linux:NoSchedule
kubectl taint node --all address=shahe:PreferNoSchedule
node/master231 tainted
node/worker232 tainted
node/worker233 tainted

[root@master231 deployments]# kubectl describe nodes  | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
                    address=shahe:PreferNoSchedule
Unschedulable:      false
--
Taints:             address=shahe:PreferNoSchedule
Unschedulable:      false
Lease:
--
Taints:             class=linux:NoSchedule
                    address=shahe:PreferNoSchedule
Unschedulable:      false

3. Modify a taint

kubectl taint node worker233 class=Linux:NoExecute --overwrite 
#--overwrite: overwrite the existing taint

4. Remove a taint

kubectl taint node worker233 class=linux:NoSchedule-
#Append "-" to the taint to remove it
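
Likewise, the PreferNoSchedule taint applied to all nodes earlier can be cleared in one command:
kubectl taint node --all address=shahe:PreferNoSchedule-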

7.2 Verifying the effect of taints

  1. Write the manifest
[root@master231 scheduler]# cat 07-scheduler-Taints.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-taints
  labels:
    school: huazai007
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: beibei
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
        resources:
          requests:
            cpu: 0.2
            memory: 500Mi
[root@master231 scheduler]# 
  2. Verify
worker233 was tainted in the earlier steps, so no Pods will be scheduled onto it:
[root@master231 /oldboyedu/manifests/scheduler]# kubectl  describe  nodes  | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             class=linux:NoSchedule
Unschedulable:      false
Lease:
[root@master231 /oldboyedu/manifests/scheduler]# kubectl  apply  -f 07-scheduler-Taints.yaml 
deployment.apps/deploy-xiuxian-taints created
[root@master231 /oldboyedu/manifests/scheduler]# kubectl  get pod  -o wide
NAME                                     READY   STATUS    RESTARTS        AGE   IP            NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-taints-85569f9fd5-4skpl   1/1     Running   0               10s   10.100.1.54   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-6r87l   1/1     Running   0               10s   10.100.1.52   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-89jt2   1/1     Running   0               10s   10.100.1.53   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-99kcf   1/1     Running   0               10s   10.100.1.56   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-cr9p9   1/1     Running   0               10s   10.100.1.51   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-grvlp   1/1     Running   0               10s   10.100.1.57   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-j7dfx   1/1     Running   0               10s   10.100.1.55   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-k9tfg   0/1     Pending   0               10s   <none>        <none>      <none>           <none>
deploy-xiuxian-taints-85569f9fd5-prwm6   0/1     Pending   0               10s   <none>        <none>      <none>           <none>
deploy-xiuxian-taints-85569f9fd5-v2cv7   0/1     Pending   0               10s   <none>        <none>      <none>           <none>
[root@master231 /oldboyedu/manifests/scheduler]# 

Notice that even though worker232 has run out of resources, Pods are still not scheduled onto worker233; the excess Pods stay Pending instead.
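
To see why a Pod is stuck, describe it; the Events section shows a FailedScheduling message naming the untolerated taints (Pod name taken from the output above):
kubectl describe pod deploy-xiuxian-taints-85569f9fd5-k9tfg | grep -A 5 Events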

7.3 Tolerating taints with tolerations

Tolerations let Pods be scheduled onto tainted nodes.
Note that for a Pod to land on a tainted worker node, it must tolerate all of that node's taints; otherwise scheduling fails.

Case study:
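
1. Apply taints (a sketch reconstructing the step the original omits; these commands would produce the taints shown in step 2)
kubectl taint node worker232 school=huazai007:NoSchedule
kubectl taint node worker233 school=tiantian:NoExecute
kubectl taint node worker233 class=beibei:NoSchedule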

2. Check the environment 
[root@master231 deployments]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             school=huazai007:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             school=tiantian:NoExecute
                    class=beibei:NoSchedule
Unschedulable:      false
[root@master231 deployments]# 

3. Write the manifest
[root@master231 scheduler]# cat 08-scheduler-tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-tolerations
  labels:
    school: huazai007
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      # Configure taint tolerations
      tolerations:
        # Match the taint's key
      - key: node-role.kubernetes.io/master
        # Match the taint's effect
        effect: NoSchedule
        # Relationship between key and value; valid values: Exists, Equal (default).
        operator: Equal
      - key: school
        value: tiantian
        effect: NoExecute
        operator: Equal
      - key: class
        value: beibei
        effect: NoSchedule
        operator: Equal
      - key: school
        value: huazai007
        effect: NoSchedule
      # Tolerate any taint; not recommended in production, but fine for quick tests:
      #tolerations:
      #- operator: Exists
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
        resources:
          requests:
            cpu: 0.2
            memory: 500Mi
[root@master231 scheduler]# 

Checking the Pod status afterwards shows Pods scheduled onto the tolerated nodes, including master231.
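
A quick way to count Pods per node (a sketch using awk; column 7 of kubectl get pods -o wide is NODE):
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c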

8. Scheduling Pods with cordon

cordon marks a node as unschedulable and, at the same time, taints it (node.kubernetes.io/unschedulable:NoSchedule).

Usage:
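
1. Basic commands
kubectl cordon <node>      # mark the node unschedulable
kubectl uncordon <node>    # make the node schedulable again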

2. Hands-on example
    2.1 Prepare the environment
[root@master231 scheduler]# kubectl get nodes
NAME        STATUS   ROLES                  AGE    VERSION
master231   Ready    control-plane,master   2d5h   v1.23.17
worker232   Ready    <none>                 2d5h   v1.23.17
worker233   Ready    <none>                 2d5h   v1.23.17
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 scheduler]# 


    2.2 Mark worker232 as unschedulable 
[root@master231 scheduler]# kubectl cordon worker232 
node/worker232 cordoned
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes
NAME        STATUS                     ROLES                  AGE    VERSION
master231   Ready                      control-plane,master   2d5h   v1.23.17
worker232   Ready,SchedulingDisabled   <none>                 2d5h   v1.23.17
worker233   Ready                      <none>                 2d5h   v1.23.17
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 scheduler]# 

Subsequent verification behaves like the earlier cases: no new Pods are scheduled onto worker232.

uncordon removes the node's taint and the unschedulable mark:

[root@master231 scheduler]# kubectl uncordon worker232 
node/worker232 uncordoned
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes
NAME        STATUS   ROLES                  AGE    VERSION
master231   Ready    control-plane,master   2d5h   v1.23.17
worker232   Ready    <none>                 2d5h   v1.23.17
worker233   Ready    <none>                 2d5h   v1.23.17
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
[root@master231 scheduler]#

9. Scheduling Pods with drain

drain evicts the Pods already scheduled on a node; under the hood it first calls cordon to mark the node unschedulable.
As with cordon, uncordon makes the node schedulable again afterwards.
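
Typical invocation (a sketch; --delete-emptydir-data is needed only when Pods use emptyDir volumes):
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data
kubectl uncordon <node>    # restore scheduling once maintenance is done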

Case study:

    1. Prepare the environment 
[root@master231 scheduler]# kubectl apply -f 07-scheduler-Taints.yaml 
deployment.apps/deploy-xiuxian-taints created
[root@master231 scheduler]# 
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-taints-85569f9fd5-2rhnc   1/1     Running   0          13s   10.100.2.92   worker233   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-4wmb5   1/1     Running   0          13s   10.100.1.96   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-55wzx   1/1     Running   0          13s   10.100.2.94   worker233   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-d88mt   1/1     Running   0          13s   10.100.1.97   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-k8b8b   1/1     Running   0          13s   10.100.2.95   worker233   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-pmx56   1/1     Running   0          13s   10.100.2.91   worker233   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-s85ls   1/1     Running   0          13s   10.100.2.93   worker233   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-wbtr8   1/1     Running   0          13s   10.100.1.98   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-xgsk9   1/1     Running   0          13s   10.100.1.95   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-zx8vk   1/1     Running   0          13s   10.100.1.94   worker232   <none>           <none>
[root@master231 scheduler]# 


    2. Start the eviction
[root@master231 scheduler]# kubectl drain worker233 --ignore-daemonsets 
node/worker233 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-57zww, kube-system/kube-proxy-pzjt7
evicting pod default/deploy-xiuxian-taints-85569f9fd5-s85ls
evicting pod default/deploy-xiuxian-taints-85569f9fd5-55wzx
evicting pod default/deploy-xiuxian-taints-85569f9fd5-2rhnc
evicting pod default/deploy-xiuxian-taints-85569f9fd5-pmx56
evicting pod default/deploy-xiuxian-taints-85569f9fd5-k8b8b
pod/deploy-xiuxian-taints-85569f9fd5-55wzx evicted
pod/deploy-xiuxian-taints-85569f9fd5-s85ls evicted
pod/deploy-xiuxian-taints-85569f9fd5-2rhnc evicted
pod/deploy-xiuxian-taints-85569f9fd5-k8b8b evicted
pod/deploy-xiuxian-taints-85569f9fd5-pmx56 evicted
node/worker233 drained
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-taints-85569f9fd5-4wmb5   1/1     Running   0          3m50s   10.100.1.96    worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-5kmgj   0/1     Pending   0          47s     <none>         <none>      <none>           <none>
deploy-xiuxian-taints-85569f9fd5-5t4t8   0/1     Pending   0          47s     <none>         <none>      <none>           <none>
deploy-xiuxian-taints-85569f9fd5-d88mt   1/1     Running   0          3m50s   10.100.1.97    worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-gl67l   1/1     Running   0          47s     10.100.1.100   worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-l5jwj   1/1     Running   0          47s     10.100.1.99    worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-wbtr8   1/1     Running   0          3m50s   10.100.1.98    worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-xgsk9   1/1     Running   0          3m50s   10.100.1.95    worker232   <none>           <none>
deploy-xiuxian-taints-85569f9fd5-zrqfp   0/1     Pending   0          47s     <none>         <none>      <none>           <none>
deploy-xiuxian-taints-85569f9fd5-zx8vk   1/1     Running   0          3m50s   10.100.1.94    worker232   <none>           <none>
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes
NAME        STATUS                     ROLES                  AGE    VERSION
master231   Ready                      control-plane,master   2d5h   v1.23.17
worker232   Ready                      <none>                 2d5h   v1.23.17
worker233   Ready,SchedulingDisabled   <none>                 2d5h   v1.23.17
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl describe nodes | grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             <none>
Unschedulable:      false
Lease:
--
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
Lease:
[root@master231 scheduler]# 

10. Scheduling Pods with Affinity

10.1 nodeAffinity

nodeAffinity schedules Pods onto the desired nodes.
It works like nodeSelector but is more powerful:
a nodeSelector key can match only a single value, whereas nodeAffinity can match several (via operators such as In).

Case study:

  1. Write the manifest
[root@master231 scheduler]# cat 09-scheduler-nodeAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-nodeaffinity
  labels:
    school: huazai007
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: beibei
    spec:
      tolerations:
      - operator: Exists
      # Define affinity
      affinity:
        # Define node affinity
        nodeAffinity:
          # Hard requirement
          requiredDuringSchedulingIgnoredDuringExecution:
            # Node selector terms
            nodeSelectorTerms:
              # Match nodes by expression
            - matchExpressions:
                # Node label key
              - key: dc
                # Node label values
                values:
                - beijing
                - shanghai
                # Relationship between key and values; valid values: In, NotIn, Exists, DoesNotExist, Gt, Lt
                operator: In
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes -l dc=beijing
NAME        STATUS   ROLES    AGE   VERSION
worker232   Ready    <none>   15h   v1.23.17
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get nodes -l dc=shanghai
NAME        STATUS   ROLES                  AGE     VERSION
master231   Ready    control-plane,master   2d21h   v1.23.17
[root@master231 scheduler]# 
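
The dc labels checked above were presumably applied beforehand with commands along these lines (a sketch):
kubectl label nodes worker232 dc=beijing
kubectl label nodes master231 dc=shanghai
kubectl label nodes worker233 dc=shenzhen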

  2. Verify
[root@master231 scheduler]# kubectl  apply -f 09-scheduler-nodeAffinity.yaml 
deployment.apps/deploy-xiuxian-nodeaffinity created
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-nodeaffinity-b7f465544-72fqm   1/1     Running   0          8s    10.100.0.26   master231   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-7xq55   1/1     Running   0          8s    10.100.1.21   worker232   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-bcrkc   1/1     Running   0          8s    10.100.1.24   worker232   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-brd4w   1/1     Running   0          8s    10.100.0.29   master231   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-fglx4   1/1     Running   0          8s    10.100.1.20   worker232   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-g9jc5   1/1     Running   0          8s    10.100.1.19   worker232   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-gb7mq   1/1     Running   0          8s    10.100.0.27   master231   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-pjsz9   1/1     Running   0          8s    10.100.1.22   worker232   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-pkhcg   1/1     Running   0          8s    10.100.1.23   worker232   <none>           <none>
deploy-xiuxian-nodeaffinity-b7f465544-qlsxl   1/1     Running   0          8s    10.100.0.28   master231   <none>           <none>
[root@master231 scheduler]# 

Pods land only on master231 (dc=shanghai) and worker232 (dc=beijing); worker233 does not carry a matching dc value. The blanket toleration (operator: Exists) is what lets Pods onto the tainted master231.

10.2 podAffinity

podAffinity schedules Pods by topology domain: once the first Pod lands in a particular topology domain, all subsequent matching Pods are scheduled into that same domain.

In Kubernetes, a topology domain is a logical grouping that describes how resources are distributed across the cluster; think of it as "a zone". Here, each distinct value of the dc node label forms one domain.

The Pod label selector determines which earlier Pods the follow-up scheduling is tied to.

Case study:

   1. Prepare the environment 
[root@master231 scheduler]# kubectl get nodes --show-labels -l "dc in (beijing,shanghai,shenzhen)" | grep dc
master231   Ready    control-plane,master   2d23h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=shanghai,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers
worker232   Ready    <none>                 16h     v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=beijing,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker232,kubernetes.io/os=linux
worker233   Ready    <none>                 2d22h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=shenzhen,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker233,kubernetes.io/os=linux,school=laonanhai
[root@master231 scheduler]# 


   2. Write the manifest
[root@master231 scheduler]# cat 10-scheduler-podAffinity.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-podaffinity
  labels:
    school: huazai007
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: beibei
    spec:
      tolerations:
      - operator: Exists
      # Define affinity
      affinity:
        # Define Pod affinity
        podAffinity:
          # Hard requirement
          requiredDuringSchedulingIgnoredDuringExecution:
            # Topology key (the "data center" domain in this example)
          - topologyKey: dc
            # Pod label selector
            labelSelector:
              # Match Pods by expression
              matchExpressions:
                # Pod label key
              - key: apps
                # Pod label values
                values:
                - xiuxian
                # Relationship between key and values; valid values: In, NotIn, Exists, DoesNotExist
                operator: In
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
[root@master231 scheduler]# 


   3. Verify
[root@master231 scheduler]# kubectl apply -f 10-scheduler-podAffinity.yaml
deployment.apps/deploy-xiuxian-podaffinity created
[root@master231 scheduler]# 
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-podaffinity-7c64bd95f7-5564f   1/1     Running   0          11s   10.100.2.124   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-gsg7x   1/1     Running   0          11s   10.100.2.127   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-pgc5f   1/1     Running   0          11s   10.100.2.126   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-pl466   1/1     Running   0          11s   10.100.2.125   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-qcp6m   1/1     Running   0          11s   10.100.2.129   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-rplw5   1/1     Running   0          11s   10.100.2.128   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-rzhmm   1/1     Running   0          11s   10.100.2.121   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-ssc46   1/1     Running   0          11s   10.100.2.123   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-tjsrn   1/1     Running   0          11s   10.100.2.122   worker233   <none>           <none>
deploy-xiuxian-podaffinity-7c64bd95f7-whrvg   1/1     Running   0          11s   10.100.2.120   worker233   <none>           <none>
[root@master231 scheduler]# 

All 10 Pods land in the same topology domain (dc=shenzhen, i.e. worker233): once the first Pod was placed there, podAffinity forced every later matching Pod into the same domain.

10.3 podAntiAffinity

podAntiAffinity is the opposite of podAffinity.

It also schedules by topology domain, but once a matching Pod lands in a topology domain, no further matching Pods may be scheduled into that domain.

Case study:

  1. Prepare the environment 
[root@master231 scheduler]# kubectl get nodes --show-labels -l "dc in (beijing,shanghai,shenzhen)" | grep dc
master231   Ready    control-plane,master   2d23h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=shanghai,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=,school=laonanhai
worker232   Ready    <none>                 16h     v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=beijing,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker232,kubernetes.io/os=linux
worker233   Ready    <none>                 2d22h   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dc=shenzhen,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker233,kubernetes.io/os=linux,school=laonanhai
[root@master231 scheduler]# 
    
  2. Write the manifest
[root@master231 scheduler]# cat 11-scheduler-podAntiAffinity.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian-podantiaffinity
  labels:
    school: huazai007
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        address: shahe
        class: beibei
    spec:
      tolerations:
      - operator: Exists
      # Define affinity
      affinity:
        # Define Pod anti-affinity
        podAntiAffinity:
          # Hard requirement
          requiredDuringSchedulingIgnoredDuringExecution:
            # Topology key (the "data center" domain in this example)
          - topologyKey: dc
            # Pod label selector
            labelSelector:
              # Match Pods by expression
              matchExpressions:
                # Pod label key
              - key: apps
                # Pod label values
                values:
                - xiuxian
                # Relationship between key and values; valid values: In, NotIn, Exists, DoesNotExist
                operator: In
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/huazai-k8s/apps:v1
[root@master231 scheduler]# 


   3. Verify
[root@master231 scheduler]# kubectl get pods -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
deploy-xiuxian-podantiaffinity-7f774fc697-mzm8r   0/1     Pending   0          43s   <none>         <none>      <none>           <none>
deploy-xiuxian-podantiaffinity-7f774fc697-nt8xs   1/1     Running   0          43s   10.100.2.130   worker233   <none>           <none>
deploy-xiuxian-podantiaffinity-7f774fc697-pbtwx   1/1     Running   0          43s   10.100.1.51    worker232   <none>           <none>
deploy-xiuxian-podantiaffinity-7f774fc697-sw9gd   1/1     Running   0          43s   10.100.0.30    master231   <none>           <none>
deploy-xiuxian-podantiaffinity-7f774fc697-xtmjk   0/1     Pending   0          43s   <none>         <none>      <none>           <none>
[root@master231 scheduler]# 

With podAntiAffinity, each topology domain admits at most one matching Pod. Only three dc domains exist (beijing, shanghai, shenzhen), so 3 of the 5 replicas run and the remaining 2 stay Pending.