|NO.Z.00206|——————————|CloudNative|——|Kubernetes & Advanced Scheduling.V09|——|Affinity.v02|Affinity Experiment 1:Ln|

I. Experiment 1: write a YAML file configuring NodeAffinity, and verify that required and preferred terms combine with AND semantics
### --- Affinity operators

~~~     In: schedule onto nodes whose label value is one of the listed values
~~~     NotIn: do not schedule onto nodes whose label value is one of the listed values
~~~     Exists: schedule onto nodes that have the key at all, regardless of its value
~~~     DoesNotExist: the opposite of Exists
~~~     Gt: greater than the given value; the value must be an integer, not an arbitrary string
~~~     Lt: less than the given value
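The operator semantics above can be sketched as a small matcher. This is an illustrative model only, not Kubernetes source code; the function name `match_expression` is hypothetical.

```python
def match_expression(labels, key, operator, values=()):
    """Return True if a node's label dict satisfies one matchExpression."""
    present = key in labels
    if operator == "In":                # value must be one of the listed values
        return present and labels[key] in values
    if operator == "NotIn":             # a node without the key also satisfies NotIn
        return not present or labels[key] not in values
    if operator == "Exists":            # key present, value irrelevant
        return present
    if operator == "DoesNotExist":
        return not present
    if operator == "Gt":                # values holds a single integer string
        return present and int(labels[key]) > int(values[0])
    if operator == "Lt":
        return present and int(labels[key]) < int(values[0])
    raise ValueError(f"unknown operator: {operator}")

node = {"kubernetes.io/e2e-az-name": "e2e-az1", "cpu-count": "8"}
print(match_expression(node, "kubernetes.io/e2e-az-name", "In", ["e2e-az1", "e2e-az2"]))  # True
print(match_expression(node, "cpu-count", "Gt", ["4"]))  # True
print(match_expression(node, "gpu", "DoesNotExist"))     # True
```

Note that `Gt`/`Lt` parse the label value as an integer, which is why the value "must be a number": a non-numeric string would fail to compare.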
II. Experiment 1: In — schedule onto nodes carrying matching labels; first label the nodes
### --- List the labels each node currently has
~~~     Note: at this point they are all default labels

[root@k8s-master01 ~]# kubectl get node --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
k8s-master01   Ready    <none>   20d   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master02   Ready    <none>   20d   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-master03   Ready    <none>   20d   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node01     Ready    <none>   20d   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,ingress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,node.kubernetes.io/node=
k8s-node02     Ready    <none>   20d   v1.20.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,node.kubernetes.io/node=,region=subnet7
### --- Label k8s-node01, k8s-node02, and k8s-master01

[root@k8s-master01 ~]# kubectl label node k8s-node01 kubernetes.io/e2e-az-name=e2e-az1
node/k8s-node01 labeled
[root@k8s-master01 ~]# kubectl label node k8s-node02 kubernetes.io/e2e-az-name=e2e-az2
node/k8s-node02 labeled
[root@k8s-master01 ~]# kubectl label node k8s-node01 another-node-label-key=another-node-label-value
node/k8s-node01 labeled
[root@k8s-master01 ~]# kubectl label node k8s-master01 another-node-label-key=another-node-label-value
node/k8s-master01 labeled 
### --- Configure NodeAffinity for demo-nginx
~~~     Export the current Deployment to a file
~~~     then add the NodeAffinity settings to it

[root@k8s-master01 ~]# kubectl get deploy demo-nginx -oyaml > affinity-demo-nginx.yaml
III. Add the Node Affinity settings to the demo-nginx YAML
### --- Schedule the pods onto nodes whose kubernetes.io/e2e-az-name label
~~~     has the value e2e-az1 or e2e-az2.

[root@k8s-master01 ~]# vim affinity-demo-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo-nginx
  name: demo-nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: demo-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demo-nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/e2e-az-name
                operator: In
                values:
                - e2e-az1
                - e2e-az2
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: another-node-label-key
                operator: In
                values:
                - another-node-label-value
      containers:
      - command:
        - sh
        - -c
        - sleep 36000000000
        image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx2
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /mnt
          name: cache-volume
        - mountPath: /tmp/nfs
          name: nfs-test
      - command:
        - sh
        - -c
        - sleep 36000000000
        image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 270Mi
          requests:
            cpu: 100m
            memory: 70Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: config-volume
          subPath: etc/nginx/nginx.conf
        - mountPath: /mnt/
          name: config-volume-non-subpath
        - mountPath: /tmp/1
          name: test-hostpath
        - mountPath: /tmp/2
          name: cache-volume
        - mountPath: /tmp/pvc
          name: pvc-test
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - echo "InitContainer" >> /tmp/nfs/init
        image: nginx
        imagePullPolicy: IfNotPresent
        name: init1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp/nfs
          name: nfs-test
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: master-test
        operator: Equal
        value: test
      - effect: NoExecute
        key: master-test
        operator: Equal
        tolerationSeconds: 60
        value: test
      volumes:
      - hostPath:
          path: /etc/hosts
          type: File
        name: test-hostpath
      - configMap:
          defaultMode: 420
          items:
          - key: nginx.conf
            path: etc/nginx/nginx.conf
          name: nginx-conf
        name: config-volume
      - configMap:
          defaultMode: 420
          name: nginx-conf
        name: config-volume-non-subpath
      - emptyDir:
          medium: Memory
        name: cache-volume
      - name: nfs-test
        nfs:
          path: /data/k8s-data/testDir
          server: 192.168.1.14
      - name: pvc-test
        persistentVolumeClaim:
          claimName: myclaim
### --- Configuration notes: this schedules the pods onto nodes whose
~~~     kubernetes.io/e2e-az-name label is e2e-az1 or e2e-az2 (the required term),
~~~     while preferring nodes that also carry another-node-label-key (the preferred term).
~~~     The preferred term is evaluated only among nodes that already satisfy the required term.
~~~     So, with the labels set earlier: step 1, the pods are forced onto k8s-node01 or k8s-node02;
~~~     step 2, among those, the scheduler favors the node that also has another-node-label-key,
~~~     even though k8s-master01 carries that label too, because master01 fails the required term.

       affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/e2e-az-name  # required: only nodes with one of these labels
                operator: In
                values: 
                - e2e-az1
                - e2e-az2
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: another-node-label-key     # preferred: favor nodes that also have this label
                operator: In
                values:
                - another-node-label-value
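The two-step behavior described above can be sketched in a few lines. This is an illustrative simulation, not the real scheduler: required terms filter the node set, and preferred terms only score the survivors.

```python
# Node labels as set earlier in the experiment (master02/master03 omitted
# because they carry neither label).
nodes = {
    "k8s-master01": {"another-node-label-key": "another-node-label-value"},
    "k8s-node01":   {"kubernetes.io/e2e-az-name": "e2e-az1",
                     "another-node-label-key": "another-node-label-value"},
    "k8s-node02":   {"kubernetes.io/e2e-az-name": "e2e-az2"},
}

# Step 1: required term — keep only nodes with one of the zone labels.
eligible = {n: l for n, l in nodes.items()
            if l.get("kubernetes.io/e2e-az-name") in ("e2e-az1", "e2e-az2")}

# Step 2: preferred term — add weight 1 for nodes that also have the optional label.
scores = {n: (1 if l.get("another-node-label-key") == "another-node-label-value"
              else 0)
          for n, l in eligible.items()}
print(scores)  # {'k8s-node01': 1, 'k8s-node02': 0}
```

k8s-master01 never appears in the scores even though it has the preferred label, because it was filtered out by the required term first.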
IV. Re-apply the Deployment with kubectl replace
### --- Replace the Deployment

[root@k8s-master01 ~]# kubectl replace -f affinity-demo-nginx.yaml
deployment.apps/demo-nginx replaced
### --- Check that the pods landed on node01 or node02

[root@k8s-master01 ~]# kubectl get po -owide
NAME                         READY   STATUS     RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
busybox                      1/1     Running    3          15h   172.25.244.247   k8s-master01   <none>           <none>
demo-nginx-95d8d8475-n8m9x   0/2     Init:0/1   0          48s   <none>           k8s-node01     <none>           <none>
demo-nginx-95d8d8475-ptpkv   0/2     Init:0/1   0          48s   <none>           k8s-node01     <none>           <none> 
~~~     # Remove the label from k8s-node01, so the preferred (non-required) term no longer matches it.

[root@k8s-master01 ~]# kubectl label  node k8s-node01 another-node-label-key-
~~~     # Delete the demo-nginx pods to trigger rescheduling; check whether they land on k8s-master01

[root@k8s-master01 ~]# kubectl delete po -l app=demo-nginx
pod "demo-nginx-95d8d8475-n8m9x" deleted
pod "demo-nginx-95d8d8475-ptpkv" deleted
### --- Result: after rescheduling, the pods still land on k8s-node02, not on k8s-master01,
~~~     which shows the configured rules combine with AND semantics:
~~~     the preferred term only ranks nodes that already pass the required term.

[root@k8s-master01 ~]# kubectl get po -owide
NAME                         READY   STATUS     RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
busybox                      1/1     Running    3          15h   172.25.244.247   k8s-master01   <none>           <none>
demo-nginx-95d8d8475-qs54z   0/2     Init:0/1   0          36s   <none>           k8s-node02     <none>           <none>
demo-nginx-95d8d8475-vvbh7   0/2     Init:0/1   0          37s   <none>           k8s-node02     <none>           <none> 
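The second run can be checked with the same illustrative simulation (not scheduler source): with another-node-label-key removed from k8s-node01, the preferred term matches no eligible node, so both zone-labeled nodes tie at score 0 and either may receive the pods; k8s-master01 stays ineligible.

```python
# Labels after `kubectl label node k8s-node01 another-node-label-key-`.
nodes = {
    "k8s-master01": {"another-node-label-key": "another-node-label-value"},
    "k8s-node01":   {"kubernetes.io/e2e-az-name": "e2e-az1"},
    "k8s-node02":   {"kubernetes.io/e2e-az-name": "e2e-az2"},
}

# Required term: only zone-labeled nodes survive.
eligible = [n for n, l in nodes.items()
            if l.get("kubernetes.io/e2e-az-name") in ("e2e-az1", "e2e-az2")]

# Preferred term: the optional label now exists only on the ineligible master.
scores = {n: int("another-node-label-key" in nodes[n]) for n in eligible}
print(eligible)  # ['k8s-node01', 'k8s-node02']
print(scores)    # {'k8s-node01': 0, 'k8s-node02': 0}
```

A tie at score 0 means the scheduler is free to pick either node, consistent with the pods landing on k8s-node02 above rather than moving to k8s-master01.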