Kubernetes --- Cluster Scheduling
⒈ Introduction

The example below asks a custom scheduler named `my-scheduler` to place the Pod by setting `spec.schedulerName`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotation-second-scheduler
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
    - name: pod-with-second-annotation-container
      image: gcr.io/google_containers/pause:2.0
```
⒋ Node Affinity (specify the node to schedule to)

pod.spec.affinity.nodeAffinity

·preferredDuringSchedulingIgnoredDuringExecution: soft policy ("I would like to go to this node")
·requiredDuringSchedulingIgnoredDuringExecution: hard policy ("I must go to this node")

requiredDuringSchedulingIgnoredDuringExecution (hard policy example)
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
    - name: with-node-affinity
      image: hub.coreqi.cn/library/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                  - k8s-node02
```
preferredDuringSchedulingIgnoredDuringExecution (soft policy example)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
    - name: with-node-affinity
      image: hub.coreqi.cn/library/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: source
                operator: In
                values:
                  - k8s-node03
```
Combined example

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
    - name: with-node-affinity
      image: hub.coreqi.cn/library/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                  - k8s-node02
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: source
                operator: In
                values:
                  - k8s-node02
```
Key/value operators

In: the label's value is in the given list
NotIn: the label's value is not in the given list
Gt: the label's value is greater than the given value
Lt: the label's value is less than the given value
Exists: the label exists
DoesNotExist: the label does not exist

<!-- If `nodeSelectorTerms` contains multiple entries, satisfying any one of them is enough; if a `matchExpressions` contains multiple entries, all of them must be satisfied for the Pod to be scheduled -->
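These OR/AND rules can be made concrete with a hypothetical `nodeAffinity` fragment (the `disktype` node label is an assumption for illustration):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        # The two terms below are ORed: a node matching either one is eligible.
        - matchExpressions:
            # Expressions within one term are ANDed: both must hold.
            - key: disktype
              operator: In
              values:
                - ssd
            - key: kubernetes.io/hostname
              operator: NotIn
              values:
                - k8s-node02
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-node01
```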
⒌ Pod Affinity

pod.spec.affinity.podAffinity/podAntiAffinity

preferredDuringSchedulingIgnoredDuringExecution: soft policy
requiredDuringSchedulingIgnoredDuringExecution: hard policy
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-3
  labels:
    app: pod-3
spec:
  containers:
    - name: pod-3
      image: hub.coreqi.cn/library/myapp:v1
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - pod-1
          topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - pod-2
            topologyKey: kubernetes.io/hostname
```
⒍ Taints and Tolerations

A taint has the form `key=value:effect`, where `effect` is one of `NoSchedule`, `PreferNoSchedule`, or `NoExecute`.
```shell
# Set a taint
kubectl taint nodes node1 key1=value1:NoSchedule
# Look for the Taints field in the node description
kubectl describe node node-name
# Remove the taint
kubectl taint nodes node1 key1:NoSchedule-
```
⒏ Tolerations

A toleration lets a Pod be scheduled onto (or keep running on) a node that carries a matching taint. It is set in `pod.spec.tolerations`:
```yaml
tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
    # tolerationSeconds is only valid with effect NoExecute: the Pod may
    # stay on the tainted node for this many seconds before being evicted
    tolerationSeconds: 3600
  - key: "key2"
    operator: "Exists"
    effect: "NoSchedule"
```
1. When no `key` is specified, the `Exists` operator tolerates every taint:

```yaml
tolerations:
  - operator: "Exists"
```
2. When no `effect` is specified, all effects of taints with that key are tolerated:

```yaml
tolerations:
  - key: "key"
    operator: "Exists"
```
3. When there are multiple master nodes, you can set the following to avoid wasting their resources (the scheduler then tries to keep Pods off the masters, but can still use them when no other node fits):

```shell
kubectl taint nodes Node-Name node-role.kubernetes.io/master=:PreferNoSchedule
```
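If the masters instead keep the stricter `NoSchedule` taint, an individual Pod can still opt onto them by declaring a matching toleration; a minimal sketch, with the key mirroring the master taint above:

```yaml
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
```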
⒐ Specifying the scheduling node

1. Setting `Pod.spec.nodeName` binds the Pod directly to the specified node, skipping the Scheduler's scheduling policies entirely; the match is forced.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 7
  template:
    metadata:
      labels:
        app: myweb
    spec:
      nodeName: k8s-node01
      containers:
        - name: myweb
          image: hub.coreqi.cn/library/myapp:v1
          ports:
            - containerPort: 80
```
2. `Pod.spec.nodeSelector`: selects nodes through Kubernetes' label-selector mechanism; the scheduler matches the labels and then schedules the Pod to the target node. This is a mandatory constraint.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myweb
    spec:
      nodeSelector:
        type: backEndNode1
      containers:
        - name: myweb
          image: harbor/tomcat:8.5-jre8
          ports:
            - containerPort: 80
```
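A `nodeSelector` can only match if some node actually carries the label; it can be attached and verified with `kubectl label` (the node name `k8s-node01` is an assumption here):

```shell
kubectl label nodes k8s-node01 type=backEndNode1
kubectl get nodes --show-labels
```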