Deep Dive into the Core Resource: Pod (Part 2)

I. Pod Resource Manifest

Annotated Pod resource manifest:

apiVersion: v1          # API version, e.g. v1
kind: Pod               # Resource type, e.g. Pod
metadata:               # Metadata: name, namespace, labels, etc.
  name: string             # Pod name, e.g. nginx
  namespace: string        # Namespace the Pod belongs to, e.g. default
  labels:                  # Custom labels, a key/value map, e.g. app: nginx
    key: value
  annotations:             # Custom annotations, a key/value map, e.g. author: xxx
    key: value
spec:                   # Pod specification (detailed container definitions)
  containers:             # Container list; one Pod can hold multiple containers
    - name: string            # Container name, e.g. nginx
      image: string           # Image used by the container, e.g. nginx:1.7.9
      imagePullPolicy: string # Image pull policy: IfNotPresent, Always, or Never
      command: [string]       # Container startup command, e.g. ["/bin/sh"]
      args: [string]          # Container startup arguments, e.g. ["-c", "echo hello"]
      workingDir: string      # Container working directory, e.g. /var/www
      volumeMounts:           # Volumes mounted inside the container
        - name: string            # Volume name, e.g. www
          mountPath: string       # Mount path inside the container, e.g. /var/www
          readOnly: boolean       # Read-only mount? e.g. false
      ports:                  # Ports the container listens on
        - name: string            # Port name, e.g. http
          containerPort: int      # Port the container listens on, e.g. 80
          hostPort: int           # Port exposed on the host node, e.g. 80
          protocol: string        # Protocol for the port, e.g. TCP
      env:                    # Environment variables set before the container runs
        - name: string            # Variable name, e.g. VERSION
          value: string           # Variable value, e.g. v1
      resources:              # Resource limits and requests for the container
        limits:                   # Resource limits
          cpu: string              # CPU limit, in cores
          memory: string           # Memory limit, in bytes, MiB, or GiB
        requests:                 # Resource requests
          cpu: string              # CPU request: amount available to the container at startup
          memory: string           # Memory request: amount available to the container at startup
      livenessProbe:          # Health check for each container in the Pod; methods: exec, httpGet, tcpSocket
        exec:                     # exec-style check
          command: [string]         # Command or script the exec check runs
        httpGet:                  # httpGet-style check
          path: string              # HTTP request path, e.g. /healthz
          port: int                 # HTTP request port, e.g. 8080
          host: string              # HTTP request host, e.g. example.com
          scheme: string            # HTTP request scheme, e.g. HTTP
          httpHeaders:              # HTTP request headers
            - name: string            # Header name, e.g. X-Custom-Header
              value: string           # Header value, e.g. Awesome
        tcpSocket:                # tcpSocket-style check
          port: int                 # TCP port, e.g. 8080
        initialDelaySeconds: int  # Seconds after container startup before the first probe; default 0
        timeoutSeconds: int       # Seconds to wait for a probe response; default 1
        periodSeconds: int        # Seconds between probes; default 10
        successThreshold: int     # Consecutive successes required; default 1
        failureThreshold: int     # Consecutive failures tolerated; default 3
      securityContext:        # Container security context (a container-level field, not part of the probe)
        privileged: boolean       # Privileged mode? default false
  restartPolicy: string   # Restart policy for the Pod's containers (a Pod-level field): Always, OnFailure, Never; default Always
  nodeSelector: object    # Node selector: schedule the Pod onto nodes with matching labels
  imagePullSecrets:       # Credentials for pulling images from a private registry
    - name: string            # Pull secret name, e.g. regsecret
  hostNetwork: boolean    # Use host networking? default false (do not use)
  volumes:                # Shared volumes defined on this Pod (each entry uses exactly one type)
    - name: string            # Volume name, e.g. www
      emptyDir: {}              # emptyDir volume: temporary storage sharing the Pod's lifetime
      hostPath:                 # hostPath volume: mounts a directory from the Pod's host node
        path: string              # Directory on the host node backing the volume
      secret:                   # secret volume: mounts a Secret into the Pod as files or a directory
        secretName: string         # Name of the Secret
        items:                     # Items to project from the Secret
          - key: string             # Key within the Secret
            path: string            # Relative path to mount the key at
      configMap:                # configMap volume: mounts a ConfigMap into the Pod as files or a directory
        name: string               # Name of the ConfigMap
        items:                     # Items to project from the ConfigMap
          - key: string             # Key within the ConfigMap
            path: string            # Relative path to mount the key at
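
To make the template concrete, here is a minimal runnable Pod sketch exercising a few of these fields (resources, a liveness probe, and an emptyDir volume). The name, probe path, and resource figures are illustrative assumptions, not values taken from the template above:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo              # hypothetical name, for illustration only
  namespace: default
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    resources:
      requests:
        cpu: "100m"             # assumed request: one tenth of a core
        memory: "64Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"
    livenessProbe:
      httpGet:
        path: /                 # assumed probe path; nginx serves / by default
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    volumeMounts:
    - name: cache
      mountPath: /var/cache/nginx
  volumes:
  - name: cache
    emptyDir: {}                # temporary storage, deleted with the Pod
  restartPolicy: Always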

II. Node Selectors

When a Pod resource is created, the scheduler picks a suitable node for it according to its scheduling policy.

If you want a Pod to run on a specific node, you can steer scheduling with the nodeSelector field:

  • The value of nodeSelector is a map: keys are node label names, values are node label values.
  • Every entry in nodeSelector must match a label on the node for the Pod to be scheduled onto that node.

1. nodeName

The nodeName field specifies the name of the node the Pod should be scheduled onto; its value must be a node's name, not a node label.

Upload tomcat.tar.gz and busybox.tar.gz to /root on the node, load each into the container image store with ctr, then list the images to verify.

# Upload tomcat.tar.gz and busybox.tar.gz to /root on the node
[root@node ~]# ls
anaconda-ks.cfg  busybox.tar.gz  install.sh  original-ks.cfg  tomcat.tar.gz  xianchao-nginx.tar.gz  xianchao-tomcat.tar.gz

# Load tomcat.tar.gz into the containerd image store with ctr
[root@node ~]# ctr -n k8s.io images import tomcat.tar.gz
unpacking docker.io/library/tomcat:8.5-jre8-alpine (sha256:463a0b1de051bff2208f81a86bdf4e7004eb68c0edfcc658f2e2f367aab5e342)...done

# Load busybox.tar.gz into the containerd image store with ctr
[root@node ~]# ctr -n k8s.io images import busybox.tar.gz
unpacking docker.io/library/busybox:latest (sha256:2d86744fc4e303fbf4e71c67b89ee77cc6c60e9315cbd2c27f50e85b2d866450)...done

# List the images in the containerd image store
[root@node ~]# ctr -n k8s.io images ls | grep -E "tomcat|busybox"
docker.io/library/busybox:latest                                                                                           application/vnd.docker.distribution.manifest.v2+json      sha256:2d86744fc4e303fbf4e71c67b89ee77cc6c60e9315cbd2c27f50e85b2d866450 1.4 MiB   linux/amd64                                                                                             io.cri-containerd.image=managed 
docker.io/library/tomcat:8.5-jre8-alpine                                                                                   application/vnd.docker.distribution.manifest.v2+json      sha256:463a0b1de051bff2208f81a86bdf4e7004eb68c0edfcc658f2e2f367aab5e342 105.1 MiB linux/amd64                                                                                             io.cri-containerd.image=managed

Create pod-node.yaml, pinning the Pod to the node named node.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: demo
    env: dev
spec:
  nodeName: node
  containers:
  - name: tomcat-pod-java
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command: 
    - "/bin/sh"
    - "-c"
    - "sleep 3600"

Create the Pod and check where it was scheduled.

# Create the Pod resource
[root@master ~]# kubectl apply -f pod-node.yaml
pod/demo-pod created
# Check which node the Pod was scheduled onto
[root@master tomcat-test]# kubectl get pod -o wide -n default
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE   NOMINATED NODE   READINESS GATES
demo-pod     2/2     Running   0          12m     10.244.1.29   node   <none>           <none>

2. nodeSelector

The nodeSelector field specifies the labels of the nodes a Pod may be scheduled onto; its value is a map whose keys are node label names and whose values are node label values.

# Label the node: give it a disk=ceph label
[root@node ~]# kubectl label nodes node disk=ceph
node/node labeled

# Show the node's labels
[root@node ~]# kubectl get node --show-labels

# Define a Pod that is scheduled only onto nodes labeled disk=ceph
[root@master ~]# cat pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-1
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeSelector:
    disk: ceph
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8081
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent

# Create the Pod resource
[root@master ~]# kubectl apply -f pod-1.yaml
pod/demo-pod-1 created

# Check which node the Pod was scheduled onto
[root@master ~]# kubectl get pods -o wide -n default
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE   NOMINATED NODE   READINESS GATES
demo-pod     2/2     Running   0          12m     10.244.1.29   node   <none>           <none>
demo-pod-1   1/1     Running   0          2m32s   10.244.1.30   node   <none>           <none>

# Set the current context's namespace to default
[root@master ~]# kubectl config set-context --current --namespace=default
Context "kubernetes-admin@kubernetes" modified.

III. Taints and Tolerations

A taint is a marker that flags a special property of a node, for example that the node is short on CPU, memory, or disk, or that it runs system-level services that consume a large share of those resources.

Taints are a node attribute: add or remove them with the kubectl taint nodes command, and inspect a node's taints with kubectl describe node.

Tolerations are a Pod attribute: inspect a Pod's tolerations with kubectl describe pod, and edit them with kubectl edit pod (or in the Pod spec).
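
The examples below focus on affinity, so here is a minimal sketch of the taint and toleration mechanics themselves; the taint key dedicated=backend and the node name k8s-node1 are assumptions for illustration:

# Taint k8s-node1: only pods that tolerate dedicated=backend may be scheduled there
kubectl taint nodes k8s-node1 dedicated=backend:NoSchedule

# Inspect the node's taints
kubectl describe node k8s-node1 | grep -i taint

# Remove the taint again (note the trailing "-")
kubectl taint nodes k8s-node1 dedicated=backend:NoSchedule-

And the matching toleration in a Pod spec:

spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "backend"
    effect: "NoSchedule"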

1. Node Affinity

Node affinity is a scheduling rule between Pods and nodes: it steers a Pod onto nodes that carry certain labels (labels, not taints).

nodeAffinity: schedules a Pod onto nodes whose labels satisfy the given expressions.

# Inspect a Pod's affinity fields
[root@master tomcat-test]# kubectl explain pods.spec.affinity
KIND:     Pod
VERSION:  v1

RESOURCE: affinity <Object>

DESCRIPTION:
     If specified, the pods scheduling constraints

     Affinity is a group of affinity scheduling rules.

FIELDS:
   nodeAffinity	<Object>
     Describes node affinity scheduling rules for the pod.

   podAffinity	<Object>
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).

   podAntiAffinity	<Object>
     Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod
     in the same node, zone, etc. as some other pod(s)).

# Inspect a Pod's node affinity fields
[root@master tomcat-test]# kubectl explain pods.spec.affinity.nodeAffinity
KIND:     Pod
VERSION:  v1
RESOURCE: nodeAffinity <Object>

DESCRIPTION:
     Describes node affinity scheduling rules for the pod.

     Node affinity is a group of node affinity scheduling rules.

FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution	<[]Object>          # preferred: the scheduler tries to place the pod on a node satisfying this rule, but it is not mandatory; soft affinity.
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node that
     violates one or more of the expressions. The node that is most preferred is
     the one with the greatest sum of weights, i.e. for each node that meets all
     of the scheduling requirements (resource request, requiredDuringScheduling
     affinity expressions, etc.), compute a sum by iterating through the
     elements of this field and adding "weight" to the sum if the node matches
     the corresponding matchExpressions; the node(s) with the highest sum are
     the most preferred.

   requiredDuringSchedulingIgnoredDuringExecution	<Object>            # required: a node must satisfy this rule or the pod is not scheduled; hard affinity.
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to an update), the system may or may not try
     to eventually evict the pod from its node.

# This field is part of node affinity and defines hard scheduling requirements: the Pod can only be scheduled onto nodes that meet these conditions.
[root@k8s-master1 ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1

RESOURCE: requiredDuringSchedulingIgnoredDuringExecution <Object>

DESCRIPTION:
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to an update), the system may or may not try
     to eventually evict the pod from its node.

     A node selector represents the union of the results of one or more label
     queries over a set of nodes; that is, it represents the OR of the selectors
     represented by the node selector terms.

FIELDS:
   nodeSelectorTerms	<[]Object> -required-
     Required. A list of node selector terms. The terms are ORed.

# Show the detailed description of the nodeSelectorTerms field.
# It is part of the node affinity rules and defines the hard requirements a node must meet for scheduling.
kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms

# Show details of the matchFields field, part of node affinity: selection conditions based on node fields (rather than labels).
kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields

# Show details of the matchExpressions field, part of node affinity: selection conditions based on node labels.
[root@k8s-master1 ~]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions
KIND:     Pod
VERSION:  v1
RESOURCE: matchExpressions <[]Object>
DESCRIPTION:
     A list of node selector requirements by nodes labels.
     A node selector requirement is a selector that contains values, a key, and
     an operator that relates the key and values.

FIELDS:
   key	<string> -required-              # the label key to check
   operator	<string> -required-          # the operator: equality or inequality matching
   values	<[]string>                   # the values to match against

(1) Node hard affinity example

Using requiredDuringSchedulingIgnoredDuringExecution (hard affinity).

# Upload myapp-v1.tar.gz to k8s-node1 and k8s-node2 and load it manually on each
[root@k8s-node1 ~]# rz
[root@k8s-node1 ~]# scp myapp-v1.tar.gz k8s-node2:/root
myapp-v1.tar.gz                                                              100%   15MB  56.1MB/s   00:00 

[root@k8s-node1 ~]# docker load -i myapp-v1.tar.gz 
Loaded image: ikubernetes/myapp:v1

[root@k8s-node2 ~]# docker load -i myapp-v1.tar.gz 
Loaded image: ikubernetes/myapp:v1

# Edit pod-nodeaffinity-demo.yaml
# If any node carries a zone=foo or zone=bar label, the pod may be scheduled onto that node
[root@k8s-master1 ~]# vi pod-nodeaffinity-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar

# Create the pod
[root@k8s-master1 ~]# kubectl apply -f pod-nodeaffinity-demo.yaml 
pod/pod-node-affinity-demo created

# Check the pod
# Status is Pending: scheduling did not complete because no node has a zone label valued foo or bar, and hard affinity requires the condition to be met
[root@k8s-master1 ~]# kubectl get pods -o wide | grep pod-node
pod-node-affinity-demo   0/1     Pending   0          34s   <none>   <none>   <none>           <none>

# Label k8s-node1 with zone=foo
[root@k8s-master1 ~]# kubectl label node k8s-node1 zone=foo
node/k8s-node1 labeled

# Check the pod again
# Status is Running: scheduling completed because k8s-node1 now carries the zone=foo label
[root@k8s-master1 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pod-node-affinity-demo   1/1     Running   0          3m    10.244.36.78   k8s-node1   <none>           <none>

(2) Node soft affinity example

Using preferredDuringSchedulingIgnoredDuringExecution (soft affinity).

# Edit pod-nodeaffinity-demo-2.yaml
[root@k8s-master1 ~]# vi pod-nodeaffinity-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 60

# Create the pod
[root@k8s-master1 ~]# kubectl apply -f pod-nodeaffinity-demo-2.yaml 
pod/pod-node-affinity-demo-2 created

# Check the pod
[root@k8s-master1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod-node-affinity-demo     1/1     Running   0          16m   10.244.36.78     k8s-node1   <none>           <none>
pod-node-affinity-demo-2   1/1     Running   0          89s   10.244.169.138   k8s-node2   <none>           <none>

This shows that with soft affinity the pod still runs even though no node defines the zone1 label it prefers.

Node affinity concerns the Pod-to-node relationship: the conditions matched when scheduling a Pod onto a node.

2. Pod Affinity

Pod-to-pod affinity scheduling has two forms:

  • podAffinity: pods prefer to run together, placing related pods in nearby locations (same zone, same rack) so they can communicate efficiently. For example, with two data centers of 1000 hosts each, we would want nginx and tomcat deployed onto nodes in the same location to improve communication efficiency.
  • podAntiAffinity: pods prefer to stay apart. When deploying two independent systems, anti-affinity keeps them from affecting each other.

The first pod lands on some node chosen by the scheduler; it then becomes the reference that decides whether subsequent pods may run on that pod's node. That is pod affinity.

How do we judge which nodes count as the same location and which count as different locations?

Defining pod affinity requires a standard for which pods are "in the same location" and which are not: what defines a location? Using the node name as the standard, nodes with the same name are the same location and nodes with different names are different locations.
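
As a sketch of how the choice of key changes "same location" (the zone label below is an assumption for illustration): with topologyKey: kubernetes.io/hostname only the exact same node counts as co-located, while with topologyKey: topology.kubernetes.io/zone every node in the same zone does.

# Label both workers into one assumed zone; with topologyKey set to
# topology.kubernetes.io/zone, k8s-node1 and k8s-node2 then count as one location
kubectl label nodes k8s-node1 topology.kubernetes.io/zone=zone-a
kubectl label nodes k8s-node2 topology.kubernetes.io/zone=zone-a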

# Describe the pod affinity scheduling rules
[root@k8s-master1 ~]# kubectl explain pods.spec.affinity.podAffinity
KIND:     Pod
VERSION:  v1

RESOURCE: podAffinity <Object>

DESCRIPTION:
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).   
     Pod affinity is a group of inter pod affinity scheduling rules.  

FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution	<[]Object>       # soft affinity

   requiredDuringSchedulingIgnoredDuringExecution	<[]Object>         # hard affinity


# Inspect the pod hard-affinity scheduling rules
[root@k8s-master1 ~]# kubectl explain pods.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1

RESOURCE: requiredDuringSchedulingIgnoredDuringExecution <[]Object>

DESCRIPTION:
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to a pod label update), the system may or
     may not try to eventually evict the pod from its node. When there are
     multiple elements, the lists of nodes corresponding to each podAffinityTerm
     are intersected, i.e. all terms must be satisfied.

     Defines a set of pods (namely those matching the labelSelector relative to
     the given namespace(s)) that this pod should be co-located (affinity) or
     not co-located (anti-affinity) with, where co-located is defined as running
     on a node whose value of the label with key <topologyKey> matches that of
     any node on which a pod of the set of pods is running

FIELDS:
   labelSelector	<Object>             # to decide which pods this pod is affine to, labelSelector selects the group of pods that serve as the affinity targets

   namespaces	<[]string>               # the namespaces labelSelector selects pods from; if unset, the namespace of the pod being created

   topologyKey	<string> -required-    # the topology key, a required field: nodes whose values for this label key are equal count as the same location


# Inspect the pod soft-affinity scheduling rules
[root@k8s-master1 ~]# kubectl explain pods.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1

RESOURCE: preferredDuringSchedulingIgnoredDuringExecution <[]Object>

DESCRIPTION:
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node that
     violates one or more of the expressions. The node that is most preferred is
     the one with the greatest sum of weights, i.e. for each node that meets all
     of the scheduling requirements (resource request, requiredDuringScheduling
     affinity expressions, etc.), compute a sum by iterating through the
     elements of this field and adding "weight" to the sum if the node has pods
     which matches the corresponding podAffinityTerm; the node(s) with the
     highest sum are the most preferred.

     The weights of all of the matched WeightedPodAffinityTerm fields are added
     per-node to find the most preferred node(s)

FIELDS:
   podAffinityTerm	<Object> -required-
     Required. A pod affinity term, associated with the corresponding weight.

   weight	<integer> -required-
     weight associated with matching the corresponding podAffinityTerm, in the
     range 1-100.

(1) Pod affinity example

Define two pods: the first serves as the baseline, and the second follows it.

# Delete the earlier pods
[root@k8s-master1 ~]# kubectl delete pods pod-node-affinity-demo
pod "pod-node-affinity-demo" deleted

[root@k8s-master1 ~]# kubectl delete pods pod-node-affinity-demo-2
pod "pod-node-affinity-demo-2" deleted

# Edit pod-required-affinity-demo.yaml
[root@k8s-master1 ~]# vi pod-required-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-required-affinity-demo
  labels:
    app2: myapp2
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-required-affinity-demo-2
  labels:
    app: backend 
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app2, operator: In, values: ["myapp2"]}
        topologyKey: kubernetes.io/hostname                        # the indentation here is easy to get wrong

# Create the pods; the second must share a node with a pod carrying the app2=myapp2 label
[root@k8s-master1 ~]# kubectl apply -f pod-required-affinity-demo.yaml 
pod/pod-required-affinity-demo created
pod/pod-required-affinity-demo-2 created

# List the pods in the cluster in wide format for more detail
# The result shows the second pod scheduled wherever the first went: that is pod affinity
[root@k8s-master1 ~]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
pod-required-affinity-demo     1/1     Running   0          9m50s   10.244.169.142   k8s-node2   <none>           <none>
pod-required-affinity-demo-2   1/1     Running   0          3m58s   10.244.169.145   k8s-node2   <none>           <none>

# List all nodes in the cluster, showing each node's labels
# --show-labels: include each node's labels in the output
[root@k8s-master1 ~]# kubectl get nodes --show-labels
NAME          STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master1   Ready    control-plane,master   10d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s-node1     Ready    worker                 10d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker,zone=foo
k8s-node2     Ready    worker                 10d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ceph,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker

# Delete the pods
[root@k8s-master1 ~]# kubectl delete -f pod-required-affinity-demo.yaml 
pod "pod-required-affinity-demo" deleted
pod "pod-required-affinity-demo-2" deleted

(2) Pod anti-affinity example

Define two pods: the first is the baseline, and the second is scheduled onto a different node from the first. A minimal sketch of such a manifest follows.
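
A minimal sketch, mirroring the affinity example above but swapping podAffinity for podAntiAffinity; the pod names and the app1=myapp1 label are illustrative assumptions, not the author's original values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-first                 # hypothetical baseline pod
  labels:
    app1: myapp1
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second                # hypothetical follower pod
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
  affinity:
    podAntiAffinity:              # anti-affinity: avoid nodes running matching pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app1, operator: In, values: ["myapp1"]}
        topologyKey: kubernetes.io/hostname

With this in place, the second pod can only be scheduled onto a node whose kubernetes.io/hostname differs from every node running a pod labeled app1=myapp1.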
