08-K8s Basics: Pod Controllers Advanced (DaemonSet, Job, CronJob)

1. Pod Controller: DaemonSet

1.1 Introduction to the DaemonSet Controller

  • A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. When a Node joins the cluster, a Pod is added for it; when a Node is removed from the cluster, that Pod is garbage-collected. Deleting a DaemonSet deletes all the Pods it created.
  • Some typical uses of a DaemonSet:
    • Running a cluster storage daemon on each Node, such as glusterd or ceph.
    • Running a log-collection daemon on every Node, such as fluentd or logstash.
    • Running a monitoring daemon on every Node, such as Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond.
  • In a simple case, one DaemonSet covering all Nodes would be used for each type of daemon. A slightly more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and CPU requirements for different hardware types.

1.2 DaemonSet Controller in Practice

1.2.1 DaemonSet Spec

  • kubectl explain ds
    1. Required fields: like all other Kubernetes configs, a DaemonSet needs the apiVersion, kind, and metadata fields.
~]# kubectl explain daemonset
KIND:     DaemonSet
VERSION:  extensions/v1beta1
 
DESCRIPTION:
     DEPRECATED - This group version of DaemonSet is deprecated by
     apps/v1beta2/DaemonSet. See the release notes for more information.
     DaemonSet represents the configuration of a daemon set.
 
FIELDS:
   apiVersion    <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
 
   kind    <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
 
   metadata    <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
 
   spec    <Object>
     The desired behavior of this daemon set. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
 
   status    <Object>
     The current status of this daemon set. This data may be out of date by some
     window of time. Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

2. Pod Template

  • .spec.template is the only required field under .spec.
  • .spec.template is a Pod template. It has exactly the same schema as a Pod, except that it is nested and has no apiVersion or kind fields.
  • Besides the required Pod fields, the Pod template in a DaemonSet must specify appropriate labels (see Pod selector).
  • The Pod template in a DaemonSet must have a RestartPolicy equal to Always, or leave it unspecified, in which case it defaults to Always.
  • ~]# kubectl explain daemonset.spec.template.spec

3. Pod Selector

  • The .spec.selector field is a Pod selector. It works the same way as the .spec.selector of a Job or other resources.
  • .spec.selector is an object consisting of two fields:
    • matchLabels - works the same as the .spec.selector of a ReplicationController.
    • matchExpressions - allows building more sophisticated selectors by specifying a key, a list of values, and an operator relating the key to the values.
  • When both fields are specified, the result is ANDed.
    • If .spec.selector is specified, it must match .spec.template.metadata.labels. If it is not specified, they default to being equivalent. A config where they do not match is rejected by the API.
    • Also, you should not create Pods whose labels match this selector, whether directly, via another DaemonSet, or via another controller such as a ReplicationController; otherwise the DaemonSet controller will think those Pods were created by it. Kubernetes will not stop you from doing this. One case where you might want to do this is to manually create a Pod with a different value on a Node for testing.
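As an illustration of combining the two selector fields, a selector might look like this (a hedged sketch; the label keys and values are examples only, not taken from the manifests below):

```yaml
selector:
  matchLabels:
    app: filebeat              # Pod must carry app=filebeat
  matchExpressions:            # ANDed with matchLabels
  - key: release
    operator: In               # operators: In, NotIn, Exists, DoesNotExist
    values: ["stable", "canary"]
```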

4. Communicating with Daemon Pods

  • Some possible patterns for communicating with Pods in a DaemonSet are:
    • Push: Pods in the DaemonSet are configured to send updates to another service, such as a stats database. They have no clients.
    • NodeIP and known port: Pods in the DaemonSet can use a hostPort, so the Pods are reachable via the Node IPs. Clients know the list of Node IPs somehow, and know the port by convention.
    • DNS: create a Headless Service with the same Pod selector, then discover the DaemonSet using the endpoints resource or retrieve multiple A records from DNS.
    • Service: create a Service with the same Pod selector, and use the Service to reach a daemon on a random Node. (There is no way to reach a specific Node.)
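The DNS pattern above can be sketched as a headless Service; this is a minimal sketch, and the Service name and port are assumptions (the app: filebeat label matches the DaemonSet example below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: filebeat-headless     # assumed name
  namespace: prod
spec:
  clusterIP: None             # headless: DNS returns the Pods' A records directly
  selector:
    app: filebeat             # same Pod selector as the DaemonSet
  ports:
  - port: 9000                # assumed port
    targetPort: 9000
```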

1.2.2 Deploying filebeat on every node with a DaemonSet

1. Create the DaemonSet controller resource YAML
chapter5]# cat filebeat-ds.yaml
#apiVersion: API group/version of this resource
apiVersion: apps/v1
#the resource type is DaemonSet
kind: DaemonSet
#metadata
metadata:
  #name of the controller
  name: filebeat-ds
  #namespace the controller belongs to
  namespace: prod
  #labels on the controller, optional
  labels:
    app: filebeat
#DaemonSet spec
spec:  #contains selector and template
  #label selector: which Pod labels are matched (exactly one Pod per node is matched)
  selector:
    ##match with matchLabels: every Pod created carrying the label app=filebeat is matched
    matchLabels:
      app: filebeat
  #Pod template; the Pods below automatically get the app: filebeat label (the template is used to create a Pod on any Node that does not yet have a corresponding Pod)
  template:
    #metadata of the Pods created on the Nodes
    metadata:
      #Pod labels
      labels:
        app: filebeat
      #Pod name
      name: filebeat
    #Pod spec
    spec:
      #containers in the Pod
      containers:
      #name of the container in the Pod
      - name: filebeat
        #image used to run this container
        image: ikubernetes/filebeat:5.6.5-alpine
        #environment variables the container depends on (env is a container-level field and is a list)
        env:
        #name of the variable being passed
        - name: REDIS_HOST
          #value being passed; here, the Redis address
          value: db.ikubernetes.io:6379
        - name: LOG_LEVEL
          #log level for collection
          value: info
 
2. Create the Pods defined by the DaemonSet via the declarative interface
    chapter5]# kubectl apply -f filebeat-ds.yaml
        daemonset.apps/filebeat-ds created
 
3. View the created pod containers (a Pod runs on every node; this cluster has 5 worker Nodes, so 5 Pods run, but none on the master because the master has a taint)
    chapter5]# kubectl get pods -n prod --show-labels -l app=filebeat -o wide
        NAME                READY   STATUS    RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES   LABELS
        filebeat-ds-h8w4p   1/1     Running   0          8m49s   192.168.1.150   192.168.1.51    <none>           <none>            app=filebeat,controller-revision-hash=fb6b847cc,pod-template-generation=1
        filebeat-ds-lf852   1/1     Running   0          8m49s   192.168.1.41    192.168.1.247   <none>           <none>            app=filebeat,controller-revision-hash=fb6b847cc,pod-template-   generation=1
        filebeat-ds-qn6z9   1/1     Running   0          8m49s   192.168.1.116   192.168.1.141   <none>           <none>            app=filebeat,controller-revision-hash=fb6b847cc,pod-template-generation=1
        filebeat-ds-tms5t   1/1     Running   0          8m49s   192.168.1.170   192.168.1.185   <none>           <none>            app=filebeat,controller-revision-hash=fb6b847cc,pod-template-generation=1
        filebeat-ds-wsccr   1/1     Running   0          8m49s   192.168.1.223   192.168.1.211   <none>           <none>            app=filebeat,controller-revision-hash=fb6b847cc,pod-template-generation=1
 
4. View the taints on the master node
    chapter5]# kubectl describe node k8s.master1
    Name:               k8s.master1
    Roles:              master
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8s.master1
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/master=
    Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"72:44:5d:13:bf:79"}
                        flannel.alpha.coreos.com/backend-type: vxlan
                        flannel.alpha.coreos.com/kube-subnet-manager: true
                        flannel.alpha.coreos.com/public-ip: 192.168.20.236
                        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Sat, 25 Apr 2020 00:49:31 +0800
    Taints:             node-role.kubernetes.io/master:NoSchedule    # taint info
    Unschedulable:      false
    Lease:
    HolderIdentity:  k8s.master1
    AcquireTime:     <unset>
    RenewTime:       Mon, 11 May 2020 15:13:09 +0800
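If the DaemonSet should run on the master too, the Pod template can tolerate the taint shown above; a minimal sketch (whether to run daemons on masters is a design choice):

```yaml
# added under the DaemonSet's .spec.template.spec
tolerations:
- key: node-role.kubernetes.io/master   # the taint key shown above
  operator: Exists                      # tolerate regardless of value
  effect: NoSchedule
```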

1.2.3 redis + filebeat DaemonSet demo

(1) Edit the DaemonSet yaml file
Multiple resources can be defined in the same yaml file; here redis and filebeat are defined in one file
 
[root@k8s-master mainfests]# vim ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
 
(2) Create the pods
[root@k8s-master mainfests]# kubectl apply -f ds-demo.yaml
deployment.apps/redis created
daemonset.apps/filebeat-ds created
 
(3) Expose the port
[root@k8s-master mainfests]# kubectl expose deployment redis --port=6379
service/redis exposed
[root@k8s-master mainfests]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        16d
myapp        NodePort    10.106.67.242    <none>        80:32432/TCP   13d
nginx        ClusterIP   10.106.162.254   <none>        80/TCP         14d
redis        ClusterIP   10.107.163.143   <none>        6379/TCP       4s
 
[root@k8s-master mainfests]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
filebeat-ds-rpp9p        1/1       Running   0          5m
filebeat-ds-vwx7d        1/1       Running   0          5m
pod-demo                 2/2       Running   6          5d
redis-5b5d6fbbbd-v82pw   1/1       Running   0          36s
 
(4) Test whether redis receives the logs
[root@k8s-master mainfests]# kubectl exec -it redis-5b5d6fbbbd-v82pw -- /bin/sh
/data # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN     
tcp        0      0 :::6379                 :::*                    LISTEN     
 
/data # nslookup redis.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
 
Name:      redis.default.svc.cluster.local
Address 1: 10.107.163.143 redis.default.svc.cluster.local
 
/data # redis-cli -h redis.default.svc.cluster.local
redis.default.svc.cluster.local:6379> KEYS *  # redis started after filebeat, so the logs may already have been shipped, which is why no keys are found
(empty list or set)
 
[root@k8s-master mainfests]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
filebeat-ds-rpp9p        1/1       Running   0          14m
filebeat-ds-vwx7d        1/1       Running   0          14m
pod-demo                 2/2       Running   6          5d
redis-5b5d6fbbbd-v82pw   1/1       Running   0          9m
[root@k8s-master mainfests]# kubectl exec -it filebeat-ds-rpp9p -- /bin/sh
/ # cat /etc/filebeat/filebeat.yml
filebeat.registry_file: /var/log/containers/filebeat_registry
filebeat.idle_timeout: 5s
filebeat.spool_size: 2048
 
logging.level: info
 
filebeat.prospectors:
- input_type: log
  paths:
    - "/var/log/containers/*.log"
    - "/var/log/docker/containers/*.log"
    - "/var/log/startupscript.log"
    - "/var/log/kubelet.log"
    - "/var/log/kube-proxy.log"
    - "/var/log/kube-apiserver.log"
    - "/var/log/kube-controller-manager.log"
    - "/var/log/kube-scheduler.log"
    - "/var/log/rescheduler.log"
    - "/var/log/glbc.log"
    - "/var/log/cluster-autoscaler.log"
  symlinks: true
  json.message_key: log
  json.keys_under_root: true
  json.add_error_key: true
  multiline.pattern: '^\s'
  multiline.match: after
  document_type: kube-logs
  tail_files: true
  fields_under_root: true
 
output.redis:
  hosts: ${REDIS_HOST:?No Redis host configured. Use env var REDIS_HOST to set host.}
  key: "filebeat"
 
[root@k8s-master mainfests]# kubectl get pods -l app=filebeat -o wide
NAME                READY     STATUS    RESTARTS   AGE       IP            NODE
filebeat-ds-rpp9p   1/1       Running   0          16m       10.244.2.12   k8s-node02
filebeat-ds-vwx7d   1/1       Running   0          16m       10.244.1.15   k8s-node01

1.2.4 DaemonSet Rolling Updates

  • A DaemonSet has two update strategy types:
    • OnDelete: the legacy strategy, kept for backward compatibility (it was the default in the old extensions/v1beta1 API). With OnDelete, after you update the DaemonSet template, new DaemonSet pods are only created after you manually delete the old DaemonSet pods. This matches the behavior of DaemonSets in Kubernetes 1.5 and earlier.
    • RollingUpdate: (the default strategy in apps/v1) with RollingUpdate, after you update the DaemonSet template, old DaemonSet pods are terminated and new DaemonSet pods are created automatically, in a controlled fashion. (By default only one node is updated at a time.)
  • To enable rolling updates on a DaemonSet, set .spec.updateStrategy.type to RollingUpdate.
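In the manifest this can be set explicitly; a minimal sketch (maxUnavailable: 1 is already the default):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # update at most one node's Pod at a time
```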
View the DaemonSet update strategy
    chapter5]# kubectl explain ds.spec.updateStrategy
        KIND:     DaemonSet
        VERSION:  apps/v1
        RESOURCE: updateStrategy <Object>
        DESCRIPTION:
            An update strategy to replace existing DaemonSet pods with new pods.
            DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet.
        FIELDS:
            rollingUpdate   <Object>
            Rolling update config params. Present only if type = "RollingUpdate".
        type    <string>
            Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is
            RollingUpdate.
1. View the DaemonSet's current update strategy
    chapter5]# kubectl get ds/filebeat-ds -n prod -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
        RollingUpdate
         
2. Update the DaemonSet template   # any update to .spec.template of a RollingUpdate DaemonSet triggers a rolling update. This can be done with several different kubectl commands.
   # option 1
    Declarative command:
        If you update the DaemonSet from a config file, use kubectl apply:
            kubectl apply -f ds-demo.yaml
    # option 2
    Patch-style commands:
        kubectl edit ds/filebeat-ds
        kubectl patch ds/filebeat-ds -p=<strategic-merge-patch>
     
    # option 3 (most commonly used)
    To update only the container image, you can also use:
        kubectl set image ds/<daemonset-name> <container-name>=<container-new-image>
            Usage:
            kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1... CONTAINER_NAME_N=CONTAINER_IMAGE_N [options]
 
    Update the filebeat-ds image version, as follows:
                                     #  controller type, controller name, container-name=new image, -n namespace
        mainfests]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine -n prod
            daemonset.extensions/filebeat-ds image updated
        mainfests]# kubectl get pods -w  # watch the rolling update progress
        mainfests]# kubectl get pods
        mainfests]# kubectl get ds -n prod -o wide
        # list pods with the pod label selector
        mainfests]# kubectl get pods -n prod -l app=filebeat
 
 
#From the rolling update above, you can see that during the update an old pod is terminated first, then a new pod is created, replacing them step by step; this is the DaemonSet rolling update strategy!

1.2.5 Restricting which nodes a DaemonSet's Pods can be scheduled to with a node label selector (nodeSelector)

chapter5]# kubectl explain pods.spec.nodeSelector
KIND:     Pod
VERSION:  v1
 
FIELD:    nodeSelector <map[string]string>
 
DESCRIPTION:
     NodeSelector is a selector which must be true for the pod to fit on a node.
     Selector which must match a node's labels for the pod to be scheduled on
     that node. More info:
     https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
1. Delete the previously created ds
    chapter5]# kubectl delete ds filebeat-ds -n prod
        daemonset.apps "filebeat-ds" deleted
 
2. Show the k8s cluster nodes' labels
chapter5]# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE   VERSION   LABELS
192.168.1.141   Ready    <none>   10d   v1.16.4   UhostID=uhost-jjckmq54,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=cn-bj2,failure-domain.beta.kubernetes.io/zone=cn-bj2-05,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.1.141,kubernetes.io/os=linux,node.uk8s.ucloud.cn/instance_type=uhost,node.uk8s.ucloud.cn/resource_id=uhost-jjckmq54,role.node.kubernetes.io/k8s-node=true,topology.udisk.csi.ucloud.cn/region=cn-bj2,topology.udisk.csi.ucloud.cn/zone=cn-bj2-05
192.168.1.185   Ready    <none>   10d   v1.16.4   UhostID=uhost-cupuivic,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=cn-bj2,failure-domain.beta.kubernetes.io/zone=cn-bj2-05,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.1.185,kubernetes.io/os=linux,node.uk8s.ucloud.cn/instance_type=uhost,node.uk8s.ucloud.cn/resource_id=uhost-cupuivic,role.node.kubernetes.io/k8s-node=true,topology.udisk.csi.ucloud.cn/region=cn-bj2,topology.udisk.csi.ucloud.cn/zone=cn-bj2-05
192.168.1.211   Ready    <none>   10d   v1.16.4   UhostID=uhost-muy3y1cf,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=cn-bj2,failure-domain.beta.kubernetes.io/zone=cn-bj2-05,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.1.211,kubernetes.io/os=linux,node.uk8s.ucloud.cn/instance_type=uhost,node.uk8s.ucloud.cn/resource_id=uhost-muy3y1cf,role.node.kubernetes.io/k8s-node=true,topology.udisk.csi.ucloud.cn/region=cn-bj2,topology.udisk.csi.ucloud.cn/zone=cn-bj2-05
192.168.1.247   Ready    <none>   10d   v1.16.4   UhostID=uhost-i5jpcqec,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=cn-bj2,failure-domain.beta.kubernetes.io/zone=cn-bj2-05,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.1.247,kubernetes.io/os=linux,node.uk8s.ucloud.cn/instance_type=uhost,node.uk8s.ucloud.cn/resource_id=uhost-i5jpcqec,role.node.kubernetes.io/k8s-node=true,topology.udisk.csi.ucloud.cn/region=cn-bj2,topology.udisk.csi.ucloud.cn/zone=cn-bj2-05
192.168.1.51    Ready    <none>   10d   v1.16.4   UhostID=uhost-1i52wcma,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=cn-bj2,failure-domain.beta.kubernetes.io/zone=cn-bj2-05,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.1.51,kubernetes.io/os=linux,node.uk8s.ucloud.cn/instance_type=uhost,node.uk8s.ucloud.cn/resource_id=uhost-1i52wcma,role.node.kubernetes.io/k8s-node=true,topology.udisk.csi.ucloud.cn/region=cn-bj2,topology.udisk.csi.ucloud.cn/zone=cn-bj2-05
 
3. Create the DaemonSet controller resource YAML, adding a node label selector (nodeSelector) that restricts which nodes the Pods can be scheduled to
    # simulating that only some of the nodes need log collection
 
 
chapter5]# cat filebeat-ds.yaml
#apiVersion: API group/version of this resource
apiVersion: apps/v1
#the resource type is DaemonSet
kind: DaemonSet
#metadata
metadata:
  #name of the controller
  name: filebeat-ds
  #namespace the controller belongs to
  namespace: prod
  #labels on the controller, optional
  labels:
    app: filebeat
#DaemonSet spec
spec:  #contains selector and template
  #label selector: which Pod labels are matched
  selector:
    ##match with matchLabels: every Pod created carrying the label app=filebeat is matched
    matchLabels:
      app: filebeat
  #Pod template; the Pods below automatically get the app: filebeat label (the template is used to create a Pod on any Node that does not yet have a corresponding Pod)
  template:
    #metadata of the Pods created on the Nodes
    metadata:
      #Pod labels
      labels:
        app: filebeat
      #Pod name
      name: filebeat
    #Pod spec
    spec:
      #containers in the Pod
      containers:
      #name of the container in the Pod
      - name: filebeat
        #image used to run this container
        image: ikubernetes/filebeat:5.6.5-alpine
        #environment variables the container depends on (env is a container-level field and is a list)
        env:
        #name of the variable being passed
        - name: REDIS_HOST
          #value being passed; here, the Redis address
          value: db.ikubernetes.io:6379
        - name: LOG_LEVEL
          #log level for collection
          value: info
      #node label selector
      nodeSelector:
        #only nodes labeled logcollecting=on will run this controller's Pods
        logcollecting: "on"
 
 
4. Create this controller resource object via the declarative interface
    chapter5]# kubectl apply -f filebeat-ds.yaml
        daemonset.apps/filebeat-ds created
 
    chapter5]# kubectl get ds -n prod
        NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR      AGE   # no node satisfies the node label selector defined in the ds, so no Pod can run on any node
        filebeat-ds   0         0         0       0            0           logcollecting=on   44s
 
 
5. Label one node so it satisfies the node selector defined in the ds controller
    chapter5]# kubectl label node 192.168.1.51 logcollecting="on"
        node/192.168.1.51 labeled
 
    chapter5]# kubectl get ds -n prod
        NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR      AGE    # the ds controller's node selector now sees one node matching the label, so one Pod runs
        filebeat-ds   1         1         1       1            1           logcollecting=on   5m21s  
 
    chapter5]# kubectl get pods -n prod -o wide
        NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE            NOMINATED NODE   READINESS GATES
        filebeat-ds-dns76        1/1     Running   0          2m12s   192.168.1.143   192.168.1.51    <none>           <none>
 
 
 
6. Remove the added label from node 192.168.1.51 and check the number of Pod resources under the controller
    chapter5]# kubectl label node 192.168.1.51 logcollecting-
        node/192.168.1.51 labeled
 
    chapter5]# kubectl get ds -n prod
        NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR      AGE
        filebeat-ds   0         0         0       0            0           logcollecting=on   12m

2. Pod Controller: Job

2.1 Single-completion Jobs

# the Job controller is not a primary Pod controller; this is only a demo
 
 
1. Create a Job controller resource
chapter5]# cat job-example.yaml
apiVersion: batch/v1
# the resource type is Job
kind: Job
metadata:
  # name of the Job controller
  name: job-example
  namespace: prod
spec:
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh",  "-c", "sleep 20"]
      # the Pod is not restarted on failure
      restartPolicy: Never
 
 
 
2. Create this Job controller via the declarative interface
    chapter5]# kubectl apply -f job-example.yaml
        job.batch/job-example created
 
3. View the jobs in the cluster
    chapter5]# kubectl get job -n prod -o wide
        NAME          COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES   SELECTOR
        job-example   0/1           36s        36s   myjob        alpine   controller-uid=16af4c08-1308-4c46-92f3-0e87c89ba799
 
4. View the running status of the pod managed by the Job controller
    chapter5]# kubectl get pods -n prod -o wide
        NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
        filebeat-ds-4c2b4        1/1     Running   0          121m    10.244.1.22   k8s.node1   <none>           <none>
        filebeat-ds-j7fss        1/1     Running   0          121m    10.244.2.17   k8s.node2   <none>           <none>
        job-example-j8j2m        1/1     Running   0          2m29s   10.244.1.23   k8s.node1   <none>           <none>
 
 
5. After a while, check this Job controller again: it has completed once. Checking its pods again shows they have finished, i.e. completed successfully
    chapter5]# kubectl get job -n prod -o wide
        NAME          COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
        job-example   1/1           3m1s       3m11s   myjob        alpine   controller-uid=16af4c08-1308-4c46-92f3-0e87c89ba799
 
     chapter5]# kubectl get pods -n prod -o wide
        NAME                     READY   STATUS      RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
        filebeat-ds-4c2b4        1/1     Running     0          123m    10.244.1.22   k8s.node1   <none>           <none>
        filebeat-ds-j7fss        1/1     Running     0          123m    10.244.2.17   k8s.node2   <none>           <none>
        job-example-j8j2m        0/1     Completed   0          4m10s   10.244.1.23   k8s.node1   <none>           <none>
 
 
6. Delete the finished job
    chapter5]# kubectl delete job job-example -n prod
        job.batch "job-example" deleted
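Beyond the fields used above, a Job's retry and deadline behavior can be tuned in its spec; a minimal sketch with example values:

```yaml
spec:
  backoffLimit: 4              # mark the Job failed after 4 failed retries (default 6)
  activeDeadlineSeconds: 120   # terminate the Job if it runs longer than 120s
  template:
    spec:
      restartPolicy: Never     # a Job's Pods must use Never or OnFailure
```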

2.2 Multi-completion Jobs

  • By default the total number of completions is 1, and the parallelism is also 1

2.2.1 Multi-completion Jobs (completions)

  • completions specifies the number of jobs, i.e., the total task count
1. Create a multi-completion Job controller resource. The default parallelism is 1, so in the example below with 5 completions, 5 pods are started, but each pod runs only after the previous one finishes, until all pods have completed
chapter5]# cat job-multi.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi
  namespace: prod
spec:
  completions: 5
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh",  "-c", "sleep 10"]
      restartPolicy: Never
 
2. Create the Job controller via the declarative interface
    chapter5]# kubectl apply -f job-multi.yaml
        job.batch/job-multi created
 
3. View the pods' status
    chapter5]# kubectl get pods -n prod -w
        NAME                     READY   STATUS              RESTARTS   AGE
        filebeat-ds-4c2b4        1/1     Running             0          129m
        filebeat-ds-j7fss        1/1     Running             0          129m
        job-example-j8j2m        0/1     Completed           0          10m
        job-multi-c4x6h          0/1     ContainerCreating   0          3s
        job-multi-srcmk          0/1     Completed           0          45s
        myapp-5c554c56fc-4q8tw   1/1     Running             0          23h
        myapp-5c554c56fc-n7stm   1/1     Running             0          23h
        myapp-5c554c56fc-rr7vf   1/1     Running             0          23h
        job-multi-c4x6h          1/1     Running             0          38s
        job-multi-c4x6h          0/1     Completed           0          48s
        job-multi-7xmdf          0/1     Pending             0          0s
        job-multi-7xmdf          0/1     Pending             0          0s
        job-multi-7xmdf          0/1     ContainerCreating   0          0s
        job-multi-7xmdf          1/1     Running             0          21s
        job-multi-7xmdf          0/1     Completed           0          31s
        job-multi-ngvhn          0/1     Pending             0          0s
        job-multi-ngvhn          0/1     Pending             0          0s
        job-multi-ngvhn          0/1     ContainerCreating   0          0s
        job-multi-ngvhn          1/1     Running             0          23s
        job-multi-ngvhn          0/1     Completed           0          33s
        job-multi-mm84j          0/1     Pending             0          0s
        job-multi-mm84j          0/1     Pending             0          0s
        job-multi-mm84j          0/1     ContainerCreating   0          0s
        job-multi-mm84j          1/1     Running             0          29s
        job-multi-mm84j          0/1     Completed           0          39s
 
    chapter5]# kubectl get pod -n prod
        NAME                     READY   STATUS      RESTARTS   AGE
        filebeat-ds-4c2b4        1/1     Running     0          132m
        filebeat-ds-j7fss        1/1     Running     0          132m
        job-example-j8j2m        0/1     Completed   0          13m
        job-multi-7xmdf          0/1     Completed   0          3m3s
        job-multi-c4x6h          0/1     Completed   0          3m51s
        job-multi-mm84j          0/1     Completed   0          119s
        job-multi-ngvhn          0/1     Completed   0          2m32s
        job-multi-srcmk          0/1     Completed   0          4m33s
 
4. View the job status in the cluster
    chapter5]# kubectl get job -n prod
        NAME          COMPLETIONS   DURATION   AGE
        job-example   1/1           3m1s       14m
        job-multi     5/5           3m13s      4m59s      # 5 completions in total / 5 completed
 
 
5. Delete the finished job
    chapter5]# kubectl delete job job-multi -n prod
        job.batch "job-multi" deleted

2.2.2 Multi-completion Jobs (completions, parallelism)

  • completions : total number of completions
  • parallelism : number of completions run in parallel
1. Create a multi-completion Job controller resource. parallelism specifies how many completions run in parallel; the example below executes in the order 2-2-1
 
 
chapter5]# cat job-multi.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi
  namespace: test
spec:
  # total number of completions for the job
  completions: 5
  # number of completions run in parallel
  parallelism: 2
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh",  "-c", "sleep 10"]
      restartPolicy: Never
 
2. Create the controller via the declarative interface
    chapter5]# kubectl create ns test
        namespace/test created
    chapter5]# kubectl apply -f job-multi.yaml
        job.batch/job-multi created
 
3. View the pods' status
    chapter5]# kubectl get pods -n test -w
        NAME              READY   STATUS              RESTARTS   AGE
        job-multi-22627   0/1     ContainerCreating   0          27s
        job-multi-wpxlk   0/1     ContainerCreating   0          27s
        job-multi-wv9dx   0/1     ContainerCreating   0          27s
        job-multi-22627   1/1     Running             0          29s
        job-multi-22627   0/1     Completed           0          39s
        job-multi-m9szl   0/1     Pending             0          0s
        job-multi-m9szl   0/1     Pending             0          0s
        job-multi-m9szl   0/1     ContainerCreating   0          0s
        job-multi-wv9dx   1/1     Running             0          49s
        job-multi-wpxlk   1/1     Running             0          54s
        job-multi-wv9dx   0/1     Completed           0          59s
        job-multi-qc46x   0/1     Pending             0          0s
        job-multi-qc46x   0/1     Pending             0          0s
        job-multi-qc46x   0/1     ContainerCreating   0          0s
        job-multi-wpxlk   0/1     Completed           0          63s
        job-multi-m9szl   1/1     Running             0          34s
        job-multi-qc46x   1/1     Running             0          22s
        job-multi-m9szl   0/1     Completed           0          44s
        job-multi-qc46x   0/1     Completed           0          33s
 
 
4. View the job status in the cluster
    chapter5]# kubectl get job -n test -o wide
    NAME        COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
    job-multi   5/5           92s        2m48s   myjob        alpine   controller-uid=5ed78462-763f-499d-8f96-e324aaebeb08
 
 
5. Delete the finished job
    chapter5]# kubectl delete job job-multi -n test
        job.batch "job-multi" deleted

3. CronJob Periodic Jobs (CronJob controls → Job → Pod)

# the CronJob controller is not a primary Pod controller; this is only a demo
 
 
1. Create a CronJob controller resource for a periodic job
chapter5]# cat cronjob-example.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
  labels:
    app: mycronjob
spec:
  # schedule for the periodic job (minute hour day-of-month month day-of-week)
  schedule: "*/2 * * * *"
  # template for the periodic Job (CronJob controls → Job → Pod)
  jobTemplate:
    metadata:
      labels:
        app: mycronjob-jobs
    spec:
      # number of completions run in parallel
      parallelism: 2
      template:
        spec:
          containers:
          - name: myjob
            image: alpine
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster; sleep 10
          restartPolicy: OnFailure
 
chapter5]# kubectl get cronjob -n test
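When runs can overlap or pile up, a few CronJob spec fields are worth knowing; a minimal sketch with example values:

```yaml
spec:
  concurrencyPolicy: Forbid       # Allow (default) | Forbid | Replace
  startingDeadlineSeconds: 60     # skip a scheduled run missed by more than 60s
  successfulJobsHistoryLimit: 3   # finished Jobs to keep (default 3)
  failedJobsHistoryLimit: 1       # failed Jobs to keep (default 1)
```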
posted @ 2021-06-13 22:46  SRE运维充电站