k8s -- The Five Controllers

1. The Five Controllers in k8s

1.1 Controller Types in k8s

Kubernetes ships with many built-in controllers. Each works like a state machine that drives Pods toward a desired state and behavior:

  • Deployment: suited for deploying stateless services
  • StatefulSet: suited for deploying stateful services
  • DaemonSet: deployed once, then every node runs a copy. Typical scenarios:
    • running a cluster storage daemon on every node, e.g. glusterd or ceph
    • running a log-collection daemon on every node, e.g. fluentd or logstash
    • running a monitoring daemon on every node, e.g. Prometheus Node Exporter
  • Job: runs a task once
  • CronJob: runs tasks on a schedule

Controllers are also called workloads: Pods rely on them for day-to-day operations such as scaling and upgrading.

1.2 The Deployment Controller

Suited for deploying stateless application services. A Deployment manages Pods and ReplicaSets, providing rollout, replica management, rolling updates, and rollback. It also supports declarative updates, for example updating only the image.

Write a YAML file and create the nginx Pod resources:

vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3   
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx1
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@master test]# kubectl create -f nginx-deployment.yaml 
deployment.apps/nginx-deployment created
[root@master test]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78cdb5b557-86fm9   1/1     Running   0          44s
nginx-deployment-78cdb5b557-jqvng   1/1     Running   0          44s
nginx-deployment-78cdb5b557-rkxtk   1/1     Running   0          44s

View the controller's parameters with either describe or edit:

[root@master test]# kubectl describe deploy nginx-deployment
'or use edit'
[root@master test]# kubectl edit deploy nginx-deployment
  strategy:
    rollingUpdate:        'this section describes the rolling-update mechanism'
      maxSurge: 25%       'up to 25% extra pods during an update (at most 125% of the desired count)'
      maxUnavailable: 25%  'up to 25% of pods may be unavailable during an update (at least 75% remain available)'
    type: RollingUpdate
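The percentages above translate into absolute pod counts during a rollout: Kubernetes rounds maxSurge up and maxUnavailable down. A minimal Python sketch of that rounding rule (an illustration, not Kubernetes code):

```python
import math

def rolling_update_bounds(replicas, max_surge_pct, max_unavailable_pct):
    """Translate percentage-based rolling-update settings into absolute pod counts.
    Kubernetes rounds maxSurge up and maxUnavailable down."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    max_total = replicas + surge        # most pods that may exist mid-rollout
    min_ready = replicas - unavailable  # fewest ready pods guaranteed mid-rollout
    return max_total, min_ready

# The 3-replica Deployment above: at most 4 pods, never fewer than 3 ready
print(rolling_update_bounds(3, 25, 25))
```

With small replica counts the rounding matters: at 3 replicas, 25% maxUnavailable rounds down to 0, so every old pod stays available until its replacement is ready.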

View the controller's revision history; rolling updates and rollbacks are based on it:

[root@master test]# kubectl rollout history deploy/nginx-deployment
deployment.extensions/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>    'only one revision, so no rolling update has run yet; after an update there would be at least 2'

1.3 The StatefulSet Controller

  • Suited for deploying stateful applications
  • Gives each Pod an independent lifecycle, preserving startup order and uniqueness
  • Stable, unique network identifiers and persistent storage (e.g. an etcd config file becomes unusable if node addresses change)
  • Ordered, graceful deployment, scaling, deletion, and termination (e.g. a MySQL master/slave setup: start the master first, then the slaves)
  • Ordered rolling updates
  • Typical use case: databases

Characteristics of stateless services:

  • a Deployment treats all of its Pods as identical
  • no ordering requirements
  • no preference for which node a Pod runs on
  • free to scale up and down at will

Characteristics of stateful services:

  • instances differ from one another; each has its own identity and metadata, e.g. etcd or ZooKeeper
  • instances are not interchangeable, and often depend on external storage

The difference between a regular Service and a headless Service

  • Service: an access policy for a group of Pods, providing a cluster IP for in-cluster communication plus load balancing and service discovery
  • Headless Service: needs no cluster IP and resolves directly to the individual Pod IPs; headless Services are commonly used for stateful StatefulSet deployments

Create the headless Service and DNS resources.
Because the IP addresses of stateful Pods are dynamic, a headless Service must be paired with a DNS service.
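With a headless Service, each StatefulSet Pod gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. A small sketch of that naming scheme (assuming the default cluster.local cluster domain):

```python
def statefulset_pod_dns(statefulset, service, replicas, namespace="default"):
    """Stable per-pod DNS names a headless Service gives a StatefulSet
    (assumes the default cluster.local cluster domain)."""
    return [f"{statefulset}-{i}.{service}.{namespace}.svc.cluster.local"
            for i in range(replicas)]

for name in statefulset_pod_dns("nginx-statefulset", "nginx", 3):
    print(name)
```

These are exactly the names the nslookup tests later in this section resolve.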

1. Write the YAML file and create the Service resource

[root@master test]# vim nginx-headless.yaml

apiVersion: v1
kind: Service    'create a Service-type resource'
metadata: 
  name: nginx-headless
  labels: 
    app: nginx
spec: 
  ports: 
  - port: 80
    name: web
  clusterIP: None     'do not allocate a cluster IP'
  selector: 
    app: nginx
[root@master test]# kubectl create -f nginx-headless.yaml 
service/nginx-headless created
[root@master test]# kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   15d
nginx-headless   ClusterIP   None         <none>        80/TCP    16s

2. Configure the DNS service, created from a YAML file

[root@master test]# vim coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount    'a service account: provides identity for processes in Pods and for external users'
metadata:
  name: coredns
  namespace: kube-system    'specify the namespace'
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole    'a role defining access permissions'
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding    'bind the cluster role to a user (the ServiceAccount)'
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap      'changes how service discovery works'
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |    'the CoreDNS configuration file'
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure    
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@master test]# kubectl create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master test]# kubectl get pod,svc -n kube-system   'view the pod and svc resources in the kube-system namespace'
NAME                                        READY   STATUS    RESTARTS   AGE
pod/coredns-56684f94d6-cc9jk                1/1     Running   0          58s
pod/kubernetes-dashboard-7dffbccd68-v6q55   1/1     Running   1          6d2h
pod/kuboard-78bcb484bc-s7svz                1/1     Running   0          2d2h

NAME                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   58s
service/kubernetes-dashboard   NodePort    10.0.0.220   <none>        443:30001/TCP   6d2h
service/kuboard                NodePort    10.0.0.184   <none>        80:32567/TCP    6d2h

3. Create a test Pod and verify DNS resolution

[root@master test]# vim demo08.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@master test]# kubectl create -f demo08.yaml 
pod/dns-test created

[root@master test]# kubectl exec -it dns-test sh  'enter the container to test resolution'
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Create the StatefulSet resources
1. Write the YAML file and create the resources

[root@master test]# vim statefulset-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None    'declare a headless service'
  selector:
    app: nginx
---
apiVersion: apps/v1beta1  
kind: StatefulSet  
metadata:
  name: nginx-statefulset  
  namespace: default
spec:
  serviceName: nginx  
  replicas: 3     'set the replica count'
  selector:
    matchLabels:  
       app: nginx
  template:  
    metadata:
      labels:
        app: nginx  
    spec:
      containers:
      - name: nginx
        image: nginx:latest  
        ports:
        - containerPort: 80
[root@master test]# vim pod-dns-test.yaml     'create a pod resource for testing DNS'
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never

[root@master test]# kubectl delete -f .    'delete all previous resources first'

2. Create the resources and test

[root@master test]# kubectl create -f statefulset-test.yaml 
service/nginx created
statefulset.apps/nginx-statefulset created
[root@master test]# kubectl get pod,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/dns-test              1/1     Running   0          5m1s
pod/nginx-statefulset-0   1/1     Running   0          92s
pod/nginx-statefulset-1   1/1     Running   0          83s
pod/nginx-statefulset-2   1/1     Running   0          44s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   15d
service/nginx        ClusterIP   None         <none>        80/TCP    92s
[root@master test]# kubectl exec -it dns-test sh   'log in to the pod resource to test'
/ # nslookup pod/nginx-statefulset-0
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'pod/nginx-statefulset-0'
/ # nslookup nginx-statefulset-0.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-0.nginx
Address 1: 172.17.29.3 nginx-statefulset-0.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-1.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-1.nginx
Address 1: 172.17.76.3 nginx-statefulset-1.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-2.nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      nginx-statefulset-2.nginx
Address 1: 172.17.29.2 nginx-statefulset-2.nginx.default.svc.cluster.local

Compared with a Deployment, a StatefulSet gives each Pod an identity (the ordinal index makes it unique)!
The three elements of that identity:

1. Domain name: nginx-statefulset-0.nginx

2. Hostname: nginx-statefulset-0

3. Storage (PVC)

Ordered deployment and scaling in a StatefulSet
Ordered deployment (ordinals 0 through N-1)

Ordered scale-down and deletion (ordinals N-1 down to 0)

Whether deploying or deleting, before moving on to the next Pod the StatefulSet controller waits for the current one to terminate or to become Running and Ready.
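The ordering rules above can be sketched as follows (an illustration of the ordinal scheme, not controller code):

```python
def create_order(name, replicas):
    """StatefulSet pods are created one at a time, ordinal 0 up to N-1."""
    return [f"{name}-{i}" for i in range(replicas)]

def delete_order(name, replicas):
    """Scale-down and deletion proceed in reverse, ordinal N-1 down to 0."""
    return [f"{name}-{i}" for i in range(replicas - 1, -1, -1)]

print(create_order("nginx-statefulset", 3))  # created first to last
print(delete_order("nginx-statefulset", 3))  # deleted first to last
```

This matches the pod ages in the earlier kubectl get pod output: nginx-statefulset-0 is oldest, nginx-statefulset-2 is newest.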

1.4 The DaemonSet Controller

  • Runs one Pod on every node;
    newly joined nodes automatically get a Pod as well
  • Use cases: monitoring, distributed storage, log collection, etc.
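The one-pod-per-node behavior amounts to a simple reconciliation loop. A hypothetical sketch of the idea (not the actual controller code):

```python
def daemonset_diff(nodes, pods_on):
    """Reconcile 'one pod per node': create a pod on any node that lacks one,
    and remove pods whose node has left the cluster."""
    to_create = [n for n in nodes if n not in pods_on]
    to_delete = [n for n in pods_on if n not in nodes]
    return to_create, to_delete

# A new node joins: the DaemonSet schedules a pod there automatically
print(daemonset_diff(["node01", "node02", "node03"], ["node01", "node02"]))
```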

Write the YAML file and create the resource to test:

[root@master test]# vim ds.yaml
apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80

Check the deployment: the DaemonSet's Pods have been scheduled onto both worker nodes.

[root@master test]# kubectl create -f ds.yaml 
daemonset.apps/nginx-deployment created
[root@master test]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-deployment-gdsnd   1/1     Running   0          6s
nginx-deployment-z2dbl   1/1     Running   0          6s

1.5 The Job Controller

  • Runs a task once, like a one-off job in Linux
  • Use cases: offline data processing, video transcoding, and similar batch workloads

Write the YAML file and create the resource to test:

[root@master test]# vim job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]    'compute pi'
      restartPolicy: Never
  backoffLimit: 4     'the default retry limit is 6, lowered to 4 here; with restartPolicy: Never, failures are retried with new Pods, so cap the count'
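Between retries, the Job controller applies an exponential back-off (10s, 20s, 40s, ..., capped at six minutes). A rough sketch of the delay schedule implied by backoffLimit (an approximation for illustration):

```python
def retry_delays(backoff_limit, base=10, cap=360):
    """Approximate per-retry delays for a failed Job: exponential back-off
    starting at 10s, doubling each time, capped at 6 minutes."""
    delays, d = [], base
    for _ in range(backoff_limit):
        delays.append(min(d, cap))
        d *= 2
    return delays

print(retry_delays(4))  # delays before each of the 4 allowed retries
```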

View the Job resource:

[root@master test]# kubectl create -f job.yaml 
job.batch/pi created
[root@master test]# kubectl get pods
NAME       READY   STATUS      RESTARTS   AGE
pi-dscgp   0/1     Completed   0          76s    'the pod finishes once the run succeeds'
[root@master test]# kubectl logs pi-dscgp   'the result is in the logs'
3.141592653589793238462643.....

1.6 The CronJob Controller

  • Periodic tasks, like crontab in Linux
  • Use cases: notifications, backups, etc.

Write the YAML file:

[root@master test]# vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
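The schedule field uses standard five-field cron syntax (minute hour day-of-month month day-of-week). A simplified parser for the minute field shows why "*/1 * * * *" fires every minute (illustration only, not a full cron implementation):

```python
def minute_field_fires(field):
    """Expand a cron minute field ('*', '*/n', or a comma list) into the
    minutes of each hour at which the job fires. Simplified: no ranges."""
    if field == "*":
        return list(range(60))
    if field.startswith("*/"):
        return list(range(0, 60, int(field[2:])))
    return sorted(int(x) for x in field.split(","))

print(minute_field_fires("*/15"))      # fires at minutes 0, 15, 30, 45
print(len(minute_field_fires("*/1")))  # every minute: 60 firings per hour
```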

Create the resource:

[root@master test]# kubectl create -f cronjob.yaml 
cronjob.batch/hello created
[root@master test]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        <none>          38s
[root@master test]# kubectl get pods
NAME                     READY   STATUS      RESTARTS   AGE
hello-1602677340-dt9hq   0/1     Completed   0          39s

View the log output:

[root@master test]# kubectl logs hello-1602677340-dt9hq
Wed Oct 14 12:09:21 UTC 2020
Hello from the Kubernetes cluster
[root@master test]# kubectl get pods
NAME                     READY   STATUS      RESTARTS   AGE
hello-1602677340-dt9hq   0/1     Completed   0          2m21s
hello-1602677400-2nd48   0/1     Completed   0          81s
hello-1602677460-qp2xt   0/1     Completed   0          20s

Use CronJobs with care and delete them once finished; otherwise their completed Pods accumulate and consume resources.

 

posted @ 2021-01-27 16:35  星火撩原