Linux Operations and Architecture: Kubernetes Applications
一、Deployment
Kubernetes manages the Pod lifecycle through various Controllers. To cover different workloads, it provides several resource types, including Deployment, ReplicaSet, DaemonSet, StatefulSet, and Job.
1、Create a Deployment
kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
The command above deploys nginx-deployment with two replicas; the container image is nginx:1.7.9.
① Check the status of nginx-deployment
[root@k8s-node1 ~]# kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2         2         2            2           6h
② Inspect nginx-deployment in detail
[root@k8s-node1 ~]# kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Thu, 05 Dec 2019 09:52:01 +0800
Labels:                 run=nginx-deployment
Annotations:            deployment.kubernetes.io/revision=1
Selector:               run=nginx-deployment
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  run=nginx-deployment
  Containers:
   nginx-deployment:
    Image:        nginx:1.7.9
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-6b5c99b6fd (2/2 replicas created)
Events:          <none>
The NewReplicaSet field shows that a ReplicaSet named nginx-deployment-6b5c99b6fd was created, and the Events section records that ReplicaSet's startup. This confirms that a Deployment manages its Pods through a ReplicaSet, which can be viewed with:
[root@k8s-node1 ~]# kubectl get replicaset
NAME                          DESIRED   CURRENT   READY     AGE
nginx-deployment-6b5c99b6fd   2         2         2         6h
③ Inspect the ReplicaSet in detail
[root@k8s-node1 ~]# kubectl describe replicaset nginx-deployment-6b5c99b6fd
Name:           nginx-deployment-6b5c99b6fd
Namespace:      default
Selector:       pod-template-hash=2617556298,run=nginx-deployment
Labels:         pod-template-hash=2617556298
                run=nginx-deployment
Annotations:    deployment.kubernetes.io/desired-replicas=2
                deployment.kubernetes.io/max-replicas=3
                deployment.kubernetes.io/revision=1
Controlled By:  Deployment/nginx-deployment
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  pod-template-hash=2617556298
           run=nginx-deployment
  Containers:
   nginx-deployment:
    Image:        nginx:1.7.9
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:          <none>
④ List the Pods
[root@k8s-node1 ~]# kubectl get pod
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-6b5c99b6fd-9mxc7   1/1       Running   0          6h
nginx-deployment-6b5c99b6fd-wq6vb   1/1       Running   0          6h
⑤ Inspect a Pod in detail
[root@k8s-node1 ~]# kubectl describe pod nginx-deployment-6b5c99b6fd-9mxc7
Name:           nginx-deployment-6b5c99b6fd-9mxc7
Namespace:      default
Node:           192.168.56.12/192.168.56.12
Start Time:     Thu, 05 Dec 2019 09:52:03 +0800
Labels:         pod-template-hash=2617556298
                run=nginx-deployment
Annotations:    <none>
Status:         Running
IP:             10.2.72.5
Controlled By:  ReplicaSet/nginx-deployment-6b5c99b6fd    # this Pod was created by ReplicaSet nginx-deployment-6b5c99b6fd
Containers:
  nginx-deployment:
    Container ID:   docker://a9768d66dd2cee284ca310250cb01d7814251335d5104ab98d0444e295e7a94d
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 05 Dec 2019 09:53:20 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:         /var/run/secrets/kubernetes.io/serviceaccount from default-token-jqfzk (ro)
2、The full Deployment creation flow
- A user creates a Deployment with kubectl
- The Deployment creates a ReplicaSet
- The ReplicaSet creates the Pods
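This ownership chain is recorded in each object's metadata. As an illustrative sketch (the field values below mirror the example above but are not taken from actual cluster output), a Pod created this way carries an ownerReferences entry pointing at its ReplicaSet:

```yaml
# Fragment of `kubectl get pod <pod-name> -o yaml` (illustrative):
# the Pod is owned by the ReplicaSet, which is in turn owned by the Deployment.
metadata:
  name: nginx-deployment-6b5c99b6fd-9mxc7
  ownerReferences:
  - apiVersion: extensions/v1beta1   # apps/v1 on newer clusters
    kind: ReplicaSet
    name: nginx-deployment-6b5c99b6fd
    controller: true
```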
3、Creating from a YAML file
①kubectl apply -f nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-deployment
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx:1.7.9
        name: nginx
② Generate the YAML file from a command
kubectl create deployment nginx-deployment --image=nginx:1.7.9 --dry-run -o yaml >nginx-deployment.yaml
kubectl apply can not only create Kubernetes resources but also update them. Kubernetes also provides kubectl create, kubectl replace, kubectl edit, and kubectl patch; in day-to-day work, prefer kubectl apply, which covers more than 90% of use cases.
4、The Deployment configuration file
① Explained using nginx-deployment as the example
② Delete the Deployment
kubectl delete deployment nginx-deployment
or
kubectl delete -f nginx-deployment.yaml
5、Elastic scaling of Pods
① Check which node each Pod is scheduled on
[root@k8s-node1 ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-deployment-6b5c99b6fd-9mxc7   1/1       Running   0          3d        10.2.72.5   192.168.56.12
nginx-deployment-6b5c99b6fd-wq6vb   1/1       Running   0          3d        10.2.73.1   192.168.56.13
② Scale the replica count to 5
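The command used for this step is not shown in the original. One way to do it, assuming the nginx-deployment.yaml from section 3, is to edit spec.replicas and re-apply (kubectl scale deployment nginx-deployment --replicas=5 achieves the same result):

```yaml
# In nginx-deployment.yaml, change the replica count, then run:
#   kubectl apply -f nginx-deployment.yaml
spec:
  replicas: 5
```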
[root@k8s-node1 ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-deployment-6d8fccdfcb-4ql9d   1/1       Running   0          10s       10.2.72.5   192.168.56.12
nginx-deployment-6d8fccdfcb-f7895   1/1       Running   0          10s       10.2.72.6   192.168.56.12
nginx-deployment-6d8fccdfcb-jsmfj   1/1       Running   0          10s       10.2.72.7   192.168.56.12
nginx-deployment-6d8fccdfcb-mcrxz   1/1       Running   0          10s       10.2.73.1   192.168.56.13
nginx-deployment-6d8fccdfcb-qnjvc   1/1       Running   0          10s       10.2.73.2   192.168.56.13
The three newly created Pod replicas are scheduled across k8s-node1 and k8s-node2.
③ Scale the replica count back to 3
[root@k8s-node1 ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-deployment-6d8fccdfcb-4ql9d   1/1       Running   0          8m        10.2.72.5   192.168.56.12
nginx-deployment-6d8fccdfcb-mcrxz   1/1       Running   0          8m        10.2.72.6   192.168.56.12
nginx-deployment-6d8fccdfcb-qnjvc   1/1       Running   0          8m        10.2.73.1   192.168.56.13
6、Failover
- If node k8s-node2 fails, Kubernetes detects that it is unavailable, marks the Pods on it as Unknown, and creates two new Pods on k8s-node1 to keep the replica count at 3
- When k8s-node2 recovers, the Unknown Pods are deleted, but the Pods already running elsewhere are not rescheduled back onto k8s-node2
7、Controlling Pod placement with labels
- By default, the Scheduler places Pods on any available node
- To deploy Pods onto specific nodes, use labels
- A label is a key-value pair; any resource can carry labels as custom attributes
① Label k8s-node2 as an SSD node
[root@k8s-node1 ~]# kubectl label node 192.168.56.12 disktype=ssd
node "192.168.56.12" labeled
② View the node's labels
[root@k8s-node1 ~]# kubectl get node --show-labels
NAME            STATUS    ROLES     AGE       VERSION   LABELS
192.168.56.12   Ready     <none>    5d        v1.10.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=192.168.56.12
③ Edit the YAML file to pin the Pods to k8s-node2
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-deployment
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx:1.7.9
        name: nginx
      nodeSelector:
        disktype: ssd
④ Check which node the Pods run on
[root@k8s-node1 ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-deployment-5c7c98cf4b-fr85m   1/1       Running   0          1m        10.2.72.24   192.168.56.12
nginx-deployment-5c7c98cf4b-krb4z   1/1       Running   0          1m        10.2.72.25   192.168.56.12
nginx-deployment-5c7c98cf4b-x9nqd   1/1       Running   0          1m        10.2.72.23   192.168.56.12
Result: all 3 replicas run on k8s-node2 (192.168.56.12).
⑤ Remove the disktype label
[root@k8s-node1 ~]# kubectl label node 192.168.56.12 disktype-
node "192.168.56.12" labeled
Although the label is removed, the Pods are not redeployed; they stay where they are until the nodeSelector is removed from the YAML and the Deployment is re-applied with kubectl apply.
二、DaemonSet
- Pods created by a Deployment are spread across the nodes, and a single node may run several replicas
- A DaemonSet is different: it runs at most one replica on each node
1、DaemonSet use cases
- Run a storage daemon on every node in the cluster, e.g. glusterd or ceph
- Run a log-collection daemon on every node, e.g. logstash
- Run a monitoring daemon on every node, e.g. Prometheus Node Exporter or collectd
2、Kubernetes system components (kube-flannel-ds, kube-proxy) are themselves deployed as DaemonSets
[root@k8s-node1 ~]# kubectl get pod --namespace=kube-system -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP           NODE
coredns-77c989547b-jkg8k                1/1       Running   0          32d       10.2.11.2    192.168.29.182
coredns-77c989547b-zcwv9                1/1       Running   0          3d        10.2.11.23   192.168.29.182
grafana-core-f796895df-mqfqp            1/1       Running   0          16h       10.2.1.41    192.168.29.176
heapster-64f4f9f59d-tdxsc               1/1       Running   0          3d        10.2.11.26   192.168.29.182
kubernetes-dashboard-66c9d98865-76798   1/1       Running   0          32d       10.2.11.3    192.168.29.182
monitoring-influxdb-644db5c5b6-zfkdd    1/1       Running   0          32d       10.2.11.4    192.168.29.182
node-exporter-8zrg7                     1/1       Running   0          19h       10.2.11.79   192.168.29.182   # monitoring daemon
node-exporter-prshg                     1/1       Running   0          19h       10.2.1.55    192.168.29.176   # monitoring daemon
prometheus-5f9d587758-bcmpm             2/2       Running   0          17h       10.2.1.152   192.168.29.176
3、Node Exporter configuration file
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
三、Job
Containers fall into two categories by how long they run:
- Service containers: provide a continuous service and must keep running, e.g. HTTP servers and daemons
- Work containers: run a one-off task, e.g. a batch job, and exit when it completes
1、Create a Job configuration file
apiVersion: batch/v1              # apiVersion for Job
kind: Job                         # resource type
metadata:
  name: myjob
spec:
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never        # when to restart the container; a Job allows only Never or OnFailure
2、Check the Job
[root@k8s-node1 ~]# kubectl apply -f myjob.yaml
job.batch "myjob" created
[root@k8s-node1 ~]# kubectl get job
NAME      DESIRED   SUCCESSFUL   AGE
myjob     1         1            25s

DESIRED and SUCCESSFUL are both 1: one Pod was started and completed successfully. A SUCCESSFUL of 0 would indicate failure.
After a Job's Pod finishes, its container exits, so the --show-all flag is needed to list the completed Pod:
[root@k8s-node1 ~]# kubectl get pod --show-all
NAME          READY     STATUS      RESTARTS   AGE
myjob-98knz   0/1       Completed   0          3m
- restartPolicy: Never means a failed container is not restarted in place; but because the Job's DESIRED is 1 while SUCCESSFUL is 0, the Job keeps creating new Pods until SUCCESSFUL reaches 1
- restartPolicy: OnFailure means a failed container is restarted automatically
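The endless-retry behavior described above can be bounded. As a sketch not taken from the original (the failing command and the backoffLimit value are illustrative), batch/v1 Jobs support a retry cap:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob-fail              # illustrative name
spec:
  backoffLimit: 4               # give up after 4 failed attempts
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sh", "-c", "exit 1"]   # deliberately failing command
      restartPolicy: Never
```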
3、Running Jobs in parallel
① Create an example file
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  parallelism: 2    # run this many Pods at the same time to speed up the Job
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never
② Check the result
[root@k8s-node1 ~]# kubectl get job
NAME      DESIRED   SUCCESSFUL   AGE
myjob     <none>    2            35s
[root@k8s-node1 ~]# kubectl get pod --show-all
NAME          READY     STATUS      RESTARTS   AGE
myjob-n4sd6   0/1       Completed   0          1m
myjob-q84dx   0/1       Completed   0          1m

Two Pods were started, and their AGE is identical, showing that they ran in parallel.
③ Use cases for parallel Jobs
Batch processing: each replica pulls tasks from a work queue and executes them; more replicas mean shorter total run time and higher throughput.
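Parallelism can also be combined with a fixed completion count. As a sketch (the completions value is illustrative and not part of the original example), the Job below runs 6 Pods to completion, at most 2 at a time:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  completions: 6    # the Job succeeds after 6 Pods complete successfully
  parallelism: 2    # run at most 2 Pods concurrently
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never
```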
4、Scheduled Jobs (CronJob)
Kubernetes also provides CronJob, a scheduled-task feature similar to cron on Linux, which runs a Job on a schedule.
① Create an example file
apiVersion: batch/v2alpha1    # apiVersion for CronJob
kind: CronJob                 # resource type
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"     # when to run the Job; here, once every minute
  jobTemplate:                # Job template, same format as a standalone Job
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello k8s job!"]
          restartPolicy: OnFailure
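A few optional CronJob spec fields are useful in practice. As a sketch (these values are illustrative additions, not part of the original example):

```yaml
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid        # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3    # keep only the 3 most recent completed Jobs
  failedJobsHistoryLimit: 1        # keep only the most recent failed Job
```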
② Create the CronJob
[root@k8s-node1 ~]# kubectl apply -f cronjob.yaml
error: error validating "cronjob.yaml": error validating data: [ValidationError(CronJob.spec): unknown field "spec" in io.k8s.api.batch.v1beta1.CronJobSpec, ValidationError(CronJob.spec): missing required field "jobTemplate" in io.k8s.api.batch.v1beta1.CronJobSpec]; if you choose to ignore these errors, turn validation off with --validate=false
The batch/v2alpha1 CronJob API is not enabled by default; it must be turned on in kube-apiserver.
③ Edit the kube-apiserver configuration
vim /usr/lib/systemd/system/kube-apiserver.service

Add the following parameter:

--runtime-config=batch/v2alpha1=true
Reload systemd and restart kube-apiserver so the new flag takes effect:

systemctl daemon-reload
systemctl restart kube-apiserver.service
④ View the CronJob
kubectl apply -f cronjob.yaml
[root@k8s-node1 ~]# kubectl get cronjob
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE
hello     */1 * * * *   False     0         16s             5m
Check the Job runs:
[root@k8s-node1 ~]# kubectl get jobs
NAME               DESIRED   SUCCESSFUL   AGE
hello-1576119900   1         1            3m
hello-1576119960   1         1            2m
hello-1576120020   1         1            1m
hello-1576120080   1         1            11s
A new Job is started every minute; the output of a given Job's Pod can be viewed with kubectl logs:
[root@k8s-node1 ~]# kubectl get pod
NAME                     READY     STATUS      RESTARTS   AGE
hello-1576120020-d7479   0/1       Completed   0          2m
hello-1576120080-cpxdj   0/1       Completed   0          1m
hello-1576120140-6dqvj   0/1       Completed   0          37s
[root@k8s-node1 ~]# kubectl logs hello-1576120020-d7479
hello k8s job!