Namespaces, Affinity, Pod Lifecycle, and Health Checks
一、Namespaces
1、Switching namespaces
[root@master pod]# kubectl create ns test
namespace/test created
[root@master pod]# kubectl get ns
NAME              STATUS   AGE
default           Active   10h
kube-node-lease   Active   10h
kube-public       Active   10h
kube-system       Active   10h
test              Active   2s
[root@master pod]# kubectl config set-context --current --namespace=kube-system
Context "kubernetes-admin@kubernetes" modified.
[root@master pod]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS        AGE
calico-kube-controllers-d886b8fff-mbdz7   1/1     Running   0               6h42m
calico-node-48tnk                         1/1     Running   0               6h46m
calico-node-jq7mr                         1/1     Running   0               6h46m
calico-node-pdwcr                         1/1     Running   0               6h46m
coredns-567c556887-99cqw                  1/1     Running   1 (6h44m ago)   10h
coredns-567c556887-9sbfp                  1/1     Running   1 (6h44m ago)   10h
etcd-master                               1/1     Running   1 (6h44m ago)   10h
kube-apiserver-master                     1/1     Running   1 (6h44m ago)   10h
kube-controller-manager-master            1/1     Running   1 (6h44m ago)   10h
kube-proxy-7dl5r                          1/1     Running   1 (6h50m ago)   10h
kube-proxy-pvbrg                          1/1     Running   1 (6h44m ago)   10h
kube-proxy-xsqt9                          1/1     Running   1 (6h50m ago)   10h
kube-scheduler-master                     1/1     Running   1 (6h44m ago)   10h
[root@master pod]# kubectl config set-context --current --namespace=default
Context "kubernetes-admin@kubernetes" modified.
[root@master pod]# kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
nginx1   1/1     Running   0          8m44s
2、Setting a resource quota on a namespace
- The pods in the namespace may not, in total, exceed the quota defined for it.
- The quota constrains the resources of every pod in the namespace.
[root@master ns]# cat test.yaml
apiVersion: v1
kind: ResourceQuota        # the quota object for the namespace
metadata:
  name: mem-cpu-qutoa
  namespace: test
spec:
  hard:                    # hard resource limits
    requests.cpu: "2"      # the sum of CPU requests is capped at 2 cores
    requests.memory: 2Gi
    limits.cpu: "4"        # the sum of CPU limits is capped at 4 cores
    limits.memory: 4Gi

# Show the namespace details
[root@master ns]# kubectl describe ns test
Name:         test
Labels:       kubernetes.io/metadata.name=test
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:            mem-cpu-qutoa
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       0     4
  limits.memory    0     4Gi
  requests.cpu     0     2
  requests.memory  0     2Gi

No LimitRange resource.

# Once a quota is defined on the namespace, every pod created in it must declare
# resource limits, otherwise creation fails
[root@master pod]# cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: test
  labels:
    app: nginx-pod
spec:
  containers:
  - name: nginx01
    image: docker.io/library/nginx:1.9.1
    imagePullPolicy: IfNotPresent
    resources:             # per-pod limits; without them a misbehaving pod could eat all the memory
      limits:
        memory: "2Gi"      # 2 GiB of memory
        cpu: "2m"          # millicores: 1000m = 1 core
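The describe output above ends with "No LimitRange resource." A LimitRange can supply default requests and limits, so pods that omit a resources block still pass the quota check. A minimal sketch, assuming modest per-container defaults (the object name and all values here are illustrative, not from the original):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits     # hypothetical name
  namespace: test
spec:
  limits:
  - type: Container
    default:               # applied as limits when a container declares none
      cpu: 500m
      memory: 512Mi
    defaultRequest:        # applied as requests when a container declares none
      cpu: 250m
      memory: 256Mi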
二、Labels
- Labels are extremely important: many resource types locate the objects they manage by matching labels.
- Services, controllers, and similar resources all select what they manage through labels.
# Add a label
[root@master /]# kubectl label pods nginx1 test=01
pod/nginx1 labeled
[root@master /]# kubectl get pod --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
nginx1   1/1     Running   0          45m   app=nginx-pod,test=01

# List the pods that carry a given label
[root@master /]# kubectl get pods -l app=nginx-pod
NAME     READY   STATUS    RESTARTS   AGE
nginx1   1/1     Running   0          48m

# Show pods in all namespaces together with their labels
[root@master /]# kubectl get pods --all-namespaces --show-labels

# Show the value of the key "app" as an extra column
[root@master /]# kubectl get pods -L app
NAME     READY   STATUS    RESTARTS   AGE   APP
nginx1   1/1     Running   0          50m   nginx-pod

# Remove the label
[root@master ~]# kubectl label pod nginx1 app-
pod/nginx1 unlabeled
[root@master ~]# kubectl get pod --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
nginx1   1/1     Running   0          57m   test=01
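Besides the plain equality selector used above, kubectl also understands set-based label expressions; a few hedged examples (the label values are illustrative):

# Pods whose "app" label is one of the listed values
kubectl get pods -l 'app in (nginx-pod, web)'
# Pods that have the "test" key, whatever its value
kubectl get pods -l test
# Pods that do NOT carry the "app" key
kubectl get pods -l '!app'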
三、Affinity
1、Node selectors
Pods are scheduled by host name or by node label. Both forms are hard constraints: a pod that names a nonexistent target is still accepted, but it sits in Pending forever.
1、nodeName
[root@master pod]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: test
spec:
  nodeName: node1          # schedule this pod onto host node1
  containers:
  - name: pod1
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
nginx1   1/1     Running   0          12h   10.244.104.5     node2   <none>           <none>
pod1     1/1     Running   0          34s   10.244.166.130   node1   <none>           <none>
2、nodeSelector
# Label the node so pods can be steered to it
[root@master ~]# kubectl label nodes node1 app=node1
node/node1 labeled
[root@master ~]# kubectl get nodes node1 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node1   Ready    <none>   23h   v1.26.0   app=node1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux

[root@master pod]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  namespace: test
spec:
  nodeSelector:            # schedule by node label
    app: node1             # expressed as a key: value pair
  containers:
  - name: pod2
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
nginx1   1/1     Running   0          12h     10.244.104.5     node2   <none>           <none>
pod1     1/1     Running   0          9m28s   10.244.166.130   node1   <none>           <none>
pod2     1/1     Running   0          12s     10.244.166.131   node1   <none>           <none>
2、Node affinity
- Scheduling is driven by the labels on the nodes.
- It describes the relationship between nodes and pods.
1、Soft affinity (preferred)
- If no node satisfies the preference, the scheduler falls back to placing the pod on any schedulable node.
[root@master pod]# cat pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod4
  namespace: test
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:      # match labels on the nodes
          - key: app
            operator: In
            values: ["node1"]
        weight: 1                # the weight steers the preference
  containers:
  - name: pod4
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod3   1/1     Running   0          6m52s   10.244.166.133   node1   <none>           <none>
pod4   1/1     Running   0          40s     10.244.166.135   node1   <none>           <none>
2、Hard affinity (required)
[root@master pod]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  namespace: test
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:       # select by node label
        - matchExpressions:
          - key: app
            operator: In
            values: ["node1"]    # only nodes carrying app=node1 qualify
  containers:
  - name: pod3
    image: docker.io/library/nginx:1.9.1
    imagePullPolicy: IfNotPresent
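operator is not limited to In. As a hedged sketch, the same required term with NotIn keeps the pod off nodes labeled app=node2; the pod name and label value are illustrative. Node-affinity expressions accept In, NotIn, Exists, DoesNotExist, Gt, and Lt:

apiVersion: v1
kind: Pod
metadata:
  name: pod3a              # hypothetical name
  namespace: test
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: NotIn        # also valid: In, Exists, DoesNotExist, Gt, Lt
            values: ["node2"]      # keep the pod off nodes labeled app=node2
  containers:
  - name: pod3a
    image: docker.io/library/nginx:1.9.1
    imagePullPolicy: IfNotPresent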
3、Pod affinity
- Pods that depend on each other can be co-located to cut network latency, e.g., a web service next to its database.
- Scheduling is driven by the labels of pods that are already running.
1、Soft affinity (preferred)
apiVersion: v1
kind: Pod
metadata:
  name: pod7
  namespace: test
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["pod4"]
          topologyKey: app
        weight: 1
  containers:
  - name: pod7
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod4   1/1     Running   0          24m   10.244.166.136   node1   <none>           <none>
pod5   1/1     Running   0          21m   10.244.166.137   node1   <none>           <none>
pod7   1/1     Running   0          51s   10.244.166.139   node1   <none>           <none>
2、Hard affinity (required)
[root@master pod]# cat pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod5
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: kubernetes.io/hostname   # the topology domain; every node has its own value (node1, node2, ...)
  containers:
  - name: pod5
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

# The value of topologyKey is normally chosen from labels that exist on the nodes
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: app2    # app2 is a label key on node2: the pod must land on a node that carries the app2 key and whose domain already runs a pod labeled app=pod4
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# cat pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: app     # lands in the "app" topology, next to a pod labeled app=pod4
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

# The operator: DoesNotExist case
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: DoesNotExist   # match pods that do NOT carry the key "app"
        topologyKey: app             # ...within nodes labeled with the key "app"; it still ends up on the app-labeled node
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
4、Pod anti-affinity
When two pods are both memory-hungry, anti-affinity can be used to keep them on different nodes.
apiVersion: v1
kind: Pod
metadata:
  name: pod8
  namespace: test
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: kubernetes.io/hostname   # must avoid any node that already runs a pod labeled app=pod4, so it cannot go to node1
  containers:
  - name: pod8
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod4   1/1     Running   0          36m     10.244.166.136   node1   <none>           <none>
pod5   1/1     Running   0          33m     10.244.166.137   node1   <none>           <none>
pod6   1/1     Running   0          7m42s   10.244.166.140   node1   <none>           <none>
pod7   1/1     Running   0          12m     10.244.166.139   node1   <none>           <none>
pod8   1/1     Running   0          8s      10.244.104.6     node2   <none>           <none>
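Anti-affinity also has a preferred (soft) form that merely tries to spread pods and still schedules when it cannot. A minimal sketch, assuming the same app=pod4 label; the pod name and weight are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod9               # hypothetical pod for illustration
  namespace: test
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100        # higher weight = stronger preference to avoid
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["pod4"]
          topologyKey: kubernetes.io/hostname
  containers:
  - name: pod9
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent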
5、Taints
- Taints are set on the nodes.
- kubectl explain node.spec.taints
- Add a taint manually: kubectl taint nodes node1 a=b:NoSchedule
- Taints have three effects:
  - NoExecute: pods already on the node are evicted, and nothing new can be scheduled onto it
  - NoSchedule: pods already on the node stay, but new pods will not be scheduled onto it
  - PreferNoSchedule: the scheduler places pods on the node only as a last resort
# Taint node1
[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod4   1/1     Running   0          41m     10.244.166.136   node1   <none>           <none>
pod5   1/1     Running   0          37m     10.244.166.137   node1   <none>           <none>
pod6   1/1     Running   0          12m     10.244.166.140   node1   <none>           <none>
pod7   1/1     Running   0          17m     10.244.166.139   node1   <none>           <none>
pod8   1/1     Running   0          4m33s   10.244.104.6     node2   <none>           <none>
[root@master pod]# kubectl taint node node1 app=node1:NoExecute
node/node1 tainted

# Every pod on node1 has been evicted
[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
pod8   1/1     Running   0          6m21s   10.244.104.6   node2   <none>           <none>

# Remove the taint
[root@master pod]# kubectl taint node node1 app-
node/node1 untainted
[root@master pod]# kubectl describe node node1 | grep -i taint
Taints:             <none>
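For comparison, the other two effects are applied the same way; a quick sketch using the arbitrary a=b key/value from the explain example above:

# Existing pods stay put, new pods are refused
kubectl taint nodes node1 a=b:NoSchedule
# The scheduler avoids the node when it can, but still uses it as a last resort
kubectl taint nodes node1 a=b:PreferNoSchedule
# Remove every taint with key "a"
kubectl taint nodes node1 a-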
6、Tolerations
- Tolerations are set on pods: a pod that tolerates a node's taint can still be scheduled onto that node.
- kubectl explain pod.spec.tolerations
# The node carries a taint, but the pod tolerates it and can therefore still be
# scheduled onto that node.
# Taint node1
[root@master pod]# kubectl taint node node1 app=node1:NoExecute
node/node1 tainted

# This pod can still be scheduled onto node1
apiVersion: v1
kind: Pod
metadata:
  name: pod10
  namespace: test
spec:
  tolerations:
  - key: "app"
    operator: Equal      # key, value, and effect must all match the node's taint exactly
    value: "node1"
    effect: NoExecute
  containers:
  - name: pod10
    image: docker.io/library/nginx:1.9.1

[root@master pod]# kubectl get pod -n test -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod10   1/1     Running   0          58s   10.244.166.142   node1   <none>           <none>
pod8    1/1     Running   0          27m   10.244.104.6     node2   <none>           <none>

apiVersion: v1
kind: Pod
metadata:
  name: pod11
  namespace: test
spec:
  tolerations:
  - key: "app"
    operator: Exists     # with Exists the value acts as a wildcard: any value of "app" with effect NoExecute is tolerated
    value: ""
    effect: NoExecute
  containers:
  - name: pod11
    image: docker.io/library/nginx:1.9.1
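One step further: leaving out the key entirely while using operator: Exists tolerates every taint, which is roughly how DaemonSet pods survive on tainted nodes. A minimal sketch (the pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod12              # hypothetical name
  namespace: test
spec:
  tolerations:
  - operator: Exists       # no key and no effect given: matches every taint
  containers:
  - name: pod12
    image: docker.io/library/nginx:1.9.1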
四、Pod lifecycle
- Init containers run first; only after they all complete does the main container start.
- The main container additionally supports post-start (postStart) and pre-stop (preStop) hooks.
1、Init containers
[root@master pod]# cat init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  namespace: test
spec:
  initContainers:
  - name: init-pod1
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","touch /11.txt"]
  containers:
  - name: main-pod
    image: docker.io/library/nginx:1.9.1

[root@master pod]# kubectl get pod -n test -w
NAME       READY   STATUS            RESTARTS   AGE
init-pod   0/1     Pending           0          0s
init-pod   0/1     Pending           0          0s
init-pod   0/1     Init:0/1          0          0s
init-pod   0/1     Init:0/1          0          1s
init-pod   0/1     PodInitializing   0          2s
init-pod   1/1     Running           0          3s

# If the init container fails, the pod keeps restarting it, subject to the pod's restart policy
[root@master pod]# cat init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  namespace: test
spec:
  initContainers:
  - name: init-pod1
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","qwe /11.txt"]   # "qwe" is not a command, so the init container fails
  containers:
  - name: main-pod
    image: docker.io/library/nginx:1.9.1

[root@master pod]# kubectl get pod -n test -w
NAME       READY   STATUS                  RESTARTS      AGE
init-pod   0/1     Pending                 0             0s
init-pod   0/1     Pending                 0             0s
init-pod   0/1     Init:0/1                0             0s
init-pod   0/1     Init:0/1                0             0s
init-pod   0/1     Init:0/1                0             1s
init-pod   0/1     Init:Error              0             2s
init-pod   0/1     Init:Error              1 (2s ago)    3s
init-pod   0/1     Init:CrashLoopBackOff   1 (2s ago)    4s
init-pod   0/1     Init:Error              2 (14s ago)   16s
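Init containers listed together run strictly in order, and each must exit successfully before the next one starts. A minimal sketch of that ordering (both container names and commands are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-order         # hypothetical name
  namespace: test
spec:
  initContainers:
  - name: step1            # runs first
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","sleep 5"]
  - name: step2            # starts only after step1 has exited successfully
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","touch /tmp/ready"]
  containers:
  - name: main-pod         # starts only after every init container succeeded
    image: docker.io/library/nginx:1.9.1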
2、Post-start hook (postStart)
- Runs right after the container is created, in parallel with the container's entrypoint.
- If the hook fails, the container is killed and handled according to the restart policy, so the main process never gets to serve.
- Three handler types are available: exec, httpGet, tcpSocket.
1、exec
[root@master pod]# cat pre.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash","-c","touch /11.txt"]

[root@master pod]# kubectl exec -n test -ti pre-pod -- /bin/bash
root@pre-pod:/# ls
11.txt  boot  etc   lib    media  opt   root  sbin  sys  usr
bin     dev   home  lib64  mnt    proc  run   srv   tmp  var
root@pre-pod:/# cat 11.txt

# If the postStart hook fails, the main container never comes up
3、Pre-stop hook (preStop)
[root@master pod]# cat pre.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      preStop:
        exec:
          command: ["/bin/bash","-c","touch /11.txt"]
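A common real-world use of preStop is a graceful shutdown before the kubelet sends SIGTERM/SIGKILL. A hedged sketch assuming an nginx container; the pod name, the sleep, and the grace period are illustrative choices, not from the original:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-nginx     # hypothetical name
  namespace: test
spec:
  terminationGracePeriodSeconds: 30   # kubelet waits up to 30s before SIGKILL
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      preStop:
        exec:              # drain connections, then give them a moment to finish
          command: ["/bin/bash","-c","nginx -s quit; sleep 5"]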
4、Restart policy and pod states
- restartPolicy is set at the pod level and applies to all of its containers.
- Always: restart the container whenever it exits, regardless of exit code (the default).
- OnFailure: the kubelet restarts the container only when it terminates with a non-zero exit code.
- Never: the kubelet never restarts the container, whatever its state.
- Pod states:
  - Pending: the pod has been accepted but cannot run yet: no node fits, scheduling has not completed, or images are still downloading.
  - Running: the pod is bound to a node and at least one container has been created.
  - Succeeded: every container terminated successfully and none will be restarted.
  - Failed: every container has terminated and at least one failed, i.e., exited with a non-zero code.
  - Unknown: the pod's state cannot be read, usually an apiserver/kubelet communication problem.
  - Evicted: the node ran short of memory or disk.
  - CrashLoopBackOff: the container started, then exited abnormally, repeatedly.
  - Error: an error occurred while the pod was starting.
  - Completed: the pod has finished its work.
# Configure a postStart hook that will fail, with the restart policy set to Never
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  restartPolicy: Never
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash","-c","qwe /11.txt"]   # fails: "qwe" is not a command

# The hook fails and, because of restartPolicy: Never, the pod is not restarted
[root@master pod]# kubectl get pod -n test -w
NAME      READY   STATUS              RESTARTS   AGE
pre-pod   0/1     Pending             0          0s
pre-pod   0/1     Pending             0          0s
pre-pod   0/1     ContainerCreating   0          0s
pre-pod   0/1     ContainerCreating   0          0s
pre-pod   0/1     Completed           0          2s
pre-pod   0/1     Completed           0          3s
pre-pod   0/1     Completed           0          4s

# Describe the pod: the container was killed after the hook failed, then exited
Events:
  Type     Reason               Age   From               Message
  ----     ------               ----  ----               -------
  Normal   Scheduled            12m   default-scheduler  Successfully assigned test/pre-pod to node1
  Normal   Pulled               12m   kubelet            Container image "docker.io/library/nginx:1.9.1" already present on machine
  Normal   Created              12m   kubelet            Created container pre-pod
  Normal   Started              12m   kubelet            Started container pre-pod
  Warning  FailedPostStartHook  12m   kubelet            PostStartHook failed
  Normal   Killing              12m   kubelet            FailedPostStartHook
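To watch OnFailure in isolation, a small sketch whose only job is to exit non-zero (the pod name and command are illustrative): the kubelet restarts it, with increasing backoff, until it would exit 0.

apiVersion: v1
kind: Pod
metadata:
  name: fail-pod           # hypothetical name
  namespace: test
spec:
  restartPolicy: OnFailure
  containers:
  - name: fail
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","exit 1"]   # non-zero exit -> kubelet restarts the container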
五、Pod health checks (probes run against the containers)
1、Liveness probe
- Detects whether the container in the pod is still running; when the probe fails, the kubelet consults the restart policy to decide whether to restart the container.
- Suited to auto-restarting containers that have hung or crashed, e.g., web servers.
- Its job is purely to establish whether the pod's container is alive.
- Three handler formats are supported: exec, tcpSocket, httpGet.
- A probe returns one of three results: Success (the check passed), Failure (the check failed), Unknown (the check itself could not run).
- kubectl explain pod.spec.containers.livenessProbe
1、Probe parameters
livenessProbe:
  initialDelaySeconds:   # seconds to wait after the container starts before the first probe
  periodSeconds:         # interval between probes, default 10s
  timeoutSeconds:        # how long to wait for a response after sending a probe, default 1s
  successThreshold:      # consecutive successes required to count as success; default 1, and must be 1 for liveness
  failureThreshold:      # consecutive failures before giving up; for readiness the pod is then marked unready; default 3, minimum 1
2、exec handler
[root@master pod]# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    livenessProbe:
      exec:
        command: ["/bin/bash","-c","touch /11.txt"]
      failureThreshold: 3     # three consecutive failures count as failed
      initialDelaySeconds: 3  # wait 3s before the first probe
      periodSeconds: 5        # probe every 5s
      successThreshold: 1     # must be 1 for liveness; one success is enough
      timeoutSeconds: 10      # wait up to 10s for each probe to respond

[root@master pod]# kubectl get pod -n test -w
NAME      READY   STATUS              RESTARTS   AGE
pre-pod   0/1     Completed           0          4h45m
live1     0/1     Pending             0          0s
live1     0/1     Pending             0          0s
live1     0/1     ContainerCreating   0          0s
live1     0/1     ContainerCreating   0          1s
live1     1/1     Running             0          2s
live1     1/1     Running             0          30s
3、httpGet handler
# Field reference
httpGet:
  scheme:       # protocol used to connect, default HTTP
  host:         # host to connect to; defaults to the pod IP, i.e., from inside the container
  port:         # port number or name to probe on the container
  path:         # URL path on the HTTP server
  httpHeaders:  # custom HTTP request headers, may repeat

[root@master pod]# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    livenessProbe:
      httpGet:
        port: 80
        scheme: HTTP
        path: /index.html    # roughly equivalent to curl localhost:80/index.html inside the container
      failureThreshold: 3    # any HTTP status in the 200-399 range counts as success
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10

# The pod runs fine
live1     0/1     ContainerCreating   0          0s
live1     0/1     ContainerCreating   0          1s
live1     1/1     Running             0          2s
live1     1/1     Running             0          42s
4、tcpSocket handler
[root@master pod]# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    livenessProbe:
      tcpSocket:
        port: 80             # the probe tries to open a TCP connection to port 80
      failureThreshold: 3
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10
2、Readiness probe
- A container can be running while the program inside still has to load its configuration before it can serve.
- The readiness probe reports when the server is actually able to receive traffic.
- It prevents the situation where the pod is up but the service behind it is not really serving yet.
- The same three handler types are supported.
[root@master pod]# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    readinessProbe:
      httpGet:
        port: 80             # send an HTTP request to port 80
      failureThreshold: 3
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10     # wait up to 10s for each probe to respond

[root@master pod]# kubectl get pod -n test -w
NAME      READY   STATUS              RESTARTS   AGE
pre-pod   0/1     Completed           0          5h11m
live1     0/1     Pending             0          0s
live1     0/1     Pending             0          0s
live1     0/1     ContainerCreating   0          0s
live1     0/1     ContainerCreating   0          0s
live1     0/1     Running             0          1s
live1     1/1     Running             0          5s
3、Startup probe (startupProbe)
- Checks whether the application in the container has started; while a startup probe is configured, every other probe is disabled until it succeeds.
- If the startup probe fails, the kubelet kills the container, which then follows the pod's restart policy; if no startup probe is provided, the result defaults to Success.
- Probes can be enabled selectively, and unset probes default to passing. When several are set, the order is startupProbe first, then readinessProbe and livenessProbe; the latter two have no ordering between them.
- The startup probe has the highest priority: it runs first, and the other probes run only after it has succeeded.
- Purpose: decide whether the container has started and may receive traffic. Unlike the readiness probe, the startup probe only runs during startup.
apiVersion: v1
kind: Pod
metadata:
  name: start1
  namespace: test
spec:
  containers:
  - name: start1
    image: docker.io/library/nginx:1.9.1
    startupProbe:
      exec:                  # check whether nginx has started
        command: ["/bin/bash","-c","ps -aux|grep nginx"]

[root@master ~]# kubectl get pod -n test -w
NAME      READY   STATUS              RESTARTS   AGE
live1     1/1     Running             0          17h
pre-pod   0/1     Completed           0          22h
start1    0/1     Pending             0          1s
start1    0/1     Pending             0          1s
start1    0/1     ContainerCreating   0          1s
start1    0/1     ContainerCreating   0          1s
start1    0/1     Running             0          2s
start1    0/1     Running             0          11s
start1    0/1     Running             0          11s
start1    1/1     Running             0          12s
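For slow-starting applications the knob that matters is failureThreshold × periodSeconds: the total time the app is allowed to take before the kubelet kills the container. A hedged fragment giving it 30 × 10 = 300 seconds (the endpoint and numbers are illustrative):

startupProbe:
  httpGet:
    path: /              # illustrative endpoint
    port: 80
  failureThreshold: 30   # up to 30 attempts...
  periodSeconds: 10      # ...10 seconds apart = 300s to finish starting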
4、Using all three probes together
apiVersion: v1
kind: Service
metadata:
  name: springboot
  labels:
    app: springboot
spec:
  type: NodePort
  ports:
  - name: server
    port: 8080
    targetPort: 8080
    nodePort: 31180
  - name: management
    port: 8081
    targetPort: 8081
    nodePort: 31181
  selector:
    app: springboot
---
apiVersion: v1
kind: Pod
metadata:
  name: springboot-live
  labels:
    app: springboot
spec:
  containers:
  - name: springboot
    image: mydlqclub/springboot-helloworld:0.0.1
    imagePullPolicy: IfNotPresent
    ports:
    - name: server
      containerPort: 8080
    - name: management
      containerPort: 8081
    readinessProbe:            # readiness probe: is the service inside actually ready?
      initialDelaySeconds: 20
      periodSeconds: 5
      timeoutSeconds: 10
      httpGet:
        scheme: HTTP
        port: 8081
        path: /actuator/health
    livenessProbe:             # liveness probe: is the container still healthy?
      initialDelaySeconds: 20
      periodSeconds: 5
      timeoutSeconds: 10
      httpGet:
        scheme: HTTP
        port: 8081
        path: /actuator/health
    startupProbe:              # startup probe: runs before the other two
      initialDelaySeconds: 20  # wait 20s before the first check
      periodSeconds: 5         # check every 5s
      timeoutSeconds: 10       # a response slower than 10s counts as a timeout
      httpGet:
        scheme: HTTP
        port: 8081
        path: /actuator/health
# If the container runs into trouble, the restart policy decides what happens next