18、K8S Pod Resource Limits: requests and limits


1、Fundamentals

1.1、The Need for Resource Limits

Kubernetes is a distributed container management platform. All resources are supplied by the Node machines, and each Node runs many Pods. Within a project some workloads consume a large share of resources and others only a little, so to use a Node's resources efficiently we have to apply resource limits. How, then, should Pod resource limits be handled?
Kubernetes already provides quota settings for Pods. These quotas mainly cover CPU, memory, and storage; because storage is governed by dedicated resource objects in Kubernetes, when we talk about Pod resource limits we usually mean CPU and memory.

1.2、Resource Limits

To distinguish them from the other standalone resource objects in Kubernetes, and based on how CPU and memory are used in practice, we usually refer to the two of them as compute resources.

1.2.1、CPU

CPU
   Characteristics: a compressible resource; CPU time can be preempted (throttled) when the node is under contention.
   Unit: measured in CPUs (cores), an absolute value.
   Granularity: a whole CPU is far more than most containers need, so Kubernetes usually works in thousandths of a CPU, written with the suffix m (milliCPU).
   Rule of thumb: a typical container uses 100~300m, i.e. 0.1-0.3 of a CPU (see the snippet after this list).
   Note:
   Do not confuse m (a decimal 1/1000 unit for CPU) with Mi, which is a base-1024 unit used for memory.
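
A minimal sketch of the CPU notation (the values are placeholders, not taken from the examples below); "250m" and "0.25" describe the same amount of CPU:

# Hypothetical resources fragment illustrating CPU units only
resources:
  requests:
    cpu: "250m"     # 250 milliCPU = 0.25 of one core
  limits:
    cpu: "0.5"      # equivalent to writing "500m"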

1.2.2、内存

Memory
   Characteristics: an incompressible resource. When a Pod's memory usage grows and the node does not have enough left, resource contention or an OOM kill will occur.
   Unit: measured in bytes, an absolute value.
   Granularity: the memory quota matters for almost every container; in Kubernetes it is usually expressed in Mi (mebibytes).
   Rule of thumb: memory is cheap nowadays, so it can be allocated fairly generously (see the snippet after this list).
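
A minimal sketch of memory units (placeholder values): the binary suffixes Mi/Gi are base-1024, while M/G are base-1000.

# Hypothetical resources fragment illustrating memory units only
resources:
  requests:
    memory: "128Mi"   # 128 * 1024 * 1024 bytes
  limits:
    memory: "1Gi"     # 1024Mi; writing "1G" instead would mean 10^9 bytes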

1.3、Quota Limits

1.3.1、Introduction to Requests and Limits

In Kubernetes, the quota for each resource type is described by two parameters: Requests and Limits.
Requested quota (Requests):
   The minimum amount of the resource the workload needs at runtime. The node must be able to satisfy this amount, otherwise the Pod cannot be scheduled and the workload will not run.
Maximum quota (Limits):
   The maximum amount of the resource the workload is allowed to use at runtime. This value must not be exceeded; if it is, the workload may be throttled, restarted, or killed (for example OOM-killed). A minimal example follows.
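
A minimal sketch of where requests and limits live in a container spec (the Pod name is a placeholder; the image is the one reused throughout this post):

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo                   # hypothetical name
spec:
  containers:
  - name: app
    image: 192.168.10.33:80/k8s/my_nginx:v1
    resources:
      requests:                         # used by the scheduler when choosing a node
        cpu: "100m"
        memory: "128Mi"
      limits:                           # enforced by the kubelet / container runtime
        cpu: "500m"
        memory: "256Mi"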

1.3.2、Requests and Limits Framework Diagram

2、Resource Limits in Practice

2.1、Setting Default Resource Limits

2.1.1、YAML Manifest

cat >limit-mem-cpu-per-container.yml<<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
  - max:                    # upper bound a single container may set
      cpu: "800m"
      memory: "1Gi"
    min:                    # lower bound a single container must set
      cpu: "100m"
      memory: "99Mi"
    default:                # default limits injected when a container specifies none
      cpu: "700m"
      memory: "900Mi"
    defaultRequest:         # default requests injected when a container specifies none
      cpu: "110m"
      memory: "111Mi"
    type: Container
EOF

2.1.2、Apply and Query the LimitRange

]# kubectl apply -f limit-mem-cpu-per-container.yml 
limitrange/limit-mem-cpu-per-container created

]# kubectl get limitranges
NAME                          CREATED AT
limit-mem-cpu-per-container   2023-03-19T14:25:43Z

]# kubectl describe limitranges limit-mem-cpu-per-container
Name:       limit-mem-cpu-per-container
Namespace:  default
Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---   ---------------  -------------  -----------------------
Container   memory    99Mi  1Gi   111Mi            900Mi          -
Container   cpu       100m  800m  110m             700m           -

2.1.3、Verify That the Default Settings Take Effect

cat > pod-test-limit.yml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: 192.168.10.33:80/k8s/my_nginx:v1
    # no resources section: the LimitRange defaults should be injected automatically
    env:
    - name: HELLO
      value: "Hello kubernetes nginx"
EOF

master1 ]# kubectl apply -f pod-test-limit.yml 
pod/nginx-test created

master1 ]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
nginx-test   1/1     Running   0          4s

master1 ]# kubectl describe pod nginx-test 
...
    Limits:
      cpu:     700m
      memory:  900Mi
    Requests:
      cpu:     110m
      memory:  111Mi
...
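
The LimitRange also enforces its min/max bounds at admission time. As a hedged illustration (this Pod is hypothetical and was not part of the original test), a container whose limit exceeds the max of 800m CPU is expected to be rejected when applied:

cat >pod-over-max.yml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: over-max-test              # hypothetical name
spec:
  containers:
  - name: nginx
    image: 192.168.10.33:80/k8s/my_nginx:v1
    resources:
      limits:
        cpu: "1000m"               # exceeds the LimitRange max of 800m
        memory: "256Mi"
EOF

# Applying this manifest should fail with a Forbidden error from the LimitRange admission check.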

2.2、Stress-Test Practice

2.2.1、Stress-Test YAML

cat >stress-test.yml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: stress-test
spec:
  containers:
  - name: stress
    image: 192.168.10.33:80/k8s/stress:v0.1
    imagePullPolicy: IfNotPresent
    command: ["/usr/bin/stress-ng","-m 3","-c 2","--metrics-brief"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"
      limits:
        memory: "256Mi"
        cpu: "500m"
EOF

# Pull the image yourself from Docker Hub and push it to your registry
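
A hedged sketch of staging the image into the private registry used above; the upstream Docker Hub image name is an assumption, adjust it to whatever stress-ng image you actually use:

docker pull lorel/docker-stress-ng                                   # assumed upstream image
docker tag  lorel/docker-stress-ng 192.168.10.33:80/k8s/stress:v0.1
docker push 192.168.10.33:80/k8s/stress:v0.1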

2.2.2、Check Whether the Resource Limits Are Exceeded

# Check which node the Pod was scheduled to
master1 ]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
stress-test   1/1     Running   0          9m39s   10.244.3.89   node1   <none>           <none>

# On node1, look up the container ID
node1 ]# crictl ps 
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
872a1fdda3243       1ae56ccafe553       8 minutes ago       Running             stress              0                   bd05fcd9358d3       stress-test

# Use the container ID to check CPU and memory usage
node1 ]# crictl stats 872a1fdda3243
CONTAINER           CPU %               MEM                 DISK                INODES
872a1fdda3243       51.08               230.5MB             0B                  8
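
CPU sits at roughly 51%, i.e. the 500m limit being enforced by throttling (CPU is compressible), while memory stays around 230MB, below the 256Mi limit, so no OOM kill occurs.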

2.2.3、Check the Container's Configured Resources

master1 ]# kubectl describe pod stress-test
...
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:        200m
      memory:     128Mi
...

2.3、Memory OOM Practice

2.3.1、Memory OOM YAML

cat >pod-oom.yml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: oom-test
spec:
  containers:
  - name: oom-test-ctr
    image: 192.168.10.33:80/k8s/simmemleak:v0.1   # simulates a memory leak: keeps allocating memory
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        memory: "99Mi"
        cpu: "100m"
      requests:
        memory: "99Mi"
        cpu: "100m"
EOF

2.3.2、Observe the OOM Behavior

master1 ]# kubectl apply -f pod-oom.yml && kubectl get pods -w -o wide
pod/oom-test created
NAME       READY   STATUS              RESTARTS   AGE   IP       NODE    NOMINATED NODE   READINESS GATES
oom-test   0/1     ContainerCreating   0          1s    <none>   node1   <none>           <none>
oom-test   1/1     Running             0          2s    10.244.3.91   node1   <none>           <none>
oom-test   0/1     OOMKilled           0          3s    10.244.3.91   node1   <none>           <none>
oom-test   1/1     Running             1 (2s ago)   4s    10.244.3.91   node1   <none>           <none>
oom-test   0/1     OOMKilled           1 (3s ago)   5s    10.244.3.91   node1   <none>           <none>
oom-test   0/1     CrashLoopBackOff    1 (2s ago)   6s    10.244.3.91   node1   <none>           <none>
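
Because memory is incompressible, as soon as the container's usage exceeds the 99Mi limit the kernel OOM-killer terminates it; the kubelet restarts it with an exponential back-off, so the Pod cycles through OOMKilled and eventually lands in CrashLoopBackOff. The kill reason can also be read back from the Pod status (this command should report OOMKilled for this Pod):

master1 ]# kubectl get pod oom-test -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'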

3、Quality of Service Classes

3.1、QoS (Quality of Service) Classes

3.1.1、High Priority - Guaranteed

Every container in the Pod sets both CPU and memory requests and limits, and for each resource the request must equal the limit.

3.1.2、Medium Priority - Burstable

At least one container in the Pod sets a CPU or memory request or limit, but the Pod does not meet the Guaranteed criteria.

3.1.3、Low Priority - BestEffort

No container in the Pod sets any requests or limits (the lowest priority; these Pods are the first to be evicted when the node runs short of resources). A Pod's QoS class can be read directly from its status, as shown below.
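
A quick way to check the class Kubernetes assigned (shown with the qos-demo Pod created in the examples below):

master1 ]# kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'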

3.2、QoS Class Examples

3.2.1、High Priority (Guaranteed) Example

cat >pod-qos-guaranteed.yml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-demo-ctr
    image: 192.168.10.33:80/k8s/my_nginx:v1
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"
EOF

----------

master1 ]# kubectl apply -f pod-qos-guaranteed.yml 
pod/qos-demo created

master1 ]# kubectl describe pod qos-demo 
...
QoS Class:                   Guaranteed
...

3.2.2、Medium Priority (Burstable) Example

cat >pod-qos-burstable.yml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-demo-ctr
    image: 192.168.10.33:80/k8s/my_nginx:v1
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "100Mi"
        cpu: "200m"
EOF

----------

master1 ]# kubectl apply -f pod-qos-burstable.yml 
pod/qos-demo created

master1 ]# kubectl describe pod qos-demo 
...
QoS Class:                   Burstable
...

3.2.3、Low Priority (BestEffort) Example

cat >pod-qos-bestEffort.yml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-demo-ctr
    image: 192.168.10.33:80/k8s/my_nginx:v1
EOF

----------

master1 ]# kubectl apply -f pod-qos-bestEffort.yml 
pod/qos-demo created

master1 ]# kubectl describe pod qos-demo 
...
QoS Class:                   BestEffort
...

 
