Kubernetes (II): Pod Resource Limits
I. requests and limits
1. requests: the amount of a resource the Pod needs to run; the scheduler uses this value when choosing a node for the Pod
2. limits: the maximum amount of a resource the Pod is allowed to use
II. Memory limits
Limit the container to at most 200Mi of memory:
# more 1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: test
spec:
  containers:
  - name: resource-demo
    image: nginx
    resources:
      limits:
        memory: 200Mi
      requests:
        memory: 200Mi
# kubectl create namespace test
# kubectl apply -f 1.yaml
# kubectl -n test get pod -o wide
NAME   READY   STATUS              RESTARTS   AGE   IP       NODE          NOMINATED NODE   READINESS GATES
test   0/1     ContainerCreating   0          25s   <none>   10.30.20.44   <none>           <none>
# docker inspect bf45666c14f0 -f "{{.State.Pid}}"
368
# grep memory /proc/368/cgroup
11:memory:/kubepods/burstable/podeab504be-2d32-11eb-a7e0-fa163e6a6737/bf45666c14f06a08cbe305d68a7dc7f4f7f40f9b73fb489abb7c1a235b5c2b88
# more /sys/fs/cgroup/memory/kubepods/burstable/podeab504be-2d32-11eb-a7e0-fa163e6a6737/bf45666c14f06a08cbe305d68a7dc7f4f7f40f9b73fb489abb7c1a235b5c2b88/memory.limit_in_bytes
209715200
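As a quick sanity check, the 200Mi from the manifest converts to exactly the byte value the cgroup reports (plain shell arithmetic, independent of the cluster above):

```shell
# "Mi" is a binary suffix: 1 Mi = 1024 * 1024 bytes
echo $((200 * 1024 * 1024))
# 209715200  -- matches memory.limit_in_bytes
```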
III. CPU limits
1. YAML file
# more cpu.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: test
spec:
  containers:
  - name: resource-demo
    image: nginx
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 50m
# kubectl apply -f cpu.yaml
# kubectl -n test get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
test   1/1     Running   0          69s   10.20.101.231   10.30.20.113   <none>           <none>
2. Log in to node 10.30.20.113
# docker ps | grep test
c468cb5989e0   nginx                                    "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes   k8s_resource-demo_test_test_f69e63fb-2d35-11eb-a7e0-fa163e6a6737_0
3986a5234a7b   mirrorgooglecontainers/pause-amd64:3.0   "/pause"                 2 minutes ago   Up 2 minutes   k8s_POD_test_test_f69e63fb-2d35-11eb-a7e0-fa163e6a6737_0
# docker inspect c468cb5989e0 -f "{{.State.Pid}}"
3966
# grep cpu /proc/3966/cgroup
4:cpuset:/kubepods/burstable/pod0ab4f5cd-2eca-11eb-a7e0-fa163e6a6737/c468cb5989e0f5338175d8a8bdc2080e4788b6d7a70ccae0fa192c8f6c8526fd
2:cpuacct,cpu:/kubepods/burstable/pod0ab4f5cd-2eca-11eb-a7e0-fa163e6a6737/c468cb5989e0f5338175d8a8bdc2080e4788b6d7a70ccae0fa192c8f6c8526fd
# cd /sys/fs/cgroup/cpu
# cd kubepods/burstable/pod0ab4f5cd-2eca-11eb-a7e0-fa163e6a6737/c468cb5989e0f5338175d8a8bdc2080e4788b6d7a70ccae0fa192c8f6c8526fd
# more cpu.shares
51
# more cpu.cfs_period_us
100000
# more cpu.cfs_quota_us
10000
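The cgroup values above follow from the manifest by integer arithmetic: 1 core corresponds to 1024 shares, and the quota is the limit's fraction of the scheduling period (a sketch of the conversion math, not the kubelet source):

```shell
requests_milli=50       # requests.cpu: 50m
limits_milli=100        # limits.cpu: 100m
period_us=100000        # default cpu.cfs_period_us

# requests -> cpu.shares: 1024 shares per full core, integer division
echo $((requests_milli * 1024 / 1000))    # 51, matches cpu.shares

# limits -> cpu.cfs_quota_us: runnable microseconds per period
echo $((limits_milli * period_us / 1000)) # 10000, matches cpu.cfs_quota_us
```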
Requests are implemented with the cpu shares mechanism: cpu shares divides each CPU core into 1024 time slices and guarantees every process a fixed proportion of those slices.
If there are 1024 slices in total and two processes each set cpu.shares to 512, each will get roughly half of the available CPU time.
However, cpu shares cannot enforce a precise upper bound on CPU usage: if one process does not use its share, another process is free to consume the spare CPU.
Limits are enforced jointly by cpu.cfs_period_us and cpu.cfs_quota_us. In this example the Pod's cpu limit is set to 100m, which means 100/1000 of a CPU core.
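Putting the two CFS values together gives the hard cap: the container may run for at most quota microseconds out of every period, regardless of how idle the node is (scaled by 1000 below to keep the arithmetic in integers):

```shell
quota_us=10000          # cpu.cfs_quota_us from the example
period_us=100000        # cpu.cfs_period_us from the example

# fraction of one core, expressed in millicores (100 == 100m == 0.1 core)
echo $((quota_us * 1000 / period_us))   # 100 -> the 100m limit from the manifest
```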