
Docker Basics - 08 Resource Management

Posted on 2021-11-09 22:08 by Kingdomer


一、Limiting Container Resources

  • By default, a container has no resource constraints and
    • can use as much of a given resource as the host's kernel scheduler allows.
  • Docker provides ways to control how much memory, CPU, or block IO a container can use
    • by setting runtime configuration flags on the docker run command.
  • Resource allowances: the "eight-sided" container, i.e. a process in isolation, is built from:
    • NET - Network access and structure
    • UTS - Host and domain name
    • USR - User names and identifiers
    • IPC - Communication by shared memory
    • PID - Process identifiers and process capabilities
    • MNT - File system access and structure
    • Cgroups - Resource protection
    • chroot() - Controls location of file system root
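The namespaces above provide isolation, while cgroups enforce the resource limits; both are wired up when the container starts. A minimal sketch of setting several limits at once (the image, container name, and values are assumptions for illustration; the flags come from docker run --help):

```shell
# Hedged sketch: start a container with memory, CPU, and block-IO limits.
docker run -d --name limited \
  --memory 512m --memory-swap 1g \   # RAM capped at 512 MiB plus 512 MiB of swap
  --cpus 1.5 \                       # at most 1.5 CPUs' worth of cycles
  --blkio-weight 500 \               # relative block-IO weight (10 to 1000)
  nginx:alpine
```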

二、Memory

  • OOME: On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions,
    • it throws an OOME (Out Of Memory Exception) and starts killing processes to free up memory.
  • Once an OOME occurs, any process may be killed, including the docker daemon.
  • Docker adjusts the OOM priority of the docker daemon so the kernel is unlikely to kill it, but container processes get no such adjustment.
  • Limit a container's access to memory:
    • --memory: Memory limit; the maximum amount of memory the container can use.
    • --memory-swap: Swap limit equal to memory plus swap; '-1' enables unlimited swap.
    • --memory-swappiness: Tune container memory swappiness (0 to 100) (default -1).
    • --memory-reservation: Memory soft limit.
    • --kernel-memory: Kernel memory limit.
    • --oom-kill-disable: Disable the OOM Killer for this container.
  • --memory-swap
    • --memory-swap is a modifier flag that only has meaning if --memory is also set.
    • memory-swap=S, memory=M: the container's total memory+swap is S, RAM is M, and swap is S-M.
    • memory-swap=0, memory=M: the setting is ignored and treated as unset.
    • memory-swap unset, memory=M: if the host has swap enabled, the container can use up to 2*M in total (M of RAM plus M of swap).
    • memory-swap=-1, memory=M: the container may use unlimited swap, up to the amount available on the host.
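The S/M arithmetic above can be checked directly. A small sketch (the 512m/2048m values are illustrative, not from the post):

```shell
# memory-swap=S, memory=M  =>  swap available to the container is S - M.
M=$(( 512 * 1024 * 1024 ))    # --memory 512m
S=$(( 2048 * 1024 * 1024 ))   # --memory-swap 2048m
SWAP=$(( S - M ))             # 1536 MiB of swap
echo "swap bytes: $SWAP"
```

With these values, `docker run --memory 512m --memory-swap 2048m ...` would leave the container 1536 MiB of swap.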
[root@component ~]# docker run --help

     --oom-kill-disable               Disable OOM Killer
     --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)

 

[root@cl-server ~]# docker update --memory 1024M 25d96325f6b8
Error response from daemon: Cannot update container 25d96325f6b838d5461f6e0b819fca806a78074b81536cc88d5b6ffd0fdaf8e3: 
Memory limit should be smaller than already set memoryswap limit, update the memoryswap at the same time
[root@cl-server ~]#  docker update --memory 512M --memory-swap 2048M 25d96325f6b8
25d96325f6b8
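The limits applied by docker update can be read back from the container's HostConfig; a hedged follow-up (assumes the same container ID is still valid):

```shell
# Confirm the updated limits; both values are reported in bytes.
docker inspect \
  --format 'Memory={{.HostConfig.Memory}} MemorySwap={{.HostConfig.MemorySwap}}' \
  25d96325f6b8
```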

  

三、CPU

  • By default, each container's access to the host machine's CPU cycles is unlimited.
  • Most users use and configure the default CFS (Completely Fair) scheduler, which is suitable for ordinary CPU-bound workloads.
  • In Docker 1.13 and higher, you can also configure the realtime scheduler.
  • Configure the default CFS scheduler
    • --cpus=<value>: how much CPU the container may use; e.g. if the machine has two CPUs, --cpus="1.5" guarantees it at most one and a half of them.
    • --cpu-period=<value>: the CFS scheduler period; defaults to 100000 microseconds (100 ms), e.g. --cpu-period="100000".
    • --cpu-quota=<value>: CPU time the container may consume per period; e.g. --cpu-quota="150000" with the default period allows 1.5 CPUs.
    • --cpuset-cpus: Limit the specific CPUs or cores a container can use, e.g. 0-3 or 1,3.
    • --cpu-shares: set this flag to a value greater or less than the default of 1024 to
      • increase or reduce the container's weight, and give it access to a greater or lesser proportion
      • of the host machine's CPU cycles.
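--cpus is effectively shorthand for a quota/period pair: effective CPUs = cpu-quota / cpu-period. A quick sketch of that relation, using the values from the flags above (integer math, scaled to two decimal places):

```shell
PERIOD=100000                      # --cpu-period in microseconds (the default)
QUOTA=150000                       # --cpu-quota in microseconds
CPUS=$(( QUOTA * 100 / PERIOD ))   # scaled by 100 to keep two decimals
echo "equivalent --cpus: $(( CPUS / 100 )).$(( CPUS % 100 ))"
# prints: equivalent --cpus: 1.50
```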
[root@k8s-node33 ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 142
Model name:            Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
Stepping:              10
CPU MHz:               1799.996
BogoMIPS:              3599.99
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0,1
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves arat flush_l1d arch_capabilities

四、Testing

4.1 Install stress-ng on the host

[root@component ~]# yum install epel-release stress-ng
[root@component ~]# stress-ng --help
stress-ng, version 0.07.29

Usage: stress-ng [OPTION [ARG]]


Example: stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s

Note: Sizes can be suffixed with B,K,M,G and times with s,m,h,d,y

4.2 Pull the docker-stress-ng image

[root@component ~]# docker pull lorel/docker-stress-ng
Digest: sha256:c8776b750869e274b340f8e8eb9a7d8fb2472edd5b25ff5b7d55728bca681322
Status: Downloaded newer image for lorel/docker-stress-ng:latest
docker.io/lorel/docker-stress-ng:latest

4.3 CPU test

[root@component ~]# docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --cpu 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 cpu

[root@component ~]# docker stats
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT     MEM %     NET I/O     BLOCK I/O   PIDS
e77bdb3b7116   stress    200.36%   9.359MiB / 1.777GiB   0.51%     656B / 0B   0B / 0B     3

 

[root@component ~]# docker run --name stress3 -it --rm --cpu-shares 1024 lorel/docker-stress-ng stress --cpu 8
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 8 cpu
^Cstress-ng: info: [1] successful run completed in 20.20s
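Note that --cpu-shares is a relative weight and only takes effect when the CPUs are saturated; with a single container, as above, 1024 shares (the default) changes nothing. A sketch of how the weight would split cycles between two contending containers (the share values are hypothetical):

```shell
A=1024   # --cpu-shares of container A (the default)
B=512    # --cpu-shares of container B
echo "A gets $(( A * 100 / (A + B) ))% of CPU time under contention"   # 66%
echo "B gets $(( B * 100 / (A + B) ))% of CPU time under contention"   # 33%
```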

4.4 Memory test

[root@component ~]# docker run --name stress -it --rm lorel/docker-stress-ng:latest stress --vm 2 --vm-bytes 128M
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm

[root@component ~]# docker stats
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT     MEM %     NET I/O     BLOCK I/O   PIDS
431c10eb5db6   stress    200.33%   258.3MiB / 1.777GiB   14.20%    656B / 0B   0B / 0B     5

  

[root@component ~]# docker top stress
UID     PID      PPID      C     STIME       TTY          TIME          CMD
root    2311     2291      0     21:24       pts/0        00:00:00      /usr/bin/stress-ng stress --vm 2 --vm-bytes 128M
root    2343     2311      0     21:25       pts/0        00:00:00      /usr/bin/stress-ng stress --vm 2 --vm-bytes 128M
root    2344     2311      0     21:25       pts/0        00:00:00      /usr/bin/stress-ng stress --vm 2 --vm-bytes 128M
root    2345     2344      99    21:25       pts/0        00:06:24      /usr/bin/stress-ng stress --vm 2 --vm-bytes 128M
root    2346     2343      99    21:25       pts/0        00:06:24      /usr/bin/stress-ng stress --vm 2 --vm-bytes 128M

  

[root@component ~]# docker run --name stress2 -it --rm -m 256m lorel/docker-stress-ng stress --vm 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm
^Cstress-ng: info: [1] successful run completed in 96.37s

CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT   MEM %     NET I/O     BLOCK I/O       PIDS
17a61b74905b   stress2   184.80%   256MiB / 256MiB     99.98%    656B / 0B   14GB / 41.8GB   5
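The stats line above shows the container pinned at its 256 MiB limit, with the heavy block I/O coming from swapping. Whether the kernel's OOM killer actually fired can be read from the container state; a hedged check (only works while the container still exists, i.e. if it was started without --rm):

```shell
# OOMKilled is true if the kernel killed a process for exceeding the limit;
# an OOM-killed main process typically exits with code 137.
docker inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' stress2
```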