X. Docker Containers: Disk, Memory & CPU Resource Limits in Practice
node1 192.168.31.101 ----- docker version: Docker version 1.13.1, build cccb291/1.13.1
node2 192.168.31.102 ----- docker version: Docker version 19.03.8, build afacb8b (docker-ce)
[root@node1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
aca49b0226ad        web:v1              "sleep 9999d"       25 hours ago        Up 19 seconds                           web01
Disk on the host
[root@node1 ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        20G  3.0G   17G  15% /
devtmpfs       devtmpfs  233M     0  233M   0% /dev
tmpfs          tmpfs     243M     0  243M   0% /dev/shm
tmpfs          tmpfs     243M  5.2M  238M   3% /run
tmpfs          tmpfs     243M     0  243M   0% /sys/fs/cgroup
/dev/sda1      xfs       497M  117M  380M  24% /boot
tmpfs          tmpfs      49M     0   49M   0% /run/user/0
Disk as seen from a Docker container on the host
[root@node1 ~]# docker exec web01 df -Th
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   20G  3.0G   17G  15% /
tmpfs          tmpfs    243M     0  243M   0% /dev
tmpfs          tmpfs    243M     0  243M   0% /sys/fs/cgroup
/dev/sda2      xfs       20G  3.0G   17G  15% /etc/hosts
shm            tmpfs     64M     0   64M   0% /dev/shm
tmpfs          tmpfs    243M     0  243M   0% /proc/acpi
tmpfs          tmpfs    243M     0  243M   0% /proc/scsi
tmpfs          tmpfs    243M     0  243M   0% /sys/firmware
Memory on the host
[root@node1 ~]# free -mh
              total        used        free      shared  buff/cache   available
Mem:           485M         98M        184M        5.2M        203M        345M
Swap:          2.0M          0B        2.0M
[root@node1 ~]# docker exec web01 free -mh
              total        used        free      shared  buff/cache   available
Mem:           485M        105M        173M        5.2M        206M        338M
Swap:          2.0M          0B        2.0M
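Why does free inside the container still show the host's full 485M? /proc/meminfo is not namespaced, so the container simply reads the host's figures. What actually constrains the container is its memory cgroup; a quick check (a sketch, assuming cgroup v1 as on these CentOS hosts):

[root@node1 ~]# docker exec web01 cat /sys/fs/cgroup/memory/memory.limit_in_bytes
#With no -m given, this typically prints an enormous value such as 9223372036854771712, i.e. effectively unlimited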
CPU on the host
#Number of physical CPUs
[root@node1 ~]# grep 'physical id' /proc/cpuinfo|sort|uniq|wc -l
1
#Cores per CPU
[root@node1 ~]# grep 'cpu cores' /proc/cpuinfo|uniq|awk -F ':' '{print $2}'
1
CPU as seen from the Docker container on the host
[root@node1 ~]# docker exec web01 grep 'physical id' /proc/cpuinfo|sort|uniq|wc -l
1
[root@node1 ~]# docker exec web01 grep 'cpu cores' /proc/cpuinfo|uniq|awk -F ':' '{print $2}'
1
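Likewise, /proc/cpuinfo is not namespaced, so the container sees the host's CPU; the cpuset cgroup is what decides which CPUs the container may actually run on. A quick check (again assuming cgroup v1):

[root@node1 ~]# docker exec web01 cat /sys/fs/cgroup/cpuset/cpuset.cpus
#web01 was started without --cpuset-cpus, so on this single-CPU host this should print 0, i.e. all available CPUs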
4. Viewing a single container's memory and CPU usage
#Watch container web01's memory and CPU usage in real time (the table keeps refreshing; press Ctrl+C to stop)
[root@node1 ~]# docker stats web01
CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
web01               0.00%               88 KiB / 485.7 MiB    0.02%               4.29 kB / 1.34 kB   6.64 MB / 0 B       1
CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
web01               0.00%               88 KiB / 485.7 MiB    0.02%               4.29 kB / 1.34 kB   6.64 MB / 0 B       1
......
#Take a single snapshot of web01's memory and CPU usage
[root@node1 ~]# docker stats --no-stream web01
CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
web01               0.00%               88 KiB / 485.7 MiB    0.02%               4.42 kB / 1.42 kB   6.64 MB / 0 B       1
[root@node1 ~]#
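docker stats also accepts a --format flag on reasonably recent Docker releases, which is handy for scripting; a minimal sketch that prints only the columns of interest:

[root@node1 ~]# docker stats --no-stream --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" web01
#Shows just the container, CPU % and memory usage/limit columns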
Limiting CPU and memory resources
docker run -itd --cpuset-cpus=0-0 -m 4MB --name=test web:v1 /bin/bash

--cpuset-cpus: pins the container to specific logical CPUs. Forms like 0-0, 1-1, 2-2 ... bind the container to a single logical CPU; forms like 0-1, 0-2, 0-3 or 0,1 / 0,2 / 0,3 allow several logical CPUs, and the scheduler may run the container on any of them, i.e. the CPUs are shared.
#Note: binding a container to one logical CPU makes it easy to monitor how much CPU that container uses, while sharing CPUs makes better use of CPU resources; just choose the CPU scheduling policy carefully!
-m: sets the memory limit.

[root@node1 ~]# docker run -itd --cpuset-cpus=0-0 -m 4MB --name=test web:v1 /bin/bash
de30929be801fe3d0262b7a8f2de15234c53bc07b7c8d05d27ea4845b3c5f479
[root@node1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
de30929be801        web:v1              "/bin/bash"         3 seconds ago       Up 2 seconds                            test
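To double-check what was actually recorded for the container, docker inspect can print the relevant HostConfig fields; a quick sketch (4194304 is simply 4 MiB expressed in bytes):

[root@node1 ~]# docker inspect -f 'cpuset={{.HostConfig.CpusetCpus}} memory={{.HostConfig.Memory}}' test
#Should print something like: cpuset=0-0 memory=4194304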
Check the memory
[root@node1 ~]# docker stats --no-stream test
CONTAINER           CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
test                0.00%               372 KiB / 4 MiB     9.08%               1.34 kB / 734 B     1.53 MB / 0 B       1
#The memory limit is now 4 MiB, as requested
[root@node1 ~]# docker run -itd --cpuset-cpus=0-1 -m 4MB --name=test2 web:v1 /bin/bash
1944e0f432d57d4ad48015a74d4b537f6fa76bda09e32d204a4d20a38fa6594a
/usr/bin/docker-current: Error response from daemon: oci runtime error: container_linux.go:235: starting container process caused "process_linux.go:327: setting cgroup config for procHooks process caused \"failed to write 0-1 to cpuset.cpus: write /sys/fs/cgroup/cpuset/system.slice/docker-1944e0f432d57d4ad48015a74d4b537f6fa76bda09e32d204a4d20a38fa6594a.scope/cpuset.cpus: permission denied\"".
#This fails because the host has only one logical CPU; --cpuset-cpus=0-1 references a second CPU that does not exist
The output above shows that the container's CPU and memory really are being limited.
IV. Limiting a container's disk resources
1. Disk limits are set in the Docker configuration file
Docker version 1.13.1
Docker configuration file: in /etc/sysconfig/docker (note: not the docker-storage file), append the storage option to the OPTIONS parameter as follows:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --storage-opt overlay2.size=10G'
Docker configuration file (Docker version 19.03.8, docker-ce): in /usr/lib/systemd/system/docker.service, append the storage option to the ExecStart line as follows:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --storage-opt overlay2.size=10G
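After editing a systemd unit file, remember to tell systemd to reload its configuration before restarting, otherwise the old ExecStart line stays in effect:

[root@node2 ~]# systemctl daemon-reload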
Restart the Docker service
[root@node1 ~]# systemctl restart docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@node1 ~]# tail -fn 50 /var/log/messages
......
Apr 1 06:29:21 node1 dockerd-current: Error starting daemon: error initializing graphdriver: Storage option overlay2.size not supported. Filesystem does not support Project Quota: Failed to set quota limit for projid 1 on /var/lib/docker/overlay2/backingFsBlockDev: function not implemented
......
[root@node2 ~]# systemctl restart docker.service
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@node2 ~]# tail -fn 50 /var/log/messages
......
Apr 1 06:34:29 node2 dockerd: time="2020-04-01T06:34:29.701688085+08:00" level=error msg="[graphdriver] prior storage driver overlay2 failed: Storage Option overlay2.size only supported for backingFS XFS. Found <unknown>"
......
Cause:
With the overlay2 storage driver, limiting a container's disk size requires the backing Linux filesystem to be XFS with directory-level (project) disk quota support; by default, disk quotas are not configured when the system is installed.
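A quick way to check whether the filesystem backing /var/lib/docker already has project quota enabled (a sketch; findmnt ships with util-linux):

[root@node1 ~]# findmnt -no SOURCE,FSTYPE,OPTIONS -T /var/lib/docker
#If the OPTIONS column does not contain prjquota (or pquota), overlay2.size cannot be enforced; that is exactly the case on a default install like this one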
What does directory-level disk quota support mean?
It means the filesystem can hand out a fixed amount of disk space to a directory. How can a directory have a "size"? Mount a disk of a fixed size onto a directory and that directory's capacity is the disk's capacity; with directory-level quotas, the directory can then allocate a specified share of that disk to the files underneath it.
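For illustration only, this is how an XFS project quota caps a single directory on a filesystem mounted with prjquota (the project name "demo", the id 10 and the 1g limit are made-up values; /data is the quota-enabled mount point prepared below). Docker's overlay2.size option relies on this same mechanism under the hood:

mkdir -p /data/demo
echo "10:/data/demo" >> /etc/projects              #map project id 10 to the directory
echo "demo:10" >> /etc/projid                      #map the project name to id 10
xfs_quota -x -c 'project -s demo' /data            #initialize the project
xfs_quota -x -c 'limit -p bhard=1g demo' /data     #hard-cap the directory at 1 GB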
Preparation: back up the Docker images (moving the Docker data directory later will leave the image store empty)
[root@node1 ~]# docker image save busybox > /tmp/busybox.tar
The host has a second 20 GB disk, /dev/sdb, which will hold the quota-enabled Docker data directory:

[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0004ff38

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    41938943    20456448   83  Linux
/dev/sda3        41938944    41943039        2048   82  Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@node1 ~]# mkfs.xfs -f /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mkdir /data/ -p
[root@node1 ~]# mount -o uquota,prjquota /dev/sdb /data/
[root@node1 ~]# xfs_quota -x -c 'report' /data/
User quota on /data (/dev/sdb)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                0          0          0     00 [--------]

Project quota on /data (/dev/sdb)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0                  0          0          0     00 [--------]
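The uquota,prjquota mount above was done by hand; to make it survive reboots, an /etc/fstab entry along these lines is commonly used (a sketch; substitute the disk's UUID for /dev/sdb if you prefer):

/dev/sdb    /data    xfs    defaults,uquota,prjquota    0 0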
Step 6: symlink /data/docker/ into /var/lib
Back up the existing docker directory under /var/lib, then create a symlink /var/lib/docker that points to /data/docker. In other words, move aside the original /var/lib/docker/ (it sits on a filesystem without directory-level quota support) and make /var/lib/docker resolve to /data/docker/, which does support directory-level quotas.
cd /var/lib
mv docker docker.bak
mkdir -p /data/docker
ln -s /data/docker/ /var/lib/
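The symlink approach works on both Docker versions here. As an aside, on the newer docker-ce release (node2's 19.03) the same effect can be achieved by pointing Docker's data directory at the new filesystem directly; a hedged sketch (the old 1.13 daemon uses the -g/--graph option instead of data-root, and an existing daemon.json should be merged rather than overwritten):

cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/data/docker"
}
EOF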
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# ps -ef |grep docker
root       3842      1  0 07:11 ?        00:00:00 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-opt overlay2.size=10G --storage-driver overlay2 -b=br0
root       3848   3842  0 07:11 ?        00:00:00 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc --runtime-args --systemd-cgroup=true
root       3929   1759  0 07:13 pts/0    00:00:00 grep --color=auto docker
#The "--storage-opt overlay2.size=10G" argument now appears in the daemon command line, so the disk limit is configured successfully
#Import the Docker image backed up earlier
[root@node1 ~]# docker image load -i /tmp/busybox.tar
#Start a container
[root@node1 ~]# docker run -itd --name=test --privileged --cpuset-cpus=0 -m 4M busybox /bin/sh
0c4465b350551011e1dfebd6f8fc057a336ff7980736c60e31871ab67c42ac42
Check the container's disk size
[root@node1 ~]# docker exec test df -Th
Filesystem           Type            Size      Used Available Use% Mounted on
overlay              overlay        10.0G      8.0K     10.0G   0% /
tmpfs                tmpfs         242.9M         0    242.9M   0% /dev
tmpfs                tmpfs         242.9M         0    242.9M   0% /sys/fs/cgroup
/dev/sdb             xfs            20.0G     33.6M     20.0G   0% /etc/resolv.conf
/dev/sdb             xfs            20.0G     33.6M     20.0G   0% /etc/hostname
/dev/sdb             xfs            20.0G     33.6M     20.0G   0% /etc/hosts
shm                  tmpfs          64.0M         0     64.0M   0% /dev/shm
/dev/sdb             xfs            20.0G     33.6M     20.0G   0% /run/secrets
#The container's root overlay filesystem is now capped at 10G
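As an optional sanity check (a sketch), try to write past the 10G overlay limit from inside the container; the write should eventually fail with "No space left on device":

[root@node1 ~]# docker exec test dd if=/dev/zero of=/bigfile bs=1M count=11264
#11264 MiB is deliberately larger than the 10G quota; clean up afterwards with: docker exec test rm /bigfile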
[root@node1 ~]# docker stats --no-stream test
CONTAINER           CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
test                0.00%               56 KiB / 4 MiB      1.37%               780 B / 734 B       0 B / 0 B           1
Notes:
1. Whether you are limiting disk, CPU, or memory, never allocate more than the host actually has. For example, this VMware VM has 4 GB of memory, 2 CPU cores, and 20 GB of quota-capable disk (the /data/ directory here is only 20 GB); since the CentOS system itself needs part of the memory, a container should be given at most about 3 GB of memory, no more than the 2 available cores (0-0, 1-1, or 0-1 are all fine), and no more than 20 GB of disk (preferably under 15 GB).
2. To limit disk resources, the disk partition must support directory-level disk quotas.
3. After the new quota-enabled disk is configured and the Docker service is restarted, Docker is re-initialized, so remember to back up your Docker images first.
I have a dream so I study hard!!!