Key points from day-to-day Docker usage [continuously updated]


Notes on problems encountered while using Docker day to day.



Docker


Notes and reminders on key points and tricky spots of Docker.


Quick installation of docker-ce

Docker download link:
https://download.docker.com/linux/static/stable/x86_64/


Extract:
tar xf docker-20.10.9.tgz


Copy the binaries into the target directory:
cp -a * /usr/bin/


Enable packet forwarding:
cat << 'EOF' >> /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
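
If sysctl reports "No such file or directory" for the net.bridge.* keys, the br_netfilter kernel module is not loaded yet; load it and persist it first (the modules-load.d file name below is just a suggestion):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf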

sysctl --system


Write the Docker daemon config:
mkdir /etc/docker
cat << 'EOF' > /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "data-root": "/data/docker",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "insecure-registries": [
    "192.168.1.200:80"
  ],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "https://hub-mirror.c.163.com"
  ]
}
EOF



Add the systemd unit file:
cat << 'EOF' > /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
EOF


Start Docker:
systemctl daemon-reload
systemctl enable docker    # enable Docker to start at boot
systemctl start docker     # start the Docker service


Check whether Docker was installed successfully:
docker info 

To get Docker command completion in bash, copy the docker completion file from a host where Docker was installed via yum into the same path on the binary-install host (see the scp sketch after this block):

# Install bash completion:
yum install -y bash-completion
# Copy this file to the same path on the host where Docker was installed from binaries:
/usr/share/bash-completion/completions/docker

# Activate:
source /usr/share/bash-completion/completions/docker
source /usr/share/bash-completion/bash_completion
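
A minimal sketch of the copy step (the address 192.168.1.50 for the yum-installed host is made up; replace it with your own):

# run on the binary-install host; 192.168.1.50 is a hypothetical yum-installed host
scp root@192.168.1.50:/usr/share/bash-completion/completions/docker /usr/share/bash-completion/completions/docker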

Avoiding multiple COPY instructions for different directories


[root@docker(192.168.1.101) ~/manifests]#ls
default.conf  Dockerfile  html/

[root@docker(192.168.1.101) ~/manifests]#cat Dockerfile
FROM nginx:alpine
COPY ./default.conf /etc/nginx/conf.d
COPY ./html /usr/share/nginx/html

Problem: since different files need to be copied into different directories, COPY has to be used twice here. Is there a way to optimize this?


Since every path on Linux starts from /, the "absolute paths" can be recreated inside the Dockerfile's build-context directory.

[root@docker(192.168.1.101) ~/manifests]#mkdir -pv etc/nginx/conf.d
[root@docker(192.168.1.101) ~/manifests]#mv default.conf etc/nginx/conf.d/

[root@docker(192.168.1.101) ~/manifests]#mkdir -pv usr/share/nginx/
[root@docker(192.168.1.101) ~/manifests]#mv html/ usr/share/nginx/

// Edit the Dockerfile
[root@docker(192.168.1.101) ~/manifests]#vim Dockerfile
FROM nginx:alpine
COPY . /

// Build the image
[root@docker(192.168.1.101) ~/manifests]#docker build -t hukey:v01 ./

By mirroring the container's absolute paths inside the build context, a single COPY places different files into their different target directories, and the image is built.
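
A quick way to verify the result, using the hukey:v01 image built above:

docker run --rm hukey:v01 ls /etc/nginx/conf.d /usr/share/nginx/html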


Timezone issues with Docker container logs

Command to start the ngx container:

docker run --name ngx -p 80:80 -d nginx:alpine

Check the logs:
(screenshot of the container log output)

Enter the container and check the timezone:

root@elk(192.168.1.105)/root> docker exec -it ngx sh
/ # date -R 
Mon, 27 Jun 2022 10:53:31 +0000    # Greenwich Mean Time (UTC)

Command to start the container with the timezone corrected:

docker run --name ngx -p 80:80 -v /etc/localtime:/etc/localtime:ro -d nginx:alpine

Check the logs again:
(screenshot of the container log output)

Enter the container and check the timezone:

root@elk(192.168.1.105)/root> docker exec -it ngx sh
/ # date -R 
Mon, 27 Jun 2022 19:12:21 +0800    # UTC+8 (China Standard Time)

In docker-compose, it is recommended to set the timezone via environment, as follows:

version: "3.7"
services:
  nginx:
    container_name: "ngx"
    image: nginx:alpine
    environment:
    - "TZ=Asia/Shanghai"
    labels:
      service: nginx
    logging:
      options:
        labels: "service"
    ports:
    - "80:80"
  httpd:
    container_name: "httpd"
    image: httpd
    environment:
    - "TZ=Asia/Shanghai"
    labels:
      service: httpd
    logging:
      options:
        labels: "service"
    ports:
    - "8080:80"

Summary

As shown above, a container's application time can be changed by adjusting its timezone. However, in the JSON log files the container produces on the host, the time field is always in UTC; keep this conversion in mind when later shipping container logs to ELK.
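
A quick way to confirm this on the host (LogPath points to the container's JSON log file):

docker inspect -f '{{.LogPath}}' ngx
tail -n 1 $(docker inspect -f '{{.LogPath}}' ngx)    # the "time" field stays in UTC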


Accessing a container's files from the host

Files inside a container can be accessed through the container's rootfs mount point.

root@elk(192.168.1.103)/root> docker inspect ngx | egrep -i mergeddir

or

root@elk(192.168.1.103)/root> docker inspect  -f '{{.GraphDriver.Data.MergedDir}}' ngx

MergedDir: contains the container's entire filesystem, including its modifications.

Once inside MergedDir, the container's files can be read directly from the host.

ls /var/lib/docker/overlay2/8f4fb49a0e948e37daa34454e781ab588ed0e1a8d1584a30833b4de555d17563/merged
bin/  dev/  docker-entrypoint.d/  docker-entrypoint.sh*  etc/  home/  lib/  media/  mnt/  opt/  proc/  root/  run/  sbin/  srv/  sys/  tmp/  usr/  var/
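
A small sketch that avoids typing the long overlay2 hash, reusing the ngx container from above:

MERGED=$(docker inspect -f '{{.GraphDriver.Data.MergedDir}}' ngx)
ls $MERGED/etc/nginx/conf.d
cat $MERGED/etc/nginx/conf.d/default.conf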

Viewing the Dockerfile an image was built from

docker history --format '{{.CreatedBy}}' --no-trunc=true <image id> | sed "s/\/bin\/sh\ -c\ \#(nop)\ //g" | sed "s/\/bin\/sh\ -c/RUN/g" | tac

Changing the timezone in Alpine-based images

FROM openjdk:8-jdk-alpine
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && \
    apk add --no-cache tzdata gcc g++ lsof curl bash procps && \
    echo "Asia/Shanghai" > /etc/timezone && \
    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    apk del tzdata
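
A minimal build-and-check sketch (the image tag jdk8-cst is made up):

docker build -t jdk8-cst .
docker run --rm jdk8-cst date    # should print UTC+8 time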

Configuring a network proxy for Docker

Some images hosted abroad may need a proxy to pull. Configure a network proxy for the Docker daemon:

mkdir -pv /etc/systemd/system/docker.service.d

vim /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.199.108:7890/"
Environment="HTTPS_PROXY=http://192.168.199.108:7890/"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"

systemctl daemon-reload; systemctl restart docker
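
To confirm the daemon picked up the proxy settings:

docker info | grep -i proxy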

Mapped container ports cannot be accessed

When a container's mapped port cannot be reached, check whether net.ipv4.ip_forward is enabled:

sysctl -a | egrep ip_forward
net.ipv4.ip_forward = 1
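
If it shows 0, enable it on the spot and persist it (same /etc/sysctl.conf approach as in the installation section):

sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf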

docker-compose


Notes and reminders on key points and tricky spots of docker-compose.


Using NFS in docker-compose


Problem: suppose one server acts as a shared data server, exporting data over NFS, and containers started with docker-compose need to access the data on that server. How can this be done?

The docker-compose v3 format allows an NFS-backed volume to be declared directly in the YAML file.


NFS server configuration:

[root@nfs-server(192.168.1.102) ~]#mkdir -pv /www/html
[root@nfs-server(192.168.1.102) ~]#echo 'hello hukey' > /www/html/index.html
[root@nfs-server(192.168.1.102) ~]#cat /etc/exports
/www/html *(rw,no_root_squash)

[root@nfs-server(192.168.1.102) ~]#systemctl start rpcbind ; systemctl start nfs

On the docker-compose host:

[root@node01(192.168.1.101) ~/manifests]#cat docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx:alpine
    ports:
    - '80:80'
    volumes:
      - type: volume
        source: html
        target: /usr/share/nginx/html
        volume:
          nocopy: true

volumes:
  html:
    name: html
    driver_opts:
      type: "nfs"
      o: "addr=192.168.1.102,nolock,soft,rw"
      device: ":/www/html"

Run:

[root@node01(192.168.1.101) ~/manifests]#docker-compose up --build -d

// Enter the container and check
[root@node01(192.168.1.101) ~/manifests]#docker exec -it c9d29e39b2ad sh
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # cat index.html
hello hukey

// Check the mount
/usr/share/nginx/html # df | grep www
:/www/html            95497216   1531904  93965312   2% /usr/share/nginx/html

Inspect the volume mounted by Docker:

[root@node01(192.168.1.101) ~/manifests]#docker volume ls
DRIVER    VOLUME NAME
local     html
[root@node01(192.168.1.101) ~/manifests]#docker inspect html
[
    {
        "CreatedAt": "2022-06-15T13:47:35+08:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "manifests",
            "com.docker.compose.version": "2.4.1",
            "com.docker.compose.volume": "html"
        },
        "Mountpoint": "/var/lib/docker/volumes/html/_data",
        "Name": "html",
        "Options": {
            "device": ":/www/html",
            "o": "addr=192.168.1.102,nolock,soft,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]

This achieves using an NFS-backed volume directly from docker-compose.


Note: when the containers are removed with docker-compose down, the volumes are not cleaned up.


[root@node01(192.168.1.101) ~/manifests]#docker-compose down
[root@node01(192.168.1.101) ~/manifests]#docker volume ls
DRIVER    VOLUME NAME
local     html


// The volume has to be removed manually
[root@node01(192.168.1.101) ~/manifests]#docker volume rm html
html
[root@node01(192.168.1.101) ~/manifests]#docker volume ls
DRIVER    VOLUME NAME
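
If the volumes should be removed together with the containers, docker-compose can also do it in one step (removes named volumes declared in the volumes section):

docker-compose down --volumes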


Differences between docker-compose on the default bridge and on a user-defined bridge


Some examples have docker-compose use the default bridge network. I tried it too, ran into a few issues, and am recording them here.


Using the default bridge

docker-compose.yml

version: "3.7"
services:
  ngx01:
    container_name: ngx01
    image: nginx:alpine
    ports:
    - 8080:80
    network_mode: bridge
  ngx02:
    container_name: ngx02
    image: nginx:alpine
    ports:
    - 8090:80
    network_mode: bridge

The docker-compose file above creates two containers, named ngx01 and ngx02, both attached to the default bridge network.

Run docker-compose up -d

Normally, containers on the same docker-compose network can reach each other by hostname.

Enter ngx01 and try to reach ngx02:

[root@kj-test(192.168.1.101) ~/manifests]#docker exec -it ngx01 sh
/ # ping ngx01
^C
/ # ping ngx02
^C

Neither hostname works, so check /etc/resolv.conf:

/ # cat /etc/resolv.conf
nameserver 223.5.5.5
nameserver 114.114.114.114

When the default bridge is used, containers on the same network cannot communicate by hostname, and the containers inherit the host's DNS servers.

The default bridge network looks like this:

[root@kj-test(192.168.1.101) ~/manifests]#docker inspect bridge
[
    {
        "Name": "bridge",
        "Id": "e381e982f4da73fd0eaaca0ce2a42c8c2d08f9998392431b82f1476f568058e9",
        "Created": "2022-06-21T15:30:18.685465006+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "60687bf12fe7eb5285112140bde73a6f65eacaa8d0c51e8067d890601a1da989": {
                "Name": "ngx02",
                "EndpointID": "6e0ab23632d296ec5c17624f56293fb2dacaacfe0749326c6359b5ae013690a1",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "af62af2506546bfecb4c71d1e1124d66dec89771cbfa7ef99eb000061f890f5f": {
                "Name": "ngx01",
                "EndpointID": "615895fca193dc912d45c953cc463ed998e6ea2c0d309f73b8e6e85042c62941",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Using a user-defined bridge network

docker-compose.yml

version: "3.7"
services:
  ngx01:
    container_name: ngx01
    image: nginx:alpine
    ports:
    - 8080:80
    networks:
    - xajs_net
  ngx02:
    container_name: ngx02
    image: nginx:alpine
    ports:
    - 8090:80
    networks:
    - xajs_net
networks:
  xajs_net:
    name: xajs_net
    driver: bridge
    ipam:
      config:
      - subnet: "172.100.0.0/16"

The docker-compose file above creates two containers, ngx01 and ngx02, and one network, xajs_net.

Run docker-compose up -d

Enter ngx01 and try to reach ngx02:

[root@kj-test(192.168.1.101) ~/manifests]#docker exec -it ngx01 sh
/ # ping -w1 -c1 ngx02
PING ngx02 (172.100.0.2): 56 data bytes
64 bytes from 172.100.0.2: seq=0 ttl=64 time=0.716 ms

--- ngx02 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.716/0.716/0.716 ms
/ # ping -w1 -c1 ngx01
PING ngx01 (172.100.0.3): 56 data bytes
64 bytes from 172.100.0.3: seq=0 ttl=64 time=0.158 ms

--- ngx01 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.158/0.158/0.158 ms

The two containers can communicate by hostname. Check /etc/resolv.conf:

/ # cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0

Only with a user-defined bridge network can containers on the same network communicate by hostname; the 127.0.0.11 nameserver above is Docker's embedded DNS server, which user-defined networks use to resolve container names.

The finer technical details will be dug into when time permits.


Summary

With Docker's default bridge, containers on the same network cannot communicate by hostname; only with a user-defined bridge network can containers on the same network reach each other by hostname.
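
The same behavior can be reproduced with the plain docker CLI outside of docker-compose (the network and container names below are made up for the test):

docker network create testnet
docker run -d --name web01 --network testnet nginx:alpine
docker run -d --name web02 --network testnet nginx:alpine
docker exec -it web01 ping -c1 web02    # resolved by the embedded DNS at 127.0.0.11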


Starting docker-compose services at boot

vim /etc/systemd/system/docker-compose-app.service

[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Set to the directory that contains docker-compose.yaml
WorkingDirectory=/data/containers
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
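
Enable it after creating the unit:

systemctl daemon-reload
systemctl enable --now docker-compose-app.service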

Starting MySQL with docker-compose

version: "3.7"
services:
  ruoyi-mysql:
    container_name: mysql
    image: mysql:5.7.18
    environment:
    - "MYSQL_ROOT_PASSWORD=ruoyi@123"
    - "MYSQL_DATABASE=ry-vue"
    - "TZ=Asia/Shanghai"
    restart: always
    volumes:
    - /apps/mysql/mydir:/mydir
    - /apps/mysql/datadir:/var/lib/mysql
    - /apps/mysql/conf/my.cnf:/etc/my.cnf
    - /apps/mysql/source:/docker-entrypoint-initdb.d    # place full-backup SQL files in this directory
    ports:
    - 3306:3306
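
A quick way to bring it up and verify, using the container name and credentials defined above:

docker-compose up -d
docker exec -it mysql mysql -uroot -pruoyi@123 -e 'show databases;'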

Starting Redis with docker-compose

version: '3'
services:
  redis:
    image: redis
    restart: always
    hostname: redis # set the container hostname
    container_name: redis
    privileged: true
    ports:
      - 6379:6379
    environment:
      TZ: Asia/Shanghai # timezone
    volumes:
      # create ./data ./conf ./logs under the current directory beforehand
      - ./data:/data
      - ./conf:/etc/redis
      - ./logs:/logs
    command: [ "redis-server", "/etc/redis/redis.conf" ]

redis.conf

cat << 'EOF' > /data/redis/conf/redis.conf
bind 0.0.0.0
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-diskless-sync-max-replicas 0
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appenddirname "appendonlydir"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
aof-timestamp-enabled no
 
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-listpack-entries 512
hash-max-listpack-value 64
list-max-listpack-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-listpack-entries 128
zset-max-listpack-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
EOF
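
A minimal run-through, assuming the docker-compose.yml lives in /data/redis (which matches the conf path above):

cd /data/redis
mkdir -pv data conf logs
docker-compose up -d
docker exec -it redis redis-cli ping    # expect: PONG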

Handling docker-compose startup-order problems

For example, Tomcat starts faster than MySQL, so Tomcat reports connection errors first every time. How can this be solved?

The simplest and most direct approach is to enable the following for the tomcat container:
restart: always
That way tomcat simply restarts, and on the second attempt it can connect to dm8.
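
A minimal sketch of what that looks like in the compose file (the tomcat image name is assumed; depends_on only controls start order, not readiness, so restart: always still does the real work):

services:
  tomcat:
    image: tomcat:9              # image name assumed
    restart: always              # keep retrying until the database accepts connections
    depends_on:
    - ruoyi-mysql                # start order only; does not wait for MySQL to be ready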