Docker Networking

Understanding the docker0 Network

Clean up the environment first
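For a disposable lab machine, everything can be wiped like this (destructive; a sketch for test environments only):

# DESTRUCTIVE: force-remove all containers, then all images
docker rm -f $(docker ps -aq)
docker rmi -f $(docker images -aq)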

Test

[image: ip addr output on the host]

Three networks appear: the loopback (lo), the host's NIC, and docker0.

# Question: how does Docker handle container network access?

[image: diagram of container network access via docker0]

# Start a container
[root@localhost ~]# docker run -d -P --name tomcat1 tomcat

# Check the container's network config with ip addr: the container gets an eth0 interface with an assigned IP.
[root@localhost ~]# docker exec -it 60ac72339c27 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
  link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
      valid_lft forever preferred_lft forever
 
# Question: can the Linux host ping into the container?
[root@localhost ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.088 ms

# The Linux host can ping the Docker container directly.
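Since the container was started with -P, its exposed ports are published on random host ports; docker port shows the mapping (a sketch, the actual host port varies):

# Show which host ports map to the container's exposed ports
docker port tomcat1
# prints something like: 8080/tcp -> 0.0.0.0:32768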

How it works

  1. Every time a Docker container starts, Docker assigns it an IP address. Installing Docker creates a docker0 interface on the host that works in bridge mode, and the underlying technique is the veth-pair!

Run ip addr on the host again:

[image: host ip addr output with a new veth interface]

  2. Start one more container and test again: yet another pair of interfaces appears.

[image: host ip addr output with another veth pair]

#The interfaces these containers bring always come in pairs.
#A veth-pair is a pair of virtual device interfaces: one end attaches to the protocol stack (eth0 in the container), the other connects to its peer on the host.
#Thanks to this property, veth-pairs act as a bridge linking all kinds of virtual network devices.
  3. Test whether tomcat1 and tomcat2 can ping each other:

[root@localhost ~]# docker exec -it tomcat2 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.099 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.060/0.079/0.099/0.021 ms

# Containers can ping each other directly!
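To match a container's eth0 with its host-side veth peer explicitly, compare interface indexes (a sketch; the index 7 is illustrative):

# eth0's iflink is the ifindex of its peer interface on the host
docker exec tomcat1 cat /sys/class/net/eth0/iflink
# Suppose it printed 7 -- find the host interface with that index
ip -o link | grep '^7:'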

Topology diagram:

[image: containers connected through the docker0 bridge]

Conclusion: tomcat1 and tomcat2 share the same gateway, docker0.

Unless a network is specified, every container is routed through docker0, and Docker assigns it an available IP from docker0's subnet by default.
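A quick check from inside a container confirms this (a sketch; 172.17.0.1 is docker0's default address):

# The container's default route points at docker0
docker exec tomcat1 ip route
# expected output: default via 172.17.0.1 dev eth0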

Summary

Docker uses Linux bridging: on the host, docker0 is a virtual bridge for Docker containers.

[image: docker0 bridge and veth-pair diagram]

All network interfaces in Docker are virtual, and virtual forwarding is fast (great for transferring files on the internal network!).

As soon as a container is deleted, its corresponding veth pair disappears.

[image: host ip addr after the container is removed]

--link


Consider a scenario: we've written a microservice with database url=ip:, and the database IP changes while the project keeps running. We'd like to handle this by accessing the container by name instead of by IP. Is that possible?

[root@localhost ~]# docker exec -it tomcat2 ping tomcat1
ping: tomcat1: Name or service not known

#How do we solve this?
#--link solves the name-based connectivity problem:
[root@localhost ~]# docker run -d -P --name tomcat3 --link tomcat2 tomcat
da8ff297e3e38fcc7cf565f0b3dff64568b05d8b6b93d7ebc97a2e34035acce4
[root@localhost ~]# docker exec -it tomcat3 ping tomcat2
PING tomcat2 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat2 (172.17.0.3): icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from tomcat2 (172.17.0.3): icmp_seq=2 ttl=64 time=0.099 ms

# Does it work in reverse?
[root@localhost ~]# docker exec -it tomcat2 ping tomcat3
ping: tomcat3: Name or service not known

Digging deeper: inspect

[image: docker inspect output for tomcat3]

In fact, tomcat3 simply has a local entry that resolves tomcat2.

#Check the hosts file to see how it works.
[root@localhost ~]# docker exec -it tomcat3 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 tomcat2 31956372a41f
172.17.0.4 da8ff297e3e3

The essence: --link just adds an entry "172.17.0.3 tomcat2 31956372a41f" to the container's hosts file.

Docker no longer recommends using --link!

Use custom networks instead of docker0!

The problem with docker0: it does not support access by container name!

 

Custom Networks


List all Docker networks:

[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
73eaa41413b8        bridge              bridge              local
05f6a4c7b08f        host                host                local
bf5d00f9b198        none                null                local

Network modes

bridge: bridged networking via a Docker bridge (the default)

none: no networking configured

host: share the network stack with the host

container: share another container's network namespace (rarely used; very limited)
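For reference, the host and container modes look like this on the command line (names are illustrative; note that two Tomcats sharing one namespace would conflict on port 8080, so the second command is only a syntax sketch):

# host mode: share the host's network stack; no -p/-P mapping involved
docker run -d --net host --name tomcat-host tomcat
# container mode: join the network namespace of an existing container
docker run -d --net container:tomcat01 --name tomcat-shared tomcat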

Test

#When we start a container without options, the implicit flag is --net bridge, which is docker0. These two commands are equivalent:
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat01 --net bridge tomcat

# docker0 limitation: container names are not resolvable by default; --link can patch this, but it's not recommended.
#So we create a custom network instead:
#--driver bridge           (use the bridge driver)
#--subnet 192.168.0.0/16   (subnet for the network)
#--gateway 192.168.0.1     (gateway address)
[root@localhost ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
0833b39ff4a0940e99d56857c200b1272ad8abb36d447c19417aaa016c9951a5

[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
73eaa41413b8        bridge              bridge              local
05f6a4c7b08f        host                host                local
3f192473e4fb        mynet               bridge              local

Inspect the new network: docker network inspect mynet

[image: docker network inspect mynet output]

 

#Create containers attached to the custom network
[root@localhost ~]# docker run -d -P --name tomcat-net01 --net mynet tomcat
ca9c3187931330417728f05d383bb8b5aa28412d678b8890575fa3e78f703351
[root@localhost ~]# docker run -d -P --name tomcat-net02 --net mynet tomcat
09a16147ec07c3b4c43e85e2da27356433e5ce6c4bdfff5f4529c2122ad5cfb9
[root@localhost ~]# docker network inspect mynet
[
  {
       "Name": "mynet",
       "Id": "0833b39ff4a0940e99d56857c200b1272ad8abb36d447c19417aaa016c9951a5",
       "Created": "2020-09-10T10:51:59.117048253+08:00",
       "Scope": "local",
       "Driver": "bridge",
       "EnableIPv6": false,
       "IPAM": {
           "Driver": "default",
           "Options": {},
           "Config": [
              {
                   "Subnet": "192.168.0.0/16",
                   "Gateway": "192.168.0.1"
              }
          ]
      },
       "Internal": false,
       "Attachable": false,
       "Ingress": false,
       "ConfigFrom": {
           "Network": ""
      },
       "ConfigOnly": false,
       "Containers": {
           "09a16147ec07c3b4c43e85e2da27356433e5ce6c4bdfff5f4529c2122ad5cfb9": {
               "Name": "tomcat-net02",
               "EndpointID": "982c0cb4174bc84e983f82add9c199436886187baa9e9b25a25d4b9d12e731c4",
               "MacAddress": "02:42:c0:a8:00:03",
               "IPv4Address": "192.168.0.3/16",
               "IPv6Address": ""
          },
           "ca9c3187931330417728f05d383bb8b5aa28412d678b8890575fa3e78f703351": {
               "Name": "tomcat-net01",
               "EndpointID": "6be67748f5a3b7df2f4bdef1b505dfd1df2ec81d9d2673675c6b411a2c6deeda",
               "MacAddress": "02:42:c0:a8:00:02",
               "IPv4Address": "192.168.0.2/16",
               "IPv6Address": ""
          }
      },
       "Options": {},
       "Labels": {}
  }
]

#Test ping again
[root@localhost ~]# docker exec -it tomcat-net01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.170 ms

#Now names resolve without --link
[root@localhost ~]# docker exec -it tomcat-net01 ping tomcat-net02
PING tomcat-net02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.103 ms
64 bytes from tomcat-net02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.100 ms

With a custom network, Docker maintains the name-to-IP mapping for us. Custom networks are the recommended approach!
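Under the hood, name resolution on a user-defined network is served by Docker's embedded DNS server, which every attached container points at (a sketch):

# Containers on custom networks use Docker's embedded DNS resolver
docker exec tomcat-net01 cat /etc/resolv.conf
# expected to contain: nameserver 127.0.0.11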

 

Connecting Networks


[root@localhost ~]# docker network --help
Usage: docker network COMMAND
Manage networks
Commands:
  connect      Connect a container to a network
  create       Create a network
  disconnect   Disconnect a container from a network
  inspect      Display detailed information on one or more networks
  ls           List networks
  prune        Remove all unused networks
  rm           Remove one or more networks
[root@localhost ~]# docker network connect --help
Usage: docker network connect [OPTIONS] NETWORK CONTAINER
Connect a container to a network
Options:
      --alias strings           Add network-scoped alias for the container
      --driver-opt strings      driver options for the network
      --ip string               IPv4 address (e.g., 172.30.100.104)
      --ip6 string              IPv6 address (e.g., 2001:db8::33)
      --link list               Add link to another container
      --link-local-ip strings   Add a link-local address for the container
[root@localhost ~]#
#Test: connect tomcat01 to mynet
[root@localhost ~]# docker network connect mynet tomcat01

#After connecting, tomcat01 is attached to the mynet network as well.
#One container, two IP addresses — like an Alibaba Cloud host with a public IP and a private IP.
[root@localhost ~]# docker network inspect mynet
[
   {
        "Name": "mynet",
        "Scope": "local",
        "Driver": "bridge",
        ...
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
        ...
        "Containers": {
            "68adf2d6db267c529c07a831771b387616deb308fe1a68ec3ae5c5320bd62c79": {
                "Name": "tomcat01",
                "EndpointID": "7d95ff4db6303a1e822d769df20dd51c7bbc13fe9cd53b38690224de2c1b62f0",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
           },
            ... (tomcat-net01 and tomcat-net02 entries unchanged from before)
       }
   }
]

#Test whether tomcat01 can now reach tomcat-net01
[root@localhost ~]# docker exec -it tomcat01 ping tomcat-net01
PING tomcat-net01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from tomcat-net01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.098 ms

#tomcat02, which is not connected to mynet, still cannot
[root@localhost ~]# docker exec -it tomcat02 ping tomcat-net01
ping: tomcat-net01: Name or service not known

Conclusion: to reach containers across networks, join them with docker network connect!
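The same command would bring tomcat02 in, and docker network disconnect reverses it (a sketch):

# Give tomcat02 access to mynet as well
docker network connect mynet tomcat02
# Detach a container from a network again
docker network disconnect mynet tomcat01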

 

Hands-on: Deploying a Redis Cluster


 

[image: Redis cluster layout — three masters, three replicas]

Shell script!

#Create a dedicated network for Redis
[root@localhost ~]# docker network create --subnet 172.38.0.0/16 redis
c7464eb062357a03a75275e37e50e13cc4f113806bbb59a880a8835ca65e008e
[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
73eaa41413b8        bridge              bridge              local
05f6a4c7b08f        host                host                local
c7464eb06235        redis               bridge              local
#Generate six Redis config files with a loop
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done

#Template run command for one node (substitute ${port} with 1-6; a loop version is sketched below)
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} -v /mydata/redis/node-${port}/data:/data -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

#Start the six nodes one by one (only nodes 1 and 6 shown)
[root@localhost ~]# docker run -p 6371:6379 -p 16371:16379 --name redis-1 -v /mydata/redis/node-1/data:/data -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

[root@localhost ~]# docker run -p 6376:6379 -p 16376:16379 --name redis-6 -v /mydata/redis/node-6/data:/data -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
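Rather than typing the command six times, the template can be wrapped in the same loop style used for the configs (a sketch, equivalent to the manual commands above):

for port in $(seq 1 6); do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 \
redis-server /etc/redis/redis.conf
done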

#Create the cluster from inside one of the nodes
[root@localhost ~]# docker exec -it redis-1 /bin/sh
/data # ls
appendonly.aof nodes.conf
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: a0517457e11a0864acf3c3b926f1f91ceac91e12 172.38.0.11:6379
  slots:[0-5460] (5461 slots) master
M: 3ff653ff4cf7675dba9cb589afcda22887bd08f1 172.38.0.12:6379
  slots:[5461-10922] (5462 slots) master
M: 200d88d03dc389f09185107dc7e6b818849f2847 172.38.0.13:6379
  slots:[10923-16383] (5461 slots) master
S: 1e932f4c62cff89381217d4bdb75122af7d54f7d 172.38.0.14:6379
  replicates 200d88d03dc389f09185107dc7e6b818849f2847
S: 811d492546458c922338ce98c9015aa423897de2 172.38.0.15:6379
  replicates a0517457e11a0864acf3c3b926f1f91ceac91e12
S: 9dfa8b6b77cf1b95ebec69a64cc363e85b361a75 172.38.0.16:6379
  replicates 3ff653ff4cf7675dba9cb589afcda22887bd08f1
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: a0517457e11a0864acf3c3b926f1f91ceac91e12 172.38.0.11:6379
  slots:[0-5460] (5461 slots) master
  1 additional replica(s)
S: 9dfa8b6b77cf1b95ebec69a64cc363e85b361a75 172.38.0.16:6379
  slots: (0 slots) slave
  replicates 3ff653ff4cf7675dba9cb589afcda22887bd08f1
M: 3ff653ff4cf7675dba9cb589afcda22887bd08f1 172.38.0.12:6379
  slots:[5461-10922] (5462 slots) master
  1 additional replica(s)
S: 811d492546458c922338ce98c9015aa423897de2 172.38.0.15:6379
  slots: (0 slots) slave
  replicates a0517457e11a0864acf3c3b926f1f91ceac91e12
M: 200d88d03dc389f09185107dc7e6b818849f2847 172.38.0.13:6379
  slots:[10923-16383] (5461 slots) master
  1 additional replica(s)
S: 1e932f4c62cff89381217d4bdb75122af7d54f7d 172.38.0.14:6379
  slots: (0 slots) slave
  replicates 200d88d03dc389f09185107dc7e6b818849f2847
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


#Connect with redis-cli in cluster mode (-c) and check the cluster state
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:1287
cluster_stats_messages_pong_sent:1309
cluster_stats_messages_sent:2596
cluster_stats_messages_ping_received:1304
cluster_stats_messages_pong_received:1287
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:2596
127.0.0.1:6379> cluster nodes
a0517457e11a0864acf3c3b926f1f91ceac91e12 172.38.0.11:6379@16379 myself,master - 0 1599716363000 1 connected 0-5460
9dfa8b6b77cf1b95ebec69a64cc363e85b361a75 172.38.0.16:6379@16379 slave 3ff653ff4cf7675dba9cb589afcda22887bd08f1 0 1599716364887 6 connected
3ff653ff4cf7675dba9cb589afcda22887bd08f1 172.38.0.12:6379@16379 master - 0 1599716364000 2 connected 5461-10922
811d492546458c922338ce98c9015aa423897de2 172.38.0.15:6379@16379 slave a0517457e11a0864acf3c3b926f1f91ceac91e12 0 1599716365395 5 connected
200d88d03dc389f09185107dc7e6b818849f2847 172.38.0.13:6379@16379 master - 0 1599716364379 3 connected 10923-16383
1e932f4c62cff89381217d4bdb75122af7d54f7d 172.38.0.14:6379@16379 slave 200d88d03dc389f09185107dc7e6b818849f2847 0 1599716364582 4 connected

#Test the cluster
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
172.38.0.13:6379> get a
"b"
#Stop the container for 172.38.0.13 (redis-3), then run get a again
172.38.0.13:6379> get a
^C
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"

#Check the nodes: the former replica 172.38.0.14 has been promoted to master. Failover works.
172.38.0.14:6379> cluster nodes
9dfa8b6b77cf1b95ebec69a64cc363e85b361a75 172.38.0.16:6379@16379 slave 3ff653ff4cf7675dba9cb589afcda22887bd08f1 0 1599717001507 6 connected
200d88d03dc389f09185107dc7e6b818849f2847 172.38.0.13:6379@16379 master,fail - 1599716762868 1599716762563 3 connected
a0517457e11a0864acf3c3b926f1f91ceac91e12 172.38.0.11:6379@16379 master - 0 1599717002000 1 connected 0-5460
1e932f4c62cff89381217d4bdb75122af7d54f7d 172.38.0.14:6379@16379 myself,master - 0 1599717000000 7 connected 10923-16383
3ff653ff4cf7675dba9cb589afcda22887bd08f1 172.38.0.12:6379@16379 master - 0 1599717003638 2 connected 5461-10922
811d492546458c922338ce98c9015aa423897de2 172.38.0.15:6379@16379 slave a0517457e11a0864acf3c3b926f1f91ceac91e12 0 1599717002623 5 connected

The Redis cluster on Docker is up and running!


 

Packaging a Spring Boot Microservice as a Docker Image


  1. Create a Spring Boot project

  2. Package the project into a jar

  3. Write a Dockerfile:

     

    FROM java:8
    COPY *.jar /app.jar
    CMD ["--server.port=8080"]
    EXPOSE 8080
    ENTRYPOINT ["java","-jar","/app.jar"]
  4. Build the image (see the sketch after this list)

  5. Run and publish it (see the sketch after this list)
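A sketch of steps 4 and 5 (the image and container names are illustrative):

# 4. Build the image in the directory containing the Dockerfile and the jar
docker build -t springboot-demo .
# 5. Run the image and verify it responds
docker run -d -p 8080:8080 --name springboot-demo springboot-demo
curl localhost:8080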

 

With Docker, what we hand over to others is simply an image!

End of Docker notes, part 1.

 

Question: what if there are lots of images? Say, 1,000 images?

 


