Docker single-host networking
When you install Docker, it automatically creates three networks. You can list them with the docker network ls command:
[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
57bd5f150d9a        bridge              bridge              local
012889c9eb3c        host                host                local
a549e6efeedc        none                null                local
When creating a container with docker run, the --net option selects the container's network mode. Docker supports four modes:
host mode: specified with --net=host.
none mode: specified with --net=none.
bridge mode: specified with --net=bridge; this is the default.
container mode: specified with --net=container:NAME_or_ID.
host mode:
When a container is started in host mode, it does not get its own Network Namespace; instead it shares the host's. The container does not get a virtual NIC or its own IP configuration; it uses the host's IP addresses and ports directly. Other aspects of the container, such as the filesystem and process list, remain isolated from the host.
[root@localhost ~]# docker run -itd --net=host --name net1 centos7 bash
4ad38fca08fe5fa989b89420571a299f7b6721bb6dc483a2039fb1ea546a85da
[root@localhost ~]# docker exec net1 bash -c 'ifconfig'
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:e6ff:fe40:ac8e  prefixlen 64  scopeid 0x20<link>
        ether 02:42:e6:40:ac:8e  txqueuelen 0  (Ethernet)
        RX packets 2608  bytes 140085 (136.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2997  bytes 13384426 (12.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.191  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::f7c9:a526:bc5d:3cc2  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:c0:ec:44  txqueuelen 1000  (Ethernet)
        RX packets 1385581  bytes 759381045 (724.2 MiB)
        RX errors 0  dropped 184  overruns 0  frame 0
        TX packets 125817  bytes 11694475 (11.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 4  bytes 404 (404.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4  bytes 404 (404.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
container mode:
In container mode, the new container shares a Network Namespace with an existing container rather than with the host.
That is, the new container does not create its own NIC or configure its own IP; it shares the specified container's IP address, port range, and so on. Apart from networking, the two containers remain isolated in everything else, such as the filesystem and process list.
# Create the net2 container; its IP is 172.17.0.2
[root@localhost ~]# docker run -itd --name net2 centos7 bash
340ece9d752d2ce00993a91383c8bae6d53d5fe5602d1d596701f8aec1fb267f
[root@localhost ~]# docker exec net2 bash -c 'ifconfig'
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# Create the net3 container in container mode; its IP is the same as net2's
[root@localhost ~]# docker run -itd --net=container:net2 --name net3 centos7 bash
9a7b90d8f328e58f301285300949b515d59ddaad747b9750b5bd5a0c700c4964
[root@localhost ~]# docker exec net3 bash -c 'ifconfig'
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
none mode:
In none mode the container gets its own network namespace, but Docker performs no network configuration at all: the container has no NIC, IP address, or routes. You must add network interfaces and configuration by hand; a typical approach is the pipework tool, which can assign an IP and related settings to the container.
[root@localhost ~]# docker run -itd --net=none --name net4 centos7 bash
7c5e3e04e701837e6907e95ce8139d9f756cf40d1fb1eb34aaf635e8968b34b9
[root@localhost ~]# docker exec net4 bash -c 'ifconfig'
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
bridge mode:
bridge is Docker's default network mode: if no --net option is given, bridge mode is used. In this mode Docker gives each container its own network namespace and configures an IP address, routes, and so on, connecting the container to a virtual bridge named docker0 by default.
How a Docker bridge connection is set up:
1. Docker creates a pair of virtual interfaces (a veth pair) on the host. The pair forms a data pipe: packets entering one end come out the other, so veth devices are commonly used to connect two network devices.
2. Docker places one end of the pair inside the new container and names it eth0; the other end stays on the host under a name like vethXXX and is attached to the docker0 bridge (visible with brctl show).
3. Docker allocates an IP for the container from docker0's subnet and sets docker0's IP address as the container's default gateway.
4. At this point the container and host can reach each other, and in bridge mode containers attached to the same bridge can also talk to one another. Containers can reach external networks, but external hosts cannot reach a container's IP directly; a container port must be published to a host IP and port via NAT.
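Steps 1-3 above can be reproduced by hand with iproute2. The following is a minimal sketch, assuming root privileges; the names v-host, v-cont, and br-demo, and the 172.30.0.0/24 subnet, are made up for illustration (Docker generates its own vethXXX names and uses docker0's subnet):

```shell
# Step 1: create a veth pair -- a data pipe with two ends
ip link add v-host type veth peer name v-cont
# Step 2: attach the host-side end to a bridge (a stand-in for docker0)
ip link add br-demo type bridge
ip link set v-host master br-demo
ip link set v-host up
ip link set br-demo up
# Step 3: give the bridge an address that would act as the containers' gateway
ip addr add 172.30.0.1/24 dev br-demo
# Both ends of the pair are now visible on the host
ip link show v-host
ip link show v-cont
# Clean up the demo devices (deleting one veth end removes its peer too)
ip link del v-host
ip link del br-demo
```

What Docker does in addition is move the v-cont end into the container's network namespace and rename it eth0, which is why only the vethXXX end shows up in brctl show on the host.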
[root@localhost ~]# docker run -itd --name net5 --net=bridge centos7 bash
5ba8509815efe90db673bec55047198877ad1986ca478a70dee7d648ee854d44
[root@localhost ~]# docker exec net5 bash -c 'ifconfig eth0'
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@localhost ~]# docker exec net5 bash -c 'ping www.baidu.com'   # reach the outside world from the container
PING www.a.shifen.com (180.97.33.107) 56(84) bytes of data.
64 bytes from 180.97.33.107 (180.97.33.107): icmp_seq=1 ttl=53 time=18.3 ms
64 bytes from 180.97.33.107 (180.97.33.107): icmp_seq=2 ttl=53 time=19.1 ms
[root@localhost ~]# docker exec net5 bash -c 'route -n'   # the container's gateway is the host's docker0 bridge
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
When a container runs a network application that must be reachable from outside, use the -p or -P option to publish a port mapping.
[root@localhost ~]# docker run -itd -p 5000:80 --name web nginx
46323ac3a1989d41613064dd9539ec014e500b92b51bd68de17c438cb5fa9fda
[root@localhost ~]# curl -I 127.0.0.1:5000
HTTP/1.1 200 OK
Server: nginx/1.15.6
Date: Wed, 21 Nov 2018 05:19:32 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 06 Nov 2018 13:32:09 GMT
Connection: keep-alive
ETag: "5be197d9-264"
Accept-Ranges: bytes

[root@localhost ~]# docker port web
80/tcp -> 0.0.0.0:5000
自定义docker网络:
我们可通过bridge
驱动创建类似前面默认的bridge
网络
[root@localhost ~]# docker network create --driver bridge --subnet 192.168.100.0/24 --gateway 192.168.100.1 my_net
1ae5f1b58076113dfac6c53b1174a56dabdbc51676bda85bf2d52cd76bc343da
[root@localhost ~]# docker network inspect my_net
[
{
"Name": "my_net",
"Id": "1ae5f1b58076113dfac6c53b1174a56dabdbc51676bda85bf2d52cd76bc343da",
"Created": "2018-11-21T13:33:36.526223383+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.100.0/24",
"Gateway": "192.168.100.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
View the custom bridge:
[root@localhost ~]# brctl show
bridge name       bridge id           STP enabled   interfaces
br-1ae5f1b58076   8000.0242d4fadd5f   no
docker0           8000.0242e640ac8e   no            vetha201371
                                                    vethd99013e
[root@localhost ~]# ifconfig br-1ae5f1b58076
br-1ae5f1b58076: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.100.1  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 02:42:d4:fa:dd:5f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Start a container and select the custom network with --network:
[root@localhost ~]# docker run -itd --name test1 --network=my_net centos7 bash
b16e32bc80decffa54593e66ccd3f57796e5f4f11960403851e722c0e462194d
[root@localhost ~]# docker exec test1 bash -c 'ifconfig eth0'
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.2  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 02:42:c0:a8:64:02  txqueuelen 0  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Only networks created with --subnet allow assigning a static IP:
[root@localhost ~]# docker run -itd --name test2 --network=my_net --ip 192.168.100.254 centos7 bash
0398f3a337c50111139871569809e922c27073c2011e0f36d6eda50af4d84454
[root@localhost ~]# docker exec test2 bash -c 'ifconfig eth0'
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.254  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 02:42:c0:a8:64:fe  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 578 (578.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Containers on a user-defined network can reach each other by container name:
[root@localhost ~]# docker run -itd --name bs1 --network=my_net --ip 192.168.100.10 busybox
56a1be29e29d3c258608abd4f16203270b648d7c25dd448aa089f1950236a4d1
[root@localhost ~]# docker run -itd --name bs2 --network=my_net --ip 192.168.100.20 busybox
7e5278889c346ab7ea4832a673d728c31fd3624bff722b98857afd7ce148852b
[root@localhost ~]# docker exec bs1 ping bs2
PING bs2 (192.168.100.20): 56 data bytes
64 bytes from 192.168.100.20: seq=0 ttl=64 time=0.119 ms
64 bytes from 192.168.100.20: seq=1 ttl=64 time=0.047 ms
Using pipework to put containers on the same subnet as the host:
By default Docker provides an isolated internal network: on startup it creates the docker0 virtual bridge, and every container attaches to it. docker0 uses the 172.17.0.0/16 range, so other machines on the host's subnet cannot reach containers directly. To put containers on the same subnet as the host, we build our own bridge, attach the host's NIC to it, and then assign the container an IP on that subnet.
Create the bridge interface br0:
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.0.191
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
Modify the host NIC configuration:
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=a284e06a-34b8-4e84-8f18-446c18ccf44f
DEVICE=ens33
ONBOOT=yes
#IPADDR=192.168.0.191
#NETMASK=255.255.255.0
#GATEWAY=192.168.0.1
BRIDGE=br0
Restart networking and check br0's IP address:
[root@localhost ~]# systemctl restart network
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 00:0c:29:c0:ec:44 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:57:0c:b1:67 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:57ff:fe0c:b167/64 scope link
valid_lft forever preferred_lft forever
21: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0c:29:c0:ec:44 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.191/24 brd 192.168.0.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::746c:4cff:fe34:f740/64 scope link
valid_lft forever preferred_lft forever
Install pipework:
[root@localhost ~]# git clone https://github.com/jpetazzo/pipework.git
Cloning into 'pipework'...
remote: Enumerating objects: 501, done.
remote: Total 501 (delta 0), reused 0 (delta 0), pack-reused 501
Receiving objects: 100% (501/501), 172.97 KiB | 90.00 KiB/s, done.
Resolving deltas: 100% (264/264), done.
[root@localhost ~]# cp pipework/pipework /usr/local/bin/
Use pipework to assign the container an IP and test connectivity:
[root@localhost ~]# docker run -itd --net=none --name=test5 busybox
7c4e568ad87b31bc0d20e652c6cbf2a1e9a92d8ec9cfa9f8bdf7ef1ebc6d55e6
[root@localhost ~]# pipework br0 test5 192.168.0.195/24@192.168.0.1
[root@localhost ~]# docker exec test5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
22: eth1@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue qlen 1000
    link/ether ea:38:6e:34:5e:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.195/24 brd 192.168.0.255 scope global eth1
       valid_lft forever preferred_lft forever
[root@localhost ~]# docker exec test5 ping www.baidu.com
PING www.baidu.com (103.235.46.39): 56 data bytes
64 bytes from 103.235.46.39: seq=0 ttl=45 time=220.568 ms
64 bytes from 103.235.46.39: seq=1 ttl=45 time=223.242 ms