Docker Basics (Part 2)

III. Docker Single-Host Networking

1. Introduction to container network virtualization

Inside every container we can see resources such as a file system and network interfaces, and these resources appear to belong to the container itself. Take the NIC: each container believes it has its own independent network card, even if the host has only one physical NIC. This is a good thing: it makes the container feel much more like an independent computer.

The Linux technology behind this is the namespace. Namespaces manage resources that are globally unique on the host, and let each container believe it is the only one using them. In other words, namespaces implement resource isolation between containers.

Linux uses six kinds of namespaces, one per resource type: Mount, UTS, IPC, PID, Network and User.
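As a quick check (a minimal sketch; output omitted, and the container name is a placeholder), the namespaces of any process can be listed under /proc; two processes share a namespace exactly when the corresponding links show the same inode number:

[root@node1 ~]# ls -l /proc/$$/ns
#compare with a container's entry process, whose host PID can be found with:
[root@node1 ~]# docker inspect -f '{{.State.Pid}}' <container name>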
Mount

The Mount namespace makes the container appear to own an entire file system.

A container has its own / directory and can run mount and umount commands. Of course, these operations only take effect inside the current container and do not affect the host or other containers.

UTS

Simply put, the UTS namespace gives the container its own hostname. By default a container's hostname is its short ID; it can be set with the -h or --hostname option.

PID

The PID namespace gives each container its own independent set of PIDs.
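For example (a minimal sketch), a process listing inside a fresh container shows the container's entry process as PID 1, while on the host the same process has an ordinary high PID:

[root@node1 ~]# docker run --rm busybox:latest ps
#busybox's ps lists all processes by default; inside the container, ps itself is PID 1's child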

Network

none

The none network is a network with nothing in it. A container attached to it has no NIC other than lo. Use --network=none at container creation to select it.

host

A container attached to the host network shares the Docker host's network stack; the container's network configuration is identical to the host's. Use --network=host to select it.

The biggest benefit of using the Docker host's network directly is performance: if a container has high demands on network throughput, host networking is a good choice. The trade-off is some flexibility; for example, you must watch out for port conflicts, since ports already in use on the Docker host cannot be reused.

bridge

When Docker is installed it creates a Linux bridge named docker0. If --network is not specified, newly created containers are attached to docker0 by default.

Besides the three automatically created networks none, host and bridge, users can also create user-defined networks as the business requires.

User

The User namespace lets a container manage its own users; the host cannot see users created inside the container. If the user cloudman is created inside a container, no corresponding user appears on the host.

So far, container IPs have all been assigned automatically by Docker from the subnet. Can we assign a static IP instead?

Yes: specify it with --ip.

Note: a static IP can only be assigned on networks created with --subnet.
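A minimal sketch of both options together (the network name and addresses here are illustrative):

[root@node1 ~]# docker network create --subnet 172.25.0.0/24 my_net2
[root@node1 ~]# docker run --name t-static --network my_net2 --ip 172.25.0.10 -it --rm busybox:latest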

When Docker is installed it automatically creates three networks on the host, which we can list with docker network ls:

[root@node1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
df5b7970b3a8        bridge              bridge              local
abd9c40d7983        host                host                local
6aad0b2dd7bb        none                null                local
[root@node1 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:acff:fe87:fd09  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:87:fd:09  txqueuelen 0  (Ethernet)
        RX packets 16  bytes 1176 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24  bytes 1772 (1.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.10  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::ab2e:4f4:b96b:27d8  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:7e:60:50  txqueuelen 1000  (Ethernet)
        RX packets 115204  bytes 160314892 (152.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61790  bytes 8955801 (8.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 12  bytes 1404 (1.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 1404 (1.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth22741b7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::b010:b5ff:fef6:793a  prefixlen 64  scopeid 0x20<link>
        ether b2:10:b5:f6:79:3a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 508 (508.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
#veth22741b7 is a virtual NIC created along with the container: one end of the pair is inside the container, the other on the host
#install bridge-utils to inspect the bridge
[root@node1 ~]#  yum install -y bridge-utils
[root@node1 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242ac87fd09	no		veth22741b7
#this is the host-side half of the pair; the other half is inside the container
[root@node1 ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:7e:60:50 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:87:fd:09 brd ff:ff:ff:ff:ff:ff
19: veth22741b7@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether b2:10:b5:f6:79:3a brd ff:ff:ff:ff:ff:ff link-netnsid 0
#when Docker creates a container or network, it automatically generates a set of iptables rules
[root@node1 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0    

2. Network model theory

On another machine without Docker, the ip command alone can simulate network namespaces:

[root@node3 ~]# rpm -q iproute
iproute-4.11.0-14.el7.x86_64
[root@node3 ~]# ip 
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |
                   vrf }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec |
                    -f[amily] { inet | inet6 | ipx | dnet | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -a[ll] | -c[olor]}
#adding network namespaces
[root@node3 ~]# ip netns help
Usage: ip netns list	#list namespaces
       ip netns add NAME	#add one
       ip netns set NAME NETNSID  #set its id
       ip [-all] netns delete [NAME]  #delete a namespace
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...  #run a command inside a namespace
       ip netns monitor
       ip netns list-id
#managed this way, only the network namespace is isolated; everything else is still shared
[root@node3 ~]# ip netns add r1
[root@node3 ~]# ip netns add r2
[root@node3 ~]# ip netns list
r2
r1
#no NIC has been configured: by default there is only an lo device, not yet up, so -a is needed to show all interfaces
[root@node3 ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
#create a veth pair (virtual NIC pair)
[root@node3 ~]# ip link add name veth1.1 type veth peer name veth1.2
[root@node3 ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:ac:80:98 brd ff:ff:ff:ff:ff:ff
3: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether d2:07:70:74:78:31 brd ff:ff:ff:ff:ff:ff
4: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether e6:2e:77:c5:92:f0 brd ff:ff:ff:ff:ff:ff
#move veth1.2 into r1
[root@node3 ~]# ip link set dev veth1.2 netns r1
[root@node3 ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:ac:80:98 brd ff:ff:ff:ff:ff:ff
4: veth1.1@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether e6:2e:77:c5:92:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
#veth1.2 has been moved into r1; list r1's interfaces
[root@node3 ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1.2: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether d2:07:70:74:78:31  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
#rename veth1.2 inside r1 (the interface must be down for the rename)
[root@node3 ~]# ip netns exec r1 ip link set dev veth1.2 name th0
[root@node3 ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

th0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether d2:07:70:74:78:31  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
#bring up veth1.1 on the host with an address
[root@node3 ~]# ifconfig veth1.1 10.2.0.1/24 up
[root@node3 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ac:80:98 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.30/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::15d9:b011:9226:47ac/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: veth1.1@if3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether e6:2e:77:c5:92:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.2.0.1/24 brd 10.2.0.255 scope global veth1.1
       valid_lft forever preferred_lft forever
#bring up the other half inside r1
[root@node3 ~]# ip netns exec r1 ifconfig th0 10.2.0.2/24 up
[root@node3 ~]# ip netns exec r1 ifconfig 
th0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.2.0.2  netmask 255.255.255.0  broadcast 10.2.0.255
        inet6 fe80::d007:70ff:fe74:7831  prefixlen 64  scopeid 0x20<link>
        ether d2:07:70:74:78:31  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
#ping test from the host
[root@node3 ~]# ping 10.2.0.2
PING 10.2.0.2 (10.2.0.2) 56(84) bytes of data.
64 bytes from 10.2.0.2: icmp_seq=1 ttl=64 time=0.064 ms
64 bytes from 10.2.0.2: icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from 10.2.0.2: icmp_seq=3 ttl=64 time=0.054 ms
64 bytes from 10.2.0.2: icmp_seq=4 ttl=64 time=0.052 ms
^C
--- 10.2.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.051/0.055/0.064/0.007 ms
#move veth1.1 into r2 as well and test
[root@node3 ~]# ip link set dev veth1.1 netns r2
[root@node3 ~]# ip netns exec r2 ifconfig veth1.1 10.2.0.3/24 up
[root@node3 ~]#  ip netns exec r2 ping 10.2.0.2
PING 10.2.0.2 (10.2.0.2) 56(84) bytes of data.
64 bytes from 10.2.0.2: icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from 10.2.0.2: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 10.2.0.2: icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from 10.2.0.2: icmp_seq=4 ttl=64 time=0.051 ms
^C
--- 10.2.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.051/0.055/0.065/0.010 ms
#this completes creating and wiring a veth pair by hand; the ip command can configure and move interfaces manually
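#cleanup (a sketch): deleting a namespace also destroys the interfaces that were moved into it
[root@node3 ~]# ip netns delete r1
[root@node3 ~]# ip netns delete r2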
####################################################################

3. Docker network models

Docker has four network models:
closed container: only the lo interface, no external connectivity
bridged container: bridged mode, attached through the docker0 NAT bridge
joined container: shares the UTS, NET and IPC namespaces, while Mount, User and PID stay its own ("federated" networking)
open container: open networking, an extension of the joined model

When creating a container, choose a network with --network; the default is bridge.
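The default bridge's subnet and gateway can be checked before choosing; a minimal sketch:

[root@node1 ~]# docker network inspect bridge
#or print just the subnet:
[root@node1 ~]# docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' bridge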

1. bridge container

Create and start a container that is removed on exit (--rm), using the bridge network (--network bridge, the default):

[root@node1 ~]# docker run --name t1  --network bridge  -it --rm busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03  
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:508 (508.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
#check the IP on the host
/ # exit
[root@node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:7e:60:50 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bcd1:23b:c15b:3c72/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::ab2e:4f4:b96b:27d8/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:87:fd:09 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe87:fd09/64 scope link 
       valid_lft forever preferred_lft forever
#docker0's IP is 172.17.0.1: this is the virtual gateway of Docker's bridge network, and all containers created on the bridge network get addresses in this subnet
#the container hostname defaults to the container ID; this container was started with -h white.com
/ # hostname
white.com
#-h can be given at container start; it also generates the matching local entry in /etc/hosts. DNS resolution defaults to the same resolvers as the host
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.3	white.com white
#specify a DNS server with --dns
[root@node1 ~]#  docker run --name t1 --network bridge -h white.com --dns 114.114.114.114 -it  --rm busybox:latest
/ # cat /etc/resolv.conf 
nameserver 114.114.114.114
#specify a DNS search domain with --dns-search
[root@node1 ~]#  docker run --name t1 --network bridge -h white.com --dns 114.114.114.114 --dns-search ilinux.io -it --rm busybox:latest
/ # cat /etc/resolv.conf 
search ilinux.io
nameserver 114.114.114.114
#inject a hosts record automatically with --add-host
[root@node1 ~]#  docker run --name t1 --network bridge -h white.com --dns 114.114.114.114 --dns-search ilinux.io --add-host www.baidu.com:10.0.0.22 -it --rm busybox:latest
/ # cat /etc/hosts 
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
10.0.0.22	www.baidu.com
172.17.0.3	white.com white
###Ports
#exposing ports: a service such as nginx needs port 80 reachable to serve web access
-p <containerPort>  maps the given container port to a dynamic port on all host addresses (dynamic ports start at 32768)
-p <hostPort>:<containerPort>  maps the container port to the given host port
-p <ip>::<containerPort>  maps the container port to a dynamic port on the given host IP
-p <ip>:<hostPort>:<containerPort>  maps the container port to the given IP and port on the host
A dynamic port is a random port; look it up with docker port.

#start an httpd service and expose port 80
[root@node1 ~]# docker run --name myweb -p 80 --rm xiaobai20201/httpd:v0.2
#the exposed host port is assigned dynamically
#check from a second terminal
[root@node1 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           
MASQUERADE  tcp  --  172.17.0.3           172.17.0.3           tcp dpt:80

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:32768 to:172.17.0.3:80
#starting the container automatically generated Docker's firewall rules
#look up the container's IP
[root@node1 ~]# docker inspect myweb
Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
#because of the DNAT port mapping, we reach the service via the Docker host's address and the mapped port (host IP 10.0.0.10)
So browse to http://10.0.0.10:32768

#kill the myweb container and confirm the DNAT rule is removed from iptables
[root@node1 ~]# docker kill myweb
myweb
[root@node1 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0   
#after the container stops, the firewall rules are cleaned up automatically

2. closed container

Only the lo network by default: --network none

A Docker container's hostname defaults to its container ID:

[root@node1 ~]#  docker run --name t1 --network none -it  --rm busybox:latest
/ # ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
#only the lo network is present by default
#the container hostname defaults to the container ID
/ # hostname
36654003ba6d
#-h can be given at container start; it also generates the matching /etc/hosts entry, and DNS resolution defaults to the host's resolvers
[root@node1 ~]#  docker run --name t1 --network none -h white.com -it  --rm busybox:latest
/ # hostname
white.com

Start myweb again, pinning the host IP, with a random host port mapped to container port 80:

[root@node1 ~]# docker run --name myweb --rm -p 10.0.0.10::80 xiaobai20201/httpd:v0.2
#from a second terminal
[root@node1 ~]# docker port myweb
80/tcp -> 10.0.0.10:32768
#a dynamic port on the given host IP is mapped to the container's port 80

Start myweb again with a fixed host port on all host IPs, mapped to container port 80:

[root@node1 ~]# docker run --name myweb --rm -p 8010:80 xiaobai20201/httpd:v0.2
#from a second terminal
[root@node1 ~]# docker port myweb
80/tcp -> 0.0.0.0:8010
#port 8010 on all host IPs is mapped to the container's port

Start myweb again with both the host IP and the host port fixed, mapped to container port 80:

[root@node1 ~]# docker run --name myweb --rm -p 10.0.0.10:8010:80 xiaobai20201/httpd:v0.2
#from a second terminal
[root@node1 ~]# docker port myweb
80/tcp -> 10.0.0.10:8010
#port 8010 on the given host IP is mapped to the container's port

To expose several ports (the ones the service actually listens on), repeat -p for each.

Uppercase -P publishes all exposed ports.
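A minimal sketch of -P (it publishes every port the image EXPOSEs to dynamic host ports; this assumes the image declares EXPOSE 80):

[root@node1 ~]# docker run --name myweb --rm -P xiaobai20201/httpd:v0.2
#from a second terminal, list the resulting mappings
[root@node1 ~]# docker port myweb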

3. Joined containers

Shared networking.

Start two containers (using the default bridge mode); joining is done with --network container:<container name>:

[root@node1 ~]# docker run --name b1 -it --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03  
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
[root@node1 ~]# docker run --name b2 -it --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:04  
          inet addr:172.17.0.4  Bcast:172.17.255.255  Mask:255.255.0.0
#by default the two containers have separate, isolated addresses; stop b2 and recreate it with --network container:b1
[root@node1 ~]# docker run --name b2 -it --network container:b1 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03  
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
#now b1 and b2 share the same network; the same eth0 is visible from either container:
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03  
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
#the network is shared, but the file systems remain isolated
#in b2:
/ #  echo "test1 "> /tmp/index.html
/ # httpd -h /tmp
/ # netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 :::80                   :::*                    LISTEN  
#b2 is now listening on port 80
#in b1:
/ # wget -O - -q 127.0.0.1
test1 
#they reach each other over loopback; with the NET and IPC namespaces shared, the effect is like two processes on one host
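One practical use of a joined network is inspecting another container's sockets without installing tools into it; a minimal sketch against the b1 container above:

[root@node1 ~]# docker run --rm --network container:b1 busybox:latest netstat -lnt
#the throwaway container sees b1's listeners because they share the NET namespace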

4. open container

Open container networking: --network host

Start the container again with its network set to the host's:

[root@node1 ~]# docker run --name b2 -it --network host --rm busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:AC:87:FD:09  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe87:fd09/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:28 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2099 (2.0 KiB)  TX bytes:3155 (3.0 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:7E:60:50  
          inet addr:10.0.0.10  Bcast:10.0.0.255  Mask:255.255.255.0
#this container's network is the host's network
#verify
/ # echo "hello buasss" >/tmp/index.html
/ # httpd -h /tmp
/ # netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      
tcp        0      0 :::80                   :::*                    LISTEN      
tcp        0      0 :::22                   :::*                    LISTEN      
tcp        0      0 ::1:25                  :::*                    LISTEN 
#check on the host
[root@node1 ~]# netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN     
tcp6       0      0 :::80                   :::*                    LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     
tcp6       0      0 ::1:25                  :::*                    LISTEN    
[root@node1 ~]# wget -O - -q 10.0.0.10
hello buasss

5. Extras

1) Modifying the docker0 bridge's network properties
(example) : /etc/docker/daemon.json
{
	"bip": "192.168.2.1/24",      #bridge IP; the key setting: once bip is set, everything except DNS can be derived from it
	"fixed-cidr": "10.2.0.0/16",  #subnet that container addresses are allocated from
	"mtu": 1500,
	"default-gateway": "10.2.0.1", #default gateway
	"dns": ["10.2.0.2","10.2.0.3"]   #DNS server addresses
}
#(the # annotations are for illustration only; real JSON does not allow comments)
#verify
[root@node1 ~]# vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://xhszfb4i.mirror.aliyuncs.com"],
  "bip": "172.10.2.1/24"
}
#reload and restart the daemon, then check on the host: docker0's address has changed
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:7e:60:50 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/24 brd 10.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bcd1:23b:c15b:3c72/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::ab2e:4f4:b96b:27d8/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:ac:87:fd:09 brd ff:ff:ff:ff:ff:ff
    inet 172.10.2.1/24 brd 172.10.2.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe87:fd09/64 scope link 
       valid_lft forever preferred_lft forever
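A newly created container should now get an address from the bip subnet; a quick check (a minimal sketch):

[root@node1 ~]# docker run --rm busybox:latest ifconfig eth0
#the inet addr should fall inside 172.10.2.0/24, with gateway 172.10.2.1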
2) Allowing external machines to access the Docker daemon
#configuration
[root@node1 ~]# systemctl stop docker
[root@node1 ~]# vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://xhszfb4i.mirror.aliyuncs.com"],
  "bip": "172.10.2.1/24",
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl start docker

[root@node1 ~]# ss -lnt
State      Recv-Q Send-Q           Local Address:Port                          Peer Address:Port              
LISTEN     0      100                  127.0.0.1:25                                       *:*                  
LISTEN     0      128                          *:22                                       *:*                  
LISTEN     0      128                         :::2375                                    :::*                  
LISTEN     0      128                         :::22                                      :::*   
#from another machine, use -H to connect to node1 and manage its Docker
[root@node2 ~]# docker -H tcp://10.0.0.10:2375 ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node2 ~]# docker -H tcp://10.0.0.10:2375  images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
xiaobai20201/httpd   v0.2                488c5ad2de0d        23 hours ago        1.15MB
xiaobai20201/httpd   v0.1-1              453488ef766a        23 hours ago        1.15MB
nginx               latest              dbfc48660aeb        6 weeks ago         109MB
busybox             latest              59788edf1f3e        8 weeks ago         1.15MB
nginx               1.14-alpine         14d4a58e0d2e        2 months ago        17.4MB
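Instead of passing -H on every invocation, the client can take the endpoint from the DOCKER_HOST environment variable (a minimal sketch; note that a plain TCP socket like this has no authentication, so restrict it to trusted networks):

[root@node2 ~]# export DOCKER_HOST="tcp://10.0.0.10:2375"
[root@node2 ~]# docker ps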
3) Creating a user-defined bridge network
[root@node1 ~]# docker network create -d bridge --subnet "172.26.0.0/24" --gateway "172.26.0.1" mybr0
0548c07a2face12cc1c0832651a19dc7866b74090fced409ee21c8d094a7ba44
[root@node1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
13e6f0fed458        bridge              bridge              local
abd9c40d7983        host                host                local
0548c07a2fac        mybr0               bridge              local
6aad0b2dd7bb        none                null                local
[root@node1 ~]# ifconfig
br-0548c07a2fac: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.26.0.1  netmask 255.255.255.0  broadcast 172.26.0.255
#to rename the bridge interface, bring it down first, rename it, then bring it back up
[root@node1 ~]# ifconfig br-0548c07a2fac down

[root@node1 ~]# ip link set br-0548c07a2fac name docker1
[root@node1 ~]# ip a
[root@node1 ~]# ifconfig docker1 up
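#alternatively (a sketch), the bridge driver can name the Linux bridge at creation time, avoiding the manual rename;
#this assumes the network is created fresh (remove the old one first with: docker network rm mybr0)
[root@node1 ~]# docker network create -d bridge -o com.docker.network.bridge.name=docker1 --subnet "172.26.0.0/24" --gateway "172.26.0.1" mybr0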
#test mybr0
[root@node1 ~]#  docker run --name t1 -it --network mybr0 busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:1A:00:02  
          inet addr:172.26.0.2  Bcast:172.26.0.255  Mask:255.255.255.0
#in a second terminal, create another container on the original bridge
[root@node1 ~]# docker run --name t2 -it --network bridge busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:01:02  
          inet addr:172.16.1.2  Bcast:172.16.1.255  Mask:255.255.255.0
#at this point t1 and t2 cannot communicate; the host must have IP forwarding enabled. Check it (1 = enabled):
[root@node1 ~]#  cat /proc/sys/net/ipv4/ip_forward
1
#even with forwarding on, communication fails because Docker automatically installs iptables rules that isolate containers on different bridges
[root@node1 ~]# iptables -vnL

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  br-0548c07a2fac !br-0548c07a2fac  0.0.0.0/0            0.0.0.0/0           
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0      
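Rather than editing these iptables rules by hand, the supported way to let t1 and t2 talk is to attach one container to the other's network with docker network connect; a minimal sketch:

[root@node1 ~]# docker network connect mybr0 t2
#t2 gains a second interface (eth1) in 172.26.0.0/24 and can now reach t1 directly
[root@node1 ~]# docker exec t2 ifconfig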