Docker cross-host networking: overlay and macvlan

Lab environment: uname -r: 3.10.0-1160.49.1.el7.x86_64; Docker: 18.06.3-ce

Cross-host networking options include:

Docker-native overlay and macvlan.
Third-party solutions: commonly flannel, weave, and calico.

 

libnetwork and CNM (Container Network Model): Sandbox, Endpoint, Network
===========================================================
libnetwork is Docker's container networking library.
The Container Network Model (CNM) abstracts container networking into three kinds of components:
    1.Sandbox
        A Sandbox is a container's network stack, containing the container's interfaces, routing table, and DNS settings. A Linux network namespace is the standard implementation of a Sandbox. A Sandbox can hold Endpoints from different Networks.
    2.Endpoint
        An Endpoint attaches a Sandbox to a Network. The typical implementation of an Endpoint is a veth pair, as the examples below show. An Endpoint belongs to exactly one Network and exactly one Sandbox.
    3.Network
        A Network contains a group of Endpoints, and Endpoints on the same Network can communicate with each other directly. A Network can be implemented as a Linux bridge, a VLAN, and so on.

        #Sandbox  -> the network protocol stack of the container's network namespace
        #Endpoint -> the veth pair connecting the bridge and the network namespace
        #Network  -> a virtual switching device, e.g. a Linux bridge
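These three mappings can be sketched directly with iproute2. A minimal, illustrative demo (all names are made up for the example; it requires root on a Linux host and is skipped otherwise):

```shell
# CNM component -> Linux primitive:
#   Sandbox -> network namespace, Endpoint -> veth pair, Network -> bridge
NS=cnm_sandbox; BR=cnm_net; EP_IN=cnm_ep0; EP_OUT=cnm_ep1

cnm_demo() {
    ip netns add "$NS"                                   # Sandbox
    ip link add "$EP_IN" type veth peer name "$EP_OUT"   # Endpoint (veth pair)
    ip link set "$EP_IN" netns "$NS"                     # one end into the Sandbox
    ip link add "$BR" type bridge                        # Network
    ip link set "$EP_OUT" master "$BR"                   # other end onto the bridge
    ip link set "$BR" up
    ip link set "$EP_OUT" up
    ip netns exec "$NS" ip addr add 10.99.0.2/24 dev "$EP_IN"
    ip netns exec "$NS" ip link set "$EP_IN" up
}
# Only attempt the setup with root privileges:
if [ "$(id -u)" = "0" ]; then cnm_demo || true; fi
```

Adding a second veth pair into another namespace and attaching it to the same bridge gives two Sandboxes on one Network that can ping each other, which is exactly what Docker automates.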

Docker Overlay Networks

 

Docker overlay networks; lab: building a Docker overlay network; VXLAN traffic path analysis; routing tables of the containers, hosts, and network namespace
===============================================================================================================================
Docker overlay networks:
    1.Docker creates a bridge network named "docker_gwbridge" that gives every container attached to an overlay network access to the outside world;
        this bridge provides NAT so containers can reach external networks.
    2.Every overlay network gets its own network namespace, containing a Linux bridge br0 and a VXLAN device;
        this namespace carries container-to-container traffic both within a host and across hosts on the overlay network.

From the container's point of view (interface names as in the ip a output further down):
    eth1 (veth pair)---(veth pair) Linux bridge docker_gwbridge (lives on the host itself; all user-defined overlay networks share this bridge, so containers can also ping each other across it)---NAT out to external networks
    eth0 (veth pair)---(veth pair) Linux bridge br0 (lives in the overlay's network namespace)---VXLAN device---cross-host overlay traffic.

------------------------------------------------------------------------------------------------------------------------------
Lab: building a Docker overlay network
        A Docker overlay network needs a key-value store to hold network state, including Networks, Endpoints, IPs and so on. Consul, etcd and ZooKeeper are all key-value stores Docker supports; we use Consul here.
Lab setup:
192.168.1.30 hosts the key-value store (Consul)
192.168.1.31/32 run the containers

###Preparing the lab environment
###Note: the --cluster-store and --cluster-advertise flags are deprecated, so install an older Docker release for this test: "yum -y install docker-ce-18.06.3.ce-3.el7"
###Installing an older Docker release: Centos7安装Docker CE https://www.cnblogs.com/adjk/p/14098232.html
###If pings fail, check the firewall, promiscuous mode on the NICs, etc.
1.Run a container providing the key-value store (Consul)
    docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap

    Once the container is up, browse to http://192.168.1.30:8500/
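Besides the browser, Consul can also be checked from any host over its HTTP API (the /v1/status/leader endpoint is part of Consul's standard API; the address is this lab's):

```shell
CONSUL=192.168.1.30:8500   # Consul address used in this lab
# A healthy single-server cluster reports itself as raft leader:
curl -s --max-time 2 "http://$CONSUL/v1/status/leader" || true
```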

2.On 192.168.1.31/32, edit the docker daemon unit file /usr/lib/systemd/system/docker.service.
    Add --cluster-store to point at the Consul address, and --cluster-advertise to tell Consul this host's own address.

    [root@host1 ~]# cat /usr/lib/systemd/system/docker.service |grep ExecStart
    #ExecStart=/usr/bin/dockerd     ###before
    ExecStart=/usr/bin/dockerd --cluster-store=consul://192.168.1.30:8500 --cluster-advertise=ens33:2376  ###after

   Reload the unit file and restart the docker daemon:
        systemctl daemon-reload && systemctl restart docker.service
      
###Create the overlay network and test it
3.Create the overlay network
        docker network create --driver overlay vxlan_net1       #run on one host; the other host picks the network up automatically

        docker network ls                       #vxlan_net1 has SCOPE global, while other networks, e.g. the default bridge, are local
        docker network inspect vxlan_net1       #"Subnet": "10.0.0.0/24", "Gateway": "10.0.0.1"

4.Run the containers
        docker run -it --network vxlan_net1 centos      #on host1
        docker run -it --network vxlan_net1 centos      #on host2

===========================================================================================================
VXLAN traffic path analysis
Container-to-container (same host):
          (container-side veth)  (bridge-side veth)
            9: eth0@if10-------------------10: veth0@if9
    container1 (IP 10.0.0.2/24)====================>br0 (Linux bridge)=================================>container2 (another container on the same host)
    route 10.0.0.0/24                        this bridge lives in the overlay's network namespace
    the frame carries container2's MAC directly

Container-to-container (cross-host):
          (container-side veth)  (bridge-side veth)
            9: eth0@if10-------------------10: veth0@if9        8: vxlan0@if8
    container1 (IP 10.0.0.2/24)====================>br0 (Linux bridge)===========......=======>host2 br0 (Linux bridge)==========>container2 (IP 10.0.0.3/24)
    route 10.0.0.0/24                        this bridge lives in the network namespace          this bridge lives in the network namespace
    the frame carries container2's MAC directly


Container to external networks:
                  (container-side veth)   (bridge-side veth)
                   12: eth1@if13-----------13: veth7675f0d@if12
    container1 (IP 172.17.0.2/16)====================>docker_gwbridge (Linux bridge)---------------host1 ens33 (physical NIC)==========>......
        default route                       the host's default route points at the host's gateway      SNAT is applied here
    traffic uses eth1's IP, because the container's default route leaves via eth1 into the host's network stack



[root@host1 ~]# tcpdump -nnvve -i ens33 -c10 host 192.168.1.32
tcpdump: listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
19:52:42.024708 00:0c:29:ae:d6:88 > 00:0c:29:8a:86:b4, ethertype IPv4 (0x0800), length 148: (tos 0x0, ttl 64, id 9451, offset 0, flags [none], proto UDP (17), length 134)
    192.168.1.31.34133 > 192.168.1.32.4789: [no cksum] VXLAN, flags [I] (0x08), vni 256
02:42:0a:00:00:02 > 02:42:0a:00:00:03, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 2226, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.0.2 > 10.0.0.3: ICMP echo request, id 29, seq 48, length 64
19:52:42.032193 00:0c:29:8a:86:b4 > 00:0c:29:ae:d6:88, ethertype IPv4 (0x0800), length 148: (tos 0x0, ttl 64, id 54095, offset 0, flags [none], proto UDP (17), length 134)
    192.168.1.32.60078 > 192.168.1.31.4789: [no cksum] VXLAN, flags [I] (0x08), vni 256
02:42:0a:00:00:03 > 02:42:0a:00:00:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 8381, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.0.3 > 10.0.0.2: ICMP echo reply, id 29, seq 48, length 64
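The capture is consistent with the VXLAN encapsulation arithmetic: each outer frame is exactly 50 bytes larger than the inner one (148 = 98 + 50), which is also why eth0 inside the containers has MTU 1450, and the inner MAC addresses are simply Docker's 02:42 prefix followed by the container IP in hex. A quick check:

```shell
# Per-packet VXLAN overhead: outer Ethernet + outer IPv4 + UDP + VXLAN header
eth=14; ipv4=20; udp=8; vxlan=8
overhead=$((eth + ipv4 + udp + vxlan))
echo "overhead:    $overhead bytes"          # 50
echo "overlay MTU: $((1500 - overhead))"     # 1450, matching eth0 in the containers
echo "outer frame: $((98 + overhead))"       # 148, matching the capture above

# Docker builds a container MAC as 02:42 followed by the 4 IP octets in hex:
ip_to_mac() { printf '02:42:%02x:%02x:%02x:%02x\n' $(echo "$1" | tr '.' ' '); }
ip_to_mac 10.0.0.2    # 02:42:0a:00:00:02, the inner source MAC in the capture
```

The "vni 256" in the capture is the VXLAN network identifier assigned to vxlan_net1; Docker hands out VNIs starting at 256 (0x100).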




===========================================================================================================
Routing tables and interfaces of the containers, hosts, and the overlay network namespace
#container1
    [root@0889571e43de /]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
           valid_lft forever preferred_lft forever
    12: eth1@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet 172.17.0.2/16 brd 172.17.255.255 scope global eth1
           valid_lft forever preferred_lft forever
    [root@0889571e43de /]# ip r s
    default via 172.17.0.1 dev eth1
    10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.2
    172.17.0.0/16 dev eth1 proto kernel scope link src 172.17.0.2
#container2
    [root@d8c1a5e06163 /]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
           valid_lft forever preferred_lft forever
    12: eth1@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet 172.17.0.2/16 brd 172.17.255.255 scope global eth1
           valid_lft forever preferred_lft forever
    [root@d8c1a5e06163 /]# ip r s
    default via 172.17.0.1 dev eth1
    10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.3
    172.17.0.0/16 dev eth1 proto kernel scope link src 172.17.0.2

#network namespace
    [root@host1 ~]# ip netns exec 1-d23f0e1d85 ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether 3a:24:e8:3c:7e:cb brd ff:ff:ff:ff:ff:ff
        inet 10.0.0.1/24 brd 10.0.0.255 scope global br0
           valid_lft forever preferred_lft forever
    8: vxlan0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN group default
        link/ether 96:66:dd:59:f7:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    10: veth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UP group default
        link/ether 3a:24:e8:3c:7e:cb brd ff:ff:ff:ff:ff:ff link-netnsid 1
    [root@host1 ~]# ip netns exec 1-d23f0e1d85 ip r s
    10.0.0.0/24 dev br0 proto kernel scope link src 10.0.0.1

    [root@host1 ~]# ip netns exec 1-d23f0e1d85 ip  -d link show vxlan0
    8: vxlan0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default
        link/ether 96:66:dd:59:f7:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
        vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx
        bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.3a:24:e8:3c:7e:cb designated_root 8000.3a:24:e8:3c:7e:cb hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535


#host1
    [root@host1 ~]# ip a
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
        link/ether 00:0c:29:ae:d6:88 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.31/24 brd 192.168.1.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:feae:d688/64 scope link
           valid_lft forever preferred_lft forever
    6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:bd:63:8b:ac brd ff:ff:ff:ff:ff:ff
        inet 10.3.19.1/24 brd 10.3.19.255 scope global docker0
           valid_lft forever preferred_lft forever
    11: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:60:cf:3f:0a brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker_gwbridge
           valid_lft forever preferred_lft forever
        inet6 fe80::42:60ff:fecf:3f0a/64 scope link
           valid_lft forever preferred_lft forever
    13: veth7675f0d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
        link/ether e2:5b:bc:f2:88:11 brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet6 fe80::e05b:bcff:fef2:8811/64 scope link
           valid_lft forever preferred_lft forever
    [root@host1 ~]# ip r s
    default via 192.168.1.2 dev ens33 proto static metric 100
    10.3.19.0/24 dev docker0 proto kernel scope link src 10.3.19.1
    172.16.5.208/29 dev ens36 proto kernel scope link src 172.16.5.212 metric 101
    172.17.0.0/16 dev docker_gwbridge proto kernel scope link src 172.17.0.1
    192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.31 metric 100
#host2
    [root@host2 ~]# ip a
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
        link/ether 00:0c:29:8a:86:b4 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.32/24 brd 192.168.1.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe8a:86b4/64 scope link
           valid_lft forever preferred_lft forever
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:0d:8e:ca:ec brd ff:ff:ff:ff:ff:ff
        inet 10.3.84.1/24 brd 10.3.84.255 scope global docker0
           valid_lft forever preferred_lft forever
    11: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:e8:ec:ef:f1 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker_gwbridge
           valid_lft forever preferred_lft forever
        inet6 fe80::42:e8ff:feec:eff1/64 scope link
           valid_lft forever preferred_lft forever
    13: vethe18d522@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
        link/ether 22:54:cb:d6:f9:40 brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet6 fe80::2054:cbff:fed6:f940/64 scope link
           valid_lft forever preferred_lft forever
    [root@host2 ~]# ip r s
    default via 192.168.1.2 dev ens33 proto static metric 100
    10.3.84.0/24 dev docker0 proto kernel scope link src 10.3.84.1
    172.16.5.208/29 dev ens36 proto kernel scope link src 172.16.5.210 metric 101
    172.17.0.0/16 dev docker_gwbridge proto kernel scope link src 172.17.0.1
    192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.32 metric 100

Docker creates a dedicated network namespace for every overlay network.
You can inspect them with ip netns (make sure you have run ln -s /var/run/docker/netns /var/run/netns beforehand)

Docker Macvlan Networks

Docker macvlan networks; lab: macvlan on a physical NIC; lab: macvlan on a VLAN device
==================================================================================================================
Docker macvlan networks
    macvlan is itself a Linux kernel module; it allows multiple MAC addresses, i.e. multiple interfaces, to be configured on a single physical NIC, each with its own IP.
    macvlan is essentially a NIC-virtualization technique.
    macvlan's biggest advantage is performance: unlike the other implementations it does not depend on a Linux bridge, attaching directly to the physical network through an Ethernet interface.

------------------------------------------------------------------------------------------
Preparing the lab environment

    The NIC must be put into promiscuous mode (only then does it keep, rather than drop, frames whose destination MAC is not its own)
    ip link set ens33 promisc on

Create the macvlan network (on a physical NIC)
    docker network create --driver macvlan --subnet 172.18.18.0/24 --gateway 172.18.18.1 -o parent=ens33 macvlan_net1
        #each macvlan network claims one NIC as its parent and virtualizes on top of it;
        #a macvlan network can also take a VLAN device as its parent!
        #creating a macvlan network is not synced to other hosts, so it must be created on every host, and container IPs must be planned manually.
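Since nothing is synced between hosts, the same creation steps have to be repeated on each one. A per-host sketch using this lab's names and addresses (guarded so it is a no-op where docker is unavailable):

```shell
NET=macvlan_net1; PARENT=ens33          # names from this lab
SUBNET=172.18.18.0/24; GW=172.18.18.1

setup_macvlan() {
    ip link set "$PARENT" promisc on    # macvlan parents need promiscuous mode
    docker network create --driver macvlan \
        --subnet "$SUBNET" --gateway "$GW" -o parent="$PARENT" "$NET"
}
# Run the identical function on host1 and on host2:
if command -v docker >/dev/null 2>&1; then setup_macvlan || true; fi
```

Container IPs then have to be assigned by hand with --ip, carving non-overlapping ranges out of the subnet per host, because there is no shared IPAM across hosts.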
        
Run the containers
    docker run -it --name macvlan_container1 --ip 172.18.18.212 --network macvlan_net1 centos
    docker run -it --name macvlan_container2 --ip 172.18.18.210 --network macvlan_net1 centos

Test
    ###ping works; but unlike overlay networks, macvlan networks do not resolve container names automatically.
    [root@a4316b0809ec /]#  ping 172.18.18.212 -c1        
    PING 172.18.18.212 (172.18.18.212) 56(84) bytes of data.
    64 bytes from 172.18.18.212: icmp_seq=1 ttl=64 time=0.858 ms

    --- 172.18.18.212 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.858/0.858/0.858/0.000 ms

    [root@a4316b0809ec /]# ip a
    8: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default    #the parent of this virtual NIC is the physical NIC ens33
        link/ether 02:42:ac:12:12:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 172.18.18.210/24 brd 172.18.18.255 scope global eth0
           valid_lft forever preferred_lft forever

----------------------------------------------------------------------------------------------------
Some data from the macvlan network on the physical NIC
#container1
[root@531585f74e32 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:12:d4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.18.212/24 brd 172.18.18.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@531585f74e32 /]# ip r s
default via 172.18.18.1 dev eth0
172.18.18.0/24 dev eth0 proto kernel scope link src 172.18.18.212

#container2
[root@e38aa01af601 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:12:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.18.210/24 brd 172.18.18.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@e38aa01af601 /]# ip r s
default via 172.18.18.1 dev eth0
172.18.18.0/24 dev eth0 proto kernel scope link src 172.18.18.210

#host1
[root@host1 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 00:0c:29:ae:d6:88 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.31/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feae:d688/64 scope link
       valid_lft forever preferred_lft forever
[root@host1 ~]# ip r s
default via 192.168.1.2 dev ens33 proto static metric 100
10.3.19.0/24 dev docker0 proto kernel scope link src 10.3.19.1
172.16.5.208/29 dev ens36 proto kernel scope link src 172.16.5.212 metric 101
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.31 metric 100
#host2
[root@host2 ~]#  ip a show ens33
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 00:0c:29:8a:86:b4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.32/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8a:86b4/64 scope link
       valid_lft forever preferred_lft forever
[root@host2 ~]# ip r s
default via 192.168.1.2 dev ens33 proto static metric 100
10.3.84.0/24 dev docker0 proto kernel scope link src 10.3.84.1
172.16.5.208/29 dev ens36 proto kernel scope link src 172.16.5.210 metric 101
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.32 metric 100

================================================================================================
Create the macvlan network (on a VLAN device)
    ip link add link ens33 name ens33.30 type vlan id 30
    ip link set dev ens33.30 up
    docker network create --driver macvlan --subnet 172.18.20.0/24 --gateway 172.18.20.1 -o parent=ens33.30 macvlan_net2
    #create a VLAN device, then create the Docker macvlan network with that VLAN device as the parent NIC

Run the containers
    docker run -it --name macvlan_container11 --ip 172.18.20.212 --network macvlan_net2 centos
    docker run -it --name macvlan_container22 --ip 172.18.20.210 --network macvlan_net2 centos

Test:
    #ping works, and the frames carry VLAN tag 30
    [root@f7875de33356 /]# ping 172.18.20.210 -c1
    PING 172.18.20.210 (172.18.20.210) 56(84) bytes of data.
    64 bytes from 172.18.20.210: icmp_seq=1 ttl=64 time=1.35 ms

    --- 172.18.20.210 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 1.347/1.347/1.347/0.000 ms
    [root@f7875de33356 /]# ip a show eth0    #the parent interface's index shows as if9
    10: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:ac:12:14:d4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 172.18.20.212/24 brd 172.18.20.255 scope global eth0
           valid_lft forever preferred_lft forever

    [root@host1 ~]# ip a show ens33.30    #index 9 is the VLAN device ens33.30
    9: ens33.30@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:0c:29:ae:d6:88 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::20c:29ff:feae:d688/64 scope link
           valid_lft forever preferred_lft forever

 
