Docker third-party cross-host networking: flannel, calico, weave

 flannel

flannel overview; summary of the flannel vxlan and flannel host-gw network characteristics
====================================================================
flannel is a container networking solution developed by CoreOS. flannel assigns a subnet to each host, and containers get their IPs from that subnet. These IPs are routable between hosts, so containers can communicate across hosts without NAT or port mapping.
Each subnet is carved out of a larger IP pool. flannel runs an agent called flanneld on every host whose job is to allocate a subnet from that pool. To share information between hosts, flannel stores the network configuration, the allocated subnets, the host IPs and so on in etcd (a distributed key-value store similar to consul).

How packets are forwarded between hosts is implemented by a backend. flannel provides several backends; the most commonly used are vxlan and host-gw.
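Everything flanneld knows is visible in etcd: the network config lives at the configured prefix, and each host's subnet lease is registered under a subnets/ sub-key. A sketch of how to peek at it with the etcd v2 API (the key layout under subnets/ is an assumption based on flannel's etcd registry; the /docker-test/network prefix and the 10.4.87.0/24 lease come from the experiments below):
    etcdctl --endpoints=192.168.1.30:2379 get /docker-test/network/config
    etcdctl --endpoints=192.168.1.30:2379 ls  /docker-test/network/subnets
    #each lease value records the host's public IP and backend data (for vxlan, the VTEP MAC of flannel.1)
    etcdctl --endpoints=192.168.1.30:2379 get /docker-test/network/subnets/10.4.87.0-24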


My own summary of flannel network characteristics:
A flannel vxlan network is still essentially docker's default bridge network, with a few customizations layered on top:
    1. flannel dictates the bridge network's subnet; this is shown when flanneld starts
    2. a vxlan device (flannel.1) is added
    3. when flanneld starts, routes for the flannel network (the other hosts' subnets) are added dynamically, pointing at the flannel.1 device
    4. only the default bridge network can be used
    5. traffic between hosts travels through a vxlan tunnel
    6. traffic flow: routed out of the container into the host network stack; the host route via flannel.1 triggers vxlan encapsulation, and the packet is then sent to the target host through the vxlan tunnel

A flannel host-gw network is also based on docker's default bridge network, again with a few customizations:
    1. flannel dictates the bridge network's subnet; this is shown when flanneld starts
    2. when flanneld starts, routes for the other hosts' docker subnets are added dynamically, pointing at those hosts
    3. only the default bridge network can be used
    4. traffic flow: routed out of the container into the host network stack, then forwarded to the target host via the host's subnet route (with SNAT applied)
    5. unlike vxlan mode, there is no vxlan device
    6. the forwarding logic differs: in both modes the default route leads to docker0; vxlan mode then uses the route pointing at flannel.1 and goes through the vxlan tunnel, while host-gw mode uses a subnet route and is carried directly by the physical network (with SNAT); see the verification sketch after this list
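A quick way to tell the two modes apart on a host (a sketch using the device and route names observed in the experiments below):
    #vxlan mode: a flannel.1 vxlan device exists and remote subnets are routed through it
    ip -d link show flannel.1              #shows "vxlan id 1 ... dstport 8472"
    ip route                               #e.g. 10.4.13.0/24 via 10.4.13.0 dev flannel.1 onlink
    bridge fdb show dev flannel.1          #static VTEP entries programmed by flanneld

    #host-gw mode: no flannel.1 device; remote subnets are routed via the other host's physical IP
    ip route                               #e.g. 10.3.84.0/24 via 192.168.1.32 dev ens33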
    

 

Preparation before configuring the flannel network: download and install etcd; install flannel
============================================================================================
#Download and install etcd
    ETCD_VER=v2.3.7
    DOWNLOAD_URL=https://github.com/coreos/etcd/releases/download
    curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
    mkdir -p /tmp/test-etcd && tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/test-etcd --strip-components=1
    cp /tmp/test-etcd/etcd* /usr/local/bin/


#Start etcd and open the 2379 client listening port
    etcd -listen-client-urls http://192.168.1.30:2379 -advertise-client-urls http://192.168.1.30:2379
    etcdctl --endpoints=192.168.1.30:2379 set foo "bar"   #test
    etcdctl --endpoints=192.168.1.30:2379 get foo

----------------------------------------------------------------------------------------------
Install flannel

Build flannel
flannel does not provide a ready-to-use binary, so it has to be built; the most reliable way is to build it inside a Docker container.
    1. Pull and re-tag the build image.
        docker pull cloudman6/kube-cross:v1.6.2-2
        docker tag cloudman6/kube-cross:v1.6.2-2 gcr.io/google_containers/kube-cross:v1.6.2-2
    2. Download the flannel source code.
        git clone https://github.com/coreos/flannel.git
    3. Start the build.
        cd flannel
        make dist/flanneld-amd64                   #once this step finishes, the build is done
    4. Copy the flanneld binary to host1 and host2.
        scp dist/flanneld-amd64 192.168.56.104:/usr/local/bin/flanneld
        scp dist/flanneld-amd64 192.168.56.105:/usr/local/bin/flanneld
Experiment: flannel vxlan mode; traffic path analysis for vxlan mode; container and host routing tables, etc.
==============================================================================================
0. Before the experiment, confirm that etcd is running on the manager node and that flannel is installed on the hosts
#Remember to stop the firewall (systemctl stop firewalld.service); otherwise the test fails with "no route to host"

1. Define a custom flannel vxlan network and upload it to etcd
    Create the json file flannel-config.json
        [root@manager ~]# cat flannel-config.json
        {
          "Network": "10.2.0.0/16",                #Network defines the network's IP pool as 10.2.0.0/16.
          "SubnetLen": 24,                        #SubnetLen sets the per-host subnet size to 24 bits, i.e. 10.2.X.0/24.
          "Backend": {
            "Type": "vxlan"                        #Backend is vxlan, i.e. hosts communicate over vxlan
          }
        }

    Store the config in etcd
        etcdctl --endpoints=192.168.1.30:2379 set /docker-test/network/config < flannel-config.json

2. Start flanneld on host1 and host2
    flanneld -etcd-endpoints=http://192.168.1.30:2379 -iface=ens33 -etcd-prefix=/docker-test/network
        -etcd-endpoints specifies the etcd url.
        -iface specifies the interface used for data transfer between hosts.
        -etcd-prefix specifies the etcd key under which the flannel network config is stored.

3. Configure Docker to use flannel
    Edit the docker unit file /usr/lib/systemd/system/docker.service on host1/host2
    [root@host2 ~]# cat /usr/lib/systemd/system/docker.service |grep ExecStart
    #ExecStart=/usr/bin/dockerd
    ExecStart=/usr/bin/dockerd --bip=10.2.15.1/24 --mtu=1450
        #take the --bip and --mtu values from /run/flannel/subnet.env

    systemctl daemon-reload  && systemctl restart docker.service
    #after the docker daemon restarts, the docker0 interface uses the assigned IP 10.2.15.1/24
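Rather than copying the numbers by hand, the --bip/--mtu values can be taken from the file flanneld writes. A minimal sketch, assuming the usual FLANNEL_SUBNET / FLANNEL_MTU variable names inside /run/flannel/subnet.env and systemd EnvironmentFile expansion:
    #/run/flannel/subnet.env written by flanneld typically looks like (values matching the host1 lease below):
    #    FLANNEL_NETWORK=10.4.0.0/16
    #    FLANNEL_SUBNET=10.4.87.1/24
    #    FLANNEL_MTU=1450
    #    FLANNEL_IPMASQ=false
    #docker.service can then reference it instead of hard-coded values:
    #    EnvironmentFile=/run/flannel/subnet.env
    #    ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
    systemctl daemon-reload && systemctl restart docker.service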


4. Start containers and test
    docker run -it centos
    ping xxxxxx
    
===============================================================================================
Traffic path analysis
Container-to-container (same host):
          (container-side veth pair)   (bridge-side veth pair)        (bridge-side veth pair)     (container-side veth pair)
            11: eth0@if12-----------12: vetha1ae6e8@if11         15: vethe520c54@if14----------14: eth0@if15
    container1(IP:10.2.32.2/24)====================>docker0(linux bridge)=================================>container2(IP:)
    route 10.2.32.0/24
    destination mac: container2's mac

Container-to-container (across hosts):
          (container-side veth pair)   (bridge-side veth pair)
            11: eth0@if12-----------12: vetha1ae6e8@if11
container1(IP:10.2.32.2/24)====================>docker0(linux bridge)------------------flannel.1====================>host1 ens33(physical NIC)====================>host2 ens33(physical NIC)==========>......
        default route                               host route points at flannel.1                                                               vxlan tunnel
    destination mac: gateway mac (docker0 mac)       enters the host network stack       source mac: flannel.1's mac; destination mac not captured here       encapsulated as a vxlan packet
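The inner destination MAC that was not captured above can be looked up on the sending host: for the vxlan backend flanneld pre-programs static ARP and FDB entries for the remote flannel.1 devices, so it should be the remote host's flannel.1 MAC. A verification sketch, commands only:
    ip neigh show dev flannel.1        #ARP entries: remote subnet gateway (e.g. 10.4.13.0) -> remote flannel.1 MAC
    bridge fdb show dev flannel.1      #FDB entries: remote flannel.1 MAC -> remote host (VTEP) IP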


Container access to external networks:
          (container-side veth pair)   (bridge-side veth pair)
            11: eth0@if12-----------12: vetha1ae6e8@if11
container1(IP:10.2.32.2/24)====================>docker0(linux bridge)---------------host1 ens33(physical NIC)==========>......
        default route                               host default route points at the host gateway
    destination mac: gateway mac (docker0 mac)       enters the host network stack
    
========================================================================================================================================
####Comparison of flannel's vxlan device flannel.1 with a manually created vxlan device vxlan111
    [root@host1 ~]# ip -d link show flannel.1
    7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
        link/ether ee:a2:15:78:6d:8c brd ff:ff:ff:ff:ff:ff promiscuity 0
        vxlan id 1 local 192.168.1.31 dev ens33 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    [root@host2 ~]# ip -d link show vxlan111
    11: vxlan111: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/ether 06:6d:e5:0f:9f:14 brd ff:ff:ff:ff:ff:ff promiscuity 0
        vxlan id 111 remote 192.168.1.31 dev ens33 srcport 0 0 dstport 4789 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

    [root@host1 ~]# ip a show docker0
    6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether 02:42:3b:4e:e9:20 brd ff:ff:ff:ff:ff:ff
        inet 10.4.87.1/24 brd 10.4.87.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:3bff:fe4e:e920/64 scope link
           valid_lft forever preferred_lft forever
    [root@host1 ~]# ip a show flannel.1
    7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
        link/ether ee:a2:15:78:6d:8c brd ff:ff:ff:ff:ff:ff
        inet 10.4.87.0/32 brd 10.4.87.0 scope global flannel.1
           valid_lft forever preferred_lft forever
        inet6 fe80::eca2:15ff:fe78:6d8c/64 scope link
           valid_lft forever preferred_lft forever
    [root@host2 ~]# ip a show docker0
    6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether 02:42:bb:51:57:14 brd ff:ff:ff:ff:ff:ff
        inet 10.4.13.1/24 brd 10.4.13.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:bbff:fe51:5714/64 scope link
           valid_lft forever preferred_lft forever
    [root@host2 ~]# ip a show flannel.1
    7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
        link/ether 56:07:52:93:fa:fd brd ff:ff:ff:ff:ff:ff
        inet 10.4.13.0/32 brd 10.4.13.0 scope global flannel.1
           valid_lft forever preferred_lft forever
        inet6 fe80::5407:52ff:fe93:fafd/64 scope link
           valid_lft forever preferred_lft forever


#container1 routes
    [root@b8b60ef0ca6c /]# ip r s
    default via 10.4.87.1 dev eth0
    10.4.87.0/24 dev eth0 proto kernel scope link src 10.4.87.2
#container2 routes
    [root@b6c069cdea49 /]# ip r s
    default via 10.4.13.1 dev eth0
    10.4.13.0/24 dev eth0 proto kernel scope link src 10.4.13.2

#host1 routes
    [root@host1 ~]# ip r s
    default via 192.168.1.2 dev ens33 proto static metric 100
    10.4.13.0/24 via 10.4.13.0 dev flannel.1 onlink
    10.4.87.0/24 dev docker0 proto kernel scope link src 10.4.87.1
    192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.31 metric 100
#host2 routes
    [root@host2 ~]# ip r s
    default via 192.168.1.2 dev ens33 proto static metric 102
    10.4.13.0/24 dev docker0 proto kernel scope link src 10.4.13.1
    10.4.87.0/24 via 10.4.87.0 dev flannel.1 onlink
    192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.32 metric 102


#View the network config stored in etcd
    [root@manager ~]# etcdctl --endpoints=192.168.1.30:2379 get /docker-test/network/config
    {
      "Network": "10.4.0.0/16",
      "SubnetLen": 24,
      "Backend": {
        "Type": "vxlan"
      }
    }

#flanneld startup log
###the output shows that host1 was allocated 10.4.87.0/24 and host2 10.4.13.0/24
    [root@host1 ~]# I1226 15:39:57.705508    2977 main.go:218] CLI flags config: {etcdEndpoints:http://192.168.1.30:2379 etcdPrefix:/docker-test/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: help:false version:false autoDetectIPv4:false autoDetectIPv6:false kubeSubnetMgr:false kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[ens33] ifaceRegex:[] ipMasq:false subnetFile:/run/flannel/subnet.env subnetDir: publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 charonExecutablePath: charonViciUri: iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
    I1226 15:39:57.705878    2977 main.go:238] Created subnet manager: Etcd Local Manager with Previous Subnet: None
    I1226 15:39:57.705891    2977 main.go:241] Installing signal handlers
    I1226 15:39:57.729752    2977 main.go:460] Found network config - Backend type: vxlan
    I1226 15:39:57.731085    2977 main.go:699] Using interface with name ens33 and address 192.168.1.31
    I1226 15:39:57.731132    2977 main.go:721] Defaulting external address to interface address (192.168.1.31)
    I1226 15:39:57.731140    2977 main.go:734] Defaulting external v6 address to interface address (<nil>)
    I1226 15:39:57.731210    2977 vxlan.go:137] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
    I1226 15:39:57.768570    2977 local_manager.go:150] Found lease (10.4.87.0/24) for current IP (192.168.1.31), reusing
    I1226 15:39:57.770878    2977 main.go:362] Changing default FORWARD chain policy to ACCEPT
    I1226 15:39:57.771556    2977 main.go:375] Wrote subnet file to /run/flannel/subnet.env
    I1226 15:39:57.771569    2977 main.go:379] Running backend.
    I1226 15:39:57.784350    2977 vxlan_network.go:60] watching for new subnet leases
    I1226 15:39:57.787639    2977 iptables.go:216] Some iptables rules are missing; deleting and recreating rules
    I1226 15:39:57.787652    2977 iptables.go:240] Deleting iptables rule: -s 10.4.0.0/16 -j ACCEPT
    I1226 15:39:57.789266    2977 iptables.go:240] Deleting iptables rule: -d 10.4.0.0/16 -j ACCEPT
    I1226 15:39:57.790804    2977 iptables.go:228] Adding iptables rule: -s 10.4.0.0/16 -j ACCEPT
    I1226 15:39:57.806425    2977 main.go:507] Waiting for 23h0m0.437993303s to renew lease
    I1226 15:39:57.806609    2977 iptables.go:228] Adding iptables rule: -d 10.4.0.0/16 -j ACCEPT
Experiment: flannel host-gw mode; traffic path analysis for host-gw mode; container and host routing tables, etc.
=======================================================================
0. Before the experiment, confirm that etcd is running on the manager node and that flannel is installed on the hosts
#Remember to stop the firewall (systemctl stop firewalld.service); otherwise the test fails with "no route to host"

1. Define the flannel host-gw network and upload it to etcd
    Create the json file flannel-config.json
        [root@manager ~]# cat flannel-config.json
        {
          "Network": "10.3.0.0/16",                #Network defines the network's IP pool as 10.3.0.0/16.
          "SubnetLen": 24,                        #SubnetLen sets the per-host subnet size to 24 bits, i.e. 10.3.X.0/24.
          "Backend": {
            "Type": "host-gw"                        #Backend is host-gw, i.e. container traffic between hosts is carried directly by the physical network, without vxlan
          }
        }

    Store the config in etcd
        etcdctl --endpoints=192.168.1.30:2379 set /docker-test/network/config < flannel-config.json

2. Start flanneld on host1 and host2
    flanneld -etcd-endpoints=http://192.168.1.30:2379 -iface=ens33 -etcd-prefix=/docker-test/network
        -etcd-endpoints specifies the etcd url.
        -iface specifies the interface used for data transfer between hosts.
        -etcd-prefix specifies the etcd key under which the flannel network config is stored.

3. Configure Docker to use flannel
    Edit the docker unit file /usr/lib/systemd/system/docker.service
        [root@host2 ~]# cat /usr/lib/systemd/system/docker.service |grep ExecStart
        #ExecStart=/usr/bin/dockerd
        ExecStart=/usr/bin/dockerd --bip=10.3.28.1/24 --mtu=1500
        #take the --bip and --mtu values from /run/flannel/subnet.env

    systemctl daemon-reload  && systemctl restart docker.service
        #after the docker daemon restarts, the docker0 interface uses the assigned IP 10.3.28.1/24

4. Start containers and test
    docker run -it centos
    ping xxxxxx

=======================================================================
Traffic path analysis
Container-to-container (same host):
          (container-side veth pair)   (bridge-side veth pair)        (bridge-side veth pair)     (container-side veth pair)
            11: eth0@if12-----------12: vetha1ae6e8@if11         15: vethe520c54@if14----------14: eth0@if15
    container1(IP:10.2.32.2/24)====================>docker0(linux bridge)=================================>container2(IP:)
    route 10.2.32.0/24
    destination mac: container2's mac

Container-to-container (across hosts):
          (container-side veth pair)   (bridge-side veth pair)
            11: eth0@if12-----------12: vetha1ae6e8@if11
container1(IP:10.2.32.2/24)====================>docker0(linux bridge)------------------host1 ens33(physical NIC)====================>host2 ens33(physical NIC)==========>......
        default route                                                                            subnet route
    destination mac: gateway mac (docker0 mac)       enters the host network stack       source mac: ens33's mac; destination mac not captured here (SNAT applied)
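Since host-gw adds no encapsulation, a capture on the physical NIC should show the ICMP packets natively, with no vxlan/UDP header around them (a quick verification sketch, interface names as in this experiment):
    tcpdump -e -n -i ens33 icmp            #the ping shows up directly, framed with the two hosts' ens33 MACs
    tcpdump -n -i ens33 udp port 8472      #stays silent in host-gw mode (8472 is the port flannel vxlan uses)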


Container access to external networks:
          (container-side veth pair)   (bridge-side veth pair)
            11: eth0@if12-----------12: vetha1ae6e8@if11
container1(IP:10.2.32.2/24)====================>docker0(linux bridge)---------------host1 ens33(physical NIC)==========>......
        default route                               host default route points at the host gateway
    destination mac: gateway mac (docker0 mac)       enters the host network stack
    

=======================================================================
[root@05008918ccb5 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:0a:03:1c:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.3.28.2/24 brd 10.3.28.255 scope global eth0
       valid_lft forever preferred_lft forever

#container1 routes
[root@05008918ccb5 /]# ip r s
default via 10.3.28.1 dev eth0
10.3.28.0/24 dev eth0 proto kernel scope link src 10.3.28.2
#host routes
[root@host1 ~]# ip r s
default via 192.168.1.2 dev ens33 proto static metric 100
10.3.28.0/24 dev docker0 proto kernel scope link src 10.3.28.1
10.3.65.0/24 via 192.168.1.32 dev ens33
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.31 metric 100

========================================================================================
Data from another run of the experiment
#host1
    [root@host1 ~]# ip a show docker0
    6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:48:42:08:9b brd ff:ff:ff:ff:ff:ff
        inet 10.3.19.1/24 brd 10.3.19.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:48ff:fe42:89b/64 scope link
           valid_lft forever preferred_lft forever
#host2
    [root@host2 ~]# ip a show docker0
    6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:44:17:d7:2d brd ff:ff:ff:ff:ff:ff
        inet 10.3.84.1/24 brd 10.3.84.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:44ff:fe17:d72d/64 scope link
           valid_lft forever preferred_lft forever
#host1 routes
    [root@host1 ~]# ip r s
    default via 192.168.1.2 dev ens33 proto static metric 100
    10.3.19.0/24 dev docker0 proto kernel scope link src 10.3.19.1
    10.3.84.0/24 via 192.168.1.32 dev ens33
    192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.31 metric 100
#host2 routes
    [root@host2 ~]#  ip r s
    default via 192.168.1.2 dev ens33 proto static metric 100
    10.3.19.0/24 via 192.168.1.31 dev ens33
    10.3.84.0/24 dev docker0 proto kernel scope link src 10.3.84.1
    192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.32 metric 100


#container1 routes
    [root@d40a15899658 /]# ip r s
    default via 10.3.19.1 dev eth0
    10.3.19.0/24 dev eth0 proto kernel scope link src 10.3.19.2
#container2 routes
    [root@183671a15376 /]# ip r s
    default via 10.3.84.1 dev eth0
    10.3.84.0/24 dev eth0 proto kernel scope link src 10.3.84.2

#View the network config stored in etcd
[root@manager ~]#  etcdctl --endpoints=192.168.1.30:2379 get /docker-test/network/config
{
  "Network": "10.3.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "host-gw"
  }
}


#flanneld startup log
###the output shows that host1 was allocated 10.3.19.0/24 and host2 10.3.84.0/24
[root@host1 ~]# flanneld -etcd-endpoints=http://192.168.1.30:2379 -iface=ens33 -etcd-prefix=/docker-test/network &
[1] 2641
[root@host1 ~]# I1226 16:07:13.196315    2641 main.go:218] CLI flags config: {etcdEndpoints:http://192.168.1.30:2379 etcdPrefix:/docker-test/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: help:false version:false autoDetectIPv4:false autoDetectIPv6:false kubeSubnetMgr:false kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[ens33] ifaceRegex:[] ipMasq:false subnetFile:/run/flannel/subnet.env subnetDir: publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 charonExecutablePath: charonViciUri: iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
I1226 16:07:13.196859    2641 main.go:238] Created subnet manager: Etcd Local Manager with Previous Subnet: None
I1226 16:07:13.196872    2641 main.go:241] Installing signal handlers
I1226 16:07:13.202567    2641 main.go:460] Found network config - Backend type: host-gw
I1226 16:07:13.203565    2641 main.go:699] Using interface with name ens33 and address 192.168.1.31
I1226 16:07:13.203583    2641 main.go:721] Defaulting external address to interface address (192.168.1.31)
I1226 16:07:13.203590    2641 main.go:734] Defaulting external v6 address to interface address (<nil>)
I1226 16:07:13.206059    2641 local_manager.go:166] Found lease (10.5.74.0/24) for current IP (192.168.1.31) but not compatible with current config, deleting
I1226 16:07:13.207279    2641 local_manager.go:237] Picking subnet in range 10.3.1.0 ... 10.3.255.0
I1226 16:07:13.208550    2641 local_manager.go:223] Allocated lease (10.3.19.0/24) to current node (192.168.1.31)
I1226 16:07:13.208599    2641 main.go:362] Changing default FORWARD chain policy to ACCEPT
I1226 16:07:13.209244    2641 main.go:375] Wrote subnet file to /run/flannel/subnet.env
I1226 16:07:13.209255    2641 main.go:379] Running backend.
I1226 16:07:13.215356    2641 iptables.go:216] Some iptables rules are missing; deleting and recreating rules
I1226 16:07:13.215374    2641 iptables.go:240] Deleting iptables rule: -s 10.3.0.0/16 -j ACCEPT
I1226 16:07:13.216681    2641 iptables.go:240] Deleting iptables rule: -d 10.3.0.0/16 -j ACCEPT
I1226 16:07:13.217955    2641 iptables.go:228] Adding iptables rule: -s 10.3.0.0/16 -j ACCEPT
I1226 16:07:13.223026    2641 route_network.go:54] Watching for new subnet leases
I1226 16:07:13.223099    2641 iptables.go:228] Adding iptables rule: -d 10.3.0.0/16 -j ACCEPT
I1226 16:07:13.228036    2641 main.go:507] Waiting for 22h59m59.789209067s to renew lease
I1226 16:07:13.230955    2641 route_network.go:93] Subnet added: 10.3.84.0/24 via 192.168.1.32

Calico

Calico overview; analysis of Calico network characteristics
=========================================================================================================================
Calico is a pure layer-3 virtual networking solution. Calico assigns an IP to every container, and every host acts as a router, connecting the containers on different hosts. Unlike VxLAN, Calico adds no extra encapsulation to packets and needs no NAT or port mapping, so it scales and performs well.
Compared with other container networking solutions, Calico has one more major advantage: network policy.
Calico relies on etcd to share and exchange information between hosts and to store the Calico network state.

------------------------------------------------------------------------------------------------------------------------
Analysis of Calico network characteristics:
    1. Each host is assigned a /26 subnet together with a blackhole route for that subnet; routes are propagated over BGP by bird
    2. A host reaches its local containers via host (/32) routes
    3. A host reaches containers on other hosts via the BGP routes propagated by bird
    4. Unlike flannel host-gw mode, no SNAT is applied when the traffic hits the physical network
    5. The SCOPE of a calico network is global
    6. On the same host, different calico networks are isolated from each other, yet they allocate IPs from the same /26 subnet
    7. A Calico container has a single NIC and no linux bridge is involved; the host acts as the router, and a host route pointing at the corresponding veth forwards packets into the container (see the sketch after this list)
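Inside the container the default gateway is the link-local address 169.254.1.1 (see the container routes below). Calico typically answers the ARP request for that address from the host-side cali* veth via proxy ARP, which is how the packet reaches the host's routing table. A sketch of how to check this, using the interface names from this experiment (the proxy_arp detail is stated as general Calico behaviour, not captured in the original experiment):
    #on the host: proxy_arp should be enabled on the cali interface
    cat /proc/sys/net/ipv4/conf/cali0fc27e0dc79/proxy_arp        #expect 1
    #inside the container: 169.254.1.1 should resolve to the host-side veth's MAC
    ip neigh show | grep 169.254.1.1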

 

Experiment: Calico network; traffic path analysis; container and host routing tables, etc.
===========================================================================================
#Before the experiment remember to stop the firewall and to enable promiscuous mode on the NICs
#Calico depends on etcd; see the "flannel" section above for how to install and start it

Edit docker.service and restart docker
    Edit /usr/lib/systemd/system/docker.service on host1/host2
        [root@host1 ~]# cat /usr/lib/systemd/system/docker.service |grep ExecStart
        #ExecStart=/usr/bin/dockerd
        ExecStart=/usr/bin/dockerd --cluster-store=etcd://192.168.1.30:2379
    Restart docker
        systemctl daemon-reload  && systemctl restart docker.service



Deploy calico
    Download calicoctl:
        wget -O /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v1.0.2/calicoctl
        chmod +x /usr/local/bin/calicoctl


    Start calico on host1 and host2:
        calicoctl node run    #starting it this way kept failing! ###the node container was launched with the wrong parameter (ETCD_AUTHORITY=127.0.0.1:2379) and would not start
        ETCD_ENDPOINTS=http://192.168.1.30:2379 calicoctl node run #starting it this way works

Create the calico network
    docker network create --driver calico --ipam-driver calico-ipam calico_net1
    ##this errored out as well??? after disabling the second NIC, rebooting the system and so on, it finally went through

Run containers
    docker run -it --network calico_net1 centos

===============================================================================================
Traffic path analysis
Container-to-container (same host):
                  (container-side veth pair)   (host-side veth pair)              (host-side veth pair)          (container-side veth pair)
                    7: cali0@if8-----------8: cali0fc27e0dc79              10: calif40399892a6@if9----------9: cali0@if10
    container1(IP:192.168.119.0/32)====================>host-side veth pair-----host-side veth pair=========================>container2(IP:192.168.119.1/32)
    default route                                                host /32 routes
    destination mac: the peer veth's mac                                    source mac: host-side veth mac; destination mac: ee:ee:ee:ee:ee:ee

Container-to-container (across hosts):
                  (container-side veth pair)   (host-side veth pair)
                    7: cali0@if8-----------8: cali0fc27e0dc79
    container1(IP:192.168.119.0/32)====================>host-side veth pair-----host1 physical NIC=========================>host2 physical NIC====.......=====>container2(IP:192.168.183.64/32)
    default route                                                   host route pointing at host2
    destination mac: the peer veth's mac                                    source mac: host1 physical NIC; destination mac: host2 physical NIC

Container access to external networks:
    This does not work: when the traffic reaches the physical network its source IP is still the container IP, and the container never answers the ARP requests sent by the gateway

=========================================================================================

[root@host2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
534a4e4c3588        bridge              bridge              local
61ee13f9c269        calico_net1         calico              global            #the calico network's SCOPE is global
7f67bd4bde62        host                host                local
39d491bf9f2e        none                null                local

container1
[root@fd6ebefe7617 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: cali0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.119.0/32 brd 192.168.119.0 scope global cali0
       valid_lft forever preferred_lft forever
[root@fd6ebefe7617 /]# ip r s
default via 169.254.1.1 dev cali0        #default route with next hop 169.254.1.1
169.254.1.1 dev cali0 scope link        #the next hop 169.254.1.1 is reached through cali0

[root@host1 ~]# ip a show cali0fc27e0dc79
8: cali0fc27e0dc79@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether aa:34:c4:e3:ca:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a834:c4ff:fee3:ca09/64 scope link
       valid_lft forever preferred_lft forever

[root@host1 ~]# ip r s
default via 192.168.1.2 dev ens33 proto static metric 100
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.31 metric 100
blackhole 192.168.119.0/26 proto bird
192.168.119.0 dev cali0fc27e0dc79 scope link
192.168.119.2 dev cali29abe547b49 scope link
192.168.183.64/26 via 192.168.1.32 dev ens33 proto bird

[root@host1 ~]# ETCD_ENDPOINTS=http://192.168.1.30:2379 calicoctl get profile calico_net1 -o yaml
- apiVersion: v1
  kind: profile
  metadata:
    name: calico_net1
    tags:
    - calico_net1
  spec:
    egress:
    - action: allow
      destination: {}
      source: {}
    ingress:
    - action: allow
      destination: {}
      source:
        tag: calico_net1
Calico network isolation; customizing Calico policy; customizing the Calico IP pool
========================================================================================================
Calico network isolation is implemented through Calico policy; it is policy that separates the different Calico networks.
Calico's default policy rule is: a container may only communicate with containers in the same calico network.
Each Calico network has a profile of the same name, and the profile defines that network's policy.

---------------------------------------------------------------------------------------------------------
[root@host1 ~]# ETCD_ENDPOINTS=http://192.168.1.30:2379 calicoctl get profile calico_net2 -o yaml
- apiVersion: v1
  kind: profile
  metadata:
    name: calico_net2
    tags:
    - calico_net2
  spec:
    egress:
    - action: allow
      destination: {}
      source: {}        #egress traffic is unrestricted by default
    ingress:
    - action: allow
      destination: {}    #ingress traffic is restricted by default
      source: {}        #change this to {} to make ingress unrestricted
        tag: calico_net1#and delete this line
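For reference, after applying the two edits noted above, the whole calico_net2 profile would read as follows (a sketch that simply applies those edits):
- apiVersion: v1
  kind: profile
  metadata:
    name: calico_net2
    tags:
    - calico_net2
  spec:
    egress:
    - action: allow
      destination: {}
      source: {}
    ingress:
    - action: allow
      destination: {}
      source: {}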



ETCD_ENDPOINTS=http://192.168.1.30:2379 calicoctl get profile calico_net2 -o yaml > web2.yml   ###save to a yml file
ETCD_ENDPOINTS=http://192.168.1.30:2379 calicoctl apply -f web2.yml                               ###apply the edited yml
Policy of calico_net1 and calico_net2 before the change:
    egress: unrestricted for both
    ingress: restricted by tag
    capture results: packets show up on the host-side veth of container1 (calico_net1); nothing arrives on the host-side veth of container2 (calico_net2)
After changing the calico_net2 policy:
    ingress: unrestricted
    container1 ping container2: works (yes, only calico_net2's ingress rule was changed)
    container2 ping container1: fails
Cross-host, cross-calico-network communication was verified as well after the policy change above: same behaviour, one-way connectivity only.
---------------------------------------------------------------------------------------------------------
Customizing the Calico IP pool
    In essence this means editing a yml file and adding cidr: 17.2.0.0/16 under metadata; see the sketch below
    For details see 071 - 如何定制 Calico 的 IP 池?
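A sketch of what such an ipPool resource might look like with calicoctl v1.x (the exact field layout here is an assumption, not taken from this experiment; nat-outgoing is the option that would also address the external-access problem noted earlier):
    #ippool.yml (assumed format)
    - apiVersion: v1
      kind: ipPool
      metadata:
        cidr: 17.2.0.0/16
      spec:
        nat-outgoing: true        #assumed option: SNAT container traffic leaving the pool for non-Calico destinations
    #apply it the same way as the profile above
    ETCD_ENDPOINTS=http://192.168.1.30:2379 calicoctl apply -f ippool.yml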


 weave

weave overview; the Linux bridge weave and the Open vSwitch datapath
=============================================================================
weave is a container networking solution developed by Weaveworks. The virtual network weave creates can connect containers deployed on multiple hosts.
    To the containers, weave behaves like one giant Ethernet switch: every container is plugged into it and containers can talk to each other directly, without NAT or port mapping.
    In addition, weave's DNS module lets containers reach each other by hostname.

weave does not rely on a distributed database (such as etcd or consul) to exchange network information; running the weave components on each host is enough to build the cross-host container network.
-----------------------------------------
10.32.0.0/12 is the default subnet used by the weave network; if this address space clashes with existing IPs, a specific subnet can be assigned with --ipalloc-range:
    weave launch --ipalloc-range 10.2.0.0/16
-----------------------------------------
The weave network contains two virtual switches: the Linux bridge "weave" and the Open vSwitch "datapath", connected to each other by a veth pair.
The weave bridge attaches containers to the weave network, while the datapath sends and receives the data between hosts through the VxLAN tunnel.
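Both switches and the veth pair linking them can be seen on the host once weave is running (a quick look; device names as observed later in this experiment, and weave status is part of the weave CLI):
    weave status                        #peers, connections, IP allocation, DNS
    ip -d link show weave               #the Linux bridge
    ip -d link show datapath            #the Open vSwitch datapath
    ip -d link show vethwe-bridge       #veth end attached to the weave bridge (master weave)
    ip -d link show vethwe-datapath     #veth end attached to the datapath (master datapath)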

 

weave network experiment; traffic path analysis; container and host routing tables, etc.
=========================================================================================================================
###The behaviour observed differs somewhat from the tutorial, but connectivity did work in the end!!!
Install and deploy weave
    Run the following on host1 and host2:
    curl -L git.io/weave -o /usr/local/bin/weave            ###this link seems to be dead
    wget -O /usr/local/bin/weave https://raw.githubusercontent.com/zettio/weave/master/weave #did not work either
        #in the end the weave script was downloaded from a Baidu netdisk link
    chmod a+x /usr/local/bin/weave

----------------------------
Start weave on host1
    weave launch #weave runs three containers:
                    weave is the main program: it builds the weave network, sends and receives data, provides DNS, and so on.
                        #the weave container runs in host network mode
                    weaveplugin is the libnetwork CNM driver that implements the Docker network.
                    weaveproxy proxies Docker CLI commands: when a user creates a container with the Docker CLI, it automatically attaches the container to the weave network.
                    ###the observation differs from the tutorial here: the tutorial shows all 3 containers up, while in this experiment only weave was up and weaveplugin/weaveproxy stayed in the created state

Start containers on host1
    eval $(weave env)        #routes subsequent docker commands through the weave proxy; to restore the previous environment, run eval $(weave env --restore).
    docker run -it centos    #with the default config weave uses one big subnet (e.g. 10.32.0.0/12); containers on every host get their IPs from this address space and, being in the same subnet, can talk to each other directly.
    docker run -e WEAVE_CIDR=net:10.32.2.0/24 -it centos    #pin the container to a specific subnet; note: containers in a small subnet like this have connectivity problems with the big 10.32.0.0/12 subnet
    docker run -e WEAVE_CIDR=ip:10.32.6.6/24 -it centos        #pin the container to a specific IP; same caveat about small subnets vs. the big 10.32.0.0/12 subnet
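To confirm which weave IP each container actually received (weave ps is part of the weave CLI; output not reproduced here):
    weave ps            #lists container id, MAC and weave IP/prefix for every attached container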

--------------------------
Start weave on host2
    weave launch 192.168.1.31
Start containers on host2
    eval $(weave env)
    docker run -it centos
    
==============================================================================================================================================
Traffic path analysis
Containers on all hosts share one subnet, so intra-host and inter-host access follow the same logic; the only difference is that intra-host traffic does not go through vxlan.
Container-to-container (same host):
          (container-side veth pair)     (bridge-side veth pair)
           28: ethwe@if29-----------------29: vethwepl7250@if28    14: vethwe-bridge@vethwe-datapath
    container1(IP:10.40.0.0/12)==========================>weave(linux bridge)==================================>container2(IP:10.32.0.33/12)
    route 10.32.0.0/12                                   the weave container runs in host network mode

Container-to-container (across hosts): #it looks like cross-host traffic, but to the containers it is same-subnet traffic
          (container-side veth pair)     (bridge-side veth pair)
           28: ethwe@if29-----------------29: vethwepl7250@if28    14: vethwe-bridge@vethwe-datapath    13: vethwe-datapath@vethwe-bridge     16: vxlan-6784
    container1(IP:10.40.0.0/12)==========================>weave(linux bridge)=======================================================>datapath(ovs)====================>ens33 physical NIC=========......=======>container2(IP:10.32.0.1/12)
    route 10.32.0.0/12                                   the weave container runs in host network mode                                 vxlan-6784: vxlan device                   the packet is encapsulated as a vxlan frame
    destination mac: container2's mac, set directly


Container access to external networks:
    goes out the container's other NIC (eth0): the logic is the same as for an ordinary bridge container, the default route's gateway is on docker0, and SNAT is applied on the way out
============================================================================================================================================
#container1
[root@2289560ff2ac /]# ip -d a
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:0a:03:13:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.3.19.2/24 brd 10.3.19.255 scope global eth0
       valid_lft forever preferred_lft forever
28: ethwe@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default
    link/ether 06:b2:90:84:dd:d4 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.40.0.0/12 scope global ethwe
       valid_lft forever preferred_lft forever
[root@2289560ff2ac /]# ip r s
default via 10.3.19.1 dev eth0
10.3.19.0/24 dev eth0 proto kernel scope link src 10.3.19.2
10.32.0.0/12 dev ethwe proto kernel scope link src 10.40.0.0
224.0.0.0/4 dev ethwe scope link

#container2
[root@a519ed74da74 /]# ip -d a
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:0a:03:54:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.3.84.2/24 brd 10.3.84.255 scope global eth0
       valid_lft forever preferred_lft forever
24: ethwe@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default
    link/ether 5a:a4:44:b4:7b:15 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.32.0.1/12 scope global ethwe
       valid_lft forever preferred_lft forever
[root@a519ed74da74 /]# ip r s
default via 10.3.84.1 dev eth0
10.3.84.0/24 dev eth0 proto kernel scope link src 10.3.84.2
10.32.0.0/12 dev ethwe proto kernel scope link src 10.32.0.1
224.0.0.0/4 dev ethwe scope link

#host1
[root@host1 ~]# ip -d a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ae:d6:88 brd ff:ff:ff:ff:ff:ff promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 192.168.1.31/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feae:d688/64 scope link
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ae:d6:92 brd ff:ff:ff:ff:ff:ff promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 172.16.5.212/29 brd 172.16.5.215 scope global noprefixroute ens36
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feae:d692/64 scope link
       valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:a3:f6:cf:97 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:a3:f6:cf:97 designated_root 8000.2:42:a3:f6:cf:97 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   93.59 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.3.19.1/24 brd 10.3.19.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a3ff:fef6:cf97/64 scope link
       valid_lft forever preferred_lft forever
8: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 72:c6:4c:d0:70:6f brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::70c6:4cff:fed0:706f/64 scope link
       valid_lft forever preferred_lft forever
10: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 72:d2:26:ce:f6:40 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.72:d2:26:ce:f6:40 designated_root 8000.72:d2:26:ce:f6:40 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  134.55 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::70d2:26ff:fece:f640/64 scope link
       valid_lft forever preferred_lft forever
11: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 36:09:51:19:e1:7f brd ff:ff:ff:ff:ff:ff promiscuity 0
    dummy numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
13: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
    link/ether 5a:35:47:be:6a:63 brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth
    openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::5835:47ff:febe:6a63/64 scope link
       valid_lft forever preferred_lft forever
14: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 5a:5c:45:cf:97:6b brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.72:d2:26:ce:f6:40 designated_root 8000.72:d2:26:ce:f6:40 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::585c:45ff:fecf:976b/64 scope link
       valid_lft forever preferred_lft forever
21: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 1e:a0:af:b3:5d:93 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 0 srcport 0 0 dstport 6784 nolearning ageing 300 udpcsum noudp6zerocsumtx udp6zerocsumrx external
    openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::1ca0:afff:feb3:5d93/64 scope link
       valid_lft forever preferred_lft forever
27: vethc38afb9@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 4a:64:a4:c4:db:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.2:42:a3:f6:cf:97 designated_root 8000.2:42:a3:f6:cf:97 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::4864:a4ff:fec4:db9a/64 scope link
       valid_lft forever preferred_lft forever
29: vethwepl7250@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 32:70:42:67:43:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin on guard off root_block off fastleave off learning on flood on port_id 0x8002 port_no 0x2 designated_port 32770 designated_cost 0 designated_bridge 8000.72:d2:26:ce:f6:40 designated_root 8000.72:d2:26:ce:f6:40 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::3070:42ff:fe67:4302/64 scope link
       valid_lft forever preferred_lft forever
[root@host1 ~]# ip r s
default via 192.168.1.2 dev ens33 proto static metric 102
10.3.19.0/24 dev docker0 proto kernel scope link src 10.3.19.1
172.16.5.208/29 dev ens36 proto kernel scope link src 172.16.5.212 metric 101
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.31 metric 102

#host2
[root@host2 ~]# ip -d a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:8a:86:b4 brd ff:ff:ff:ff:ff:ff promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 192.168.1.32/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8a:86b4/64 scope link
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:8a:86:be brd ff:ff:ff:ff:ff:ff promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 172.16.5.210/29 brd 172.16.5.215 scope global noprefixroute ens36
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8a:86be/64 scope link
       valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:9f:18:2c:5a brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:9f:18:2c:5a designated_root 8000.2:42:9f:18:2c:5a root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer  129.19 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 10.3.84.1/24 brd 10.3.84.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9fff:fe18:2c5a/64 scope link
       valid_lft forever preferred_lft forever
8: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 66:bd:7a:ed:96:da brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::64bd:7aff:feed:96da/64 scope link
       valid_lft forever preferred_lft forever
10: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 22:84:db:2b:54:21 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.22:84:db:2b:54:21 designated_root 8000.22:84:db:2b:54:21 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer    0.00 tcn_timer    0.00 topology_change_timer    0.00 gc_timer   88.23 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::2084:dbff:fe2b:5421/64 scope link
       valid_lft forever preferred_lft forever
11: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2e:a7:2c:da:8c:ba brd ff:ff:ff:ff:ff:ff promiscuity 0
    dummy numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
13: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
    link/ether 5e:b2:e9:29:6e:3d brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth
    openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::5cb2:e9ff:fe29:6e3d/64 scope link
       valid_lft forever preferred_lft forever
14: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 5e:dc:63:98:65:0f brd ff:ff:ff:ff:ff:ff promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.22:84:db:2b:54:21 designated_root 8000.22:84:db:2b:54:21 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::5cdc:63ff:fe98:650f/64 scope link
       valid_lft forever preferred_lft forever
21: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 46:da:2b:ad:5b:92 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 0 srcport 0 0 dstport 6784 nolearning ageing 300 udpcsum noudp6zerocsumtx udp6zerocsumrx external
    openvswitch_slave numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::44da:2bff:fead:5b92/64 scope link
       valid_lft forever preferred_lft forever
23: vethca7457f@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 26:3a:47:d1:94:fa brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.2:42:9f:18:2c:5a designated_root 8000.2:42:9f:18:2c:5a hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::243a:47ff:fed1:94fa/64 scope link
       valid_lft forever preferred_lft forever
25: vethwepl6319@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 32:3e:75:12:f0:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin on guard off root_block off fastleave off learning on flood on port_id 0x8002 port_no 0x2 designated_port 32770 designated_cost 0 designated_bridge 8000.22:84:db:2b:54:21 designated_root 8000.22:84:db:2b:54:21 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet6 fe80::303e:75ff:fe12:f004/64 scope link
       valid_lft forever preferred_lft forever
[root@host2 ~]# ip r s
default via 192.168.1.2 dev ens33 proto static metric 101
10.3.84.0/24 dev docker0 proto kernel scope link src 10.3.84.1
172.16.5.208/29 dev ens36 proto kernel scope link src 172.16.5.210 metric 100
192.168.1.0/24 dev ens33 proto kernel scope link src 192.168.1.32 metric 101
Accessing weave containers from an external network
======================================================================
Two steps are needed for an external network to reach the weave containers:
    1. First, join the host itself to the weave network.
    2. Then use the host as the gateway into the weave network; see the sketch below.

    weave expose        #after running this, the weave bridge is given an IP from the container subnet; from then on the container network is reachable from the outside through the host
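A sketch of step 2 under the addressing used in this experiment (the external machine and its route are illustrative assumptions): once the host is exposed, the outside world just needs a route for the weave subnet via that host.
    #on host1: attach the host itself to the weave network
    weave expose                                 #prints the IP the weave bridge was given
    #on the external machine (or its router): route the weave subnet via host1
    ip route add 10.32.0.0/12 via 192.168.1.31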

 
