Docker (CentOS 7) Usage Notes 5 - Weave Network
Weave official site: https://www.weave.works
1. Download and install
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
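To verify the installation before going further, the script can report its version:

weave version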
2. Deploy the Weave network
(1) Run this on the first machine. If the default allocation range (10.32.0.0/12) is acceptable, launching is simply:
weave launch
This test uses a custom address range, so the launch command is slightly different:
weave launch --ipalloc-range 168.108.0.0/16
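After launching, weave status is a quick sanity check; its IPAM section should report the custom range 168.108.0.0/16:

weave status          # router / peer / DNS overview
weave status ipam     # IP allocation per peer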
After a successful launch, three Weave containers exist; only weave itself keeps running, while the other two are data-only containers that stay in the Created state:
# docker ps -a
CONTAINER ID   IMAGE                        COMMAND                  CREATED      STATUS      NAMES
c9ed14e97dfd   weaveworks/weave:2.0.4       "/home/weave/weave..."   2 days ago   Up 2 days   weave
7db070b5f54e   weaveworks/weaveexec:2.0.4   "/bin/false"             2 days ago   Created     weavevolumes-2.0.4
b6d603c8c7a8   weaveworks/weavedb           "data-only"              2 days ago   Created     weavedb
A new virtual interface named weave appears (together with datapath). Note their MTU of 1376, which leaves headroom for Weave's encapsulation overhead:
# ifconfig
datapath: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        ether a6:66:9d:b6:5f:66  txqueuelen 1000  (Ethernet)
        RX packets 3  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.0.1  netmask 255.255.240.0  broadcast 0.0.0.0
        ether 02:42:97:9e:30:4b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker_gwbridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.16.1  netmask 255.255.240.0  broadcast 0.0.0.0
        ether 02:42:b9:64:2f:b8  txqueuelen 0  (Ethernet)
        RX packets 366610  bytes 29530131 (28.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 366610  bytes 29530131 (28.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.28.148.61  netmask 255.255.252.0  broadcast 10.28.151.255
        ether 00:16:3e:0e:80:7a  txqueuelen 1000  (Ethernet)
        RX packets 127115170  bytes 12384433822 (11.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 78033899  bytes 8572122284 (7.9 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 101.37.162.152  netmask 255.255.252.0  broadcast 101.37.163.255
        ether 00:16:3e:0e:86:ce  txqueuelen 1000  (Ethernet)
        RX packets 3995610  bytes 538305947 (513.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4881735  bytes 4715682947 (4.3 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 366610  bytes 29530131 (28.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 366610  bytes 29530131 (28.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth7720327: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 7e:69:f5:02:6d:9e  txqueuelen 0  (Ethernet)
        RX packets 6  bytes 372 (372.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 798 (798.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        ether c2:a6:97:90:01:0a  txqueuelen 1000  (Ethernet)
        RX packets 3  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
(2) Have the other nodes join the Weave network created above:
weave launch 10.28.148.61 --ipalloc-range 168.108.0.0/16
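Before starting containers it is worth confirming that the two peers actually connected; both of these should list the other node:

weave status peers          # known peers and their connections
weave status connections    # connection state (fastdp, or sleeve fallback)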
(3) If the network was created successfully, the weave network is visible through Docker on every node:
# docker network ls
NETWORK ID     NAME              DRIVER      SCOPE
7c19813ffbff   bridge            bridge      local
a7a2188380ba   docker_gwbridge   bridge      local
7f97ac1cfe6e   host              host        local
z08xcdlswkbk   ingress           overlay     swarm
dfa68b3918b3   none              null        local
42f695c8c061   weave             weavemesh   local
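docker network inspect shows the subnet Docker records for the weave network, which is another way to confirm that the --ipalloc-range setting took effect:

docker network inspect weave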
3. Docker launch test
(1) Launching is simple: just specify weave as the network in an ordinary docker run command:
docker run -ti --network weave mytest
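As an alternative to passing --network weave every time, Weave also ships a Docker API proxy. A sketch, assuming the proxy started along with weave launch on this host:

eval $(weave env)        # point the docker client at the Weave proxy
docker run -ti mytest    # attaches to the Weave network with no extra flags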
(2) Start a container on each of the two nodes.
Running ifconfig inside each container shows that it uses the Weave address range; the two containers were assigned 168.108.0.1 and 168.108.192.0 respectively.
[root@f451f6736785 /]# ifconfig
ethwe0    Link encap:Ethernet  HWaddr 42:6E:BF:E4:72:A7
          inet addr:168.108.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:42 (42.0 b)  TX bytes:42 (42.0 b)

eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:10:06
          inet addr:192.168.16.6  Bcast:0.0.0.0  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

[root@7c202270ff9f /]# ifconfig
ethwe0    Link encap:Ethernet  HWaddr F6:8D:A2:CB:EF:F5
          inet addr:168.108.192.0  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:42 (42.0 b)

eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:10:03
          inet addr:192.168.16.3  Bcast:0.0.0.0  Mask:255.255.240.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
The containers can ping each other:
[root@f451f6736785 /]# ping 168.108.192.0
PING 168.108.192.0 (168.108.192.0) 56(84) bytes of data.
64 bytes from 168.108.192.0: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from 168.108.192.0: icmp_seq=2 ttl=64 time=0.334 ms
64 bytes from 168.108.192.0: icmp_seq=3 ttl=64 time=0.257 ms
64 bytes from 168.108.192.0: icmp_seq=4 ttl=64 time=0.386 ms
^C
--- 168.108.192.0 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3845ms
rtt min/avg/max/mdev = 0.257/0.478/0.935/0.267 ms

[root@7c202270ff9f /]# ping 168.108.0.1
PING 168.108.0.1 (168.108.0.1) 56(84) bytes of data.
64 bytes from 168.108.0.1: icmp_seq=1 ttl=64 time=0.428 ms
64 bytes from 168.108.0.1: icmp_seq=2 ttl=64 time=0.274 ms
64 bytes from 168.108.0.1: icmp_seq=3 ttl=64 time=0.344 ms
64 bytes from 168.108.0.1: icmp_seq=4 ttl=64 time=0.341 ms
^C
--- 168.108.0.1 ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 8592ms
rtt min/avg/max/mdev = 0.235/0.301/0.428/0.056 ms
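Pinging raw IPs is only for demonstration. weave launch also starts weaveDNS, and containers started through the proxy with a hostname under weave.local are registered by name. A minimal sketch, assuming the weave env proxy setup shown earlier and a hypothetical container name pinghost:

eval $(weave env)
docker run -dti --name pinghost -h pinghost.weave.local mytest   # registered in weaveDNS
docker run -ti mytest ping -c 3 pinghost                         # resolved via weaveDNS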
(3) Throughput test:
The test environment is ECS instances on Alibaba Cloud, with 1 Gbit/s of internal bandwidth.
First install iperf3 (a network throughput measurement tool):
curl "http://downloads.es.net/pub/iperf/iperf-3.0.6.tar.gz" -o iperf-3.0.6.tar.gz tar xzvf iperf-3.0.6.tar.gz cd iperf-3.0.6 ./configure make make install
Start the iperf3 server on node 2:
# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Run the throughput test from node 1:
# iperf3 -c 168.108.192.0
Connecting to host 168.108.192.0, port 5201
[  4] local 168.108.0.1 port 50208 connected to 168.108.192.0 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   170 MBytes  1.42 Gbits/sec  1443    344 KBytes
[  4]   1.00-2.00   sec  95.2 MBytes   799 Mbits/sec  3432    835 KBytes
[  4]   2.00-3.00   sec  95.0 MBytes   797 Mbits/sec  3934    397 KBytes
[  4]   3.00-4.00   sec  96.2 MBytes   807 Mbits/sec  3306    684 KBytes
[  4]   4.00-5.00   sec  93.8 MBytes   786 Mbits/sec  4532    818 KBytes
[  4]   5.00-6.00   sec  95.0 MBytes   797 Mbits/sec  4308    617 KBytes
[  4]   6.00-7.00   sec  95.0 MBytes   797 Mbits/sec  4610    326 KBytes
[  4]   7.00-8.00   sec  95.0 MBytes   797 Mbits/sec  2607    887 KBytes
[  4]   8.00-9.00   sec  93.8 MBytes   786 Mbits/sec  4161    905 KBytes
[  4]   9.00-10.00  sec  95.0 MBytes   797 Mbits/sec  4314    666 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1024 MBytes   859 Mbits/sec  36647         sender
[  4]   0.00-10.00  sec  1021 MBytes   856 Mbits/sec                receiver

iperf Done.
Averaged over the run: 859 Mbits/sec sending and 856 Mbits/sec receiving. Throughput over the Weave network is quite satisfactory.
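For a baseline, the same measurement can be repeated over the host network to estimate Weave's overhead; a sketch, with node 2's internal address left as a placeholder:

# on node 2, listening on the host network
iperf3 -s
# on node 1
iperf3 -c <node2-internal-ip>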