[docker] Hands-on with Docker's built-in overlay network
Start Consul on n3
docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
Access the Consul UI at:
http://192.168.2.13:8500
Start dockerd on n1 and n2 with the cluster-store options
iptables -P FORWARD ACCEPT
systemctl stop docker
dockerd --cluster-store=consul://192.168.2.13:8500 --cluster-advertise=eth0:2376
--cluster-store points dockerd at the Consul address.
--cluster-advertise tells Consul the address at which this host can be reached.
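Rather than running dockerd in the foreground, the same settings can be made persistent in /etc/docker/daemon.json; this is a sketch that assumes a Docker version which still supports the legacy (pre-Swarm-mode) cluster-store options:

```json
{
  "cluster-store": "consul://192.168.2.13:8500",
  "cluster-advertise": "eth0:2376"
}
```

After writing the file, restart the daemon with `systemctl restart docker` instead of stopping it and launching dockerd by hand.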
Check Consul, create an overlay network, and view its IP subnet
docker network create -d overlay ov_net1
[root@n1 ~]# docker network inspect ov_net1
[
{
"Name": "ov_net1",
"Id": "24e2f69b4d3475f84e4f5df65c13a29ffe4469c924d92d772eeb925900a4a40f",
"Created": "2017-12-26T21:42:43.86848909+08:00",
"Scope": "global",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
...
IPAM stands for IP Address Management; Docker automatically allocated the address space 10.0.0.0/24 to ov_net1.
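Judging from the allocation above, Docker's default IPAM hands each new overlay network the next free /24 out of a 10.0.0.0 pool, with .1 in each subnet becoming the gateway. A small shell sketch of that pattern (an observation from this output, not taken from Docker's source):

```shell
# Model of the default IPAM pattern observed above: the nth overlay network
# gets 10.0.n.0/24. This is an assumption based on observed behaviour.
next_subnet() {
  echo "10.0.$1.0/24"
}
next_subnet 0   # ov_net1's subnet above
next_subnet 1   # what a second overlay network would typically get
```

If you want a specific range instead, `docker network create -d overlay --subnet <cidr> <name>` sets it explicitly.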
Run containers on ov_net1 from n1 and n2 and test connectivity
- Create b1 on n1 and b2 on n2
docker run -itd --name b1 --net ov_net1 busybox
docker exec -it b1 ip a
docker run -itd --name b2 --net ov_net1 busybox
docker exec -it b2 ip a
- Test connectivity
[root@n1 ~]# docker exec b1 ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=1.123 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.648 ms
- The overlay network has a built-in DNS server, so containers on it can reach each other by name:
[root@n1 ~]# docker exec b1 ping b2
Digging into the details of the overlay network
Each container gets two NICs: one sits on the overlay subnet and carries traffic through the VXLAN tunnel, the other attaches to docker_gwbridge for outbound access.
- b1's default gateway is 172.18.0.1
[root@n1 ~]# docker exec b1 ip r
default via 172.18.0.1 dev eth1
10.0.0.0/24 dev eth0 scope link src 10.0.0.2
172.18.0.0/16 dev eth1 scope link src 172.18.0.2
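The routing table above is a simple two-way split. A toy sketch of the lookup b1 effectively performs (`pick_iface` is a made-up helper for illustration, not anything in busybox):

```shell
# Toy model of b1's route lookup: the overlay subnet (10.0.0.0/24) goes out
# eth0 into the VXLAN tunnel; everything else, including the 172.18.0.0/16
# bridge subnet, leaves via eth1 toward docker_gwbridge (172.18.0.1).
pick_iface() {
  case "$1" in
    10.0.0.*) echo eth0 ;;  # overlay subnet: east-west, stays in the tunnel
    *)        echo eth1 ;;  # default route: north-south via docker_gwbridge
  esac
}
pick_iface 10.0.0.3   # b2, another overlay container
pick_iface 8.8.8.8    # the outside world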
- List the networks; note the docker_gwbridge bridge
[root@n1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
6ec6e565e277 docker_gwbridge bridge local
24e2f69b4d34 ov_net1 overlay global
- View docker_gwbridge's gateway address: it matches b1's default route, which shows that docker_gwbridge is the containers' gateway
[root@n1 ~]# docker network inspect docker_gwbridge|grep Gateway
"Gateway": "172.18.0.1"
- View docker_gwbridge's address on the host
[root@n1 ~]# ip a s docker_gwbridge
8: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:24:f2:17:b6 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 scope global docker_gwbridge
- Create another container, b3
[root@n1 ~]# docker run -itd --name b3 --net ov_net1 busybox
41b677a7769514daeb6804142b8db4e8d7ad4c6623b363ae4fdcac5a252601b4
1. Docker creates a dedicated network namespace for each overlay network, containing a Linux bridge named br0.
2. Endpoints are still implemented as veth pairs: one end becomes eth0 inside the container, the other end attaches to br0 in that namespace.
3. Besides all of the endpoints, br0 also connects a vxlan device used to build VXLAN tunnels to the other hosts.
- Inspect the namespace (first make Docker's netns directory visible: ln -s /var/run/docker/netns /var/run/netns)
[root@n1 ~]# ip netns exec 1-24e2f69b4d brctl show
RTNETLINK answers: Invalid argument
bridge name bridge id STP enabled interfaces
br0 8000.0e304477edf8 no veth0
veth1
vxlan0
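The brctl listing can also be consumed by scripts. A small awk sketch that extracts br0's ports from the output captured above (the sample is embedded so the snippet is self-contained):

```shell
# Extract the interface column from `brctl show` output. Continuation rows
# carry only the interface name, so the last field ($NF) works for both the
# first row and the continuation rows; the header (NR == 1) is skipped.
brctl_out='bridge name     bridge id               STP enabled     interfaces
br0             8000.0e304477edf8       no              veth0
                                                        veth1
                                                        vxlan0'
ports=$(printf '%s\n' "$brctl_out" | awk 'NR > 1 { print $NF }')
echo "$ports"   # veth0, veth1 and vxlan0, one per line
```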
- View the VXLAN network identifier (VNI) of vxlan0
[root@n1 ~]# ip netns exec 1-24e2f69b4d ip -d l show vxlan0|grep vxlan
5: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT
vxlan id 256 srcport 0 0 dstport 4789 proxy l2miss l3miss ageing 300
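Two details in this line are worth decoding: `dstport 4789` is the IANA-assigned VXLAN UDP port, and the interface's `mtu 1450` follows from VXLAN's encapsulation overhead. The arithmetic:

```shell
# VXLAN adds 50 bytes of overhead relative to the physical MTU: the inner
# Ethernet header (14) plus the outer IPv4 (20), UDP (8) and VXLAN (8)
# headers all have to fit inside the underlay's 1500-byte MTU.
phys_mtu=1500
overhead=$((14 + 20 + 8 + 8))
inner_mtu=$((phys_mtu - overhead))
echo "$inner_mtu"   # matches vxlan0's mtu above
```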
Inspecting the overlay (VXLAN) packets on the wire; a sample capture:
https://github.com/lannyMa/scripts/blob/master/pkgs/overlay_vxlan.pcap
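To take such a capture yourself: since VXLAN rides on UDP destination port 4789 (the `dstport` shown above), the tcpdump filter is a one-liner. The sketch below only builds and echoes the command; the interface name eth0 is an assumption about which NIC carries the underlay traffic:

```shell
# Build the capture command for overlay traffic; VXLAN is UDP port 4789.
# eth0 is assumed to be the underlay NIC carrying the tunnel between hosts.
filter='udp port 4789'
cmd="tcpdump -ni eth0 -w overlay_vxlan.pcap $filter"
echo "$cmd"
```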