DOCKER with FLANNEL (ETCD + FLANNEL) Networking
1. FLANNEL Network Overview
Flannel is a cross-host container networking solution based on an overlay network: it encapsulates packets inside another network packet for routing, forwarding, and communication. Developed by CoreOS and written in Go, Flannel is a tool built specifically for multi-host Docker interconnection; it gives containers created on different nodes of a cluster virtual IP addresses that are unique across the whole cluster.
2. How FLANNEL Works
2.1 Principle
1. Flannel assigns each host a subnet, and containers draw their IPs from that subnet. These IPs are routable between hosts, so containers can communicate across hosts without NAT or port mapping.
2. Each subnet is carved out of a larger IP pool. Flannel runs an agent called flanneld on every host whose job is to allocate a subnet from that pool.
3. Flannel uses etcd to store the network configuration, the allocated subnets, the host IPs, and other state.
4. Forwarding of Flannel packets between hosts is handled by a backend; UDP, VxLAN, host-gw, AWS VPC, GCE routes, and several other backends are currently supported.
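As a quick sanity check on this allocation model, the arithmetic can be sketched in shell (assuming the 10.10.0.0/16 pool configured later in section 3.7 and flannel's default /24 per-host subnets):

```shell
# Each host leases one /24 subnet carved out of the cluster-wide /16 pool.
pool_prefix=16
subnet_prefix=24
# Number of /24 subnets that fit in a /16, i.e. the maximum number of hosts.
max_hosts=$(( 1 << (subnet_prefix - pool_prefix) ))
echo "max hosts: ${max_hosts}"
# Each /24 leaves 254 usable container IPs (256 minus network and broadcast).
echo "container IPs per host: $(( 256 - 2 ))"
```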
2.2 Data Forwarding Flow
1. A container addresses the target container by its IP directly; the packet leaves through the container's own eth0.
2. The packet travels over the veth pair and arrives at vethXXX on the host.
3. vethXXX is attached to the virtual bridge docker0, so the packet is forwarded out through docker0.
4. A routing-table lookup sends packets destined for external container IPs to the flannel0 interface, a point-to-point virtual NIC; the packet is then handed to the flanneld process listening on the other end.
5. flanneld maintains the inter-node routing table via etcd; it wraps the original packet in a UDP layer and sends it out through the configured iface.
6. The packet crosses the host network to the target host.
7. There the packet moves up the stack to the transport layer and is delivered to the flanneld process listening on port 8285.
8. The payload is unwrapped and handed to the flannel0 interface.
9. A routing-table lookup shows the packet belongs to a container behind docker0.
10. docker0 finds the attached target container and delivers the packet.
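One consequence of step 5 worth noting: the UDP backend wraps every packet in an extra IP header (20 bytes) plus UDP header (8 bytes), which is why the flannel0 interface shown later in section 4.7 reports an MTU of 1472 rather than the physical 1500. A quick sketch of the arithmetic:

```shell
# UDP-backend encapsulation overhead: outer IP header + outer UDP header.
phys_mtu=1500
ip_hdr=20
udp_hdr=8
flannel_mtu=$(( phys_mtu - ip_hdr - udp_hdr ))
echo "flannel0 mtu: ${flannel_mtu}"
```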
3. Deploy the ETCD Cluster
3.1 Environment
Node name  | IP address    | Installed software
docker-01  | 192.168.1.220 | etcd, flannel
docker-02  | 192.168.1.221 | etcd, flannel
docker-03  | 192.168.1.222 | etcd, flannel
3.2 Install ETCD (on all three hosts)
[root@docker-01 ~]# yum -y install etcd
3.3 Configure ETCD
[root@docker-01 ~]# cp /etc/etcd/etcd.conf{,_bak}
[root@docker-01 ~]# cat /etc/etcd/etcd.conf
ETCD_NAME="docker-01"
ETCD_DATA_DIR="/var/lib/etcd/docker-01.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.220:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.220:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.220:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.220:2379"
ETCD_INITIAL_CLUSTER="docker-01=http://192.168.1.220:2380,docker-02=http://192.168.1.221:2380,docker-03=http://192.168.1.222:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@docker-02 ~]# cp /etc/etcd/etcd.conf{,_bak}
[root@docker-02 ~]# cat /etc/etcd/etcd.conf
ETCD_NAME="docker-02"
ETCD_DATA_DIR="/var/lib/etcd/docker-02.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.221:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.221:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.221:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.221:2379"
ETCD_INITIAL_CLUSTER="docker-01=http://192.168.1.220:2380,docker-02=http://192.168.1.221:2380,docker-03=http://192.168.1.222:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@docker-03 ~]# cp /etc/etcd/etcd.conf{,_bak}
[root@docker-03 ~]# cat /etc/etcd/etcd.conf
ETCD_NAME="docker-03"
ETCD_DATA_DIR="/var/lib/etcd/docker-03.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.222:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.222:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.222:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.222:2379"
ETCD_INITIAL_CLUSTER="docker-01=http://192.168.1.220:2380,docker-02=http://192.168.1.221:2380,docker-03=http://192.168.1.222:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
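Hand-editing three nearly identical files is error-prone: the per-host etcd.conf files above differ only in the node name and IP. A sketch of generating all three from one template (writes to /tmp/etcd-conf here for illustration; on the real hosts the file goes to /etc/etcd/etcd.conf):

```shell
#!/bin/sh
# Generate the three etcd.conf variants shown above from one template.
outdir=/tmp/etcd-conf    # on a real host: /etc/etcd
mkdir -p "$outdir"
cluster="docker-01=http://192.168.1.220:2380,docker-02=http://192.168.1.221:2380,docker-03=http://192.168.1.222:2380"

for pair in docker-01:192.168.1.220 docker-02:192.168.1.221 docker-03:192.168.1.222; do
    name=${pair%%:*}
    ip=${pair##*:}
    # Unquoted EOF so $name/$ip/$cluster expand inside the heredoc.
    cat > "$outdir/etcd.conf.$name" <<EOF
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/$name.etcd"
ETCD_LISTEN_PEER_URLS="http://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://$ip:2379"
ETCD_INITIAL_CLUSTER="$cluster"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
done
grep ETCD_NAME "$outdir"/etcd.conf.*
```

Copy each generated file to the matching host (or run the loop on each host and install only its own file).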
3.4 Edit the ETCD systemd unit file (on all three hosts)
[root@docker-01 ~]# cp /usr/lib/systemd/system/etcd.service{,_bak}
[root@docker-01 ~]# cat /usr/lib/systemd/system/etcd.service
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd \
--name=\"${ETCD_NAME}\" \
--data-dir=\"${ETCD_DATA_DIR}\" \
--listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" \
--listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" \
--initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" \
--advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" \
--initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" \
--initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" \
--initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536
3.5 Start the ETCD service (on all three hosts)
[root@docker-02 ~]# systemctl start etcd.service
[root@docker-02 ~]# etcdctl cluster-health
member 164a311aff833bc1 is healthy: got healthy result from http://192.168.1.222:2379
member b1eeb25e6baf68e0 is healthy: got healthy result from http://192.168.1.221:2379
member e7c8f1a60e57abe4 is healthy: got healthy result from http://192.168.1.220:2379
cluster is healthy
3.6 Check the ETCD cluster status; ETCD installation is now complete
# Check cluster health
[root@docker-02 ~]# etcdctl cluster-health
member 164a311aff833bc1 is healthy: got healthy result from http://192.168.1.222:2379
member b1eeb25e6baf68e0 is healthy: got healthy result from http://192.168.1.221:2379
member e7c8f1a60e57abe4 is healthy: got healthy result from http://192.168.1.220:2379
cluster is healthy
# List the etcd members; the output shows that docker-01 is currently the leader.
[root@docker-02 ~]# etcdctl member list
164a311aff833bc1: name=docker-03 peerURLs=http://192.168.1.222:2380 clientURLs=http://192.168.1.222:2379 isLeader=false
b1eeb25e6baf68e0: name=docker-02 peerURLs=http://192.168.1.221:2380 clientURLs=http://192.168.1.221:2379 isLeader=false
e7c8f1a60e57abe4: name=docker-01 peerURLs=http://192.168.1.220:2380 clientURLs=http://192.168.1.220:2379 isLeader=true
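When scripting against the cluster, the leader's name can be extracted from this output. A sketch that filters a captured copy of the member list above (on a live cluster, pipe `etcdctl member list` into awk instead of the heredoc):

```shell
# Pick out the name of the current leader from `etcdctl member list` output.
# A saved copy of the output above stands in for a live cluster here.
leader=$(awk '/isLeader=true/ { for (i = 1; i <= NF; i++) if ($i ~ /^name=/) { sub("name=", "", $i); print $i } }' <<'EOF'
164a311aff833bc1: name=docker-03 peerURLs=http://192.168.1.222:2380 clientURLs=http://192.168.1.222:2379 isLeader=false
b1eeb25e6baf68e0: name=docker-02 peerURLs=http://192.168.1.221:2380 clientURLs=http://192.168.1.221:2379 isLeader=false
e7c8f1a60e57abe4: name=docker-01 peerURLs=http://192.168.1.220:2380 clientURLs=http://192.168.1.220:2379 isLeader=true
EOF
)
echo "leader: $leader"   # prints: leader: docker-01
```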
3.7 Add the FLANNEL network configuration to ETCD
Note: the prefix directory (flannel_use) can be named however you like, but it must match the FLANNEL_ETCD_PREFIX="/flannel_use/network" setting in the flannel configuration file. The flannel startup code only recognizes a key named "config" under that prefix; with any other key it fails with the error Not a directory (/flannel_use/network).
# Fixed configuration
[root@docker-01 ~]# etcdctl set /flannel_use/network/config '{"Network":"10.10.0.0/16"}'
{"Network":"10.10.0.0/16"}
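flanneld will fail to start if the value stored under the config key is not valid JSON, so it is worth validating the payload before writing it. A sketch using `python3 -m json.tool` as the validator (any JSON validator would do; the etcdctl line is shown as a comment since it needs the live cluster):

```shell
# Validate the flannel network config JSON before handing it to etcdctl.
config='{"Network":"10.10.0.0/16"}'
if echo "$config" | python3 -m json.tool > /dev/null 2>&1; then
    echo "valid JSON"
    # On a cluster host you would now run:
    #   etcdctl set /flannel_use/network/config "$config"
else
    echo "invalid JSON" >&2
    exit 1
fi
```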
4. Deploy FLANNEL
4.1 Install FLANNEL
[root@docker-01 ~]# yum install -y flannel
4.2 Edit the FLANNEL configuration file
[root@docker-01 ~]# cp /etc/sysconfig/flanneld{,_bak}
[root@docker-01 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.220:2379,http://192.168.1.221:2379,http://192.168.1.222:2379"
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/flannel_use/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
4.3 Start FLANNEL
[root@docker-01 ~]# systemctl start flanneld
[root@docker-01 ~]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since 六 2020-02-22 18:28:36 CST; 30s ago
  Process: 3725 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
 Main PID: 3717 (flanneld)
   Memory: 18.6M
   CGroup: /system.slice/flanneld.service
           └─3717 /usr/bin/flanneld -etcd-endpoints=http://192.168.1.220:2379,http://192.168.1.221:2379,http://192.168.1.222:2379 -etcd-prefix=/flannel_use/network

2月 22 18:28:36 docker-01 flanneld-start[3717]: I0222 18:28:36.327966    3717 main.go:132] Installing signal handlers
2月 22 18:28:36 docker-01 flanneld-start[3717]: I0222 18:28:36.328105    3717 manager.go:136] Determining IP address of default interface
2月 22 18:28:36 docker-01 flanneld-start[3717]: I0222 18:28:36.328777    3717 manager.go:149] Using interface with name ens33 and address 192.168.1.220
2月 22 18:28:36 docker-01 flanneld-start[3717]: I0222 18:28:36.328798    3717 manager.go:166] Defaulting external address to interface address (192.168.1.220)
2月 22 18:28:36 docker-01 flanneld-start[3717]: I0222 18:28:36.344442    3717 local_manager.go:179] Picking subnet in range 10.10.1.0 ... 10.10.255.0
2月 22 18:28:36 docker-01 flanneld-start[3717]: I0222 18:28:36.413920    3717 manager.go:250] Lease acquired: 10.10.17.0/24
2月 22 18:28:36 docker-01 flanneld-start[3717]: I0222 18:28:36.416447    3717 network.go:98] Watching for new subnet leases
2月 22 18:28:36 docker-01 systemd[1]: Started Flanneld overlay address etcd agent.
2月 22 18:28:37 docker-01 flanneld-start[3717]: I0222 18:28:37.811484    3717 network.go:191] Subnet added: 10.10.72.0/24
2月 22 18:28:39 docker-01 flanneld-start[3717]: I0222 18:28:39.514051    3717 network.go:191] Subnet added: 10.10.87.0/24
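Once the lease is acquired, flanneld records it in /run/flannel/subnet.env, and the mk-docker-opts.sh helper seen in the ExecStartPost line converts that into /run/flannel/docker for Docker to consume. A sketch of what that file holds and how its values are read (the contents below mirror the 10.10.17.0/24 lease above and are recreated in /tmp for illustration; the real file lives in /run/flannel):

```shell
# Recreate a subnet.env like the one flanneld writes after taking its lease,
# then source it the way mk-docker-opts.sh does.
env_file=/tmp/subnet.env    # real path: /run/flannel/subnet.env
cat > "$env_file" <<'EOF'
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.17.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF
. "$env_file"
# These values become Docker's --bip and --mtu via $DOCKER_NETWORK_OPTIONS.
echo "docker0 bip: $FLANNEL_SUBNET  mtu: $FLANNEL_MTU"
```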
4.4 Note
After starting Flannel, be sure to restart Docker so that the IP range Flannel allocated takes effect; the docker0 bridge IP will then move into the subnet Flannel assigned above.
[root@docker-01 ~]# systemctl daemon-reload && systemctl restart docker
4.5 Edit the DOCKER unit file to use the FLANNEL network (on CentOS the flannel RPM also installs a systemd drop-in, /usr/lib/systemd/system/docker.service.d/flannel.conf, which sources /run/flannel/docker and thereby defines the $DOCKER_NETWORK_OPTIONS referenced below)
[root@docker-01 ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd --insecure-registry=172.17.29.74 -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
4.6 Restart DOCKER
[root@docker-01 ~]# systemctl daemon-reload && systemctl restart docker
4.7 Check that DOCKER is using the FLANNEL network
[root@docker-01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.17.1  netmask 255.255.255.0  broadcast 10.10.17.255
        inet6 fe80::42:2bff:fe60:4583  prefixlen 64  scopeid 0x20<link>
        ether 02:42:2b:60:45:83  txqueuelen 0  (Ethernet)
        RX packets 37  bytes 2884 (2.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 45  bytes 4050 (3.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.220  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::448f:7a09:b3fa:48e0  prefixlen 64  scopeid 0x20<link>
        inet6 2409:8a0c:1c:bf50:e01c:8280:b592:daa2  prefixlen 64  scopeid 0x0<global>
        ether 00:0c:29:c5:19:99  txqueuelen 1000  (Ethernet)
        RX packets 229870  bytes 47633933 (45.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 217461  bytes 21150952 (20.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.10.17.0  netmask 255.255.0.0  destination 10.10.17.0
        inet6 fe80::1c63:9bc3:c290:9c3c  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 222  bytes 18648 (18.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 36  bytes 2916 (2.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 698  bytes 42425 (41.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 698  bytes 42425 (41.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@docker-02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.72.1  netmask 255.255.255.0  broadcast 10.10.72.255
        inet6 fe80::42:c8ff:fe13:55b6  prefixlen 64  scopeid 0x20<link>
        ether 02:42:c8:13:55:b6  txqueuelen 0  (Ethernet)
        RX packets 34  bytes 2744 (2.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25  bytes 2202 (2.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.221  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 2409:8a0c:1c:bf50:741d:adf5:68de:37a3  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::31aa:6517:f925:5776  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2e:1b:fd  txqueuelen 1000  (Ethernet)
        RX packets 224328  bytes 47917771 (45.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 212560  bytes 20602013 (19.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.10.72.0  netmask 255.255.0.0  destination 10.10.72.0
        inet6 fe80::e3bc:6490:a376:384b  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 15  bytes 1260 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35  bytes 2832 (2.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 1267  bytes 75489 (73.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1267  bytes 75489 (73.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@docker-03 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.10.87.1  netmask 255.255.255.0  broadcast 10.10.87.255
        inet6 fe80::42:bbff:feff:e76e  prefixlen 64  scopeid 0x20<link>
        ether 02:42:bb:ff:e7:6e  txqueuelen 0  (Ethernet)
        RX packets 285  bytes 23324 (22.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 37  bytes 2874 (2.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.222  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::7b56:460b:af9e:dfcb  prefixlen 64  scopeid 0x20<link>
        inet6 2409:8a0c:1c:bf50:8d5:aee6:d0e7:743f  prefixlen 64  scopeid 0x0<global>
        ether 00:0c:29:82:72:3f  txqueuelen 1000  (Ethernet)
        RX packets 171823  bytes 41966958 (40.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 162113  bytes 15749871 (15.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.10.87.0  netmask 255.255.0.0  destination 10.10.87.0
        inet6 fe80::fa24:1a86:40c2:2aae  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 18  bytes 1512 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 130  bytes 10812 (10.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 673  bytes 38475 (37.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 673  bytes 38475 (37.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
4.8 If containers cannot reach each other: traffic between the flannel0 and docker0 interfaces passes through the iptables FORWARD chain, so make sure the following is set (on all hosts)
[root@docker-01 ~]# iptables -P FORWARD ACCEPT
[root@docker-01 ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
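The echo into /proc only lasts until the next reboot. To make IP forwarding persistent, a sysctl fragment can be dropped in; a sketch (the file is written to /tmp here for illustration; on a real host it belongs in /etc/sysctl.d/, and the apply/save commands are shown as comments):

```shell
# Persist IP forwarding across reboots via a sysctl drop-in.
frag=/tmp/99-ip-forward.conf    # real path: /etc/sysctl.d/99-ip-forward.conf
cat > "$frag" <<'EOF'
net.ipv4.ip_forward = 1
EOF
cat "$frag"
# Apply on a live host with:  sysctl -p /etc/sysctl.d/99-ip-forward.conf
# Persist the FORWARD policy with:  service iptables save   (or iptables-save)
```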
[root@docker-01 ~]# docker run -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:0A:11:02
          inet addr:10.10.17.2  Bcast:10.10.17.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ #
[root@docker-02 ~]# docker run -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:0A:48:02
          inet addr:10.10.72.2  Bcast:10.10.72.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ #
[root@docker-03 ~]# docker run -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:0A:57:02
          inet addr:10.10.87.2  Bcast:10.10.87.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ #
/ # ping 10.10.87.2
PING 10.10.87.2 (10.10.87.2): 56 data bytes
64 bytes from 10.10.87.2: seq=0 ttl=60 time=0.961 ms
64 bytes from 10.10.87.2: seq=1 ttl=60 time=0.561 ms
64 bytes from 10.10.87.2: seq=2 ttl=60 time=0.622 ms
64 bytes from 10.10.87.2: seq=3 ttl=60 time=1.514 ms
64 bytes from 10.10.87.2: seq=4 ttl=60 time=1.434 ms
64 bytes from 10.10.87.2: seq=5 ttl=60 time=1.523 ms
64 bytes from 10.10.87.2: seq=6 ttl=60 time=0.629 ms
64 bytes from 10.10.87.2: seq=7 ttl=60 time=1.293 ms
64 bytes from 10.10.87.2: seq=8 ttl=60 time=1.302 ms
64 bytes from 10.10.87.2: seq=9 ttl=60 time=1.311 ms
64 bytes from 10.10.87.2: seq=10 ttl=60 time=1.228 ms
64 bytes from 10.10.87.2: seq=11 ttl=60 time=1.458 ms
64 bytes from 10.10.87.2: seq=12 ttl=60 time=1.484 ms
64 bytes from 10.10.87.2: seq=13 ttl=60 time=0.649 ms
64 bytes from 10.10.87.2: seq=14 ttl=60 time=1.481 ms
64 bytes from 10.10.87.2: seq=15 ttl=60 time=1.471 ms
64 bytes from 10.10.87.2: seq=16 ttl=60 time=1.490 ms
64 bytes from 10.10.87.2: seq=17 ttl=60 time=1.364 ms
64 bytes from 10.10.87.2: seq=18 ttl=60 time=2.004 ms
^C
--- 10.10.87.2 ping statistics ---
19 packets transmitted, 19 packets received, 0% packet loss
round-trip min/avg/max = 0.561/1.251/2.004 ms
/ # ping 10.10.72.2
PING 10.10.72.2 (10.10.72.2): 56 data bytes
64 bytes from 10.10.72.2: seq=0 ttl=60 time=1.538 ms
64 bytes from 10.10.72.2: seq=1 ttl=60 time=2.641 ms
64 bytes from 10.10.72.2: seq=2 ttl=60 time=5.197 ms
^C
--- 10.10.72.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.538/3.125/5.197 ms
/ # ping 10.10.17.2
PING 10.10.17.2 (10.10.17.2): 56 data bytes
64 bytes from 10.10.17.2: seq=16 ttl=60 time=1.670 ms
64 bytes from 10.10.17.2: seq=17 ttl=60 time=1.463 ms
64 bytes from 10.10.17.2: seq=18 ttl=60 time=1.546 ms
64 bytes from 10.10.17.2: seq=19 ttl=60 time=0.882 ms
64 bytes from 10.10.17.2: seq=20 ttl=60 time=1.465 ms
64 bytes from 10.10.17.2: seq=21 ttl=60 time=1.390 ms
64 bytes from 10.10.17.2: seq=22 ttl=60 time=1.481 ms
64 bytes from 10.10.17.2: seq=23 ttl=60 time=0.601 ms
64 bytes from 10.10.17.2: seq=24 ttl=60 time=1.446 ms
64 bytes from 10.10.17.2: seq=25 ttl=60 time=0.919 ms
64 bytes from 10.10.17.2: seq=26 ttl=60 time=1.449 ms
64 bytes from 10.10.17.2: seq=27 ttl=60 time=1.478 ms
64 bytes from 10.10.17.2: seq=28 ttl=60 time=1.506 ms
64 bytes from 10.10.17.2: seq=29 ttl=60 time=0.526 ms
64 bytes from 10.10.17.2: seq=30 ttl=60 time=5.170 ms
64 bytes from 10.10.17.2: seq=31 ttl=60 time=0.956 ms
64 bytes from 10.10.17.2: seq=32 ttl=60 time=1.490 ms
^C
--- 10.10.17.2 ping statistics ---
33 packets transmitted, 17 packets received, 48% packet loss
round-trip min/avg/max = 0.526/1.496/5.170 ms
/ #
/ # ping 10.10.17.2
PING 10.10.17.2 (10.10.17.2): 56 data bytes
64 bytes from 10.10.17.2: seq=3 ttl=60 time=1.712 ms
64 bytes from 10.10.17.2: seq=4 ttl=60 time=1.395 ms
64 bytes from 10.10.17.2: seq=5 ttl=60 time=1.501 ms
64 bytes from 10.10.17.2: seq=6 ttl=60 time=1.987 ms
64 bytes from 10.10.17.2: seq=7 ttl=60 time=1.599 ms
64 bytes from 10.10.17.2: seq=8 ttl=60 time=1.600 ms
64 bytes from 10.10.17.2: seq=9 ttl=60 time=6.199 ms
64 bytes from 10.10.17.2: seq=10 ttl=60 time=0.656 ms
64 bytes from 10.10.17.2: seq=11 ttl=60 time=0.983 ms
64 bytes from 10.10.17.2: seq=12 ttl=60 time=2.045 ms
^C
--- 10.10.17.2 ping statistics ---
13 packets transmitted, 10 packets received, 23% packet loss
round-trip min/avg/max = 0.656/1.967/6.199 ms