Swarm Cluster Deployment, Cluster Architecture, Cluster Management, and Service Management

Part 1: Deploying a Swarm Cluster

#Docker Swarm overview

Docker Swarm and Docker Compose are both official Docker orchestration projects, but they differ: Docker Compose is a tool for creating multiple containers on a single server or host, whereas Docker Swarm builds a container cluster service across multiple servers or hosts. For deploying microservices, Docker Swarm is clearly the better fit.

Since Docker 1.12.0, Swarm has been included in the Docker Engine (docker swarm) and ships with built-in service discovery, so there is no longer any need to configure Etcd or Consul for service discovery as before.
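For example, you can quickly check whether the local engine is already part of a swarm. This is a minimal sketch; the exact output depends on your Docker version:

# Show the swarm state of the local Docker engine ("active" or "inactive")
$ docker info --format '{{.Swarm.LocalNodeState}}'
inactive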

 

#Swarm concepts

Swarm is Docker's official container cluster platform, implemented in Go, with the code open-sourced at https://github.com/docker/swarm. The project started in 2014 and development ended in 2018; in February 2016 the architecture was redesigned and a v2 was released that supports more than 1,000 nodes. As a container cluster manager, one of Swarm's biggest advantages is 100% support for the standard Docker API and tooling (Compose, docker-py, and so on), so Docker itself integrates with Swarm very well.

What does Swarm actually do? Consider how a Docker cluster is operated without it: the user has to run commands against each container individually, as shown in the figure below:

With Swarm, working with multiple Docker hosts instead looks like the following figure:

 

#Swarm cluster

 

As the figure shows, Swarm has two roles, Manager and agent (also called worker). Briefly:

  • Manager: accepts service definitions from clients, dispatches tasks to agent nodes, maintains the desired state of the cluster, handles cluster management, and performs leader election. By default a manager node also runs tasks, but it can be configured to do management only (see the sketch after this list).
  • agent: receives and executes tasks assigned by the manager nodes, and reports the current state of each task so the manager can keep every service at its desired state.
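As referenced above, a sketch of the node-management commands involved (the node names are the ones used in this lab; run on a manager):

# Promote a worker to manager, or demote it back to worker
$ docker node promote node2
$ docker node demote node2

# Keep a manager from running service tasks (management only)
$ docker node update --availability drain node1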

The figure also shows that the requests a Manager receives fall into four categories:

  1. Operations on containers that already exist: Swarm simply forwards the request to the specific host.
  2. Operations on Docker images.
  3. Creating a new container (docker create): the cluster scheduling this involves is covered below.
  4. Other operations that query cluster-wide information, such as listing all containers or checking the Docker version.

 

2.2 Swarm scheduling policies

Swarm manages multiple Docker hosts. When a user creates a container on this cluster, which host actually ends up running it?

2.2.1 Filter

Swarm assigns containers to nodes according to its scheduling policy, but sometimes you want to influence that placement. For example: put I/O-sensitive containers on nodes with SSDs, compute-heavy containers on machines with more CPU cores, network-sensitive containers in a high-bandwidth data center, and so on.

This is what filters are for: they help users narrow the candidate hosts down to those that meet their conditions. Five filters are currently supported:

  • Constraint
  • Affinity
  • Port
  • Dependency
  • Health

This article covers the first two.

1. Constraint filter
The Constraint filter works with key-value pairs bound to nodes, effectively node labels. It can be specified when starting a container: when launching containers through Swarm, passing -e constraint:key=value filters out the nodes that match the condition.

As an example, we start two busybox containers, one tagged red and one tagged green via the filter:

#docker service create --replicas 1 -e constraint:color=red busybox ping 127.0.0.1
#docker service create --replicas 1 -e constraint:color=green busybox ping 127.0.0.1
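Note that the -e constraint:key=value form comes from the classic standalone Swarm. In the Swarm mode built into the engine, the closest equivalent (a sketch, assuming you add the labels to the nodes yourself) uses node labels plus --constraint:

# Label the nodes (run on a manager)
$ docker node update --label-add color=red node2
$ docker node update --label-add color=green node3

# Schedule each service only onto nodes with a matching label
$ docker service create --replicas 1 --constraint 'node.labels.color==red' busybox ping 127.0.0.1
$ docker service create --replicas 1 --constraint 'node.labels.color==green' busybox ping 127.0.0.1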

If the commands above are not entirely clear yet, come back to them after finishing this article and they will make sense.

2. Affinity filter
The Affinity filter lets you start a container and have it placed on the node where some existing container is already running.
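In the classic standalone Swarm this was expressed with an environment variable, roughly like the sketch below (the container names web and logger are hypothetical):

# Start a container named web, then co-locate a second container on the same host
$ docker run -d --name web nginx
$ docker run -d --name logger -e affinity:container==web busybox ping 127.0.0.1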

2.2.2 Strategy

After filtering, Swarm uses a strategy to pick the host that will finally run the container (see the sketch after this list). The following strategies are currently provided:

  1. random: pick one of the candidate hosts (agents) at random;
  2. binpack: weigh host CPU and memory usage and pack containers as tightly as possible onto the candidate hosts;
  3. spread: try to distribute containers evenly across the nodes.
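In the classic standalone Swarm the strategy was chosen when starting the manager; a rough sketch, where the discovery backend address is a placeholder (Swarm mode built into the engine uses spread-style placement and does not expose this flag):

# Start the classic Swarm manager with the binpack strategy (spread is the default)
$ swarm manage --strategy binpack -H tcp://0.0.0.0:2375 consul://<consul-ip>:8500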

At the same time, the scheduler supports a trust mechanism for nodes: if a node has always been healthy with no failed connections, then under otherwise equal conditions it is preferred. Scheduling of host resources is still under active development, but the filter-plus-strategy combination, while still operating at Docker-host granularity, already covers most needs.

 


#Docker Swarm architecture, as shown in the figure:

One Manager has multiple Workers under it (in this lab environment each of them actually runs as a container).

 

The next figure shows the Service and Replicas model: the service is nginx, and beneath it three nginx replicas form the cluster.
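For instance, the model in that figure corresponds to a command like the following sketch (the service name web is arbitrary):

# One service, three identical nginx tasks (replicas) spread across the nodes
$ docker service create --name web --replicas 3 -p 80:80 nginx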

Swarm cluster deployment

#Test environment:
https://labs.play-with-docker.com
Log in with a Docker account

#Network plan
192.168.0.61 node1 # manager node
192.168.0.62 node2 # worker node
192.168.0.63 node3 # worker node

#Initialize the swarm cluster (note: run on node1)
sudo docker swarm init --advertise-addr 192.168.0.38
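The init command prints a ready-made join command for workers; if you need it again later, it can be reprinted on the manager:

# Print the worker join command, including the current token (run on the manager)
$ sudo docker swarm join-token worker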

#Join the worker nodes to the swarm (note: run on node2 and node3)
sudo docker swarm join --token SWMTKN-1-3flgg8jgq9tmo3l3kazvit8fec0e91hea8dedvc691liswsqv8-3ipdbwzle03ogd5l5jxue0eo2 192.168.0.61:2377

#Check node status (note: run on node1)
$ sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
hhkyr9oj2mkv1yplzd229zdwf node1 Ready Active 19.03.0-beta2
ndng4snylrdwhgr6dicww3fho node2 Ready Active 19.03.0-beta2
8vb0iln353y4am6sjul459tep * node3 Ready Active Leader 19.03.0-beta2

#View help
$ sudo docker --help (note: run on node1)

Usage: docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Options:
--config string Location of client config files (default "/root/.docker")
-c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
-D, --debug Enable debug mode
-H, --host list Daemon socket(s) to connect to
-l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default "/root/.docker/ca.pem")
(output truncated)

#Check the Docker version
$ docker version (note: run on node1)
Client: Docker Engine - Community
Version: 19.03.0-beta2
API version: 1.40
Go version: go1.12.4
Git commit: c601560
Built: Fri Apr 19 00:57:20 2019
OS/Arch: linux/amd64
Experimental: false
(output truncated)

#List all current nodes (note: run on node1)
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
hhkyr9oj2mkv1yplzd229zdwf node1 Ready Active 19.03.0-beta2
ndng4snylrdwhgr6dicww3fho node2 Ready Active 19.03.0-beta2
8vb0iln353y4am6sjul459tep * node3 Ready Active Leader 19.03.0-beta2


#Deploy a service (note: run on node1)
docker service create --name demo busybox sh -c "while true;do sleep 3600;done;"


#List services (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
nfuka28cblpf demo replicated 1/1 busybox:latest

#Inspect the service's tasks (note: run on node1)
$ docker service ps demo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
v2rjd6gru6ns demo.1 busybox:latest node2 Running Running 38 seconds ago

#Scale the service, i.e., increase the replica count (note: run on node1)
$ docker service scale demo=5
demo scaled to 5

#Check the replica count (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
nfuka28cblpf demo replicated 5/5 busybox:latest

#Check where the demo replicas are running (note: run on node1)
$ docker service ps demo
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
v2rjd6gru6ns demo.1 busybox:latest node2 Running Running 8 minutes ago
nulr3x9qjt1m demo.2 busybox:latest node1 Running Running 4 minutes ago
yuxfv0drzmzw demo.3 busybox:latest node1 Running Running 4 minutes ago
8n2v2x3gpw8f demo.4 busybox:latest node2 Running Running 4 minutes ago
x997dqaiykj2 demo.5 busybox:latest node3 Running Running 4 minutes ago


#Create an overlay network so that containers can reach each other across nodes
$ docker network create -d overlay demo (note: run on node1)
lfhkp40mqlcy1r1fm164oonzo

$ docker network ls (note: run on node1)
NETWORK ID NAME DRIVER SCOPE
cd64b2256d12 bridge bridge local
lfhkp40mqlcy demo overlay swarm
183d22dd1659 docker_gwbridge bridge local
75f2650e955b host host local
2s10tftmg4f4 ingress overlay swarm
be44ba126b60 none null local

#Create the mysql service (note: run on node1)
docker service create --name mysql --env MYSQL_ROOT_PASSWORD=root \
--env MYSQL_DATABASE=wordpress --network demo --mount type=volume,source=mysql-data,destination=/var/lib/mysql mysql:5.7
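The --mount flag creates a named volume called mysql-data on whichever node ends up running the mysql task; a quick way to verify it on that node (a sketch):

# On the node running mysql.1, inspect the named volume backing /var/lib/mysql
$ docker volume ls
$ docker volume inspect mysql-data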

$ docker service ls (note: run on node1)
ID NAME MODE REPLICAS IMAGE PORTS
nfuka28cblpf demo replicated 5/5 busybox:latest
md67phemw4up mysql replicated 1/1 mysql:5.7

$ docker service ps mysql (note: run on node1)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
a6zrrg4ij15j mysql.1 mysql:5.7 node3 Running Running about a minute ago

#Create the wordpress service (note: run on node1)
docker service create --name wordpress -p 80:80 --env WORDPRESS_DB_PASSWORD=root \
--env WORDPRESS_DB_HOST=mysql --network demo wordpress


$ docker service ls (note: run on node1)
ID NAME MODE REPLICAS IMAGE PORTS
ye3iojuxvdwf mysql replicated 1/1 mysql:5.7
cku16jr5nfcg wordpress replicated 1/1 wordpress:latest *:80->80/tcp


$ docker service ps wordpress (note: run on node1)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
n16an94tcb0p wordpress.1 wordpress:latest node2 Running Running about a minute ago


$ docker ps (note: run on node1)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f394dee1e89 mysql:5.7 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 3306/tcp, 33060/tcp mysql.1.volm1hp3n2dkljqtb8h59e8sf


#Open the site (note: access from outside the cluster)
http://ip

 

Part 2: Inter-Service Communication in the Cluster: Routing Mesh

 

Service-to-service traffic is load-balanced by LVS (IPVS) behind a virtual IP (VIP).
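The VIP assigned to a service on each attached network can be checked with docker service inspect; a sketch using the mysql service created above:

# Show the virtual IPs (one per attached network) assigned to the mysql service
$ docker service inspect --format '{{json .Endpoint.VirtualIPs}}' mysql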

2.1 Check on node2: the overlay network only appears on a node once a container has been scheduled there.
$ docker network ls (note: run on node2)
NETWORK ID NAME DRIVER SCOPE
1d3a7390993e bridge bridge local
oiy95kyiwybc demo overlay swarm
400751c7cbb6 docker_gwbridge bridge local
ed8a7fe3333e host host local
eftpkhpfwomq ingress overlay swarm
245e8b6ebb9f none null local

2.2 Create the whoami service (note: run on node1)
docker service create --name whoami -p 8000:8000 --network demo -d jwilder/whoami

#List services (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ye3iojuxvdwf mysql replicated 1/1 mysql:5.7
ulsy2lxm6cng whoami replicated 1/1 jwilder/whoami:latest *:8000->8000/tcp
cku16jr5nfcg wordpress replicated 1/1 wordpress:latest *:80->80/tcp

2.3 Check which node it is running on
$ docker service ps whoami (note: run on node1)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running 59 seconds ago

2.4 docker ps (note: run on node3)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
184989e17f76 jwilder/whoami:latest "/app/http" About a minute ago Up About a minute 8000/tcp whoami.1.uva1e307dwagnanuzodbqzfcf

2.5 Access the whoami service via its published port (note: run on node1)
$ curl 127.0.0.1:8000
I'm 184989e17f76

2.6 Create the client service (busybox)
$docker service create --name client -d --network demo busybox sh -c "while true; do sleep 3600; done"
we6jp0m72ss5m0vw9g36t2mmy

2.7 List services (note: run on node1)
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
we6jp0m72ss5 client replicated 1/1 busybox:latest
ye3iojuxvdwf mysql replicated 1/1 mysql:5.7
ulsy2lxm6cng whoami replicated 1/1 jwilder/whoami:latest *:8000->8000/tcp
cku16jr5nfcg wordpress replicated 1/1 wordpress:latest *:80->80/tcp

2.8 Check which node the client service runs on (note: run on node1)
$ docker service ps client
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
y002hyyst1r9 client.1 busybox:latest node3 Running Running 38 seconds ago

2.9 List the running containers (note: run on node3)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe5034a2d77 busybox:latest "sh -c 'while true; …" About a minute ago Up About a minute client.1.y002hyyst1r9pqyf6ad14oruo

184989e17f76 jwilder/whoami:latest "/app/http" 23 minutes ago Up 23 minutes 8000/tcp whoami.1.uva1e307dwagnanuzodbqzfcf

2.10 Exec into the client container and ping the whoami service (note: run on node3)
$ docker exec -it dbe5 sh
/ # ping whoami
PING whoami (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: seq=0 ttl=64 time=0.136 ms
64 bytes from 10.0.0.8: seq=1 ttl=64 time=0.111 ms
64 bytes from 10.0.0.8: seq=2 ttl=64 time=0.082 ms
64 bytes from 10.0.0.8: seq=3 ttl=64 time=0.065 ms
64 bytes from 10.0.0.8: seq=4 ttl=64 time=0.065 ms
64 bytes from 10.0.0.8: seq=5 ttl=64 time=0.163 ms

2.11 Scale up the replicas (note: run on node1)
$ docker service scale whoami=2
whoami scaled to 2

#Check which nodes the tasks run on (note: run on node1)
$ docker service ps whoami
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running 28 minutes ago
n4t7brq388um whoami.2 jwilder/whoami:latest node1 Running Running 16 seconds ago


$ docker ps (note: run on node3)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dbe5034a2d77 busybox:latest "sh -c 'while true; …" 8 minutes ago Up 8 minutes client.1.y002hyyst1r9pqyf6ad14oruo
184989e17f76 jwilder/whoami:latest "/app/http" 30 minutes ago Up 30 minutes 8000/tcp whoami.1.uva1e307dwagnanuzodbqzfcf

#Exec into the whoami container (note: run on node3)
$ docker exec -it 1849 sh
/app # ping whoami
PING whoami (10.0.0.8): 56 data bytes
64 bytes from 10.0.0.8: seq=0 ttl=64 time=0.208 ms
64 bytes from 10.0.0.8: seq=1 ttl=64 time=0.118 ms
64 bytes from 10.0.0.8: seq=2 ttl=64 time=0.074 ms
64 bytes from 10.0.0.8: seq=3 ttl=64 time=0.079 ms
/app # exit

#List the running containers
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fa28760a510 jwilder/whoami:latest "/app/http" 9 minutes ago Up 9 minutes 8000/tcp whoami.2.n4t7brq388umug2lagrz4uore
9f394dee1e89 mysql:5.7 "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp mysql.1.volm1hp3n2dkljqtb8h59e8sf

#Exec into the container; tasks.whoami now resolves to the IPs of both whoami containers
$ docker exec -it 4fa2 sh
/app # nslookup tasks.whoami
nslookup: can't resolve '(null)': Name does not resolve

Name: tasks.whoami
Address 1: 10.0.0.15 4fa28760a510
Address 2: 10.0.0.9 whoami.1.uva1e307dwagnanuzodbqzfcf.demo


#Scale up the replicas again (note: run on node1)
$ docker service scale whoami=3
whoami scaled to 3

#Check which nodes the tasks run on (note: run on node1)
$ docker service ps whoami
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running 40 minutes ago
n4t7brq388um whoami.2 jwilder/whoami:latest node1 Running Running 12 minutes ago
rl9b1uji65cg whoami.3 jwilder/whoami:latest node2 Running Running 8 seconds ago

#From the output above, the new task runs on node2; list the running containers there (note: run on node2)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
27106f86d1e6 jwilder/whoami:latest "/app/http" About a minute ago Up About a minute 8000/tcp whoami.3.rl9b1uji65cg765ruk0t2tzee
0aad26cdc500 wordpress:latest "docker-entrypoint.s…" About an hour ago Up About an hour 80/tcp wordpress.1.n16an94tcb0pz4xqoyf33asxw

#Exec into the container; tasks.whoami now resolves to three IPs (note: run on node2)
$ docker exec -it 2710 sh
/app # nslookup tasks.whoami
nslookup: can't resolve '(null)': Name does not resolve
Name: tasks.whoami
Address 1: 10.0.0.16 27106f86d1e6
Address 2: 10.0.0.15 whoami.2.n4t7brq388umug2lagrz4uore.demo
Address 3: 10.0.0.9 whoami.1.uva1e307dwagnanuzodbqzfcf.demo
/app #

#Fetch the page and inspect its content (note: run on node2)
/app # wget whoami:8000
Connecting to whoami:8000 (10.0.0.8:8000)
index.html 100% |*****************************************************| 17 0:00:00 ETA
/app # ls
http index.html

/app # more index.html
I'm 27106f86d1e6

 

Part 3: The Two Forms of Routing Mesh

Internal: container-to-container access goes over the overlay network (through a virtual IP, VIP).

Ingress: if a service publishes a port, it can be reached on that port on any swarm node.
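In other words, once whoami publishes port 8000, every node answers on it, even a node that runs no whoami task. A sketch using the node IPs from the network plan above:

# Any swarm node answers on the published port, regardless of task placement
$ curl 192.168.0.61:8000
$ curl 192.168.0.62:8000
$ curl 192.168.0.63:8000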

 

#Ingress Network
1. Load balancing for external access
2. The service port is exposed on every swarm node
3. Internally, load balancing is done by IPVS

#Scale to 2 replicas
$ docker service scale whoami=2
whoami scaled to 2
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged
$ docker service ps whoami
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
uva1e307dwag whoami.1 jwilder/whoami:latest node3 Running Running about an hour ago
n4t7brq388um whoami.2 jwilder/whoami:latest node1 Running Running 45 minutes ago

#Access the published port; requests are load-balanced across the replicas
$ curl 127.0.0.1:8000
I'm 4fa28760a510

$ curl 127.0.0.1:8000
I'm 184989e17f76

$ curl 127.0.0.1:8000
I'm 4fa28760a510

$ curl 127.0.0.1:8000
I'm 184989e17f76

$ curl 127.0.0.1:8000
I'm 4fa28760a510

#The traffic is forwarded via iptables
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-INGRESS all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DROP all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination

Chain DOCKER-INGRESS (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:8000
ACCEPT tcp -- anywhere anywhere state RELATED,ESTABLISHED tcp spt:8000
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere state RELATED,ESTABLISHED tcp spt:http
RETURN all -- anywhere anywhere

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere

Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere

#Inspect the Docker bridges
$ brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242df4ead86 no veth4b1f330
docker_gwbridge 8000.02426a3e3176 no veth45d7a41

#List the networks; note the docker_gwbridge network
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
9cab66600884 bridge bridge local
2wec717ukiqj demo overlay swarm
4a441f5afc2a docker_gwbridge bridge local
9c195868e91f host host local
xso8vdd1rk6s ingress overlay swarm
79cd1d0fe915 none null local

#Inspect docker_gwbridge in detail
$ docker network inspect docker_gwbridge
[
{
"Name": "docker_gwbridge",
"Id": "4a441f5afc2a952577c52160e3ed863c78bdd30347dd25baa39c96d69f3dad96",
"Created": "2019-05-30T07:16:08.785013094Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"efa29bcf1389080372c7722aafe081526f6185b98d29dd7e4e5f0a16ab6e5b6e": {
"Name": "gateway_1cbaf00e6a8e",
"EndpointID": "b074b8b6c8193b2e95a79818fda30a1abfc91683dd08ceb6dbb11a60d496a07e",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "gateway_ingress-sbox",
"EndpointID": "21a7b9864747a7a1269f1648763852b6aa717ab3179c8264813406962c606cec",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.enable_icc": "false",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.name": "docker_gwbridge"
},
"Labels": {}
}
]

#List the network namespace files
$ ls /var/run/docker/netns
1-2wec717uki 1-xso8vdd1rk 1cbaf00e6a8e ingress_sbox lb_2wec717uk

#Enter the ingress_sbox network namespace
$ nsenter --net=/var/run/docker/netns/ingress_sbox sh

#Check the IP addresses inside the namespace
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.2/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.5/32 brd 10.255.0.5 scope global eth0
valid_lft forever preferred_lft forever
8: eth1@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.2/16 brd 172.19.255.255 scope global eth1
valid_lft forever preferred_lft forever

 

$ iptables -nL -t mangle (note: this is where the load-balancing marks are applied; run on node1, still inside the ingress_sbox namespace)
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
MARK tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 MARK set 0x101

Chain INPUT (policy ACCEPT)
target prot opt source destination
MARK all -- 0.0.0.0/0 10.255.0.5 MARK set 0x101

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination

$ exit

#Install the LVS management tool: yum install ipvsadm

#Re-enter the namespace as root and run ipvsadm -l
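Putting those two steps together, a sketch of what this looks like (ipvsadm must be installed on the host first):

# Run ipvsadm inside the ingress_sbox network namespace to see the fwmark-based virtual service
$ yum install -y ipvsadm
$ nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -ln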

 

#Detailed packet flow through the Ingress Network

 

 

References:

https://juejin.im/post/5b80363e5188254307741bf1#heading-4

https://www.jianshu.com/p/18ad7b838b0d   # article 41

https://blog.csdn.net/weixin_33672400/article/details/86917813     # article 46
