Docker Swarm Networking
First, remove the three-replica service from the previous exercise:
[root@node1 ~]# docker service ls
ID             NAME   MODE         REPLICAS   IMAGE                               PORTS
q7nb1cpvvgqn   web1   replicated   3/3        192.168.172.128:5000/httpd:latest
[root@node1 ~]# docker service rm web1
web1
Then create a new service with two replicas:
[root@node1 ~]# docker service create --name webserver --replicas=2 httpd
tlfxl4w83kxpre4kq90hst5wm
overall progress: 2 out of 2 tasks
1/2: running
2/2: running
verify: Service converged
[root@node1 ~]# docker service ps webserver
ID             NAME          IMAGE          NODE    DESIRED STATE   CURRENT STATE            ERROR   PORTS
82myr68i7217   webserver.1   httpd:latest   node2   Running         Running 34 seconds ago
qnijdtjuufne   webserver.2   httpd:latest   node3   Running         Running 33 seconds ago
docker inspect shows the container's IP address. That address is reachable only from inside the node; it cannot be reached from an external address:
[root@node2 ~]# curl 172.17.0.2 <html><body><h1>It works!</h1></body></html>
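Instead of reading the full docker inspect dump, the IP can be extracted directly with a Go-template filter. A minimal sketch; the task-container name below is a placeholder, substitute the name shown by docker ps:

```shell
# Print each network the container is attached to, with its IP address.
# "webserver.1.xxxx" is a hypothetical task-container name.
docker inspect \
  -f '{{range $net, $conf := .NetworkSettings.Networks}}{{$net}}: {{$conf.IPAddress}}{{"\n"}}{{end}}' \
  webserver.1.xxxx
```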
Update the service to publish it to the outside:
[root@node1 ~]# docker service update --publish-add 8889:80 webserver
Accessing it again through an external address now works:
[root@node2 ~]# curl 192.168.172.131:8889 <html><body><h1>It works!</h1></body></html>
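Because the published port goes through the ingress routing mesh, port 8889 answers on every swarm node, even nodes that run no replica. A quick check, using the two node addresses that appear in this cluster:

```shell
# Either node address works -- the routing mesh forwards the request
# to one of the replicas, wherever it happens to be running.
curl http://192.168.172.128:8889
curl http://192.168.172.131:8889
```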
Now look at the networks again:
[root@node1 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
0cec315fc82b   bridge            bridge    local
6cfc4498f85f   docker_gwbridge   bridge    local
5d6420257a59   host              host      local
xpgy4pggvyjk   ingress           overlay   swarm
aef29cb54b5f   none              null      local
ingress is an overlay network that Swarm creates automatically; every swarm node can use it.
Run a container on node2 that joins the network namespace of the webserver.1 task:
[root@node2 ~]# docker run -it --network container:webserver.1.89yc81js827uymtki5vodxot1 192.168.172.128:5000/busybox
Inside the container there are two network interfaces:
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:00:00:06 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.6/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
15: eth1@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
eth1 corresponds to the host's docker_gwbridge:
3: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:46:83:2f:85 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
       valid_lft forever preferred_lft forever
    inet6 fe80::42:46ff:fe83:2f85/64 scope link
       valid_lft forever preferred_lft forever
Communication between services
Service discovery
One approach is to publish every service and reach it through the routing mesh, but this exposes the services to the outside world and creates a security risk.
If the services are not published, Swarm must provide a mechanism that lets one service reach another by a simple name: access must not break when a replica's IP changes, nor when the number of replicas changes. With service discovery, a consumer does not need to know where a service runs or what its IP is in order to talk to it.
Create an overlay network:
[root@node1 ~]# docker network create --driver overlay caoyi
1m4pbh5ocntq1a1m2qj21nyyl
[root@node1 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
0cec315fc82b   bridge            bridge    local
1m4pbh5ocntq   caoyi             overlay   swarm
6cfc4498f85f   docker_gwbridge   bridge    local
5d6420257a59   host              host      local
xpgy4pggvyjk   ingress           overlay   swarm
aef29cb54b5f   none              null      local
We cannot use ingress directly, because ingress does not provide service discovery; we must create our own overlay network.
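By default an overlay created this way can only be used by swarm services. If you also want to attach ad-hoc `docker run` containers to it for debugging, the network has to be created with `--attachable`; a sketch, with a hypothetical network name:

```shell
# Create a custom overlay that standalone containers may also join.
docker network create --driver overlay --attachable caoyi-dbg

# A one-off debugging container attached to the same overlay:
docker run --rm -it --network caoyi-dbg busybox sh
```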
Deploy a service on the overlay
[root@node1 ~]# docker service create --name caoyi --replicas=3 --network caoyi httpd   ## use the overlay network caoyi created above
sa5iunuj0n0ubor9a76wbfjk8
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged
Deploy a util service on the same overlay network, for testing:
[root@node1 ~]# docker service create --name test --network caoyi busybox sleep 10000000
usr8hp1su61wfind9j2qn910b
overall progress: 1 out of 1 tasks
1/1: running
verify: Service converged
sleep 10000000 keeps the service's container running.
Verify:
[root@node1 ~]# docker exec test.1.ndtocntqwf7u75u9bbnq3t5zi ping -c 3 caoyi
PING caoyi (10.0.1.2): 56 data bytes
64 bytes from 10.0.1.2: seq=0 ttl=64 time=0.082 ms
64 bytes from 10.0.1.2: seq=1 ttl=64 time=0.060 ms
64 bytes from 10.0.1.2: seq=2 ttl=64 time=0.057 ms

--- caoyi ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.066/0.082 ms
The name caoyi resolves to 10.0.1.2. This is the service's VIP; Swarm load-balances traffic sent to the VIP across every replica:
"VirtualIPs": [
    {
        "NetworkID": "1m4pbh5ocntq1a1m2qj21nyyl",
        "Addr": "10.0.1.2/24"
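Service discovery on a user-defined overlay is DNS-based: the service name resolves to the VIP, while the special name `tasks.<service>` resolves to the individual replica IPs. A sketch run against the test container; the task name is a placeholder for the real one shown by docker service ps:

```shell
# VIP view: one stable address, load-balanced by Swarm.
docker exec test.1.xxxx nslookup caoyi

# Per-task view: one A record per replica.
docker exec test.1.xxxx nslookup tasks.caoyi
```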
Rolling updates
[root@node1 ~]# docker pull httpd:2.2
2.2: Pulling from library/httpd
f49cf87b52c1: Pull complete
24b1e09cbcb7: Pull complete
8a4e0d64e915: Pull complete
bcbe0eb4ca51: Pull complete
16e370c15d38: Pull complete
Digest: sha256:9784d70c8ea466fabd52b0bc8cde84980324f9612380d22fbad2151df9a430eb
Status: Downloaded newer image for httpd:2.2
docker.io/library/httpd:2.2
[root@node1 ~]# docker pull httpd:2.4
2.4: Pulling from library/httpd
Digest: sha256:387f896f9b6867c7fa543f7d1a686b0ebe777ed13f6f11efc8b94bec743a1e51
Status: Downloaded newer image for httpd:2.4
docker.io/library/httpd:2.4
[root@node1 ~]# docker tag httpd:2.2 192.168.172.128:5000/httpd:2.2
[root@node1 ~]# docker tag httpd:2.4 192.168.172.128:5000/httpd:2.4
[root@node1 ~]# docker push 192.168.172.128:5000/httpd:2.2
The push refers to repository [192.168.172.128:5000/httpd]
ab5efd5aec77: Pushed
9058feb62b4a: Pushed
3f7f50ced288: Pushed
71436bd6f1c4: Pushed
4bcdffd70da2: Pushed
2.2: digest: sha256:558680adf8285edcfe4813282986eb7143e3c372610c6ba488723786bd5b34c5 size: 1366
[root@node1 ~]# docker push 192.168.172.128:5000/httpd:2.4
The push refers to repository [192.168.172.128:5000/httpd]
5d727ac94391: Layer already exists
4cfc2b1d3e90: Layer already exists
484fa8d4774f: Layer already exists
ca9ad7f0ab91: Layer already exists
13cb14c2acd3: Layer already exists
2.4: digest: sha256:ad116b4faf32a576572c1501e3c83ecae52ed3ba161de2f50a89d24b796bd3eb size: 1367
The commands above pull httpd 2.2 and httpd 2.4, tag the images, and push them to the private registry.
Now check the httpd version; first run httpd 2.2:
[root@node1 ~]# docker run -it 192.168.172.128:5000/httpd:2.2 apachectl -v
Server version: Apache/2.2.34 (Unix)
Server built:   Jan 18 2018 23:12:10
[root@node1 ~]# docker service update --image 192.168.172.128:5000/httpd caoyi   ## update, then check again
caoyi
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
With the update finished, check again:
[root@node1 ~]# docker exec -it caoyi.1.j0urb78ojfo1f3b8wve7bo4sn apachectl -v
Server version: Apache/2.4.43 (Unix)
Server built:   Jun  9 2020 07:00:39
Scale the service to 5 replicas, updating two tasks at a time with a 30-second delay between batches:
[root@node1 ~]# docker service update --replicas 5 --update-parallelism 2 --update-delay 30s myweb
View the service's current configuration:
[root@node1 ~]# docker service inspect --pretty myweb
ID:             sa5iunuj0n0ubor9a76wbfjk8
Name:           myweb
Service Mode:   Replicated
 Replicas:      5
Placement:
UpdateConfig:
 Parallelism:   2
 Delay:         30s
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         192.168.172.128:5000/httpd:latest@sha256:ad116b4faf32a576572c1501e3c83ecae52ed3ba161de2f50a89d24b796bd3eb
 Init:          false
Resources:
Networks: caoyi
Endpoint Mode:  vip
During the update you can see it pause for 30 seconds between batches, updating two tasks at a time.
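The same rolling-update policy can also be declared when a service is first created, rather than patched in afterwards. A sketch; the service name is hypothetical:

```shell
# Bake the update policy into the service definition at creation time:
docker service create --name myweb2 --replicas 5 \
    --update-parallelism 2 \
    --update-delay 30s \
    --update-failure-action pause \
    192.168.172.128:5000/httpd:2.2
```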
Now check whether the replicas were updated from 2.2 to 2.4:
[root@node1 ~]# docker service ps myweb
ID             NAME          IMAGE                            NODE    DESIRED STATE   CURRENT STATE                 ERROR   PORTS
zz2at22n9d1y   myweb.1       192.168.172.128:5000/httpd:2.4   node1   Running         Running 2 minutes ago
encntggj4u77    \_ myweb.1   192.168.172.128:5000/httpd:2.2   node2   Shutdown        Shutdown 2 minutes ago
rmmzup05knrr   myweb.2       192.168.172.128:5000/httpd:2.4   node2   Running         Running 2 minutes ago
fvxt7jwccg90    \_ myweb.2   192.168.172.128:5000/httpd:2.2   node1   Shutdown        Shutdown 2 minutes ago
jox66cx2stxy   myweb.3       192.168.172.128:5000/httpd:2.4   node2   Running         Running about a minute ago
skbbzxi6tnk6    \_ myweb.3   192.168.172.128:5000/httpd:2.2   node2   Shutdown        Shutdown about a minute ago
ruy30btf8m7r   myweb.4       192.168.172.128:5000/httpd:2.4   node1   Running         Running 2 minutes ago
rc9c8r043dw5    \_ myweb.4   192.168.172.128:5000/httpd:2.2   node3   Shutdown        Shutdown 2 minutes ago
vduuqlxqs71g   myweb.5       192.168.172.128:5000/httpd:2.4   node3   Running         Running 2 minutes ago
mivpa3fgwb43    \_ myweb.5   192.168.172.128:5000/httpd:2.2   node3   Shutdown        Shutdown 2 minutes ago
Rolling the image back
[root@node1 ~]# docker service update --rollback myweb
myweb
rollback: manually requested rollback
overall progress: rolling back update: 5 out of 5 tasks
1/5: running
2/5: running
3/5: running
4/5: running
5/5: running
verify: Service converged
[root@node1 ~]# docker service ps myweb
ID             NAME          IMAGE                            NODE    DESIRED STATE   CURRENT STATE                 ERROR   PORTS
6iody3mpjwo5   myweb.1       192.168.172.128:5000/httpd:2.2   node3   Running         Running about a minute ago
zz2at22n9d1y    \_ myweb.1   192.168.172.128:5000/httpd:2.4   node1   Shutdown        Shutdown about a minute ago
encntggj4u77    \_ myweb.1   192.168.172.128:5000/httpd:2.2   node2   Shutdown        Shutdown 9 minutes ago
qtnpapix26x7   myweb.2       192.168.172.128:5000/httpd:2.2   node2   Running         Running about a minute ago
rmmzup05knrr    \_ myweb.2   192.168.172.128:5000/httpd:2.4   node2   Shutdown        Shutdown about a minute ago
fvxt7jwccg90    \_ myweb.2   192.168.172.128:5000/httpd:2.2   node1   Shutdown        Shutdown 9 minutes ago
y06ujqzd1vdr   myweb.3       192.168.172.128:5000/httpd:2.2   node2   Running         Running about a minute ago
jox66cx2stxy    \_ myweb.3   192.168.172.128:5000/httpd:2.4   node2   Shutdown        Shutdown about a minute ago
skbbzxi6tnk6    \_ myweb.3   192.168.172.128:5000/httpd:2.2   node2   Shutdown        Shutdown 8 minutes ago
ydtuu63pq08d   myweb.4       192.168.172.128:5000/httpd:2.2   node1   Running         Running about a minute ago
ruy30btf8m7r    \_ myweb.4   192.168.172.128:5000/httpd:2.4   node1   Shutdown        Shutdown about a minute ago
rc9c8r043dw5    \_ myweb.4   192.168.172.128:5000/httpd:2.2   node3   Shutdown        Shutdown 9 minutes ago
2pdi87j6nt1l   myweb.5       192.168.172.128:5000/httpd:2.2   node3   Running         Running about a minute ago
vduuqlxqs71g    \_ myweb.5   192.168.172.128:5000/httpd:2.4   node3   Shutdown        Shutdown about a minute ago
mivpa3fgwb43    \_ myweb.5   192.168.172.128:5000/httpd:2.2   node3   Shutdown        Shutdown 9 minutes ago
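--rollback performs a manual rollback; Swarm can also roll back automatically when an update fails to converge, if the service's update policy asks for it. A sketch of the relevant flags:

```shell
# Automatically roll back when updated tasks keep failing:
docker service update \
    --update-failure-action rollback \
    --update-monitor 20s \
    --update-max-failure-ratio 0.2 \
    --image 192.168.172.128:5000/httpd:2.4 \
    myweb
```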
Replicated mode and global mode
In replicated mode, Swarm starts or stops containers across the nodes, according to available resources, to maintain the desired replica count.
In global mode, every node runs exactly one container for the service (services with different names are counted separately).
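The two modes side by side (replicated is the default, and `--mode` cannot be changed after the service is created); service names here are examples:

```shell
# Replicated: the scheduler places 3 tasks wherever resources allow.
docker service create --name web-repl --mode replicated --replicas 3 httpd

# Global: exactly one task on every eligible node; no --replicas flag.
docker service create --name web-glob --mode global httpd
```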
[root@node1 ~]# docker service create --name web1 --mode global 192.168.172.128:5000/httpd
zqfp5b77ux7jdxcjle81jqtqu
overall progress: 3 out of 3 tasks
wl9tv3om7ubb: running
hv3mxu6u7l7e: running
35rbxdx15fmi: running
verify: Service converged
View the service information:
[root@node1 ~]# docker service inspect --pretty web1
Name:           web1
Service Mode:   Global   ## the mode is Global
Using labels to control which node containers run on
Define a label for each node:
[root@node1 ~]# docker node update --label-add env=test node2
node2

Check it:

[root@node1 ~]# docker node inspect --pretty node2
ID:       hv3mxu6u7l7ei2h565c52yuf2
Labels:
 - env=test
Set a label for node3:
[root@node1 ~]# docker node update --label-add env=test2 node3
node3
Create a service constrained entirely to nodes labeled env=test:
[root@node1 ~]# docker service create --name web2 --replicas 5 --constraint node.labels.env==test httpd
9qtdv8by3r3zgimnjkrwqdcku
overall progress: 5 out of 5 tasks
1/5: running
2/5: running
3/5: running
4/5: running
5/5: running
verify: Service converged
[root@node1 ~]# docker service ps web2
ID             NAME     IMAGE          NODE    DESIRED STATE   CURRENT STATE            ERROR   PORTS
cetig5x4t4xx   web2.1   httpd:latest   node2   Running         Running 18 seconds ago
9duicmaqvzjr   web2.2   httpd:latest   node2   Running         Running 18 seconds ago
gfmer9k3etr3   web2.3   httpd:latest   node2   Running         Running 18 seconds ago
wtpuarblwryc   web2.4   httpd:latest   node2   Running         Running 18 seconds ago
lnq954vuav7r   web2.5   httpd:latest   node2   Running         Running 18 seconds ago
View:
[root@node1 ~]# docker service inspect --pretty web2
ID:             9qtdv8by3r3zgimnjkrwqdcku
Name:           web2
Service Mode:   Replicated
 Replicas:      5
Placement:
 Constraints:   [node.labels.env==test]
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         httpd:latest@sha256:387f896f9b6867c7fa543f7d1a686b0ebe777ed13f6f11efc8b94bec743a1e51
 Init:          false
Resources:
Endpoint Mode:  vip
Migrating a service:
First remove the env==test constraint from the service:
docker service update --constraint-rm node.labels.env==test web2
Then add a new one:
docker service update --constraint-add node.labels.env==test2 web1
A constraint can also be given directly at creation time, even for a global-mode service:

docker service create --name web3 --mode global --constraint node.labels.env==test 192.168.172.128:5000/httpd
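Besides user-defined labels, Swarm also exposes built-in node attributes that work in --constraint; a few examples, with hypothetical service names:

```shell
# Only on worker nodes:
docker service create --name w1 --constraint node.role==worker httpd

# Anywhere except node3:
docker service create --name w2 --constraint node.hostname!=node3 httpd

# Multiple constraints combine as a logical AND:
docker service create --name w3 \
    --constraint node.role==worker \
    --constraint node.labels.env==test httpd
```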