Docker Part 4: Stacks
Prerequisites

- Make sure you have published the `friendlyhello` image you created by pushing it to a registry. We'll use that shared image here.
- Be sure your image works as a deployed container. Run this command, slotting in your info for `username`, `repo`, and `tag`: `docker run -p 80:80 username/repo:tag`, then visit http://localhost/.
- Have a copy of your `docker-compose.yml` from Part 3 handy.
- Make sure that the machines you set up in part 4 are running and ready. Run `docker-machine ls` to verify this. If the machines are stopped, run `docker-machine start myvm1` to boot the manager, followed by `docker-machine start myvm2` to boot the worker.
- Have the swarm you created in part 4 running and ready. Run `docker-machine ssh myvm1 "docker node ls"` to verify this. If the swarm is up, both nodes will report a ready status. If not, reinitialize the swarm and join the worker as described in Set up your swarm.
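For reference, a healthy two-node swarm reports something like the following (the node IDs below are placeholders, and the column layout varies slightly between Docker versions):

```
$ docker-machine ssh myvm1 "docker node ls"
ID                        HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
<manager-node-id> *       myvm1      Ready    Active         Leader
<worker-node-id>          myvm2      Ready    Active
```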
In the previous part, you learned how to set up a swarm, which is a cluster of machines running Docker, and deployed an application to it, with containers running in concert on multiple machines.
Here, you'll reach the top of the hierarchy of distributed applications: the stack.
A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.
A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).
The good news is that you have technically been working with stacks since part 3, when you created a Compose file and used `docker stack deploy`. But that was a single-service stack running on a single host, which is not usually what takes place in production. Here, you will take what you've learned, make multiple services relate to each other, and run them on multiple machines.
Add a new service and redeploy
It's easy to add services to our `docker-compose.yml` file. First, let's add a free visualizer service that lets us look at how our swarm is scheduling containers.

1. Open up `docker-compose.yml` in an editor and replace its contents with the following. Be sure to replace `username/repo:tag` with your image details.
```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
```
The only thing new here is the peer service to `web`, named `visualizer`. You'll see two new things here: a `volumes` key, giving the visualizer access to the host's socket file for Docker, and a `placement` key, ensuring that this service only ever runs on a swarm manager – never a worker. That's because this container, built from an open source project created by Docker, displays Docker services running on a swarm in a diagram.
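If you want to see what `node.role` evaluates to on your own machines, you can ask the swarm directly. This is just a sanity check against the VMs created in part 4 (`myvm1` as manager, `myvm2` as worker):

```bash
# Print the role the swarm has assigned to each node; the placement
# constraint above only matches nodes whose role is "manager".
docker-machine ssh myvm1 "docker node inspect --format '{{ .Spec.Role }}' myvm1"   # manager
docker-machine ssh myvm1 "docker node inspect --format '{{ .Spec.Role }}' myvm2"   # worker
```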
2. Make sure your shell is configured to talk to `myvm1` (full examples are here).

- Run `docker-machine ls` to list machines and make sure you are connected to `myvm1`, as indicated by an asterisk next to it.
- If needed, re-run `docker-machine env myvm1`, then run the given command to configure the shell (a quick sanity check follows below).

On Mac or Linux the command is:

```
eval $(docker-machine env myvm1)
```

On Windows the command is:

```
& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
```
3. Re-run the `docker stack deploy` command on the manager, and whatever services need updating will be updated:

```
$ docker stack deploy -c docker-compose.yml getstartedlab
Updating service getstartedlab_web (id: angi1bf5e4to03qu9f93trnxm)
Creating service getstartedlab_visualizer (id: l9mnwkeq2jiononb5ihz9u7a4)
```
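Before looking at the visualizer, you can also confirm from the command line that both services were created; expect `getstartedlab_web` to reach 5/5 replicas and `getstartedlab_visualizer` 1/1 once scheduling settles:

```bash
# Lists the services in the swarm (the shell is already talking to myvm1).
docker service ls
```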
4. Take a look at the visualizer.

You saw in the Compose file that `visualizer` runs on port 8080. Get the IP address of one of your nodes by running `docker-machine ls`. Go to either IP address at port 8080 and you will see the visualizer running.

The single copy of `visualizer` is running on the manager as you expect, and the 5 instances of `web` are spread out across the swarm. You can corroborate this visualization by running `docker stack ps <stack>`:

```
docker stack ps getstartedlab
```
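The `NODE` column of that output is the interesting part: the single `visualizer` task should show `myvm1`, while the five `web` tasks are spread across `myvm1` and `myvm2`. If your Docker CLI supports `--format` for `docker stack ps` (recent versions do), a more compact view is:

```bash
# One line per task: task name, the node it was scheduled on, and its state.
docker stack ps getstartedlab --format "table {{.Name}}\t{{.Node}}\t{{.CurrentState}}"
```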
The visualizer is a standalone service that can run in any app that includes it in the stack. It doesn’t depend on anything else.
Now let’s create a service that does have a dependency: the Redis service that will provide a visitor counter.
Persist the data
Let’s go through the same workflow once more to add a Redis database for storing app data.
1. Save this new `docker-compose.yml` file, which finally adds a Redis service. Be sure to replace `username/repo:tag` with your image details.
```yaml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:
```
Redis has an official image in the Docker library and has been granted the short image name of just `redis`, so no `username/repo` notation here.

The Redis port, 6379, is pre-configured by Redis to be exposed from the container to the host, and here in our Compose file we expose it from the host to the world. That means you can enter the IP of any of your nodes into Redis Desktop Manager and manage this Redis instance, if you so choose.
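As a quick sketch of what that world-facing port means: once the stack below is deployed, something like the following should answer `PONG` from your laptop. The IP is the sample node address used later in this post, so substitute one of your own node IPs from `docker-machine ls`.

```bash
# Uses the redis-cli bundled in the official redis image to ping the
# swarm-published port. 192.168.99.101 is this walkthrough's sample node IP.
docker run --rm redis redis-cli -h 192.168.99.101 -p 6379 ping
# PONG
```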
Most importantly, there are a couple of things in the `redis` specification that make data persist between deployments of this stack:

- `redis` always runs on the manager, so it's always using the same filesystem.
- `redis` accesses an arbitrary directory in the host's file system as `/data` inside the container, which is where Redis stores data.

Together, this creates a "source of truth" in your host's physical filesystem for the Redis data. Without this, Redis would store its data in `/data` inside the container's filesystem, which would get wiped out if that container were ever redeployed.
This source of truth has two components:

- The placement constraint you put on the Redis service, ensuring that it always uses the same host.
- The volume you created that lets the container access `./data` (on the host) as `/data` (inside the Redis container). While containers come and go, the files stored in `./data` on the specified host persist, enabling continuity (a quick way to see this on disk follows below).
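Once the stack below is deployed and the counter has been hit at least once, you can see that source of truth on disk (the exact file names depend on the Redis version; classic versions write `appendonly.aof`, newer ones an `appendonlydir/`):

```bash
# The append-only file Redis writes lives on the manager's filesystem,
# so it survives any redeployment of the redis container.
docker-machine ssh myvm1 "ls -l ./data"
```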
You are ready to deploy your new Redis-using stack.
2. Create a `./data` directory on the manager:

```
docker-machine ssh myvm1 "mkdir ./data"
```
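The directory is created relative to the `docker` user's home on the boot2docker VM, which is the `/home/docker/data` path the Compose file bind-mounts; a quick check:

```bash
docker-machine ssh myvm1 "cd ./data && pwd"
# /home/docker/data
```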
3. Make sure your shell is configured to talk to `myvm1` (full examples are here).

- Run `docker-machine ls` to list machines and make sure you are connected to `myvm1`, as indicated by an asterisk next to it.
- If needed, re-run `docker-machine env myvm1`, then run the given command to configure the shell.

On Mac or Linux the command is:

```
eval $(docker-machine env myvm1)
```

On Windows the command is:

```
& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
```
4. Run `docker stack deploy` one more time:

```
$ docker stack deploy -c docker-compose.yml getstartedlab
```
5. Run `docker service ls` to verify that the three services are running as expected:

```
$ docker service ls
ID                  NAME                       MODE         REPLICAS   IMAGE                              PORTS
x7uij6xb4foj        getstartedlab_redis        replicated   1/1        redis:latest                       *:6379->6379/tcp
n5rvhm52ykq7        getstartedlab_visualizer   replicated   1/1        dockersamples/visualizer:stable    *:8080->8080/tcp
mifd433bti1d        getstartedlab_web          replicated   5/5        orangesnap/getstarted:latest       *:80->80/tcp
```
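You can also confirm that the placement constraint did what the Compose file promised and pinned `redis` to the manager (the `--filter` flag narrows the task list to the redis service):

```bash
# The NODE column for the single redis task should read myvm1.
docker stack ps getstartedlab --filter "name=getstartedlab_redis"
```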
6. Check the web page at one of your nodes (e.g. http://192.168.99.101) and you'll see the results of the visitor counter, which is now live and storing information on Redis.

Also, check the visualizer at port 8080 on either node's IP address, and you'll see the `redis` service running along with the `web` and `visualizer` services.
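To see the counter in action without a browser, you can also curl the page; each request increments the count because every `web` replica now talks to the same `redis` service (this assumes the friendlyhello app from part 2, whose page includes a "Visits:" line, and the sample node IP used above):

```bash
curl -s http://192.168.99.101 | grep -i visits
```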
To recap, you learned that stacks are inter-related services all running in concert, and that – surprise! – you've been using stacks since part three of this tutorial. You learned that to add more services to your stack, you insert them in your Compose file. Finally, you learned that by using a combination of placement constraints and volumes you can create a permanent home for persisting data, so that your app's data survives when the container is torn down and redeployed.
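As a final sketch of that persistence claim: because the counter lives in `./data` on the manager rather than inside any container, you can tear the whole stack down and deploy it again, and the count picks up where it left off.

```bash
# Remove every service in the stack, then recreate them from the same file.
docker stack rm getstartedlab
docker stack deploy -c docker-compose.yml getstartedlab
# Revisit http://<node-ip>/ - the visitor counter continues from its previous value.
```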