Trying out MinIO server pool cluster expansion
MinIO used to recommend federation for scaling a cluster, but federation is now deprecated; the recommended approach is to expand through server pools, and the tooling around them is fairly complete and convenient.
Below is a simple test with two server pools, migrating the data from pool 1 into pool 2.
Environment setup
- docker-compose
The following compose file defines two server pools, minio1-4 and minio5-8:
version: '3.7'
services:
  sidekick:
    image: minio/sidekick:v1.2.0
    tty: true
    ports:
      - "80:80"
    command: --health-path=/minio/health/ready --address :80 http://minio{1...4}:9000
  gateway:
    image: minio/minio:RELEASE.2022-03-26T06-49-28Z
    command: gateway s3 http://sidekick --console-address ":19000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    ports:
      - "9000:9000"
      - "19000:19000"
  # server pool 1: minio1-4
  minio1:
    image: minio/minio
    volumes:
      - data1-1:/data1
      - data1-2:/data2
      - data1-3:/data3
      - data1-4:/data4
    ports:
      - "9001:9000"
      - "19001:19001"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio2:
    image: minio/minio
    volumes:
      - data2-1:/data1
      - data2-2:/data2
      - data2-3:/data3
      - data2-4:/data4
    ports:
      - "9002:9000"
      - "19002:19002"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19002"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio3:
    image: minio/minio
    volumes:
      - data3-1:/data1
      - data3-2:/data2
      - data3-3:/data3
      - data3-4:/data4
    ports:
      - "9003:9000"
      - "19003:19003"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19003"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio4:
    image: minio/minio
    volumes:
      - data4-1:/data1
      - data4-2:/data2
      - data4-3:/data3
      - data4-4:/data4
    ports:
      - "9004:9000"
      - "19004:19004"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19004"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  # server pool 2: minio5-8
  minio5:
    image: minio/minio
    volumes:
      - data12-1:/data1
      - data12-2:/data2
      - data12-3:/data3
      - data12-4:/data4
    ports:
      - "9005:9000"
      - "19005:19005"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19005"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio6:
    image: minio/minio
    volumes:
      - data22-1:/data1
      - data22-2:/data2
      - data22-3:/data3
      - data22-4:/data4
    ports:
      - "9006:9000"
      - "19006:19006"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19006"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio7:
    image: minio/minio
    volumes:
      - data32-1:/data1
      - data32-2:/data2
      - data32-3:/data3
      - data32-4:/data4
    ports:
      - "9007:9000"
      - "19007:19007"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19007"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  minio8:
    image: minio/minio
    volumes:
      - data42-1:/data1
      - data42-2:/data2
      - data42-3:/data3
      - data42-4:/data4
    ports:
      - "9008:9000"
      - "19008:19008"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...4} http://minio{5...8}/data{1...4} --console-address ":19008"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
volumes:
  data1-1:
  data1-2:
  data1-3:
  data1-4:
  data2-1:
  data2-2:
  data2-3:
  data2-4:
  data3-1:
  data3-2:
  data3-3:
  data3-4:
  data4-1:
  data4-2:
  data4-3:
  data4-4:
  data12-1:
  data12-2:
  data12-3:
  data12-4:
  data22-1:
  data22-2:
  data22-3:
  data22-4:
  data32-1:
  data32-2:
  data32-3:
  data32-4:
  data42-1:
  data42-2:
  data42-3:
  data42-4:
- Startup & info
After startup you can inspect the server pool layout, as shown below.
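A minimal sequence, assuming mc is installed on the host and using serverpool as the alias name (the same alias the decommission commands below use); the alias points at a cluster node directly (host port 9001 maps to minio1) rather than at the s3 gateway:
docker-compose up -d
mc alias set serverpool http://localhost:9001 minio minio123
mc admin info serverpool
mc admin info lists each server pool together with its endpoints and drive status.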
Decommissioning
- Start command (serverpool here is the mc alias set up above)
mc admin decommission start serverpool http://minio{1...4}/data{1...4}
- Status
mc admin decommission status serverpool
Note: at this point the cluster still accepts writes, and server pool 1's data is drained into server pool 2 in the background.
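The status command reports per-pool draining progress. An in-progress decommission can also be cancelled, which returns the pool to active status:
mc admin decommission cancel serverpool http://minio{1...4}/data{1...4}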
Removing the server pool
The key step is editing the storage pool configuration.
- Reference configuration
For the docker setup this means removing http://minio{1...4}/data{1...4} from each remaining node's command, e.g. for minio5 (each of minio5-8 keeps its own console port):
command: server http://minio{5...8}/data{1...4} --console-address ":19005"
- Restart the services
Restart the MinIO cluster (restart all nodes together; ansible or similar tooling can help); a compose-based sketch follows the next bullet.
- Update the LB configuration
We generally use nginx, so in production this mainly means editing the nginx upstream so that traffic only reaches the remaining nodes.
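In this compose-based test the LB role is played by sidekick rather than nginx, so a minimal sketch of both steps looks like this (service names come from the file above):
docker-compose stop minio1 minio2 minio3 minio4
docker-compose up -d --force-recreate minio5 minio6 minio7 minio8
and in the sidekick service, point the command at the remaining nodes only:
command: --health-path=/minio/health/ready --address :80 http://minio{5...8}:9000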
Effect on the data files
- server pool 1
Initially the data lives here.
- server pool 2
After the decommission, server pool 1's data has moved into server pool 2.
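One way to verify, assuming a hypothetical bucket named test that was created and filled before the decommission:
mc ls serverpool/test
docker-compose exec minio5 ls /data1
The first command shows the objects are still reachable through the cluster; the second shows the bucket directory now lives on pool 2's volumes.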
Notes
This is only a simple test; large environments may run into quite a few issues, and the official guidance amounts to experimenting and reading the logs. Still, server pools are a good option for cluster expansion and storage upgrades. A few things to watch:
- Mind the order of server pools when configuring the cluster: with the current official behavior, a different order generates a different deploy id, and mismatched IDs fail with "All ServerPool Must Have Same Deployment ID".
- A decommissioned server pool must be removed and its data wiped before it can be added back to the cluster.
- A newly created server pool must not be initialized on its own beforehand, or you will hit the same deploy id problem.
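For this compose setup, wiping a pool means removing its named volumes. A sketch for pool 1, assuming the default compose project name minio (docker prefixes volume names with the project name):
docker-compose stop minio1 minio2 minio3 minio4
docker-compose rm -f minio1 minio2 minio3 minio4
docker volume rm $(docker volume ls -q | grep -E 'minio_data[1-4]-')
Pool 1 uses the data1-* through data4-* volumes; data12-* and the other pool 2 volumes do not match the pattern and are left alone.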
References
https://github.com/minio/minio/blob/master/docs/federation/lookup/README.md
https://blog.min.io/server-pools-streamline-storage-operations/
https://min.io/docs/minio/linux/operations/install-deploy-manage/decommission-server-pool.html
https://min.io/docs/minio/windows/operations/concepts.html#id5
https://min.io/docs/minio/linux/reference/minio-server/minio-server.html#command-minio.server