TiDB DM Data Migration, Part 3 (Cluster Management)
1. Scale in the current DM cluster and take the worker in Free status offline.
tiup dm display dm-test
Check which worker is in Free status
tiup dm scale-in dm-test -N 172.16.1.13:8262
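Scale-in removes the worker from the topology; after it finishes, another display of the cluster should no longer list that node. Note that the worker being removed should be in Free status; a Bound worker still holds a data source and should have that source stopped first (see the last section).
tiup dm display dm-test   # the scaled-in worker should no longer appear in the node list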
2. Scale out the DM cluster
I am running multiple instances on one machine, so the new worker uses a different port.
# Scale-out configuration file: dm-scale.yaml
worker_servers:
  - host: 172.16.1.13
    ssh_port: 22
    port: 8263
    deploy_dir: "/dm-deploy/dm-worker-8263"
    log_dir: "/dm-deploy/dm-worker-8263/log"
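The file above only sets the deploy and log directories. Fields that are left out, such as data_dir, appear to be filled in from the defaults recorded for the cluster, which is why the confirmation output below still shows /dm-data/dm-worker-8263. If you prefer to be explicit, the worker entry also accepts a data_dir field; a sketch (paths are illustrative):
worker_servers:
  - host: 172.16.1.13
    ssh_port: 22
    port: 8263
    deploy_dir: "/dm-deploy/dm-worker-8263"
    data_dir: "/dm-data/dm-worker-8263"
    log_dir: "/dm-deploy/dm-worker-8263/log"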
[root@root dm]# tiup dm scale-out dm-test dm-scale.yaml -uroot -p
tiup is checking updates for component dm ...
Starting component `dm`: /root/.tiup/components/dm/v1.11.1/tiup-dm scale-out dm-test dm-scale.yaml -uroot -p
Input SSH password:
+ Detect CPU Arch Name
- Detecting node 172.16.1.13 Arch info ... Done
+ Detect CPU OS Name
- Detecting node 172.16.1.13 OS info ... Done
Please confirm your topology:
Cluster type: dm
Cluster name: dm-test
Cluster version: v6.4.0
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
dm-worker 172.16.1.13 8263 linux/x86_64 /dm-deploy/dm-worker-8263,/dm-data/dm-worker-8263
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/dm/clusters/dm-test/ssh/id_rsa, publicKey=/root/.tiup/storage/dm/clusters/dm-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=root, host=172.16.1.13
+ [Parallel] - UserSSH: user=root, host=172.16.1.13
+ [Parallel] - UserSSH: user=root, host=172.16.1.13
+ [Parallel] - UserSSH: user=root, host=172.16.1.13
+ [Parallel] - UserSSH: user=root, host=172.16.1.13
+ Download TiDB components
- Download dm-worker:v6.4.0 (linux/amd64) ... Done
+ Initialize target host environments
+ Deploy TiDB instance
- Deploy instance dm-worker -> 172.16.1.13:8263 ... Done
+ Copy certificate to remote host
+ Generate scale-out config
- Generate scale-out config dm-worker -> 172.16.1.13:8263 ... Done
+ Init monitor config
Enabling component dm-worker
Enabling instance 172.16.1.13:8263
Enable instance 172.16.1.13:8263 success
+ [ Serial ] - Save meta
+ [ Serial ] - Start new instances
Starting component dm-worker
Starting instance 172.16.1.13:8263
Start instance 172.16.1.13:8263 success
+ Refresh components conifgs
- Generate config dm-master -> 172.16.1.13:8261 ... Done
- Generate config dm-worker -> 172.16.1.13:8262 ... Done
- Generate config dm-worker -> 172.16.1.13:8263 ... Done
- Generate config prometheus -> 172.16.1.13:9090 ... Done
- Generate config grafana -> 172.16.1.13:3000 ... Done
- Generate config alertmanager -> 172.16.1.13:9093 ... Done
+ Reload prometheus and grafana
- Reload prometheus -> 172.16.1.13:9090 ... Done
- Reload grafana -> 172.16.1.13:3000 ... Done
Scaled cluster `dm-test` out successfully
3. Check the cluster status again
[root@root dm]# tiup dm display dm-test
tiup is checking updates for component dm ...
Starting component `dm`: /root/.tiup/components/dm/v1.11.1/tiup-dm display dm-test
Cluster type: dm
Cluster name: dm-test
Cluster version: v6.4.0
Deploy user: root
SSH type: builtin
Grafana URL: http://172.16.1.13:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.1.13:9093 alertmanager 172.16.1.13 9093/9094 linux/x86_64 Up /dm-data/alertmanager-9093 /dm-deploy/alertmanager-9093
172.16.1.13:8261 dm-master 172.16.1.13 8261/8291 linux/x86_64 Healthy|L /dm-data/dm-master-8261 /dm-deploy/dm-master-8261
172.16.1.13:8262 dm-worker 172.16.1.13 8262 linux/x86_64 Bound /dm-data/dm-worker-8262 /dm-deploy/dm-worker-8262
172.16.1.13:8263 dm-worker 172.16.1.13 8263 linux/x86_64 Free /dm-data/dm-worker-8263 /dm-deploy/dm-worker-8263
172.16.1.13:3000 grafana 172.16.1.13 3000 linux/x86_64 Up - /dm-deploy/grafana-3000
172.16.1.13:9090 prometheus 172.16.1.13 9090 linux/x86_64 Up /dm-data/prometheus-9090 /dm-deploy/prometheus-9090
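The Bound/Free states shown by tiup dm display can also be checked from the dmctl side: list-member --worker reports each worker and the stage it is in, which should match the table above. A sketch, using the same master address as the rest of this post:
tiup dmctl --master-addr 172.16.1.13:8261 list-member --worker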
4. Stop the replication task
[root@root dm]# tiup dmctl --master-addr 172.16.1.13:8261 stop-task dm_task.yml
tiup is checking updates for component dmctl ...
Starting component `dmctl`: /root/.tiup/components/dmctl/v6.4.0/dmctl/dmctl --master-addr 172.16.1.13:8261 stop-task dm_task.yml
{
"op": "Delete",
"result": true,
"msg": "",
"sources": [
{
"result": true,
"msg": "",
"source": "mysql-01",
"worker": "dm-172.16.1.13-8262"
}
]
}
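To confirm the task is really gone, query-status without a task name lists the tasks the cluster still knows about; after the stop above it should come back empty. A sketch, same master address:
tiup dmctl --master-addr 172.16.1.13:8261 query-status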
5. Stop the DM cluster
[root@root dm]# tiup dm stop dm-test
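Stopping only shuts the DM processes down; the deployment, data, and configuration stay on disk, so the cluster can be brought back later with start:
tiup dm start dm-test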
6. Destroy the DM cluster
tiup dm destroy dm-test
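Unlike stop, destroy removes the services together with their data and log directories and cannot be undone. Afterwards the cluster should no longer show up in the list of clusters managed by tiup dm:
tiup dm list   # dm-test should be gone from the output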
7. Other notes
Even after the replication task is stopped, the worker node is still not in Free status, because a data source was created earlier and is still bound to that worker. The data source has to be stopped as well.
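The stop command below takes the same source configuration file that was used when the source was created with operate-source create. That file is not shown in this post; a minimal sketch of the usual format, with placeholder values only, looks like this:
# source-mysql-01.yaml (illustrative values, not the actual file)
source-id: "mysql-01"
from:
  host: "172.16.1.100"   # upstream MySQL address, placeholder
  port: 3306
  user: "root"
  password: "******"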
[root@root dm]# tiup dmctl --master-addr 172.16.1.13:8261 operate-source stop source-mysql-01.yaml
tiup is checking updates for component dmctl ...
Starting component `dmctl`: /root/.tiup/components/dmctl/v6.4.0/dmctl/dmctl --master-addr 172.16.1.13:8261 operate-source stop source-mysql-01.yaml
{
"result": true,
"msg": "",
"sources": [
{
"result": true,
"msg": "",
"source": "mysql-01",
"worker": "dm-172.16.1.13-8262"
}
]
}
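The same can be verified from dmctl: operate-source show lists the data sources that are still registered and the workers they are bound to, and should return an empty list once the source above has been stopped. A sketch, same master address:
tiup dmctl --master-addr 172.16.1.13:8261 operate-source show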
Check the cluster information again; both worker nodes are now in Free status.
[root@root dm]# tiup dm display dm-test
tiup is checking updates for component dm ...
Starting component `dm`: /root/.tiup/components/dm/v1.11.1/tiup-dm display dm-test
Cluster type: dm
Cluster name: dm-test
Cluster version: v6.4.0
Deploy user: root
SSH type: builtin
Grafana URL: http://172.16.1.13:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.1.13:9093 alertmanager 172.16.1.13 9093/9094 linux/x86_64 Up /dm-data/alertmanager-9093 /dm-deploy/alertmanager-9093
172.16.1.13:8261 dm-master 172.16.1.13 8261/8291 linux/x86_64 Healthy|L /dm-data/dm-master-8261 /dm-deploy/dm-master-8261
172.16.1.13:8262 dm-worker 172.16.1.13 8262 linux/x86_64 Free /dm-data/dm-worker-8262 /dm-deploy/dm-worker-8262
172.16.1.13:8263 dm-worker 172.16.1.13 8263 linux/x86_64 Free /dm-data/dm-worker-8263 /dm-deploy/dm-worker-8263
172.16.1.13:3000 grafana 172.16.1.13 3000 linux/x86_64 Up - /dm-deploy/grafana-3000
172.16.1.13:9090 prometheus 172.16.1.13 9090 linux/x86_64 Up /dm-data/prometheus-9090 /dm-deploy/prometheus-9090
Total nodes: 6