Scaling out TiKV nodes with tiup on TiDB 4.0.4
Environment: CentOS 7, TiDB 4.0.4, tiup v1.0.8
Goal: add two TiKV nodes, 172.21.210.37 and 172.21.210.38
Approach: initialize the two servers and set up passwordless SSH -> edit the scale-out config file -> run the scale-out command -> restart Grafana
1. Initialize the servers and set up passwordless SSH
# 1. Synchronize system time on the new servers
# 2. Set up SSH trust
ssh-copy-id root@172.21.210.37
ssh-copy-id root@172.21.210.38
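The post does not show the time-sync or key-generation commands themselves. A minimal sketch of this step, assuming chrony as the NTP client on CentOS 7 and that the control machine does not yet have a key pair (adjust to your environment):

# Assumption: chrony is used for time sync (ntpd works equally well)
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc tracking        # confirm the clock is synchronized

# Assumption: no existing key pair; generate one before running ssh-copy-id
ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa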
2. Edit the scale-out configuration file
tiup cluster list                         # list the names of existing clusters
tiup cluster edit-config <cluster-name>   # review the cluster config and copy the matching fields
vi scale-out.yaml

tikv_servers:
  - host: 172.21.210.37
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    arch: amd64
    os: linux
  - host: 172.21.210.38
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    arch: amd64
    os: linux
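Before running the scale-out, it can be worth a quick pre-flight check that the control machine can actually reach the new hosts as root and that the /data1 mount exists. This loop is an illustrative addition, not part of the original procedure:

# Hypothetical pre-flight check: confirm root SSH works and /data1 is mounted
for host in 172.21.210.37 172.21.210.38; do
  ssh root@"$host" "hostname; date; df -h /data1"
done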
3. Run the scale-out command
This assumes the user running the command has already established SSH trust with the new machines. If not, pass -p to enter the new machines' password, or -i to specify a private key file.

tiup cluster scale-out <cluster-name> scale-out.yaml

The expected output contains `Scaled cluster <cluster-name> out successfully`, indicating the scale-out succeeded.

[root@host-172-21-210-32 tidb_config]# tiup cluster scale-out tidb scale-out.yaml
Starting component `cluster`: scale-out tidb scale-out.yaml
Please confirm your topology:
TiDB Cluster: tidb
TiDB Version: v4.0.4
Type  Host           Ports        OS/Arch       Directories
----  ----           -----        -------       -----------
tikv  172.21.210.37  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
tikv  172.21.210.38  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa.pub
  - Download tikv:v4.0.4 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=172.21.210.38, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.38
+ [ Serial ] - RootSSH: user=root, host=172.21.210.37, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.39
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.34
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.35
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.36
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.38
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
  ... (transient progress lines for copying node_exporter/blackbox_exporter and the monitor config to 172.21.210.37/38 were only partially captured; omitted)
  - Copy node_exporter -> 172.21.210.38 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.37, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.38, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force: false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck: false RetainDataRoles:[] RetainDataNodes:[]}
Starting component pd
    Starting instance pd 172.21.210.33:2379
    Starting instance pd 172.21.210.32:2379
    Start pd 172.21.210.33:2379 success
    Start pd 172.21.210.32:2379 success
Starting component node_exporter
    Starting instance 172.21.210.32
    Start 172.21.210.32 success
Starting component blackbox_exporter
    Starting instance 172.21.210.32
    Start 172.21.210.32 success
Starting component node_exporter
    Starting instance 172.21.210.33
    Start 172.21.210.33 success
Starting component blackbox_exporter
    Starting instance 172.21.210.33
    Start 172.21.210.33 success
Starting component tikv
    Starting instance tikv 172.21.210.35:20160
    Starting instance tikv 172.21.210.34:20160
    Starting instance tikv 172.21.210.39:20160
    Starting instance tikv 172.21.210.36:20160
    Start tikv 172.21.210.39:20160 success
    Start tikv 172.21.210.34:20160 success
    Start tikv 172.21.210.35:20160 success
    Start tikv 172.21.210.36:20160 success
Starting component node_exporter
    Starting instance 172.21.210.35
    Start 172.21.210.35 success
Starting component blackbox_exporter
    Starting instance 172.21.210.35
    Start 172.21.210.35 success
Starting component node_exporter
    Starting instance 172.21.210.34
    Start 172.21.210.34 success
Starting component blackbox_exporter
    Starting instance 172.21.210.34
    Start 172.21.210.34 success
Starting component node_exporter
    Starting instance 172.21.210.39
    Start 172.21.210.39 success
Starting component blackbox_exporter
    Starting instance 172.21.210.39
    Start 172.21.210.39 success
Starting component node_exporter
    Starting instance 172.21.210.36
    Start 172.21.210.36 success
Starting component blackbox_exporter
    Starting instance 172.21.210.36
    Start 172.21.210.36 success
Starting component tidb
    Starting instance tidb 172.21.210.33:4000
    Starting instance tidb 172.21.210.32:4000
    Start tidb 172.21.210.32:4000 success
    Start tidb 172.21.210.33:4000 success
Starting component prometheus
    Starting instance prometheus 172.21.210.32:9090
    Start prometheus 172.21.210.32:9090 success
Starting component grafana
    Starting instance grafana 172.21.210.32:3000
    Start grafana 172.21.210.32:3000 success
Starting component alertmanager
    Starting instance alertmanager 172.21.210.32:9093
    Start alertmanager 172.21.210.32:9093 success
Checking service state of pd
    172.21.210.32  Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
    172.21.210.33  Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
Checking service state of tikv
    172.21.210.34  Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
    172.21.210.35  Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
    172.21.210.36  Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
    172.21.210.39  Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
Checking service state of tidb
    172.21.210.32  Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
    172.21.210.33  Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
    172.21.210.32  Active: active (running) since Sat 2020-10-17 02:25:27 CST; 2 weeks 5 days ago
Checking service state of grafana
    172.21.210.32  Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
    172.21.210.32  Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.38
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - save meta
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force: false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck: false RetainDataRoles:[] RetainDataNodes:[]}
Starting component tikv
    Starting instance tikv 172.21.210.38:20160
    Starting instance tikv 172.21.210.37:20160
    Start tikv 172.21.210.37:20160 success
    Start tikv 172.21.210.38:20160 success
Starting component node_exporter
    Starting instance 172.21.210.37
    Start 172.21.210.37 success
Starting component blackbox_exporter
    Starting instance 172.21.210.37
    Start 172.21.210.37 success
Starting component node_exporter
    Starting instance 172.21.210.38
    Start 172.21.210.38 success
Starting component blackbox_exporter
    Starting instance 172.21.210.38
    Start 172.21.210.38 success
Checking service state of tikv
    172.21.210.37  Active: active (running) since Thu 2020-11-05 11:33:46 CST; 3s ago
    172.21.210.38  Active: active (running) since Thu 2020-11-05 11:33:46 CST; 2s ago
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/alertmanager-9093.service, deploy_dir=/data1/tidb-deploy/alertmanager-9093, data_dir=[/data1/tidb-data/alertmanager-9093], log_dir=/data1/tidb-deploy/alertmanager-9093/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.36, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.37, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.35, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/prometheus-9090.service, deploy_dir=/data1/tidb-deploy/prometheus-9090, data_dir=[/data1/tidb-data/prometheus-9090], log_dir=/data1/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.34, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/grafana-3000.service, deploy_dir=/data1/tidb-deploy/grafana-3000, data_dir=[], log_dir=/data1/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.38, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.39, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - ClusterOperate: operation=RestartOperation, options={Roles:[prometheus] Nodes:[] Force: false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck: false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component prometheus
    Stopping instance 172.21.210.32
    Stop prometheus 172.21.210.32:9090 success
Starting component prometheus
    Starting instance prometheus 172.21.210.32:9090
    Start prometheus 172.21.210.32:9090 success
Starting component node_exporter
    Starting instance 172.21.210.32
    Start 172.21.210.32 success
Starting component blackbox_exporter
    Starting instance 172.21.210.32
    Start 172.21.210.32 success
Checking service state of pd
    172.21.210.33  Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
    172.21.210.32  Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
Checking service state of tikv
    172.21.210.35  Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
    172.21.210.39  Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
    172.21.210.34  Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
    172.21.210.36  Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
Checking service state of tidb
    172.21.210.32  Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
    172.21.210.33  Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
    172.21.210.32  Active: active (running) since Thu 2020-11-05 11:33:53 CST; 2s ago
Checking service state of grafana
    172.21.210.32  Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
    172.21.210.32  Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [ Serial ] - UpdateTopology: cluster=tidb
Scaled cluster `tidb` out successfully
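At this point the new stores are up, but PD still rebalances regions onto them gradually in the background. The original post stops here; as an optional, hedged follow-up (assuming the pd-ctl component is available via `tiup ctl`, and using the PD endpoint 172.21.210.32:2379 shown in the log above), you can watch leader/region counts grow on the new stores:

# Hypothetical check: region_count/leader_count for stores 172.21.210.37/38
# should rise over time as PD rebalances data onto them
tiup ctl pd -u http://172.21.210.32:2379 store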
4. Check the cluster status and restart Grafana
# Check the cluster status
tiup cluster display <cluster-name>
# Restart Grafana so the monitoring dashboards pick up the new nodes
tiup cluster restart tidb -R grafana
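If the scale-out registered correctly, `tiup cluster display tidb` should now list 172.21.210.37 and 172.21.210.38 as tikv instances in the Up state. An illustrative filter (not part of the original post):

tiup cluster display tidb | grep -E '172.21.210.3[78]'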
Making a decision is not hard; what is hard is acting on it and seeing it through.