TiDB (2): Starting and Stopping the Database

Local test cluster environment:
Server 1: 172.16.1.10
Server 2: 172.16.1.11
Server 3: 172.16.1.12
Passwordless SSH is already configured for the root user on all three machines.
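
For reference, the passwordless setup can be done by generating a key on the machine that runs tiup and copying it to each server; the commands below are a minimal sketch (adjust users and hosts to your own environment):

ssh-keygen -t rsa                    # generate a key pair (accept the defaults)
ssh-copy-id root@172.16.1.10         # push the public key to each server
ssh-copy-id root@172.16.1.11
ssh-copy-id root@172.16.1.12
ssh root@172.16.1.11 hostname        # verify: should print the hostname without prompting for a password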

1. Generate and modify the configuration file

tiup cluster template > topology.yaml
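
As a side note, newer versions of tiup also provide a --full flag for the template command, which prints a more complete topology with additional optional components; the default template used here is enough for this test:

tiup cluster template --full > topology.yaml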
 
 
Adjust the file for your local environment; the main changes are the IP addresses. I also commented out the tiflash_servers section, so TiFlash is not deployed for now.

[root@root ~]# cat topology.yaml 
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  # # The user who runs the tidb cluster.
  user: "root"
  # # group is used to specify the group name the user belong to if it's not the same as user.
  # group: "tidb"
  # # SSH port of servers in the managed cluster.
  ssh_port: 22
  # # Storage directory for cluster deployment files, startup scripts, and configuration files.
  deploy_dir: "/tidb-deploy"
  # # TiDB Cluster data storage directory
  data_dir: "/tidb-data"
  # # Supported values: "amd64", "arm64" (default: "amd64")
  arch: "amd64"
  # # Resource Control is used to limit the resource of an instance.
  # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
  # # Supports using instance-level `resource_control` to override global `resource_control`.
  # resource_control:
  #   # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemoryLimit=bytes
  #   memory_limit: "2G"
  #   # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUQuota=
  #   # The percentage specifies how much CPU time the unit shall get at maximum, relative to the total CPU time available on one CPU. Use values > 100% for allotting CPU time on more than one CPU.
  #   # Example: CPUQuota=200% ensures that the executed processes will never get more than two CPU time.
  #   cpu_quota: "200%"
  #   # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#IOReadBandwidthMax=device%20bytes
  #   io_read_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
  #   io_write_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"

# # Monitored variables are applied to all the machines.
monitored:
  # # The communication port for reporting system information of each node in the TiDB cluster.
  node_exporter_port: 9100
  # # Blackbox_exporter communication port, used for TiDB cluster port monitoring.
  blackbox_exporter_port: 9115
  # # Storage directory for deployment files, startup scripts, and configuration files of monitoring components.
  # deploy_dir: "/tidb-deploy/monitored-9100"
  # # Data storage directory of monitoring components.
  # data_dir: "/tidb-data/monitored-9100"
  # # Log storage directory of the monitoring component.
  # log_dir: "/tidb-deploy/monitored-9100/log"

# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # - TiFlash: https://docs.pingcap.com/tidb/stable/tiflash-configuration
# #
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #           ^       ^
# # - example: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml.
# # You can overwrite this configuration via the instance-level `config` field.
# server_configs:
  # tidb:
  # tikv:
  # pd:
  # tiflash:
  # tiflash-learner:

# # Server configs are used to specify the configuration of PD Servers.
pd_servers:
  # # The ip address of the PD Server.
  - host: 172.16.1.10
    # # SSH port of the server.
    # ssh_port: 22
    # # PD Server name
    # name: "pd-1"
    # # communication port for TiDB Servers to connect.
    # client_port: 2379
    # # Communication port among PD Server nodes.
    # peer_port: 2380
    # # PD Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/pd-2379"
    # # PD Server data storage directory.
    # data_dir: "/tidb-data/pd-2379"
    # # PD Server log file storage directory.
    # log_dir: "/tidb-deploy/pd-2379/log"
    # # numa node bindings.
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 172.16.1.11
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 172.16.1.12
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000

# # Server configs are used to specify the configuration of TiDB Servers.
tidb_servers:
  # # The ip address of the TiDB Server.
  - host: 172.16.1.10
    # # SSH port of the server.
    # ssh_port: 22
    # # The port for clients to access the TiDB cluster.
    # port: 4000
    # # TiDB Server status API port.
    # status_port: 10080
    # # TiDB Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # # TiDB Server log file storage directory.
    # log_dir: "/tidb-deploy/tidb-4000/log"
  # # The ip address of the TiDB Server.
  - host: 172.16.1.11
    # ssh_port: 22
    # port: 4000
    # status_port: 10080
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # log_dir: "/tidb-deploy/tidb-4000/log"
  - host: 172.16.1.12
    # ssh_port: 22
    # port: 4000
    # status_port: 10080
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # log_dir: "/tidb-deploy/tidb-4000/log"

# # Server configs are used to specify the configuration of TiKV Servers.
tikv_servers:
  # # The ip address of the TiKV Server.
  - host: 172.16.1.10
    # # SSH port of the server.
    # ssh_port: 22
    # # TiKV Server communication port.
    # port: 20160
    # # TiKV Server status API port.
    # status_port: 20180
    # # TiKV Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # # TiKV Server data storage directory.
    # data_dir: "/tidb-data/tikv-20160"
    # # TiKV Server log file storage directory.
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   log.level: warn
  # # The ip address of the TiKV Server.
  - host: 172.16.1.11
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn
  - host: 172.16.1.12
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn

# # Server configs are used to specify the configuration of TiFlash Servers.
tiflash_servers:
  # # The ip address of the TiFlash Server.
#  - host: 172.16.1.12
    # # SSH port of the server.
    # ssh_port: 22
    # # TiFlash TCP Service port.
    # tcp_port: 9000
    # # TiFlash HTTP Service port.
    # http_port: 8123
    # # TiFlash raft service and coprocessor service listening address.
    # flash_service_port: 3930
    # # TiFlash Proxy service port.
    # flash_proxy_port: 20170
    # # TiFlash Proxy metrics port.
    # flash_proxy_status_port: 20292
    # # TiFlash metrics port.
    # metrics_port: 8234
    # # TiFlash Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: /tidb-deploy/tiflash-9000
    ## With cluster version >= v4.0.9 and you want to deploy a multi-disk TiFlash node, it is recommended to
    ## check config.storage.* for details. The data_dir will be ignored if you defined those configurations.
    ## Setting data_dir to a ','-joined string is still supported but deprecated.
    ## Check https://docs.pingcap.com/tidb/stable/tiflash-configuration#multi-disk-deployment for more details.
    # # TiFlash Server data storage directory.
    # data_dir: /tidb-data/tiflash-9000
    # # TiFlash Server log file storage directory.
    # log_dir: /tidb-deploy/tiflash-9000/log
  # # The ip address of the TiKV Server.
#  - host: 10.0.1.21
    # ssh_port: 22
    # tcp_port: 9000
    # http_port: 8123
    # flash_service_port: 3930
    # flash_proxy_port: 20170
    # flash_proxy_status_port: 20292
    # metrics_port: 8234
    # deploy_dir: /tidb-deploy/tiflash-9000
    # data_dir: /tidb-data/tiflash-9000
    # log_dir: /tidb-deploy/tiflash-9000/log

# # Server configs are used to specify the configuration of Prometheus Server.  
monitoring_servers:
  # # The ip address of the Monitoring Server.
  - host: 172.16.1.10
    # # SSH port of the server.
    # ssh_port: 22
    # # Prometheus Service communication port.
    # port: 9090
    # # ng-monitoring servive communication port
    # ng_port: 12020
    # # Prometheus deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # # Prometheus data storage directory.
    # data_dir: "/tidb-data/prometheus-8249"
    # # Prometheus log file storage directory.
    # log_dir: "/tidb-deploy/prometheus-8249/log"

# # Server configs are used to specify the configuration of Grafana Servers.  
grafana_servers:
  # # The ip address of the Grafana Server.
  - host: 172.16.1.10
    # # Grafana web port (browser access)
    # port: 3000
    # # Grafana deployment file, startup script, configuration file storage directory.
    # deploy_dir: /tidb-deploy/grafana-3000

# # Server configs are used to specify the configuration of Alertmanager Servers.  
alertmanager_servers:
  # # The ip address of the Alertmanager Server.
  - host: 172.16.1.10                                          
    # # SSH port of the server.
    # ssh_port: 22
    # # Alertmanager web service port.
    # web_port: 9093
    # # Alertmanager communication port.
    # cluster_port: 9094
    # # Alertmanager deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # # Alertmanager data storage directory.
    # data_dir: "/tidb-data/alertmanager-9093"
    # # Alertmanager log file storage directory.
    # log_dir: "/tidb-deploy/alertmanager-9093/log"

2. Check whether the environment meets the requirements

tiup cluster check ./topology.yaml --apply --user root -p
With --apply, tiup automatically tries to fix the items that fail the check. Run the command a few times and see which FAIL items remain, then fix those one by one. The problems I hit locally were incorrect disk mount options and a few missing dependency packages; after installing the packages the check passed. I did not bother with the disk mount options, since this is only a test setup.
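
If you want to see the report before letting tiup change anything, run the check once without --apply; missing dependency packages can then be installed by hand. As an example (assuming a yum-based system), numactl is one of the packages the check commonly flags:

tiup cluster check ./topology.yaml --user root -p    # report only, nothing is modified
yum -y install numactl                               # example fix for a missing-package FAIL item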
 
 

3. Deploy the TiDB components

tiup list tidb    # list the versions available for installation
Deploy TiDB; you will be prompted for the root password of the target hosts.
During deployment a few components kept failing to download; rerunning the deploy command a couple of times resolved it.
tiup cluster deploy tidb-test v5.1.1 ./topology.yaml --user root -p
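
If you prefer not to enter the password interactively, deploy also accepts an SSH private key via -i (assuming the key below is the one whose public half was distributed to the servers):

tiup cluster deploy tidb-test v5.1.1 ./topology.yaml --user root -i /root/.ssh/id_rsa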

List the managed clusters:

[root@root ~]# tiup cluster list
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster list
Name       User  Version  Path                                            PrivateKey
----       ----  -------  ----                                            ----------
tidb-test  root  v5.1.1   /root/.tiup/storage/cluster/clusters/tidb-test  /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa

 
Check the cluster status; the STATUS column shows Down or N/A for every node, since the cluster has not been started yet.

[root@root ~]# tiup cluster display tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.1.1
Deploy user:        root
SSH type:           builtin
Grafana URL:        http://172.16.1.10:3000
ID                 Role          Host         Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                 ----          ----         -----        -------       ------  --------                      ----------
172.16.1.10:9093   alertmanager  172.16.1.10  9093/9094    linux/x86_64  Down    /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.1.10:3000   grafana       172.16.1.10  3000         linux/x86_64  Down    -                             /tidb-deploy/grafana-3000
172.16.1.10:2379   pd            172.16.1.10  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.11:2379   pd            172.16.1.11  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.12:2379   pd            172.16.1.12  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.10:9090   prometheus    172.16.1.10  9090         linux/x86_64  Down    /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.1.10:4000   tidb          172.16.1.10  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.1.11:4000   tidb          172.16.1.11  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.1.12:4000   tidb          172.16.1.12  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.1.10:20160  tikv          172.16.1.10  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.1.11:20160  tikv          172.16.1.11  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.1.12:20160  tikv          172.16.1.12  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 12

 

4. Start TiDB

Start the cluster; the last line of the output confirms that the start succeeded.

[root@root ~]# tiup cluster start tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster start tidb-test
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=root, host=172.16.1.11
+ [Parallel] - UserSSH: user=root, host=172.16.1.12
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.11
+ [Parallel] - UserSSH: user=root, host=172.16.1.12
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.11
+ [Parallel] - UserSSH: user=root, host=172.16.1.12
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.1.12:2379
        Starting instance 172.16.1.10:2379
        Starting instance 172.16.1.11:2379
        Start instance 172.16.1.12:2379 success
        Start instance 172.16.1.11:2379 success
        Start instance 172.16.1.10:2379 success
Starting component tikv
        Starting instance 172.16.1.10:20160
        Starting instance 172.16.1.11:20160
        Starting instance 172.16.1.12:20160
        Start instance 172.16.1.11:20160 success
        Start instance 172.16.1.12:20160 success
        Start instance 172.16.1.10:20160 success
Starting component tidb
        Starting instance 172.16.1.12:4000
        Starting instance 172.16.1.10:4000
        Starting instance 172.16.1.11:4000
        Start instance 172.16.1.10:4000 success
        Start instance 172.16.1.12:4000 success
        Start instance 172.16.1.11:4000 success
Starting component prometheus
        Starting instance 172.16.1.10:9090
        Start instance 172.16.1.10:9090 success
Starting component grafana
        Starting instance 172.16.1.10:3000
        Start instance 172.16.1.10:3000 success
Starting component alertmanager
        Starting instance 172.16.1.10:9093
        Start instance 172.16.1.10:9093 success
Starting component node_exporter
        Starting instance 172.16.1.12
        Starting instance 172.16.1.10
        Starting instance 172.16.1.11
        Start 172.16.1.10 success
        Start 172.16.1.11 success
        Start 172.16.1.12 success
Starting component blackbox_exporter
        Starting instance 172.16.1.12
        Starting instance 172.16.1.10
        Starting instance 172.16.1.11
        Start 172.16.1.10 success
        Start 172.16.1.11 success
        Start 172.16.1.12 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
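
The log above also shows the startup order: PD first, then TiKV, then TiDB, and finally the monitoring components. If you only need to bring up part of the cluster, start accepts -R (role) and -N (node) filters, for example:

tiup cluster start tidb-test -R pd                  # start only the PD nodes
tiup cluster start tidb-test -N 172.16.1.10:4000    # start only one tidb-server instance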

 
Check the cluster status again; the STATUS column now shows Up for every node (Up|L marks the PD leader, Up|UI marks the PD node serving the TiDB Dashboard).

[root@root ~]# tiup cluster display tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.1.1
Deploy user:        root
SSH type:           builtin
Dashboard URL:      http://172.16.1.11:2379/dashboard
Grafana URL:        http://172.16.1.10:3000
ID                 Role          Host         Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                 ----          ----         -----        -------       ------  --------                      ----------
172.16.1.10:9093   alertmanager  172.16.1.10  9093/9094    linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.1.10:3000   grafana       172.16.1.10  3000         linux/x86_64  Up      -                             /tidb-deploy/grafana-3000
172.16.1.10:2379   pd            172.16.1.10  2379/2380    linux/x86_64  Up|L    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.11:2379   pd            172.16.1.11  2379/2380    linux/x86_64  Up|UI   /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.12:2379   pd            172.16.1.12  2379/2380    linux/x86_64  Up      /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.10:9090   prometheus    172.16.1.10  9090         linux/x86_64  Up      /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.1.10:4000   tidb          172.16.1.10  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
172.16.1.11:4000   tidb          172.16.1.11  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
172.16.1.12:4000   tidb          172.16.1.12  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
172.16.1.10:20160  tikv          172.16.1.10  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.1.11:20160  tikv          172.16.1.11  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.1.12:20160  tikv          172.16.1.12  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 12
[root@root ~]# 
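
Each tidb-server also exposes an HTTP status API on its status port (10080); hitting /status is a quick way to confirm a node is serving without opening a SQL client:

curl http://172.16.1.10:10080/status    # returns JSON including the TiDB version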

 

5. Connect to any tidb-server node

[root@root ~]# mysql -h172.16.1.12 -uroot -P 4000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.25-TiDB-v5.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> 
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.001 sec)

MySQL [(none)]> 
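
The root account has an empty password by default, so it is worth setting one right away; the statements below are a minimal sketch (the password is only a placeholder):

SELECT tidb_version();                          -- confirm the server version and build info
SET PASSWORD FOR 'root'@'%' = 'MyPassw0rd!';    -- default root password is empty; use your own value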

 

6. Stop the database

Stop the cluster:

[root@root ~]# tiup cluster stop tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster stop tidb-test
Will stop the cluster tidb-test with nodes: , roles: .
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=root, host=172.16.1.11
+ [Parallel] - UserSSH: user=root, host=172.16.1.12
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.11
+ [Parallel] - UserSSH: user=root, host=172.16.1.12
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.12
+ [Parallel] - UserSSH: user=root, host=172.16.1.10
+ [Parallel] - UserSSH: user=root, host=172.16.1.11
+ [ Serial ] - StopCluster
Stopping component alertmanager
        Stopping instance 172.16.1.10
        Stop alertmanager 172.16.1.10:9093 success
Stopping component grafana
        Stopping instance 172.16.1.10
        Stop grafana 172.16.1.10:3000 success
Stopping component prometheus
        Stopping instance 172.16.1.10
        Stop prometheus 172.16.1.10:9090 success
Stopping component tidb
        Stopping instance 172.16.1.12
        Stopping instance 172.16.1.11
        Stopping instance 172.16.1.10
        Stop tidb 172.16.1.10:4000 success
        Stop tidb 172.16.1.12:4000 success
        Stop tidb 172.16.1.11:4000 success
Stopping component tikv
        Stopping instance 172.16.1.12
        Stopping instance 172.16.1.10
        Stopping instance 172.16.1.11
        Stop tikv 172.16.1.10:20160 success
        Stop tikv 172.16.1.11:20160 success
        Stop tikv 172.16.1.12:20160 success
Stopping component pd
        Stopping instance 172.16.1.12
        Stopping instance 172.16.1.10
        Stopping instance 172.16.1.11
        Stop pd 172.16.1.10:2379 success
        Stop pd 172.16.1.11:2379 success
        Stop pd 172.16.1.12:2379 success
Stopping component node_exporter
        Stopping instance 172.16.1.12
        Stopping instance 172.16.1.10
        Stopping instance 172.16.1.11
        Stop 172.16.1.10 success
        Stop 172.16.1.11 success
        Stop 172.16.1.12 success
Stopping component blackbox_exporter
        Stopping instance 172.16.1.12
        Stopping instance 172.16.1.10
        Stopping instance 172.16.1.11
        Stop 172.16.1.10 success
        Stop 172.16.1.11 success
        Stop 172.16.1.12 success
Stopped cluster `tidb-test` successfully

Check the cluster status again; the STATUS column shows Down (the TiKV nodes show N/A).

[root@root ~]# tiup cluster display tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.3/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.1.1
Deploy user:        root
SSH type:           builtin
Grafana URL:        http://172.16.1.10:3000
ID                 Role          Host         Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                 ----          ----         -----        -------       ------  --------                      ----------
172.16.1.10:9093   alertmanager  172.16.1.10  9093/9094    linux/x86_64  Down    /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.1.10:3000   grafana       172.16.1.10  3000         linux/x86_64  Down    -                             /tidb-deploy/grafana-3000
172.16.1.10:2379   pd            172.16.1.10  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.11:2379   pd            172.16.1.11  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.12:2379   pd            172.16.1.12  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.1.10:9090   prometheus    172.16.1.10  9090         linux/x86_64  Down    /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.1.10:4000   tidb          172.16.1.10  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.1.11:4000   tidb          172.16.1.11  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.1.12:4000   tidb          172.16.1.12  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.1.10:20160  tikv          172.16.1.10  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.1.11:20160  tikv          172.16.1.11  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.1.12:20160  tikv          172.16.1.12  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 12
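
Besides start and stop, a few other lifecycle commands are handy for a test cluster; restart bounces everything, and the -R/-N filters work here as well (destroy removes the data and deploy directories, so only use it when the cluster is disposable):

tiup cluster restart tidb-test          # stop and start all components
tiup cluster stop tidb-test -R tidb     # stop only the tidb-server instances
tiup cluster destroy tidb-test          # tear the test cluster down completely (destructive)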

Summary of the ports in use:

pd-server:   2379 client port, 2380 peer communication port
tikv-server: 20160 service port, 20180 status port
tidb-server: 4000 client (MySQL protocol) port, 10080 status port
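
If firewalld is enabled on the servers, these ports (plus the monitoring ports 9090, 3000, 9093/9094, 9100 and 9115) must be reachable between the nodes; for a quick test, open them explicitly or stop the firewall. A sketch for the client-facing SQL port:

firewall-cmd --permanent --add-port=4000/tcp
firewall-cmd --reload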
