Deploying a TiDB Cluster with TiUP

I. Software and Hardware Environment Checks
1. OS version requirements
Linux OS platform           Version
Red Hat Enterprise Linux    7.3 or later
CentOS                      7.3 or later
Oracle Enterprise Linux     7.3 or later
Ubuntu LTS                  16.04 or later
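A quick way to confirm the release and architecture on each target machine before proceeding (a minimal sketch; /etc/os-release is present on all of the distributions listed above):
```bash
# Print the distribution name and version
cat /etc/os-release
# Confirm the CPU architecture (x86_64 is expected for this deployment)
uname -m
```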
2. Mount the data disk on the TiKV deployment machines (skip this step if there is no dedicated data disk)
1. Check the data disk
fdisk -l
2. Create a partition
parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
3. Format the partition as ext4
mkfs.ext4 /dev/nvme0n1p1
4. Check the partition UUID
lsblk -f
nvme0n1
└─nvme0n1p1 ext4         c51eb23b-195c-4061-92a9-3fad812cc12f
5. Edit /etc/fstab and add the nodelalloc mount option
vi /etc/fstab
 
UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data1 ext4 defaults,nodelalloc,noatime 0 2
6. Mount the data disk
mkdir /data1 && \
mount -a
7. Verify that the mount options took effect
mount -t ext4
# If the options include nodelalloc, the setting is in effect
/dev/nvme0n1p1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
3. Disable the swap partition
```bash
echo "vm.swappiness = 0" >> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p
```
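To confirm that swap is fully disabled after applying the settings above, a quick check (a minimal sketch using standard Linux tools):
```bash
# Swap totals should all read 0 once swap is disabled
free -h
# The kernel parameter set above should now report 0
cat /proc/sys/vm/swappiness
```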
4. Disable the firewall on the target machines
1. Check the firewall status (on CentOS 7.6)
sudo firewall-cmd --state
sudo systemctl status firewalld.service
 
2. Stop the firewall
sudo systemctl stop firewalld.service
 
3. Disable the firewall service from starting on boot
sudo systemctl disable firewalld.service
 
4. Check the firewall status again
sudo systemctl status firewalld.service
5. Install the NTP service
1. Check the NTP service status
sudo systemctl status ntpd.service
# active (running) in the output means the service is running
ntpstat
# "synchronised to NTP server" in the output means time is being synchronized normally
 
2. If it is not running, install and start it:
sudo yum install ntp ntpdate && \
sudo systemctl start ntpd.service && \
sudo systemctl enable ntpd.service
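If a machine's clock is far off, an optional one-off sync can be run before starting ntpd (a hedged sketch; pool.ntp.org is only an example time source, and ntpdate must run while ntpd is stopped because both bind UDP port 123):
```bash
# Optional one-time clock sync before enabling ntpd; replace pool.ntp.org with your own NTP source
sudo ntpdate pool.ntp.org
```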
6. Manually configure SSH mutual trust
1. Log in to the target machine as root and create the tidb user
useradd tidb && \
passwd tidb
2. Run visudo and append tidb ALL=(ALL) NOPASSWD: ALL at the end to enable passwordless sudo
visudo
tidb ALL=(ALL) NOPASSWD: ALL
3. Log in to the control machine as the tidb user and run the following command to set up mutual trust (generate a key pair first if none exists; see the sketch after this list)
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.5.52
4. SSH to the target machine, then switch to root with sudo, to verify that SSH mutual trust and passwordless sudo work
ssh 172.16.5.52
sudo su - root
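If the tidb user on the control machine does not yet have an SSH key pair, generate one before running ssh-copy-id (a minimal sketch; -N "" creates the key without a passphrase):
```bash
# Generate a passphrase-less RSA key pair for the tidb user (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
```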
7. Install the numactl tool, used to isolate CPU resources when running multiple instances on one machine
1. Connect to each target machine manually and install it
sudo yum -y install numactl
 
2. Or install it on all target machines in batch after the TiUP cluster component has been installed
tiup cluster exec tidb-test --sudo --command "yum -y install numactl"
 
II. Install the TiUP Component on the Control Machine
1. Install the TiUP component
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
2. Reload the environment variables
source .bash_profile
3. Verify that TiUP was installed successfully, install the TiUP cluster component, and check the cluster component's version
which tiup
 
tiup cluster
tiup --binary cluster
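If TiUP was installed earlier, TiUP itself and the cluster component can also be brought up to date explicitly (a small sketch; both subcommands are part of TiUP, and the versions fetched depend on your environment):
```bash
# Update TiUP itself, then the cluster component, to the latest published versions
tiup update --self
tiup update cluster
```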
 
III. Edit the Initialization Configuration File (we use the minimal topology here; the other topologies were also deployed and the process is essentially the same)
1. Edit the configuration file topology.yaml (ports and paths are customized here)
[tidb@CentOS76_VM ~]$ vim topology.yaml 
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
 
# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 19100
  blackbox_exporter_port: 19115
  deploy_dir: "/tidb-deploy/test/monitored-9100"
  data_dir: "/tidb-data/test/monitored-9100"
  log_dir: "/tidb-deploy/test/monitored-9100/log"
 
# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #      
# # You can overwrite this configuration via the instance-level `config` field.
 
server_configs:
  tidb:
    log.slow-threshold: 300
    binlog.enable: false
    binlog.ignore-error: false
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    # readpool.unified.max-thread-count: 12
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64
 
pd_servers:
  - host: 172.16.5.52
    ssh_port: 22
    name: "pd-1"
    client_port: 23794
    peer_port: 23804
    deploy_dir: "/tidb-deploy/test/pd-2379"
    data_dir: "/tidb-data/test/pd-2379"
    log_dir: "/tidb-deploy/test/pd-2379/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 172.16.4.29
    ssh_port: 22
    name: "pd-2"
    client_port: 23794
    peer_port: 23804
    deploy_dir: "/tidb-deploy/test/pd-2379"
    data_dir: "/tidb-data/test/pd-2379"
    log_dir: "/tidb-deploy/test/pd-2379/log"
  - host: 172.16.4.56
    ssh_port: 22
    name: "pd-3"
    client_port: 23794
    peer_port: 23804
    deploy_dir: "/tidb-deploy/test/pd-2379"
    data_dir: "/tidb-data/test/pd-2379"
    log_dir: "/tidb-deploy/test/pd-2379/log"
 
tidb_servers:
  - host: 172.16.5.52
    ssh_port: 22
    port: 4004
    status_port: 10084
    deploy_dir: "/tidb-deploy/test/tidb-4000"
    log_dir: "/tidb-deploy/test/tidb-4000/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tidb` values.
    # config:
    #   log.slow-query-file: tidb-slow-overwrited.log
  - host: 172.16.4.29
    ssh_port: 22
    port: 4004
    status_port: 10084
    deploy_dir: "/tidb-deploy/test/tidb-4000"
    log_dir: "/tidb-deploy/test/tidb-4000/log"
  - host: 172.16.4.56
    ssh_port: 22
    port: 4004
    status_port: 10084
    deploy_dir: "/tidb-deploy/test/tidb-4000"
    log_dir: "/tidb-deploy/test/tidb-4000/log"
 
tikv_servers:
  - host: 172.16.4.30
    ssh_port: 22
    port: 20164
    status_port: 20184
    deploy_dir: "/tidb-deploy/test/tikv-20160"
    data_dir: "/tidb-data/test/tikv-20160"
    log_dir: "/tidb-deploy/test/tikv-20160/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   server.grpc-concurrency: 4
    #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  - host: 172.16.4.224
    ssh_port: 22
    port: 20164
    status_port: 20184
    deploy_dir: "/tidb-deploy/test/tikv-20160"
    data_dir: "/tidb-data/test/tikv-20160"
    log_dir: "/tidb-deploy/test/tikv-20160/log"
  - host: 172.16.5.208
    ssh_port: 22
    port: 20164
    status_port: 20184
    deploy_dir: "/tidb-deploy/test/tikv-20160"
    data_dir: "/tidb-data/test/tikv-20160"
    log_dir: "/tidb-deploy/test/tikv-20160/log"
 
monitoring_servers:
  - host: 172.16.5.52
    ssh_port: 22
    port: 9490
    deploy_dir: "/tidb-deploy/test/prometheus-8249"
    data_dir: "/tidb-data/test/prometheus-8249"
    log_dir: "/tidb-deploy/test/prometheus-8249/log"
 
grafana_servers:
  - host: 172.16.5.52
    port: 3004
    deploy_dir: /tidb-deploy/test/grafana-3000
 
alertmanager_servers:
  - host: 172.16.5.52
    ssh_port: 22
    web_port: 9493
    cluster_port: 9494
    deploy_dir: "/tidb-deploy/test/alertmanager-9093"
    data_dir: "/tidb-data/test/alertmanager-9093"
    log_dir: "/tidb-deploy/test/alertmanager-9093/log"
IV. Deploy, Check, Start, and Manage the Cluster
1. Run the deploy command
tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
 
The cluster deployed through TiUP cluster is named tidb-test.
The deployed version is v4.0.0; run tiup list tidb to see the versions that TiUP supports.
The initialization configuration file is topology.yaml.
--user root: log in to the target hosts as root to complete the deployment. This user needs SSH access to the target machines
    and sudo privileges on them; any other user with SSH and sudo privileges can be used instead.
[-i] / [-p]: optional. If passwordless login to the target machines is already configured, neither is needed. Otherwise choose one:
    [-i] specifies the private key of the root user (or the user given by --user) that can log in to the target machines; [-p] prompts interactively for that user's password.
 
On success, the output ends with: Deployed cluster `tidb-test` successfully
2. View the clusters managed by TiUP
tiup cluster list
[tidb@CentOS76_VM ~]$ tiup cluster list
Starting component `cluster`:  list
Name          User  Version  Path                                                    PrivateKey
----          ----  -------  ----                                                    ----------
tidb-binlog   tidb  v4.0.0   /home/tidb/.tiup/storage/cluster/clusters/tidb-binlog   /home/tidb/.tiup/storage/cluster/clusters/tidb-binlog/ssh/id_rsa
tidb-test     tidb  v4.0.0   /home/tidb/.tiup/storage/cluster/clusters/tidb-test     /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
tidb-ticdc    tidb  v4.0.0   /home/tidb/.tiup/storage/cluster/clusters/tidb-ticdc    /home/tidb/.tiup/storage/cluster/clusters/tidb-ticdc/ssh/id_rsa
tidb-tiflash  tidb  v4.0.0   /home/tidb/.tiup/storage/cluster/clusters/tidb-tiflash  /home/tidb/.tiup/storage/cluster/clusters/tidb-tiflash/ssh/id_rsa
 
3. Check the cluster status
tiup cluster display tidb-test
[tidb@CentOS76_VM ~]$ tiup cluster display tidb-test
Starting component `cluster`:  display tidb-test
TiDB Cluster: tidb-test
TiDB Version: v4.0.0
ID                  Role          Host          Ports        OS/Arch       Status    Data Dir                           Deploy Dir
--                  ----          ----          -----        -------       ------    --------                           ----------
172.16.5.52:9493    alertmanager  172.16.5.52   9493/9494    linux/x86_64  inactive  /tidb-data/test/alertmanager-9093  /tidb-deploy/test/alertmanager-9093
172.16.5.52:3004    grafana       172.16.5.52   3004         linux/x86_64  inactive  -                                  /tidb-deploy/test/grafana-3000
172.16.4.29:23794   pd            172.16.4.29   23794/23804  linux/x86_64  Down      /tidb-data/test/pd-2379            /tidb-deploy/test/pd-2379
172.16.4.56:23794   pd            172.16.4.56   23794/23804  linux/x86_64  Down      /tidb-data/test/pd-2379            /tidb-deploy/test/pd-2379
172.16.5.52:23794   pd            172.16.5.52   23794/23804  linux/x86_64  Down      /tidb-data/test/pd-2379            /tidb-deploy/test/pd-2379
172.16.5.52:9490    prometheus    172.16.5.52   9490         linux/x86_64  inactive  /tidb-data/test/prometheus-8249    /tidb-deploy/test/prometheus-8249
172.16.4.29:4004    tidb          172.16.4.29   4004/10084   linux/x86_64  Down      -                                  /tidb-deploy/test/tidb-4000
172.16.4.56:4004    tidb          172.16.4.56   4004/10084   linux/x86_64  Down      -                                  /tidb-deploy/test/tidb-4000
172.16.5.52:4004    tidb          172.16.5.52   4004/10084   linux/x86_64  Down      -                                  /tidb-deploy/test/tidb-4000
172.16.4.224:20164  tikv          172.16.4.224  20164/20184  linux/x86_64  Down      /tidb-data/test/tikv-20160         /tidb-deploy/test/tikv-20160
172.16.4.30:20164   tikv          172.16.4.30   20164/20184  linux/x86_64  Down      /tidb-data/test/tikv-20160         /tidb-deploy/test/tikv-20160
172.16.5.208:20164  tikv          172.16.5.208  20164/20184  linux/x86_64  Down      /tidb-data/test/tikv-20160         /tidb-deploy/test/tikv-20160
4. Start the cluster
tiup cluster start tidb-test
5. After startup, verify the cluster status
1. tiup cluster display tidb-test
# If every instance's Status shows Up, the startup succeeded
2. Connect to the database to confirm that connections work
mysql -u root -h 172.16.5.52 -P 4004
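Once the client connects, a quick query confirms that TiDB is answering requests (a minimal sketch; tidb_version() is a built-in TiDB function, and root has an empty password on a freshly deployed cluster):
```bash
# Run a one-off query through the MySQL client to print the TiDB server version
mysql -u root -h 172.16.5.52 -P 4004 -e "SELECT tidb_version();"
```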