ClickHouse 21.6 cluster deployment

Environment:
OS: CentOS 7
Version: 21.6.6.51
Machines:
192.168.1.118
192.168.1.134
192.168.1.135

 

-----------------------ZooKeeper deployment-----------------------------------------
For the installation steps see: https://www.cnblogs.com/hxlasky/p/14665752.html

 

------------------------Single-node installation---------------------------------------------
The installation steps are the same on every machine.
1. Download ClickHouse
The following 3 installation packages are needed:
clickhouse-common-static-21.6.6.51-2.x86_64.rpm
clickhouse-client-21.6.6.51-2.noarch.rpm
clickhouse-server-21.6.6.51-2.noarch.rpm

Download from: https://packagecloud.io/Altinity/clickhouse
or from here:

https://mirrors.tuna.tsinghua.edu.cn/clickhouse/rpm/stable/x86_64/

 

2. Install
rpm -ivh clickhouse-common-static-21.6.6.51-2.x86_64.rpm
rpm -ivh clickhouse-client-21.6.6.51-2.noarch.rpm
rpm -ivh clickhouse-server-21.6.6.51-2.noarch.rpm

Installing clickhouse-server prompts for a password for the default user; just press Enter to leave it empty. A password can be set for the user separately later.
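If you do want to set that password later, one way (a minimal sketch, not used in the rest of this walkthrough; the file name and password are illustrative) is to drop an override file into /etc/clickhouse-server/users.d/ (create the directory if it does not exist) and restart the server:

    <!-- /etc/clickhouse-server/users.d/default-password.xml (hypothetical file name) -->
    <yandex>
      <users>
        <default>
          <!-- plain-text password; password_sha256_hex is the hashed alternative -->
          <password>StrongPassword123</password>
        </default>
      </users>
    </yandex>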


3. Change the data and log directories

First create the data and log directories:

mkdir -p /opt/clickhouse/data
mkdir -p /opt/clickhouse/logs

 

Change the directory ownership:

chown -R clickhouse:clickhouse /opt/clickhouse
chown -R clickhouse:clickhouse /var/lib/clickhouse

 

 


vi /etc/clickhouse-server/config.xml

Data directory, change this element:
<!-- Path to data directory, with trailing slash. -->
<path>/opt/clickhouse/data/</path>

Log directory, change these elements:
<level>trace</level>
<log>/opt/clickhouse/logs/clickhouse-server.log</log>
<errorlog>/opt/clickhouse/logs/clickhouse-server.err.log</errorlog>

 

4. Enable remote connections
vi /etc/clickhouse-server/config.xml

Uncomment the following two lines (they are commented out by default):

    <!-- Default values - try listen localhost on IPv4 and IPv6. -->

    <listen_host>::1</listen_host>
    <listen_host>0.0.0.0</listen_host>  <!-- change this to 0.0.0.0; it was 127.0.0.1 before -->
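Once the server has been started (step 5 below), a quick way to confirm it is reachable from other machines is the HTTP ping endpoint (a sketch, assuming curl is installed and the HTTP port is left at its default 8123):

    curl http://192.168.1.118:8123/ping
    Ok.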

 

 

 

 

At this point you can also copy the config file to the other 2 nodes and try starting the server there.

 

5. Start the server
[root@localhost clickhouse21]# systemctl start clickhouse-server.service

 

6. Log in
clickhouse-client --host localhost --port=9000 -m
clickhouse-client --host 192.168.1.118 --port=9000 -m
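A simple sanity check after logging in (a sketch; any query works) is to ask the server for its version and host name:

    SELECT version(), hostName();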

 

------------------------Cluster installation---------------------------------------------

1. Create the /etc/metrika.xml file

perftest_3shards_1replicas is the cluster name (for convenience it could simply be named default); the name is up to you. It must be used when creating databases and tables on the cluster, e.g. CREATE DATABASE db_test ON CLUSTER cluster_name; without ON CLUSTER only a local database/table is created on the node you are connected to.

 

<?xml version="1.0"?>
<yandex>
  <clickhouse_remote_servers>
    <perftest_3shards_1replicas>
      <shard>
        <internal_replication>true</internal_replication>
        <replica>
          <host>192.168.1.118</host>
          <port>9000</port>
        </replica>
      </shard>
      <shard>
        <internal_replication>true</internal_replication>
        <replica>
          <host>192.168.1.134</host>
          <port>9000</port>
        </replica>
      </shard>
      <shard>
        <internal_replication>true</internal_replication>
        <replica>
          <host>192.168.1.135</host>
          <port>9000</port>
        </replica>
      </shard>
    </perftest_3shards_1replicas>
  </clickhouse_remote_servers>
  <!-- ZooKeeper settings -->
  <zookeeper-servers>
    <node index="1">
      <host>192.168.1.118</host>
      <port>2182</port>
    </node>
    <node index="2">
      <host>192.168.1.134</host>
      <port>2182</port>
    </node>
    <node index="3">
      <host>192.168.1.135</host>
      <port>2182</port>
    </node>
  </zookeeper-servers>
  <macros>
    <shard_name>01</shard_name>
    <replica>192.168.1.118</replica>
  </macros>
  <networks>
    <ip>::/0</ip>
  </networks>
  <clickhouse_compression>
    <case>
      <min_part_size>10000000000</min_part_size>
      <min_part_size_ratio>0.01</min_part_size_ratio>
      <method>lz4</method>
    </case>
  </clickhouse_compression>
</yandex>

 

Create this file on every machine. Only the <macros> section differs per machine and must be adjusted to the machine's IP, as shown below; also make sure the ZooKeeper ports match your ZooKeeper deployment (2182 here).
192.168.1.118:

<macros>
     <shard_name>01</shard_name>
     <replica>192.168.1.118</replica>
</macros>


192.168.1.134:

<macros>
    <shard_name>02</shard_name>
    <replica>192.168.1.134</replica>
</macros>


192.168.1.135:

<macros>
    <shard_name>03</shard_name>
    <replica>192.168.1.135</replica>
</macros>
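The macros are not used again in this walkthrough, but they are what ReplicatedMergeTree tables would reference. A minimal sketch (the table name and ZooKeeper path are illustrative only); thanks to the per-machine macros, the path and replica name expand differently on each node:

CREATE TABLE user_replicated ON CLUSTER perftest_3shards_1replicas
(
    id Int32,
    name String
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard_name}/user_replicated', '{replica}')
ORDER BY id;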

 

 

2. Modify the /etc/clickhouse-server/config.xml file

Add <include_from>/etc/metrika.xml</include_from> below the following comment:

    <!-- If element has 'incl' attribute, then for it's value will be used corresponding substitution from another file.
         By default, path to file with substitutions is /etc/metrika.xml. It could be changed in config in 'include_from' element.
         Values for substitutions are specified in /yandex/name_of_substitution elements in that file.
      -->

    <include_from>/etc/metrika.xml</include_from>

 

 

Configure ZooKeeper. The <zookeeper> element is commented out by default; uncomment it and add the incl="zookeeper-servers" attribute, so the example nodes below are replaced by the zookeeper-servers substitution from /etc/metrika.xml:

    <zookeeper incl="zookeeper-servers" optional="true">
        <node>
            <host>example1</host>
            <port>2181</port>
        </node>
        <node>
            <host>example2</host>
            <port>2181</port>
        </node>
        <node>
            <host>example3</host>
            <port>2181</port>
        </node>
    </zookeeper>

 

 

Configure clickhouse_remote_servers. Find the <remote_servers> element and add the incl="clickhouse_remote_servers" attribute, so the cluster definition is pulled from /etc/metrika.xml:

    <!-- Configuration of clusters that could be used in Distributed tables.
         https://clickhouse.tech/docs/en/operations/table_engines/distributed/
      -->
    <remote_servers incl="clickhouse_remote_servers">

 

3. Restart the server
systemctl stop clickhouse-server.service
systemctl start clickhouse-server.service
systemctl status clickhouse-server.service
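If the service fails to start, the error log configured earlier (step 3 of the single-node installation) is the first place to look:

    tail -n 50 /opt/clickhouse/logs/clickhouse-server.err.log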

 

4. Log in and check the cluster
Log in on any one of the nodes and query system.clusters:
[root@localhost clickhouse-server]# clickhouse-client --host 192.168.1.118 --port=9000
ClickHouse client version 21.6.6.51 (official build).
Connecting to 192.168.1.118:9000 as user default.
Connected to ClickHouse server version 21.6.6 revision 54448.

localhost :) select * from system.clusters;

SELECT *
FROM system.clusters

Query id: e91b2180-45ae-4174-b733-228b9c14cdb3

┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ perftest_3shards_1replicas                   │         1 │            1 │           1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ perftest_3shards_1replicas                   │         2 │            1 │           1 │ 192.168.1.134 │ 192.168.1.134 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ perftest_3shards_1replicas                   │         3 │            1 │           1 │ 192.168.1.135 │ 192.168.1.135 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                      │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                      │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost            │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost            │         2 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost                         │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost_secure                  │         1 │            1 │           1 │ localhost     │ ::1           │ 9440 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                       │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                       │         2 │            1 │           1 │ localhost     │ ::1           │    1 │        0 │ default │                  │            0 │               0 │                       0 │
└──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

13 rows in set. Elapsed: 0.006 sec. 
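To look at just the new cluster, the same table can be filtered (a sketch; the column list is trimmed for readability):

SELECT cluster, shard_num, replica_num, host_name, port, is_local
FROM system.clusters
WHERE cluster = 'perftest_3shards_1replicas';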

 

5. Check the ZooKeeper information
localhost :) select * from system.zookeeper WHERE path='/';

SELECT *
FROM system.zookeeper
WHERE path = '/'

Query id: 1dad1e44-5f79-4b85-88fd-fd79dd5e407a

┌─name───────┬─value─┬──────czxid─┬──────mzxid─┬───────────────ctime─┬───────────────mtime─┬─version─┬─cversion─┬─aversion─┬─ephemeralOwner─┬─dataLength─┬─numChildren─┬──────pzxid─┬─path─┐
│ zookeeper  │       │          0 │          0 │ 1970-01-01 08:00:00 │ 1970-01-01 08:00:00 │       0 │       -2 │        0 │              0 │          0 │           2 │          0 │ /    │
│ clickhouse │       │ 4294967298 │ 4294967298 │ 2022-01-15 09:02:27 │ 2022-01-15 09:02:27 │       0 │        1 │        0 │              0 │          0 │           1 │ 4294967299 │ /    │
└────────────┴───────┴────────────┴────────────┴─────────────────────┴─────────────────────┴─────────┴──────────┴──────────┴────────────────┴────────────┴─────────────┴────────────┴──────┘

2 rows in set. Elapsed: 0.007 sec. 
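It is also worth confirming that each node picked up its own macros from /etc/metrika.xml (run on every node; a quick sketch):

SELECT * FROM system.macros;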

 

6. Data verification
Create a database:
Log in on one of the nodes; here we log in on 192.168.1.118:

[root@localhost clickhouse-server]# clickhouse-client --host localhost -m


localhost :) CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas;

CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas

Query id: cd14d017-2333-478a-a914-c7dee2437b9b

┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.118 │ 9000 │      0 │       │                   2 │                2 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.134 │ 9000 │      0 │       │                   1 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.135 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

3 rows in set. Elapsed: 0.447 sec. 

localhost :) show databases;

SHOW DATABASES

Query id: eaecfd6b-f98d-48a8-bfb0-49d80e8e147e

┌─name────┐
│ db_test │
│ default │
│ system  │
└─────────┘

 

Now log in on the other nodes and check: the newly created database db_test is visible on all of them.

[root@localhost clickhouse21]# clickhouse-client --host 192.168.1.134 -m
localhost :) show databases;

SHOW DATABASES

Query id: 5b7f7379-3e2f-4f8d-b411-5163ad151ba0

┌─name────┐
│ db_test │
│ default │
│ system  │
└─────────┘

[root@localhost clickhouse21]# clickhouse-client --host 192.168.1.135 -m

localhost :) show databases;

SHOW DATABASES

Query id: f446b2c4-eac4-4848-8980-0ad1fe80080e

┌─name────┐
│ db_test │
│ default │
│ system  │
└─────────┘

 

Create the local table

Log in on one of the machines; here we run it on 192.168.1.118 (the database db_test was already created above with CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas;).

[root@localhost clickhouse-server]# clickhouse-client --host 192.168.1.118 -m
localhost :) use db_test;

With the ON CLUSTER clause, the table is created on every node of the cluster:
CREATE TABLE user_local ON CLUSTER perftest_3shards_1replicas
(
id Int32,
name String
) engine = MergeTree
order by id
settings index_granularity = 8192;


Dropping the table on every node also requires the ON CLUSTER clause:
drop table user_local ON CLUSTER perftest_3shards_1replicas;


Create the distributed table. Again with ON CLUSTER, the same table is created on every node:
CREATE TABLE user_all ON CLUSTER perftest_3shards_1replicas(
id Int32,
name String) ENGINE = Distributed(perftest_3shards_1replicas, db_test, user_local, id);

The column order of the distributed table does not have to match the local table; at query time each column is mapped to the local table column with the same name.
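The last argument of the Distributed() engine is the sharding key; here rows are routed to shards based on id. A common alternative (not used in this walkthrough) is a random key:

ENGINE = Distributed(perftest_3shards_1replicas, db_test, user_local, rand())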

 

 

7. Write data
Insert data into the local tables:

On 192.168.1.118:
insert into user_local(id, name) values (1, 'zhangsan');
insert into user_local(id, name) values (2, 'lisi');

On 192.168.1.134:
insert into user_local(id, name) values (3, 'wangm');
insert into user_local(id, name) values (4, 'lijing');

On 192.168.1.135:
insert into user_local(id, name) values (5, 'zhangquan');
insert into user_local(id, name) values (6, 'lihua');

 

Query the distributed table:
Querying on any node, the distributed table returns the data from all 3 nodes:
localhost :) select * from user_all order by id;

SELECT *
FROM user_all
ORDER BY id ASC

Query id: 1c5a7bdf-b581-469d-8153-46563eb3e3af

┌─id─┬─name─────┐
│  1 │ zhangsan │
└────┴──────────┘
┌─id─┬─name─┐
│  2 │ lisi │
└────┴──────┘
┌─id─┬─name──┐
│  3 │ wangm │
└────┴───────┘
┌─id─┬─name───┐
│  4 │ lijing │
└────┴────────┘
┌─id─┬─name──────┐
│  5 │ zhangquan │
└────┴───────────┘
┌─id─┬─name──┐
│  6 │ lihua │
└────┴───────┘

 

We can also insert through the distributed table; which local node a row lands on is decided by ClickHouse's internal routing, driven here by the sharding key (id) in the Distributed engine definition:
insert into user_all(id, name) values (7, 'wangmingquan');
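To see which node a row actually landed on, one option (a sketch; hostName() is evaluated on the remote servers, so it reports the host that stores each row) is:

SELECT hostName() AS host, id, name FROM user_all WHERE id = 7;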

 

8. Add a column

Run on any one node:
alter table user_local ON CLUSTER perftest_3shards_1replicas add column name1 String;

Then check on every node that the column was added:
show create table user_local;


The distributed table user_all does not pick up the new column automatically; it has to be added by hand:
alter table user_all ON CLUSTER perftest_3shards_1replicas add column name1 String;

 

Adding a column that does not exist in the local table:
alter table user_all ON CLUSTER perftest_3shards_1replicas add column name2 String;

 

The statement itself succeeds, but querying the table then fails with an error:

SELECT *
FROM user_all

Query id: 4c0c2717-2b30-4d38-805b-5d8aea6248a1


0 rows in set. Elapsed: 0.004 sec. 

Received exception from server (version 21.6.6):
Code: 47. DB::Exception: Received from 192.168.1.118:9000. DB::Exception: There's no column 'user_local.name2' in table 'user_local': While processing user_local.name2.

 

 

Drop the column:
alter table user_all ON CLUSTER perftest_3shards_1replicas drop column name2;

 

9. Delete data

Deletes must be issued against the local table, not the distributed table. Either log in to every node and run the ALTER ... DELETE there, or use ON CLUSTER to run it on all nodes at once:

alter table user_local on cluster perftest_3shards_1replicas delete where 1=1;
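ALTER ... DELETE runs asynchronously as a mutation; one way to confirm it has finished (a sketch) is to check system.mutations on each node:

SELECT table, command, is_done FROM system.mutations WHERE table = 'user_local';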

 

 
