ClickHouse replicated deployment (1 shard, 2 replicas)

 

Environment:
192.168.1.118
192.168.1.134

OS: CentOS 7

DB: ClickHouse 21.9.7

 

1. Install ZooKeeper
ZooKeeper can be deployed either as a cluster or as a single node.

For cluster installation steps, see: https://www.cnblogs.com/hxlasky/p/14665752.html

Here I installed a single-node ZooKeeper.

 

2. Install ClickHouse on every node
For installation steps, see:
https://www.cnblogs.com/hxlasky/p/15325313.html

 

3. Create the /etc/metrika.xml file

On node 192.168.1.118:

<?xml version="1.0"?>
<yandex>
  <clickhouse_remote_servers>
    <perftest_1shards_2replicas>
      <shard>
        <internal_replication>true</internal_replication>
        <replica>
          <host>192.168.1.118</host>
          <port>9000</port>
        </replica>
        <replica>
          <host>192.168.1.134</host>
          <port>9000</port>
        </replica>
      </shard>
    </perftest_1shards_2replicas>
  </clickhouse_remote_servers>
  <!-- ZooKeeper configuration -->
  <zookeeper-servers>
    <node index="1">
      <host>192.168.1.118</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>
  <macros>
    <shard_name>01</shard_name>
    <replica>192.168.1.118</replica>
  </macros>
  <networks>
    <ip>::/0</ip>
  </networks>
  <clickhouse_compression>
    <case>
      <min_part_size>10000000000</min_part_size>
      <min_part_size_ratio>0.01</min_part_size_ratio>
      <method>lz4</method>
    </case>
  </clickhouse_compression>
</yandex>

 

 

On node 192.168.1.134:

<?xml version="1.0"?>
<yandex>
  <clickhouse_remote_servers>
    <perftest_1shards_2replicas>
      <shard>
        <internal_replication>true</internal_replication>
        <replica>
          <host>192.168.1.118</host>
          <port>9000</port>
        </replica>
        <replica>
          <host>192.168.1.134</host>
          <port>9000</port>
        </replica>
      </shard>
    </perftest_1shards_2replicas>
  </clickhouse_remote_servers>
  <!-- ZooKeeper configuration -->
  <zookeeper-servers>
    <node index="1">
      <host>192.168.1.118</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>
  <macros>
    <shard_name>01</shard_name>
    <replica>192.168.1.134</replica>
  </macros>
  <networks>
    <ip>::/0</ip>
  </networks>
  <clickhouse_compression>
    <case>
      <min_part_size>10000000000</min_part_size>
      <min_part_size_ratio>0.01</min_part_size_ratio>
      <method>lz4</method>
    </case>
  </clickhouse_compression>
</yandex>
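The two metrika.xml files above differ only in the value of the `<replica>` macro. A minimal sketch of rendering both node-specific fragments from one template — only the differing `<macros>` section is templated here, and the scratch directory `./metrika-out` and the `__REPLICA_IP__` placeholder are illustrative, not part of ClickHouse:

```shell
#!/bin/sh
# Sketch: generate the per-node <macros> fragment from one template.
# Only the section that differs between the two nodes is templated;
# ./metrika-out is just a local scratch directory for illustration.
set -e
OUT=./metrika-out
mkdir -p "$OUT"

# Template with a placeholder for the node-specific replica name
cat > "$OUT/macros.tpl" <<'EOF'
<macros>
  <shard_name>01</shard_name>
  <replica>__REPLICA_IP__</replica>
</macros>
EOF

# Render one fragment per node
for ip in 192.168.1.118 192.168.1.134; do
  sed "s/__REPLICA_IP__/$ip/" "$OUT/macros.tpl" > "$OUT/macros.$ip.xml"
done
```

In a real rollout each rendered fragment would be merged into that node's /etc/metrika.xml (for example, distributed with scp as in step 4 below).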

 

 

4. Modify the main config file to include metrika.xml

Edit /etc/clickhouse-server/config.xml and add/adjust the following elements (the rest of the stock config.xml stays unchanged):

<yandex>
    <!-- Pull in metrika.xml -->
    <include_from>/etc/metrika.xml</include_from>

    <!-- Reference the zookeeper-servers section defined in metrika.xml -->
    <zookeeper incl="zookeeper-servers" optional="true" />

    <!-- Reference the clickhouse_remote_servers section defined in metrika.xml -->
    <remote_servers incl="clickhouse_remote_servers" />

    <!-- Reference the macros section defined in metrika.xml -->
    <macros incl="macros" optional="true" />
</yandex>

 

Copy the config file to the other node:
scp /etc/clickhouse-server/config.xml root@192.168.1.134:/etc/clickhouse-server/

 

5. Set the interserver communication address
vi /etc/clickhouse-server/config.xml

 

On node 192.168.1.118:
<interserver_http_host>192.168.1.118</interserver_http_host>

On node 192.168.1.134:
<interserver_http_host>192.168.1.134</interserver_http_host>
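This per-node edit can also be scripted. A minimal sketch using sed, run here against a local sample file rather than the real /etc/clickhouse-server/config.xml (the file name config.sample.xml and the MY_IP variable are illustrative; MY_IP would be each node's own address):

```shell
#!/bin/sh
# Sketch: rewrite interserver_http_host in place with sed.
# Operates on a local sample file; on the real hosts the target would
# be /etc/clickhouse-server/config.xml and MY_IP the node's own IP.
set -e
cat > config.sample.xml <<'EOF'
<yandex>
    <interserver_http_host>REPLACE_ME</interserver_http_host>
</yandex>
EOF

MY_IP=192.168.1.118   # use 192.168.1.134 on the other node
sed -i "s|<interserver_http_host>.*</interserver_http_host>|<interserver_http_host>$MY_IP</interserver_http_host>|" config.sample.xml
```

Note that `sed -i` with no suffix is GNU sed syntax, which matches the CentOS 7 environment used in this post.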

 

6. Restart
Restart the clickhouse-server service on both nodes:
systemctl stop clickhouse-server.service
systemctl start clickhouse-server.service
systemctl status clickhouse-server.service

 

7. Check the cluster status
Run the query on both 192.168.1.118 and 192.168.1.134:

[root@localhost clickhouse219]# clickhouse-client --host 192.168.1.118 --port=9000
SELECT *
FROM system.clusters

Query id: cae1ecb0-35c9-445b-892e-9e40f3ceff5a

┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ perftest_1shards_2replicas                   │         1 │            1 │           1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ perftest_1shards_2replicas                   │         1 │            1 │           2 │ 192.168.1.134 │ 192.168.1.134 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                      │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards                      │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_internal_replication │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost            │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_cluster_two_shards_localhost            │         2 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost                         │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_shard_localhost_secure                  │         1 │            1 │           1 │ localhost     │ ::1           │ 9440 │        0 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                       │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ test_unavailable_shard                       │         2 │            1 │           1 │ localhost     │ ::1           │    1 │        0 │ default │                  │            0 │               0 │                       0 │
└──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

12 rows in set. Elapsed: 0.007 sec. 

 

8. Create a database

Run this on one of the nodes; here I run it on 192.168.1.118:

CREATE DATABASE db_test ON CLUSTER perftest_1shards_2replicas;

localhost :) CREATE DATABASE db_test ON CLUSTER perftest_1shards_2replicas;

CREATE DATABASE db_test ON CLUSTER perftest_1shards_2replicas

Query id: 631e56fc-f4ed-4a69-a289-d3abb474965f

┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.118 │ 9000 │      0 │       │                   1 │                1 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.134 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

 

 

Log in and verify the database on each node:
clickhouse-client --host 192.168.1.118 --port=9000 -m
localhost :) show databases;

SHOW DATABASES

Query id: 55c214f1-6d65-4f49-bab0-134442c7d40e

┌─name────┐
│ db_test │
│ default │
│ system  │
└─────────┘

clickhouse-client --host 192.168.1.134 --port=9000 -m

localhost :) show databases;

SHOW DATABASES

Query id: 55c214f1-6d65-4f49-bab0-134442c7d40e

┌─name────┐
│ db_test │
│ default │
│ system  │
└─────────┘

 

9. Create a table and write data

Run this on one of the nodes; here I run it on 192.168.1.118:

clickhouse-client --host 192.168.1.118 --port=9000 -m

CREATE TABLE db_test.tb_test01 ON CLUSTER 'perftest_1shards_2replicas'
(
`id` Int64,
`p` Int16
)
ENGINE = ReplicatedMergeTree( '/clickhouse/tables/replicated/tb_test01', '{replica}')
PARTITION BY p
ORDER BY id;

INSERT INTO db_test.tb_test01 VALUES(33,33);
INSERT INTO db_test.tb_test01 VALUES(44,44);
INSERT INTO db_test.tb_test01 VALUES(55,55);
INSERT INTO db_test.tb_test01 VALUES(66,66);
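Why the writes replicate: both nodes pass the same ZooKeeper path as the first ReplicatedMergeTree argument, while {replica} expands to each node's own `<replica>` macro, so the two servers register as two replicas of one table under that shared path. A sketch of the expansion (replica names taken from the macros files above; the `/replicas/` suffix reflects how ClickHouse lays out replica entries under the table path):

```shell
#!/bin/sh
# Sketch: expand the {replica} macro the way each node would.
# Both nodes share ZK_PATH; only the replica component differs.
ZK_PATH='/clickhouse/tables/replicated/tb_test01'
for replica in 192.168.1.118 192.168.1.134; do
  # Each node registers itself under <table path>/replicas/<replica macro>
  echo "$ZK_PATH/replicas/$replica"
done
```

Because the replica names differ, each INSERT on one node is fetched and applied by the other, which is what step 9's cross-node SELECT demonstrates.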

 

Check the data on the other node:

clickhouse-client --host 192.168.1.134 --port=9000 -m

localhost :) select * from db_test.tb_test01;

SELECT *
FROM db_test.tb_test01

Query id: c231b503-1d01-488b-8ea5-ccac69c02b78

┌─id─┬──p─┐
│ 55 │ 55 │
└────┴────┘
┌─id─┬──p─┐
│ 66 │ 66 │
└────┴────┘
┌─id─┬──p─┐
│ 33 │ 33 │
└────┴────┘
┌─id─┬──p─┐
│ 44 │ 44 │
└────┴────┘

 

posted @ slnngk