citus 11: a simple usage reference
A quick test of Citus 11's query features based on the official docker-compose template, with HAProxy used for load balancing (lb).
Test notes
docker-compose is used to scale the worker nodes. Note that here the workers are prepared up front; in real deployments you would add nodes with the Citus management SQL functions.
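For reference, adding nodes by hand with the management SQL looks roughly like this (a sketch run on the coordinator; the host names are assumptions for illustration):

```sql
-- run on the coordinator; host names below are placeholders
SELECT citus_set_coordinator_host('master', 5432);
-- register each worker in the cluster metadata
SELECT citus_add_node('worker-1', 5432);
SELECT citus_add_node('worker-2', 5432);
-- verify the registered workers
SELECT * FROM citus_get_active_worker_nodes();
```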
Environment setup
- docker-compose file
version: "3"
services:
  haproxy:
    build: ./
    volumes:
      - "./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg"
    ports:
      - 5433:5433
      - 8404:8404
  master:
    container_name: "${COMPOSE_PROJECT_NAME:-citus}_master"
    image: "citusdata/citus:11.0.2"
    ports: [ "${COORDINATOR_EXTERNAL_PORT:-5432}:5432" ]
    labels: [ "com.citusdata.role=Master" ]
    environment: &AUTH
      POSTGRES_USER: "${POSTGRES_USER:-postgres}"
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
      PGUSER: "${POSTGRES_USER:-postgres}"
      PGPASSWORD: "${POSTGRES_PASSWORD}"
      POSTGRES_HOST_AUTH_METHOD: "${POSTGRES_HOST_AUTH_METHOD:-trust}"
  worker:
    image: "citusdata/citus:11.0.2"
    labels: [ "com.citusdata.role=Worker" ]
    depends_on: [ manager ]
    environment: *AUTH
    command: "/wait-for-manager.sh"
    volumes:
      - healthcheck-volume:/healthcheck
  manager:
    container_name: "${COMPOSE_PROJECT_NAME:-citus}_manager"
    image: "citusdata/membership-manager:0.3.0"
    volumes:
      - "${DOCKER_SOCK:-/var/run/docker.sock}:/var/run/docker.sock"
      - healthcheck-volume:/healthcheck
    depends_on: [ master ]
    environment: *AUTH
volumes:
  healthcheck-volume:
- haproxy configuration
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    # daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
resolvers mynameservers
    nameserver ns1 127.0.0.11:53
frontend mysite
    bind :5433
    default_backend webservers
backend webservers
    balance roundrobin
    mode tcp
    # DNS-based load balancing; for this test the coordinator node's service is not added to the backend
    server web1 worker:5432 check maxconn 300 resolvers mynameservers
frontend stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    http-request use-service prometheus-exporter if { path /metrics }
    stats refresh 10s
    stats admin if LOCALHOST
- haproxy base image
FROM dalongrong/haproxy:2.6-debian-quic
Testing
- startup
docker-compose up -d
- scaling worker nodes
By default the manager (membership-manager) service registers new nodes automatically.
docker-compose -p citus scale worker=5
Result:
SELECT master_get_active_worker_nodes();
haproxy monitoring
- create distributed tables
CREATE TABLE data (key text primary key, value jsonb);
SELECT create_distributed_table('data','key');
CREATE TABLE data_audit (operation text, key text, new_value jsonb, change_time timestamptz default now());
SELECT create_distributed_table('data_audit','key', colocate_with := 'data');
-- we know this function only writes to a co-located table using the same key
CREATE OR REPLACE FUNCTION audit_trigger()
RETURNS trigger
AS $$
DECLARE
BEGIN
INSERT INTO data_audit VALUES (TG_OP, Coalesce(OLD.key, NEW.key), NEW.value);
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
-- so, it is safe to enable triggers on distributed tables
SET citus.enable_unsafe_triggers TO on;
CREATE TRIGGER data_audit_trigger
AFTER INSERT OR UPDATE OR DELETE ON data
FOR EACH ROW EXECUTE FUNCTION audit_trigger();
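A quick sanity check of the trigger above (a sketch; run against the cluster created earlier):

```sql
-- insert and update a row; the trigger fires on the worker that owns the shard
INSERT INTO data VALUES ('hello', '{"v": 1}');
UPDATE data SET value = '{"v": 2}' WHERE key = 'hello';
-- both audit rows land in the co-located data_audit shard for the same key
SELECT operation, key, new_value FROM data_audit WHERE key = 'hello';
```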
Queries and writes
Connecting with a client tool, the coordinator is on port 5432 and the LB on 5433; both can serve writes as well as queries (in Citus 11 any node can handle queries). Schema (DDL) changes, however, still need to go through the coordinator.
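For example, regular DML works through either endpoint, while DDL is routed to the coordinator (a sketch; assumes the `data` table created above):

```sql
-- via either port 5432 (coordinator) or 5433 (HAProxy -> worker):
INSERT INTO data VALUES ('k1', '{"a": 1}');
SELECT count(*) FROM data;
-- via the coordinator (port 5432) only:
ALTER TABLE data ADD COLUMN note text;
```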
Some reference material
From the official introduction
Notes
Citus 11 is a release well worth adopting: its enterprise features are now fully open source, so large-scale use carries much less risk, and maintenance becomes easier as well.
References
https://www.citusdata.com/blog/2022/06/17/citus-11-goes-fully-open-source/#any-node