The Impact of Manually Vacuuming a Partitioned Table with a Global Partitioned Index in KingbaseES V8R6
Background
At a customer site there was a case like this: a large table with 500 partitions, each holding 200,000 rows. Updates were very frequent and constantly triggered autovacuum, and because the table was so large, autovacuum took a long time. Colleagues on site reported that manually vacuuming a partition raised the error "multiple backends attempting to wait for pincount 1", which roughly means the manual vacuum conflicted with an autovacuum process. Querying the sys_stat_activity view confirmed that, sure enough, all three default autovacuum workers were busy. This is governed by the autovacuum_max_workers parameter (default 3), which sets how many autovacuum workers may run concurrently.
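The worker usage described above can be checked with a query like the following. This is a sketch: sys_stat_activity in KingbaseES mirrors PostgreSQL's pg_stat_activity, and the column names and the 'autovacuum:' query prefix are assumptions based on that compatibility.

```sql
-- How many autovacuum workers are busy right now?
SELECT pid, state, query
FROM sys_stat_activity
WHERE query LIKE 'autovacuum:%';

-- Upper bound on concurrent autovacuum workers (default 3)
show autovacuum_max_workers;
```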
Ideally, a problem like this should be avoided before the application is planned and deployed, based on the business workload: for example, by splitting the business, tuning parameters in advance, or provisioning better storage. For now, autovacuum_max_workers could be increased, but raising it may put excessive pressure on I/O, so the parameter should be adjusted gradually while monitoring system resource usage. Since any change requires the customer's approval, the current plan is to disable the autovacuum process and vacuum the partitioned table manually. The next step is to verify the impact of manually vacuuming a partitioned table in two cases: with local partitioned indexes, and with a global partitioned index.
In the test environment, the autovacuum process is disabled (see below).
Test
Verify whether concurrently vacuuming two partitions of a table with local partitioned indexes causes lock waits:
create table part_tab01(part_key char(1),state char(1),desc_content text)
partition by range(part_key)
(
partition part_0 values less than(1),
partition part_1 values less than(2)
);
insert into part_tab01 select '0','0',repeat('a',1000) from generate_series(1,500000);
insert into part_tab01 select '0','1',repeat('b',1000) from generate_series(1,10);
insert into part_tab01 select '1','1',repeat('a',1000) from generate_series(1,500000);
insert into part_tab01 select '1','0',repeat('b',1000) from generate_series(1,10);
create index idx_part_tab01_state on part_tab01(state);
select count(*) from part_tab01_part_0;
select count(*) from part_tab01_part_1;
alter system set autovacuum=off;
select sys_reload_conf();
show autovacuum;
update part_tab01_part_0 set desc_content='aaa';
update part_tab01_part_1 set desc_content='bbb';
Check the partition bloat ratio:
SELECT
schemaname||'.'||relname,
n_dead_tup,
n_live_tup,
round(n_dead_tup * 100 / (n_live_tup + n_dead_tup),2) AS dead_tup_ratio
FROM
pg_stat_all_tables
WHERE relname='part_tab01_part_0'
ORDER BY dead_tup_ratio DESC;
?column? | n_dead_tup | n_live_tup | dead_tup_ratio
--------------------------+------------+------------+----------------
public.part_tab01_part_0 | 100010 | 100010 | 50.00
(1 row)
SELECT
schemaname||'.'||relname,
n_dead_tup,
n_live_tup,
round(n_dead_tup * 100 / (n_live_tup + n_dead_tup),2) AS dead_tup_ratio
FROM
pg_stat_all_tables
WHERE relname='part_tab01_part_1'
ORDER BY dead_tup_ratio DESC;
?column? | n_dead_tup | n_live_tup | dead_tup_ratio
--------------------------+------------+------------+----------------
public.part_tab01_part_1 | 100010 | 100010 | 50.00
(1 row)
Execute the following vacuum commands simultaneously in two separate sessions:
vacuum part_tab01_part_0;
vacuum part_tab01_part_1;
There were no lock waits; both partitions were vacuumed concurrently without issue.
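The absence of lock waits can be confirmed from a third session while the two vacuums are running. This is a sketch assuming the PostgreSQL-compatible pg_locks view; an empty result means no session is waiting on a lock.

```sql
-- Any ungranted lock indicates a waiter; expect zero rows here
SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
WHERE NOT granted;
```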
Verify whether concurrently vacuuming two partitions of a table with a global partitioned index causes lock waits:
create table t1_part(id1 integer,id2 integer,id3 integer)
partition by range(id2)
(
partition part01 values less than(10000),
partition part02 values less than(20000),
partition part03 values less than(30000),
partition part04 values less than(40000),
partition part05 values less than(50000),
partition part06 values less than(60000),
partition part07 values less than(70000),
partition part08 values less than(80000),
partition part09 values less than(90000),
partition part10 values less than(maxvalue)
);
create unique index idx2_t1_part on t1_part(id2) global ;
create index idx3_t1_part on t1_part(id2) ;
Check the indexes:
TEST=# \di+ idx2_t1_part
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+--------------+--------------+--------+---------+------------+-------------
public | idx2_t1_part | global index | system | t1_part | 8192 bytes |
(1 row)
TEST=# \di+ idx3_t1_part;
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+--------------+-------------------+--------+---------+---------+-------------
public | idx3_t1_part | partitioned index | system | t1_part | 0 bytes |
(1 row)
insert into t1_part select generate_series(1,100000),generate_series(1,100000),generate_series(1,100000);
update t1_part set id3='443';
The query below shows that each partition's bloat ratio has long since exceeded the autovacuum threshold; with so many partitions, the results are omitted:
SELECT
schemaname||'.'||relname,
n_dead_tup,
n_live_tup,
round(n_dead_tup * 100 / (n_live_tup + n_dead_tup),2) AS dead_tup_ratio
FROM
pg_stat_all_tables
WHERE relname ='t1_part_part01'
ORDER BY dead_tup_ratio DESC;
Execute the following vacuum commands simultaneously in separate sessions:
vacuum t1_part_part01;
vacuum t1_part_part02;
vacuum t1_part_part03;
As expected, vacuuming the child partitions concurrently fails with an error on the global partitioned index: concurrent vacuums of the child partitions are not allowed. The reason is that vacuuming a partition also vacuums its indexes, so vacuuming multiple partitions at the same time means vacuuming the same global partitioned index concurrently, which causes the conflict.
TEST=# vacuum t1_part_part02
TEST-# ;
ERROR: multiple active vacuums for index "idx2_t1_part"
At this point, only vacuuming the parent partitioned table succeeds:
TEST=# vacuum t1_part;
VACUUM
The bloat ratio of every partition has now dropped to 0; with so many partitions, only two are shown here:
TEST=# SELECT
TEST-# schemaname||'.'||relname,
TEST-# n_dead_tup,
TEST-# n_live_tup,
TEST-# round(n_dead_tup * 100 / (n_live_tup + n_dead_tup),2) AS dead_tup_ratio
TEST-# FROM
TEST-# pg_stat_all_tables
TEST-# WHERE relname ='t1_part_part05'
TEST-# ORDER BY dead_tup_ratio DESC;
?column? | n_dead_tup | n_live_tup | dead_tup_ratio
-----------------------+------------+------------+----------------
public.t1_part_part05 | 0 | 10000 | 0.00
(1 row)
TEST=# SELECT
TEST-# schemaname||'.'||relname,
TEST-# n_dead_tup,
TEST-# n_live_tup,
TEST-# round(n_dead_tup * 100 / (n_live_tup + n_dead_tup),2) AS dead_tup_ratio
TEST-# FROM
TEST-# pg_stat_all_tables
TEST-# WHERE relname ='t1_part_part06'
TEST-# ORDER BY dead_tup_ratio DESC;
?column? | n_dead_tup | n_live_tup | dead_tup_ratio
-----------------------+------------+------------+----------------
public.t1_part_part06 | 0 | 10000 | 0.00
(1 row)
Summary
- Vacuuming a table requires a ShareUpdateExclusiveLock, and two ShareUpdateExclusiveLocks on the same table conflict, so two processes cannot vacuum the same table at the same time.
- Vacuuming the parent of a partitioned table actually vacuums each child partition.
- For a partitioned table with a global partitioned index, when vacuuming manually, vacuum the parent table and use the INDEX_CLEANUP option: vacuum (verbose, INDEX_CLEANUP false) t1_part; This is especially relevant to this case: when a global partitioned index exists and both table and index are heavily bloated, it is advisable to skip the indexes during the vacuum and rebuild the index after the vacuum completes.
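The sequence recommended in the last point can be sketched as follows; the REINDEX step is an assumption based on standard PostgreSQL-style syntax and should be verified against the KingbaseES documentation.

```sql
-- Reclaim dead tuples in the table but skip index cleanup entirely
vacuum (verbose, INDEX_CLEANUP false) t1_part;

-- Afterwards, rebuild the bloated global partitioned index
reindex index idx2_t1_part;
```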
For details on this option, see the cnblogs article《KingbaseES V8R6 vacuum index_cleanup 选项》:
https://www.cnblogs.com/kingbase/p/16145397.html
KINGBASE Research Institute