Finding slow SQL in LightDB
LightDB installs the lt_stat_statements extension by default, and the statistics it collects can be used to find slow SQL.
Per-statement execution statistics are recorded in the pg_stat_statements view:
lightdb@postgres=# \d pg_stat_statements
                  View "lt_catalog.pg_stat_statements"
       Column        |       Type       | Collation | Nullable | Default
---------------------+------------------+-----------+----------+---------
 userid              | oid              |           |          |
 dbid                | oid              |           |          |
 queryid             | bigint           |           |          |
 query               | text             |           |          |
 plans               | bigint           |           |          |
 total_plan_time     | double precision |           |          |
 min_plan_time       | double precision |           |          |
 max_plan_time       | double precision |           |          |
 mean_plan_time      | double precision |           |          |
 stddev_plan_time    | double precision |           |          |
 calls               | bigint           |           |          |
 total_exec_time     | double precision |           |          |
 min_exec_time       | double precision |           |          |
 max_exec_time       | double precision |           |          |
 mean_exec_time      | double precision |           |          |
 stddev_exec_time    | double precision |           |          |
 rows                | bigint           |           |          |
 shared_blks_hit     | bigint           |           |          |
 shared_blks_read    | bigint           |           |          |
 shared_blks_dirtied | bigint           |           |          |
 shared_blks_written | bigint           |           |          |
 local_blks_hit      | bigint           |           |          |
 local_blks_read     | bigint           |           |          |
 local_blks_dirtied  | bigint           |           |          |
 local_blks_written  | bigint           |           |          |
 temp_blks_read      | bigint           |           |          |
 temp_blks_written   | bigint           |           |          |
 blk_read_time       | double precision |           |          |
 blk_write_time      | double precision |           |          |
 wal_records         | bigint           |           |          |
 wal_fpi             | bigint           |           |          |
 wal_bytes           | numeric          |           |          |
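If the view does not exist, first confirm that the extension is actually installed. A minimal check, assuming LightDB exposes the standard pg_extension catalog and a pg_stat_statements_reset() function like stock PostgreSQL (the reset function may carry an lt_ prefix in some releases, so treat the name as an assumption):
-- list any installed *stat_statements extension
SELECT extname, extversion FROM pg_extension WHERE extname LIKE '%stat_statements%';
-- optionally clear accumulated statistics before a measurement window
SELECT pg_stat_statements_reset();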
1. Find the 10 SQL statements with the longest execution time, sorted by max_exec_time:
select max_exec_time,query from pg_stat_statements order by max_exec_time desc limit 10;
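max_exec_time can be dominated by a single outlier run. As a hedged variant (not part of the original text), ranking by mean_exec_time together with the call count gives a steadier picture:
select calls, mean_exec_time, max_exec_time, query from pg_stat_statements order by mean_exec_time desc limit 10;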
2. Find the most frequently executed SQL statements, sorted by calls:
select calls,query from pg_stat_statements order by calls desc limit 10;
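A high call count by itself does not make a statement slow. A companion query (a suggestion, using only columns shown in the view above) that ranks by cumulative cost catches statements that are cheap per call but expensive in aggregate:
select calls, total_exec_time, mean_exec_time, query from pg_stat_statements order by total_exec_time desc limit 10;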
3. Determine the SQL statements that consume the most time by finding queries whose execution time exceeds a given percentage of the total time of all queries.
(1) Compute the total time consumed by all queries (a per-call variant of these totals is sketched after step (3)):
SELECT sum(total_exec_time) AS total_time,
sum(blk_read_time+blk_write_time) AS io_time,
sum(total_exec_time-blk_read_time-blk_write_time) AS cpu_time,
sum(calls) AS ncalls,
sum(rows) AS total_rows
FROM pg_stat_statements
WHERE dbid IN
(SELECT oid
FROM pg_database
WHERE datname=current_database());
(2) Find the SQL statements whose CPU time exceeds 5% of the total CPU time consumed by all statements:
with total AS
(SELECT sum(total_exec_time) AS total_time,
sum(blk_read_time+blk_write_time) AS io_time,
sum(total_exec_time-blk_read_time-blk_write_time) AS cpu_time,
sum(calls) AS ncalls,
sum(rows) AS total_rows
FROM pg_stat_statements
WHERE dbid IN
(SELECT oid
FROM pg_database
WHERE datname=current_database()))
SELECT *,
(pss.total_exec_time - pss.blk_read_time - pss.blk_write_time)/total.cpu_time*100 cpu_pct
FROM pg_stat_statements pss,total
WHERE (pss.total_exec_time - pss.blk_read_time - pss.blk_write_time)/total.cpu_time >= 0.05
ORDER BY pss.total_exec_time - pss.blk_read_time - pss.blk_write_time desc;
(3) Find the SQL statements whose IO time exceeds 5% of the total IO time consumed by all statements. Note that blk_read_time and blk_write_time are only populated when block I/O timing is enabled (track_io_timing in PostgreSQL), which is why this query guards against a zero total.io_time:
with total AS
(SELECT sum(total_exec_time) AS total_time,
sum(blk_read_time+blk_write_time) AS io_time,
sum(total_exec_time-blk_read_time-blk_write_time) AS cpu_time,
sum(calls) AS ncalls,
sum(rows) AS total_rows
FROM pg_stat_statements
WHERE dbid IN
(SELECT oid
FROM pg_database
WHERE datname=current_database()))
SELECT *,
(pss.blk_read_time+pss.blk_write_time)/total.io_time*100 io_pct
FROM pg_stat_statements pss,total
WHERE (pss.blk_read_time+pss.blk_write_time)/total.io_time >= 0.05 and total.io_time > 0
ORDER BY pss.blk_read_time+pss.blk_write_time desc;
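As a quick cross-check on the totals from step (1) (an optional addition, not in the original), they can also be expressed as per-call averages; nullif guards against an empty statistics table, and total_exec_time is reported in milliseconds:
SELECT sum(total_exec_time)/nullif(sum(calls),0) AS avg_exec_time_ms,
       sum(rows)/nullif(sum(calls),0) AS avg_rows_per_call
FROM pg_stat_statements
WHERE dbid IN
    (SELECT oid
     FROM pg_database
     WHERE datname=current_database());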
4. Full table scans on larger tables are usually inefficient. The following SQL finds tables larger than 100 KB that have been sequentially scanned more than 10 times:
SELECT relname,
pg_relation_size(relid) AS rel_size,
seq_scan,
idx_scan
FROM pg_stat_all_tables
WHERE pg_relation_size(relid) > 100000
AND seq_scan >10
AND schemaname='public'
ORDER BY seq_scan desc;
For each table found, use its name to locate the SQL statements in pg_stat_statements that perform the full table scans (see the sketch below), and then analyze whether the scans are caused by a missing index or by something else.
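A minimal sketch for that follow-up, assuming the table name is distinctive enough to match against the normalized query text; t_example is a placeholder for a table returned by the query above:
SELECT calls, total_exec_time, mean_exec_time, query
FROM pg_stat_statements
WHERE query ilike '%t_example%'
ORDER BY total_exec_time desc;
The matching statements can then be run through EXPLAIN to confirm whether the planner really chooses a sequential scan and why.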