Excessive AWR Growth From Partitioned Objects Such as SYS.WRH$_EVENT_HISTOGRAM Causing Sysaux to Grow (Doc ID 1912201.1)
APPLIES TO:
Oracle Database Cloud Service - Version N/A and later
Oracle Database - Enterprise Edition - Version 11.2.0.4 and later
Oracle Database Cloud Schema Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Information in this document applies to any platform.
SYMPTOMS
- The AWR is growing on a nightly basis despite very little activity on the database
- Awrinfo shows the following partitioned objects taking most of the space:
EVENTS 580.0 WRH$_EVENT_HISTOGRAM_PK.WRH$_EVENT__703264452_0 - 97% INDEX PARTITION
EVENTS 489.0 WRH$_EVENT_HISTOGRAM.WRH$_EVENT__703264452_0 - 98% TABLE PARTITION
EVENTS 168.0 WRH$_SYSTEM_EVENT_PK.WRH$_SYSTEM_703264452_0 - 75% INDEX PARTITION
EVENTS 164.0 WRH$_SYSTEM_EVENT.WRH$_SYSTEM_703264452_0 - 94% TABLE PARTITION
WRH$_OSSTAT.WRH$_OSSTAT_703264452_0 - 74% TABLE PARTITION
The awrinfo report can be found under the Oracle Home directory in the rdbms/admin subdirectory. It can be executed as follows:
SQL> @?/rdbms/admin/awrinfo.sql
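As a quick cross-check (this query is not part of the original report output), the SYSAUX occupants view can confirm that the AWR (occupant SM/AWR) is the largest space consumer:
SQL> SELECT occupant_name, space_usage_kbytes
       FROM v$sysaux_occupants
      ORDER BY space_usage_kbytes DESC;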
CAUSE
This is due to the following known issue: Bug 14084247.
Note that even after the patch for Bug 14084247 is applied, the fix will not have an immediate effect and may take some time to "catch up".
Because the purge operation fails, the partition split does not occur. This causes problems: the partitions grow bigger and bigger, hurting query performance and tablespace reclamation, and making the purge operation even slower.
Once the AWR repository reaches the point where the purge job times out, the repository grows daily and never shrinks.
After the 14084247 fix is applied, the relevant purge steps run in this order:
1) drop expired partitions (DDL)
2) purge by callback (anything can happen here, but mostly no-ops or regular DELETE operations)
3) purge regular non-partitioned tables (i.e., DELETE)
4) split partitions (DDL)
This is the crux of the problem: if step 2 or step 3 is slow, the partitions do not split. If the partitions do not split, they never become expired, so step 1 cannot drop them on the next run. They therefore grow and grow, which can itself make step 2 or step 3 slower, depending on how the data is related.
The fix adds a separate split operation (step 4 above), performed by the snapshot when needed. That means the next time the purge runs, there will be one more partition. However, that partition is not necessarily ready to be purged yet: having been created only a few hours earlier, it still holds active data.
In this case, the retention period is 8 days.
The partition(s) created by the snapshot will typically remain throughout the retention period. If they are created on 1-JAN, they will not be purged (by step 1 of the regular purge) until 9-JAN; in general, a partition should be dropped no later than one retention period (8 days in this case) after its creation. The fix adds partitions for future snapshots, leaving the existing content in the first partition, where it has been building from the beginning.
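To see whether the fix is taking effect, the AWR partitions can be listed over successive days: new WRH$ partitions should appear after snapshots, and expired ones should eventually be dropped. A sketch using the standard DBA_TAB_PARTITIONS view:
SQL> SELECT table_name, partition_name
       FROM dba_tab_partitions
      WHERE table_owner = 'SYS'
        AND table_name LIKE 'WRH$%'
      ORDER BY table_name, partition_name;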
SOLUTION
1. Download and apply Patch 14084247.
2. Another way to speed up the purging is to temporarily lower the retention period to 1 day:
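The exact command is not preserved in this excerpt; a sketch using the documented DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS procedure (retention is specified in minutes, so 1 day = 1440) would be:
SQL> exec dbms_workload_repository.modify_snapshot_settings(retention => 1440);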
If the above command results in the following error, alter the baseline window size:
ORA-13541: system moving window baseline size (691200) greater than retention
(86400)
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 174
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 222
ORA-06512: at line 1
To alter the baseline window size:
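Again as a sketch, assuming the documented MODIFY_BASELINE_WINDOW_SIZE procedure (window_size is specified in days), the moving window can be reduced to match the 1-day retention:
SQL> exec dbms_workload_repository.modify_baseline_window_size(window_size => 1);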
Then, rerun the snapshot setting:
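A sketch of rerunning the retention change and verifying the new settings via the documented DBA_HIST_WR_CONTROL view:
SQL> exec dbms_workload_repository.modify_snapshot_settings(retention => 1440);
SQL> SELECT snap_interval, retention FROM dba_hist_wr_control;
Once SYSAUX has been purged down, the retention can be raised back to its normal value (8 days in this case, i.e. 11520 minutes).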