Oracle Database - Enterprise Edition - Version 10.2.0.2 and later
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Information in this document applies to any platform.
*** Checked for relevance on 6-Oct-2014 ***
*** Checked for relevance on 8-Mar-2016 ***
Symptoms
The cluster C_OBJ#_INTCOL#, which stores the optimizer histogram data (the HISTGRM$ table) in the SYSTEM tablespace, is growing too fast.
Cause
The default statistics job (GATHER_STATS_JOB), or any other job that uses DBMS_STATS to collect database statistics, is being run frequently against a database that contains a large number of subpartitions. To explain how the large number of subpartitions contributes to the problem, consider the following example:
If 55,000 subpartitions are in use, all of them contain data, each subpartition has 100 columns, and each histogram has 200 buckets, then the worst-case number of rows in the HISTGRM$ table for these objects is:
55,000 * 100 * 200 = approximately 1.1 billion rows
That is the worst case, since not every object has 100 columns, not all subpartitions contain data, and not every column has 200 buckets. It does illustrate, however, that histograms can take up a lot of space as new objects are created and new data is loaded.
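As an optional diagnostic sketch (not part of the verification steps below; it assumes SYS access to the data dictionary base tables HISTGRM$ and OBJ$ and is intended as a read-only check), a query such as the following can show which objects account for the most histogram rows:
SQL> -- Sketch only: objects contributing the most rows to HISTGRM$; run as SYS.
SQL> SELECT o.name AS object_name, COUNT(*) AS histogram_rows
       FROM sys.histgrm$ h, sys.obj$ o
      WHERE h.obj# = o.obj#
      GROUP BY o.name
      ORDER BY histogram_rows DESC;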
To confirm that this is the scenario, perform the following checks:
1. SQL> SELECT STATE FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';
If the STATE is SCHEDULED, the default statistics job is enabled and runs automatically during the maintenance windows.
2. SQL> select job_name, schedule_name, last_start_date, repeat_interval, next_run_date from dba_scheduler_jobs;
SQL> select * from dba_scheduler_wingroup_members;
SQL> select * from dba_scheduler_window_details;
These queries identify any other jobs that gather statistics and show their scheduled run times, as well as the maintenance windows used by the default stats job.
3. SQL> select count(*) from histgrm$;
Run this count before and after the job executes; if the number of rows in HISTGRM$ increases significantly, the assumption is confirmed.
4. Check whether the number of subpartitions in the database has been increasing (see the query sketch below).
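As a sketch for step 4, a count from the standard data dictionary view DBA_TAB_SUBPARTITIONS gives the current number of table subpartitions; tracking this value over time shows whether it is growing:
SQL> -- Sketch: current number of table subpartitions, broken down by owner.
SQL> SELECT table_owner, COUNT(*) AS subpartition_count
       FROM dba_tab_subpartitions
      GROUP BY table_owner
      ORDER BY subpartition_count DESC;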
Solution
To work around this issue:
1. Adjust the schedule of the statistics jobs so that they do not run as often, or run them manually (see the sketch after this list).
2. Avoid creating such a large number of subpartitions in the database.
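As a sketch of option 1 (it assumes the 10g default job GATHER_STATS_JOB and is run as SYS; later releases manage automatic statistics collection through the AutoTask framework, so adjust accordingly), the automatic job can be disabled and statistics gathered manually at a less frequent interval:
SQL> -- Sketch: stop the automatic 10g statistics job (run as SYS).
SQL> EXEC DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
SQL> -- Then gather statistics on demand, e.g. for an example schema named SCOTT:
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');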
Note that, other than recreating the database, there is no supported way to reduce the size of the cluster once it has grown.