[20180926]神奇的规避ORA-01795方法.txt
--//As everyone knows, the values in an IN list are limited to 1000; exceeding that raises ORA-01795.
D:\> oerr ora 01795
01795, 00000, "maximum number of expressions in a list is 1000"
// *Cause: Number of expressions in the query exceeded than 1000.
// Note that unused column/expressions are also counted
// Maximum number of expressions that are allowed are 1000.
// *Action: Reduce the number of expressions in the list and resubmit.
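--//To reproduce the error quickly, here is a sketch of my own (not part of the original test): generate a plain,
--//single-column IN list with 1001 literals; running the generated statement should raise ORA-01795.
select 'select count(*) from dual where 1 in (' txt from dual
union all
select to_char(level)||',' txt from dual connect by level<=1000
union all
select to_char(1001)||');' txt from dual;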
--//While carefully reading someone else's AWR report, I noticed how they work around ORA-01795: they use multi-column
--//tuples (expression lists) in the IN clause. Thinking it over, don't tuples hit the same 1000-expression limit?
--//Looking more closely, their program hard-codes 2000 tuples, and obviously that works without any problem.
--//Sometimes you really have to admire developers' imagination. Let me put together a few examples to illustrate.
1. Environment:
SCOTT@test01p> @ ver1
PORT_STRING VERSION BANNER CON_ID
------------------------------ -------------- -------------------------------------------------------------------------------- ----------
IBMPC/WIN_NT64-9.1.0 12.1.0.1.0 Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production 0
SCOTT@test01p> create table t as select * from all_objects;
Table created.
SCOTT@test01p> create unique index i_t_object_id on t(object_id);
Index created.
--//Gathering table statistics omitted.
2. Test 1:
variable b1 number;
variable b2 number;
exec :b1 :=42;
exec :b2 :=47;
SCOTT@test01p> select * from t where (1,object_id) in ((1,:b1),(1,:b2));
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP
----- ----------- -------------- --------- -------------- -------------------- ------------------- ------------------- -------------------
SYS I_ICOL1 42 42 INDEX 2013-06-28 09:03:29 2013-06-28 09:03:29 2013-06-28:09:03:29
SYS I_USER2 47 47 INDEX 2013-06-28 09:03:29 2013-06-28 09:03:29 2013-06-28:09:03:29
--//OK, no problem, but this contains only 2 tuples. What about more?
3. Test 2: generate a script with 1001 bind variables and a 1001-tuple IN list:
select 'variable b'||to_char(level)||' number;' txt from dual connect by level<=1001
union all
select 'exec :b'||to_char(level)||' :='|| to_char(level) txt from dual connect by level<=1001
union all
select 'select * from t where (1,object_id) in (' txt from dual
union all
select '(1,:b'||to_char(level)||'),' txt from dual connect by level<=1000
union all
select '(1,:b'||to_char(1001)||'));' txt from dual ;
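--//One way to capture the generated statements into a script (a sketch of my own workflow; the file names gen.sql and
--//test2.sql are just placeholders, assuming the generator above was saved as gen.sql):
set heading off feedback off pagesize 0 linesize 200
spool test2.sql
@gen.sql
spool off
set heading on feedback on pagesize 14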
--//Save the output as a script (for example via spool, as sketched above), edit it slightly and run it. OK, no problem at all. The execution plan is as follows:
Plan hash value: 3525592940
----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 196 (100)| | 891 |00:00:00.01 | 40 |
| 1 | INLIST ITERATOR | | 1 | | | | | 891 |00:00:00.01 | 40 |
| 2 | TABLE ACCESS BY INDEX ROWID| T | 1001 | 1001 | 112K| 196 (0)| 00:00:01 | 891 |00:00:00.01 | 40 |
|* 3 | INDEX UNIQUE SCAN | I_T_OBJECT_ID | 1001 | 1001 | | 178 (0)| 00:00:01 | 891 |00:00:00.01 | 22 |
----------------------------------------------------------------------------------------------------------------------------------------
--//1001 tuples cause no problem at all.
--//Now change variables b200-b999 to 0 and test again (the regeneration sketch below is my own addition):
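--//For example, the new assignments can be regenerated like this:
select 'exec :b'||to_char(level+199)||' := 0' txt from dual connect by level<=800;
--//Rerun the same select; the execution plan becomes: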
Plan hash value: 3525592940
----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 196 (100)| | 193 |00:00:00.01 | 16 |
| 1 | INLIST ITERATOR | | 1 | | | | | 193 |00:00:00.01 | 16 |
| 2 | TABLE ACCESS BY INDEX ROWID| T | 203 | 1001 | 112K| 196 (0)| 00:00:01 | 193 |00:00:00.01 | 16 |
|* 3 | INDEX UNIQUE SCAN | I_T_OBJECT_ID | 203 | 1001 | | 178 (0)| 00:00:01 | 193 |00:00:00.01 | 12 |
----------------------------------------------------------------------------------------------------------------------------------------
--//Note that Starts=203: duplicate values are not probed again.
4. My guess:
--//Their 2000 tuples are hard-coded. I suspect they use an array, initialized to 0 (or some value that can never match),
--//which is then filled with the real values. This way the statement still uses bind variables.
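--//A minimal sketch of what I imagine their code looks like (purely my assumption, not their actual code): a fixed
--//number of bind slots, with unused slots preset to a value that cannot match object_id. Shown with 4 slots instead of 2000:
declare
    type t_ids is varray(4) of number;    -- their version would use 2000 slots
    l_ids t_ids := t_ids(0, 0, 0, 0);     -- every slot starts at a value that matches nothing
    l_cnt number;
begin
    l_ids(1) := 42;                       -- only the slots actually needed get real ids
    l_ids(2) := 47;
    select count(*) into l_cnt
      from t
     where (1, object_id) in ((1, l_ids(1)), (1, l_ids(2)), (1, l_ids(3)), (1, l_ids(4)));
    dbms_output.put_line('rows = '||l_cnt);
end;
/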
--//Personally I would still lean toward using a temporary table here; it seems more reasonable and simpler.
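--//A minimal sketch of the temporary-table alternative (my own illustration; the object names are arbitrary):
create global temporary table gtt_ids (id number) on commit delete rows;

insert into gtt_ids values (42);
insert into gtt_ids values (47);
--//insert as many ids as needed; the 1000-expression limit does not apply here.

select t.* from t where t.object_id in (select id from gtt_ids);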