
Oracle Course 3: Table Design

Posted on 2014-01-23 09:52 by 徐正柱

After completing this lesson, you should be able to explain:

•The advantages and disadvantages of ordinary heap tables
•How rowid works
•The advantages, disadvantages and use cases of global temporary tables
•The types and internals of partitioned tables, their pros and cons, and where they fit
•Efficient design of table columns
•Sequence design
 
1. Data storage methods
  • Relational databases offer many ways of storing data; some DBMSs support all of them, others only a subset. The storage method matters a great deal, because it directly affects both modifications and queries.
  • Storage methods fall into two groups: random storage and fixed (ordered) storage. The former is necessarily more efficient for writes than the latter, but as in life, whatever you gain somewhere you give up somewhere else.
  • Random storage makes writes effortless but charges a higher price at query time.
  • Fixed storage spends time and effort on each write, and gets the payback as better query performance.
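
  As a quick illustration of the two approaches (a sketch added here, not part of the original course material; the table names emp_heap and emp_iot are made up), the same data can live in an ordinary heap table or in an index-organized table, which keeps rows physically ordered by the primary key:

-- Random (heap) storage: rows are placed wherever free space is found.
create table emp_heap (
  emp_id   number primary key,
  emp_name varchar2(50)
);

-- Fixed (ordered) storage: an index-organized table keeps rows
-- physically sorted by the primary key inside a B-tree segment.
create table emp_iot (
  emp_id   number primary key,
  emp_name varchar2(50)
) organization index;
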
2. Heap tables
  The defining property of a heap table is storage independence: where a row is stored has no relationship to its values, so rows may be placed anywhere on disk.

     Put differently, random storage scatters rows across different data blocks. Because the data is spread over many blocks, read efficiency varies with how scattered it is: the more scattered the rows, the lower the read efficiency; the less scattered, the higher.

2.1 Heap tables: advantages and disadvantages

•Advantages

      Simple syntax, easy to use

•Disadvantages
  Updates generate heavy redo
  delete does not release space
  Retrieval gets slow when the table is very large
  Index lookups back to the table are expensive
  Even rows inserted in order are hard to read back in order
•Use cases

     Suitable for most scenarios

Experiment 2.1 Measuring redo size

http://blog.csdn.net/guogang83/article/details/7848974
Script to measure redo:
create or replace view v_measure_redo_size
as select name, value
  from v$mystat, v$statname
 where v$mystat.statistic# = v$statname.statistic#
   and v$statname.name = 'redo size';

Script to measure redo and undo:
create or replace view v_measure_redo_undo_size
as select name, value
  from v$mystat, v$statname
 where v$mystat.statistic# = v$statname.statistic#
   and (v$statname.name = 'redo size' or
       v$statname.name = 'undo change vector size');
Redo and undo measurement scripts
SQL> create table test as select * from dba_objects;

SQL> select segment_name,bytes/1024/1024 from user_segments s where s.segment_name='TEST';
SEGMENT_NAME               BYTES/1024/1024
------------------------- ---------------
TEST                             6

SQL> select * from v_measure_redo_size;
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo size                                                             86644
SQL> delete from test;
SQL> commit;

SQL> select * from v_measure_redo_size;
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo size                                                          18293628

SQL> select (18293628-86644)/1024/1024 from dual;
(18293628-86644)/1024/1024
--------------------------
                 17.363533
SQL> insert into test select * from dba_objects;
SQL> commit;
SQL> create index ind_object_id on test(object_id);

SQL> select * from v_measure_redo_size;
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo size                                                          24878160
SQL> delete from test;
SQL> commit;
SQL> select * from v_measure_redo_size;
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo size                                                          53255224
SQL> select (53255224-24878160)/1024/1024 from dual;
(53255224-24878160)/1024/1024
-----------------------------
                   27.0624771
Updates generate heavy redo
SQL> drop table test purge;
SQL> create table test as select * from dba_objects;

SQL> set autotrace on
--run this SQL twice and note the result of the second execution
SQL> select count(*) from test;
SQL> set autotrace off
SQL> set autotrace on
SQL> delete from test;
50417 rows deleted.
SQL> commit;

SQL> select count(*) from test;
  COUNT(*)
----------
         0

SQL> truncate table test;
SQL> select count(*) from test;
  COUNT(*)
----------
         0
delete does not release the space
SQL> drop table test purge;
SQL> create table test as select * from dba_objects;
SQL> create  unique index ind_object_id on test(object_id);

SQL> set autotrace traceonly
SQL> select * from test where object_id <100;
98 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3428108236
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |    98 | 17346 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST          |    98 | 17346 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_OBJECT_ID |    98 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OBJECT_ID"<100)
Note
-----
   - dynamic sampling used for this statement
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         17  consistent gets
          0  physical reads
          0  redo size
       9661  bytes sent via SQL*Net to client
        451  bytes received via SQL*Net from client
          8  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         98  rows processed

SQL> select object_id from test where object_id <100;
98 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2038338801
----------------------------------------------------------------------------------
| Id  | Operation        | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |               |    98 |  1274 |     2   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| IND_OBJECT_ID |    98 |  1274 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("OBJECT_ID"<100)
Note
-----
   - dynamic sampling used for this statement
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          9  consistent gets
          0  physical reads
          0  redo size
       1668  bytes sent via SQL*Net to client
        451  bytes received via SQL*Net from client
          8  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         98  rows processed
Index lookups back to the table are expensive
SQL> create table test(a number);
SQL> insert into test values(1);
SQL> insert into test values(2);
SQL> insert into test values(3);
SQL> insert into test values(4);
SQL> commit;

SQL> select * from test;

         A
----------
         1
         2
         3
         4

SQL> delete from test where a=2;
SQL> commit;
SQL> select * from test;
         A
----------
         1
         3
         4
SQL> insert into test values(2);
SQL> commit;
SQL> select * from test;
         A
----------
         1
         3
         4
         2
SQL> select * from test order by a;

         A
----------
         1
         2
         3
         4
Even rows inserted in order are hard to read back in order (you can also dump the block to verify)

 

2.2 Understanding rowid

      rowid is a pseudocolumn: the column does not actually exist in the data dictionary, but is constructed when the query result is produced. A rowid is not physically stored in the table's data blocks, but it is stored in indexes, which use it to locate rows in the table.

      A rowid is composed of: data object number (6 characters) + relative file number (3 characters) + data block number (6 characters) + row number within the block (3 characters). For example: AAAMimAAFAAAAAMAAC

  select t.rowid,t.* from test t; --AAAMimAAFAAAAAMAAC

select 12*64*64+34*64+38 from dual; --51366
Base-64 digits: A-Z (0-25), a-z (26-51), 0-9 (52-61), + / (62-63)
select
    rowid,
    dbms_rowid.rowid_object(rowid) object_id,      -- 51366  (AAAMim)AAFAAAAAMAAC  data object number
    dbms_rowid.rowid_relative_fno(rowid) file_id,  -- 5      AAAMim(AAF)AAAAAMAAC  relative file number
    dbms_rowid.rowid_block_number(rowid) block_id, -- 12     AAAMimAAF(AAAAAM)AAC  block number
    dbms_rowid.rowid_row_number(rowid) num         -- 2      AAAMimAAFAAAAAM(AAC)  row number within the block
from test where object_id = 51350;

3. Temporary tables

3.1 Global temporary tables: types

•Session-based

      on commit preserve rows: rows are removed when the session ends

•Transaction-based

      on commit delete rows: rows are removed on commit (or when the session ends)

•Advantages

       Less logging for DML, efficient row removal, and each session sees only its own data

•Disadvantages

      From a data-safety standpoint, lost rows cannot be recovered

•Use cases

  Intermediate result sets needed only while a job is running

  Experiment 3.1.1 Global temporary tables: undo/redo generated by DML

Operation   Session-based GTT (MB)     Transaction-based GTT (MB)   Heap table (MB)
            undo         redo          undo         redo            undo         redo
insert      0.17268753   0.2619934     0.1726875    0.26203156      0.18679428   5.4342499
update      2.99539566   3.2135201     3.0456696    3.26722717      6019467545   16.246155
delete      10.8601456   14.261208     10.860336    14.2608949      10.7915955   17.363735

 

 

 

 

 

drop table t_session purge;
drop table t_transaction purge;
create global temporary table t_session on commit preserve rows
 as select * from dba_objects where 1<>1;
create global temporary table t_transaction on commit delete rows
 as select * from dba_objects where 1<>1;
Test table preparation
select * from v_measure_redo_undo_size;
insert into t_session select * from dba_objects;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;  -- fill in (value after - value before) for each statistic

select * from v_measure_redo_undo_size;
insert into t_transaction select * from dba_objects;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;

drop table test purge;
create table test as select * from dba_objects where  1<>1;
select * from v_measure_redo_undo_size;
insert into test select * from dba_objects;
commit;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;
DML log-size test: insert
insert into t_session select * from dba_objects;
select * from v_measure_redo_undo_size;
update t_session set object_name=object_name;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;

insert into t_transaction select * from dba_objects;
select * from v_measure_redo_undo_size;
update t_transaction set object_name=object_name;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;

drop table test purge;
create table test as select * from dba_objects;
select * from v_measure_redo_undo_size;
update test set object_name=object_name;
commit;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;
DML log-size test: update
insert into t_session select * from dba_objects;
select * from v_measure_redo_undo_size;
delete from t_session;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;


insert into t_transaction select * from dba_objects;
select * from v_measure_redo_undo_size;
delete from t_transaction;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;

drop table test purge;
create table test as select * from dba_objects;
select * from v_measure_redo_undo_size;
delete from test;
commit;
select * from v_measure_redo_undo_size;
select ()/1024/1024 undo,()/1024/1024 redo from dual;
DML log-size test: delete

  Experiment 3.1.2 Global temporary tables: efficient deletion and session independence

1.1 Session-based temporary table
insert into t_session select * from dba_objects;
select count(*) from t_session;
Exit this session and log in again
select count(*) from t_session;

1.2 Transaction-based temporary table
insert into t_transaction select * from dba_objects;
select count(*) from t_transaction;
select * from v_measure_redo_size;
Commit, or exit this session and log in again
select count(*) from t_transaction;       
select * from v_measure_redo_size;
3.2.1 Efficient deletion experiment

 

 session1:
     insert into t_session select * from dba_objects;
     commit;
     select count(*) from t_session;       
   
session2:
     select count(*) from t_session; 
3.2.2 Session-independence experiment

 

3.2 Exercise:

  A certain database generates more than 50 GB of archived logs per day, more than the size of the whole database. Most of the logging comes from four temporary tables: GDT_MODLOG_ATTRIBUTES, GDT_MODLOG_ELEMINFO, GDT_MODLOG_ORDS, GDT_MODLOG_SCALARS, and the workload on them is mostly large deletes. How would you tune this?
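
  One hedged direction (assuming these four are ordinary heap tables used only as per-session scratch space; the real DDL is not shown in the question): rebuild them as global temporary tables so the scratch rows generate far less logging, or replace the deletes with truncate where the whole table is being emptied. For example, for one of them:

-- Sketch only: gdt_modlog_ords_gtt is a made-up name for the rebuilt table.
create global temporary table gdt_modlog_ords_gtt
  on commit delete rows
  as select * from gdt_modlog_ords where 1 <> 1;

-- Where the table must stay a heap table and is emptied completely,
-- truncate avoids logging the removal of every single row:
truncate table gdt_modlog_ords;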

 

4. Partitioned tables

4.1 What partitioning is for:

  Partitioning makes data maintenance easier, because partitions and partition indexes can be operated on individually. Data loads, backups, recovery and index rebuilds can be done on a single partition instead of the whole table.
  It improves query efficiency. In many cases partitioning makes queries faster, because Oracle's partition pruning scans only the partitions a query needs, greatly reducing the number of blocks read.
 
4.2 Types of partitioning
  • Range partitioning (range):
    Data is divided into partitions by fixed value ranges of a column. In practice, partitioning on a time column is particularly valuable:

  Advantages:

You know which partition a given row lives in
Bulk data operations (truncate, drop) can be carried out efficiently on individual partitions
Archiving, backup and recovery become more convenient

  Disadvantage: data may be spread unevenly across partitions

   Range partitioning in practice: expiring old data

  When designing a database that holds huge volumes of data, we need to decide up front how long data is kept, archiving expired data and retaining only a fixed window inside the database. Range partitioning is an excellent fit here.

  Expired data is usually handled in one of the following ways:

  a. delete it
  b. move it to an offline database (tablespace export)
  c. archive it

  Experiment 4.1: Using range partitioning to expire old data

drop table t_range purge;
drop table t purge;
create table t (id number not null PRIMARY KEY, test_date date) nologging;
create table t_range (id number not null PRIMARY KEY, test_date date) partition by range (test_date)
(
partition p_2013_1 values less than (to_date('2013-05-01', 'yyyy-mm-dd')),
partition p_2013_2 values less than (to_date('2013-06-01', 'yyyy-mm-dd')),
partition p_2013_3 values less than (to_date('2013-07-01', 'yyyy-mm-dd')),
partition p_2013_4 values less than (to_date('2013-08-01', 'yyyy-mm-dd')),
partition p_2013_5 values less than (to_date('2013-09-01', 'yyyy-mm-dd')),
partition p_2013_6 values less than (to_date('2013-10-01', 'yyyy-mm-dd')),
partition p_max values less than (MAXVALUE)
) nologging;

insert /*+append */ into t  select rownum,
       to_date(to_char(sysdate - 80, 'J') +
               trunc(dbms_random.value(0, 70)),
               'J')
  from dual
connect by rownum <= 100000;

insert /*+append */  into t_range select * from t;


Use autotrace to compare the execution plans of the following statements:
exec dbms_stats.gather_table_stats(user,'T_RANGE');
exec dbms_stats.gather_table_stats(user,'T');
select * from t_range ;
select * from t  where test_date = to_date('2013-05-28', 'yyyy-mm-dd');
select * from t_range where test_date = to_date('2013-05-28', 'yyyy-mm-dd');

How it works:
select * from user_segments s where s.segment_name='T_RANGE';

---drop the data we no longer need
select * from t_range partition(p_2013_1);
select * from t_range partition(p_2013_2);
alter table t_range drop partition p_2013_1;
alter table t_range truncate partition p_2013_2;
Range partitioning: expiring old data
  When data has to be expired, range partitioning is effectively the only choice
  Select  …  from t where t_time > xxxx and t_time < xxxx
  Queries of this shape are extremely common in data warehouses and reporting systems
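
  A hedged way to confirm that such a time-range predicate really is pruned (reusing the t_range table from experiment 4.1): run it through EXPLAIN PLAN and look at the Pstart/Pstop columns that DBMS_XPLAN prints for partitioned steps.

explain plan for
select * from t_range
 where test_date >= to_date('2013-05-01', 'yyyy-mm-dd')
   and test_date <  to_date('2013-06-01', 'yyyy-mm-dd');

-- Pstart/Pstop should point at a single partition (or a narrow range)
-- rather than spanning every partition of the table.
select * from table(dbms_xplan.display);
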
  • Hash partitioning (hash):
  Hash partitioning applies a hash function to the partitioning column so that rows are spread evenly across the partitions.
  It suits cases where an even spread across partitions is required, and it needs a partitioning column without heavy data skew.
SQL> Create table t_hash partition by hash(object_id) partitions 8 as select * from dba_objects;

SQL> exec dbms_stats.gather_table_stats(user,'T_HASH');

SQL> select s.table_name,s.partition_name,s.num_rows from user_tab_partitions s where s.table_name='T_HASH';
Hash partition example

Advantages:

  Data is spread evenly, so queries do not pile onto one spot; this avoids hot-block contention and improves I/O.
  Very easy to implement.

Disadvantages:

  You cannot tell which partition a given row lives in.
Characteristics:
  Hash partitions cannot be dropped, merged or split individually;
  Hash partitions can be truncated individually;
  The hash partitioning key should be a highly selective column, otherwise hashing is pointless.

  The Oracle documentation recommends that the number of hash partitions be a power of 2.
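
  A hedged sketch of why the power of 2 matters (t_hash_5 is a made-up table for this test): with a partition count that is not a power of 2, Oracle's hash algorithm tends to give some partitions roughly twice as many rows as others, even on a selective key such as object_id.

-- 5 partitions: expect an uneven split even though object_id is selective.
create table t_hash_5
  partition by hash(object_id) partitions 5
  as select * from dba_objects;

exec dbms_stats.gather_table_stats(user,'T_HASH_5');

select partition_name, num_rows
  from user_tab_partitions
 where table_name = 'T_HASH_5'
 order by partition_position;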

SQL> Create table t_hash_2 partition by hash(object_type) partitions 8 as select * from dba_objects;

-------extents are distributed unevenly
SQL> Select partition_name,count(*) from user_extents where segment_name = 'T_HASH_2' group by partition_name;

PARTITION_NAME                   COUNT(*)
------------------------------ ----------
SYS_P34                                18
SYS_P29                                 4
SYS_P36                                 6
SYS_P30                                 4
SYS_P31                                 3
SYS_P33                                 8
SYS_P35                                17
SYS_P32                                 1

-----row counts are likewise uneven
SQL>  select count(*) from t_hash_2 partition (SYS_P31)
  2   union all
  3   select count(*) from t_hash_2 partition (SYS_P32)
  4   union all
  5   select count(*) from t_hash_2 partition (SYS_P33)
  6   union all
  7   select count(*) from t_hash_2 partition (SYS_P34)
  8   union all
  9   select count(*) from t_hash_2 partition (SYS_P35);

  COUNT(*)
----------
      1324
        26
      4568
     20267
     16432
Hash partition characteristics example

When to use hash partitioning:

  Hash partitioning is very efficient when the partitioning column has a very low duplication rate. It therefore works well for static data: tables that are very large but whose rows are all needed and cannot be expired.

Examples: user tables, reference/profile tables

  • List partitioning (list):
    Data is partitioned by the explicit values of a column, which suits columns with a high duplication rate very well. Values map one-to-one onto partitions.
  Pros and cons: the same as for range partitioning.

  When to use list partitioning: it is very effective when the partitioning column has a high duplication rate.

  Examples: regions, company codes

   Experiment 4.2: List partitioning

drop table t_list purge;
CREATE TABLE T_LIST 
( 
    ID   NUMBER(7) NOT NULL PRIMARY KEY, 
    CITY VARCHAR2(10)
) 
PARTITION BY LIST (CITY) 
( 
      PARTITION P_BEIJING  VALUES ('BEIJING') , 
      PARTITION P_SHANGHAI VALUES ('SHANGHAI'),
      PARTITION P_DEF VALUES (DEFAULT));

insert into t_list values (1,'BEIJING');
insert into t_list values (2,'SHANGHAI');
insert into t_list values (3,'SHANGHAI');
insert into t_list values (4,'SHANGHAI');
insert into t_list values (5,'SHANGHAI');
insert into t_list values (6,'SHANGHAI');
insert into t_list values (7,'SHENZHEN');

select * from t_list where city = 'SHENZHEN';
List partition example

 

  •   Composite partitioning
   When the business requires it, the partitions of a very large table can be subdivided further, creating composite partitions (subpartitions).
Types of composite partitioning:

  Oracle 10gR2 supports only the following combinations:

  Range-list (range-list)
  Range-hash (range-hash)

  Note: 11g adds more combinations, such as range-range, list-list, and so on.

------------------------------1.range list------------------------------
Drop table t_range_list purge;
CREATE TABLE t_range_list
(object_id NUMBER(10),
object_name VARCHAR2(50),
object_type VARCHAR2(20))
PARTITION BY RANGE(object_id) subpartition by list(object_type)
SUBPARTITION template(
SUBPARTITION t_list_table VALUES('TABLE'),
SUBPARTITION t_list_index VALUES('INDEX'),
SUBPARTITION t_list_syn VALUES('SYNONYM'),
SUBPARTITION t_list_def VALUES(default))
(
PARTITION object_id_10000 VALUES LESS THAN (10000),
PARTITION object_id_20000 VALUES LESS THAN (20000),
PARTITION object_id_30000 VALUES LESS THAN (30000),
PARTITION object_id_40000 VALUES LESS THAN (40000),
PARTITION object_id_50000 VALUES LESS THAN (50000),
PARTITION object_id_60000 VALUES LESS THAN (60000),
PARTITION object_id_max VALUES LESS THAN (MAXVALUE)
);

insert /*+append*/into t_range_list select object_id,object_name,object_type from dba_objects;

select * from t_range_list where object_type = 'TABLE' and object_id = 111;

-----------------------------2.range hash---------------------------
drop table t_range_hash purge;

CREATE TABLE t_range_hash
(object_id NUMBER(10),
object_name VARCHAR2(50),
object_type VARCHAR2(20))
PARTITION BY RANGE(object_id) subpartition by hash(OBJECT_NAME)
SUBPARTITIONS 4
(
PARTITION object_id_10000 VALUES LESS THAN (10000),
PARTITION object_id_20000 VALUES LESS THAN (20000),
PARTITION object_id_30000 VALUES LESS THAN (30000),
PARTITION object_id_40000 VALUES LESS THAN (40000),
PARTITION object_id_50000 VALUES LESS THAN (50000),
PARTITION object_id_60000 VALUES LESS THAN (60000),
PARTITION object_id_max VALUES LESS THAN (MAXVALUE)
);

insert /*+append*/ into t_range_hash select object_id,object_name,object_type from dba_objects;

select * from t_range_hash where object_name = 'T_RANGE_HASH';

View the composite partition definition:
select dbms_metadata.get_ddl('TABLE',upper('t_range_list') )from dual ;
Composite partitioning
 
4.3 How partitioning works
  Each partition is stored as its own segment, which you can see in user_segments:
select segment_name,
       partition_name,
       segment_type,
       bytes/1024/1024 || 'Mb' as size_mb,
       tablespace_name
  from user_segments
 where segment_name in ('@segment_name');
Partitioning internals
 
4.4 Partitioning pitfalls

  a. Cases where the partitions cannot be used by a query (see the sketch after this list)
  b. Updating the partitioning column raises an error unless you enable row movement: Alter table tbl_name ENABLE ROW MOVEMENT;. A single column update then triggers three operations internally: update, delete and insert. The side effect is that the rowid changes, so a rowid-based recovery cannot restore the row to its original location.
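
  For case (a), a hedged example reusing the t_range table from experiment 4.1: wrapping the partitioning column in a function hides the date range from the optimizer, so every partition has to be scanned, while a predicate on the raw column keeps pruning possible.

-- Pruned: the predicate is written directly on the partitioning column.
select count(*) from t_range
 where test_date < to_date('2013-06-01', 'yyyy-mm-dd');

-- Not pruned: the function on test_date forces all partitions to be scanned
-- (check Pstart/Pstop in the execution plan to confirm).
select count(*) from t_range
 where to_char(test_date, 'yyyy-mm') = '2013-05';
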
CREATE TABLE TEST_PART 
  (
    ID NUMBER
  ) 
  PARTITION BY RANGE (ID)
  ( PARTITION P1 VALUES LESS THAN (100), 
    PARTITION P2 VALUES LESS THAN (MAXVALUE)
  );
INSERT INTO TEST_PART VALUES (20);
INSERT INTO TEST_PART VALUES (20);
commit;
UPDATE TEST_PART SET ID = 120 WHERE ROWNUM = 1;
commit;

Alter table TEST_PART ENABLE ROW MOVEMENT;
UPDATE TEST_PART SET ID = 120 WHERE ROWNUM = 1;

select * from TEST_PART;

What happens when the partitioning column is updated:
In partition 1:
update "TEST_PART" set "ID" = '20' where "ID" = '120' and ROWID = 'AAAB5OAAFAAAAegAAA';
delete from "TEST_PART" where "ID" = '120' and ROWID = 'AAAB5OAAFAAAAegAAA'; 
In partition 2:
insert into "TEST_PART"("ID") values ('120');
The problem with updating the partitioning column

4.5 Discussion: what is the difference between partitioning a table and splitting it into separate tables? What are the pros and cons of each, and which scenarios suit each approach?

 
5. Column design overview

  Sensible column type design saves a great deal of maintenance trouble later.

  For example:

  When two tables are joined on columns of different data types, the index cannot be used because an implicit conversion takes place. Storing numeric values in a varchar2 column causes the same implicit conversion.
SQL> create table test1 (id number(10));
SQL> create table test2 (id varchar2(10));
---insert 100,000 rows into each table
begin
  for i in 1..100000 loop
     insert into test1 values(i);
     insert into test2 values(i);
  end loop;
  commit;
end;
/
----create a primary key on each table
SQL> alter table test1 add primary key (id);

Table altered.

SQL> alter table test2 add primary key (id);

Table altered.

SQL> select * from test1 where id = 1;

        ID
----------
         1

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 2801535751

---------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |     1 |    13 |     1   (0)| 00:00:01 |
|*  1 |  INDEX UNIQUE SCAN| SYS_C005143 |     1 |    13 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ID"=1)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        402  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
          
SQL> select * from test2 where id = 1;

ID
----------
1

Elapsed: 00:00:00.08

Execution Plan
----------------------------------------------------------
Plan hash value: 300966803

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |     7 |    40  (13)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST2 |     1 |     7 |    40  (13)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_NUMBER("ID")=1)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
        216  recursive calls
          0  db block gets
        254  consistent gets
          0  physical reads
          0  redo size
        401  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
          1  rows processed
Index disabled by implicit conversion: experiment

5.1 Problems caused by a column type that does not match the business

  What goes wrong when a column that should have been NUMBER is declared as varchar2?

SQL> create table test(id varchar2(10));

Table created.

SQL> declare
  i number;
  begin
  for i in 1..100 loop
  insert into test values(i);
  end loop;
  end;
  /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select count(*) from test where id < '9';   ----guess the result; surely it is 8?

  COUNT(*)
----------
        89

SQL> select count(*) from test where id < '19';

  COUNT(*)
----------
        11

SQL> select * from test where id < '19';

ID
----------
1
10
11
12
13
14
15
16
17
18
100

    Choosing between NUMBER and FLOAT:

  NUMBER is stored in decimal
  FLOAT (binary floating point) is stored in binary
  Not every value that NUMBER can represent can be represented by a float
  NUMBER is more intuitive
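
  A hedged one-statement check of the difference: 0.1 has no exact binary representation, so the BINARY_FLOAT version comes back with a small rounding error (typically displayed as something like 1.00000001E-001), while the NUMBER version is exact.

-- Compare how the same literal is stored by each type.
select cast(0.1 as number(10,8)) as as_number,
       cast(0.1 as binary_float) as as_binary_float
  from dual;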

5.2 Column length design

  Generally, when designing a table we should first estimate the business requirements and derive a reasonable length range for each column, rather than squandering resources.
  A reasonable column length can:
  improve response time
  reduce storage (except for varchar2, which only stores the actual data length)
  act, to some extent, as a constraint on the column's contents

SQL> create table test1 (c CHAR(10),c2 CHAR(1),c3 CHAR(1));

SQL> create table test2 (c CHAR(1000),c2 CHAR(1000),c3 CHAR(1000));

SQL> INSERT INTO test1 SELECT '*','*','*' FROM DUAL CONNECT BY ROWNUM<=100000;

100000 rows created.

SQL> INSERT INTO test2 SELECT '*','*','*' FROM DUAL CONNECT BY ROWNUM<=100000;

100000 rows created.

SQL> SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)) FROM TEST1;

COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
---------------------------------------------------
                                                276

SQL> SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)) FROM TEST2;

COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
---------------------------------------------------
                                              50000
------
SQL> SET AUTOTRACE TRACEONLY
SQL> SET TIMING ON
SQL> SELECT * FROM TEST1;

100000 rows selected.

Elapsed: 00:00:00.54

Execution Plan
----------------------------------------------------------
Plan hash value: 4122059633

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |   100K|  1758K|    67   (6)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TEST1 |   100K|  1758K|    67   (6)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
       7020  consistent gets
          0  physical reads
          0  redo size
    2867115  bytes sent via SQL*Net to client
      73711  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

SQL> SELECT * FROM TEST2;

100000 rows selected.

Elapsed: 00:00:35.80

Execution Plan
----------------------------------------------------------
Plan hash value: 300966803

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |   100K|   286M| 11003   (1)| 00:02:13 |
|   1 |  TABLE ACCESS FULL| TEST2 |   100K|   286M| 11003   (1)| 00:02:13 |
---------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      53349  consistent gets
      36999  physical reads
          0  redo size
  303567115  bytes sent via SQL*Net to client
      73711  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed
Unreasonable column-length design experiment

5.3 How column order affects performance

  When designing tables we seldom think about the order in which columns are declared. In fact, column order does affect performance:
  a. the later a column sits in the row, the slower it is to access;
  b. the later a column sits in the row, the more each operation on it costs.

Experiment design
set serveroutput on
set echo on
---create a table with 250 columns
declare
v_sql varchar2(32767);
begin
 v_sql:='create table t(';
 for i in 1..250
  loop
   v_sql:=v_sql||'n'||i||' number,';
  end loop;
  v_sql:=v_sql||'r_pad varchar2(1000))';
  execute immediate v_sql;
end;
/

---insert 10,000 rows
declare
v_sql varchar2(4000);
begin
 v_sql:='insert into t select ';
 for i in 1..250
  loop
   v_sql:=v_sql||'0,';
  end loop;
  v_sql:=v_sql||' null from dual connect by level <=10000';
  execute immediate v_sql;
  commit;
end;
/

---gather statistics
exec dbms_stats.gather_table_stats(user,'t');

---run the SQL and measure how long a count() of each column takes
declare
 v_dummy PLS_INTEGER;
 v_start PLS_INTEGER;
 v_stop PLS_INTEGER;
 v_sql VARCHAR2(100);
BEGIN
 v_start :=dbms_utility.get_time;
 for j in 1..20
  loop
   execute immediate 'select count(*) from t ' into v_dummy;
  end loop;
  v_stop:= dbms_utility.get_time;
  dbms_output.put_line('COUNT(*) 20 times took: '|| to_char((v_stop-v_start)*10,'99999')||' ms'); 
 for i in 1..250
  loop
   v_sql :='select count(n'||i||') from t';
   v_start :=dbms_utility.get_time;
   for j in 1..20
    loop
     execute immediate v_sql into v_dummy;
    end loop;
    v_stop:=dbms_utility.get_time;
    dbms_output.put_line('count of column n'||i||' 20 times took: '||to_char((v_stop-v_start)*10,'99999')||' ms');
   end loop;
end;
/

http://blog.csdn.net/stevendbaguo/article/details/8880754
Column order performance experiment

5.4 How the number of columns affects performance

  To protect query performance and avoid joining too many tables, we often add redundant columns. But the extra columns frequently hurt performance instead, and add load to the database.
  The more columns a table has, the more space each row takes, so fewer rows fit in a data block; queries may then have to cross blocks, causing row chaining or row migration and hurting performance.
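
  A hedged way to check whether a wide table is already paying this price (replace T with the table you suspect): ANALYZE populates CHAIN_CNT in USER_TABLES with the number of chained or migrated rows.

-- Count chained/migrated rows for a suspect table.
analyze table t compute statistics;

select table_name, num_rows, chain_cnt, avg_row_len
  from user_tables
 where table_name = 'T';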

6. Sequence design for performance

  When we use an Oracle sequence to generate a table's auto-increment ID, we usually create it with the default parameters, and only later discover that a default sequence has several problems:
  a. potential row-lock contention

  b. excessive redo log generation

  c. "enq: SQ - contention" wait events

 

Drop sequence  sq1;
Drop sequence  sq2;
create sequence sq1 nocache;
create sequence sq2 cache 20;
Set autotrace traceonly
Select sq1.nextval from dual;
Select sq2.nextval from dual;
Sequence experiment

 

  Let us first look at two parameters that can significantly affect performance.
Cache: Oracle pre-allocates this many sequence values in memory, so fetching them is faster.
Order: the default is NOORDER. With ORDER there is no difference in a single-instance database, but in RAC every instance must hand out values in strict order, so concurrent nextval calls from several instances incur brief cross-instance synchronization contention and perform worse than NOORDER. Do not use ORDER on RAC unless it is really required, and in particular avoid the NOCACHE + ORDER combination.
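
  A hedged sketch for checking and adjusting the two parameters on an existing sequence (the name seq_demo is made up):

-- How is the sequence configured right now?
select sequence_name, cache_size, order_flag
  from user_sequences
 where sequence_name = 'SEQ_DEMO';

-- Enlarge the cache and keep NOORDER (usually preferable on RAC).
alter sequence seq_demo cache 1000 noorder;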

6.1 Sequence cache design

  In highly concurrent systems, set a larger cache value; be aware that a very large cache makes gaps (skipped numbers) more likely.

Cache experiment (nocache, cache 20, cache 100, cache 1000):
SQL> create sequence sq1 nocache;

SQL> create sequence sq2 cache 20;

SQL>  declare
     x number;
     begin
     for i in 1 .. 10000 loop
     select sq1.nextval into x from dual;
     end loop;
     end;
     /

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.73

SQL> declare
    x number;
    begin
    for i in 1 .. 10000 loop
    select sq2.nextval into x from dual;
    end loop;
    end;
    /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.31

----gap (skipped numbers) experiment
create sequence sq_test cache 20;
Select sq_test.nextval from dual;   -- returns 1
Select sq_test.nextval from dual;   -- returns 2
Select sq_test.nextval from dual;   -- returns 3
Alter system flush shared_pool;
Select sq_test.nextval from dual;   -- the cached values are lost, so expect a gap
Sequence cache experiment