Oracle Redo Parallelism Mechanism
Reposted from: http://www.hellodba.com/reader.php?ID=28&lang=cn
The redo log is critical data used for recovery and by a number of advanced features. A redo entry contains all the information about the database changes made by the corresponding operation, and every redo entry is eventually written to the redo files. The redo log buffer is a region of memory allocated in the SGA to keep redo file I/O from becoming a performance bottleneck. A redo entry is first generated in the user's process memory (PGA), then copied into the log buffer by the Oracle server process, and finally written to the redo files by the LGWR process once certain conditions are met. Because the log buffer is a piece of "shared" memory, it is protected by the redo allocation latch to avoid conflicts: every server process must acquire this latch before it can allocate space in the log buffer. Consequently, in a high-concurrency OLTP system with frequent data modifications, waits on the redo allocation latch are commonly observed. The complete process of writing redo into the redo buffer is as follows:
Generate the redo entry in the PGA -> the server process acquires a Redo Copy latch (there are several: CPU_COUNT*2) -> the server process acquires the redo allocation latch (only one) -> allocate space in the log buffer -> release the redo allocation latch -> copy the redo entry into the log buffer -> release the Redo Copy latch.
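As a supplementary illustration (not part of the original article), activity and contention on the two latch types in this path can be observed from v$latch, for example:
- -- gets, misses and sleeps for the latches involved in the classic redo path
- select name, gets, misses, sleeps
-   from v$latch
-  where name in ('redo allocation', 'redo copy');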
Shared strand
To reduce redo allocation latch waits, Oracle 9.2 introduced parallelism into the log buffer. The basic idea is to divide the log buffer into several smaller buffers, called strands (to distinguish them from the private strands that appeared later, they are referred to as shared strands). Each strand is protected by its own redo allocation latch. With multiple shared strands, the previously serialized allocation of redo buffer space becomes a parallel process, which reduces redo allocation latch waits.
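Since each shared strand is protected by its own child latch, the number of "redo allocation" child latches reflects the strand count. A minimal check, added here for illustration (note that from 10g onwards this count also includes the private strand latches discussed later):
- -- number of redo allocation child latches (one per strand)
- select count(*) as redo_alloc_latches
-   from v$latch_children
-  where name = 'redo allocation';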
The initial number of shared strands is controlled by the parameter log_parallelism. In 10g this became a hidden parameter, and a new parameter _log_parallelism_max was added to control the maximum number of shared strands; _log_parallelism_dynamic controls whether the number of shared strands may vary dynamically between _log_parallelism and _log_parallelism_max.
- HELLODBA.COM>select nam.ksppinm, val.KSPPSTVL, nam.ksppdesc
- 2 from sys.x$ksppi nam,
- 3 sys.x$ksppsv val
- 4 where nam.indx = val.indx
- 5 --AND nam.ksppinm LIKE '_%'
- 6 AND upper(nam.ksppinm) LIKE '%LOG_PARALLE%';
- KSPPINM KSPPSTVL KSPPDESC
- -------------------------- ---------- ------------------------------------------
- _log_parallelism 1 Number of log buffer strands
- _log_parallelism_max 2 Maximum number of log buffer strands
- _log_parallelism_dynamic TRUE Enable dynamic strands
The size of each shared strand = log_buffer / (number of shared strands). Strand information can be queried from x$kcrfstrand (which, from 10g onwards, covers both the shared strands and the private strands described later).
- HELLODBA.COM>select indx,strand_size_kcrfa from x$kcrfstrand where last_buf_kcrfa != '00';
- INDX STRAND_SIZE_KCRFA
- ---------- -----------------
- 0 3514368
- 1 3514368
- HELLODBA.COM>show parameter log_buffer
- NAME TYPE VALUE
- ------------------------------------ ----------- ------------------------------
- log_buffer integer 7028736
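The two results above are consistent with the formula: 7028736 / 2 = 3514368. A supplementary cross-check (not in the original article), assuming as above that the shared strands are the x$kcrfstrand rows with last_buf_kcrfa != '00':
- -- log_buffer divided by the number of shared strands should match STRAND_SIZE_KCRFA
- select (select to_number(value) from v$parameter where name = 'log_buffer') /
-        (select count(*) from x$kcrfstrand where last_buf_kcrfa != '00') as calc_strand_size
-   from dual;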
As for how many shared strands to configure: with up to 16 CPUs the default maximum is 2; when the system shows redo allocation latch waits, adding one strand for every additional 16 CPUs can be considered, up to a maximum of 8. In addition, _log_parallelism_max must not exceed cpu_count.
Note: in 11g the parameter _log_parallelism was removed, and the number of shared strands is controlled by _log_parallelism_max, _log_parallelism_dynamic and cpu_count.
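If the guideline above does call for more strands, _log_parallelism_max can be raised; as a hidden parameter it must be quoted, and it should only be changed with the usual caution. A sketch only, not a recommendation from the original article:
- -- example: allow up to 4 shared strands, effective after the next restart
- alter system set "_log_parallelism_max" = 4 scope = spfile;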
Private strand
To reduce redo buffer contention further, 10g introduced a new strand mechanism: the private strand. Private strands are not carved out of the log buffer; they are memory areas allocated from the shared pool.
- HELLODBA.COM>select * from V$sgastat where name like '%strand%';
- POOL NAME BYTES
- ------------ -------------------------- ----------
- shared pool private strands 2684928
- HELLODBA.COM>select indx,strand_size_kcrfa from x$kcrfstrand where last_buf_kcrfa = '00';
- INDX STRAND_SIZE_KCRFA
- ---------- -----------------
- 2 66560
- 3 66560
- 4 66560
- 5 66560
- 6 66560
- 7 66560
- 8 66560
- ...
The introduction of private strands brought major changes to Oracle's redo/undo mechanism. Each private strand is protected by its own redo allocation latch, and each private strand serves, as a "private" strand, only one active transaction. A transaction that has obtained a private strand generates its redo in that private strand rather than in the PGA; when the private strand is flushed, or at commit, the private strand is written to the log file in a single batch. If a new transaction cannot obtain the redo allocation latch of any private strand, it falls back to the old redo buffer mechanism and requests space in a shared strand. Whether a transaction uses a private strand can be determined from the newly added 13th bit of the ktcxbflg column of x$ktcxb:
- HELLODBA.COM>select decode(bitand(ktcxbflg, 4096),0,1,0) used_private_strand, count(*)
- 2 from x$ktcxb
- 3 where bitand(ksspaflg, 1) != 0
- 4 and bitand(ktcxbflg, 2) != 0
- 5 group by bitand(ktcxbflg, 4096);
- USED_PRIVATE_STRAND COUNT(*)
- ------------------- ----------
- 1 10
- 0 1
A transaction that uses a private strand does not need to acquire a Redo Copy latch up front, nor the redo allocation latch of a shared strand; instead its redo is written out in batch at flush or commit time. This reduces the number of acquisitions and releases of the Redo Copy latch and the redo allocation latch, as well as waits on those latches, thereby lowering CPU load. The process is as follows:
Transaction start -> acquire the redo allocation latch of a private strand (if this fails, acquire the redo allocation latch of a shared strand instead) -> generate redo entries in the private strand -> flush/commit -> acquire a Redo Copy latch -> the server process writes the redo entries to the log file in batch -> release the Redo Copy latch -> release the redo allocation latch of the private strand.
Note: a transaction that failed to obtain the redo allocation latch of a private strand will not request a private strand again before it ends, even if other transactions have released their private strands in the meantime.
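Putting the two mechanisms together, the total number of "redo allocation" child latches should equal the number of shared strands plus the number of pre-allocated private strands. A supplementary cross-check (not from the original article), using the same x$kcrfstrand conventions as above:
- -- shared strands, private strands, and redo allocation child latches
- select (select count(*) from x$kcrfstrand where last_buf_kcrfa != '00') as shared_strands,
-        (select count(*) from x$kcrfstrand where last_buf_kcrfa = '00') as private_strands,
-        (select count(*) from v$latch_children where name = 'redo allocation') as alloc_latches
-   from dual;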
Each private strand is 65 KB. In 10g, the size of the "private strands" area in the shared pool is simply the estimated number of active transactions multiplied by 65 KB; in 11g, an extra 4 KB of management space is allocated in the shared pool for each private strand, i.e. count * 69 KB.
- --10g:
- SQL> select * from V$sgastat where name like '%strand%';
- POOL NAME BYTES
- ------------ -------------------------- ----------
- shared pool private strands 1198080
- HELLODBA.COM>select trunc(value * KSPPSTVL / 100) * 65 * 1024
- 2 from (select value from v$parameter where name = 'transactions') a,
- 3 (select val.KSPPSTVL
- 4 from sys.x$ksppi nam, sys.x$ksppsv val
- 5 where nam.indx = val.indx
- 6 AND nam.ksppinm = '_log_private_parallelism_mul') b;
- TRUNC(VALUE*KSPPSTVL/100)*65*1024
- -------------------------------------
- 1198080
- --11g:
- HELLODBA.COM>select * from V$sgastat where name like '%strand%';
- POOL NAME BYTES
- ------------ -------------------------- ----------
- shared pool private strands 706560
- HELLODBA.COM>select trunc(value * KSPPSTVL / 100) * (65 + 4) * 1024
- 2 from (select value from v$parameter where name = 'transactions') a,
- 3 (select val.KSPPSTVL
- 4 from sys.x$ksppi nam, sys.x$ksppsv val
- 5 where nam.indx = val.indx
- 6 AND nam.ksppinm = '_log_private_parallelism_mul') b;
- TRUNC(VALUE*KSPPSTVL/100)*(65+4)*1024
- -------------------------------------
- 706560
The number of private strands is limited by two factors: the size of the logfile and the number of active transactions.
The parameter _log_private_mul specifies what percentage of the logfile space is pre-allocated to private strands, with a default of 5. From the size of the current logfile (minus the space pre-allocated to the log buffer) we can calculate how many private strands can be pre-allocated under this constraint:
- HELLODBA.COM>select bytes from v$log where status = 'CURRENT';
- BYTES
- ----------
- 52428800
- HELLODBA.COM>select trunc(((select bytes from v$log where status = 'CURRENT') - (select to_number(value) from v$parameter where name = 'log_buffer'))*
- 2 (select to_number(val.KSPPSTVL)
- 3 from sys.x$ksppi nam, sys.x$ksppsv val
- 4 where nam.indx = val.indx
- 5 AND nam.ksppinm = '_log_private_mul') / 100 / 66560)
- 6 as "calculated private strands"
- 7 from dual;
- calculated private strands
- --------------------------
- 5
- HELLODBA.COM>select count(1) "actual private strands" from x$kcrfstrand where last_buf_kcrfa = '00';
- actual private strands
- ----------------------
- 5
After a logfile switch (as with a checkpoint, the contents of all private strands must be flushed to the logfile before the switch, so in the alert log we may see the message "Private strand flush not complete" just before the log switch message; it can safely be ignored), the private strand limit is recalculated based on the size of the new logfile:
- HELLODBA.COM>alter system switch logfile;
- System altered.
- HELLODBA.COM>select bytes from v$log where status = 'CURRENT';
- BYTES
- ----------
- 104857600
- HELLODBA.COM>select trunc(((select bytes from v$log where status = 'CURRENT') - (select to_number(value) from v$parameter where name = 'log_buffer'))*
- 2 (select to_number(val.KSPPSTVL)
- 3 from sys.x$ksppi nam, sys.x$ksppsv val
- 4 where nam.indx = val.indx
- 5 AND nam.ksppinm = '_log_private_mul') / 100 / 66560)
- 6 as "calculated private strands"
- 7 from dual;
- calculated private strands
- --------------------------
- 13
- HELLODBA.COM>select count(1) "actual private strands" from x$kcrfstrand where last_buf_kcrfa = '00';
- actual private strands
- ----------------------
- 13
The parameter _log_private_parallelism_mul is used to estimate the percentage of active transactions out of the maximum number of transactions, with a default of 10. The number of private strands cannot exceed the number of active transactions.
- HELLODBA.COM>show parameter transactions
- NAME TYPE VALUE
- ------------------------------------ ----------- ------------------------------
- transactions integer 222
- transactions_per_rollback_segment integer 5
- HELLODBA.COM>select trunc((select to_number(value) from v$parameter where name = 'transactions') *
- 2 (select to_number(val.KSPPSTVL)
- 3 from sys.x$ksppi nam, sys.x$ksppsv val
- 4 where nam.indx = val.indx
- 5 AND nam.ksppinm = '_log_private_parallelism_mul') / 100 )
- 6 as "calculated private strands"
- 7 from dual;
- calculated private strands
- --------------------------
- 22
- HELLODBA.COM>select count(1) "actual private strands" from x$kcrfstrand where last_buf_kcrfa = '00';
- actual private strands
- ----------------------
- 22
Note: when private strands are pre-allocated, the smaller of the two limits above is used. However, the corresponding shared pool memory and the redo allocation latches are pre-allocated according to the (estimated) number of active transactions.
Therefore, if the logfile is large enough and _log_private_parallelism_mul roughly matches the actual percentage of active transactions, the introduction of private strands can essentially eliminate redo allocation latch contention.
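Whether that goal has actually been reached on a given system can be verified by checking per-child contention on the redo allocation latch (a supplementary query, not from the original article):
- -- remaining contention shows up as non-zero misses/sleeps on individual children
- select child#, gets, misses, sleeps
-   from v$latch_children
-  where name = 'redo allocation'
-  order by sleeps desc;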