RAC Maintenance

The RAC management tool srvctl

node2-> srvctl
Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|instance|service|nodeapps|vip|asm|diskgroup|listener|srvpool|server|scan|scan_listener|oc4j|home|filesystem|gns
For detailed help on each command and object and its options use:
  srvctl <command> -h or
  srvctl <command> <object> -h
node2-> srvctl start -h

The SRVCTL start command starts, Oracle Clusterware enabled, non-running objects.

Usage: srvctl start database -d <db_unique_name> [-o <start_options>]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl start nodeapps [-n <node_name>] [-v]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl start oc4j [-v]
Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
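For example, to check how the devdb database used throughout this guide is registered and whether it is running (a routine check; output varies by environment):

node2-> srvctl config database -d devdb
node2-> srvctl status database -d devdb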

The RAC management tool crsctl

node1-> crsctl
Usage: crsctl <command> <object> [<options>]
    command: enable|disable|config|start|stop|relocate|replace|stat|add|delete|modify|getperm|setperm|check|set|get|unset|debug|lsmodules|query|pin|unpin
For complete usage, use:
    crsctl [-h | --help]
For detailed help on each command and object and its options use:
    crsctl <command> <object> -h  e.g. crsctl relocate resource -h
node1-> crsctl start -h
Usage:
  crsctl start resource {<resName> [...]|-w <filter>]|-all} [-n <server>] [-k <cid>] [-d <did>] [-env "env1=val1,env2=val2,..."] [-f] [-i]
     Start designated resources
where
     resName [...]    One or more blank-separated resource names
     -w               Resource filter
     -all             All resources
     -n               Server name
     -k               Resource cardinality ID
     -d               Resource degree ID
     -env             Start command environment
     -f               Force option
     -i               Fail if request cannot be processed immediately

  crsctl start crs [-excl]
     Start OHAS on this server
     -excl      Start Oracle Clusterware in exclusive mode

  crsctl start cluster [[-all]|[-n <server>[...]]]
    Start CRS stack
where
    Default         Start local server
    -all            Start all servers
    -n              Start named servers
    server [...]    One or more blank-separated server names
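As an illustration, a single registered resource can be started on a named server like so (the resource name is taken from the ora.* listing in the next section):

node1-> crsctl start resource ora.ons -n node1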

Checking service status

node1-> crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    node1       
ora.FLASH.dg   ora....up.type ONLINE    ONLINE    node1       
ora.GRIDDG.dg  ora....up.type ONLINE    ONLINE    node1       
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1       
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2       
ora.asm        ora.asm.type   ONLINE    ONLINE    node1       
ora.cvu        ora.cvu.type   ONLINE    ONLINE    node2       
ora.devdb.db   ora....se.type ONLINE    ONLINE    node1       
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    node1       
ora....SM1.asm application    ONLINE    ONLINE    node1       
ora....E1.lsnr application    ONLINE    ONLINE    node1       
ora.node1.gsd  application    OFFLINE   OFFLINE               
ora.node1.ons  application    ONLINE    ONLINE    node1       
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1       
ora....SM2.asm application    ONLINE    ONLINE    node2       
ora....E2.lsnr application    ONLINE    ONLINE    node2       
ora.node2.gsd  application    OFFLINE   OFFLINE               
ora.node2.ons  application    ONLINE    ONLINE    node2       
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2       
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    node2       
ora.ons        ora.ons.type   ONLINE    ONLINE    node1       
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2       
node1-> 
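Note that crs_stat is deprecated in 11.2 and kept only for backward compatibility; the modern equivalent of the listing above is:

node1-> crsctl status resource -t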

Checking CRS status

Checking CRS status on the local node

[root@node1 ~]# su - grid
node1-> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Checking cluster-wide CRS status

[root@node1 ~]# su - grid
node1-> crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Checking node configuration information for the cluster

[root@node1 ~]# su - grid
node1-> olsnodes
node1
node2
node1-> olsnodes -n
node1   1
node2   2
node1-> olsnodes -n -i
node1   1       node1-vip
node2   2       node2-vip
node1-> olsnodes -n -i -s
node1   1       node1-vip       Active
node2   2       node2-vip       Active
node1-> 

Viewing clusterware version information

Checking the clusterware active version

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

Checking the clusterware release version

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]

Checking the clusterware installed software version

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.3.0]

Viewing the clusterware voting disk information

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   813f0beab69c4f6bbf1bb75622cb9013 (ORCL:VOL1) [GRIDDG]
Located 1 voting disk(s).
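Since the voting disk lives in an ASM disk group here, it is managed with crsctl rather than raw-device tools; for example, it could be relocated to another disk group as follows (the target disk group name is illustrative):

crsctl replace votedisk +DATA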

Viewing the system DNS configuration

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> crsctl query dns -servers
CRS-10018: the following configuration was found on the system:
CRS-10019: There are 1 domains in search order. They are:
localdomain
CRS-10022: There are 1 name servers. They are:
192.168.1.176
CRS-10020: number of retry attempts for name lookup is: 4
CRS-10021: timeout for each name lookup is: 5
node1-> cat /etc/resolv.conf 
# Generated by NetworkManager


# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com
search localdomain
nameserver 192.168.1.176

Viewing the cluster SCAN IP information

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> srvctl config scan
SCAN name: scan-cluster.localdomain, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-cluster.localdomain/192.168.1.203

Viewing the cluster SCAN listener information

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
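Clients should connect through the SCAN rather than a node VIP; a minimal EZConnect test against this cluster might look like the following (the scott/tiger credentials are placeholders):

sqlplus scott/tiger@scan-cluster.localdomain:1521/devdb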

Viewing the clusterware OCR file information

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> ocrcheck 
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2872
         Available space (kbytes) :     259248
         ID                       :  261559184
         Device/File Name         :    +GRIDDG
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

node1-> exit
logout
[root@node1 ~]# /u01/app/11.2.0/grid/bin/ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2652
         Available space (kbytes) :     259468
         ID                       : 1902369579
         Device/File Name         : /u01/app/11.2.0/grid/cdata/node1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded
The grid user lacks sufficient privileges (the logical corruption check was bypassed above), so run ocrcheck -local as root.

Viewing the clusterware OCR backup information

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node1-> ocrconfig -showbackup

node1     2013/07/02 06:59:51     /u01/app/11.2.0/grid/cdata/scan-cluster/backup00.ocr

node1     2013/07/02 06:59:51     /u01/app/11.2.0/grid/cdata/scan-cluster/day.ocr

node1     2013/07/02 06:59:51     /u01/app/11.2.0/grid/cdata/scan-cluster/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
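PROT-25 simply means that no manual OCR backup has been taken yet; the automatic backups listed above are unaffected. A manual backup can be taken at any time as root:

[root@node1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup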

Starting and stopping the cluster database

node1-> id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
node1-> srvctl stop database -d devdb
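The database is brought back the same way; the optional -o flag passes startup options through to the instances, e.g.:

node1-> srvctl start database -d devdb -o open
node1-> srvctl status database -d devdb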

Starting and stopping cluster services

[root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

Starting and stopping a single RAC instance

srvctl start instance -d devdb -i devdb1

srvctl stop instance -d devdb -i devdb2
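srvctl stop instance also accepts the usual shutdown options through -o, for example:

srvctl stop instance -d devdb -i devdb2 -o immediate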

Lab: load balancing and failover

Lab goals: connect from different clients to observe RAC load balancing; then simulate a node failure to observe RAC failover and VIP relocation.

[root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.GRIDDG.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.devdb.db' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2677: Stop of 'ora.devdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'node1'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.GRIDDG.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
Shut down the clusterware on node1 so that node2 takes over all services.
[root@node1 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
Node1 can no longer provide service at this point; all connections will go through node2.
SQL> select inst_id,sid,serial#,paddr,username from gV$session where username is not null;

   INST_ID        SID    SERIAL# PADDR            USERNAME
---------- ---------- ---------- ---------------- ------------------------------
         2         15          1 00000000918A5F50 SYS
         2         21         27 00000000918B2790 SYS
         2         27          1 00000000918B48F0 SYS
         2         31         65 00000000918B8BB0 SCOTT
         2         35         33 00000000918C3290 SYS
Verify that the connections came in through node2.
[root@node2 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    node2       
ora.FLASH.dg   ora....up.type ONLINE    ONLINE    node2       
ora.GRIDDG.dg  ora....up.type ONLINE    ONLINE    node2       
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node2       
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2       
ora.asm        ora.asm.type   ONLINE    ONLINE    node2       
ora.cvu        ora.cvu.type   ONLINE    ONLINE    node2       
ora.devdb.db   ora....se.type ONLINE    ONLINE    node2       
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    node2       
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node2       
ora....SM2.asm application    ONLINE    ONLINE    node2       
ora....E2.lsnr application    ONLINE    ONLINE    node2       
ora.node2.gsd  application    OFFLINE   OFFLINE               
ora.node2.ons  application    ONLINE    ONLINE    node2       
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2       
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    node2       
ora.ons        ora.ons.type   ONLINE    ONLINE    node2       
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2  
Confirm that all services are now being carried by node2.
[root@node2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:70:8F:EA  
          inet addr:192.168.1.192  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe70:8fea/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2663 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1666 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:370296 (361.6 KiB)  TX bytes:255801 (249.8 KiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:70:8F:EA  
          inet addr:192.168.1.193  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:70:8F:EA  
          inet addr:192.168.1.203  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:3    Link encap:Ethernet  HWaddr 00:0C:29:70:8F:EA  
          inet addr:192.168.1.194  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 00:0C:29:70:8F:F4  
          inet addr:192.168.137.12  Bcast:192.168.137.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe70:8ff4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:217323 errors:0 dropped:0 overruns:0 frame:0
          TX packets:265784 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:105910612 (101.0 MiB)  TX bytes:175678177 (167.5 MiB)

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:70:8F:F4  
          inet addr:169.254.192.219  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:57300 errors:0 dropped:0 overruns:0 frame:0
          TX packets:57300 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:27887059 (26.5 MiB)  TX bytes:27887059 (26.5 MiB)
As shown above, all of the VIPs have failed over to node2.
[root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
Restart the cluster services on node1.
[root@node1 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    node1       
ora.FLASH.dg   ora....up.type ONLINE    ONLINE    node1       
ora.GRIDDG.dg  ora....up.type ONLINE    ONLINE    node1       
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1       
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2       
ora.asm        ora.asm.type   ONLINE    ONLINE    node1       
ora.cvu        ora.cvu.type   ONLINE    ONLINE    node2       
ora.devdb.db   ora....se.type ONLINE    ONLINE    node2       
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    node1       
ora....SM1.asm application    ONLINE    ONLINE    node1       
ora....E1.lsnr application    ONLINE    ONLINE    node1       
ora.node1.gsd  application    OFFLINE   OFFLINE               
ora.node1.ons  application    ONLINE    ONLINE    node1       
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1       
ora....SM2.asm application    ONLINE    ONLINE    node2       
ora....E2.lsnr application    ONLINE    ONLINE    node2       
ora.node2.gsd  application    OFFLINE   OFFLINE               
ora.node2.ons  application    ONLINE    ONLINE    node2       
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2       
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    node2       
ora.ons        ora.ons.type   ONLINE    ONLINE    node1       
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2    
Checking the cluster again: node1 and node2 now share the services.
SQL> select inst_id,sid,serial#,paddr,username from gV$session where username is not null;

   INST_ID        SID    SERIAL# PADDR            USERNAME
---------- ---------- ---------- ---------------- ------------------------------
         2         15          1 00000000918A5F50 SYS
         2         21         27 00000000918B2790 SYS
         2         27          1 00000000918B48F0 SYS
         2         31         65 00000000918B8BB0 SCOTT
         2         35         33 00000000918C3290 SYS
         2        161         59 00000000918A7000 SYS
         1         15          3 00000000918A5F50 SYS
         1         16          9 00000000918B2790 SYS
         1        154         19 00000000918B59A0 SCOTT
         1        156         25 00000000918B3840 SYS

10 rows selected.
Connecting again with SQL*Plus, some of the new client sessions are now served by node1; load balancing is working normally.
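For reference, a client tnsnames.ora entry that relies on the SCAN for connect-time load balancing and failover might look like this (a sketch; the host and service name are taken from this guide):

DEVDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-cluster.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = devdb)
    )
  )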

How do you check ASM disk group usage?

SQL> show user
USER is "SYS"
SQL> select name,total_mb,free_mb from v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
DATA                                10236       8100
FLASH                               10236       9821
GRIDDG                               1022        624
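The same information is available to the grid user through asmcmd, which also reports mirror-adjusted usable space:

node1-> asmcmd lsdg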

Checking the running status of the RAC database

node2-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba)
node2-> srvctl status database -d devdb
Instance devdb1 is running on node node1
Instance devdb2 is running on node node2

Checking redo log status

In RAC each instance writes to its own redo thread; in the output below, THREAD# 1 belongs to instance devdb1 and THREAD# 2 to devdb2.

SQL> set line 200
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------------- ------------ -------------------
         1          1         15   52428800        512          2 NO  CURRENT                1271454 2013/07/03 17:15:30   2.8147E+14
         2          1         14   52428800        512          2 YES INACTIVE               1271439 2013/07/03 17:15:13      1271454 2013/07/03 17:15:30
         3          2         11   52428800        512          2 NO  CURRENT                1272703 2013/07/03 17:29:20   2.8147E+14 2013/07/03 17:29:20
         4          2         10   52428800        512          2 YES INACTIVE               1271444 2013/07/03 17:15:23      1271450 2013/07/03 17:15:25

Finding which instance a user is logged in through in a RAC environment

 

SQL> select inst_id,sid,serial#,paddr,username from gv$session where username = 'SCOTT';

   INST_ID        SID    SERIAL# PADDR            USERNAME
---------- ---------- ---------- ---------------- ------------------------------
         1         31       1155 000000009E0B48F0 SCOTT
         1        166        735 000000009E0AB2C0 SCOTT

SQL>

Preparing the RAC primary database (for a physical standby)

>> The RAC primary must be in archivelog mode (consistent on node1 and node2)

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     14
Next log sequence to archive   15
Current log sequence           15
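If a RAC database is not yet in archivelog mode, a minimal 11.2 sequence looks like the following sketch (from 11.2 on it is no longer necessary to set cluster_database=FALSE first):

srvctl stop database -d devdb
-- then, on one node only:
sqlplus / as sysdba
SQL> startup mount
SQL> alter database archivelog;
SQL> shutdown immediate
srvctl start database -d devdb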

>> The RAC primary must be in force logging mode (consistent on node1 and node2)

SQL> select name,log_mode,force_logging from gv$database;

NAME      LOG_MODE     FOR
--------- ------------ ---
DEVDB     ARCHIVELOG   YES
DEVDB     ARCHIVELOG   YES
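If force logging were not yet enabled, it is a single database-wide statement and takes effect for all instances:

SQL> alter database force logging;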

>> Take a full RMAN backup on the primary (database plus archived logs)

run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
backup database format '/rman_backup/Full_%U.bak';
backup archivelog all format '/rman_backup/ARC_%U.bak';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}
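The run block above can be saved to a script and executed non-interactively, for example (the file names are illustrative):

rman target / cmdfile=/rman_backup/full_backup.rcv log=/rman_backup/full_backup.log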

>> Create a standby control file on the primary

 

RMAN> backup device type disk format '/rman_backup/standby_%U.ctl' current controlfile for standby;

 

>> Create an initialization parameter file for the physical standby on the primary (run as SYSDBA)

 

SQL> create pfile='/rman_backup/initphydb.ora' from spfile;

 

>> Update the password files so that the SYS password is the same on both nodes

-- node1
alter user sys identified by oracle;

-- node2
alter user sys identified by oracle;
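In 11.2 RAC the password file is local to each node ($ORACLE_HOME/dbs/orapw<SID>), which is why both nodes must be updated. An alternative sketch is to regenerate the file on node1 and copy it over (paths and SIDs assumed from this guide):

orapwd file=$ORACLE_HOME/dbs/orapwdevdb1 password=oracle force=y
scp $ORACLE_HOME/dbs/orapwdevdb1 node2:$ORACLE_HOME/dbs/orapwdevdb2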
