Data Guard Physical Standby - RAC Primary to RAC Standby using a second network (Doc ID 1349977.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.4 [Release 11.2]
Information in this document applies to any platform.
GOAL
The primary goal of this how-to is to establish a Data Guard configuration in which both the primary site and the standby site are RAC.
Please Note: The cluster nodes each have 3 network interfaces: 1 used for the cluster interconnect, 1 for the public network and 1 for log shipping purposes between the primary and standby sites. These interfaces must be in place prior to starting the configuration steps detailed in this document.
SOLUTION
Cluster Configuration Information
The following details the configuration prior to establishing the Data Guard broker environment for a RAC primary to RAC standby, using a second network and set of network interfaces in each GRID cluster and RAC database site.
RAC Primary Nodes
Public IP Addresses/Interfaces:
  Interface: eth0
    Node 1: 10.187.115.125  grid1vm1.au.oracle.com
            10.187.115.139  grid1vm1-vip.au.oracle.com
    Node 2: 10.187.115.126  grid1vm2.au.oracle.com
            10.187.115.140  grid1vm2-vip.au.oracle.com
  Single Client Access Name (SCAN):
            10.187.115.133  grid1vmscan1.au.oracle.com

Data Guard IP Addresses/Interfaces:
  Interface: eth2
    Node 1: 192.168.11.125  grid1vm1-dg.au.oracle.com
            192.168.11.225  grid1vm1-dg-vip.au.oracle.com
    Node 2: 192.168.11.126  grid1vm2-dg.au.oracle.com
            192.168.11.226  grid1vm2-dg-vip.au.oracle.com

Database Details:
  db_name           db112a
  db_unique_name    db112a
  instance_names    db112a1, db112a2
RAC Standby Nodes
Public IP Addresses/Interfaces:
  Interface: eth0
    Node 1: 10.187.115.128  grid2vm1.au.oracle.com
            10.187.115.142  grid2vm1-vip.au.oracle.com grid2vm1-vip
    Node 2: 10.187.115.129  grid2vm2.au.oracle.com
            10.187.115.143  grid2vm2-vip.au.oracle.com grid2vm2-vip
  Single Client Access Name (SCAN):
            10.187.115.136  grid2vmscan1.au.oracle.com

Data Guard IP Addresses/Interfaces:
  Interface: eth2
    Node 1: 192.168.11.128  grid2vm1-dg
            192.168.11.228  grid2vm1-dg-vip.au.oracle.com
    Node 2: 192.168.11.129  grid2vm2-dg
            192.168.11.229  grid2vm2-dg-vip.au.oracle.com

Database Details:
  db_name           db112a
  db_unique_name    db112a_stb
  instance_names    db112a1, db112a2
Listener Configuration Details
The way listeners operate in an 11.2 environment has changed dramatically compared to previous releases. Some of the changes relevant to Data Guard are the following:
- The introduction of the SCAN and SCAN listener(s) (there can be up to 3 SCAN listeners) for handling client connections.
- The listener configuration details are now held in both the GRID cluster registry (OCR) and the GRID_HOME/network/admin directory.
- RAC-enabled listeners must run out of the 11.2 GRID home.
- RAC-enabled listeners must have virtual IP addresses in order to be configured.
- The network the listeners are to be configured against must be registered in the OCR as a resource.
- Administration of the listeners should always be performed through netca.
The SCAN Listener Configuration
The SCAN listener and SCAN VIP can reside on any node of the cluster and move from node to node as nodes leave the cluster.
The Primary Site:
In the example below we can see the SCAN listener at the Primary Site is running on the node grid1vm2:
[oracle@grid1vm2 ~]$ ps -ef | grep tns
oracle 13775     1  0 Aug18 ?  00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_DG -inherit
oracle 14737     1  0 Aug12 ?  00:00:08 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 18728     1  0 Aug19 ?  00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

[oracle@grid1vm2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.LISTENER.lsnr
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
...
ora.net1.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
...
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       grid1vm2
ora.db112a.db
      1        ONLINE  ONLINE       grid1vm1                 Open
      2        ONLINE  ONLINE       grid1vm2                 Open
...
ora.grid1vm1.vip
      1        ONLINE  ONLINE       grid1vm1
...
ora.grid1vm2.vip
      1        ONLINE  ONLINE       grid1vm2
...
ora.scan1.vip
      1        ONLINE  ONLINE       grid1vm2
As a result, the status of the SCAN listener and its service listing can only be checked from the node it is running on. You will not be able to access the SCAN listener's details from a node it is NOT running on.
[oracle@grid1vm2 ~]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 20-AUG-2011 01:07:59
Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                19-AUG-2011 00:04:39
Uptime                    1 days 1 hr. 3 min. 19 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/grid1vm2/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.133)(PORT=1521)))
Services Summary...
Service "DB112A.au.oracle.com" has 2 instance(s).
  Instance "db112a1", status READY, has 2 handler(s) for this service...
  Instance "db112a2", status READY, has 2 handler(s) for this service...
...
The command completed successfully

[oracle@grid1vm2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node grid1vm2

[oracle@grid1vm2 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
The Standby site:
[oracle@grid2vm1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[oracle@grid2vm1 ~]$ ps -ef | grep tns
oracle  3202     1  0 Aug11 ?      00:00:06 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle  8051     1  0 04:49 ?      00:00:03 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_DG -inherit
oracle 10572     1  0 05:26 ?      00:00:01 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 29833 29788  0 23:18 pts/1  00:00:00 grep tns

[oracle@grid2vm1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.LISTENER.lsnr
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
...
ora.net1.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
...
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       grid2vm1
...
ora.db112a_stb.db
      1        ONLINE  INTERMEDIATE grid2vm1                 Mounted (Closed)
      2        ONLINE  INTERMEDIATE grid2vm2                 Mounted (Closed)
...
ora.grid2vm1.vip
      1        ONLINE  ONLINE       grid2vm1
...
ora.grid2vm2.vip
      1        ONLINE  ONLINE       grid2vm2
...
ora.scan1.vip
      1        ONLINE  ONLINE       grid2vm1

[oracle@grid2vm1 ~]$ lsnrctl status listener_scan1

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 19-AUG-2011 23:19:09
Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                11-AUG-2011 20:18:32
Uptime                    8 days 3 hr. 0 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/11.2.0.2/grid/log/diag/tnslsnr/grid2vm1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.136)(PORT=1521)))
Services Summary...
Service "db112a_stb.au.oracle.com" has 2 instance(s).
  Instance "db112a1", status READY, has 1 handler(s) for this service...
  Instance "db112a2", status READY, has 1 handler(s) for this service...
...
The command completed successfully

[oracle@grid2vm1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node grid2vm1

[oracle@grid2vm1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
The Local Listeners Configuration
Each node in the cluster has a node listener which is bound to the VIP and public host IP.
The Primary Site:
[oracle@grid1vm2 ~]$ ps -ef | grep tns
oracle 14737     1  0 Aug12 ?  00:00:09 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 18728     1  0 Aug19 ?  00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

[oracle@grid1vm2 ~]$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3E:6D:71:40
          inet addr:10.187.115.126  Bcast:10.187.115.255  Mask:255.255.254.0
          inet6 addr: fe80::216:3eff:fe6d:7140/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6457652 errors:0 dropped:0 overruns:0 frame:0
          TX packets:173719 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:840105961 (801.1 MiB)  TX bytes:93958502 (89.6 MiB)

eth0:3    Link encap:Ethernet  HWaddr 00:16:3E:6D:71:40
          inet addr:10.187.115.140  Bcast:10.187.115.255  Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

[oracle@grid1vm2 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 20-AUG-2011 01:34:24
Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                12-AUG-2011 01:26:54
Uptime                    8 days 0 hr. 7 min. 30 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/grid1vm2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.126)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.140)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "DB112A.au.oracle.com" has 1 instance(s).
  Instance "db112a2", status READY, has 2 handler(s) for this service...
...
The command completed successfully
The Standby Site:
[oracle@grid2vm1 ~]$ ps -ef | grep tns
oracle  3202     1  0 Aug11 ?  00:00:06 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 10572     1  0 05:26 ?  00:00:01 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit

[oracle@grid2vm1 ~]$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3E:F4:35:04
          inet addr:10.187.115.128  Bcast:10.187.115.255  Mask:255.255.254.0
          inet6 addr: fe80::216:3eff:fef4:3504/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8149993 errors:0 dropped:0 overruns:0 frame:0
          TX packets:951427 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2737221120 (2.5 GiB)  TX bytes:4646486293 (4.3 GiB)

eth0:3    Link encap:Ethernet  HWaddr 00:16:3E:F4:35:04
          inet addr:10.187.115.142  Bcast:10.187.115.255  Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

[oracle@grid2vm1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 19-AUG-2011 23:27:03
Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                19-AUG-2011 05:26:38
Uptime                    0 days 18 hr. 0 min. 25 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/grid2vm1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.128)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.142)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "db112a_stb.au.oracle.com" has 1 instance(s).
  Instance "db112a1", status READY, has 2 handler(s) for this service...
...
The command completed successfully

The RDBMS TNSNAMES.ora

[oracle@grid1vm1 admin]$ cat tnsnames.ora
# tnsnames.ora.grid1vm1 Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/tnsnames.ora.grid1vm1
# Generated by Oracle configuration tools.

DB112A_PRM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = grid1vmscan1.au.oracle.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db112a.au.oracle.com)
    )
  )

DB112A_STB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vmscan1.au.oracle.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = db112a_stb.au.oracle.com)
    )
  )
Building the Data Guard Broker and Log Shipping Network
The process for establishing the environment as a Data Guard configuration using a second network other than the public network is as follows.
1. Establish the new entries in the hosts file for the second network interfaces and required VIPs. The following hosts file also includes the SCAN and publicly resolvable hostnames and VIPs, which would normally be resolved through the DNS. This hosts file is the same across all cluster nodes and both the primary and standby sites.
# Do not remove the following line, or various programs
[root@ovmsrv1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       ovmsrv1.au.oracle.com ovmsrv1 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

#######################
# Public IP Addresses
#######################
#
# Oracle Virtual Servers - Hypervisors
#
10.187.115.123  ovmsrv1.au.oracle.com ovmsrv1
10.187.115.124  ovmsrv2.au.oracle.com ovmsrv2
#
# Virtual Machines
#
10.187.115.125  grid1vm1.au.oracle.com grid1vm1
10.187.115.126  grid1vm2.au.oracle.com grid1vm2
10.187.115.127  grid1vm3.au.oracle.com grid1vm3
10.187.115.128  grid2vm1.au.oracle.com grid2vm1
10.187.115.129  grid2vm2.au.oracle.com grid2vm2
10.187.115.130  grid2vm3.au.oracle.com grid2vm3
10.187.115.131  grid1filer1.au.oracle.com grid1filer1
10.187.115.132  grid2filer1.au.oracle.com grid2filer1
10.187.115.133  grid1vmscan1.au.oracle.com grid1vmscan1
10.187.115.134  grid1vmscan2.au.oracle.com grid1vmscan2
10.187.115.135  grid1vmscan3.au.oracle.com grid1vmscan3
10.187.115.136  grid2vmscan1.au.oracle.com grid2vmscan1
10.187.115.137  grid2vmscan2.au.oracle.com grid2vmscan2
10.187.115.138  grid2vmscan3.au.oracle.com grid2vmscan3
10.187.115.139  grid1vm1-vip.au.oracle.com grid1vm1-vip
10.187.115.140  grid1vm2-vip.au.oracle.com grid1vm2-vip
10.187.115.141  grid1vm3-vip.au.oracle.com grid1vm3-vip
10.187.115.142  grid2vm1-vip.au.oracle.com grid2vm1-vip
10.187.115.143  grid2vm2-vip.au.oracle.com grid2vm2-vip
10.187.115.144  grid2vm3-vip.au.oracle.com grid2vm3-vip

######################
# Private IP Addresses
######################
#
# Interconnect
#
192.168.10.123  ovmsrv1-prv
192.168.10.124  ovmsrv2-prv
192.168.10.125  grid1vm1-prv
192.168.10.126  grid1vm2-prv
192.168.10.127  grid1vm3-prv
192.168.10.128  grid2vm1-prv
192.168.10.129  grid2vm2-prv
192.168.10.130  grid2vm3-prv
192.168.10.131  gridfiler1-prv
192.168.10.132  gridfiler2-prv

##################################
# Data Guard Log Shipping Network
##################################
#
# Data Guard Private IP's
#
192.168.11.125  grid1vm1-dg
192.168.11.126  grid1vm2-dg
192.168.11.127  grid1vm3-dg
192.168.11.128  grid2vm1-dg
192.168.11.129  grid2vm2-dg
192.168.11.130  grid2vm3-dg
#
# Data Guard VIP's
#
192.168.11.225  grid1vm1-dg-vip.au.oracle.com grid1vm1-dg-vip
192.168.11.226  grid1vm2-dg-vip.au.oracle.com grid1vm2-dg-vip
192.168.11.227  grid1vm3-dg-vip.au.oracle.com grid1vm3-dg-vip
192.168.11.228  grid2vm1-dg-vip.au.oracle.com grid2vm1-dg-vip
192.168.11.229  grid2vm2-dg-vip.au.oracle.com grid2vm2-dg-vip
192.168.11.230  grid2vm3-dg-vip.au.oracle.com grid2vm3-dg-vip
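Because the same hosts file must be replicated to every node at both sites, a typo (duplicate IP or duplicate hostname) propagates everywhere. The following is a minimal sketch of a pre-distribution sanity check; the inline fragment is a sample, so point the check at your real /etc/hosts instead.

```shell
# Sketch: check a hosts fragment for duplicate IPs or duplicate hostnames
# before copying it to all cluster nodes. The fragment below is a sample.
hosts_fragment='192.168.11.125 grid1vm1-dg
192.168.11.126 grid1vm2-dg
192.168.11.225 grid1vm1-dg-vip.au.oracle.com grid1vm1-dg-vip
192.168.11.226 grid1vm2-dg-vip.au.oracle.com grid1vm2-dg-vip'

# duplicate IP addresses (column 1)
dup_ips=$(printf '%s\n' "$hosts_fragment" | awk '{print $1}' | sort | uniq -d)

# duplicate hostnames (columns 2..n)
dup_names=$(printf '%s\n' "$hosts_fragment" |
  awk '{for (i = 2; i <= NF; i++) print $i}' | sort | uniq -d)

[ -z "$dup_ips" ] && [ -z "$dup_names" ] && echo "hosts fragment OK"
```

Run the same check on the merged public, interconnect and Data Guard sections before pushing the file out.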
2. Add the new network configuration to the GRID environment. The new network will run on eth2 using the network subnet 192.168.11.0. In this case the same subnet is used at both the primary and standby sites, as there is no router and the connection is facilitated through a network switch only.
At the Primary site, on one of the cluster nodes, add the new network resource using srvctl.
[root@grid1vm1 ~]# srvctl add network -k 2 -S 192.168.11.0/255.255.255.0/eth2 -w static -v
Successfully added Network.

[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.net1.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
ora.net2.network
               OFFLINE OFFLINE      grid1vm1
               OFFLINE OFFLINE      grid1vm2
...
At the Standby site, on one of the cluster nodes, add the new network resource using srvctl.
[root@grid2vm1 ~]# srvctl add network -k 2 -S 192.168.11.0/255.255.255.0/eth2 -w static -v

[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.net1.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
ora.net2.network
               OFFLINE OFFLINE      grid2vm1
               OFFLINE OFFLINE      grid2vm2
...
3. Start the new network resources.
From the Primary Site:
[root@grid1vm1 ~]# crsctl start res ora.net2.network
CRS-2672: Attempting to start 'ora.net2.network' on 'grid1vm1'
CRS-2672: Attempting to start 'ora.net2.network' on 'grid1vm2'
CRS-2676: Start of 'ora.net2.network' on 'grid1vm2' succeeded
CRS-2676: Start of 'ora.net2.network' on 'grid1vm1' succeeded

[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.net1.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
ora.net2.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
...
ora.db112a.db
      1        ONLINE  ONLINE       grid1vm1                 Open
      2        ONLINE  ONLINE       grid1vm2                 Open
From the Standby Site:

[root@grid2vm1 ~]# crsctl start res ora.net2.network
CRS-2672: Attempting to start 'ora.net2.network' on 'grid2vm1'
CRS-2672: Attempting to start 'ora.net2.network' on 'grid2vm2'
CRS-2676: Start of 'ora.net2.network' on 'grid2vm2' succeeded
CRS-2676: Start of 'ora.net2.network' on 'grid2vm1' succeeded

[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.net1.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
ora.net2.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
...
ora.db112a_stb.db
      1        ONLINE  INTERMEDIATE grid2vm1                 Mounted (Closed)
      2        ONLINE  INTERMEDIATE grid2vm2                 Mounted (Closed)
4. Add the new VIP addresses to the GRID environment.
[root@grid1vm1 ~]# tail /etc/hosts
# Data Guard VIP's
192.168.11.225  grid1vm1-dg-vip
192.168.11.226  grid1vm2-dg-vip
192.168.11.227  grid1vm3-dg-vip
192.168.11.228  grid2vm1-dg-vip
192.168.11.229  grid2vm2-dg-vip
192.168.11.230  grid2vm3-dg-vip
At the Primary site, from one of the cluster nodes:
[root@grid1vm1 ~]# srvctl add vip -n grid1vm1 -A 192.168.11.225/255.255.255.0 -k 2
[root@grid1vm1 ~]# srvctl add vip -n grid1vm2 -A 192.168.11.226/255.255.255.0 -k 2
At the Standby site, from one of the cluster nodes:
[root@grid2vm1 ~]# srvctl add vip -n grid2vm1 -A 192.168.11.228/255.255.255.0 -k 2
[root@grid2vm1 ~]# srvctl add vip -n grid2vm2 -A 192.168.11.229/255.255.255.0 -k 2
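A VIP added with a netmask that does not match the network registered with `srvctl add network -k 2` will fail to start. The following is a small sketch, using the addresses from this document, of how to verify ahead of time that each new DG VIP falls inside the registered subnet/netmask:

```shell
# Sketch: confirm each DG VIP belongs to the subnet registered as network 2
# (192.168.11.0/255.255.255.0 in this document).
ip_to_int() {
  IFS=. read -r o1 o2 o3 o4 <<EOF
$1
EOF
  echo $(( (o1 << 24) + (o2 << 16) + (o3 << 8) + o4 ))
}

subnet=$(ip_to_int 192.168.11.0)
mask=$(ip_to_int 255.255.255.0)

for vip in 192.168.11.225 192.168.11.226 192.168.11.228 192.168.11.229; do
  v=$(ip_to_int "$vip")
  [ $(( v & mask )) -eq "$subnet" ] && echo "$vip is on the DG subnet"
done
```

All four VIPs above print as being on the DG subnet; an interconnect address such as 192.168.10.125 would not.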
5. Start the new VIP resources on each cluster node.
At the Primary site, from one of the cluster nodes:
[root@grid1vm1 ~]# srvctl start vip -i grid1vm1-dg-vip
[root@grid1vm1 ~]# srvctl start vip -i grid1vm2-dg-vip

[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.net1.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
ora.net2.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
...
ora.db112a.db
      1        ONLINE  ONLINE       grid1vm1                 Open
      2        ONLINE  ONLINE       grid1vm2                 Open
ora.grid1vm1-dg-vip.vip
      1        ONLINE  ONLINE       grid1vm1
ora.grid1vm1.vip
      1        ONLINE  ONLINE       grid1vm1
ora.grid1vm2-dg-vip.vip
      1        ONLINE  ONLINE       grid1vm2
ora.grid1vm2.vip
At the Standby site, from one of the cluster nodes:
[root@grid2vm1 ~]# srvctl start vip -i grid2vm1-dg-vip
[root@grid2vm1 ~]# srvctl start vip -i grid2vm2-dg-vip

[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.net1.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
ora.net2.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
...
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
...
ora.db112a_stb.db
      1        ONLINE  INTERMEDIATE grid2vm1                 Mounted (Closed)
      2        ONLINE  INTERMEDIATE grid2vm2                 Mounted (Closed)
ora.grid2vm1-dg-vip.vip
      1        ONLINE  ONLINE       grid2vm1
ora.grid2vm1.vip
      1        ONLINE  ONLINE       grid2vm1
ora.grid2vm2-dg-vip.vip
      1        ONLINE  ONLINE       grid2vm2
ora.grid2vm2.vip
      1        ONLINE  ONLINE       grid2vm2
6. Create the new listener for handling Data Guard broker based connections.
From the first node of the Primary site, as the owner of the GRID home, use netca to add the new Data Guard specific listener against the new network. In this case the protocol is TCP and port 1555 will be used. The new listener name will be listener_dg.
[oracle@grid1vm1 ~]$ netca

Oracle Net Services Configuration:
Oracle Net Configuration Assistant is launched from Grid Infrastructure home. Network configuration will be clusterwide.
Configuring Listener:LISTENER_DG
    grid1vm1...
    grid1vm2...
Listener configuration complete.
Oracle Net Listener Startup:
    Listener started successfully.
Oracle Net Services configuration successful. The exit code is 0

[oracle@grid1vm1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.LISTENER.lsnr
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
ora.LISTENER_DG.lsnr
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
...
ora.net1.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
ora.net2.network
               ONLINE  ONLINE       grid1vm1
               ONLINE  ONLINE       grid1vm2
...
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       grid1vm1
ora.db112a.db
      1        ONLINE  ONLINE       grid1vm1                 Open
      2        ONLINE  ONLINE       grid1vm2                 Open
ora.grid1vm1-dg-vip.vip
      1        ONLINE  ONLINE       grid1vm1
ora.grid1vm1.vip
      1        ONLINE  ONLINE       grid1vm1
ora.grid1vm2-dg-vip.vip
      1        ONLINE  ONLINE       grid1vm2
ora.grid1vm2.vip
      1        ONLINE  ONLINE       grid1vm2
ora.oc4j
      1        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       grid1vm1
From the first node of the Standby site, as the owner of the GRID home, use netca to add the new Data Guard specific listener against the new network. In this case the protocol is TCP and port 1555 will be used. The new listener name will be listener_dg.
[oracle@grid2vm1 ~]$ netca

Oracle Net Services Configuration:
Oracle Net Configuration Assistant is launched from Grid Infrastructure home. Network configuration will be clusterwide.
Configuring Listener:LISTENER_DG
    grid2vm1
    grid2vm2
Listener configuration complete.
Oracle Net Listener Startup:
    Listener started successfully.
Oracle Net Services configuration successful. The exit code is 0

[oracle@grid2vm1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
...
ora.LISTENER.lsnr
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
ora.LISTENER_DG.lsnr
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
...
ora.net1.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
ora.net2.network
               ONLINE  ONLINE       grid2vm1
               ONLINE  ONLINE       grid2vm2
...
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       grid2vm1
ora.db112a_stb.db
      1        ONLINE  ONLINE       grid2vm1                 Open
      2        ONLINE  ONLINE       grid2vm2                 Open
ora.grid2vm1-dg-vip.vip
      1        ONLINE  ONLINE       grid2vm1
ora.grid2vm1.vip
      1        ONLINE  ONLINE       grid2vm1
ora.grid2vm2-dg-vip.vip
      1        ONLINE  ONLINE       grid2vm2
ora.grid2vm2.vip
      1        ONLINE  ONLINE       grid2vm2
...
ora.scan1.vip
      1        ONLINE  ONLINE       grid2vm1
7. Log shipping between the sites must use the default database service, to ensure the service remains available for log shipping throughout switchover operations. The default database service is registered with the listeners using the db_unique_name value of the primary and standby sites.
For Example:
The Primary Site listener used in log shipping will show:
[grid@grid1vm1 ~]$ lsnrctl services listener_dg

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 19-SEP-2014 11:01:07
Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
Services Summary...
Service "db112a.au.oracle.com" has 2 instance(s).
  Instance "db112a1", status READY, has 2 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:6 refused:0 state:ready
         LOCAL SERVER
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.225)(PORT=1555)))
  Instance "db112a2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:1 refused:0 state:ready
         REMOTE SERVER
         (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.226)(PORT=1555)))
The Standby Site listener used in log shipping will show:
[grid@grid2vm1 ~]$ lsnrctl services listener_dg

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 20-SEP-2014 05:29:40
Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
Services Summary...
Service "db112a_stb.au.oracle.com" has 2 instance(s).
  Instance "db112a1", status READY, has 2 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:5 refused:0 state:ready
         LOCAL SERVER
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.228)(PORT=1555)))
  Instance "db112a2", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:2 refused:0 state:ready
         REMOTE SERVER
         (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.229)(PORT=1555)))
8. Add the new TNS alias entries required for log shipping and listener registrations for each instance in each of the clusters associated with the primary and standby sites. These need to be added to the tnsnames.ora in the home from which the RDBMS instances are running. This tnsnames.ora will need to be replicated across all cluster nodes and both the primary and standby sites.
# tnsnames.ora.grid1vm1 Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/tnsnames.ora.grid1vm1 # Generated by Oracle configuration tools. # Public network interface based connections via the SCAN DB112A_PRM = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = grid1vmscan1.au.oracle.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = db112a.au.oracle.com) ) ) DB112A_STB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vmscan1.au.oracle.com)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = db112a_stb.au.oracle.com) ) ) # Private network interface alias' used to facilitate shipping and switchover operations DB112A_PRM_DGMGRL = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm1-dg-vip.au.oracle.com)(PORT = 1555)) (ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm2-dg-vip.au.oracle.com)(PORT = 1555)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = db112a_dgmgrl.au.oracle.com) ) ) DB112A_STB_DGMGRL = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm1-dg-vip.au.oracle.com)(PORT = 1555)) (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm2-dg-vip.au.oracle.com)(PORT = 1555)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = db112a_stb_dgmgrl.au.oracle.com) ) ) DB112A_STB_SHIP = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm1-dg-vip.au.oracle.com)(PORT = 1555)) (ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm2-dg-vip.au.oracle.com)(PORT = 1555)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = db112a_stb.au.oracle.com) ) ) DB112A_PRM_SHIP = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm1-dg-vip.au.oracle.com)(PORT = 1555)) (ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm2-dg-vip.au.oracle.com)(PORT = 1555)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = db112a.au.oracle.com) ) ) # # LISTENER_NETWORKS Aliases used to implement service registrations with both local and remote listeners across the cluster nodes # and data guard environment. 
GRID1VM1_LOCAL_NET1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.125)(PORT = 1521)))
GRID1VM1_LOCAL_NET2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.225)(PORT = 1555)))
GRID1VM2_LOCAL_NET1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.126)(PORT = 1521)))
GRID1VM2_LOCAL_NET2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.226)(PORT = 1555)))

GRID1_REMOTE_NET2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.225)(PORT = 1555)))
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.226)(PORT = 1555))))

GRID2VM1_LOCAL_NET1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.128)(PORT = 1521)))
GRID2VM1_LOCAL_NET2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.228)(PORT = 1555)))
GRID2VM2_LOCAL_NET1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.129)(PORT = 1521)))
GRID2VM2_LOCAL_NET2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.229)(PORT = 1555)))

GRID2_REMOTE_NET2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.228)(PORT = 1555)))
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.229)(PORT = 1555))))
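Before moving on, it is worth confirming that the new aliases resolve and that the second-network endpoints are reachable from every node. A minimal check, assuming the entries above have been copied to every node and the second-network listeners are up on port 1555, is to tnsping each alias from each site:

```shell
# Run as the oracle user on each node, with the RDBMS home environment set (. oraenv)
tnsping DB112A_PRM_SHIP
tnsping DB112A_STB_SHIP
tnsping DB112A_PRM_DGMGRL
tnsping DB112A_STB_DGMGRL
```

Each tnsping should report OK against one of the -dg-vip addresses; a failure here usually indicates a copy of tnsnames.ora that was not distributed, or a second-network VIP that is not yet online.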
9. In order for the RDBMS instances to be automatically restarted after a switchover operation, the static connection endpoints and services must be set in each listener.ora for the new listener, LISTENER_DG.
The following entry needs to be placed in the listener.ora in the GRID home on every node of the cluster at each site.
For Primary site node 1 and instance 1:
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db112a_dgmgrl.au.oracle.com)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
      (SID_NAME = db112a1)
    )
  )
For Primary site node 2 and instance 2:
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db112a_dgmgrl.au.oracle.com)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
      (SID_NAME = db112a2)
    )
  )
For Standby site node 1 and instance 1:
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db112a_stb_dgmgrl.au.oracle.com)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
      (SID_NAME = db112a1)
    )
  )
For Standby site node 2 and instance 2:
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db112a_stb_dgmgrl.au.oracle.com)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
      (SID_NAME = db112a2)
    )
  )
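After editing listener.ora on each node, the listener must pick up the static registrations. A sketch of the check, assuming the second-network listener resource is named listener_dg (as per Note 1063571.1) and runs from the GRID home:

```shell
# As the grid user on each node
lsnrctl reload listener_dg
lsnrctl status listener_dg
# The Services Summary in the status output should now include the static
# *_dgmgrl service for the local SID (e.g. db112a_dgmgrl.au.oracle.com / db112a1)
```

The static entry is what allows DGMGRL to connect to an instance that is down (during switchover) and restart it, so this check is worth doing on all four nodes.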
10. Set the listener_networks parameter to ensure the instances register the broker-related services with the appropriate listeners within the cluster.
On the Primary site, set the listener_networks parameter for each cluster instance, using the appropriate aliases for each instance.
SQL> ALTER SYSTEM SET listener_networks=
  '((NAME=network1)(LOCAL_LISTENER=GRID1VM1_LOCAL_NET1)(REMOTE_LISTENER=grid1vmscan1:1521))',
  '((NAME=network2)(LOCAL_LISTENER=GRID1VM1_LOCAL_NET2)(REMOTE_LISTENER=GRID1_REMOTE_NET2))'
  SCOPE=BOTH SID='db112a1';

SQL> ALTER SYSTEM SET listener_networks=
  '((NAME=network1)(LOCAL_LISTENER=GRID1VM2_LOCAL_NET1)(REMOTE_LISTENER=grid1vmscan1:1521))',
  '((NAME=network2)(LOCAL_LISTENER=GRID1VM2_LOCAL_NET2)(REMOTE_LISTENER=GRID1_REMOTE_NET2))'
  SCOPE=BOTH SID='db112a2';
On the Standby site, set the listener_networks parameter for each cluster instance, using the appropriate aliases for each instance.
SQL> ALTER SYSTEM SET listener_networks=
  '((NAME=network1)(LOCAL_LISTENER=GRID2VM1_LOCAL_NET1)(REMOTE_LISTENER=grid2vmscan1:1521))',
  '((NAME=network2)(LOCAL_LISTENER=GRID2VM1_LOCAL_NET2)(REMOTE_LISTENER=GRID2_REMOTE_NET2))'
  SCOPE=BOTH SID='db112a1';

SQL> ALTER SYSTEM SET listener_networks=
  '((NAME=network1)(LOCAL_LISTENER=GRID2VM2_LOCAL_NET1)(REMOTE_LISTENER=grid2vmscan1:1521))',
  '((NAME=network2)(LOCAL_LISTENER=GRID2VM2_LOCAL_NET2)(REMOTE_LISTENER=GRID2_REMOTE_NET2))'
  SCOPE=BOTH SID='db112a2';
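Dynamic registration against the second-network listeners can take up to a minute; it can be forced and then verified. A minimal sketch, again assuming the second-network listener is named listener_dg:

```sql
-- On each instance: confirm the parameter and force immediate re-registration
SQL> show parameter listener_networks
SQL> ALTER SYSTEM REGISTER;
```

Then, from the operating system, `lsnrctl services listener_dg` on each node should show the instance's services registered dynamically on the network2 endpoints in addition to the static *_dgmgrl entries.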
11. Set the Data Guard Broker specific parameters in the instances across the cluster at each site associated with the Data Guard Broker configuration. In RAC the broker configuration files must be placed on cluster-aware storage, ideally ASM-based storage in 11.2.
From the Primary site, set the location for the broker configuration files and enable the broker.
SQL> ALTER SYSTEM SET dg_broker_config_file1='+DATA/db112a/dg_broker1.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_config_file2='+FRA/db112a/dg_broker2.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=SPFILE SID='*';
From the Standby site, set the location for the broker configuration files and enable the broker.
SQL> ALTER SYSTEM SET dg_broker_config_file1='+DATA/db112a_stb/dg_broker1.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_config_file2='+DATA/db112a_stb/dg_broker2.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=SPFILE SID='*';
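Because the parameters above are set with SCOPE=SPFILE, they take effect only after the instances restart. Once restarted, a quick sanity check (a sketch) confirms the broker process is running on each instance:

```sql
-- Confirm the broker parameters and that the DMON background process has started
SQL> show parameter dg_broker
SQL> SELECT pname FROM v$process WHERE pname = 'DMON';
```

If the DMON row is missing, check the alert log; a common cause is an ASM directory in dg_broker_config_file1/2 that does not exist or is not writable by the instance.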
12. Create the Broker configuration. From node 1 of the Primary site, with the environment set to the RDBMS home and the SID for instance 1, perform the following:
[root@grid1vm1 trace]# su - oracle
[oracle@grid1vm1 ~]$ . oraenv
ORACLE_SID = [oracle] ? db112a1
The Oracle base has been set to /u01/app/oracle
[oracle@grid1vm1 ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.2.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> create configuration dg112 as
> primary database is db112a connect identifier is db112a_prm_ship;
Configuration "dg112" created with primary database "db112a".
DGMGRL> add database db112a_stb as connect identifier is db112a_stb_ship maintained as physical;
Database "db112a_stb" added.
13. Set the StaticConnectIdentifier property in the Broker configuration for each instance at each site.
DGMGRL> edit instance db112a2 on database db112a set property StaticConnectIdentifier =
  '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid1vm2-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a2)(SERVER=DEDICATED)))';

DGMGRL> edit instance db112a1 on database db112a set property StaticConnectIdentifier =
  '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid1vm1-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a1)(SERVER=DEDICATED)))';

DGMGRL> edit instance db112a2 on database db112a_stb set property StaticConnectIdentifier =
  '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid2vm2-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_stb_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a2)(SERVER=DEDICATED)))';

DGMGRL> edit instance db112a1 on database db112a_stb set property StaticConnectIdentifier =
  '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid2vm1-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_stb_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a1)(SERVER=DEDICATED)))';
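Each StaticConnectIdentifier must match a static service registered in step 9, on the second-network VIP and port. A quick way to confirm the properties were stored as intended (a sketch, run from the same DGMGRL session) is:

```sql
-- Display per-instance properties, including StaticConnectIdentifier
DGMGRL> show instance verbose db112a1 on database db112a;
DGMGRL> show instance verbose db112a1 on database db112a_stb;
```

A mismatch between this property and the listener.ora GLOBAL_DBNAME/SID_NAME pair is a common reason instance restarts fail during switchover.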
14. Enable the Broker configuration
DGMGRL> enable configuration;
It will take a little time for the configuration to be enabled, as the broker must establish the broker configuration files and post them to the primary and standby sites.
If all TNS aliases and connect strings used in the configuration, and the parameters within the RDBMS instances at each site, are set correctly, then the configuration should be enabled successfully without error.
DGMGRL> show configuration;

Configuration - dg112

  Protection Mode: MaxPerformance
  Databases:
    db112a     - Primary database
    db112a_stb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database verbose db112a

Database - db112a

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    db112a1
    db112a2

  Properties:
    DGConnectIdentifier             = 'db112a_prm_ship'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'MANUAL'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName(*)
    StaticConnectIdentifier(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value

Database Status:
SUCCESS

DGMGRL> show database verbose db112a_stb

Database - db112a_stb

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    db112a1 (apply instance)
    db112a2

  Properties:
    DGConnectIdentifier             = 'db112a_stb_ship'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'MANUAL'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName(*)
    StaticConnectIdentifier(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value

Database Status:
SUCCESS
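As a final check that redo is actually travelling over the second network, one approach (a sketch) is to force a log switch on the primary and then confirm receipt and apply on the standby:

```sql
-- On the primary
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

-- On the standby: the highest applied sequence per thread should advance shortly after
SQL> SELECT thread#, MAX(sequence#)
     FROM v$archived_log
     WHERE applied = 'YES'
     GROUP BY thread#;
```

If the sequence does not advance, re-check the SHIP aliases and the LogXptStatus monitor property in DGMGRL, which reports per-instance transport errors.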
REFERENCES
NOTE:880017.1 - 11gR2 RAC DB switchover using DG broker
NOTE:1063571.1 - How to Configure A Second Listener on a Separate Network in 11.2 Grid Infrastructure
NOTE:730361.1 - Changing the network used by the Data Guard Broker Version 10.2 for redo transport