
Oracle RAC Startup and Shutdown Steps

The Oracle RAC cluster needs to be shut down for maintenance. The environment in this post is 11gR2 (the output below shows 11.2.0.4).

 

I. Shutdown

Procedure

-- Confirm the cluster's db_unique_name; in this case it is orcl

SQL> show parameter name

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

cell_offloadgroup_name               string

db_file_name_convert                 string

db_name                              string      orcl

db_unique_name                       string      orcl

global_names                         boolean     FALSE

instance_name                        string      orcl1

lock_name_space                      string

log_file_name_convert                string

processor_group_name                 string

service_names                        string      orcl

 

-- Confirm the cluster's instance names

SQL> select instance_name,status from gv$instance;

 

INSTANCE_NAME    STATUS

---------------- ------------

orcl2            OPEN

orcl1            OPEN

# The two instances are orcl1 and orcl2

 

-- Stop the listener on node 1 so that applications can no longer connect through it

[grid@orcldb1 ~]$ srvctl stop listener -n orcldb1

 

# Check the listener status with the crs_stat -t -v command or the srvctl status listener command

[grid@orcldb1 ~]$ srvctl status listener -n orcldb1

Listener LISTENER is enabled on node(s): orcldb1

Listener LISTENER is not running on node(s): orcldb1

 

[grid@orcldb2 ~]$ srvctl status listener

Listener LISTENER is enabled 

Listener LISTENER is running on node(s): orcldb1,orcldb2

 

# Make sure no sessions are still running on the instance before shutdown; if you have a downtime window, it is advisable to kill the LOCAL=NO (dedicated server) sessions once

[oracle@orcldb1 ~]$ ps -ef | grep -i local=no | wc -l

1

[oracle@orcldb1 ~]$ ps -ef | grep -i local=no | cut -c 10-15 | xargs kill -9

kill 7803: No such process

# The "No such process" error is harmless here: the pipeline also matched the grep process itself, whose PID no longer exists by the time kill runs.
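The `cut -c 10-15` step depends on fixed `ps` column widths and also picks up the grep command itself. A small awk helper avoids both problems; this is only a sketch, and `local_no_pids` is a made-up name for illustration:

```shell
# Extract PIDs of dedicated-server (LOCAL=NO) Oracle processes from
# `ps -ef` output read on stdin; the awk pattern excludes awk itself.
local_no_pids() {
  awk '/LOCAL=NO/ && !/awk/ { print $2 }'
}

# Review the list first, then kill:
#   ps -ef | local_no_pids
#   ps -ef | local_no_pids | xargs -r kill -9
```

With GNU xargs, `-r` skips the kill entirely when no PID matches.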

 

1. Shut down the database:

Run the srvctl command as the grid (or oracle) user.

Syntax: srvctl stop database -d dbname [-o immediate]

Purpose: shuts down all instances of dbname in one step

[oracle@orcldb1 ~]$ srvctl stop database -d orcl   # stops the instances on all nodes

 

Check the status:

[grid@orcldb1 ~]$ srvctl status database -d orcl    

Instance orcl1 is not running on node orcldb1

Instance orcl2 is not running on node orcldb2

 

-- Shut down the database instance on cluster node 1 (note: -i takes the instance name, orcl1, not the node name)

[grid@orcldb1 ~]$ srvctl stop instance -o immediate -d orcl -i orcl1

[oracle@orcldb1 ~]$ sqlplus / as sysdba

 

SQL*Plus: Release 11.2.0.4.0 Production on Thu Apr 13 12:41:30 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

# Confirm the database instance has been shut down

 

 

[grid@orcldb1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....DISK.dg ora....up.type 0/5 0/ ONLINE ONLINE orcldb1
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE orcldb1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE orcldb2
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE orcldb2
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE orcldb1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE orcldb2
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE orcldb1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE orcldb2
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE orcldb1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE orcldb1
ora....E1.lsnr application 0/5 0/0 OFFLINE OFFLINE
ora....de1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de1.ons application 0/3 0/0 ONLINE ONLINE orcldb1
ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE orcldb1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE orcldb2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE orcldb2
ora....de2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de2.ons application 0/3 0/0 ONLINE ONLINE orcldb2
ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE orcldb2
ora.orcl.db ora....se.type 0/2 0/1 ONLINE ONLINE orcldb2
ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE orcldb1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE orcldb2

 

# Notice that before the instance on node 1 was stopped, the ora.orcl.db resource was hosted on node 1; after stopping it, ora.orcl.db has failed over to node 2.

 

Or check with the srvctl status database command:
[grid@orcldb1 ~]$ srvctl status database -d orcl
Instance orcl1 is not running on node orcldb1
Instance orcl2 is running on node orcldb2

 


2. Stop HAS (High Availability Services); this must be done as root
[root@orcldb1 oracle]# cd /u01/grid/11.2.0/grid/bin   (GI_HOME/bin)
[root@orcldb1 bin]# ./crsctl stop has -f
[root@orcldb1 bin]# ./crsctl stop crs -f
These commands only stop the stack on the local node, so they must be run on every node of the RAC; the same applies to startup. `has` and `crs` are equivalent here.
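Since `crsctl stop crs` only acts on the local node, a small loop can drive all nodes from one host. This is only a sketch: the node list, GI home path, and passwordless root ssh are assumptions for illustration.

```shell
#!/bin/sh
# Print (or run) `crsctl stop crs` for every RAC node. Swap the echo for
# the commented ssh line to actually execute it as root on each node.
GI_HOME=${GI_HOME:-/u01/grid/11.2.0/grid}
NODES=${NODES:-"orcldb1 orcldb2"}

stop_crs_all() {
  for node in $NODES; do
    echo "ssh root@$node $GI_HOME/bin/crsctl stop crs"
    # ssh "root@$node" "$GI_HOME/bin/crsctl stop crs"
  done
}

stop_crs_all
```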

 



 


-- Stop the ASM service on cluster node 1
[grid@orcldb1 ~]$ srvctl stop asm -n orcldb1
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.CLUSTER_DISK.dg', but the force option was not specified
# Stopping the ASM instance fails here: because ora.CLUSTER_DISK.dg depends on it, the force option -f would be required. With -f, however, the database is brought down with shutdown abort, which is risky in production.
# In 11gR2, ASM is managed under cssd, so cssd must be stopped before the ASM service can be stopped.

 



CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orcldb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'orcldb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'orcldb1'
CRS-2673: Attempting to stop 'ora.CLUSTER_DISK.dg' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.orcldb1.vip' on 'orcldb1'
CRS-2677: Stop of 'ora.orcldb1.vip' on 'orcldb1' succeeded
CRS-2672: Attempting to start 'ora.orcldb1.vip' on 'orcldb2'
CRS-2677: Stop of 'ora.registry.acfs' on 'orcldb1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'orcldb1' succeeded
CRS-2676: Start of 'ora.orcldb1.vip' on 'orcldb2' succeeded
CRS-2677: Stop of 'ora.CLUSTER_DISK.dg' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orcldb1'
CRS-2677: Stop of 'ora.asm' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'orcldb1'
CRS-2677: Stop of 'ora.ons' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'orcldb1'
CRS-2677: Stop of 'ora.net1.network' on 'orcldb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'orcldb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.asm' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'orcldb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'orcldb1'
CRS-2677: Stop of 'ora.crf' on 'orcldb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'orcldb1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'orcldb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'orcldb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orcldb1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'orcldb1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'orcldb1'
CRS-2677: Stop of 'ora.cssd' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'orcldb1'
CRS-2677: Stop of 'ora.gipcd' on 'orcldb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'orcldb1'
CRS-2677: Stop of 'ora.gpnpd' on 'orcldb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orcldb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

 

 

# As the following crs_stat output shows, node 1's cluster services have completely failed over to node 2.

[grid@orcldb2 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------

ora....DISK.dg ora....up.type 0/5 0/ ONLINE ONLINE orcldb2
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE orcldb2
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE orcldb2
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE orcldb2
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE orcldb2
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE orcldb2
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE orcldb2
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE orcldb2
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE orcldb2
ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE orcldb2
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE orcldb2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE orcldb2
ora....de2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora....de2.ons application 0/3 0/0 ONLINE ONLINE orcldb2
ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE orcldb2
ora.orcl.db ora....se.type 0/2 0/1 ONLINE ONLINE orcldb2
ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE orcldb2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE orcldb2

 

 

-- Confirm the ASM service has been stopped
[grid@orcldb2 ~]$ srvctl status asm -n orcldb1
ASM is not running on orcldb1


[grid@orcldb2 ~]$ srvctl status asm -n orcldb2
ASM is running on orcldb2

 


3. Stop the node's cluster services; this must be done as root:
[root@orcldb1 oracle]# cd /u01/grid/11.2.0/grid/bin
[root@orcldb1 bin]# ./crsctl stop cluster        # stop cluster services on this node
[root@orcldb1 bin]# ./crsctl stop cluster -all   # stop cluster services on all nodes


You can also specify which nodes to stop:
[root@orcldb1 bin]# crsctl stop cluster -n orcldb1 orcldb2
CRS-2677: Stop of 'ora.cssd' on 'orcldb1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'orcldb2' succeeded
.......... log output omitted ..........


If you want to stop everything with a single command, the commands above can be used: with no arguments they act only on the current node; with -n or -all they act on the specified nodes.
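Putting the shutdown steps together, one node's shutdown can be sketched as a single ordered script. This is a dry-run sketch: the names and paths are the ones used in this post, and `run`/`stop_rac_node` are made-up helpers, not real utilities.

```shell
#!/bin/sh
# Ordered shutdown for one RAC node: listener -> instance -> CRS stack.
# DRY_RUN=1 (default) only prints the plan; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}
GI_HOME=${GI_HOME:-/u01/grid/11.2.0/grid}

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

stop_rac_node() {
  node=$1 inst=$2
  run srvctl stop listener -n "$node"                       # 1. cut off new connections
  run srvctl stop instance -d orcl -i "$inst" -o immediate  # 2. stop the instance
  run "$GI_HOME/bin/crsctl" stop crs                        # 3. as root on that node
}

stop_rac_node orcldb1 orcl1
```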

 

 

-- Verify that all Oracle-related processes on cluster node 1 have stopped
[grid@orcldb1 ~]$ ps -ef |grep -i ora
root 1555 1 0 12:03 ? 00:00:10 /u01/app/11.2.0/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0/grid/tfa/orcldb1/tfa_home/jar/RATFA.jar:/u01/app/11.2.0/grid/tfa/orcldb1/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0/grid/tfa/orcldb1/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0/grid/tfa/orcldb1/tfa_home
root 1775 1704 0 12:03 ? 00:00:01 hald-addon-storage: polling /dev/sr0 (every 2 sec)
grid 11980 11916 0 13:14 pts/0 00:00:00 grep -i ora

 

[grid@orcldb1 ~]$ ps -ef |grep -i asm
grid 11988 11916 0 13:14 pts/0 00:00:00 grep -i asm

 


4. Check the cluster process status
[root@rac1 bin]# crsctl check cluster
Detailed output:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

 

[root@rac1 bin]# crs_stat -t -v

 

 

Check only the local node's cluster status:
crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

 

 

-- Make sure cluster node 2 is still accessible
SQL> select instance_name,status from gv$instance;

 

INSTANCE_NAME STATUS
---------------- ------------
orcl2 OPEN

 

II. Startup

 

Start the Oracle services on node 1


Startup steps:
[root@orcldb1 ~]# /u01/app/11.2.0/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

 

-- Confirm that the CRS stack has started successfully on node 1 (it normally starts automatically after an OS reboot)

 

If you want to disable the automatic startup, the following commands can be used:

Disable CRS autostart:
$ORACLE_HOME/bin/crsctl disable crs

Disable database autostart:
$ORACLE_HOME/bin/srvctl disable database -d orcl

 


[grid@orcldb1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------

ora....DISK.dg ora....up.type ONLINE ONLINE orcldb1
ora.DATA.dg ora....up.type ONLINE ONLINE orcldb1
ora....ER.lsnr ora....er.type ONLINE ONLINE orcldb2
ora....N1.lsnr ora....er.type ONLINE ONLINE orcldb2
ora.asm ora.asm.type ONLINE ONLINE orcldb1
ora.cvu ora.cvu.type ONLINE ONLINE orcldb2
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE orcldb1
ora.oc4j ora.oc4j.type ONLINE ONLINE orcldb2
ora.ons ora.ons.type ONLINE ONLINE orcldb1
ora....SM1.asm application ONLINE ONLINE orcldb1
ora....E1.lsnr application OFFLINE OFFLINE
ora....de1.gsd application OFFLINE OFFLINE
ora....de1.ons application ONLINE ONLINE orcldb1
ora....de1.vip ora....t1.type ONLINE ONLINE orcldb1
ora....SM2.asm application ONLINE ONLINE orcldb2
ora....E2.lsnr application ONLINE ONLINE orcldb2
ora....de2.gsd application OFFLINE OFFLINE
ora....de2.ons application ONLINE ONLINE orcldb2
ora....de2.vip ora....t1.type ONLINE ONLINE orcldb2
ora.orcl.db ora....se.type ONLINE ONLINE orcldb2
ora....ry.acfs ora....fs.type ONLINE ONLINE orcldb1
ora.scan1.vip ora....ip.type ONLINE ONLINE orcldb2


-- Make sure the ASM service is running on both nodes
[grid@orcldb1 ~]$ srvctl status asm
ASM is running on orcldb2,orcldb1


-- Start the database instance on cluster node 1 (again, -i takes the instance name orcl1)
[grid@orcldb1 ~]$ srvctl start instance -d orcl -i orcl1


-- Verify it started successfully
[grid@orcldb1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node orcldb1
Instance orcl2 is running on node orcldb2


-- After startup, verify the cluster services: make sure each service has started and is running on its cluster node
[grid@orcldb1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....DISK.dg ora....up.type ONLINE ONLINE orcldb1
ora.DATA.dg ora....up.type ONLINE ONLINE orcldb1
ora....ER.lsnr ora....er.type ONLINE ONLINE orcldb1
ora....N1.lsnr ora....er.type ONLINE ONLINE orcldb2
ora.asm ora.asm.type ONLINE ONLINE orcldb1
ora.cvu ora.cvu.type ONLINE ONLINE orcldb2
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE orcldb1
ora.oc4j ora.oc4j.type ONLINE ONLINE orcldb2
ora.ons ora.ons.type ONLINE ONLINE orcldb1
ora....SM1.asm application ONLINE ONLINE orcldb1
ora....E1.lsnr application ONLINE ONLINE orcldb1
ora....de1.gsd application OFFLINE OFFLINE
ora....de1.ons application ONLINE ONLINE orcldb1
ora....de1.vip ora....t1.type ONLINE ONLINE orcldb1
ora....SM2.asm application ONLINE ONLINE orcldb2
ora....E2.lsnr application ONLINE ONLINE orcldb2
ora....de2.gsd application OFFLINE OFFLINE
ora....de2.ons application ONLINE ONLINE orcldb2
ora....de2.vip ora....t1.type ONLINE ONLINE orcldb2
ora.orcl.db ora....se.type ONLINE ONLINE orcldb1
ora....ry.acfs ora....fs.type ONLINE ONLINE orcldb1
ora.scan1.vip ora....ip.type ONLINE ONLINE orcldb2
[grid@orcldb1 ~]$


-- The shutdown and startup procedure on the other node follows the same sequence as on node 1

 

Startup sequence (start the CRS cluster -> start the database)
1. Start HAS/CRS
Start on a single node:
[root@orcldb2 ~]# crsctl start has
[root@orcldb2 ~]# crsctl start crs
[root@orcldb2 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


Start on all nodes:
[root@orcldb1 bin]# crsctl start cluster -n orcldb1 orcldb2
CRS-4123: Oracle High Availability Services has been started.
[root@orcldb1 bin]# crsctl start cluster -all
[root@orcldb2 ~]# crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

This command starts all RAC CRS-related processes in the background.

[root@orcldb2 ~]# crs_stat -t -v
CRS-0184: Cannot communicate with the CRS daemon.
Because `crsctl start has` launches many CRS processes, startup is slow; on my machine it took about 5 minutes. Until the stack is fully up, the error above is reported. Wait a while and the command below will then show that all CRS-related services have started.
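Instead of re-running `crs_stat` by hand while the stack comes up, a polling loop can do the waiting. This is a sketch that assumes `crsctl check crs` exits non-zero while the stack is still down; the timeout and interval values are arbitrary illustrations.

```shell
# Poll `crsctl check crs` until it succeeds or a timeout expires.
# CRSCTL is overridable so the loop can be exercised without a real cluster.
CRSCTL=${CRSCTL:-crsctl}

wait_for_crs() {
  timeout=${1:-600} interval=${2:-15} elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if $CRSCTL check crs >/dev/null 2>&1; then
      echo "CRS is up after ${elapsed}s"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "CRS still not up after ${timeout}s" >&2
  return 1
}
```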

[root@orcldb2 ~]# crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------

ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE orcldb1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE orcldb1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE orcldb2
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE orcldb1
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE orcldb1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE orcldb1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE orcldb1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE orcldb1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE orcldb1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE orcldb1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE orcldb1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE orcldb1
ora.orcldb1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.orcldb1.ons application 0/3 0/0 ONLINE ONLINE orcldb1
ora.orcldb1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE orcldb1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE orcldb2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE orcldb2
ora.orcldb2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.orcldb2.ons application 0/3 0/0 ONLINE ONLINE orcldb2
ora.orcldb2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE orcldb2
ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE orcldb1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE orcldb2
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE orcldb1
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE orcldb1


Notes:
ora.gsd is OFFLINE by default if there is no 9i database in the cluster; it is kept only for backward compatibility with 9i, and its OFFLINE state does not affect the operation or performance of CRS.
ora.oc4j is OFFLINE in 11.2.0.1 because Database Workload Management (DBWLM) is unavailable there; it is only used from 11.2.0.2 onward.
Both can be ignored in 11gR2 RAC.


2. Start the database:
Run the srvctl command as the oracle user:
Syntax: srvctl start|stop|status database -d dbname [-o immediate]
Purpose: starts all instances of dbname in one step

[oracle@rac1 ~]$ srvctl start database -d orcl   # starts the instances on all nodes
Then check the status:
[oracle@rac1 ~]$ srvctl status database -d orcl
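The startup sequence (CRS stack first, then the database) can likewise be sketched as one ordered dry-run script; `run` and `start_rac` are illustrative names, not real utilities.

```shell
#!/bin/sh
# Ordered RAC startup: CRS stack on each node, then the database once.
# DRY_RUN=1 (default) only prints the plan; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

start_rac() {
  run crsctl start crs                  # as root, repeat on every node
  run srvctl start database -d orcl     # as oracle/grid, run once
  run srvctl status database -d orcl    # verify both instances are up
}

start_rac
```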

 

Other useful cluster commands:
3. List resources by full name and check their status
crsctl status resource -t
crsctl status resource


4. Common srvctl commands
Manage a specific instance of dbname:
srvctl start|stop|status instance -d <db_name> -i <instance_name>


5. Show the configuration and status of all RAC instances
srvctl status|config database -d <db_name>


6. Manage the node applications (VIP, GSD, listener, ONS) on a node
srvctl start|stop|status nodeapps -n <node_name>


7. Manage the ASM service
srvctl start|stop|status|config asm -n <node_name> [-i <asm_inst_name>] [-o <oracle_home>]
srvctl config asm -a
srvctl status asm -a


8. Get all environment information:
srvctl getenv database -d <db_name> [-i <instance_name>]


9. Set a global environment variable:
srvctl setenv database -d <db_name> -t LANG=en


10. Remove an existing database's information from the OCR
srvctl remove database -d <db_name>


11. Add a database instance to the OCR:
srvctl add instance -d <db_name> -i <instance_name> -n <node_name>


12. Check the listener status
srvctl status listener
srvctl config listener -a


SCAN configuration information
srvctl config scan


SCAN listener status information
srvctl status scan

 

Summary: crsctl is a cluster-level command that can start, stop, and otherwise manage all cluster resources as a whole;
srvctl is a service-level command that starts, stops, and manages individual service resources.

 

Appendix: detailed srvctl help for start and stop
[root@rac2 ~]# srvctl start -h

 

The SRVCTL start command starts Oracle Clusterware-enabled, non-running objects.

 

Usage: srvctl start database -d <db_unique_name> [-o <start_options>] [-n <node_name>]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>]] [-o <start_options>]
Usage: srvctl start nodeapps [-n <node_name>] [-g] [-v]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl start oc4j [-v]
Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl start gns [-l <log_level>] [-n <node_name>] [-v]
Usage: srvctl start cvu [-n <node_name>]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h

 

posted @ 2021-07-13 16:24  菜鸟大明儿哥