Oracle 11g RAC Node Addition and Removal Reference

References

How to Add Node/Instance or Remove Node/Instance with Oracle Clusterware and RAC (Doc ID 1332451.1)
How to Remove/Delete a Node From Grid Infrastructure Clusterware When the Node Has Failed (Doc ID 1262925.1)

Removing a Node

1. Delete the database instance (this exercise removes node stuaapp02)

  • Run on one of the surviving nodes:
[oracle@stuaapp01 rdbms]$ dbca -silent -deleteInstance -nodeList stuaapp02 -gdbName lenovo -instanceName lenovo2 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/lenovo4.log" for further details.
Verification:
SQL> select thread#,status from v$thread;

   THREAD# STATUS
---------- ------
         1 OPEN
[oracle@stuaapp01 rdbms]$  srvctl config database -d lenovo
Database unique name: lenovo
Database name: 
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/LENOVO/spfilelenovo.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: lenovo
Database instances: lenovo1
Disk Groups: DATA,FRA
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
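As a scripted double-check (a minimal sketch: the `config` variable below holds the sample line copied from the srvctl output above; on a live system you would pipe `srvctl config database -d lenovo` directly), one can parse the `Database instances` field and confirm that lenovo2 is no longer registered:

```shell
#!/bin/sh
# Sample line captured from `srvctl config database -d lenovo` above.
config="Database instances: lenovo1"

# Pull out the instance list after the colon.
instances=$(printf '%s\n' "$config" | sed -n 's/^Database instances: *//p')

case "$instances" in
  *lenovo2*) echo "lenovo2 is still registered" ;;
  *)         echo "lenovo2 removed; remaining: $instances" ;;
esac
```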

2. Remove the Oracle RAC software

  • 1 Run on the node being removed:
[oracle@stuaapp02 trace]$ srvctl disable listener -l listener -n stuaapp02
[oracle@stuaapp02 trace]$ srvctl stop listener -l listener -n stuaapp02			
  • 2 Run on the node being removed:
[oracle@stuaapp02 trace]$ cd $ORACLE_HOME/oui/bin/
[oracle@stuaapp02 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@stuaapp02 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp02" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3855 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.	
  • 3 Run on the node being removed:
[oracle@stuaapp02 bin]$ cd $ORACLE_HOME/deinstall/
[oracle@stuaapp02 deinstall]$ pwd                 
/u01/app/oracle/product/11.2.0/dbhome_1/deinstall
[oracle@stuaapp02 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...			
  • 4 Run on the remaining node(s):
[oracle@stuaapp01 rdbms]$ cd $ORACLE_HOME/oui/bin/
[oracle@stuaapp01 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@stuaapp01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp01"
Starting Oracle Universal Installer...

3. Remove the node from the clusterware

  • 1 Confirm the Grid home directory:
[grid@stuaapp01 ~]$ echo $ORACLE_HOME
/u01/app/11.2.0/grid
  • 2 Identify the node to be removed:
[grid@stuaapp01 ~]$ olsnodes -s -t
stuaapp01       Active  Unpinned
stuaapp02       Active  Unpinned
  • 3 Disable the clusterware applications and daemons

On the node being removed, run the script as the root user:

[root@stuaapp02 install]# pwd
/u01/app/11.2.0/grid/crs/install

[root@stuaapp02 install]# ./rootcrs.pl -deconfig -force 
  • 4 Delete the node from the cluster

Run on one of the remaining nodes, as the root user.

Syntax from the reference document: # crsctl delete node -n node_to_be_deleted

[root@stuaapp01 ~]# crsctl delete node -n stuaapp02
CRS-4661: Node stuaapp02 successfully deleted.
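The deletion can then be confirmed with `olsnodes -s -t`: the surviving node should be the only entry. A minimal sketch of that check (the `nodes` variable below is the expected post-deletion output, used as stand-in text; run the real olsnodes command on the cluster to verify):

```shell
#!/bin/sh
# Expected `olsnodes -s -t` output after stuaapp02 has been deleted
# (stand-in text; on a live cluster capture the real command output).
nodes="stuaapp01       Active  Unpinned"

if printf '%s\n' "$nodes" | grep -q '^stuaapp02'; then
  echo "stuaapp02 is still listed"
else
  echo "stuaapp02 no longer in the cluster"
fi
```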
  • 5 On the node being removed, run as the grid user
Syntax from the reference document: $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local

[grid@stuaapp02 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@stuaapp02 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp02" CRS=TRUE -silent -local

Confirm that the inventory.xml file has not been updated.

  • 6 On the node being removed, run as the grid user
[grid@stuaapp02 deinstall]$ pwd
/u01/app/11.2.0/grid/deinstall
[grid@stuaapp02 deinstall]$ ./deinstall -local

Note: if the -local option is omitted, the command will by default delete the configuration of the entire cluster, which is an extremely dangerous operation.
This command prompts for several manual confirmations while it runs; pay close attention!
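Because a missing -local flag deconfigures the whole cluster, a defensive wrapper can refuse to launch deinstall without it. This is a hypothetical safety sketch, not part of the Oracle tooling; `safe_deinstall` only echoes what it would run:

```shell
#!/bin/sh
# Hypothetical guard: refuse to launch deinstall unless -local is passed.
safe_deinstall() {
  case " $* " in
    *" -local "*)
      # Safe: the change is scoped to this node only.
      echo "would run: ./deinstall $*"
      ;;
    *)
      echo "refusing: -local not specified (would deconfigure the whole cluster)" >&2
      return 1
      ;;
  esac
}

safe_deinstall -local                 # accepted
safe_deinstall || echo "blocked"      # rejected, nothing destructive runs
```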

  • 7 Run on the remaining node(s)

Run as the grid user:

Syntax from the reference document: $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
[grid@stuaapp01 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@stuaapp01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp01" CRS=TRUE -silent

Run as the oracle user:

Syntax from the reference document: $ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home "CLUSTER_NODES={remaining_nodes_list}"
[oracle@stuaapp01 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin			
[oracle@stuaapp01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp01"
  • 8 Verify, running as the grid user
Syntax from the reference document: $ cluvfy stage -post nodedel -n node_list [-verbose]

[grid@stuaapp01 ~]$ cluvfy stage -post nodedel -n stuaapp02 -verbose

Adding a Node

1. Add the new node to the cluster, as the grid user

[grid@stuaapp01 bin]$ cluvfy stage -pre nodeadd -n stuaapp02
[grid@stuaapp01 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@stuaapp01 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@stuaapp01 bin]$  ./addNode.sh -silent "CLUSTER_NEW_NODES={stuaapp02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={stuaapp02-vip}"

[root@stuaapp02 oracle]# /u01/app/11.2.0/grid/root.sh
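After root.sh completes on the new node, `crsctl check crs` should report every stack component online. A minimal sketch of that health check (the `crs_status` text below is illustrative healthy output, not captured from this run; substitute the real command output on a live cluster):

```shell
#!/bin/sh
# Illustrative healthy output of `crsctl check crs` on the new node.
crs_status="CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online"

# Count lines that do NOT report "is online".
offline=$(printf '%s\n' "$crs_status" | grep -v 'is online' | wc -l)

if [ "$offline" -eq 0 ]; then
  echo "stack healthy"
else
  echo "$offline component(s) not online"
fi
```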

2. Add the Oracle database software, as the oracle user

[oracle@stuaapp01 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin

[oracle@stuaapp01 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={stuaapp02}"

3. Add the instance

Run on a node that already has a running instance:

dbca -silent -addInstance -nodeList stuaapp02 -gdbName lenovo -instanceName lenovo2 -sysDBAUserName sys -sysDBAPassword oracle

4. Add redo logs (these operations were not performed in this exercise)

SQL> alter database add logfile thread 2 group 4 '+DATA' size 50m;
SQL> alter database add logfile thread 2 group 5 '+DATA' size 50m;
SQL> alter database add logfile thread 2 group 6 '+DATA' size 50m;
SQL> alter database enable public thread 2;
SQL> create undo tablespace undotbs2 datafile '+data' size 50m;  
SQL> alter system set undo_tablespace=undotbs2 scope=spfile sid='lenovo2';
SQL> alter system set instance_number=2 scope=spfile sid='lenovo2';
SQL> alter system set cluster_database_instances=2 scope=spfile sid='*';

5. Add the instance to the cluster configuration

[oracle@stuaapp01 dbs]$ srvctl add instance -d lenovo -i lenovo2 -n stuaapp02
[oracle@stuaapp01 dbs]$ srvctl status database -d lenovo
Instance lenovo1 is running on node stuaapp01
Instance lenovo2 is running on node stuaapp02
[oracle@stuaapp01 dbs]$ srvctl stop instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ srvctl status database -d lenovo
Instance lenovo1 is running on node stuaapp01
Instance lenovo2 is not running on node stuaapp02
[oracle@stuaapp01 dbs]$ srvctl start instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ srvctl stop instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ srvctl start instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ exit
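As a final scripted check (a minimal sketch: the `status` variable holds the `srvctl status database -d lenovo` output shown above as stand-in text), both instances should report as running:

```shell
#!/bin/sh
# Sample output of `srvctl status database -d lenovo` captured above.
status="Instance lenovo1 is running on node stuaapp01
Instance lenovo2 is running on node stuaapp02"

# Count how many instances report as running.
running=$(printf '%s\n' "$status" | grep -c 'is running')
echo "$running instance(s) running"
```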

posted on 2020-08-28 14:20 by 空白葛