ASM Disk Group Expansion in an Oracle RAC Environment

Test environment:
OS: Oracle Enterprise Linux 6 (kernel 3.8.13-68.3.4.el6uek.x86_64)
Database: 11.2.0.4 Grid Infrastructure and Database

Objective:
The DATA disk group is running out of space. Add one disk to it, then verify that the cluster and database still run normally.


**********Preparation**********

1. Check the database, ASM, and Grid Infrastructure logs on each node for errors

SQL> select instance_name,host_name,startup_time,status from gv$instance;

INSTANCE_NAME HOST_NAME STARTUP_TIME STATUS
---------------- ---------------------------------------------------------------- ------------------- -----------
+ASM1 node01 2018-04-16 12:53:15 STARTED
+ASM2 node02 2018-04-16 13:00:58 STARTED

SQL> select instance_name,host_name,startup_time,status from gv$instance;

INSTANCE_NAME HOST_NAME STARTUP_TIME STATUS
---------------- ---------------------------------------------------------------- ------------------- ------------
racdb1 node01 2018-04-16 13:51:24 OPEN
racdb2 node02 2018-04-16 13:51:28 OPEN

Also review the database, ASM, and Grid Infrastructure alert logs on node01 and node02 for error messages.
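The log review can be partly scripted; a minimal sketch below scans the tail of each alert log for ORA-/CRS- errors. The paths are typical 11.2 defaults and are assumptions for node01 — adjust them for your diagnostic_dest and instance names.

```shell
# Sketch: scan the tail of each alert log for ORA-/CRS- errors.
# These paths are assumed 11.2 defaults -- adjust for your environment.
logs='/u01/app/oracle/diag/rdbms/racdb/racdb1/trace/alert_racdb1.log
/u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log'

result=$(
  for log in $logs; do
    if [ -r "$log" ]; then
      # Only the last 200 lines: recent activity around the maintenance.
      tail -n 200 "$log" | grep -E 'ORA-|CRS-' || echo "no recent errors in $log"
    else
      echo "skip (not readable): $log"
    fi
  done
)
echo "$result"
```

Run it as a user that can read both logs (or once as oracle and once as grid).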

2. Check the ASM disk groups and disks

SQL> select * from gv$asm_client order by 1,2;

   INST_ID GROUP_NUMBER INSTANCE_NAME DB_NAME STATUS SOFTWARE_VERSION COMPATIBLE_VERSION
---------- ------------ ------------------------------ -------- ------------ ------------------------------ ------------------------------
  1 1 +ASM1 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
  1 2 +ASM1 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
  1 2 racdb1 racdb CONNECTED 11.2.0.4.0 11.2.0.4.0
  2 1 +ASM2 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
  2 2 +ASM2 +ASM CONNECTED 11.2.0.4.0 11.2.0.4.0
  2 2 racdb2 racdb CONNECTED 11.2.0.4.0 11.2.0.4.0

SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb,allocation_unit_size/1024/1024 unit_mb from v$asm_diskgroup order by 1;

GROUP_NUMBER NAME STATE TYPE TOTAL_MB FREE_MB USABLE_FILE_MB UNIT_MB
------------ -------------------- ----------- ------ ---------- ---------- -------------- ----------
    1 OCR_VF MOUNTED NORMAL 6144 5218 1585 1
    2 DATA MOUNTED EXTERN 8192 5740 5740 1
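The USABLE_FILE_MB figures follow from the redundancy type: for an EXTERNAL redundancy group it simply equals FREE_MB (as in the DATA row), while for a NORMAL redundancy group it is roughly (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2. A quick sanity check against the OCR_VF row above, assuming REQUIRED_MIRROR_FREE_MB is 2048 (one disk's worth) in this setup:

```shell
# NORMAL redundancy: usable = (free - required_mirror_free) / 2.
# required_mirror_free_mb = 2048 is an assumption (one disk's worth),
# consistent with v$asm_diskgroup for this configuration.
free_mb=5218
required_mirror_free_mb=2048
usable_file_mb=$(( (free_mb - required_mirror_free_mb) / 2 ))
echo "$usable_file_mb"   # matches the 1585 reported for OCR_VF
```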

SQL> select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1,2;

GROUP_NUMBER NAME PATH MOUNT_S STATE REDUNDA TOTAL_MB FREE_MB FAILGROUP CREATE_DATE
------------ -------------------- ------------------------------ ------- -------- ------- ---------- ---------- -------------------- -------------------
    1 OCR_VF_0000 /dev/asm-diskb CACHED NORMAL UNKNOWN 2048 1739 OCR_VF_0000 2018-04-16 12:50:33
    1 OCR_VF_0001 /dev/asm-diskc CACHED NORMAL UNKNOWN 2048 1739 OCR_VF_0001 2018-04-16 12:50:33
    1 OCR_VF_0002 /dev/asm-diskd CACHED NORMAL UNKNOWN 2048 1740 OCR_VF_0002 2018-04-16 12:50:33
    2 DATA_0000 /dev/asm-diske CACHED NORMAL UNKNOWN 4096 2871 DATA_0000 2018-04-16 13:10:07
    2 DATA_0001 /dev/asm-diskf CACHED NORMAL UNKNOWN 4096 2869 DATA_0001 2018-04-16 13:10:07

3. Record the UDEV configuration at the OS level

[root@node01 ~]# cat /etc/udev/rules.d/99-oracle-asmdisks.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1ed55b97-19b7f780", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBacfab6a9-29b2beda", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB70280fd2-8b5bc96a", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBfb27a599-aec3c0fe", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB7ee690ce-ab92de29", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@node01 ~]# ll /dev/asm-disk*
brw-rw---- 1 grid asmadmin 8, 16 Apr 16 14:49 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 16 14:49 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 16 14:49 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 16 14:49 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 16 14:49 /dev/asm-diskf
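Each rule above differs only in its RESULT (the scsi_id WWID) and NAME fields, so new rules can be templated rather than hand-edited. A sketch with a hypothetical helper function (the WWID shown is the one scsi_id reports for the new disk later in this walkthrough):

```shell
# Sketch: generate a udev rule line for a new ASM disk.
# mk_asm_rule is a hypothetical helper, not part of any tool.
mk_asm_rule() {
  local wwid=$1 name=$2
  # $name inside the single-quoted format stays literal: udev expands it.
  printf 'KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
    "$wwid" "$name"
}

rule=$(mk_asm_rule 1ATA_VBOX_HARDDISK_VBee83e2f5-8316b596 asm-diskg)
echo "$rule"
```

Append the generated line to /etc/udev/rules.d/99-oracle-asmdisks.rules on every node.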

************Database-level steps************

4. Stop the application workload on the database

Not covered in detail here.

5. Check database sessions and stop the listeners

SQL> select * from v$session;

Check whether any application sessions are still connected; if not, proceed to stop the listeners.
First check the listener status on each node:

[grid@node02 ~]$ srvctl status listener -n node01
Listener LISTENER is enabled on node(s): node01
Listener LISTENER is running on node(s): node01
[grid@node02 ~]$ srvctl status listener -n node02
Listener LISTENER is enabled on node(s): node02
Listener LISTENER is running on node(s): node02

Disable listener autostart:

[grid@node02 ~]$ srvctl disable listener -n node01
[grid@node02 ~]$ srvctl disable listener -n node02

Stop the listeners:

[grid@node02 ~]$ srvctl stop listener -n node01
[grid@node02 ~]$ srvctl stop listener -n node02

Verify that the listeners are stopped and autostart is disabled:

[grid@node02 ~]$ srvctl status listener -n node01
Listener LISTENER is disabled on node(s): node01
Listener LISTENER is not running on node(s): node01
[grid@node02 ~]$ srvctl status listener -n node02
Listener LISTENER is disabled on node(s): node02
Listener LISTENER is not running on node(s): node02
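The per-node srvctl calls above generalize to a loop. A dry-run sketch that only prints the commands in order (drop the echo to execute them for real; node names are the ones from this setup):

```shell
# Dry run: print the listener shutdown sequence per node.
plan=$(
  for node in node01 node02; do
    echo "srvctl disable listener -n $node"
    echo "srvctl stop listener -n $node"
  done
)
echo "$plan"
```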

6. Shut down the database

Check the database status:

[grid@node02 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node node01
Instance racdb2 is running on node node02

Check the database configuration:

[grid@node02 ~]$ srvctl config database -d racdb
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed

Disable database autostart (switch to root and run from the Grid home's bin directory):

[root@node02 bin]# ./srvctl disable database -d racdb

Shut down the database:

[grid@node02 ~]$ srvctl stop database -d racdb

Verify the database status after shutdown:

[grid@node02 ~]$ srvctl status database -d racdb
Instance racdb1 is not running on node node01
Instance racdb2 is not running on node node02

7. Stop the clusterware

Check whether clusterware autostart is enabled on each node:

[root@node01 bin]# ./crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@node02 bin]# ./crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.

Disable autostart on each node:

[root@node01 bin]# ./crsctl disable has
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@node02 bin]# ./crsctl disable has
CRS-4621: Oracle High Availability Services autostart is disabled.

Verify that autostart is now disabled on each node:

[root@node01 bin]# ./crsctl config has
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@node02 bin]# ./crsctl config has
CRS-4621: Oracle High Availability Services autostart is disabled.

Stop the clusterware stack on each node:

[root@node01 bin]# ./crsctl stop has
[root@node02 bin]# ./crsctl stop has

Check each node for leftover background processes:

[grid@node01 ~]$ ps -ef | grep css
grid 6888 31689 0 15:12 pts/0 00:00:00 grep css
[grid@node01 ~]$ ps -ef | grep crs
grid 6893 31689 0 15:12 pts/0 00:00:00 grep crs
[grid@node01 ~]$ ps -ef | grep asm
grid 6895 31689 0 15:12 pts/0 00:00:00 grep asm
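Note that the grep command itself still appears in each listing above. A bracketed pattern such as [c]ss filters it out, because the grep process's own command line contains the literal text "[c]ss", which the regex does not match — so an empty result really means no residual processes. A small demonstration on canned ps lines:

```shell
# ps output for the grep process shows the literal "[c]ss"; the regex
# [c]ss matches "css" but not "[c]ss", so the grep line is excluded.
line_grep='grid 6888 31689 0 15:12 pts/0 00:00:00 grep [c]ss'
line_asm='grid 4321     1 0 12:53 ?     00:00:01 asm_pmon_+ASM1'

printf '%s\n' "$line_grep" | grep -q '[c]ss' && hit1=yes || hit1=no
printf '%s\n' "$line_asm"  | grep -q '[a]sm' && hit2=yes || hit2=no
echo "$hit1 $hit2"
```

In practice: ps -ef | grep '[c]ss' (and likewise for [c]rs and [a]sm).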

***************OS-level steps***************

8. Add the shared disk at the OS level, test, and reboot

Not covered in detail here.

9. Add the new disk via UDEV on each node

[root@node01 ~]# cat /etc/udev/rules.d/99-oracle-asmdisks.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1ed55b97-19b7f780", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBacfab6a9-29b2beda", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB70280fd2-8b5bc96a", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBfb27a599-aec3c0fe", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB7ee690ce-ab92de29", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBee83e2f5-8316b596", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@node01 ~]# ll /dev/asm-disk*
brw-rw---- 1 grid asmadmin 8, 16 Apr 16 15:18 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 16 15:18 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 16 15:18 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 16 15:18 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 16 15:18 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Apr 16 15:23 /dev/asm-diskg

Make sure every node sees the same set of disks, with identical names and WWIDs.
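One way to verify this is to compare the scsi_id output for every device on every node. A dry-run sketch that only prints the ssh invocations to run (device and node names are the ones from this setup; drop the echo to execute, then diff the per-node results):

```shell
# Dry run: print one scsi_id check per node per device.
checks=$(
  for node in node01 node02; do
    for dev in /dev/asm-diskb /dev/asm-diskc /dev/asm-diskd \
               /dev/asm-diske /dev/asm-diskf /dev/asm-diskg; do
      echo "ssh $node /sbin/scsi_id --whitelisted --replace-whitespace --device=$dev"
    done
  done
)
echo "$checks"
```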

************Database-level steps************

10. Start the clusterware

[root@node01 ~]# cd /u01/app/ghome/grid/bin/
[root@node01 bin]# ./crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@node02 ~]# cd /u01/app/ghome/grid/bin/
[root@node02 bin]# ./crsctl start has
CRS-4123: Oracle High Availability Services has been started.

11. Verify that all cluster components started normally

Not covered in detail here.

12. Expand the ASM disk group

Check whether ASM can see the new candidate disk (note that disk group numbers can change across a restart; DATA is now group 1):

SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb,allocation_unit_size/1024/1024 unit_mb from v$asm_diskgroup order by 1;

GROUP_NUMBER NAME STATE TYPE TOTAL_MB FREE_MB USABLE_FILE_MB UNIT_MB
------------ -------------------- ----------- ------ ---------- ---------- -------------- ----------
    1 DATA MOUNTED EXTERN 8192 5740 5740 1
    2 OCR_VF MOUNTED NORMAL 6144 5218 1585 1

SQL> select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1,2;

GROUP_NUMBER NAME PATH MOUNT_S STATE REDUNDA TOTAL_MB FREE_MB FAILGROUP CREATE_DA
------------ -------------------- ------------------------------ ------- -------- ------- ---------- ---------- ------------------------------ ---------
    0 /dev/asm-diskg CLOSED NORMAL UNKNOWN 0 0
    1 DATA_0000 /dev/asm-diske CACHED NORMAL UNKNOWN 4096 2871 DATA_0000 16-APR-18
    1 DATA_0001 /dev/asm-diskf CACHED NORMAL UNKNOWN 4096 2869 DATA_0001 16-APR-18
    2 OCR_VF_0000 /dev/asm-diskb CACHED NORMAL UNKNOWN 2048 1739 OCR_VF_0000 16-APR-18
    2 OCR_VF_0001 /dev/asm-diskc CACHED NORMAL UNKNOWN 2048 1739 OCR_VF_0001 16-APR-18
    2 OCR_VF_0002 /dev/asm-diskd CACHED NORMAL UNKNOWN 2048 1740 OCR_VF_0002 16-APR-18

Add the disk to the DATA disk group:

SQL> alter diskgroup DATA add disk '/dev/asm-diskg' rebalance power 10;

Diskgroup altered.

If the rebalance takes a while, monitor its progress with a query such as: select operation, state, power, sofar, est_work, est_minutes from v$asm_operation;

Verify the result:

SQL> select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1,2;

GROUP_NUMBER NAME PATH MOUNT_S STATE REDUNDA TOTAL_MB FREE_MB FAILGROUP CREATE_DA
------------ -------------------- ------------------------------ ------- -------- ------- ---------- ---------- ------------------------------ ---------
    1 DATA_0000 /dev/asm-diske CACHED NORMAL UNKNOWN 4096 3276 DATA_0000 16-APR-18
    1 DATA_0001 /dev/asm-diskf CACHED NORMAL UNKNOWN 4096 3277 DATA_0001 16-APR-18
    1 DATA_0002 /dev/asm-diskg CACHED NORMAL UNKNOWN 4096 3281 DATA_0002 16-APR-18
    2 OCR_VF_0000 /dev/asm-diskb CACHED NORMAL UNKNOWN 2048 1739 OCR_VF_0000 16-APR-18
    2 OCR_VF_0001 /dev/asm-diskc CACHED NORMAL UNKNOWN 2048 1739 OCR_VF_0001 16-APR-18
    2 OCR_VF_0002 /dev/asm-diskd CACHED NORMAL UNKNOWN 2048 1740 OCR_VF_0002 16-APR-18

6 rows selected.

SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb,allocation_unit_size/1024/1024 unit_mb from v$asm_diskgroup order by 1;


GROUP_NUMBER NAME STATE TYPE TOTAL_MB FREE_MB USABLE_FILE_MB UNIT_MB
------------ -------------------- ----------- ------ ---------- ---------- -------------- ----------
    1 DATA MOUNTED EXTERN 12288 9834 9834 1
    2 OCR_VF MOUNTED NORMAL 6144 5218 1585 1

13. Start the database

Autostart for the clusterware, listeners, and database was disabled earlier, so re-enable and start them first, then confirm:

[grid@node01 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node node01
Instance racdb2 is running on node node02
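Since the disable steps were taken during the maintenance window, remember to undo them all. A dry-run sketch that prints the restore sequence (commands are echoed, not executed; run the crsctl line as root from the Grid home's bin directory on both nodes):

```shell
# Dry run: print the sequence that reverses the earlier disable steps.
steps=$(
  echo "srvctl enable database -d racdb"
  for node in node01 node02; do
    echo "srvctl enable listener -n $node"
    echo "srvctl start listener -n $node"
  done
  echo "crsctl enable has   # as root, on node01 and node02"
)
echo "$steps"
```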

At this point, the expansion is complete.

posted @ 2018-04-17 15:47 清酒丶