My freshly built 10.2.0.4 RAC won't start
The 10.2.0.4 RAC I finished building only today would not come back up after a shutdown and reboot. Wonderful. Time to start filling in the holes:
[oracle@rac2 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.orcl.db    application    0/0    0/1    ONLINE    ONLINE    rac2
ora.orcl.db.cs application    0/0    0/1    ONLINE    ONLINE    rac2
ora....cl1.srv application    0/0    0/0    ONLINE    OFFLINE
ora....cl2.srv application    0/0    0/0    ONLINE    ONLINE    rac2
ora....l1.inst application    0/5    0/0    ONLINE    OFFLINE
ora....l2.inst application    0/5    0/0    ONLINE    ONLINE    rac2
ora....SM1.asm application    0/5    0/0    ONLINE    OFFLINE
ora....C1.lsnr application    0/5    0/0    ONLINE    OFFLINE
ora.rac1.gsd   application    0/5    0/0    ONLINE    OFFLINE
ora.rac1.ons   application    0/3    0/0    ONLINE    OFFLINE
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
Everything on rac1 is OFFLINE, and its VIP has failed over to rac2. First, try a startup on rac1:
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Dec 21 16:40:06 2015

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/orcl/spfileorcl.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/orcl/spfileorcl.ora
ORA-15077: could not locate ASM instance serving a required diskgroup
ORA-29701: unable to connect to Cluster Manager
From the symptoms, the instance cannot reach the spfile in ASM, and ORA-29701 says it cannot even talk to the Cluster Manager. It looks like the shared storage is not accessible on rac1.
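Before blaming the storage itself, it is worth confirming whether the clusterware stack and the ASM instance on rac1 are actually running. A minimal check, assuming the 10.2 CRS home binaries are in PATH (the commands are standard 10g tools; the grep pattern is just illustrative):

# Are the three 10.2 clusterware daemons (CSS/CRS/EVM) healthy on this node?
crsctl check crs

# Is the local ASM instance running? No asm_pmon process here means
# +DATA can never be mounted, which matches the ORA-15077 above.
ps -ef | grep asm_pmon | grep -v grep

# Clusterware's own view of ASM on this node
srvctl status asm -n rac1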
[oracle@rac1 ~]$ cd /dev/raw
[oracle@rac1 raw]$ ll
total 0
crw-r----- 1 root oinstall 162, 1 Dec 21 15:43 raw1
crw-r----- 1 root oinstall 162, 2 Dec 21 15:43 raw2
crw-r----- 1 root oinstall 162, 3 Dec 21 15:43 raw3
crw------- 1 root root     162, 4 Dec 21 15:43 raw4
crw------- 1 root root     162, 5 Dec 21 15:43 raw5
crw------- 1 root root     162, 6 Dec 21 15:43 raw6
crw------- 1 root root     162, 7 Dec 21 15:43 raw7
With permissions like these, how is rac1 supposed to cope! After the reboot the raw devices have reverted to their default root ownership, so the oracle user can no longer open them for read-write.
Check the CSS log to confirm:
more /u01/crs/oracle/product/10/app/log/rac1/cssd/ocssd.log
[    CSSD]2015-12-21 02:07:38.367 [4291541888] >TRACE:   clssnmDiskStateChange: state from 1 to 2 disk (0//dev/raw/raw3)
[    CSSD]2015-12-21 02:07:38.367 [1136974144] >TRACE:   clssnmvDPT: spawned for disk 0 (/dev/raw/raw3)
[    CSSD]2015-12-21 02:07:38.367 [1136974144] >ERROR:   Internal Error Information:
  Category: 1234
  Operation: scls_block_open
  Location: open
  Other: open failed /dev/raw/raw3
  Dep: 13

Dep: 13 is errno 13 (EACCES): ocssd, running as oracle, cannot open the voting disk /dev/raw/raw3. Set the permissions back, reboot, and the problem is gone.
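For reference, here is a sketch of what "set the permissions" amounts to. It assumes the usual 10g raw-device layout on this box (raw1/raw2 for the OCR, raw3 for the voting disk, raw4-raw7 for the ASM disk group) and the oracle:oinstall ownership used during the install; adjust the mapping and group to your own environment. Run as root:

# OCR devices: owned by root, readable by the oinstall group
chown root:oinstall   /dev/raw/raw1 /dev/raw/raw2
chmod 640             /dev/raw/raw1 /dev/raw/raw2

# Voting disk and ASM disks: must be writable by the oracle user
chown oracle:oinstall /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5 /dev/raw/raw6 /dev/raw/raw7
chmod 660             /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5 /dev/raw/raw6 /dev/raw/raw7

Raw-device ownership does not survive a reboot by itself, so the same chown/chmod also needs to be made persistent, for example by appending the commands to /etc/rc.local (or via a udev rule on newer kernels) so they run after the raw bindings are created. After that, a reboot brings rac1's resources back ONLINE.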