Oracle 11g RAC: fixing "ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443"

Source: CSDN - David Dai

1. Problem Description

While installing 11.2.0.1 RAC on Oracle Linux 6.1, the root.sh script run during the Grid installation fails with the following error:

[root@rac1 bin]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
  Copying dbhome to /usr/local/bin ...
  Copying oraenv to /usr/local/bin ...
  Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-06-27 10:31:18: Parsing the host name
2012-06-27 10:31:18: Checking for super user privileges
2012-06-27 10:31:18: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
 root wallet
 root wallet cert
 root cert export
 peer wallet
 profile reader wallet
 pa wallet
 peer wallet keys
 pa wallet keys
 peer cert request
 pa cert request
 peer cert
 pa cert
 peer root cert TP
 profile reader root cert TP
 pa root cert TP
 peer pa cert TP
 pa peer cert TP
 profile reader pa cert TP
 profile reader peer cert TP
 peer user cert
 pa user cert
Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

Reportedly this error only occurs on Linux 6.1 with Oracle 11.2.0.1; 11.2.0.3 does not have the problem. The workaround is: as soon as the file /var/tmp/.oracle/npohasd has been created, root immediately runs the command:

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
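
If you would rather not type the command by hand at just the right moment, a small watcher loop can issue the read automatically. This is only a sketch, under the assumption that polling once a second is quick enough to catch the pipe; run it as root in a second shell before starting root.sh:

# Sketch: wait for the npohasd pipe to appear, then read from it once
while [ ! -e /var/tmp/.oracle/npohasd ]; do
    sleep 1
done
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1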

2. Clearing the Previous Installation

There are two approaches here: (1) clean out the Grid installation entirely, or (2) clean up only what root.sh has done.

2.1 Cleaning out GRID
Before proceeding, clean out the Grid installation first. For the detailed steps, see:
RAC uninstall notes
http://blog.csdn.net/tianlesoftware/article/details/5892225

Run the following on all nodes:

rm -rf /etc/oracle/*

rm -rf /etc/init.d/init.cssd

rm -rf /etc/init.d/init.crs

rm -rf /etc/init.d/init.crsd

rm -rf /etc/init.d/init.evmd

rm -rf /etc/rc2.d/K96init.crs

rm -rf /etc/rc2.d/S96init.crs

rm -rf /etc/rc3.d/K96init.crs

rm -rf /etc/rc3.d/S96init.crs

rm -rf /etc/rc5.d/K96init.crs

rm -rf /etc/rc5.d/S96init.crs

rm -rf /etc/oracle/scls_scr

rm -rf /etc/inittab.crs

 

rm -rf /var/tmp/.oracle/*

or

rm -rf /tmp/.oracle/*

 

Remove the ocr.loc file, which is normally under /etc/oracle:

[root@rac1 ~]# cd /etc/oracle

You have new mail in /var/spool/mail/root

[root@rac1 oracle]# ls

lastgasp ocr.loc ocr.loc.orig olr.loc olr.loc.orig oprocd

[root@rac1 oracle]# rm -rf ocr.*

 

Wipe the ASM raw devices:

[root@rac1 utl]# ll /dev/asm*

brw-rw---- 1 oracle dba 8, 17 Jun 27 09:38 /dev/asm-disk1

brw-rw---- 1 oracle dba 8, 33 Jun 27 09:38 /dev/asm-disk2

brw-rw---- 1 oracle dba 8, 49 Jun 27 09:38 /dev/asm-disk3

brw-rw---- 1 oracle dba 8, 65 Jun 27 09:38 /dev/asm-disk4

 

dd if=/dev/zero of=/dev/asm-disk1 bs=1M count=256

dd if=/dev/zero of=/dev/asm-disk2 bs=1M count=256

dd if=/dev/zero of=/dev/asm-disk3 bs=1M count=256

dd if=/dev/zero of=/dev/asm-disk4 bs=1M count=256
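
The same zeroing can also be written as a loop; this is only a convenience sketch, assuming the udev device names shown above (/dev/asm-disk1 through /dev/asm-disk4):

# Sketch: zero the first 256 MB of each ASM candidate disk in turn
for disk in /dev/asm-disk1 /dev/asm-disk2 /dev/asm-disk3 /dev/asm-disk4; do
    dd if=/dev/zero of="$disk" bs=1M count=256
done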

 

Remove the /tmp/CVU* directories:

[root@rac1 ~]# rm -rf /tmp/CVU*

 

Delete the Oracle information under /var/opt and the ORACLE_BASE directory:

 

# rm -rf /data/oracle

# rm -rf /var/opt/oracle

 

Delete the scripts placed in /usr/local/bin:

# rm -rf /usr/local/bin/dbhome

# rm -rf /usr/local/bin/oraenv

# rm -rf /usr/local/bin/coraenv

 

Remove the Grid installation directory and recreate it:

[root@rac1 oracle]# rm -rf /u01/app

 

[root@rac2 u01]# mkdir -p /u01/app/11.2.0/grid

[root@rac2 u01]# mkdir -p /u01/app/oracle/product/11.2.0/db_1

[root@rac2 u01]# chown -R oracle:oinstall /u01

[root@rac2 u01]# chmod -R 775 /u01/

 

2.2 Cleaning up the root.sh run
Use the rootcrs.pl script to deconfigure what root.sh has done, as follows:

[root@rac1 oracle]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -verbose -force
2012-06-27 14:30:17: Parsing the host name
2012-06-27 14:30:17: Checking for super user privileges
2012-06-27 14:30:17: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1
Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1
Usage: srvctl <command> <object> [<options>]
   commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
   objects: database|service|asm|diskgroup|listener|home|ons|eons
For detailed help on each command and object and its options use:
 srvctl <command> -h or
 srvctl <command> <object> -h
PRKO-2012 : nodeapps object is not supported in Oracle Restart
sh: /u01/app/11.2.0/grid/bin/clsecho: No such file or directory
Can't exec "/u01/app/11.2.0/grid/bin/clsecho": No such file or directory at /u01/app/11.2.0/grid/lib/acfslib.pm line 937.
Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1
You must kill crs processes or reboot the system to properly
cleanup the processes started by Oracle clusterware
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.0373402 s, 281 MB/s
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
You have new mail in /var/spool/mail/root
[root@rac1 oracle]#

3. Reinstalling and Working Around the Problem

When running the /u01/app/11.2.0/grid/root.sh script, open two root shell windows: one to run the script, and one to watch for the file /var/tmp/.oracle/npohasd. As soon as it appears, immediately run the following as root:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
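
Instead of re-running the command manually (as I did in the session below), one option for the monitoring window is a simple retry loop; this is only a sketch that keeps retrying until the pipe exists and the read succeeds:

# Sketch: keep trying the dd read until npohasd appears and the read succeeds
until /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1 2>/dev/null; do
    sleep 1
done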

[root@rac1 oracle]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
   ORACLE_OWNER= oracle
   ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-06-27 14:32:21: Parsing the host name
2012-06-27 14:32:21: Checking for super user privileges
2012-06-27 14:32:21: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
 root wallet
 root wallet cert
 root cert export
 peer wallet
 profile reader wallet
 pa wallet
 peer wallet keys
 pa wallet keys
 peer cert request
 pa cert request
 peer cert
 pa cert
 peer root cert TP
 profile reader root cert TP
 pa root cert TP
 peer pa cert TP
 pa peer cert TP
 profile reader pa cert TP
 profile reader peer cert TP
 peer user cert
 pa user cert

-------- Note -------------
When root.sh reaches this point, start firing the dd command repeatedly in the other window. A better method would also work; this is simply how I did it:
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
You have new mail in /var/spool/mail/root
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
-- As soon as the dd command succeeds, root.sh can finish normally.

-------- End --------------


Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-6Server-1.0.2.x86_64

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 372c42f3b2bc4f66bf8b52d2526104e3.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                   File Name        Disk group
--  -----    -----------------                   ---------        ---------
 1. ONLINE   372c42f3b2bc4f66bf8b52d2526104e3 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded

rac1    2012/06/27 14:39:25    /u01/app/11.2.0/grid/cdata/rac1/backup_20120627_143925.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 969 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 oracle]#

root.sh completed successfully here, so the workaround works.

Note:
The dd command is needed on every node where root.sh is run.
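
As an optional follow-up check (not part of the original procedure), the clusterware state on each node can be verified as root, for example:

/u01/app/11.2.0/grid/bin/crsctl check crs      # reports whether OHASD, CRS, CSS and EVM are online
/u01/app/11.2.0/grid/bin/crsctl stat res -t    # lists the registered resources and their current state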

posted @ 2014-08-21 16:58  学海无涯1999