Adding a Node to Oracle 10g RAC: A Detailed Walkthrough
Goal:
The current environment has two RAC nodes, RAC1 and RAC2. We will now add a third node, RAC3.
Overview: adding a node to an existing Oracle 10g RAC roughly involves the following steps:
1. Configure the hardware and operating system environment on the new server node
2. Add the node to the Cluster
3. Install the Oracle Database software on the new node
4. Configure the LISTENER for the new node
5. Add an instance for the new node through DBCA
Note: when preparing the operating system environment on the new server node:
1. This includes configuring the public network and private network interfaces the node will use. Do not forget to add the existing nodes' network information to the hosts file, and copy the complete hosts file to the nodes already in the Cluster so that every name resolves everywhere.
2. User equivalence for the oracle (or other DBA) user must be extended to the new node. Append the contents of the id_rsa.pub and id_dsa.pub files generated on the new node to the authorized_keys file, and make sure every node holds an identical copy of that authorized_keys file.
3. Adjust the kernel parameters on the new node so that they satisfy the memory requirements of the instance that will run there and the UDP network parameters recommended for a 10g RAC Cluster.
4. Adjust the system time on the new node to match the other nodes, or configure an NTP service.
5. Make sure CRS is running normally on all existing Cluster nodes, otherwise addNode will report errors.
6. Configure the installation directories for the clusterware and database software; the paths must match those on the existing nodes. (A cluvfy check covering most of these points is sketched right after this list.)
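Most of these prerequisites can be verified with cluvfy from an existing node before running addNode.sh. A minimal sketch, assuming the Clusterware home is /u01/app/oracle/product/10.2.0/db_1 as used later in this article:
[oracle@rac1 ~]$ cd /u01/app/oracle/product/10.2.0/db_1/bin
[oracle@rac1 bin]$ ./cluvfy stage -post hwos -n rac3 -verbose        # hardware/OS and shared storage reachability of the new node
[oracle@rac1 bin]$ ./cluvfy comp peer -refnode rac1 -n rac3 -verbose # compare rac3 against rac1: packages, kernel parameters, user equivalence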
I. Preparation
1. Check the version and status of the current RAC environment
SYS@RACDB1>select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
[root@rac1 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE rac1
ora....B2.inst application ONLINE ONLINE rac2
ora.RACDB.db application ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
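Besides crs_stat, two quick checks from the CRS home confirm the current cluster membership and the Clusterware version before anything is changed; a small sketch:
[root@rac1 bin]# ./olsnodes -n                          # should list rac1 and rac2 with their node numbers
[root@rac1 bin]# ./crsctl query crs softwareversion     # should report 10.2.0.1.0, matching the banner above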
2. Configure node 3
[root@rac3 ~]# hostname
rac3
[root@rac3 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
[root@rac3 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:25:82:62
          inet addr:192.168.90.10  Bcast:192.168.90.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe25:8262/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:60 (60.0 b)  TX bytes:720 (720.0 b)
          Interrupt:67 Base address:0x2024
eth1      Link encap:Ethernet  HWaddr 00:50:56:2D:0F:8D
          inet addr:192.168.90.11  Bcast:192.168.90.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe2d:f8d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:20 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1200 (1.1 KiB)  TX bytes:804 (804.0 b)
          Interrupt:67 Base address:0x20a4
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:723 errors:0 dropped:0 overruns:0 frame:0
          TX packets:723 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1217460 (1.1 MiB)  TX bytes:1217460 (1.1 MiB)
3. Configure the /etc/hosts mapping on all three nodes
Note: this must be done on all three nodes!
[root@rac3 ~]# vi /etc/hosts
# Do not remove the following line, or variousprograms
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.90.2 rac1
192.168.90.5 rac2
192.168.91.3 rac1-priv
192.168.91.6 rac2-priv
192.168.90.3 rac1-vip
192.168.90.4 rac2-vip
#=========add node rac3==========#
192.168.90.10 rac3
192.168.91.11 rac3-priv
192.168.90.7 rac3-vip
Check that the network is reachable:
[root@rac3 ~]# ping rac1
PING rac1 (192.168.90.2) 56(84) bytes of data.
64 bytes from rac1 (192.168.90.2): icmp_seq=1ttl=64 time=0.614 ms
64 bytes from rac1 (192.168.90.2): icmp_seq=2ttl=64 time=0.169 ms
--- rac1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss,time 1000ms
rtt min/avg/max/mdev = 0.169/0.391/0.614/0.223 ms
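Rather than pinging each name by hand, a short loop over the public and private names does the same job (the VIP names are left out because their node applications do not exist yet); a sketch:
[root@rac3 ~]# for h in rac1 rac2 rac3 rac1-priv rac2-priv rac3-priv; do
>   ping -c 2 -W 2 $h > /dev/null && echo "$h reachable" || echo "$h FAILED"
> done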
4. Configure OS-level kernel parameters
Only the parameters to be added are shown here; do not change the existing ones!
[root@rac3 ~]# cat /etc/sysctl.conf
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
Run /sbin/sysctl -p to make the settings take effect.
[root@rac3 ~]# cat /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
[root@rac3 ~]# cat /etc/pam.d/login
session    required     pam_limits.so
[root@rac3 ~]# cat /etc/profile
if [ $USER = oracle ]; then
if [ $SHELL = /bin/ksh ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
5. Check that the required RPM packages are present, and install any that are missing
[root@rac3 ~]# rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common libstdc++ libstdc++-devel make
binutils-2.17.50.0.6-14.el5
compat-db-4.2.52-5.1
control-center-2.16.0-16.el5
gcc-4.1.2-50.el5
gcc-c++-4.1.2-50.el5
glibc-2.5-58
glibc-common-2.5-58
libstdc++-4.1.2-50.el5
libstdc++-devel-4.1.2-50.el5
make-3.81-3.el5
II. Configure the Openfiler storage
Open the Openfiler web console in Firefox from node 3 and configure it.
The detailed configuration is omitted here; the only change is adding a MAPPING for rac3 so that the new node is allowed to access the existing LUNs. (Make sure the iSCSI initiator is ready on rac3 first; see the sketch below.)
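Before the discovery below can succeed, the iSCSI initiator must be installed and running on rac3; a minimal sketch (package and service names as on OEL/RHEL 5, install the package from the yum repository described in section 3.5 if it is missing):
[root@rac3 ~]# rpm -q iscsi-initiator-utils
[root@rac3 ~]# service iscsi start          # starts iscsid and logs in to any recorded targets
[root@rac3 ~]# chkconfig iscsi on           # bring it up automatically after a reboot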
2.1 Discover the storage on node 3 (if nothing is returned, reboot Linux and try again)
[root@rac3 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.90.8 --discover
192.168.90.8:3260,1 rac3
[root@rac3 ~]#
[root@rac3 ~]#
[root@rac3 ~]# iscsiadm --mode node --targetname rac3 --portal 192.168.90.8:3260 --login
Logging in to [iface: default, target: rac3,portal: 192.168.90.8,3260]
Login to [iface: default, target: rac3, portal:192.168.90.8,3260] successful.
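So that the LUN comes back by itself after a reboot, the node record can be switched to automatic startup; a sketch using the same target name and portal as above:
[root@rac3 ~]# iscsiadm --mode node --targetname rac3 --portal 192.168.90.8:3260 --op update -n node.startup -v automatic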
2.2 Configure the raw device rules applied at system startup
[root@rac3 ~]# cat /etc/udev/rules.d/60-raw.rules
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="33", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="49", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="65", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="81", RUN+="/bin/raw /dev/raw/raw5 %M %m"
KERNEL=="raw[1-5]", OWNER="oracle", GROUP="oinstall", MODE="640"
Make the rules take effect:
[root@rac3 ~]# start_udev
Starting udev: [ OK ]
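It is worth confirming that the raw bindings and their ownership now match the existing nodes; a quick check:
[root@rac3 ~]# raw -qa            # should list /dev/raw/raw1 through /dev/raw/raw5
[root@rac3 ~]# ls -l /dev/raw/    # devices should be owned by oracle:oinstall with mode 640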
III. Create the directories, configure environment variables, and set up SSH equivalence
3.1 Create the directories
[root@rac3 ~]# mkdir -p /u01/app/oracle
[root@rac3 ~]# chown oracle:oinstall -R /u01/
[root@rac3 ~]# ll /u01/
total 4
drwxr-xr-x 3 oracle oinstall 4096 Aug 26 12:28 app
3.2 Configure environment variables
[oracle@rac3 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_2
export CRS_HOME=$ORACLE_BASE/product/10.2.0/db_1
export PATH=$PATH:$CRS_HOME/bin:$ORACLE_HOME/bin
[oracle@rac3 ~]$ . .bash_profile
3.3 Configure SSH equivalence
[oracle@rac3 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key(/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
19:66:a4:57:de:5d:21:1c:4a:2b:f1:0b:31:79:ae:fc oracle@rac3
[oracle@rac3 ~]$ cd .ssh/
[oracle@rac3 .ssh]$ ls
id_rsa id_rsa.pub
[oracle@rac3 .ssh]$ scp rac1:/home/oracle/.ssh/authorized_keys .
The authenticity of host 'rac1 (192.168.90.2)' can't be established.
RSA key fingerprint is 94:54:4a:64:74:0f:d4:e8:7f:7a:ba:43:0d:0f:70:30.
Are you sure you want to continue connecting (yes/no)?
yes
Warning: Permanently added 'rac1,192.168.90.2' (RSA) to the list of known hosts.
oracle@rac1's password:
authorized_keys 100% 786 0.8KB/s 00:00
[oracle@rac3 .ssh]$ cat id_rsa.pub >>authorized_keys
[oracle@rac3 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAzSn5D7BcQXTJbxwG9yW9no1nKB8n23ydm8f6g9ROrJ0iYVyK6Rg9iV1jS/NJHws/CInLWWYTqYuzZcTO+Hj2iAg8SzhaIBN9glXZvjbfzyS4Wi+FZUbH6TMDwiNXxKXEuWwUzH0c9yvvRuF/s2BoIve/rZ+go3lb4zsbCM3jLDFHDZmsJ89NLG80bljM8iHt6Y18Z3Il1OPFjgpojGg8jwQTXm2IWwxeRYdB6C9IxGJ2VUciOECqx5JtXEfC+vY0zl0zFtZnejVSiA9MZehS/QuCIcT0eC/+OsMVXAT93dsuVfZvuvfjsejmV0lnH9f001ZeBPl6pKqQWxUCVmcJKw== oracle@rac1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAyApaZkkfr1WEXNJwLuukbmLFZEcXaYmZpCxZ78QwO3MKOyZbDaCpDY0muSqPa5UOuH///aXAXp8ki1JVhACVaq5/45yFKqYVxsOXHoxRRmUnCLnXbsnnK5qAew6pLzNNeNzMeqoCLz4rVM7r+aIDYs6j05H0oEXxAbe4E3pFroTdEl7ULM7kUWgWf4UlQKWNPVJ01YYhYnzbgXr2vpVpAYRMDuH6+PikF4cgl/WvpQBmO4WHkwqZ1m12bINSKfiX384UmhxzJWl4IYVlMYrqYqluVL14F/cUBFnV9avq6TxmraDG1eZwi1nCTZhLpMS1juXcRPGqY0mV3LkUSTwGwQ== oracle@rac2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0zDPoxlo7BJD2kCuKVhqA2vOML4IIWdKmP3ejqiqCxfgTkMUCoa0ml2AEOtB11FdEsRFJ7p3lxzLcYMgsY3fJCNb59KiyRA5FCqiqRBiVYiwnUYG3iVvFsz4brne6fv+mc4AMgvdw6UqrGVcgieY4754Hji/i6qbv4GkmHbGv2LMFgtBn3JXon4prJws8aepHt4hBp1F0nL99pC+xyHkXtqa7DGaFW3Qy7nIF05VdD+ibzgiFOYWqDShXYHKNLql++/LmR3/hC/W71CeED7NdPcu8X1Eg8OvypMfIu1jO6HpU1rhmwV7QG23XFJB2kxSxDbt4txeQLtCKbV8RwWPzw== oracle@rac3
[oracle@rac3 .ssh]$ scp authorized_keys rac1:/home/oracle/.ssh/
oracle@rac1's password:
authorized_keys 100% 1179 1.2KB/s 00:00
[oracle@rac3 .ssh]$ scp authorized_keys rac2:/home/oracle/.ssh/
The authenticity of host 'rac2 (192.168.90.5)' can't be established.
RSA key fingerprint is 0b:7b:ff:55:fb:7b:80:a0:be:13:a0:25:0d:e8:47:2b.
Are you sure you want to continue connecting(yes/no)? yes
Warning: Permanently added 'rac2,192.168.90.5'(RSA) to the list of known hosts.
oracle@rac2's password:
authorized_keys 100% 1179 1.2KB/s 00:00
3.4 Verify SSH equivalence
Write a small shell script:
[oracle@rac2 ~]$ vi ssh.sh
ssh rac1 date
ssh rac1-priv date
ssh rac2 date
ssh rac2-priv date
ssh rac3 date
ssh rac3-priv date
Node 1:
[oracle@rac3 ~]$ sh ssh.sh
Tue Aug 26 12:36:09 CST 2014
Tue Aug 26 12:36:09 CST 2014
Tue Aug 26 12:36:09 CST 2014
Tue Aug 26 12:36:09 CST 2014
Tue Aug 26 12:36:09 CST 2014
Tue Aug 26 12:36:09 CST 2014
Node 2:
[oracle@rac3 ~]$ sh ssh.sh
Tue Aug 26 12:36:40 CST 2014
Tue Aug 26 12:36:40 CST 2014
Tue Aug 26 12:36:40 CST 2014
Tue Aug 26 12:36:40 CST 2014
Tue Aug 26 12:36:40 CST 2014
Tue Aug 26 12:36:40 CST 2014
Node 3:
[oracle@rac3 ~]$ sh ssh.sh
Tue Aug 26 12:36:50 CST 2014
Tue Aug 26 12:36:50 CST 2014
Tue Aug 26 12:36:50 CST 2014
Tue Aug 26 12:36:50 CST 2014
Tue Aug 26 12:36:50 CST 2014
Tue Aug 26 12:36:50 CST 2014
3.5 Install the oracleasm RPM packages
Install them from the yum repository (an FTP repository built on node 1):
[root@rac3 yum.repos.d]# cat oel.repo
[oel5]
name = Enterprise Linux 5.6 DVD
baseurl=ftp://192.168.90.2/pub/Server
gpgcheck=0
[root@rac3 yum.repos.d]# yum -y install oracleasm*
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
ULN support will be disabled.
oel5 | 1.1 kB 00:00
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package oracleasm-2.6.18-238.el5.i6860:2.0.5-1.el5 set to be updated
---> Package oracleasm-2.6.18-238.el5PAE.i6860:2.0.5-1.el5 set to be updated
--> Processing Dependency: kernel-PAE =2.6.18-238.el5 for package: oracleasm-2.6.18-238.el5PAE
---> Package oracleasm-2.6.18-238.el5debug.i6860:2.0.5-1.el5 set to be updated
--> Processing Dependency: kernel-debug =2.6.18-238.el5 for package: oracleasm-2.6.18-238.el5debug
---> Package oracleasm-2.6.18-238.el5xen.i6860:2.0.5-1.el5 set to be updated
--> Processing Dependency: kernel-xen =2.6.18-238.el5 for package: oracleasm-2.6.18-238.el5xen
---> Package oracleasm-support.i3860:2.1.4-1.el5 set to be updated
--> Running transaction check
---> Package kernel-PAE.i686 0:2.6.18-238.el5set to be installed
---> Package kernel-debug.i686 0:2.6.18-238.el5set to be installed
---> Package kernel-xen.i686 0:2.6.18-238.el5set to be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository
Size
================================================================================
Installing:
oracleasm-2.6.18-238.el5 i686 2.0.5-1.el5 oel5 22 k
oracleasm-2.6.18-238.el5PAE i686 2.0.5-1.el5 oel5 22 k
oracleasm-2.6.18-238.el5debug i686 2.0.5-1.el5 oel5 23 k
oracleasm-2.6.18-238.el5xen i686 2.0.5-1.el5 oel5 22 k
oracleasm-support i386 2.1.4-1.el5 oel5 83 k
Installing for dependencies:
kernel-PAE i686 2.6.18-238.el5 oel5 17 M
kernel-debug i686 2.6.18-238.el5 oel5 18 M
kernel-xen i686 2.6.18-238.el5 oel5 18 M
Transaction Summary
================================================================================
Install 8 Package(s)
Upgrade 0 Package(s)
Total download size: 53 M
Downloading Packages:
(1/8):oracleasm-2.6.18-238.el5-2.0.5-1.el5.i686.rpm | 22 kB 00:00
(2/8):oracleasm-2.6.18-238.el5PAE-2.0.5-1.el5.i686.rpm | 22kB 00:00
(3/8):oracleasm-2.6.18-238.el5xen-2.0.5-1.el5.i686.rpm | 22kB 00:00
(4/8):oracleasm-2.6.18-238.el5debug-2.0.5-1.el5.i686.rp | 23 kB 00:00
(5/8): oracleasm-support-2.1.4-1.el5.i386.rpm | 83 kB 00:00
(6/8): kernel-PAE-2.6.18-238.el5.i686.rpm | 17 MB 00:01
(7/8): kernel-debug-2.6.18-238.el5.i686.rpm | 18 MB 00:01
(8/8): kernel-xen-2.6.18-238.el5.i686.rpm | 18 MB 00:01
--------------------------------------------------------------------------------
Total 15MB/s | 53 MB 00:03
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing :oracleasm-support 1/8
Installing : kernel-PAE 2/8
Installing : kernel-xen 3/8
Installing : kernel-debug 4/8
Installing :oracleasm-2.6.18-238.el5PAE 5/8
Installing : oracleasm-2.6.18-238.el5xen 6/8
Installing :oracleasm-2.6.18-238.el5debug 7/8
Installing :oracleasm-2.6.18-238.el5 8/8
Installed:
oracleasm-2.6.18-238.el5.i686 0:2.0.5-1.el5
oracleasm-2.6.18-238.el5PAE.i686 0:2.0.5-1.el5
oracleasm-2.6.18-238.el5debug.i686 0:2.0.5-1.el5
oracleasm-2.6.18-238.el5xen.i686 0:2.0.5-1.el5
oracleasm-support.i386 0:2.1.4-1.el5
Dependency Installed:
kernel-PAE.i686 0:2.6.18-238.el5 kernel-debug.i686 0:2.6.18-238.el5
kernel-xen.i686 0:2.6.18-238.el5
Complete!
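The RPMs only install the ASMLib driver and tools. If your ASM diskgroups use ASMLib-labeled disks (this walkthrough also binds raw devices, so adapt this to whatever your diskgroups actually sit on), the new node still has to be configured and has to scan for the labels created from the existing nodes; a hedged sketch:
[root@rac3 ~]# /etc/init.d/oracleasm configure     # answer oracle / oinstall, load on boot = y, scan on boot = y
[root@rac3 ~]# /etc/init.d/oracleasm scandisks     # pick up disks already labeled from rac1/rac2
[root@rac3 ~]# /etc/init.d/oracleasm listdisks     # should show the same labels as on the existing nodes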
IV. Add the node to the cluster ($ORA_CRS_HOME/oui/bin)
4.1 Check the status of the original cluster again
[root@rac1 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE rac1
ora....B2.inst application ONLINE ONLINE rac2
ora.RACDB.db application ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
4.2 Run addNode.sh from an existing node
[root@rac1 bin]# xhost +
access control disabled, clients can connect from any host
[root@rac1 bin]# su - oracle
[oracle@rac1 ~]$ cd /u01/app/oracle/product/10.2.0/db_1/oui/bin/
[oracle@rac1 bin]$ ./addNode.sh
The OUI shows the Welcome screen, then the screen where you enter the information for the node being added, then a Summary; start the configuration from there.
When the installer prompts for the root scripts, run them on the corresponding nodes in the order shown. Do not run them over ssh from another node; log in to each node and run them locally!
Node 1:
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/db_1/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: rac3 rac3-priv rac3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/app/oracle/product/10.2.0/db_1/bin/srvctl add nodeapps -n rac3 -A rac3-vip/255.255.255.0/eth0 -o /u01/app/oracle/product/10.2.0/db_1
Node 3:
[root@rac3 ~]# /u01/app/oracle/product/10.2.0/db_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0'is not owned by root
WARNING: directory '/u01/app/oracle/product' is notowned by root
WARNING: directory '/u01/app/oracle' is not ownedby root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS = /dev/raw/raw1
OCR backup directory '/u01/app/oracle/product/10.2.0/db_1/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0'is not owned by root
WARNING: directory '/u01/app/oracle/product' is notowned by root
WARNING: directory '/u01/app/oracle' is not ownedby root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
rac3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0/db_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
More detailed information can be found in cssd.log on the master node.
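This libpthread error from the silent vipca run is the well-known LD_ASSUME_KERNEL problem of 10.2.0.1 on newer Linux kernels. A commonly used workaround (check the relevant My Oracle Support note for your exact platform before applying it) is to disable the variable in the vipca and srvctl wrappers under the CRS home and then run vipca manually as root:
# in /u01/app/oracle/product/10.2.0/db_1/bin/vipca and .../bin/srvctl,
# immediately after the lines that set LD_ASSUME_KERNEL, add:
unset LD_ASSUME_KERNEL
# then finish the nodeapps configuration by hand:
[root@rac3 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/vipca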
V. Install the database software for the new node ($ORACLE_HOME/oui/bin)
[oracle@rac1 ~]$ cd /u01/app/oracle/product/10.2.0/db_2/oui/bin/
[oracle@rac1 bin]$ ls
addLangs.sh addNode.sh lsnodes ouica.sh resource runConfig.sh runInstaller runInstaller.sh
[oracle@rac1 bin]$ ./addNode.sh
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
A similar Summary screen follows; start the installation. When prompted, run root.sh on the new node.
Node 3:
[root@rac3 ~]# /u01/app/oracle/product/10.2.0/db_2/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/10.2.0/db_2
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it?
(y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
At this point the nodeapps have been added successfully. Now add the listener and the instance.
[root@rac1 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE rac1
ora....B2.inst application ONLINE ONLINE rac2
ora.RACDB.db application ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
ora.rac3.gsd application ONLINE ONLINE rac3
ora.rac3.ons application ONLINE ONLINE rac3
ora.rac3.vip application ONLINE ONLINE rac3
Configure the listener (if you run into problems here, simply restart the new node):
[oracle@rac3 bin]$ lsnrctl start
Node 2:
[oracle@rac2 bin]$ netca
Oracle Net Services Configuration:
Choose Reconfigure.
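After netca, the node applications on rac3, including the new listener, can be checked with srvctl; a quick sketch:
[oracle@rac3 ~]$ srvctl status nodeapps -n rac3    # VIP, GSD, ONS and listener should all be running
[oracle@rac3 ~]$ lsnrctl status                    # the new instance will register once DBCA has created it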
VI. Add the new node's instance to the cluster ($ORACLE_HOME/bin/dbca)
Run dbca from one of the existing nodes:
[oracle@rac1 ~]$ dbca
When prompted, add ASM on node RAC3.
When asked whether you want to perform another operation, answer No.
Check the node configuration:
[root@rac3 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE rac1
ora....B2.inst application ONLINE ONLINE rac2
ora....B3.inst application ONLINE ONLINE rac3
ora.RACDB.db application ONLINE ONLINE rac2
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
ora....SM3.asm application ONLINE ONLINE rac3
ora....C3.lsnr application ONLINE ONLINE rac3
ora.rac3.gsd application ONLINE ONLINE rac3
ora.rac3.ons application ONLINE ONLINE rac3
ora.rac3.vip application ONLINE ONLINE rac3
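As a final check beyond crs_stat, the database and its three instances can be confirmed with srvctl and gv$instance (RACDB is the database name used throughout this article); a short sketch:
[oracle@rac3 ~]$ srvctl status database -d RACDB
SYS@RACDB1>select inst_id, instance_name, host_name, status from gv$instance;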
Node addition is complete!
26-08-2014
tyger