Repost: AIX 7.2 + 11.2.0.4 RAC Implementation

 

 

 

References

https://blog.csdn.net/alangmei/article/details/18310381

https://blog.csdn.net/smasegain/article/details/47049955

 

Document conventions: commands prefixed with # are run as root; commands prefixed with $ are run as oracle or grid (each preceded by the appropriate su).
==================================================
Part 1. Base Environment Preparation (required on both nodes)
==================================================
--------------------------------------------------
1.1. Operating System Checks (session log: <nodename>_os_check.log)
--------------------------------------------------
1) OS version and kernel mode
====================
# bootinfo -K
# uname -s
# oslevel -s

====================
2) System fileset checks
====================
a) Required filesets
--------------------
# lslpp -l bos.adt.base bos.adt.lib  bos.adt.libm bos.perf.libperfstat \
bos.perf.perfstat bos.perf.proctools xlC.rte
Note: xlC.rte must be 11.1.0.2 or later.

--------------------
b) Java, C++, X Windows, SSH
--------------------
# lslpp -l | grep -i ssh

# lslpp -l | grep -i java
Note: 64-bit Java 6 (java6_64) is recommended.

# lslpp -l | grep -i C++
Note: a C/C++ compiler of version 9.0 or later is recommended.

# lslpp -l | grep -i x11 | grep -i dt
Note: the following X11 filesets are required:
  X11.Dt.ToolTalk
  X11.Dt.bitmaps
  X11.Dt.helpmin
  X11.Dt.helprun
  X11.Dt.lib
  X11.Dt.rte
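The per-fileset checks above can be scripted. This is a minimal sketch, assuming the required-fileset list from this section; the helper name `missing_filesets` is ours, not an AIX command:

```shell
# missing_filesets: read installed fileset names (one per line) on stdin and
# print any required fileset that is absent. List comes from this section.
missing_filesets() {
  required="bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
bos.perf.perfstat bos.perf.proctools xlC.rte"
  installed=$(cat)
  for f in $required; do
    printf '%s\n' "$installed" | grep -Fxq "$f" || echo "MISSING: $f"
  done
}

# On AIX you would feed it the real inventory, e.g.:
#   lslpp -Lc | awk -F: '{print $2}' | missing_filesets
```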

====================
3) System fix (APAR/PTF) checks
====================
--------------------
a) APARs
--------------------
IZ87216
IZ87564
IZ89165
IZ97035

# instfix -i -k "IZ87216 IZ87564 IZ89165 IZ97035"
Note: to install an interim fix, use a command of this form:
# emgr -e IZ89302.101121.epkg.Z 

--------------------
b).PTFs
--------------------
none

====================
4) Kernel parameter checks
====================
a).ncargs>=256
--------------------
# lsattr -El sys0 -a ncargs
ncargs 256 ARG/ENV list size in 4K byte blocks True
Note: to change it:
# chdev -l sys0 -a ncargs='256'

--------------------
b).maxuproc>=16384
--------------------
# lsattr -E -l sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
Note: to change it:
# chdev -l sys0 -a maxuproc=16384

--------------------
c).aio_maxreqs>=65536
--------------------
# ioo -o aio_maxreqs
aio_maxreqs = 131072
Note: to change it:
# ioo -p -o aio_maxreqs=65536

====================
5) Check user resource limits
====================
Confirm that /etc/security/limits contains:
fsize = -1
db = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1

# more /etc/security/limits
Note: to change it:
# vi /etc/security/limits
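Rather than eyeballing the file, the stanza can be checked mechanically. A minimal sketch, assuming the `name = value` stanza format shown above; the helper name `check_limits` is ours:

```shell
# check_limits: read /etc/security/limits-style text on stdin and report any
# attribute from the list above whose value is not -1 (unlimited).
# Checks only the first occurrence of each attribute.
check_limits() {
  stanza=$(cat)
  for attr in fsize db cpu data rss stack nofiles; do
    val=$(printf '%s\n' "$stanza" | awk -v a="$attr" '$1==a && $2=="=" {print $3; exit}')
    [ "$val" = "-1" ] || echo "$attr = ${val:-unset} (want -1)"
  done
}

# Usage on AIX: check_limits < /etc/security/limits
```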

====================
6) Network parameters and ports
====================
--------------------
a) Network tunables
--------------------
Network preparation
=======================================
PARAMETER          RECOMMENDED VALUE
ipqmaxlen          512
rfc1323            1
sb_max             41943040
tcp_recvspace      1048576
tcp_sendspace      1048576
udp_recvspace      20971520
udp_sendspace      2097152

Notes:
udp_recvspace: should be 10x udp_sendspace, but must remain below sb_max.
udp_sendspace: should be at least 4 KB + (db_block_size * db_multiblock_read_count).
--
View all tunables:
# no -a | more
View individually:
# no -a | fgrep ipqmaxlen
# no -a | fgrep rfc1323
# no -a | fgrep sb_max
# no -a | fgrep tcp_recvspace
# no -a | fgrep tcp_sendspace
# no -a | fgrep udp_recvspace
# no -a | fgrep udp_sendspace

If any value falls short of the recommendation, change it:
no -r -o ipqmaxlen=512            
no -p -o rfc1323=1                  
no -p -o sb_max=41943040
no -p -o tcp_recvspace=1048576
no -p -o tcp_sendspace=1048576
no -p -o udp_recvspace=20971520
no -p -o udp_sendspace=2097152

Alternatively, add the following to /etc/rc.net:
if [ -f /usr/sbin/no ] ; then
/usr/sbin/no -o udp_sendspace=2097152
/usr/sbin/no -o udp_recvspace=20971520
/usr/sbin/no -o tcp_sendspace=1048576
/usr/sbin/no -o tcp_recvspace=1048576
/usr/sbin/no -o rfc1323=1
/usr/sbin/no -o sb_max=41943040
/usr/sbin/no -o ipqmaxlen=512
fi
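Comparing the live values against the table can be automated. A sketch, assuming `no -a` prints `name = value` lines as shown above; the helper name `check_no_tunables` and the recommended values come from this section:

```shell
# check_no_tunables: compare `no -a` output (stdin) against the recommended
# values from the table above; prints one line per mismatch.
check_no_tunables() {
  cur=$(cat)
  set -- ipqmaxlen 512 rfc1323 1 sb_max 41943040 \
         tcp_recvspace 1048576 tcp_sendspace 1048576 \
         udp_recvspace 20971520 udp_sendspace 2097152
  while [ $# -gt 0 ]; do
    name=$1 want=$2; shift 2
    got=$(printf '%s\n' "$cur" | awk -v n="$name" '$1==n {print $3; exit}')
    [ "$got" = "$want" ] || echo "$name: got ${got:-unset}, want $want"
  done
}

# Usage on AIX: no -a | check_no_tunables
```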

--------------------
b) Ephemeral port range
--------------------
# no -a | fgrep ephemeral
       tcp_ephemeral_high = 65500
        tcp_ephemeral_low = 9000
       udp_ephemeral_high = 65500
        udp_ephemeral_low = 9000
To adjust:
# no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
# no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500


====================
7) Virtual memory tuning
====================
Check:
# vmo -L minperm%
# vmo -L maxperm%
# vmo -L maxclient%
# vmo -L lru_file_repage     // no longer a tunable on recent AIX releases
# vmo -L strict_maxclient
# vmo -L strict_maxperm

Adjust:
# vmo -p -o minperm%=3
# vmo -p -o maxperm%=90
# vmo -p -o maxclient%=90
# vmo -p -o lru_file_repage=0
# vmo -p -o strict_maxclient=1
# vmo -p -o strict_maxperm=0

====================
8) Memory and paging space
====================
--------------------
a) Check physical memory (at least 2.5 GB):
--------------------
# lsattr -E -l sys0 -a realmem

--------------------
b) Check paging space:
--------------------
# lsps -a
Note: if physical memory is under 16 GB, size paging space equal to memory; at 16 GB or more, use 16 GB.
# chps -s 10 hd6    (check the PP SIZE with `lsvg rootvg`; this adds 10 PPs)
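The sizing rule in the note is simple arithmetic; here it is as a sketch. The helper name `paging_mb` is ours, and the `realmem`-in-KB assumption should be verified against `lsattr` output on the target system:

```shell
# paging_mb: recommended paging-space size in MB for a given amount of RAM
# in MB, per the note above (equal to RAM below 16 GB, capped at 16 GB).
paging_mb() {
  if [ "$1" -lt 16384 ]; then echo "$1"; else echo 16384; fi
}

# Usage on AIX (lsattr reports realmem in KB):
#   mem_kb=$(lsattr -El sys0 -a realmem | awk '{print $2}')
#   paging_mb $((mem_kb / 1024))
```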

====================
9) File system space check
====================
# df -g
At least 1 GB free in the temporary file system;
at least 50 GB free in the software installation file system.

====================
10) Reboot
====================
If any of the parameters above were changed during the checks, reboot before continuing:
# shutdown -Fr
--------------------------------------------------
1.2. Configure IP Address Resolution (session log: <node_name>_pre-install.log)
--------------------------------------------------
# vi /etc/hosts
Keep the loopback entry and add:
#public ip
129.1.1.124   p740a
129.1.1.125   p740b

#private ip
1.1.1.9       p740a-priv
1.1.1.10      p740b-priv

#vip
129.1.1.224   p740a-vip
129.1.1.225   p740b-vip

#scanip
129.1.1.226    cluster-scanip
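A quick sanity check helps catch copy-paste mistakes in the hosts entries. A sketch that flags any IP or first hostname appearing twice; the helper name `hosts_dups` is ours:

```shell
# hosts_dups: print any IP or hostname that appears more than once in a
# hosts-format file read on stdin (comment lines ignored). No output = OK.
hosts_dups() {
  grep -v '^#' | awk 'NF >= 2 {print $1; print $2}' | sort | uniq -d
}

# Usage: hosts_dups < /etc/hosts
```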

--------------------------------------------------
1.3. Time Synchronization
--------------------------------------------------
1) Confirm the time zone and NTP status
====================
# echo $TZ                confirm the time zone matches the original production system
# lssrc -s xntpd          check the NTP service status
# stopsrc -s xntpd        stop the NTP service

====================
2) Option: use the CTSS daemon
====================
## To use ctssd for time synchronization:
# mv /etc/ntp.conf /etc/ntp.conf.bak          rename the NTP config so CTSS is not installed in observer mode
After Grid Infrastructure is installed, verify as grid that the time sync service is active:
# su - grid
$ crsctl stat resource ora.ctssd -t -init

====================
3) Option: use NTP
====================
To keep NTP from stepping the clock backward, edit the following:
# vi /etc/rc.tcpip
start  /usr/sbin/xntpd  "$src_running"  "-x"
## Start the xntpd service:
# startsrc -s xntpd -a "-x"

--------------------------------------------------
1.4. Create OS Groups and Users
--------------------------------------------------
====================
1) Existence check
====================
--------------------
a) Check
--------------------
# id oracle
# id grid
# more /etc/passwd
# more /etc/group
// If the users already exist, verify these settings. Preferably delete and recreate the users and groups to guarantee correctness.

--------------------
b) Option: delete the users
--------------------
# rmuser -p oracle
# rmuser -p grid
# rm -rf /home/oracle
# rm -rf /home/grid
Note: then skip step c) and proceed to user creation.

--------------------
c) Option: keep the users
--------------------
# lsuser -a capabilities grid
# lsuser -a capabilities oracle
# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
# grep oinstall /etc/group
# more /etc/oraInst.loc
Note: check whether the users and groups exist and whether any Oracle product was previously installed; if these checks pass, skip the user-creation step.

====================
2) Create groups and users
====================
a) Create groups
--------------------
# mkgroup -'A' id='501' adms='root' oinstall
# mkgroup -'A' id='502' adms='root' asmadmin
# mkgroup -'A' id='503' adms='root' asmdba
# mkgroup -'A' id='504' adms='root' asmoper
# mkgroup -'A' id='505' adms='root' dba
# mkgroup -'A' id='506' adms='root' oper

--------------------
b) Create users
--------------------
# mkuser id='501' pgrp='oinstall' groups='dba,asmadmin,asmdba,asmoper' home='/home/grid' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
# mkuser id='502' pgrp='oinstall' groups='dba,asmdba,oper' home='/home/oracle' fsize=-1 cpu=-1 data=-1 rss=-1 stack=-1 stack_hard=-1 capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

--------------------
c) Verify users
--------------------
# id oracle
# id grid
# lsuser -a capabilities grid
# lsuser -a capabilities oracle

--------------------
d) Set passwords
--------------------
# passwd grid
# passwd oracle
# su - grid
# su - oracle
Note: it is recommended to log in once via the graphical interface (the first login may force a password change).

--------------------------------------------------
1.5. Create Installation Directories
--------------------------------------------------
Running the following as root in one pass covers steps 1)-5) below:
mkdir -p /orastg/app/oraInventory
chown -R grid:oinstall /orastg/app/oraInventory
chmod -R 775 /orastg/app/oraInventory
mkdir -p /orastg/app/grid
chown grid:oinstall /orastg/app/grid
chmod -R 775 /orastg/app/grid
mkdir -p /orastg/app/11.2.0/grid
chown -R grid:oinstall /orastg/app/11.2.0/grid
chmod -R 775 /orastg/app/11.2.0/grid
mkdir -p /orastg/app/oracle
mkdir /orastg/app/oracle/cfgtoollogs
chown -R oracle:oinstall /orastg/app/oracle
chmod -R 775 /orastg/app/oracle
mkdir -p /orastg/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /orastg/app/oracle/product/11.2.0/db_1
chmod -R 775 /orastg/app/oracle/product/11.2.0/db_1
mkdir -p /home/oracle/upgrdtools
chown -R oracle:oinstall /home/oracle/upgrdtools          // directory for upgrade-related files
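The repetitive mkdir/chown/chmod sequence above can be expressed as one loop. A sketch; the helper name `make_oracle_dirs` is ours, the chown lines are printed (not executed) so the sketch can be dry-run unprivileged, and on the real host the real commands must run as root:

```shell
# make_oracle_dirs: create the directory tree from this section under a given
# base, set mode 775, and print the chown commands to run as root.
make_oracle_dirs() {
  base=$1
  set -- oraInventory grid:oinstall \
         grid grid:oinstall \
         11.2.0/grid grid:oinstall \
         oracle oracle:oinstall \
         oracle/cfgtoollogs oracle:oinstall \
         oracle/product/11.2.0/db_1 oracle:oinstall
  while [ $# -gt 0 ]; do
    dir=$base/$1 owner=$2; shift 2
    mkdir -p "$dir" && chmod 775 "$dir"
    echo "chown -R $owner $dir"
  done
}

# Usage on the real host: make_oracle_dirs /orastg/app
```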

====================
1) oraInventory directory:
====================
# mkdir -p /orastg/app/oraInventory
# chown -R grid:oinstall /orastg/app/oraInventory
# chmod -R 775 /orastg/app/oraInventory

====================
2) GI_BASE directory
====================
# mkdir -p /orastg/app/grid
# chown grid:oinstall /orastg/app/grid
# chmod -R 775 /orastg/app/grid

====================
3) GI_HOME directory
====================
# mkdir -p /orastg/app/11.2.0/grid
# chown -R grid:oinstall /orastg/app/11.2.0/grid
# chmod -R 775 /orastg/app/11.2.0/grid

====================
4) ORACLE_BASE directory
====================
# mkdir -p /orastg/app/oracle
# mkdir /orastg/app/oracle/cfgtoollogs
# chown -R oracle:oinstall /orastg/app/oracle
# chmod -R 775 /orastg/app/oracle

====================
5) ORACLE_HOME directory
====================
# mkdir -p /orastg/app/oracle/product/11.2.0/db_1
# chown -R oracle:oinstall /orastg/app/oracle/product/11.2.0/db_1
# chmod -R 775 /orastg/app/oracle/product/11.2.0/db_1

--------------------------------------------------
1.6. grid and oracle User Environment Variables
--------------------------------------------------
1) grid user
====================
# su - grid
$ vi /home/grid/.profile
-- add the following:
export ORACLE_BASE=/orastg/app/grid
export ORACLE_HOME=/orastg/app/11.2.0/grid
export JAVA_HOME=/usr/java6_64
#export ORACLE_SID=+ASM2
export AIXTHREAD_SCOPE=S
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$JAVA_HOME/bin:$PATH
umask 022

====================
2) oracle user
====================
# su - oracle
$ vi /home/oracle/.profile
-- add the following:
export ORACLE_BASE=/orastg/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export JAVA_HOME=/usr/java6_64
export PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$JAVA_HOME/bin:$PATH
umask 022
#export ORACLE_SID=addr11g1
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export AIXTHREAD_SCOPE=S
export EDITOR=vi

--------------------------------------------------
1.7. SSH User Equivalence for grid and oracle
--------------------------------------------------
1) Configure equivalence (as root, on one node)
====================
# cd /orastg/software/grid/sshsetup
# ./sshUserSetup.sh -user grid -hosts "p740a p740b" -advanced -noPromptPassphrase
# ./sshUserSetup.sh -user oracle -hosts "p740a p740b" -advanced -noPromptPassphrase

====================
2) Verify equivalence (as oracle and as grid, on both nodes)
====================
$ ssh p740a date
$ ssh p740b date
$ ssh p740a-priv date
$ ssh p740b-priv date
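The four checks above can be looped; adding BatchMode makes a missing or misconfigured key fail immediately instead of prompting for a password. A sketch; the helper name `equiv_check` is ours, the hostnames are this document's:

```shell
# equiv_check: run the non-interactive date test against each host given;
# any host that still prompts (or is unreachable) is reported as FAILED.
equiv_check() {
  for h in "$@"; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" date || echo "FAILED: $h"
  done
}

# Run as grid and again as oracle, on both nodes:
#   equiv_check p740a p740b p740a-priv p740b-priv
```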

--------------------------------------------------
1.8. ASM Disk Preparation
--------------------------------------------------
1) Verify disk mappings are consistent across nodes
====================
# sanlun lun show -p

Note: the PVIDs and device names must match on both nodes; if they do not, create matching devices with mknod.
Syntax: mknod Name {b|c} Major Minor
Example: mknod datavg b 44 0 creates a block device named datavg with major/minor numbers 44,0.
mknod orahdisk c 16 27
ls -l orahdisk
chown oracle:oinstall orahdisk
chmod 660 orahdisk

====================
2) Check the disks' reserve_policy attribute
====================
a) Check (must be no_reserve)
--------------------
# echo "hdisk2:  `lsattr -E -l hdisk2  |grep reserve_ `" >/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk3:  `lsattr -E -l hdisk3  |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk4:  `lsattr -E -l hdisk4  |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk5:  `lsattr -E -l hdisk5  |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk6:  `lsattr -E -l hdisk6  |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk7:  `lsattr -E -l hdisk7  |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk8:  `lsattr -E -l hdisk8  |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk9:  `lsattr -E -l hdisk9  |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk10: `lsattr -E -l hdisk10 |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log
# echo "hdisk11: `lsattr -E -l hdisk11 |grep reserve_ `" >>/home/oracle/upgrdtools/reserve_policy.log

# cat /home/oracle/upgrdtools/reserve_policy.log

--------------------
b) Change
--------------------
# chdev -l hdisk2  -a reserve_policy=no_reserve
# chdev -l hdisk3  -a reserve_policy=no_reserve
# chdev -l hdisk4  -a reserve_policy=no_reserve
# chdev -l hdisk5  -a reserve_policy=no_reserve
# chdev -l hdisk6  -a reserve_policy=no_reserve
# chdev -l hdisk7  -a reserve_policy=no_reserve
# chdev -l hdisk8  -a reserve_policy=no_reserve
# chdev -l hdisk9  -a reserve_policy=no_reserve
# chdev -l hdisk10 -a reserve_policy=no_reserve
# chdev -l hdisk11 -a reserve_policy=no_reserve

====================
3) PVID handling
====================
a) Check
--------------------
# lspv

--------------------
b) Clear
--------------------
# chdev -l hdisk2  -a pv=clear
# chdev -l hdisk3  -a pv=clear
# chdev -l hdisk4  -a pv=clear
# chdev -l hdisk5  -a pv=clear
# chdev -l hdisk6  -a pv=clear
# chdev -l hdisk7  -a pv=clear
# chdev -l hdisk8  -a pv=clear
# chdev -l hdisk9  -a pv=clear
# chdev -l hdisk10 -a pv=clear
# chdev -l hdisk11 -a pv=clear
# lspv

====================
4) Zero the disk headers
====================
# dd if=/dev/zero of=/dev/rhdisk2  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk3  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk4  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk5  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk6  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk7  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk8  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk9  bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk10 bs=1024K count=1
# dd if=/dev/zero of=/dev/rhdisk11 bs=1024K count=1

====================
5) Set disk permissions
====================
a) Set the mode
--------------------
# chmod 660 /dev/rhdisk2
# chmod 660 /dev/rhdisk3
# chmod 660 /dev/rhdisk4
# chmod 660 /dev/rhdisk5
# chmod 660 /dev/rhdisk6
# chmod 660 /dev/rhdisk7
# chmod 660 /dev/rhdisk8
# chmod 660 /dev/rhdisk9
# chmod 660 /dev/rhdisk10
# chmod 660 /dev/rhdisk11

--------------------
b) Set the owner
--------------------
# chown grid:asmadmin /dev/rhdisk2
# chown grid:asmadmin /dev/rhdisk3
# chown grid:asmadmin /dev/rhdisk4
# chown grid:asmadmin /dev/rhdisk5
# chown grid:asmadmin /dev/rhdisk6
# chown grid:asmadmin /dev/rhdisk7
# chown grid:asmadmin /dev/rhdisk8
# chown grid:asmadmin /dev/rhdisk9
# chown grid:asmadmin /dev/rhdisk10
# chown grid:asmadmin /dev/rhdisk11 

# ls -ltr /dev/ |grep rhdisk
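Steps 2) through 5) repeat the same five commands for hdisk2..hdisk11; a generator loop keeps the list consistent. A sketch; the helper name `gen_disk_cmds` is ours. Review the printed commands, then pipe them to sh as root on each node:

```shell
# gen_disk_cmds: emit the five ASM disk preparation commands from the steps
# above for each hdisk number given.
gen_disk_cmds() {
  for i in "$@"; do
    echo "chdev -l hdisk$i -a reserve_policy=no_reserve"
    echo "chdev -l hdisk$i -a pv=clear"
    echo "dd if=/dev/zero of=/dev/rhdisk$i bs=1024k count=1"
    echo "chmod 660 /dev/rhdisk$i"
    echo "chown grid:asmadmin /dev/rhdisk$i"
  done
}

# Example: gen_disk_cmds 2 3 4 5 6 7 8 9 10 11
```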

==================================================
Part 2. GI/RDBMS Installation and Configuration
==================================================
2.1. Installation Preparation
--------------------------------------------------
1) CVU pre-install check (session log: <nodename>_cuvout.log)
====================
$ cd /orastg/software/
$ unzip p13390677_112040_AIX64-5L_3of7.zip 1>/dev/null 2>gi.err
$ cd /orastg/software/grid
$ ./runcluvfy.sh stage -pre crsinst -n p740a,p740b -fixup -verbose

====================
2) Clean up sockets
====================
# rm -rf /tmp/.oracle
# rm -rf /var/tmp/.oracle
# rm -rf /tmp/OraInst*
# rm -rf /opt/ORCL*

--------------------------------------------------
2.2. GI Installation Steps (session log: <nodename>_gi_install.log)
--------------------------------------------------
====================
1) Start the installer
====================
# su - grid
$ export DISPLAY=10.1.25.30:0.0    <IP address of the host displaying the GUI>
$ cd /orastg/software/grid
$ ./runInstaller
## When prompted, run rootpre.sh as root on BOTH nodes:
# cd /orastg/software/grid/
# ./rootpre.sh

====================
2) GUI steps
====================
-Skip software updates
-Install and Configure Oracle Grid Infrastructure for a Cluster
-Advanced Installation
-Add Simplified Chinese, then Next
-Disable GNS, then set the cluster name and SCAN parameters
-Add the other node (tests SSH connectivity)
-Configure the network interfaces
-Choose Oracle ASM to store the OCR
-Select disks and create the disk group
-Set the ASM passwords
-Confirm the OS groups
-Confirm ORACLE_BASE and ORACLE_HOME
-Confirm oraInventory
-Wait for the CVU checks
-Resolve any failed checks
-Confirm the summary
-Wait until the installer prompts for the scripts
-As root, run the scripts in the given order on each node in turn (session log: <nodename>_root_script_out.log)
-Wait for the installation to complete

====================
3) Post-install checks
====================
a) Check cluster status
--------------------
$ crsctl status resource -t -init
$ crsctl check crs
--------------------
b) Check the CTSS service
--------------------
$ crsctl check ctss
Note: when NTP is not used, make sure ctssd is in active mode, not observer mode.

--------------------------------------------------
2.3. Install the RDBMS (as oracle) (session log: <nodename>_rdbms_install.log)
--------------------------------------------------
1) Start the installer
====================
# su - oracle
$ cd /orastg/software
$ unzip p13390677_112040_AIX64-5L_1of7.zip 1>/dev/null 2>db1.err
$ unzip p13390677_112040_AIX64-5L_2of7.zip 1>/dev/null 2>db2.err
$ cd /orastg/software/database
$ export DISPLAY=10.1.25.30:0.0
$ ./runInstaller

When prompted, run rootpre.sh as root on all (both) nodes:
# cd /orastg/software/database
# ./rootpre.sh

====================
2) GUI steps
====================
-Decline security updates
-Skip software updates
-Install database software only
-Oracle RAC database installation (select both nodes)
-Add Simplified Chinese, then Next
-Select Enterprise Edition
-Confirm ORACLE_BASE and ORACLE_HOME
-Confirm the OS groups
-Wait for the CVU checks
-Resolve any failed checks
-Confirm the summary
-Wait for the installation
-As root, run the root script on each node in turn (session log: <nodename>_root_script_out.log)
--------------------------------------------------
2.4. Configure ASM (as grid)
--------------------------------------------------
$ export DISPLAY=10.1.25.30:0.0
$ asmca
1) Create disk groups

====================
2) Adjust ASM parameters
====================
# su - grid
$ sqlplus / as sysasm
SQL> alter system set memory_max_target=4g sid='*' scope=spfile;
SQL> alter system set memory_target=4g sid='*' scope=spfile;
SQL> alter system set processes=500 sid='*' scope=spfile;

==================================================
Part 3. GI/RDBMS Patch Installation (all steps below are required on both nodes)
==================================================
3.1. Patch Preparation (session log: <nodename>_pre-patch.log)
--------------------------------------------------
1) Stop the cluster
====================
# /orastg/app/11.2.0/grid/bin/crsctl stop crs -f
====================
2) Prepare the media
====================
# su - oracle
$ cd /orastg/software/
$ unzip p6880880_112000_AIX64-5L-OPatch_patch.zip 1>/dev/null 2>opatch.err
$ unzip p17478514_112040_AIX64-5L-PSU112041.zip 1>/dev/null 2>psu.err
$ unzip p18180390_112041_AIX64-5L.zip 1>/dev/null 2>oneoff.err

Note: confirm that none of the .err output files contain errors.
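Checking the .err files can be done in one pass. A sketch; the helper name `check_err_logs` is ours:

```shell
# check_err_logs: report any non-empty .err file produced by the unzip steps
# above; returns non-zero if any file has content.
check_err_logs() {
  rc=0
  for f in "$@"; do
    if [ -s "$f" ]; then
      echo "NOT EMPTY: $f"
      rc=1
    fi
  done
  return $rc
}

# Usage: check_err_logs /orastg/software/*.err
```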

--------------------------------------------------
3.2. Install OPatch (session log: <nodename>_opatch-patch.log)
--------------------------------------------------
1) Install OPatch (as root on both nodes)
====================
a) Back up the existing OPatch and verify
--------------------
# mv /orastg/app/11.2.0/grid/OPatch /orastg/app/11.2.0/grid/OPatch_old
# mv /orastg/app/oracle/product/11.2.0/db_1/OPatch /orastg/app/oracle/product/11.2.0/db_1/OPatch_old
# ls -ltr /orastg/app/11.2.0/grid/ |grep OPatch
# ls -ltr /orastg/app/oracle/product/11.2.0/db_1/ |grep OPatch

--------------------
b) Install the new OPatch
--------------------
# cp -r OPatch /orastg/app/11.2.0/grid/
# cp -r OPatch /orastg/app/oracle/product/11.2.0/db_1/
# chown -R oracle:oinstall /orastg/app/oracle/product/11.2.0/db_1/OPatch
# chown -R grid:oinstall /orastg/app/11.2.0/grid/OPatch

====================
2) Verify the OPatch version
====================
Verify the OPatch version as both grid and oracle:
$ opatch lsinventory

--------------------------------------------------
3.3. Install the PSU
--------------------------------------------------
1) Clean shared libraries
====================
a) Clean
--------------------
# slibclean

--------------------
b) Confirm no Oracle libraries are still loaded
--------------------
# genkld |grep oracle
Note: this must return no output.

====================
2) Install the GI PSU (session log: <nodename>_gi-psu-patch.log)
====================
a) Unlock GI_HOME
--------------------
# /orastg/app/11.2.0/grid/crs/install/rootcrs.pl -unlock

--------------------
b) Conflict pre-check
--------------------
# su - grid
$ opatch prereq CheckConflictAgainstOHWithDetail -ph /orastg/software/17478514

--------------------
c) Apply the patch
--------------------
$ opatch apply -oh /orastg/app/11.2.0/grid -local /orastg/software/17478514

--------------------
d) Finish
--------------------
# /orastg/app/11.2.0/grid/crs/install/rootcrs.pl -patch
# su - grid
$ opatch lsinventory

====================
3) Install the RDBMS PSU (session log: <nodename>_rdbms-psu-patch.log)
====================
a) Stop the cluster
--------------------
# /orastg/app/11.2.0/grid/bin/crsctl stop crs

--------------------
b) Conflict pre-check
--------------------
# su - oracle
$ opatch prereq CheckConflictAgainstOHWithDetail -ph /orastg/software/17478514

-------------------
c) Apply the patch
-------------------
$ opatch apply -oh /orastg/app/oracle/product/11.2.0/db_1 -local /orastg/software/17478514
-- Check the logs:
cat <file_name> |grep -i error
cat <file_name> |grep -i warning|grep -iv error|grep -v 773|grep -v 224|grep -v 345|grep -v 783|grep -v 415
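The two greps above can be wrapped in one helper. A sketch; the helper name `scan_patch_log` is ours, and the excluded message numbers are the ones this document treats as benign:

```shell
# scan_patch_log: print error lines plus warning lines, excluding the
# known-benign message numbers listed in the commands above.
scan_patch_log() {
  f=$1
  grep -i error "$f" || true
  grep -i warning "$f" | grep -iv error | grep -Ev '773|224|345|783|415' || true
}

# Usage: scan_patch_log <opatch log file>
```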
--------------------
d) Finish
--------------------
$ opatch lsinventory

--------------------------------------------------
3.4. Install the One-off Patch (session log: <nodename>_rdbms-oneoff-patch.log)
--------------------------------------------------
# su - oracle
$ opatch prereq CheckConflictAgainstOHWithDetail -ph /orastg/software/18180390
$ opatch apply -oh /orastg/app/oracle/product/11.2.0/db_1 -local /orastg/software/18180390

--------------------------------------------------
3.5. Finish Patching (session log: <nodename>_finish-patch.log)
--------------------------------------------------
1) Verify all patches
====================
Run as both oracle and grid on every node:
$ opatch lsinventory

====================
2) Start the cluster
====================
# /orastg/app/11.2.0/grid/bin/crsctl start crs
---------------------
Author: 司马松儆 (smasegain)
Source: CSDN
Original: https://blog.csdn.net/smasegain/article/details/47049955
Copyright: original work by the author; include the source link when reposting.

 

 

###sample 1:

##all source in /dbatmp

## Network environment
Each host has one public NIC and one private NIC, used for RAC traffic:
10.197.1.201 pdbdb01
10.197.1.202 pdbdb02
10.197.1.203 pdbdb01-vip
10.197.1.204 pdbdb02-vip
190.0.1.201 pdbdb01-priv
190.0.1.202 pdbdb02-priv
10.197.1.205 db-scan


## File systems:
120.00 G /db/db/app
120.00 G /db/db/grid

### create disks: new_data 400G, new_fra 200G

3-4 NEW_DATA
5 NEW_FRA
6 - 10 db_OCRVD


### Manual configuration comes first
### HOSTS:

10.241.28.80 dbrac1
10.241.28.81 dbrac2
10.241.28.82 dbrac1-vip
10.241.28.83 dbrac2-vip
190.0.28.80 dbrac1-priv
190.0.28.81 dbrac2-priv
10.241.28.84 dbrac-scan

## When prompted whether to change each value, answer YES or Y; confirm the items one by one:
vmo -p -o maxperm%=90;
sleep 5
vmo -p -o minperm%=3;
sleep 5
vmo -p -o maxclient%=90;
sleep 5
vmo -p -o strict_maxclient=1;
sleep 5
vmo -p -o strict_maxperm=0;

 

 

##network
chmod 755 /etc/rc.net

vi /etc/rc.net
if [ -f /usr/sbin/no ] ; then
/usr/sbin/no -o extendednetstats=0 >>/dev/null 2>&1
/usr/sbin/no -p -o sb_max=20000000
/usr/sbin/no -r -o ipqmaxlen=512
/usr/sbin/no -p -o udp_sendspace=1056000
/usr/sbin/no -p -o udp_recvspace=10560000
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o rfc1323=1
/usr/sbin/no -p -o rfc2414=1
/usr/sbin/no -r -o rfc1122addrchk=0
/usr/sbin/no -r -o ipqmaxlen=512
fi


Run the following commands manually; they persist across reboots. After configuring, reboot the host so they take effect (important), otherwise CRS node 2 may be unable to rejoin and access the disks.
/usr/sbin/no -o extendednetstats=0 >>/dev/null 2>&1
/usr/sbin/no -p -o sb_max=20000000
/usr/sbin/no -r -o ipqmaxlen=512
/usr/sbin/no -p -o udp_sendspace=1056000
/usr/sbin/no -p -o udp_recvspace=10560000
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o rfc1323=1
/usr/sbin/no -p -o rfc2414=1
/usr/sbin/no -r -o rfc1122addrchk=0
/usr/sbin/no -r -o ipqmaxlen=512

 

### NTP does not need to run; this step can be skipped
root@opdbdb02:/etc>#ps -ef | grep ntp
root 3604666 3015654 0 18:20:20 - 0:01 /usr/sbin/xntpd -x
root 1508426 1049852 0 09:46:14 pts/1 0:00 grep ntp
## (append "-x" to the last line)
cd /etc
vi rc.tcpip
start /usr/sbin/xntpd "$src_running" "-x"

root@opdbdb02:/etc>#egrep ntp rc.tcpip
stopsrc -s xntpd
startsrc -s xntpd
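The rc.tcpip edit can be scripted and made safe to rerun. A sketch, assuming the start line has the `start /usr/sbin/xntpd "$src_running"` form shown above; the helper name `add_xntpd_x` is ours. Work on a copy and diff before replacing the real file:

```shell
# add_xntpd_x: append "-x" to the xntpd start line in an rc.tcpip-style
# file, skipping files that already have it (idempotent).
add_xntpd_x() {
  f=$1
  grep -q 'xntpd.*-x' "$f" && return 0
  sed '/^start .*xntpd/ s/$/ "-x"/' "$f" > "$f.new" && mv "$f.new" "$f"
}

# Usage: cp /etc/rc.tcpip /tmp/rc.tcpip && add_xntpd_x /tmp/rc.tcpip
```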

 


## On Windows, install Xmanager. If the grid user gets "Permission denied" when logging in,
## check the AllowUsers line in /etc/ssh/sshd_config and add the grid user;
## also enable X11 forwarding so Xmanager can open X windows:

vi /etc/ssh/sshd_config
X11Forwarding yes
## to /etc/ssh/sshd_config
stopsrc -s sshd
startsrc -s sshd

 

##create user

script -a /tmp/cre_log.log
rmuser opdb
rmuser grid
rmuser appmon

rmgroup oinstall
rmgroup asmadmin
rmgroup dba
rmgroup asmdba
rmgroup asmoper

 

mkgroup -'A' id='1000' adms='root' oinstall
mkgroup -'A' id='1002' adms='root' asmadmin
mkgroup -'A' id='1001' adms='root' dba
mkgroup -'A' id='1003' adms='root' asmdba
mkgroup -'A' id='1004' adms='root' asmoper
###mkuser id='1000' pgrp='oinstall' groups='dba,asmadmin,asmdba' home='/home/opdb' opdb
###mkuser id='1001' pgrp='oinstall' groups='asmadmin,asmdba,asmoper' home='/home/grid' grid


mkuser id='1000' pgrp='oinstall' groups='dba,asmadmin,asmdba' home='/home/opdb' opdb
mkuser id='1001' pgrp='oinstall' groups='asmadmin,asmdba,asmoper' home='/home/grid' grid

passwd grid
grid1234
passwd opdb
opdb1234

 

## The system security policy forces a password change at first login:
##in grid(grid12345) and opdb(opdb1234)
ssh dbrac1
ssh dbrac2

 

#### profile
## grid user: edit ~/.profile and add the following.
## Run this command first: grid was deleted and recreated earlier, and /home/grid ownership turned out to be wrong afterwards.
chown -R grid:asmdba /home/grid

umask 022
export ORACLE_BASE=/db/db/app/grid
export ORACLE_HOME=/db/db/grid/11.2.0
export ORACLE_SID=+ASM1
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

### oracle user (opdb): edit ~/.profile and add the following:

umask 022
export ORACLE_BASE=/db/db/app/db
export ORACLE_HOME=/db/db/app/db/product/11204
export ORACLE_SID=db1
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

 


######### Settings that can be adjusted dynamically come later:
#from grid to opdb to check
#su - opdb

#from opdb to grid to check
#su - grid

id opdb
#uid=1000(opdb) gid=1000(oinstall) groups=1001(dba),1002(asmadmin),1003(asmdba)
id grid
#uid=1001(grid) gid=1000(oinstall) groups=1002(asmadmin),1003(asmdba),1004(asmoper)


lsuser -a capabilities grid
chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid


lsuser -a capabilities grid

lsuser -a capabilities opdb
chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE opdb

lsuser -a capabilities opdb

##os version
oslevel -s
bootinfo -K

### create disks: new_data 5T, new_fra 800G

#3-4 NEW_DATA
#5 NEW_FRA
#6 - 10 db_OCRVD
/usr/sbin/chdev -l hdisk3 -a reserve_policy=no_reserve
/usr/sbin/chdev -l hdisk4 -a reserve_policy=no_reserve
/usr/sbin/chdev -l hdisk5 -a reserve_policy=no_reserve
/usr/sbin/chdev -l hdisk6 -a reserve_policy=no_reserve
/usr/sbin/chdev -l hdisk7 -a reserve_policy=no_reserve
/usr/sbin/chdev -l hdisk8 -a reserve_policy=no_reserve
/usr/sbin/chdev -l hdisk9 -a reserve_policy=no_reserve
/usr/sbin/chdev -l hdisk10 -a reserve_policy=no_reserve

## dd over the old data (pay attention: destructive; double-check device names)

##dd if=/dev/hdisk3 of=/dev/null bs=1024

dd if=/dev/zero of=/dev/rhdisk3 bs=1024k count=1
dd if=/dev/zero of=/dev/rhdisk4 bs=1024k count=1
dd if=/dev/zero of=/dev/rhdisk5 bs=1024k count=1
dd if=/dev/zero of=/dev/rhdisk6 bs=1024k count=1
dd if=/dev/zero of=/dev/rhdisk7 bs=1024k count=1
dd if=/dev/zero of=/dev/rhdisk8 bs=1024k count=1
dd if=/dev/zero of=/dev/rhdisk9 bs=1024k count=1
dd if=/dev/zero of=/dev/rhdisk10 bs=1024k count=1

 

 


# node 1: the OCR/vote disk group uses high redundancy; 5 ocr_vote disks are candidates


chown grid:asmadmin /dev/rhdisk3
chown grid:asmadmin /dev/rhdisk4
chown grid:asmadmin /dev/rhdisk5
chown grid:asmadmin /dev/rhdisk6
chown grid:asmadmin /dev/rhdisk7
chown grid:asmadmin /dev/rhdisk8
chown grid:asmadmin /dev/rhdisk9
chown grid:asmadmin /dev/rhdisk10

## Storage and AIX feedback: the minor numbers are assigned automatically; even though
## node 1 and node 2 may disagree (e.g., rhdisk3 maps to major 23, minor 3 on node 1
## but major 23, minor 10 on node 2), the numbers do not change across reboots.
## To avoid the hassle of renaming, we use the rhdisk devices directly instead of
## named links, although links are the officially recommended setup.
## If in doubt, run lscfg -vl hdisk10 and compare the trailing digits of the serial number.

ls -tlr /dev/rhdisk*
#cd /dev/
#rm *ocr_disk*
#rm *vote_disk*

#mknod /dev/ocr_vote_disk1 c 44 29
#mknod /dev/ocr_vote_disk2 c 44 30
#mknod /dev/ocr_vote_disk3 c 44 31
#mknod /dev/ocr_vote_disk4 c 44 32
#mknod /dev/ocr_vote_disk5 c 44 33

 

#mknod /dev/asm_data_disk0 c 44 0
#mknod /dev/asm_data_disk1 c 44 1


#mknod /dev/asm_fra_disk25 c 44 25


#--mknod /dev/asm_data_disk29 c 40 29

#--mknod /dev/asm_fra_disk1 c 40 30
#--mknod /dev/asm_fra_disk2 c 40 31
#/dev/asm_*
#chown grid:asmadmin /dev/asm_*
chmod 664 /dev/rhdisk3
chmod 664 /dev/rhdisk4
chmod 664 /dev/rhdisk5
chmod 664 /dev/rhdisk6
chmod 664 /dev/rhdisk7
chmod 664 /dev/rhdisk8
chmod 664 /dev/rhdisk9
chmod 664 /dev/rhdisk10

 


##node 2
ls -tlr /dev/rhdisk*

#cd /dev/
#rm *ocr_disk*
#rm *vote_disk*

#mknod /dev/ocr_vote_disk1 c 44 29
#mknod /dev/ocr_vote_disk2 c 44 30
#mknod /dev/ocr_vote_disk3 c 44 31
#mknod /dev/ocr_vote_disk4 c 44 32
#mknod /dev/ocr_vote_disk5 c 44 33

 

#mknod /dev/asm_data_disk0 c 44 0
#mknod /dev/asm_data_disk1 c 44 1


#mknod /dev/asm_fra_disk25 c 44 25

 

### set permissions

##chown grid:asmadmin /dev/rhdiskpower*

#chown grid:asmadmin /dev/ocr*
#chown grid:asmadmin /dev/ocr_disk1
#chown grid:asmadmin /dev/ocr_disk2
#chown grid:asmadmin /dev/vote_disk1
#chown grid:asmadmin /dev/vote_disk2
#chown grid:asmadmin /dev/vote_disk3


#chown grid:asmadmin /dev/asm_data_disk0
#chown grid:asmadmin /dev/asm_data_disk1

#chown grid:asmadmin /dev/asm_*
#chmod 664 /dev/asm_*

 


###create home
####GRID_BASE /db/db/app/grid
mkdir -p /db/db/app/oraInventory
chown -R grid:oinstall /db/db/app/oraInventory
chmod -R 775 /db/db/app/oraInventory

mkdir -p /db/db/app/grid
chown -R grid:oinstall /db/db/app/grid
chmod -R 775 /db/db/app/grid

####GRID_HOME /db/db/grid/11.2.0
mkdir -p /db/db/grid/11.2.0
chown -R grid:oinstall /db/db/grid/11.2.0
chmod -R 775 /db/db/grid/11.2.0


##ORACLE_BASE /db/db/app/db
mkdir -p /db/db/app/db
chown -R opdb:oinstall /db/db/app/db
chmod -R 775 /db/db/app/db
chmod -R 775 /db/db/app/db

##oraInventory under ORACLE_BASE /db/db/app/db
mkdir -p /db/db/app/db/oraInventory
chown -R grid:oinstall /db/db/app/db/oraInventory
chmod -R 775 /db/db/app/db/oraInventory


###ORACLE_HOME /db/db/app/db/product/11204
mkdir -p /db/db/app/db/product/11204
chown -R opdb:oinstall /db/db/app/db/product/11204
chmod -R 775 /db/db/app/db/product/11204

##soft
lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte

lslpp -L | egrep "bos.mp64|bos.adt.include|bos.sysmgt.trace"


### kernel
## Thanks to user 用户6543014

ioo -o aio_maxreqs
#aio_maxreqs = 131072

#cat /etc/environment | grep AIXTHREAD_SCOPE
#AIXTHREAD_SCOPE=S

echo "AIXTHREAD_SCOPE=S" |tee -a /etc/environment
cat /etc/environment
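The `tee -a` append above duplicates the line if rerun; a guarded variant only appends when the setting is absent. A sketch; the helper name `append_aixthread_scope` is ours:

```shell
# append_aixthread_scope: add AIXTHREAD_SCOPE=S to the given file only if it
# is not already present, so reruns do not create duplicate lines.
append_aixthread_scope() {
  grep -qx 'AIXTHREAD_SCOPE=S' "$1" || echo 'AIXTHREAD_SCOPE=S' >> "$1"
}

# Usage (as root): append_aixthread_scope /etc/environment
```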

lsattr -E -l sys0 -a ncargs
#ncargs 256 ARG/ENV list size in 4K byte blocks True

vmo -L vmm_klock_mode
#NAME CUR DEF BOOT MIN MAX UNIT TYPE
# DEPENDENCIES
#--------------------------------------------------------------------------------
#vmm_klock_mode 2 -1 -1 -1 3 numeric B


cat /etc/security/limits
default:
fsize = -1
db = 0
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
db_hard = 0

 


##vmo -p -o lru_file_repage=0;
##chdev -l sys0 -a 'minpout=4096 maxpout=8193'
##vmo -r -o page_steal_method=1;
# (requires a reboot to take effect)
##vmo -p -o strict_maxperm=0;
##vmo -p -o strict_maxclient=1;

 

## ssh client config (X11 forwarding)
## Run this command first: grid was deleted once before, and /home/grid permissions turned out to be insufficient afterwards.
chown -R grid:asmdba /home/grid
su - grid
mkdir ~/.ssh
cd /home/grid/.ssh
pwd
#/home/grid/.ssh
echo "Host *" | tee -a config
echo "ForwardX11 no" | tee -a config

cat config
#Host *
# ForwardX11 no


ssh grid@10.197.1.202
ssh grid@10.197.1.201

cd /tmp/orasoft/
scp -rp root@10.241.28.86:/tmp/orasoft/ ./
cd /tmp/orasoft/orasoft/11.2*/gi

### Thanks to 翔之天空
## Extract the unzip binary from the install package p13390677_112040_AIX64-5L_3of7.zip

 


chmod 777 unzip
cp unzip /usr/bin/unzip
unzip p13390677_112040_AIX64-5L_3of7.zip
unzip p13390677_112040_AIX64-5L_1of7.zip
unzip p13390677_112040_AIX64-5L_2of7.zip
## run on one node as root
/dbatmp/db_soft/GI/grid/sshsetup
####./sshUserSetup.sh -user grid -hosts "pdbdb01 pdbdb02" -advanced -noPromptPassphrase
##./sshUserSetup.sh -user opdb -hosts "sdbdb01 sdbdb02" -advanced -noPromptPassphrase

##you will be prompted for a password four times in total
sh -x sshUserSetup.sh -user grid -hosts "dbrac1 dbrac2" -advanced -noPromptPassphrase
grid1234


sh -x sshUserSetup.sh -user opdb -hosts "dbrac1 dbrac2" -advanced -noPromptPassphrase
opdb1234

 

 

 

###run as grid (slow; takes about 5 minutes)
cd /dbatmp/db_soft/GI/grid
##./runcluvfy.sh stage -pre crsinst -n dbrac1,dbrac2 -verbose
sh -x runcluvfy.sh stage -pre crsinst -n dbrac1,dbrac2 -verbose

###install
#chown -R grid:oinstall /db/db/app/dbsoft/gi

chown -R grid:oinstall /tmp/orasoft/orasoft

cd /dbatmp/db_soft/GI

##run on both nodes as root:
sh rootpre.sh

./runInstaller -ignorePrereq


Select "Install and Configure Grid Infrastructure for a Cluster"


Select "Advanced Installation"

Cluster configuration:
cluster name db-cluster
scan name db-scan
scan port 1528


##if we hit INS-41112 (specified network interface doesn't maintain connectivity across cluster nodes)
##ssh pdbdb01-priv to check

 

Password: oracle1234

Choose ASM as storage.

Screen 11 (Step 11 of 22): the OCR/VOTE disk group is created here.
Enter the OCR/VOTE disk group name "{DBNAME}_OCRVD", here db_OCR (an earlier run used the name OCR_VOTE_db).
Set Redundancy to "High" (five disks are enough).
Select the Candidate Disks:
"ORCL:OCRVD_DISK1", "ORCL:OCRVD_DISK2", "ORCL:OCRVD_DISK3", ...


Screen 12 (Step 12 of 22):
select "Use same passwords for these accounts" and enter the password: oracle1234


Three OS groups:
asmadmin
asmdba
asmoper

inventory
/db/db/app/oraInventory
oinstall


###run on both nodes as root

cd /db/db/app/oraInventory
sh orainstRoot.sh

cd /db/db/grid/11.2.0
sh root.sh
##(root.sh is the important one)

###change ASM parameters
sqlplus / as sysasm
SQL> show parameter asm

NAME             TYPE     VALUE
---------------  -------  ---------
asm_diskgroups   string
asm_diskstring   string   /dev/ocr*
asm_power_limit  integer  1

 

Change them:
alter system set asm_diskstring='/dev/*_disk*' scope=spfile sid='*';
alter system set asm_power_limit =10 scope=spfile sid='*';
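Before widening asm_diskstring, it is worth confirming that the new glob matches exactly the intended device names. A sketch with /tmp files standing in for the real /dev entries (the names below are illustrative):

```shell
# Sketch only: /tmp files stand in for the real /dev device entries.
DEVDIR=/tmp/demo_dev
mkdir -p "$DEVDIR"
touch "$DEVDIR/ocrvd_disk1" "$DEVDIR/new_data_disk1" "$DEVDIR/rootvg"
# Same pattern shape as the alter system above: only *_disk* names should match.
set -- "$DEVDIR"/*_disk*
echo "$# matching devices"   # 2 matching devices
```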

 

 


##INS-20802 "Oracle Cluster Verification Utility failed" can be ignored


##use asmca to create the data disk group

#create asm disk group
#name: NEW_DATA
#redundancy: external
#AU size: 1M
###as opdb, run database/runInstaller and choose "Install database software only"
cd /dbatmp/db_soft/GI/database

#use root in two nodes
sh rootpre.sh

##as opdb, run database/runInstaller


dba
oinstall

 

/db/db/grid/11.2.0/bin/crsctl stop crs   ## (takes about 5 minutes)
/db/db/grid/11.2.0/bin/crsctl start crs

### install PSU 28429134

(run on both nodes)

cd /db/db/app/dbsoft/opatch

export ORACLE_HOME=/db/db/grid/11.2.0
cp p6880880_112000_AIX64-5L.zip /db/db/grid/11.2.0/p6880880_112000_AIX64-5L.zip

export ORACLE_HOME=/db/db/app/db/product/11204

cp p6880880_112000_AIX64-5L.zip /db/db/app/db/product/11204/p6880880_112000_AIX64-5L.zip


su - grid
cd /db/db/grid/11.2.0/
unzip p6880880_112000_AIX64-5L.zip
(answer A = replace All when prompted)

su - opdb
cd /db/db/app/db/product/11204
unzip p6880880_112000_AIX64-5L.zip
(answer A = replace All when prompted)

opatch version
OPatch Version: 11.2.0.3.20


cd /db/db/app/dbsoft/psu
unzip p28429134_112040_AIX64-5L.zip


chmod -R 777 /db/db/app/dbsoft/psu

On node 1:
su - grid

$ORACLE_HOME/OPatch/ocm/bin/emocmrsp

OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates. All rights reserved.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (ocm.rsp) was successfully created

 

su - opdb

$ORACLE_HOME/OPatch/ocm/bin/emocmrsp

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (ocm.rsp) was successfully created


(apply the GI PSU to the GI home on node 1)
su - root
cd /db/db/grid/11.2.0/OPatch/
./opatch auto /db/db/app/dbsoft/psu/28429134 -oh /db/db/grid/11.2.0 -ocmrf /home/grid/ocm.rsp


(apply the GI PSU to the GI home on node 2)
On node 2:
su - root
cd /db/db/grid/11.2.0/OPatch/
./opatch auto /db/db/app/dbsoft/psu/28429134 -oh /db/db/grid/11.2.0 -ocmrf /home/grid/ocm.rsp

#28204939
#28204707
#27735020
# apply_emctl_patch.sh
28204707
DB PSU 11.2.0.4.181016
Both DB Homes and Grid Home

27735020
OCW PATCH SET UPDATE 11.2.0.4.181016
Both DB Homes and Grid Home

28204939
ACFS PATCH SET UPDATE 11.2.0.4.181016
Only Grid Home

 


(apply PSU 28429134 to the database home on node 1)
On node 1:
su - root
export USER=opdb

--cd /db/db/grid/11.2.0/OPatch/
cd /db/db/app/db/product/11204/OPatch
/db/db/app/db/product/11204/OPatch/opatch auto /db/db/app/dbsoft/psu/28429134 -oh /db/db/app/db/product/11204 -ocmrf /home/opdb/ocm.rsp

 

##dirname $0


27735020

(apply PSU 28429134 to the database home on node 2)
On node 2:
su - root
export USER=opdb
cd /db/db/app/db/product/11204/OPatch
./opatch auto /db/db/app/dbsoft/psu/28429134 -oh /db/db/app/db/product/11204 -ocmrf /home/opdb/ocm.rsp

 

###modify ASM cluster_interconnects to replace HAIP with the old-style static interconnect; see Known Issues: ora.cluster_interconnect.haip (Doc ID 1640865.1)

cluster_interconnects

SQL> select * from v$cluster_interconnects;

NAME  IP_ADDRESS       IS_PUBLIC  SOURCE
----  ---------------  ---------  ------
en8   169.254.251.241  NO


SQL> show parameter spfile

NAME    TYPE    VALUE
------  ------  -----------------------------------------------------------
spfile  string  +db_OCR/db-cluster/asmparameterfile/registry.253.1006541113


[grid@pdbdb02:/home/grid]$ asmcmd find --type ASMPARAMETERFILE +db_OCR "*"
+db_OCR/db-cluster/ASMPARAMETERFILE/REGISTRY.253.1006541113

[grid@pdbdb02:/home/grid]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED HIGH N 512 4096 1048576 5125 3869 2050 606 0 Y db_OCR/


sqlplus / as sysasm
create pfile='/tmp/dba/pfile_asm.ora' from spfile='+db_OCR/db-cluster/asmparameterfile/registry.253.1006541113';

Edit pfile_asm.ora:
processes=170
sessions=180
shared_pool_size = 5G
large_pool_size = 1G
db_cache_size = 1G
sga_max_size=8192M

sqlplus / as sysasm
shutdown abort
startup pfile='/tmp/dba/pfile_asm.ora';

create spfile='+NEW_DATA/spfileASM.ora' from pfile='/tmp/dba/pfile_asm.ora';

##We now see the ASM spfile itself (REGISTRY.253.1006958153) and its alias (spfileASM.ora)
asmcmd ls -l +NEW_DATA/spfileASM.ora
Type Redund Striped Time Sys Name
N spfileASM.ora => +NEW_DATA/db-cluster/ASMPARAMETERFILE/REGISTRY.253.1006958153

/db/db/grid/11.2.0/bin/crsctl stop crs

/db/db/grid/11.2.0/bin/crsctl start crs


alter system set cluster_interconnects='190.0.1.201' sid='+ASM1' scope=spfile;
alter system set cluster_interconnects='190.0.1.202' sid='+ASM2' scope=spfile;

/db/db/grid/11.2.0/bin/crsctl stop crs

/db/db/grid/11.2.0/bin/crsctl start crs

SQL> select * from v$cluster_interconnects;

NAME  IP_ADDRESS   IS_PUBLIC  SOURCE
----  -----------  ---------  -------------------------------
en8   190.0.1.201  NO         cluster_interconnects parameter


##issue 1

12.1.0.2 Installation fails with Error: "Reference data is not available for release “12.1” on the Operating System Distribution “AIX7.2”" (Doc ID 2169858.1)

CAUSE
cvu_prereq.xml not updated with AIX 7.2 pre-req details .

Unpublished BUG 23186307

SOLUTION
1. Please ensure all the pre-requirements are met as mentioned in the following My Oracle Support Note :


http://docs.oracle.com/database/121/AXDBI/pre_install.htm#CIHEIAHI

2. After the above step :


Please launch the OUI using the below command -

% ./runInstaller -ignoreSysPrereqs
Or
% ./runInstaller -ignorePrereq

 

###issue 1.1
Credit: DBA-010

Problem:
root.sh on node 1 fails with: CLSRSC-331: Failure initializing entries in


Cause:
ckptGridHA_racdb1.xml is a checkpoint file. It records the node name, OCR and voting disk locations, GRID_HOME, ORACLE_HOME, private interconnect, and public/VIP IP addresses; locate it with find.


fix:
find / -name ckptGridHA_dbrac1.xml
find / -name ckptGridHA*.xml

cd "$(dirname "$(find / -name 'ckptGridHA_dbrac1.xml' 2>/dev/null | head -1)")"
mv ckptGridHA_dbrac1.xml ckptGridHA_dbrac1.xml.bak
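The rename above can be made safer with a timestamped backup name, so repeated root.sh retries never clobber an earlier backup. A sketch with a /tmp stand-in for the real checkpoint file:

```shell
# Sketch only: /tmp file stands in for the checkpoint file found via find.
CKPT=/tmp/ckptGridHA_dbrac1.xml
touch "$CKPT"
# Timestamped backup suffix instead of a fixed .bak name.
bak="$CKPT.bak.$(date +%Y%m%d%H%M%S)"
mv "$CKPT" "$bak"
[ ! -e "$CKPT" ] && echo "checkpoint set aside"
```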

 

re-run

root.sh

 

###issue 1.2
Credit: wenzhongyan
https://blog.csdn.net/wenzhongyan/article/details/48312903

root.sh on node 1 fails with:
CRS-2728: A resource type with the name 'ora.daemon.type' is already registered


The creation failed because system resources were busy, usually a symptom of poor host performance. On the first node, deconfigure with roothas.pl and then re-run root.sh.
fix:

find / -name roothas.pl
$grid_home/install/
./roothas.pl -deconfig -force -verbose

re-run

root.sh

 

###issue 1.3

Credit: Oracle support
#On the second node, deconfigure with rootcrs.pl and then re-run root.sh
Problem: on AIX, root.sh on node 2 fails at crsconfig_lib.pm line 11814

Possible causes:

1. The voting disk is lost or inaccessible
2. Multicast is not working (normal behavior for 11.2.0.2; for 11.2.0.3 PSU5/PSU6/PSU7 and 12.1.0.1 it is Bug 16547309)
3. The private network is not working: ping or traceroute <private host> shows the target unreachable; or ping/traceroute works, but a firewall is enabled on the private network
4. gpnpd does not come up, stuck in its dispatch thread, Bug 10105195
5. Too many disks are discovered via asm_diskstring, or scanning is too slow due to Bug 13454354 (Solaris 11.2.0.3 only)

For cause 1, restore voting disk access by checking storage accessibility, disk permissions, and so on. If the voting disk is inaccessible at the OS level, ask the OS administrator to restore access.
If the voting disks inside the OCR ASM disk group are lost, start CRS in exclusive mode and recreate them.


##Solution:


1. correct the NIC configuration (naming) to match the information stored in the OCR;
   adjust the network kernel parameters and reboot the host for them to take effect

2. as the root user run the following command to allow a rerun of 'root.sh' on the new node:

find / -name rootcrs.pl

$GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose

3. as user root rerun
$GRID_HOME/root.sh

 

###issue 1.4

Credit: 雨丶花丶石
https://blog.csdn.net/shiyu1157758655/article/details/75018879

Problem:
During Oracle 11g RAC database software installation, the remote node is not found.

Solution:

1. Configure ssh user equivalence for the oracle user
2. Edit /u01/app/oraInventory/ContentsXML/inventory.xml

Find the following line:

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid_1" TYPE="O" IDX="1" >

and change it to

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid_1" TYPE="O" IDX="1" CRS="true">

 


###issue 2

The grid installer does not detect the shared disks.

/tmp/OraInstall2019-03-23_09-30-16PM/ext/bin/kfod, nohdr=true, verbose=true, disks=all, status=true, op=disks, asm_diskstring='/dev/ocr*'

cd /tmp/OraInstall2019-03-23_09-30-16PM/ext/bin/
kfod disks=all status=true op=disks asm_diskstring='/dev/ocr*'

dd if=/dev/ocr_vote_disk1 of=/dev/null bs=8192
dd if=/dev/ocr_vote_disk5 of=/dev/null bs=8192
dd if=/dev/ocr_vote_disk2 of=/dev/null bs=8192
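Rather than spot-checking individual disks, every candidate device can be read-tested in a loop. A sketch with /tmp files standing in for the /dev/ocr_vote_disk* devices:

```shell
# Sketch only: /tmp files stand in for the /dev/ocr_vote_disk* devices.
DEVDIR=/tmp/demo_ocr
mkdir -p "$DEVDIR"
for i in 1 2 3 4 5; do
  dd if=/dev/zero of="$DEVDIR/ocr_vote_disk$i" bs=8192 count=1 2>/dev/null
done
# Read every candidate instead of spot-checking a few.
ok=0
for d in "$DEVDIR"/ocr_vote_disk*; do
  dd if="$d" of=/dev/null bs=8192 2>/dev/null && ok=$((ok+1))
done
echo "$ok of 5 readable"   # 5 of 5 readable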

###The response file (grid.rsp) is saved under /home/grid
[grid@sdbdb01:/home/grid]$ ls
grid.rsp oradiag_grid

 


#####installer issue 3
76%
Remote 'AttachHome' failed on nodes: 'pdbdb01'. Refer to '/db/db/app/oraInventory/logs/installActions2019-04-25_03-18-07PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
/db/db/grid/11.2.0/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/db/db/grid/11.2.0 ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=loopback,pdbdb01,pdbdb02 "INVENTORY_LOCATION=/db/db/app/oraInventory" LOCAL_NODE=<node on which command is to be run>.
Please refer 'AttachHome' logs under central inventory of remote nodes where failure occurred for more details.


Solution:

If root.sh has not been run yet, the files can be removed manually
(otherwise the $ORACLE_HOME/deinstall/deinstall script fails with errors).

1. Create /etc/oraInst.loc containing:
inventory_loc=/db/db/app/oraInventory
inst_group=oinstall

2.
su - grid
export ORACLE_HOME=/db/db/grid/11.2.0

/db/db/grid/11.2.0/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=/db/db/grid/11.2.0

/db/db/grid/11.2.0/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/db/db/grid/11.2.0

[grid@pdbdb01:/db/db/app/oraInventory]$ /db/db/grid/11.2.0/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/db/db/grid/11.2.0
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 32768 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /db/db/app/oraInventory
The Oracle home '/db/db/grid/11.2.0' could not be updated as it does not exist.
'DetachHome' failed.


/db/db/grid/11.2.0/oui/bin/runInstaller -detachHome -noClusterEnabled ORACLE_HOME=/db/db/grid/11.2.0 "INVENTORY_LOCATION=/db/db/app/oraInventory" LOCAL_NODE=pdbdb01


/db/db/grid/11.2.0/OPatch/opatch lsinventory -all

/var/opt/oracle/oraInst.loc

3.
su - root
cd /db/db/grid/11.2.0
rm -rf *
rm /etc/oraInst.loc

cd /db/db/app/oraInventory
rm -rf *
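A bare `rm -rf *` after a `cd` is dangerous if the `cd` silently failed. One way to guard the cleanup (sketch only; the /tmp tree and path check are placeholders for the real grid home):

```shell
# Sketch only: /tmp tree stands in for the grid home being wiped.
TARGET=/tmp/demo_grid_home
mkdir -p "$TARGET/bin"
# Refuse to expand rm -rf unless the variable holds the expected path;
# an empty or mistyped variable must never reach rm.
case "$TARGET" in
  /tmp/demo_grid_home) rm -rf "$TARGET"/* ;;
  *) echo "refusing to delete: unexpected path '$TARGET'" >&2 ;;
esac
[ ! -e "$TARGET/bin" ] && echo "cleaned"
```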


###issue 4
[root@pdbdb02:/db/db/app/oraInventory]# sh orainstRoot.sh
Changing permissions of /db/db/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /db/db/app/oraInventory to oinstall.
The execution of the script is complete.
[root@pdbdb02:/db/db/app/oraInventory]# cd /db/db/grid/11.2.0
[root@pdbdb02:/db/db/grid/11.2.0]# sh root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /db/db/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory...
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /db/db/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
User grid has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node pdbdb01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Start of resource "ora.crsd" failed
CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'pdbdb02'
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start Cluster Ready Services at /db/db/grid/11.2.0/crs/install/crsconfig_lib.pm line 1353.
/db/db/grid/11.2.0/perl/bin/perl -I/db/db/grid/11.2.0/perl/lib -I/db/db/grid/11.2.0/crs/install /db/db/grid/11.2.0/crs/install/rootcrs.pl execution failed


solution:
1. check the OCR device permissions
2. stop crs
3. re-run root.sh:
[root@pdbdb02:/db/db/grid/11.2.0]# sh root.sh

 

### issue 5
1. the messages below keep repeating in alert+ASM.log, and
2. the asmca log shows its SQL is hung

The query hangs on ASM instance 1; on ASM instance 2 it works:
select name||'|'||state from v$asm_diskgroup;


node 1 asm log
. Found 5 voting file(s).
NOTE: Voting file relocation is required in diskgroup db_OCR
NOTE: Attempting voting file relocation on diskgroup db_OCR
NOTE: Successful voting file relocation on diskgroup db_OCR


node2 asm log

NOTE: Voting file relocation is required in diskgroup db_OCR
NOTE: Attempting voting file relocation on diskgroup db_OCR
NOTE: Successful voting file relocation on diskgroup db_OCR


solution (takes about 20 minutes):
step 1:

/db/db/grid/11.2.0/bin/crsctl stop crs
/db/db/grid/11.2.0/bin/crsctl start crs


step 2:
check the trace files under /db/db/app/grid/diag/asm/+asm/+ASM2/trace

select MOUNT_STATUS,path from v$asm_disk;

alter diskgroup db_OCR undrop disks;


###issue 6

When running commands as root, this error is reported:
CLSD:An error was encountered while attempting to open log file "UNKNOWN". Additional diagnostics: (:CLSD00157:)

[root@pdbdb01:/tmp/OraInstall2019-04-25_08-13-41PM]# /db/db/grid/11.2.0/bin/crsctl stop crs
2019-04-26 10:15:08.628:
CLSD:An error was encountered while attempting to open log file "UNKNOWN". Additional diagnostics: (:CLSD00157:)
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'pdbdb01'
CRS-2673: Attempting to stop 'ora.crsd' on 'pdbdb01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'pdbdb01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'pdbdb01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'pdbdb01'
CRS-2676: Start of 'ora.scan1.vip' on 'pdbdb02' succeeded


solution:
exit the current shell and switch to root again with a fresh "su - root", then re-run the command:

 


###issue 7
[root@pdbdb01:/db/db/grid/11.2.0/OPatch]# ./opatch auto /db/db/app/dbsoft/psu/28429134 -oh /db/db/app/db/product/11204 -ocmrf /home/opdb/ocm.rsp
Executing /db/db/grid/11.2.0/perl/bin/perl /db/db/grid/11.2.0/OPatch/crs/patch11203.pl -patchdir /db/db/app/dbsoft/psu -patchn 28429134 -oh /db/db/app/db/product/11204 -ocmrf /home/opdb/ocm.rsp -paramfile /db/db/grid/11.2.0/crs/install/crsconfig_params

This is the main log file: /db/db/grid/11.2.0/cfgtoollogs/opatchauto2019-04-26_14-02-15.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/db/db/grid/11.2.0/cfgtoollogs/opatchauto2019-04-26_14-02-15.log

2019-04-26 14:02:15: Starting Clusterware Patch Setup
Using configuration parameter file: /db/db/grid/11.2.0/crs/install/crsconfig_params

Stopping RAC /db/db/app/db/product/11204 ...
Stopped RAC /db/db/app/db/product/11204 successfully

patch /db/db/app/dbsoft/psu/28429134/27735020/custom/server/27735020 apply successful for home /db/db/app/db/product/11204
patch /db/db/app/dbsoft/psu/28429134/28204707 apply failed for home /db/db/app/db/product/11204

Starting RAC /db/db/app/db/product/11204 ...
Failed to start resources from database home /db/db/app/db/product/11204
ERROR: Refer log file for more details.


opatch auto failed.


solution:
1. clean up:
su - opdb
opatch rollback -local -id 27735020 -oh /db/db/app/db/product/11204

cd /db/db/app/db/product/11204/.patch_storage/NApply/2019-04-26_15-47-36PM
sh restore.sh

2. test the apply manually:
/bin/su opdb -c ' /db/db/app/db/product/11204/OPatch/opatch napply /db/db/app/dbsoft/psu/28429134/28204707 -local \
-silent -ocmrf /home/opdb/ocm.rsp -oh /db/db/app/db/product/11204 -invPtrLoc /db/db/app/db/product/11204/oraInst.loc'

check the script the failing step ran:
cd /db/db/app/db/product/11204/.patch_storage/NApply

sh /db/db/app/dbsoft/psu/28429134/28204707/28204707/custom/scripts/pre -apply 28204707

###export ORACLE_HOME=/db/db/app/db/product/11204, since the failure comes from apply_emctl_patch.sh
###GRID_HOME=/db/db/grid/11.2.0

ls -ld $ORACLE_HOME | awk '{print $3}'

cd /db/db/app/dbsoft/psu/28429134/28204707/28204707/custom/scripts

cat apply_emctl_patch.sh
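The `ls -ld | awk` owner check above can be exercised standalone; the patch script's decision hinges on that owner matching the user opatch acts as. A sketch with a /tmp directory standing in for $ORACLE_HOME:

```shell
# Sketch only: a /tmp directory we just created stands in for $ORACLE_HOME.
D=/tmp/demo_owner_check
mkdir -p "$D"
# Same owner extraction the doc uses: column 3 of ls -ld.
owner=$(ls -ld "$D" | awk '{print $3}')
# It should match the user the patch is applied as (here, ourselves).
[ "$owner" = "$(id -un)" ] && echo "home owner matches expected user"
```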


3.
--cd /db/db/app/dbsoft/psu/28429134/28204707/28204707/custom/scripts
--mv rollback_emctl_patch.sh /tmp/dba
--mv apply_emctl_patch.sh /tmp/dba
--cd ../../../28729262/custom/scripts/


###retry opatch auto against the database home:
export USER=opdb

cd /db/db/grid/11.2.0/OPatch/
/db/db/app/db/product/11204/OPatch/opatch auto /db/db/app/dbsoft/psu/28429134 -oh /db/db/app/db/product/11204 -ocmrf /home/opdb/ocm.rsp

chown -R opdb:oinstall /db/db/app/db
chown -R opdb:oinstall /db/db/app/db/product/11204
deinstall the ORACLE_HOME
re-run runInstaller for the ORACLE_HOME

 

 

###sample: configuring NTP time sync (Linux)

Credit: CSDN blogger 男孩李, published 2022-05-21
1. Install ntp on all cluster nodes:

yum install ntp

2. Set the timezone on all nodes (here, China):

timedatectl set-timezone Asia/Shanghai

3. Start the ntp service on the server node:

systemctl start ntpd
systemctl enable ntpd

4. Set the correct current time on the server node:

timedatectl set-time HH:MM:SS

5. On the server node, point ntp at itself and allow clients to connect. Add two lines to /etc/ntp.conf:

restrict 127.0.0.1
server 127.127.1.0

6. Restart the ntpd service:

systemctl restart ntpd

7. On each client node, set the ntp server to the server node. Add one line to /etc/ntp.conf:

server (server IP address)

8. On each client node, sync time from the server once:

ntpdate (server IP)

9. Start the ntpd service on the client nodes:

systemctl start ntpd
systemctl enable ntpd

10. Enable time sync on all nodes:

timedatectl set-ntp yes
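Step 7 can be applied idempotently: replace any distribution-default server lines with the cluster's single time source rather than appending alongside them. A sketch (assumptions: a /tmp file stands in for /etc/ntp.conf, and 10.0.0.1 is a placeholder server IP):

```shell
# Sketch only: /tmp file stands in for /etc/ntp.conf; 10.0.0.1 is a placeholder.
CONF=/tmp/ntp.conf.demo
printf 'driftfile /var/lib/ntp/drift\nserver 0.pool.ntp.org\n' > "$CONF"
NTP_SERVER=10.0.0.1
# Drop existing server lines, then add the cluster's single time source.
grep -v '^server ' "$CONF" > "$CONF.tmp" && mv "$CONF.tmp" "$CONF"
echo "server $NTP_SERVER" >> "$CONF"
grep -c '^server ' "$CONF"   # 1
```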
————————————————
Copyright notice: original CSDN article by 男孩李, licensed under CC 4.0 BY-SA; reproduction must include the original link and this notice.
Original link: https://blog.csdn.net/lovebaby1689/article/details/124900979

posted @ 2019-03-23 23:36  feiyun8616