Changing the hostnames of both RAC nodes

The current /etc/hosts file:
#cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.50.83 sanfree1
10.10.50.82 sanfree2
22.22.22.83 sanfree1-priv
22.22.22.82 sanfree2-priv
10.10.50.6 sanfree1-vip
10.10.50.5 sanfree2-vip
10.10.50.4 sanfree-scan

Current CRS resource status (crsctl stat res -t):
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE sanfree1
ONLINE ONLINE sanfree2
ora.OCRVOTE.dg
ONLINE ONLINE sanfree1
ONLINE ONLINE sanfree2
ora.REDODG.dg
ONLINE ONLINE sanfree1
ONLINE ONLINE sanfree2
ora.SASDG.dg
ONLINE ONLINE sanfree1
ONLINE ONLINE sanfree2
ora.asm
ONLINE ONLINE sanfree1 Started
ONLINE ONLINE sanfree2 Started
ora.gsd
OFFLINE OFFLINE sanfree1
OFFLINE OFFLINE sanfree2
ora.net1.network
ONLINE ONLINE sanfree1
ONLINE ONLINE sanfree2
ora.ons
ONLINE ONLINE sanfree1
ONLINE ONLINE sanfree2
ora.registry.acfs
ONLINE ONLINE sanfree1
ONLINE ONLINE sanfree2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE sanfree2
ora.coco.db
1 ONLINE ONLINE sanfree1 Open
2 ONLINE ONLINE sanfree2 Open
ora.cvu
1 ONLINE ONLINE sanfree1
ora.oc4j
1 ONLINE ONLINE sanfree1
ora.sanfree1.vip
1 ONLINE ONLINE sanfree1
ora.sanfree2.vip
1 ONLINE ONLINE sanfree2
ora.scan1.vip
1 ONLINE ONLINE sanfree2

The goal now is to change the two hostnames to rac1 and rac2. The approach: remove node 2 from the cluster, change node 2's hostname, add node 2 back into CRS; then remove node 1, change node 1's hostname, and add node 1 back into CRS. A condensed sketch of the command flow follows; each step is then documented in detail.
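Below, $GRID_HOME stands for /opt/grid/products/11.2.0, and node 2 is used as the example (node 1 is symmetrical). This is only an abridged map of the detailed steps and output that follow:
## On the node being removed (as root): deconfigure the local clusterware stack
$GRID_HOME/crs/install/rootcrs.pl -deconfig -force
## On the surviving node (as root): drop the node from the cluster
$GRID_HOME/bin/crsctl delete node -n sanfree2
## On the removed node (as grid): trim the local inventory, then deinstall the local GI home
$GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={sanfree2}" CRS=TRUE -silent -local
$GRID_HOME/deinstall/deinstall -local
## Change the hostname, update /etc/hosts on both nodes, re-create grid SSH equivalence
## On the surviving node (as grid): verify prerequisites, then add the renamed node back
cluvfy stage -pre nodeadd -n rac2 -verbose
$GRID_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
## On the renamed node (as root): run $GRID_HOME/root.sh, then relocate the DB instance with srvctl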

1. Remove node 2

A. Check that node 2 is Active and Unpinned; if it is pinned, unpin it with crsctl unpin css (see the example after the output below).
olsnodes -s -t
sanfree1 Active Unpinned
sanfree2 Active Unpinned
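Both nodes are already Unpinned here, so no action is needed; if a node did show Pinned, it could be unpinned like this (as root, hypothetical example for node 2):
/opt/grid/products/11.2.0/bin/crsctl unpin css -n sanfree2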

B. As root on node 2, run the deconfiguration from the GRID_HOME:
/opt/grid/products/11.2.0/crs/install/rootcrs.pl -deconfig -force

Using configuration parameter file: /opt/grid/products/11.2.0/crs/install/crsconfig_params
Network exists: 1/10.10.50.0/255.255.255.0/eth0, type static
VIP exists: /sanfree1-vip/10.10.50.6/10.10.50.0/255.255.255.0/eth0, hosting node sanfree1
VIP exists: /sanfree2-vip/10.10.50.5/10.10.50.0/255.255.255.0/eth0, hosting node sanfree2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'sanfree2'
CRS-2677: Stop of 'ora.registry.acfs' on 'sanfree2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'sanfree2'
CRS-2673: Attempting to stop 'ora.crsd' on 'sanfree2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'sanfree2'
CRS-2673: Attempting to stop 'ora.coco.db' on 'sanfree2'
CRS-2677: Stop of 'ora.coco.db' on 'sanfree2' succeeded
CRS-2673: Attempting to stop 'ora.OCRVOTE.dg' on 'sanfree2'
CRS-2673: Attempting to stop 'ora.REDODG.dg' on 'sanfree2'
CRS-2673: Attempting to stop 'ora.SASDG.dg' on 'sanfree2'
CRS-2677: Stop of 'ora.REDODG.dg' on 'sanfree2' succeeded
CRS-2677: Stop of 'ora.SASDG.dg' on 'sanfree2' succeeded
CRS-2677: Stop of 'ora.OCRVOTE.dg' on 'sanfree2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'sanfree2'
CRS-2677: Stop of 'ora.asm' on 'sanfree2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'sanfree2' has completed
CRS-2677: Stop of 'ora.crsd' on 'sanfree2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'sanfree2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'sanfree2'
CRS-2673: Attempting to stop 'ora.evmd' on 'sanfree2'
CRS-2673: Attempting to stop 'ora.asm' on 'sanfree2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'sanfree2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'sanfree2'
CRS-2677: Stop of 'ora.crf' on 'sanfree2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'sanfree2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'sanfree2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'sanfree2' succeeded
CRS-2677: Stop of 'ora.asm' on 'sanfree2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'sanfree2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'sanfree2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'sanfree2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'sanfree2' succeeded
CRS-2677: Stop of 'ora.cssd' on 'sanfree2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'sanfree2'
CRS-2677: Stop of 'ora.gipcd' on 'sanfree2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'sanfree2'
CRS-2677: Stop of 'ora.gpnpd' on 'sanfree2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'sanfree2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

C. As root on node 1, run:
crsctl delete node -n sanfree2
CRS-4661: Node sanfree2 successfully deleted.

D. As the grid user on node 2, update the local inventory:
su grid
/opt/grid/products/11.2.0/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/grid/products/11.2.0/ "CLUSTER_NODES={sanfree2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 16372 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/grid/oraInventory
'UpdateNodeList' was successful.

E. As the grid user on node 2, run the local deinstall. The tool is interactive; press Enter at each prompt to accept the defaults. Near the end it generates a root script, which must be run from a separate terminal as root.
#/opt/grid/products/11.2.0/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/grid/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/grid/products/11.2.0
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/ogrid
Checking for existence of central inventory location /opt/grid/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: sanfree2
Checking for sufficient temp space availability on node(s) : 'sanfree2'

## [END] Install check configuration ##

Traces log file: /opt/grid/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "sanfree2"[sanfree2-vip]
>

The following information can be collected by running "/sbin/ifconfig -a" on node "sanfree2"
Enter the IP netmask of Virtual IP "10.10.50.5" on node "sanfree2"[255.255.255.0]
>

Enter the network interface name on which the virtual IP address "10.10.50.5" is active
>

Enter an address or the name of the virtual IP[]
>


Network Configuration check config START

Network de-configuration trace file location: /opt/grid/oraInventory/logs/netdc_check2017-10-19_11-31-08-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /opt/grid/oraInventory/logs/asmcadc_check2017-10-19_11-31-09-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:sanfree2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'sanfree2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /opt/grid/products/11.2.0
Inventory Location where the Oracle home registered is: /opt/grid/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: yes
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/grid/oraInventory/logs/deinstall_deconfig2017-10-19_11-30-42-AM.out'
Any error messages from this session will be written to: '/opt/grid/oraInventory/logs/deinstall_deconfig2017-10-19_11-30-42-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/grid/oraInventory/logs/asmcadc_clean2017-10-19_11-31-20-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /opt/grid/oraInventory/logs/netdc_clean2017-10-19_11-31-20-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

De-configuring listener: LISTENER
Stopping listener on node "sanfree2": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
Stopping listener on node "sanfree2": LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "sanfree2".

/tmp/deinstall2017-10-19_11-30-23AM/perl/bin/perl -I/tmp/deinstall2017-10-19_11-30-23AM/perl/lib -I/tmp/deinstall2017-10-19_11-30-23AM/crs/install /tmp/deinstall2017-10-19_11-30-23AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-10-19_11-30-23AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

In the new terminal, as root:
#/tmp/deinstall2017-10-19_11-30-23AM/perl/bin/perl -I/tmp/deinstall2017-10-19_11-30-23AM/perl/lib -I/tmp/deinstall2017-10-19_11-30-23AM/crs/install /tmp/deinstall2017-10-19_11-30-23AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-10-19_11-30-23AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2017-10-19_11-30-23AM/response/deinstall_Ora11g_gridinfrahome1.rsp

****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Back in the deinstall session, after pressing Enter the cleanup continues:
Remove the directory: /tmp/deinstall2017-10-19_11-30-23AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/grid/products/11.2.0' from the central inventory on the local node : Done

Delete directory '/opt/grid/products/11.2.0' on the local node : Done

Failed to delete the directory '/opt/ogrid'. The directory is in use.
Delete directory '/opt/ogrid' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-10-19_11-30-23AM' on node 'sanfree2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "sanfree2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/opt/grid/products/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/opt/grid/products/11.2.0' on the local node.
Failed to delete directory '/opt/ogrid' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

F. As the grid user on node 1, update the inventory so that only node 1 remains in the node list:
#/opt/grid/products/11.2.0/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/grid/products/11.2.0/ "CLUSTER_NODES={sanfree1}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 16345 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/grid/oraInventory
'UpdateNodeList' was successful.

G. On node 1, verify that node 2 was removed successfully:
cluvfy stage -post nodedel -n sanfree2 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "sanfree1"

CRS integrity check passed
Result:
Node removal check passed

Post-check for node removal was successful.

H. After node 2 has been removed successfully, change node 2's hostname to rac2 and update /etc/hosts on both nodes.
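A minimal sketch of the hostname change itself, which is not captured in the session logs; the exact mechanism depends on the OS release, so treat these commands as an assumption and adapt them:
## RHEL/OL 6 and earlier: persist the new name, then apply it to the running system
sed -i 's/^HOSTNAME=.*/HOSTNAME=rac2/' /etc/sysconfig/network
hostname rac2
## RHEL/OL 7 and later:
hostnamectl set-hostname rac2
The updated /etc/hosts on both nodes: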
#cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.50.83 sanfree1
10.10.50.82 rac2
22.22.22.83 sanfree1-priv
22.22.22.82 rac2-priv
10.10.50.6 sanfree1-vip
10.10.50.5 rac2-vip
10.10.50.4 sanfree-scan

I. Add node 2 back into CRS
1. As the grid user on node 1, check whether node 2 meets the node-addition prerequisites:
#cluvfy stage -pre nodeadd -n rac2 -fixup -fixupdir /tmp/ -verbose

Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "sanfree1"
Destination Node Reachable?
------------------------------------ ------------------------
rac2 yes
Result: Node reachability check passed from node "sanfree1"


Checking user equivalence...

Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rac2 failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed


Pre-check for node addition was unsuccessful on all the nodes.

Because the hostname was changed, grid user SSH equivalence between the two nodes has to be re-established. Note that sshUserSetup.sh normally expects the host list as a single quoted argument (e.g. -hosts "sanfree1 rac2"); run unquoted as below, only the first host is picked up, which is why the log shows "Hosts are sanfree1". Make sure equivalence to the renamed node actually works before continuing (see the quick check after the log).
/opt/grid/products/11.2.0/deinstall/sshUserSetup.sh -user grid -hosts sanfree1 rac2 -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2017-10-19-12-09-36.log
Hosts are sanfree1
user is grid
Platform:- Linux
Checking if the remote hosts are reachable
PING sanfree1 (10.10.50.83) 56(84) bytes of data.
64 bytes from sanfree1 (10.10.50.83): icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from sanfree1 (10.10.50.83): icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from sanfree1 (10.10.50.83): icmp_seq=3 ttl=64 time=0.072 ms
64 bytes from sanfree1 (10.10.50.83): icmp_seq=4 ttl=64 time=0.059 ms
64 bytes from sanfree1 (10.10.50.83): icmp_seq=5 ttl=64 time=0.040 ms

--- sanfree1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.040/0.055/0.072/0.012 ms
Remote host reachability check succeeded.
The following hosts are reachable: sanfree1.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost sanfree1
numhosts 1
The script will setup SSH connectivity from the host sanfree1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host sanfree1
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /root/.ssh/config, it would be backed up to /root/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host sanfree1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host sanfree1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host sanfree1.
Warning: Permanently added 'sanfree1,10.10.50.83' (RSA) to the list of known hosts.
grid@sanfree1's password:
Done with creating .ssh directory and setting permissions on remote host sanfree1.
Copying local host public key to the remote host sanfree1
The user may be prompted for a password or passphrase here since the script would be using SCP for host sanfree1.
grid@sanfree1's password:
Done copying local host public key to the remote host sanfree1
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--sanfree1:--
Running /usr/bin/ssh -x -l grid sanfree1 date to verify SSH connectivity has been setup from local host to sanfree1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Thu Oct 19 12:11:37 CST 2017
------------------------------------------------------------------------
SSH verification complete.
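A quick manual check of two-way equivalence (optional, not part of the original log), run as grid; if either direction prompts for a password, fix it before running addNode.sh:
## from sanfree1
ssh rac2 date
## from rac2
ssh sanfree1 date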


2. As the grid user on node 1, run:
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
The pre-add check fails. In a GUI installation these failures can be ignored interactively, but in silent mode they cannot be bypassed directly, so addNode.sh needs a small edit: add a line EXIT_CODE=0 immediately after the EXIT_CODE=$?; that follows the pre-check call, as shown below. Then rerun the command.
The modified section of addNode.sh (the added line is marked):

OHOME=/opt/grid/products/11.2.0
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
        $ADDNODE
        EXIT_CODE=$?;
else
        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
        $CHECK_NODEADD
        EXIT_CODE=$?;
        EXIT_CODE=0        # added: ignore the pre-check result so the add can proceed
        if [ $EXIT_CODE -eq 0 ]
        then
                $ADDNODE
                EXIT_CODE=$?;
        fi
fi
exit $EXIT_CODE ;
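As the IGNORE_PREADDNODE_CHECKS branch in the script above suggests, an alternative to editing the file is to export that variable in the grid user's shell before calling addNode.sh (not used in this session):
export IGNORE_PREADDNODE_CHECKS=Y
With the edit in place, rerun the command: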
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "sanfree1"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/opt/grid/products/11.2.0" is shared
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "22.22.22.0"


Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "10.10.50.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.50.0".
Subnet mask consistency check passed for subnet "22.22.22.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "10.10.50.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.50.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "22.22.22.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "22.22.22.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/opt/grid/products/11.2.0"
Free disk space check passed for "sanfree1:/opt/grid/products/11.2.0"
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "sanfree1:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh"
Check failed on nodes:
rac2,sanfree1
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
PRVF-5410 : Check of common NTP Time Server failed
PRVF-5416 : Query of NTP daemon failed on all nodes
Clock synchronization check using Network Time Protocol(NTP) failed


User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: sanfree1,rac2

File "/etc/resolv.conf" is not consistent across nodes


Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was unsuccessful on all the nodes.
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 16343 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /opt/grid/products/11.2.0
New Nodes
Space Requirements
New Nodes
rac2
/opt: Required 7.46GB : Available 84.60GB
Installed Products
Product Names
Oracle Grid Infrastructure 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.5
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Server) 11.2.0.4.0
Installation Plugin Files 11.2.0.4.0
Universal Storage Manager Files 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Automatic Storage Management Assistant 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.4.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Oracle Net Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Enterprise Manager plugin Common Files 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.4.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.5
Deinstallation Tool 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Cluster Verification Utility Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle LDAP administration 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
Agent Required Support Files 10.2.0.4.5
Parser Generator Required Support Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Net 11.2.0.4.0
PL/SQL 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Cluster Ready Services Files 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Thursday, October 19, 2017 10:20:49 PM CST)
. 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Thursday, October 19, 2017 10:20:54 PM CST)
............................................................................................... 96% Done.
Home copied to new nodes

Saving inventory on nodes (Thursday, October 19, 2017 10:35:34 PM CST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/grid/products/11.2.0/root.sh #On nodes rac2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/grid/products/11.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.

As prompted, run /opt/grid/products/11.2.0/root.sh as root on node 2:
Check /opt/grid/products/11.2.0/install/root_rac2_2017-10-19_23-30-40.log for the output of root script
As instructed, review that log to confirm the script completed successfully:
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /opt/grid/products/11.2.0
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/grid/products/11.2.0/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

3. Check the CRS status
#crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE rac2
ONLINE ONLINE sanfree1
ora.OCRVOTE.dg
ONLINE ONLINE rac2
ONLINE ONLINE sanfree1
ora.REDODG.dg
ONLINE ONLINE rac2
ONLINE ONLINE sanfree1
ora.SASDG.dg
ONLINE ONLINE rac2
ONLINE ONLINE sanfree1
ora.asm
ONLINE ONLINE rac2 Started
ONLINE ONLINE sanfree1 Started
ora.gsd
OFFLINE OFFLINE rac2
OFFLINE OFFLINE sanfree1
ora.net1.network
ONLINE ONLINE rac2
ONLINE ONLINE sanfree1
ora.ons
ONLINE ONLINE rac2
ONLINE ONLINE sanfree1
ora.registry.acfs
ONLINE ONLINE rac2
ONLINE ONLINE sanfree1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE sanfree1
ora.coco.db
1 ONLINE ONLINE sanfree1 Open
2 ONLINE OFFLINE
ora.cvu
1 ONLINE ONLINE sanfree1
ora.oc4j
1 ONLINE ONLINE sanfree1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.sanfree1.vip
1 ONLINE ONLINE sanfree1
ora.scan1.vip
1 ONLINE ONLINE sanfree1

## Remove the instance registered under the old node name
srvctl remove instance -d coco -i coco2 -f -y
## Add the instance back under the new node name
srvctl add instance -d coco -i coco2 -n rac2 -f
## Start the instance on the new node
srvctl start instance -d coco -i coco2
## Check which node each database instance is running on
srvctl status database -d coco
Instance coco1 is running on node sanfree1
Instance coco2 is running on node rac2
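Optionally, the registered configuration can be reviewed as well (a quick sanity check, not part of the original session):
srvctl config database -d coco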


########################################################################


J. Remove node 1
1. This time, when removing node 1, a few additional steps are exercised.
# Stop and remove the database instance on node 1
srvctl stop instance -d coco -i coco1
srvctl remove instance -d coco -i coco1 -f -y

2. Check that the nodes are Active and Unpinned; if a node is pinned, unpin it with crsctl unpin css (as in step 1.A):
olsnodes -s -t
sanfree1 Active Unpinned
rac2 Active Unpinned

3. As root on node 1, run the deconfiguration from the GRID_HOME:
#/opt/grid/products/11.2.0/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /opt/grid/products/11.2.0/crs/install/crsconfig_params
Network exists: 1/10.10.50.0/255.255.255.0/eth0, type static
VIP exists: /rac2-vip/10.10.50.5/10.10.50.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /sanfree1-vip/10.10.50.6/10.10.50.0/255.255.255.0/eth0, hosting node sanfree1
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'sanfree1'
CRS-2677: Stop of 'ora.registry.acfs' on 'sanfree1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'sanfree1'
CRS-2673: Attempting to stop 'ora.crsd' on 'sanfree1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'sanfree1'
CRS-2673: Attempting to stop 'ora.OCRVOTE.dg' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.REDODG.dg' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.SASDG.dg' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'sanfree1'
CRS-2677: Stop of 'ora.REDODG.dg' on 'sanfree1' succeeded
CRS-2677: Stop of 'ora.SASDG.dg' on 'sanfree1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'sanfree1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac2'
CRS-2677: Stop of 'ora.OCRVOTE.dg' on 'sanfree1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'sanfree1'
CRS-2677: Stop of 'ora.asm' on 'sanfree1' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'sanfree1' has completed
CRS-2677: Stop of 'ora.crsd' on 'sanfree1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.crf' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.evmd' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.asm' on 'sanfree1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'sanfree1'
CRS-2677: Stop of 'ora.mdnsd' on 'sanfree1' succeeded
CRS-2677: Stop of 'ora.crf' on 'sanfree1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'sanfree1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'sanfree1' succeeded
CRS-2677: Stop of 'ora.asm' on 'sanfree1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'sanfree1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'sanfree1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'sanfree1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'sanfree1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'sanfree1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'sanfree1'
CRS-2677: Stop of 'ora.gipcd' on 'sanfree1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'sanfree1'
CRS-2677: Stop of 'ora.gpnpd' on 'sanfree1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'sanfree1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

4. As root on node 2, run:
/opt/grid/products/11.2.0/bin/crsctl delete node -n sanfree1
CRS-4661: Node sanfree1 successfully deleted.

5. As the grid user on node 1, update the local inventory:
/opt/grid/products/11.2.0/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/grid/products/11.2.0/ "CLUSTER_NODES={sanfree1}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 16343 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/grid/oraInventory
'UpdateNodeList' was successful.

6. As the grid user on node 1, run the local deinstall. As before, press Enter at each prompt to accept the defaults; at the end a root script is generated, which must be run from a separate terminal as root.
grid@sanfree1:/home/grid>/opt/grid/products/11.2.0/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/grid/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/grid/products/11.2.0
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/ogrid
Checking for existence of central inventory location /opt/grid/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: sanfree1
Checking for sufficient temp space availability on node(s) : 'sanfree1'

## [END] Install check configuration ##

Traces log file: /opt/grid/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "sanfree1"[sanfree1-vip]
>

The following information can be collected by running "/sbin/ifconfig -a" on node "sanfree1"
Enter the IP netmask of Virtual IP "10.10.50.6" on node "sanfree1"[255.255.255.0]
>

Enter the network interface name on which the virtual IP address "10.10.50.6" is active
>

Enter an address or the name of the virtual IP[]
>


Network Configuration check config START

Network de-configuration trace file location: /opt/grid/oraInventory/logs/netdc_check2017-10-20_12-01-37-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /opt/grid/oraInventory/logs/asmcadc_check2017-10-20_12-01-40-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:sanfree1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'sanfree1', and the global configuration will be removed.
Oracle Home selected for deinstall is: /opt/grid/products/11.2.0
Inventory Location where the Oracle home registered is: /opt/grid/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/grid/oraInventory/logs/deinstall_deconfig2017-10-20_12-00-31-AM.out'
Any error messages from this session will be written to: '/opt/grid/oraInventory/logs/deinstall_deconfig2017-10-20_12-00-31-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/grid/oraInventory/logs/asmcadc_clean2017-10-20_12-01-42-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /opt/grid/oraInventory/logs/netdc_clean2017-10-20_12-01-42-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

De-configuring listener: LISTENER
Stopping listener on node "sanfree1": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
Stopping listener on node "sanfree1": LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "sanfree1".

/tmp/deinstall2017-10-20_00-00-18AM/perl/bin/perl -I/tmp/deinstall2017-10-20_00-00-18AM/perl/lib -I/tmp/deinstall2017-10-20_00-00-18AM/crs/install /tmp/deinstall2017-10-20_00-00-18AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-10-20_00-00-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------
In a new terminal, as root:
/tmp/deinstall2017-10-20_00-00-18AM/perl/bin/perl -I/tmp/deinstall2017-10-20_00-00-18AM/perl/lib -I/tmp/deinstall2017-10-20_00-00-18AM/crs/install /tmp/deinstall2017-10-20_00-00-18AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-10-20_00-00-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2017-10-20_00-00-18AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Back in the deinstall session, after pressing Enter the cleanup continues:
Remove the directory: /tmp/deinstall2017-10-20_00-00-18AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/grid/products/11.2.0' from the central inventory on the local node : Done

Delete directory '/opt/grid/products/11.2.0' on the local node : Done

Failed to delete the directory '/opt/ogrid'. The directory is in use.
Delete directory '/opt/ogrid' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2017-10-20_00-00-18AM' on node 'sanfree1'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "sanfree1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/opt/grid/products/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/opt/grid/products/11.2.0' on the local node.
Failed to delete directory '/opt/ogrid' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

7. As the grid user on node 2, update the inventory so that only node 2 remains in the node list:
/opt/grid/products/11.2.0/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/grid/products/11.2.0/ "CLUSTER_NODES={rac2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 16383 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/grid/oraInventory
'UpdateNodeList' was successful.

8. On node 2, verify that node 1 was removed successfully:
cluvfy stage -post nodedel -n sanfree1 -verbose

K. After node 1 has been removed successfully, change node 1's hostname to rac1 (same procedure as in step H) and update /etc/hosts on both nodes:
#cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.50.83 rac1
10.10.50.82 rac2
22.22.22.83 rac1-priv
22.22.22.82 rac2-priv
10.10.50.6 rac1-vip
10.10.50.5 rac2-vip
10.10.50.4 sanfree-scan

L. Add node 1 back into CRS
1. On node 2, set up grid user SSH equivalence (the quoting note on -hosts from step I applies here as well):
/opt/grid/products/11.2.0/deinstall/sshUserSetup.sh -user grid -hosts rac1 rac2 -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2017-10-20-00-51-15.log
Hosts are rac1
user is grid
Platform:- Linux
Checking if the remote hosts are reachable
PING rac1 (10.10.50.83) 56(84) bytes of data.
64 bytes from rac1 (10.10.50.83): icmp_seq=1 ttl=64 time=0.591 ms
64 bytes from rac1 (10.10.50.83): icmp_seq=2 ttl=64 time=0.753 ms
64 bytes from rac1 (10.10.50.83): icmp_seq=3 ttl=64 time=0.643 ms
64 bytes from rac1 (10.10.50.83): icmp_seq=4 ttl=64 time=0.718 ms
64 bytes from rac1 (10.10.50.83): icmp_seq=5 ttl=64 time=0.698 ms

--- rac1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4008ms
rtt min/avg/max/mdev = 0.591/0.680/0.753/0.063 ms
Remote host reachability check succeeded.
The following hosts are reachable: rac1.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost rac1
numhosts 1
The script will setup SSH connectivity from the host rac2 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host rac2
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
y

The user chose y
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /root/.ssh/config, it would be backed up to /root/.ssh/config.backup.
Creating .ssh directory and setting permissions on remote host rac1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host rac1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host rac1.
Warning: Permanently added 'rac1,10.10.50.83' (RSA) to the list of known hosts.
Done with creating .ssh directory and setting permissions on remote host rac1.
Copying local host public key to the remote host rac1
The user may be prompted for a password or passphrase here since the script would be using SCP for host rac1.
Done copying local host public key to the remote host rac1
SSH setup is complete.

2. As the grid user on node 2, check whether node 1 meets the node-addition prerequisites:
cluvfy stage -pre nodeadd -n rac1 -fixup -fixupdir /tmp -verbose
Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac2"
Destination Node Reachable?
------------------------------------ ------------------------
rac1 yes
Result: Node reachability check passed from node "rac2"


Checking user equivalence...

Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rac1 failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed


Pre-check for node addition was unsuccessful on all the nodes.
The pre-check fails again; modify $ORACLE_HOME/oui/bin/addNode.sh the same way as in step I.2, then rerun.

M. Add node 1 into CRS
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac1-vip}"
Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac2"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/opt/grid/products/11.2.0" is shared
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "22.22.22.0"


Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "10.10.50.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.50.0".
Subnet mask consistency check passed for subnet "22.22.22.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "10.10.50.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.50.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "22.22.22.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "22.22.22.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/opt/grid/products/11.2.0"
Free disk space check passed for "rac1:/opt/grid/products/11.2.0"
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "rac1:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh"
Check failed on nodes:
rac2,rac1
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
PRVF-5410 : Check of common NTP Time Server failed
PRVF-5416 : Query of NTP daemon failed on all nodes
Clock synchronization check using Network Time Protocol(NTP) failed


User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2,rac1

File "/etc/resolv.conf" is not consistent across nodes


Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was unsuccessful on all the nodes.
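The failures above (the missing "pdksh" package, the NTP errors PRVF-5410/PRVF-5416, and the PRVF-5636 resolv.conf/DNS timeout) were treated as ignorable in this environment, and the node addition was launched anyway; the root.sh log further down confirms "User ignored Prerequisites during installation". A minimal sketch of the invocation, assuming addNode.sh is run as grid from the existing node rac2's GRID_HOME, and that rac1-vip is the VIP name used for the renamed node (an assumption, substitute the actual VIP host name):

## run as grid on the existing node; IGNORE_PREADDNODE_CHECKS=Y lets addNode.sh proceed despite the failed cluvfy pre-check
export IGNORE_PREADDNODE_CHECKS=Y
cd /opt/grid/products/11.2.0/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac1-vip}"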
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 16383 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.


Performing tests to see whether nodes rac1 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /opt/grid/products/11.2.0
New Nodes
Space Requirements
New Nodes
rac1
/opt: Required 7.57GB : Available 84.52GB
Installed Products
Product Names
Oracle Grid Infrastructure 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.5
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Server) 11.2.0.4.0
Installation Plugin Files 11.2.0.4.0
Universal Storage Manager Files 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Automatic Storage Management Assistant 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.4.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Oracle Net Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Enterprise Manager plugin Common Files 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.4.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.5
Deinstallation Tool 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Cluster Verification Utility Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle LDAP administration 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
Agent Required Support Files 10.2.0.4.5
Parser Generator Required Support Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Net 11.2.0.4.0
PL/SQL 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Cluster Ready Services Files 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Friday, October 20, 2017 1:01:38 AM CST)
. 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Friday, October 20, 2017 1:01:43 AM CST)
..........................................................................................2..... 96% Done.
Home copied to new nodes

Saving inventory on nodes (Friday, October 20, 2017 1:15:22 AM CST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/grid/products/11.2.0/root.sh #On nodes rac1
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/grid/products/11.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.

As prompted, run the script as root on node 1: /opt/grid/products/11.2.0/root.sh, then follow its log:
tail -f /opt/grid/products/11.2.0/install/root_rac1_2017-10-20_01-17-29.log

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/grid/products/11.2.0/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
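
After root.sh finishes on rac1, a quick sanity check (not captured in the original log) confirms the renamed node has rejoined the cluster; run as grid from either node, and both rac1 and rac2 should report Active and Unpinned:

olsnodes -s -t
crsctl stat res -t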

N. Re-register the database instance under the new node name

The database resource in the OCR still maps instance coco1 to the old host name, so remove that registration and add the instance back against rac1:

## remove the instance registration that points at the old node name
srvctl remove instance -d coco -i coco1 -f -y
## add the instance back, mapped to the new node rac1
srvctl add instance -d coco -i coco1 -n rac1 -f
## start the instance on the new node
srvctl start instance -d coco -i coco1
## check which node each instance is running on
srvctl status database -d coco
Instance coco1 is running on node rac1
Instance coco2 is running on node rac2
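
Optionally, dump the database configuration to confirm that both instances are now registered against the new host names; a quick check as the oracle user (output varies by configuration):

srvctl config database -d coco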

O. The ASM instance names on both nodes have changed to +ASM3 and +ASM4

Deleting and re-adding the nodes assigned them new cluster node numbers, so the ASM instances came back as +ASM3 and +ASM4. Fix this by updating the instance name (ORACLE_SID) in the grid user's .bash_profile on each node to match, as sketched below.
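
A minimal sketch of that fix, assuming the grid user's ~/.bash_profile contains the usual ORACLE_SID export; first confirm which ASM instance actually runs on each node, then point ORACLE_SID at it:

## on each node, find the local ASM instance name (e.g. +ASM3 or +ASM4)
ps -ef | grep asm_pmon | grep -v grep
## then in the grid user's ~/.bash_profile on that node, set the matching SID, for example:
export ORACLE_SID=+ASM3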
