Storage Foundation for Oracle® RAC 7.4.1 Configuration and Upgrade Guide - Solaris
Modifying the VCS configuration files on existing nodes
Modify the configuration files on the remaining nodes of the cluster to remove references to the deleted nodes.
Tasks for modifying the cluster configuration files:
- Edit the /etc/llthosts file
- Edit the /etc/gabtab file
- Modify the VCS configuration to remove the node
To edit the /etc/llthosts file
- On each of the existing nodes, edit the /etc/llthosts file to remove lines that contain references to the removed nodes.
For example, if sys5 is the node removed from the cluster, remove the line "2 sys5" from the file:
0 sys1
1 sys2
2 sys5
Change to:
0 sys1
1 sys2
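The llthosts edit above can also be scripted. The sketch below works on a local copy of the file rather than /etc/llthosts itself, and the node name sys5 is the sample value from this procedure:

```shell
# Work on a copy for illustration; on a real node you would back up and
# then edit /etc/llthosts in place.
printf '0 sys1\n1 sys2\n2 sys5\n' > llthosts.copy

# Drop the entry for the departing node by matching its name at the end
# of the line, then replace the original copy.
grep -v ' sys5$' llthosts.copy > llthosts.new
mv llthosts.new llthosts.copy

cat llthosts.copy
```

Writing to a temporary file and moving it into place avoids depending on a sed -i flag, whose syntax differs between Solaris, GNU, and BSD userlands.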
To edit the /etc/gabtab file
- Modify the following command in the /etc/gabtab file to reflect the number of systems after the node is removed:
/sbin/gabconfig -c -nN
where N is the number of remaining nodes in the cluster.
For example, with two nodes remaining, the file resembles:
/sbin/gabconfig -c -n2
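A minimal sketch of scripting the gabtab change, again on a local copy of the file, with a node count of 2 as in the example above:

```shell
# Work on a copy of /etc/gabtab for illustration.
printf '/sbin/gabconfig -c -n3\n' > gabtab.copy

# Rewrite the -nN argument to the new node count. Writing to a new file
# sidesteps the portability quirks of in-place sed editing.
N=2
sed "s/-n[0-9][0-9]*/-n${N}/" gabtab.copy > gabtab.new
mv gabtab.new gabtab.copy

cat gabtab.copy
```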
Modify the VCS configuration file main.cf to remove all references to the deleted node.
Use one of the following methods to modify the configuration:
- Edit the /etc/VRTSvcs/conf/config/main.cf file directly. This method requires application downtime.
- Use the command-line interface. This method allows the applications to remain online on all remaining nodes.
The following procedure uses the command line interface and modifies the sample VCS configuration to remove references to the deleted node. Run the steps in the procedure from one of the existing nodes in the cluster. The procedure allows you to change the VCS configuration while applications remain online on the remaining nodes.
To modify the cluster configuration using the command line interface (CLI)
- Back up the /etc/VRTSvcs/conf/config/main.cf file:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.3node.bak
- Change the cluster configuration to read-write mode:
# haconf -makerw
- Remove the node from the AutoStartList attribute of the service group by specifying the remaining nodes in the desired order:
# hagrp -modify cvm AutoStartList sys1 sys2
- Remove the node from the SystemList attribute of the service group:
# hagrp -modify cvm SystemList -delete sys5
If the system is part of the SystemList of a parent group, it must be deleted from the parent group first.
- Remove the node from the CVMNodeId attribute of the cvm_clus resource:
# hares -modify cvm_clus CVMNodeId -delete sys5
- If other service groups (such as the database service group or the ClusterService group) include the removed node in their configuration, repeat the SystemList and CVMNodeId modifications for each of them.
- Remove the deleted node from the NodeList attribute of all CFS mount resources:
# hares -modify CFSMount NodeList -delete sys5
- Remove the deleted node from the system list of any other service groups that exist on the cluster. For example, to delete the node sys5:
# hagrp -modify crsgrp SystemList -delete sys5
- Remove the deleted node from the cluster system list:
# hasys -delete sys5
- Save the new configuration to disk:
# haconf -dump -makero
- Verify that the node is removed from the VCS configuration:
# grep -i sys5 /etc/VRTSvcs/conf/config/main.cf
If the node is not removed, use the VCS commands as described in this procedure to remove the node.
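The CLI sequence above can be gathered into one reviewable script. The sketch below is a dry run that only prints each command; the node, group, and resource names are the sample values from this procedure, and on a real cluster you would remove the run() wrapper (or make it execute its arguments) only after reviewing the output:

```shell
NODE=sys5                 # node being removed (sample value)
REMAINING="sys1 sys2"     # remaining nodes, in the desired AutoStart order

# Dry run: print each command instead of executing it.
run() { echo "$@"; }

run haconf -makerw
# $REMAINING is intentionally unquoted so each node is a separate argument.
run hagrp -modify cvm AutoStartList $REMAINING
run hagrp -modify cvm SystemList -delete "$NODE"
run hares -modify cvm_clus CVMNodeId -delete "$NODE"
run hares -modify CFSMount NodeList -delete "$NODE"
run hagrp -modify crsgrp SystemList -delete "$NODE"
run hasys -delete "$NODE"
run haconf -dump -makero
```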