Storage Foundation for Oracle® RAC 7.4.1 Configuration and Upgrade Guide - Solaris
Configuring VCS service groups manually for container Oracle databases
This section describes the steps to configure the VCS service group manually for container Oracle databases.
See Figure: Service group configuration with the VCS Oracle agent.
The following procedure assumes that you have created the database.
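A sample main.cf excerpt that reflects the completed configuration, along with commands to verify the running service group, is provided at the end of this section for reference.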
To configure the Oracle service group manually for container Oracle databases
- Change the cluster configuration to read-write mode:
# haconf -makerw
- Add the service group to the VCS configuration:
# hagrp -add oradb_grpname
- Modify the attributes of the service group:
# hagrp -modify oradb_grpname Parallel 1
# hagrp -modify oradb_grpname SystemList node_name1 0 node_name2 1
# hagrp -modify oradb_grpname AutoStartList node_name1 node_name2
- Add the CVMVolDg resource for the service group:
# hares -add oradbdg_resname CVMVolDg oradb_grpname
- Modify the attributes of the CVMVolDg resource for the service group:
# hares -modify oradbdg_resname CVMDiskGroup oradb_dgname
# hares -modify oradbdg_resname CVMActivation sw
# hares -modify oradbdg_resname CVMVolume oradb_volname
- Add the CFSMount resource for the service group:
# hares -add oradbmnt_resname CFSMount oradb_grpname
- Modify the attributes of the CFSMount resource for the service group:
# hares -modify oradbmnt_resname MountPoint "oradb_mnt"
# hares -modify oradbmnt_resname BlockDevice \
"/dev/vx/dsk/oradb_dgname/oradb_volname"
- Add the container and pluggable Oracle RAC resources to the service group:
# hares -add cdb_resname Oracle oradb_grpname
# hares -add pdb_resname Oracle oradb_grpname
- Modify the attributes of the container and pluggable Oracle resources for the service group:
For the container Oracle resource:
# hares -modify cdb_resname Owner oracle
# hares -modify cdb_resname Home "db_home"
# hares -modify cdb_resname StartUpOpt SRVCTLSTART
# hares -modify cdb_resname ShutDownOpt SRVCTLSTOP
For the pluggable Oracle resource:
# hares -modify pdb_resname Owner oracle
# hares -modify pdb_resname Home "db_home"
# hares -modify pdb_resname StartUpOpt STARTUP
# hares -modify pdb_resname ShutDownOpt IMMEDIATE
For container databases that are administrator-managed, perform the following steps:
Localize the Sid attribute for the container Oracle resource:
# hares -local cdb_resname Sid
Set the Sid attributes for the container Oracle resource on each system:
# hares -modify cdb_resname Sid oradb_sid_node1 -sys node_name1
# hares -modify cdb_resname Sid oradb_sid_node2 -sys node_name2
For pluggable databases that reside in administrator-managed container databases, perform the following steps:
Localize the Sid attribute for the pluggable Oracle resource:
# hares -local pdb_resname Sid
Set the Sid attributes for the pluggable Oracle resource on each system:
# hares -modify pdb_resname Sid oradb_sid_node1 -sys node_name1
# hares -modify pdb_resname Sid oradb_sid_node2 -sys node_name2
Set the PDBName attribute for the pluggable Oracle database:
# hares -modify pdb_resname PDBName pdbname
For container databases that are policy-managed, perform the following steps:
Modify the attributes of the container Oracle resource for the service group:
# hares -modify cdb_resname DBName db_name
# hares -modify cdb_resname ManagedBy POLICY
Set the Sid attribute to the Sid prefix for the container Oracle resource on all systems:
# hares -modify cdb_resname Sid oradb_sid_prefix
Note:
The Sid prefix is displayed on the confirmation page during database creation. The prefix can also be determined by running the following command:
# grid_home/bin/crsctl status resource ora.db_name.db -f | grep GEN_USR_ORA_INST_NAME@ | tail -1 | sed 's/.*=//' | sed 's/_[0-9]$//'
Set the IntentionalOffline attribute for the resource to 1 and make sure that the health check monitoring is disabled:
# hares -override cdb_resname IntentionalOffline
# hares -modify cdb_resname IntentionalOffline 1
# hares -modify cdb_resname MonitorOption 0
For pluggable databases that reside in policy-managed container databases, perform the following steps:
Modify the attributes of the pluggable Oracle resource for the service group:
# hares -modify pdb_resname DBName db_name
# hares -modify pdb_resname ManagedBy POLICY
Set the Sid attribute to the Sid prefix for the pluggable Oracle resource on all systems:
# hares -modify pdb_resname Sid oradb_sid_prefix
Note:
The Sid prefix is displayed on the confirmation page during database creation. The prefix can also be determined by running the following command:
# grid_home/bin/crsctl status resource ora.db_name.db -f | grep GEN_USR_ORA_INST_NAME@ | tail -1 | sed 's/.*=//' | sed 's/_[0-9]$//'
Set the IntentionalOffline attribute for the resource to 1 and make sure that the health check monitoring is disabled:
# hares -override pdb_resname IntentionalOffline
# hares -modify pdb_resname IntentionalOffline 1
# hares -modify pdb_resname MonitorOption 0
Set the PDBName attribute for the pluggable Oracle database:
# hares -modify pdb_resname PDBName pdbname
- Set the dependency between the pluggable database resources and the corresponding container database resource:
# hares -link pdb_resname cdb_resname
Repeat this step for each pluggable database resource in a container database.
- Repeat the previous three steps (adding the container and pluggable Oracle resources, modifying their attributes, and setting the dependencies between them) for each container database.
- Set the dependencies between the CFSMount resource and the CVMVolDg resource for the Oracle service group:
# hares -link oradbmnt_resname oradbdg_resname
- Set the dependencies between the container Oracle resource and the CFSMount resource for the Oracle service group:
# hares -link cdb_resname oradbmnt_resname
- Create an online local firm dependency between the Oracle database service group (oradb_grpname) and the cvm service group:
# hagrp -link oradb_grpname cvm_grpname online local firm
- Enable the resources of the Oracle service group:
# hagrp -enableresources oradb_grpname
- Change the cluster configuration to the read-only mode:
# haconf -dump -makero
- Bring the Oracle service group online on all the nodes:
# hagrp -online oradb_grpname -any
Note:
For policy-managed databases: When VCS starts, or when the administrator attempts to bring the Oracle resource online, the resource remains offline if the server is not part of the server pool that is associated with the database. If Oracle Grid Infrastructure moves the server out of the server pool, Oracle Grid Infrastructure brings the database offline and the Oracle resource moves to the offline state.
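After the service group is online, a quick status check can confirm that the resources and dependencies came up as intended. The following are standard VCS status commands; oradb_grpname, cdb_resname, and pdb_resname are the same placeholder names that are used in the procedure above.
# hagrp -state oradb_grpname
# hares -state cdb_resname
# hares -state pdb_resname
# hagrp -dep oradb_grpname
# hastatus -sum
The hagrp -dep output should list the online local firm dependency on the cvm service group, and hastatus -sum should report the group as online on every node in its SystemList.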
For more information and instructions on configuring the service groups using the CLI:
See the Cluster Server Administrator's Guide.
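The following illustrative main.cf excerpt sketches roughly what the Oracle service group looks like after this procedure, using an administrator-managed container database with a single pluggable database. All names and values in the excerpt (sys1, sys2, oradb_grp, oradb_voldg, oradb_mnt, oradb_cdb, oradb_pdb, the disk group, volume, mount point, Oracle home, SIDs, and PDB name) are hypothetical placeholders, not values prescribed by this guide; compare the excerpt with the entries in "Sample configuration files" before adapting it to your environment.
group oradb_grp (
    SystemList = { sys1 = 0, sys2 = 1 }
    AutoStartList = { sys1, sys2 }
    Parallel = 1
    )

    // Shared disk group and volume for the database (placeholder names)
    CVMVolDg oradb_voldg (
        CVMDiskGroup = oradb_dg
        CVMActivation = sw
        CVMVolume = { oradb_vol }
        )

    // Cluster file system that hosts the database files
    CFSMount oradb_mnt (
        MountPoint = "/oradb"
        BlockDevice = "/dev/vx/dsk/oradb_dg/oradb_vol"
        )

    // Container database (administrator-managed), with the Sid localized per node
    Oracle oradb_cdb (
        Owner = oracle
        Home = "/u01/app/oracle/dbhome"
        StartUpOpt = SRVCTLSTART
        ShutDownOpt = SRVCTLSTOP
        Sid @sys1 = cdb1
        Sid @sys2 = cdb2
        )

    // Pluggable database; uses the container SIDs plus the PDBName attribute
    Oracle oradb_pdb (
        Owner = oracle
        Home = "/u01/app/oracle/dbhome"
        StartUpOpt = STARTUP
        ShutDownOpt = IMMEDIATE
        Sid @sys1 = cdb1
        Sid @sys2 = cdb2
        PDBName = pdb1
        )

    // Group and resource dependencies created in the procedure
    requires group cvm online local firm
    oradb_mnt requires oradb_voldg
    oradb_cdb requires oradb_mnt
    oradb_pdb requires oradb_cdb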