Storage Foundation for Oracle® RAC 7.4.1 Configuration and Upgrade Guide - Solaris
Creating Oracle Clusterware/Grid Infrastructure and Oracle database home directories on the new node
The Oracle Clusterware/Grid Infrastructure and Oracle database home directories must reside on the same storage as the corresponding directories on the existing nodes.
Depending on the storage in the existing cluster, use one of the following options to create the directories:
- Local file system: see “To create the directories on the local file system”.
- Cluster File System: see “To create the file system and directories on cluster file system for Oracle Clusterware and Oracle database”.
To create the directories on the local file system
- Log in as the root user on the node.
- Create a local file system and mount it using one of the following methods:
Using native operating system commands:
For instructions, see the operating system documentation.
Using Veritas File System (VxFS) commands:
As the root user, create a VxVM local disk group on each node.
# vxdg init vxvm_dg disk_name
Here, vxvm_dg is the name of the disk group and disk_name is the disk to add to it.
Create separate volumes for Oracle Clusterware/Oracle Grid Infrastructure binaries and Oracle binaries.
# vxassist -g vxvm_dg make clus_volname size
# vxassist -g vxvm_dg make ora_volname size
Create file systems on the volumes.
# mkfs -F vxfs /dev/vx/rdsk/vxvm_dg/clus_volname
# mkfs -F vxfs /dev/vx/rdsk/vxvm_dg/ora_volname
Mount the file systems.
# mount -F vxfs /dev/vx/dsk/vxvm_dg/clus_volname clus_home
# mount -F vxfs /dev/vx/dsk/vxvm_dg/ora_volname oracle_home
- Create the directories for Oracle RAC.
# mkdir -p grid_base
# mkdir -p clus_home
# mkdir -p oracle_base
# mkdir -p oracle_home
- Set appropriate ownership and permissions for the directories.
# chown -R grid:oinstall grid_base
# chmod -R 775 grid_base
# chown -R grid:oinstall clus_home
# chmod -R 775 clus_home
# chown -R oracle:oinstall oracle_base
# chmod -R 775 oracle_base
# chown -R oracle:oinstall oracle_home
# chmod -R 775 oracle_home
- Add the resources to the VCS configuration.
See “To add the storage resources created on VxFS to the VCS configuration”.
- Repeat all the steps on each new node of the cluster.
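For illustration only, here is what the local file system procedure above might look like on a new node, using hypothetical values: a disk c1t2d0, a disk group named orabindg, 50 GB volumes, /u01/app/grid and /u01/app/oracle as the base directories, and /u01/crshome and /u01/orahome as the home directories. None of these names are required; substitute the values used on your existing nodes.
# vxdg init orabindg c1t2d0
# vxassist -g orabindg make crsbinvol 50g
# vxassist -g orabindg make orabinvol 50g
# mkfs -F vxfs /dev/vx/rdsk/orabindg/crsbinvol
# mkfs -F vxfs /dev/vx/rdsk/orabindg/orabinvol
# mkdir -p /u01/app/grid /u01/app/oracle /u01/crshome /u01/orahome
# mount -F vxfs /dev/vx/dsk/orabindg/crsbinvol /u01/crshome
# mount -F vxfs /dev/vx/dsk/orabindg/orabinvol /u01/orahome
# chown -R grid:oinstall /u01/app/grid /u01/crshome
# chown -R oracle:oinstall /u01/app/oracle /u01/orahome
# chmod -R 775 /u01/app/grid /u01/crshome /u01/app/oracle /u01/orahome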
To add the storage resources created on VxFS to the VCS configuration
- Change the permissions on the VCS configuration to read-write mode:
# haconf -makerw
- Configure the VxVM volumes under VCS:
# hares -add dg_resname DiskGroup cvm
# hares -modify dg_resname DiskGroup vxvm_dg -sys nodenew_name
# hares -modify dg_resname Enabled 1
- Set up the file system under VCS:
# hares -add clusbin_mnt_resname Mount cvm
# hares -modify clusbin_mnt_resname MountPoint "clus_home"
# hares -modify clusbin_mnt_resname BlockDevice "/dev/vx/dsk/vxvm_dg/clus_volname" -sys nodenew_name
# hares -modify clusbin_mnt_resname FSType vxfs
# hares -modify clusbin_mnt_resname FsckOpt "-n"
# hares -modify clusbin_mnt_resname Enabled 1
# hares -add orabin_mnt_resname Mount cvm
# hares -modify orabin_mnt_resname MountPoint "oracle_home"
# hares -modify orabin_mnt_resname BlockDevice "/dev/vx/dsk/vxvm_dg/ora_volname" -sys nodenew_name
# hares -modify orabin_mnt_resname FSType vxfs
# hares -modify orabin_mnt_resname FsckOpt "-n"
# hares -modify orabin_mnt_resname Enabled 1
- Link the parent and child resources:
# hares -link clusbin_mnt_resname dg_resname
# hares -link orabin_mnt_resname dg_resname
- Repeat all the steps on each new node of the cluster.
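For illustration, the VCS commands above might look like the following with hypothetical names: a new node sys5, the disk group and mount points from the earlier sketch, and resource names orabin_dg, crsbin_mnt, and orabin_mnt. Adjust every name to match your cluster; when you have finished, you would typically save the configuration and return it to read-only mode with haconf -dump -makero.
# haconf -makerw
# hares -add orabin_dg DiskGroup cvm
# hares -modify orabin_dg DiskGroup orabindg -sys sys5
# hares -modify orabin_dg Enabled 1
# hares -add crsbin_mnt Mount cvm
# hares -modify crsbin_mnt MountPoint "/u01/crshome"
# hares -modify crsbin_mnt BlockDevice "/dev/vx/dsk/orabindg/crsbinvol" -sys sys5
# hares -modify crsbin_mnt FSType vxfs
# hares -modify crsbin_mnt FsckOpt "-n"
# hares -modify crsbin_mnt Enabled 1
# hares -add orabin_mnt Mount cvm
# hares -modify orabin_mnt MountPoint "/u01/orahome"
# hares -modify orabin_mnt BlockDevice "/dev/vx/dsk/orabindg/orabinvol" -sys sys5
# hares -modify orabin_mnt FSType vxfs
# hares -modify orabin_mnt FsckOpt "-n"
# hares -modify orabin_mnt Enabled 1
# hares -link crsbin_mnt orabin_dg
# hares -link orabin_mnt orabin_dg
# haconf -dump -makero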
To create the file system and directories on cluster file system for Oracle Clusterware and Oracle database
Perform the following steps on the CVM master node in the cluster.
- Create the Oracle base directory, clusterware home directory, and the Oracle home directory.
# mkdir -p oracle_base
# mkdir -p oracle_home
# mkdir -p clus_home
# mkdir -p grid_base
- Mount the file systems. Perform this step on each new node.
# mount -F vxfs -o cluster /dev/vx/dsk/cvm_dg/clus_volname clus_home
# mount -F vxfs -o cluster /dev/vx/dsk/cvm_dg/ora_volname oracle_home
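If you are not sure which node is the current CVM master, you can check with the vxdctl utility; its output indicates whether the node is the MASTER or a SLAVE. As a sketch of the mount step with hypothetical values (a shared disk group bincfsdg, volumes crsbinvol and orabinvol, and the mount points /u01/crshome and /u01/orahome used in the earlier examples), the commands on each new node might look like this; use the disk group, volume, and mount point names from your existing cluster.
# vxdctl -c mode
# mkdir -p /u01/crshome /u01/orahome
# mount -F vxfs -o cluster /dev/vx/dsk/bincfsdg/crsbinvol /u01/crshome
# mount -F vxfs -o cluster /dev/vx/dsk/bincfsdg/orabinvol /u01/orahome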