Veritas Access Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access and operating system
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
Performing a rolling upgrade using the installer
Before you start a rolling upgrade, make sure that the Cluster Server (VCS) is running on all the nodes of the cluster.
Stop all activity on the VxVM volumes that are not under VCS control. For example, stop any applications, such as databases, that access the volumes, and unmount any file systems that have been created on the volumes. Then stop all the volumes.
Unmount all the VxFS file systems that are not under VCS control.
To perform a rolling upgrade
- For an LTR-configured Veritas Access cluster, make sure that the backup or restore jobs from NetBackup are stopped.
- Phase 1 of a rolling upgrade begins on the first subcluster. Complete the preparatory steps on the first subcluster.
Unmount all VxFS file systems not under VCS control:
# umount mount_point
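The unmount step lends itself to a small helper. This is a sketch, not part of the product: it assumes that every VxFS entry reported by mount(8) is outside VCS control, so review the list before unmounting anything.

```shell
# vxfs_mount_points: print the mount-point column from `mount -t vxfs` output,
# where each line looks like: /dev/vx/dsk/dg/vol on /mnt/fs1 type vxfs (rw)
vxfs_mount_points() {
  awk '{print $3}'
}

# Review the list first, then unmount each mount point as root:
#   mount -t vxfs | vxfs_mount_points | while read -r mp; do umount "$mp"; done
```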
- Complete the updates to the OS, if required.
Make sure that the existing version of Veritas Access supports the OS updates that you apply. If the existing version of Veritas Access does not support the OS update, first upgrade Veritas Access to a version that supports the OS update.
For more information, see the RHEL OS documentation.
Switch the applications to the remaining subcluster and upgrade the OS of the first subcluster.
The nodes are restarted after the OS updates are completed.
- If a cache area is online, you must take the cache area offline before you upgrade the VxVM RPMs. Use the following command to take the cache area offline:
# sfcache offline cachename
- Disable I/O fencing before you perform the rolling upgrade by using the storage fencing off command.
- Log on as the root user and mount the Veritas Access 7.4 installation media.
- From the root directory of the installation media, start the installer.
# ./installaccess -rolling_upgrade
- The installer checks system communications, release compatibility, version information, and lists the cluster name, ID, and cluster nodes. The installer asks for permission to proceed with the rolling upgrade.
Would you like to perform rolling upgrade on the cluster? [y,n,q] (y)
Type y to continue.
- Phase 1 of the rolling upgrade begins. Phase 1 must be performed on one node at a time. The installer asks for the system name.
Enter the system names separated by spaces on which you want to perform rolling upgrade: [q,?]
Enter the name or IP address of one of the slave nodes on which you want to perform the rolling upgrade.
- The installer performs further prechecks on the nodes in the cluster and may present warnings. You can type y to continue, or quit the installer and address the precheck warnings.
- If the boot disk is encapsulated and mirrored, you can create a backup boot disk.
If you choose to create a backup boot disk, type y. Provide a backup name for the boot disk group or accept the default name. The installer then creates a backup copy of the boot disk group.
- After the installer detects the online service groups, the installer prompts the user to do one of the following:
Manually switch service groups
Use the CPI to automatically switch service groups
The downtime is the time that it takes for the failover of the service group.
Note:
Veritas recommends that you manually switch the service groups. Automatic switching of service groups does not resolve dependency issues.
- The installer prompts you to stop the applicable processes. Type y to continue.
The installer evacuates all service groups to the node or nodes that are not upgraded at this time. The installer stops parallel service groups on the nodes that are to be upgraded.
The installer stops all the related processes, uninstalls the old kernel RPMs, and installs the new RPMs.
- The installer performs the upgrade configuration and starts the processes. If the boot disk is encapsulated before the upgrade, the installer prompts you to restart the node after performing the upgrade configuration.
- Complete the preparatory steps on the nodes that you have not yet upgraded.
Unmount all the VxFS file systems that are not under VCS control on all the nodes.
# umount mount_point
- If OS updates are not required, skip this step and go to step 4.
Otherwise, complete the OS updates on the nodes that you have not yet upgraded. For instructions, see the RHEL OS documentation.
- Phase 1 of the rolling upgrade is complete for the first node. You can start phase 1 of the upgrade for the next slave node. The installer asks for the system name again.
Before you start phase 1 of the upgrade for the next node, check whether any recovery task is in progress. It may take a few minutes for a recovery task to start.
On the master node, enter the following command:
# vxtask list
Check if any of the following keywords are present: ECREBUILD/ATCOPY/ATCPY/PLXATT/VXRECOVER/RESYNC/RECOV
If any recovery task is in progress, wait for the task to complete, and then start the upgrade of phase 1 for the next node.
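The recovery-task check can be wrapped in a small helper that waits until no recovery keyword appears in the task listing. This is a sketch: the keyword list comes from this step, and the 60-second polling interval is an arbitrary choice.

```shell
# has_recovery_task: return success (0) if the given `vxtask list` output
# contains any of the recovery keywords from this procedure.
has_recovery_task() {
  echo "$1" | grep -Eq 'ECREBUILD|ATCOPY|ATCPY|PLXATT|VXRECOVER|RESYNC|RECOV'
}

# On the master node, wait until no recovery task remains (run as root):
#   while has_recovery_task "$(vxtask list)"; do sleep 60; done
```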
- After the upgrade of phase 1 is done on the node, make sure that the node is not out of the cluster.
Enter the following command:
# vxclustadm nidmap
If the upgraded node is out of the cluster, wait for the node to join the cluster before you start the upgrade of phase 1 for the next node.
- Set up all cache areas as offline on the remaining node or nodes:
# sfcache offline cachename
The installer asks for a node name on which the upgrade is to be performed.
- Type the system names on which you want to perform the rolling upgrade.
Enter the system names separated by spaces on which you want to perform rolling upgrade: [q,?]
- Type the cluster node name.
Type cluster node name or q to quit.
The installer repeats step 9 through step 14.
For clusters with a larger number of nodes, this process may repeat several times. Service groups come down and are brought up to accommodate the upgrade.
- When phase 1 of the rolling upgrade completes, manually mount all the VxFS file systems that are not under VCS control. Begin phase 2 of the upgrade. Phase 2 of the upgrade includes downtime for the VCS engine (HAD), but does not include application downtime. Type y to continue. Phase 2 of the rolling upgrade begins here.
- The installer determines the remaining RPMs to upgrade. Type y to continue.
- The installer stops Cluster Server (VCS) processes but the applications continue to run. Type y to continue.
The installer performs a prestop, uninstalls the old RPMs, and installs the new RPMs. It performs post-installation tasks and the configuration for the upgrade.
- If you have a network connection to the Internet, the installer checks for updates.
If any updates are discovered, you can apply them now.
- Verify the cluster's status:
# hastatus -sum
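As a quick sanity check on the summary, you can flag faulted entries automatically. This is a sketch: it only greps the output for the FAULTED state that appears in hastatus summaries, and does not replace reviewing the full output.

```shell
# any_faulted: return success (0) if the given hastatus summary output
# contains a FAULTED group or resource state.
any_faulted() {
  echo "$1" | grep -q 'FAULTED'
}

# Example (run as root on a cluster node):
#   if any_faulted "$(hastatus -sum)"; then echo "Investigate faulted groups"; fi
```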
- Post-upgrade steps only for the LTR-configured Veritas Access cluster:
Take offline all the OpenDedup volumes by using the following command:
cluster2> opendedup volume offline <vol-name>
Update all the OpenDedup configuration files (/etc/sdfs/<vol-name>-volume-cfg.xml) by adding the following parameter to the extended-config tag:
dist-layout="false"
Note:
Do not add this parameter for existing OpenDedup volumes because they might contain data that uses the default layout; using it with existing OpenDedup volumes can result in data corruption.
Bring online all the OpenDedup volumes by using the following command:
cluster2> opendedup volume online <vol-name>