Veritas Access Installation Guide
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading the operating system and Veritas Access
- Performing a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
Performing a rolling upgrade using the installer
Note:
See the "Known issues> Upgrade issues" section of the Veritas Access Release Notes before starting the rolling upgrade.
Before you start a rolling upgrade, make sure that the Veritas Cluster Server (VCS) is running on all the nodes of the cluster.
Stop all activity for all the VxVM volumes that are not under VCS control. For example, stop any applications such as databases that access the volumes, and unmount any file systems that have been created on the volumes. Then stop all the volumes.
Unmount all the VxFS file systems that are not under VCS control.
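The following commands are a minimal sketch of this preparation, assuming a hypothetical disk group dg1 and a mount point /mnt/fs1 that are not under VCS control; substitute the names from your environment.
Confirm that VCS is running on all the nodes:
# hastatus -sum
List the mounted VxFS file systems and unmount the ones that are not under VCS control:
# mount -t vxfs
# umount /mnt/fs1
Stop all the volumes in the disk group:
# vxvol -g dg1 stopall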
Note:
The Veritas Access GUI is not accessible from the time that you start the rolling upgrade on the master node until the rolling upgrade is complete.
Note:
It is recommended that during the rolling upgrade, you use only the list and show commands in the Veritas Access command-line interface. Other commands such as create, destroy, add, and remove may update the Veritas Access configuration, which is not recommended during a rolling upgrade.
To perform a rolling upgrade
- For an LTR-configured Veritas Access cluster, make sure that the backup or restore jobs from NetBackup are stopped.
- Phase 1 of a rolling upgrade begins on the second subcluster. Complete the preparatory steps on the second subcluster.
Unmount all VxFS file systems not under VCS control:
# umount mount_point
- Complete updates to the operating system, if required.
Make sure that the existing version of Veritas Access supports the operating system update you apply. If the existing version of Veritas Access does not support the operating system update, first upgrade Veritas Access to a version that supports the operating system update.
For instructions, see the Red Hat Enterprise Linux (RHEL) operating system documentation.
Switch applications to the remaining subcluster and upgrade the operating system of the first subcluster.
The nodes are restarted after the operating system update.
- If a cache area is online, you must take the cache area offline before you upgrade the VxVM RPMs. Use the following command to take the cache area offline:
# sfcache offline cachename
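If you are not sure which cache areas exist or whether they are online, you can list them first. This is a sketch; the output depends on your SmartIO configuration.
# sfcache list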
- Log on as superuser and mount the Veritas Access 7.4.2 installation media.
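If the installation media is an ISO image, one common approach (the image path and mount point shown here are placeholders) is to loop-mount the image and then change to the mount point:
# mount -o ro,loop /path/to/Veritas_Access_7.4.2.iso /mnt/access
# cd /mnt/access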
- From the root directory of the installation media, start the installer.
# ./installaccess -rolling_upgrade
- The installer checks system communications, release compatibility, version information, and lists the cluster name, ID, and cluster nodes. The installer asks for permission to proceed with the rolling upgrade.
Would you like to perform rolling upgrade on the cluster? [y,n,q] (y)
Type y to continue.
- Phase 1 of the rolling upgrade begins. Phase 1 must be performed on one node at a time. The installer asks for the system name.
Enter the system names separated by spaces on which you want to perform rolling upgrade: [q?]
Enter the name or IP address of one of the slave nodes on which you want to perform the rolling upgrade.
- The installer performs further prechecks on the nodes in the cluster and may present warnings. You can type y to continue, or quit the installer and address the precheck warnings.
- If the boot disk is encapsulated and mirrored, you can create a backup boot disk.
If you choose to create a backup boot disk, type y. Provide a backup name for the boot disk group or accept the default name. The installer then creates a backup copy of the boot disk group.
- After the installer detects the online service groups, the installer prompts you to do one of the following:
Manually switch service groups
Use the CPI to automatically switch service groups
The downtime is the time that it takes for the failover of the service group.
Note:
Veritas recommends that you manually switch the service groups. Automatic switching of service groups does not resolve dependency issues.
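If you choose to switch the service groups manually, the following is a sketch of the typical VCS commands; the service group and target system names are placeholders for the names in your cluster.
# hagrp -state
# hagrp -switch service_group -to target_system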
- The installer prompts you to stop the applicable processes. Type y to continue.
The installer evacuates all service groups to the node or nodes that are not upgraded at this time. The installer stops parallel service groups on the nodes that are to be upgraded.
The installer stops all the related processes, uninstalls the old kernel RPMs, and installs the new RPMs.
- The installer performs the upgrade configuration and starts the processes. If the boot disk is encapsulated before the upgrade, the installer prompts you to restart the node after performing the upgrade configuration.
- Complete the preparatory steps on the nodes that you have not yet upgraded.
Unmount all the VxFS file systems that are not under VCS control on all the nodes:
# umount mount_point
- If operating system updates are not required, skip this step and go to step 16.
Otherwise, complete the updates to the operating system on the nodes that you have not yet upgraded. For instructions, see the Red Hat Enterprise Linux (RHEL) operating system documentation.
- After phase 1 of the upgrade is done on the node, make sure that the node has not left the cluster.
Enter the following command:
# vxclustadm nidmap
If the upgraded node is out of the cluster, wait for the node to join the cluster before you start phase 1 of the upgrade for the next node.
- Phase 1 of the rolling upgrade is complete for the first node. You can start phase 1 of the upgrade for the next slave node. The installer again asks for the system name.
Before you start phase 1 of the rolling upgrade for the next node, check whether any recovery task is still in progress and wait for it to complete.
On the master node, enter the following command:
# vxtask list
Check whether any of the following keywords are present: ECREBUILD/ATCOPY/ATCPY/PLXATT/VXRECOVER/RESYNC/RECOV
If any recovery task is in progress, wait for the task to complete, and then start phase 1 of the upgrade for the next node.
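As a convenience, you can script the wait on the master node. The following one-liner is a minimal sketch that polls until none of the recovery keywords appear in the vxtask output; the 30-second interval is arbitrary.
# while vxtask list | grep -Eq 'ECREBUILD|ATCOPY|ATCPY|PLXATT|VXRECOVER|RESYNC|RECOV'; do sleep 30; done
You can then confirm that the upgraded node has rejoined the cluster:
# vxclustadm nidmap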
- Take all the cache areas offline on the remaining node or nodes:
# sfcache offline cachename
The installer asks for the name of the node on which the upgrade is to be performed.
- Enter the system names separated by spaces on which you want to perform rolling upgrade: [q,?]
Type the cluster node name or q to quit.
The installer repeats step 8 through step 13.
For clusters with a larger number of nodes, this process may repeat several times. Service groups come down and are brought up to accommodate the upgrade.
- When phase 1 of the rolling upgrade completes, mount all the VxFS file systems that are not under VCS control manually.
Before you start phase 2 of rolling upgrade, make sure that all the nodes have joined the cluster and all recovery tasks are complete.
Begin phase 2 of the upgrade. Phase 2 of the upgrade includes downtime for the VCS engine (HAD), but it does not include application downtime. Type y to continue. Phase 2 of the rolling upgrade begins here.
- The installer determines the remaining RPMs to upgrade. Type y to continue.
- The installer stops the Veritas Cluster Server (VCS) processes but the applications continue to run. Type y to continue.
The installer performs a prestop, uninstalls the old RPMs, and installs the new RPMs. It performs post-installation tasks, and the configuration for the upgrade.
- If you have a network connection to the Internet, the installer checks for updates.
If updates are discovered, you can apply them now.
- Verify the cluster's status:
# hastatus -sum
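Beyond the summary status, you may also want to confirm that all the nodes have rejoined the cluster and that the service groups are in the expected states. This is a sketch; interpret the output against your own cluster configuration.
# vxclustadm nidmap
# hagrp -state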
- Post-upgrade steps only for the LTR-configured Veritas Access cluster:
Offline all the OpenDedup volumes by using the following command:
cluster2> opendedup volume offline <vol-name>
Update all the OpenDedup configuration files (/etc/sdfs/<vol-name>-volume-cfg.xml) by adding the following parameter to the <extended-config> tag (see the example fragment after these steps):
dist-layout="false"
Note:
Do not add this parameter to existing OpenDedup volumes because they may contain data that was written with the default layout. Adding it to existing OpenDedup volumes may result in data corruption.
Online all the OpenDedup volumes by using the following command:
cluster2> opendedup volume online <vol-name>
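For illustration only, the edited volume configuration file contains a fragment like the following. The dist-layout attribute is the only value taken from this procedure; the rest of the tag is a placeholder for whatever already exists in your /etc/sdfs/<vol-name>-volume-cfg.xml file.
<extended-config ...existing attributes unchanged... dist-layout="false" />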