Veritas Access Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access and operating system
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
About rolling upgrades
This release of Veritas Access supports rolling upgrades from Veritas Access 7.3.0.1 and later versions. Rolling upgrades are supported on RHEL 7.3 and 7.4.
A rolling upgrade minimizes service and application downtime for highly available clusters by limiting the upgrade time to the amount of time that it takes to perform a service group failover. Nodes with different product versions can run in one cluster during the upgrade.
The rolling upgrade has two main phases: the installer upgrades the kernel RPMs in phase 1 and the VCS agent-related non-kernel RPMs in phase 2. The upgrade is performed on each node individually, one node at a time: first on each slave node, and then on the master node. The upgrade process stops all services and resources on the node that is being upgraded, and all services (including the VIP groups) fail over to one of the other nodes in the cluster. During the failover, the clients that are connected to the VIP groups of that node are intermittently interrupted. For clients that do not time out, service resumes after the VIP groups come online on the failover node.
While the upgrade process runs on the first node, the other nodes of the cluster continue to serve clients. After the first node is upgraded, the installer restarts the services and resources on that node. After the first node comes back up, the upgrade process stops the services and resources on the next slave node, and so on. Throughout the process, the remaining services and resources stay online and serve clients. After the upgrade completes on all nodes, the cluster recovers and services are balanced across the cluster.
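The node-by-node ordering described above can be sketched as a small shell loop. The node names are hypothetical and the loop only illustrates the ordering; the actual upgrade of each node is driven by the Veritas Access installer (see the release notes for the exact invocation).

```shell
# Illustrative sketch of phase 1 ordering: slave nodes first, master last.
# Node names are hypothetical; the real upgrade of each node is performed
# by the Veritas Access installer, not by this loop.
master="node_01"
slaves="node_02 node_03"

upgrade_order=""
for n in $slaves $master; do
    # Here you would run the installer against node $n and wait for the
    # node to rejoin the cluster before continuing to the next node.
    upgrade_order="$upgrade_order $n"
done
echo "Phase 1 upgrade order:$upgrade_order"
```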
A rolling upgrade has two main phases: in phase 1, the installer upgrades the kernel RPMs; in phase 2, it upgrades the VCS agent-related non-kernel RPMs.
Disable I/O fencing before you start the rolling upgrade.
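As a hedged sketch, I/O fencing is typically controlled through the /etc/vxfenmode file on each node, and setting vxfen_mode to disabled (along with stopping the fencing service) turns it off. The snippet below edits a local copy of the file for illustration only; verify the exact procedure for your release in the product documentation.

```shell
# Sketch: switch the fencing mode to "disabled" in a vxfenmode file.
# A local copy is edited here; on a live node the file is /etc/vxfenmode,
# and the vxfen service must also be stopped (an assumption to verify
# against your product documentation before relying on it).
vxfenmode_file=./vxfenmode              # on a real node: /etc/vxfenmode
echo "vxfen_mode=scsi3" > "$vxfenmode_file"

# Flip the mode from its current value to "disabled".
sed -i 's/^vxfen_mode=.*/vxfen_mode=disabled/' "$vxfenmode_file"
cat "$vxfenmode_file"
```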
The upgrade process is performed on each node one after another.
In phase 1, the slave nodes are upgraded first. The upgrade process stops all services on the node, and the service groups fail over to another node in the cluster.
During the failover process, the clients that are connected to the VIP groups of the node are intermittently interrupted. For clients that do not time out, service resumes after the VIP groups come online on one of the other nodes.
During phase 1, the installer upgrades the kernel RPMs on the node while the other nodes continue to serve clients.
After phase 1 completes on the first slave node, the upgrade starts on the second slave node, and so on. When the master node is upgraded, all the service groups from the master node fail over to another node.
After phase 1 succeeds on a node, verify that the recovery task is also complete before you start phase 1 on the next node.
Note:
Verify that the upgraded node has rejoined the cluster by running the vxclustadm nidmap command. If the output shows that the node is out of the cluster, wait for the node to join the existing cluster.
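The membership check from the note can be scripted by parsing the vxclustadm nidmap output. The sample output below is hard-coded for illustration, and the column layout is an assumption based on typical nidmap output; on a live node you would capture the real output with nidmap_output=$(vxclustadm nidmap).

```shell
# Sketch: confirm a node's state from `vxclustadm nidmap` output.
# Sample output is inlined here; the column layout is an assumption --
# adjust the awk fields to match the output of your release.
nidmap_output="Name      CVM Nid   CM Nid    State
node_01   0         0         Joined: Master
node_02   1         1         Joined: Slave"

node="node_02"
# Print the State columns for the matching node name.
state=$(echo "$nidmap_output" | awk -v n="$node" '$1 == n {print $4, $5}')
echo "$node state: $state"
```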
During phase 2 of the rolling upgrade, all remaining RPMs are upgraded on all nodes of the cluster simultaneously. The VCS and VCS-agent packages are upgraded, and the kernel drivers are upgraded to the new protocol version. Applications stay online during phase 2, but the High Availability daemon (HAD) stops and starts again.