Veritas Access Installation Guide
- Licensing in Veritas Access
- System requirements
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading the operating system and Veritas Access
- Performing a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
About rolling upgrade
Veritas Access 7.4.2 supports rolling upgrades from Veritas Access 7.3.1 and later versions. Rolling upgrade is supported on RHEL 7.4 and on OL 7 Update 4 (only in RHEL-compatible mode).
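For example, before you start a rolling upgrade you can confirm that each node runs a supported operating system release. This is only a minimal check, and the output shown is illustrative:

    # cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 7.4 (Maipo)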
A rolling upgrade minimizes service and application downtime for highly available clusters by limiting the upgrade time to the amount of time that it takes to perform a service group failover. Nodes with different product versions can run in one cluster.
The rolling upgrade has two main phases: the installer upgrades the kernel RPMs in phase 1 and the VCS agent RPMs in phase 2. The upgrade is performed on each node individually, one node at a time. You upgrade each slave node first and the master node last. The upgrade process stops all services and resources on the node that is being upgraded, and all services (including the VIP groups) fail over to one of the other nodes in the cluster. During the failover, the clients that are connected to the VIP groups of the node are intermittently interrupted. For those clients that do not time out, service resumes after the VIP groups come online on one of the other nodes.
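While a node is being upgraded, you can watch the failover progress from any other node. This is a minimal sketch that assumes the standard VCS command-line utilities shipped with Veritas Access are in your path:

    # hastatus -sum
    # hagrp -state

The first command summarizes the system and service group states across the cluster; the second shows the state of each service group on each node, so you can confirm that the VIP groups have come online on a surviving node.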
While the upgrade process runs on the first node, the other nodes of the cluster continue to serve the clients. After the first node has been upgraded, the upgrade process restarts the services and resources on that node. After the first node comes back up, the upgrade process stops the services and resources on the next slave node, and so on. All services and resources remain online and serve clients while the rolling upgrade proceeds on the remaining nodes. After the upgrade is complete on the remaining nodes, the cluster recovers and services are balanced across the cluster.
A rolling upgrade has two main phases: the installer upgrades the kernel RPMs in phase 1 and the VCS agent-related non-kernel RPMs in phase 2.
The upgrade process is performed on each node one after another.
In phase 1, the upgrade is performed first on the slave node(s) and then on the master node. The upgrade process stops all services on the node and fails over the service groups to another node in the cluster.
During the failover process, the clients that are connected to the VIP groups of the nodes are intermittently interrupted. For those clients that do not time out, the service is resumed after the VIP groups become online on one of the nodes.
The installer upgrades the kernel RPMs on the node. The other nodes continue to serve the clients.
After phase 1 is complete on the first slave node, the upgrade starts on the second slave node, and so on. After the slave nodes are upgraded, the master node is upgraded, and all of its service groups fail over to another node.
After phase 1 completes successfully on a node, check that the recovery task is also complete before you start phase 1 on the next node.
Note:
Make sure that the upgraded node is not out of the cluster. If the node is out of the cluster, wait for the node to join the existing cluster before you continue.
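For example, you can verify from any node that the upgraded node has rejoined the cluster before you continue. This sketch assumes the VCS utilities are available; every node should report a SysState of RUNNING:

    # hasys -state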
During phase 2 of the rolling upgrade, all remaining RPMs are upgraded on all the nodes of the cluster simultaneously. The VCS and VCS agent packages are upgraded, and the kernel drivers are upgraded to the new protocol version. Applications stay online during phase 2. The High Availability Daemon (HAD) stops and starts again.
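After phase 2 completes, one way to confirm that the non-kernel packages were upgraded everywhere is to list the installed Veritas RPMs on each node and compare the versions. This is only a sketch; the exact package names vary by release:

    # rpm -qa | grep -i VRTS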