Veritas Access Installation Guide

Product(s): Access (7.4)
Platform: Linux
  1. Introducing Veritas Access
    1. About Veritas Access
  2. Licensing in Veritas Access
    1. About Veritas Access product licensing
  3. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Operating system RPM installation requirements and operating system patching
        2. Kernel RPMs that are required to be installed with exact predefined RPM versions
        3. OL kernel RPMs that are required to be installed with exact predefined RPM versions
        4. Required operating system RPMs for OL 7.3
        5. Required operating system RPMs for OL 7.4
        6. Required operating system RPMs for RHEL 7.3
        7. Required operating system RPMs for RHEL 7.4
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Supported NetBackup versions
      6. Supported OpenStack versions
      7. Supported Oracle versions and host operating systems
      8. Supported IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. OpenDedup ports and disabling the iptable rules
      3. CIFS protocols and firewall ports
    4. Maximum configuration limits
  4. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. Connecting the network hardware
    4. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    5. About checking the storage configuration
  5. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  6. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the operating system on the target Veritas Access cluster
      3. Installing the Oracle Linux operating system on the target Veritas Access cluster
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Reconfiguring the Veritas Access cluster name and network
    13. Configuring a KMS server on the Veritas Access cluster
  7. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  8. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in a mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  9. Upgrading Veritas Access and the operating system
    1. Upgrading the operating system and Veritas Access
  10. Upgrading Veritas Access using a rolling upgrade
    1. About rolling upgrades
    2. Supported rolling upgrade paths for upgrades on RHEL and Oracle Linux
    3. Performing a rolling upgrade using the installer
  11. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4 RPMs
      2. Running uninstall from the Veritas Access 7.4 disc
  12. Appendix A. Installation reference
    1. Installation script options
  13. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless SSH
    2. Setting up the SSH and the RSH connections
  14. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access

About rolling upgrades

This release of Veritas Access supports rolling upgrades from Veritas Access 7.3.0.1 and later versions. Rolling upgrades are supported on RHEL 7.3 and 7.4.

A rolling upgrade minimizes service and application downtime for highly available clusters by limiting the upgrade time to the amount of time that it takes to perform a service group failover. Nodes with different product versions can run in one cluster during the upgrade.

The rolling upgrade has two main phases: the installer upgrades the kernel RPMs in phase 1 and the VCS agent RPMs in phase 2. The upgrade is performed on each node individually, one node at a time. You upgrade each slave node first and then the master node. The upgrade process stops all services and resources on the node that is being upgraded, and all services (including the VIP groups) fail over to one of the other nodes in the cluster. During the failover, the clients that are connected to the VIP groups of that node are intermittently interrupted. For the clients that do not time out, service resumes after the VIP groups come online on one of the other nodes.
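For example, from any node that is not being upgraded, you can watch the failover with the standard VCS commands that ship with Veritas Access. This is a minimal sketch; the group name vip_group1 and the node name node02 are hypothetical placeholders for your cluster's actual names:

    # Summarize cluster, node, and service group states
    hastatus -sum

    # Check a specific VIP group (hypothetical name) until it reports
    # ONLINE on one of the surviving nodes (hypothetical name)
    hagrp -state vip_group1 -sys node02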

While the upgrade process runs on the first node, the other nodes of the cluster continue to serve clients. After the first node has been upgraded, the installer restarts the services and resources on that node. After the first node rejoins the cluster, the upgrade process stops the services and resources on the next slave node, and so on, so that all the services and resources remain online and continue to serve clients throughout the upgrade. After the upgrade is complete on all the nodes, the cluster recovers and the services are balanced across the cluster.
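The rolling upgrade itself is driven by the Veritas Access installer from the driver node. The sketch below assumes a -rolling_upgrade option in the style of the procedure described in "Performing a rolling upgrade using the installer"; verify the exact option name against the installation script options appendix:

    # Run from the driver node; the installer upgrades the cluster
    # one node at a time, failing services over before each upgrade
    ./installaccess -rolling_upgrade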

Workflow for the rolling upgrade

A rolling upgrade has two main phases: the installer upgrades the kernel RPMs in phase 1 and the VCS agent-related non-kernel RPMs in phase 2.

  1. Disable I/O fencing before you start the rolling upgrade.

  2. The upgrade process is performed on each node one after another.

  3. In phase 1, the upgrade process is performed on the slave nodes first. The upgrade process stops all services on the node, and the service groups fail over to another node in the cluster.

  4. During the failover process, the clients that are connected to the VIP groups of the nodes are intermittently interrupted. For those clients that do not time out, the service is resumed after the VIP groups come online on one of the other nodes.

  5. During phase 1, the installer upgrades the kernel RPMs on the node while the other nodes continue to serve the clients.

  6. After phase 1 completes on the first slave node, the upgrade starts on the second slave node, and so on. After all the slave nodes are upgraded, the master node is upgraded; all the service groups on the master node fail over to one of the other nodes.

  7. After phase 1 succeeds on a node, check that the recovery task is also complete before you start phase 1 on the next node.

    Note:

    You need to verify that the upgraded node is not out of the cluster by running the vxclustadm nidmap command (see the verification sketch after this list). If the output shows that the node is out of the cluster, wait for the node to join the existing cluster.

  8. During Phase 2 of the rolling upgrade, all remaining RPMs are upgraded on all the nodes of the cluster simultaneously. VCS and VCS-agent packages are upgraded. The kernel drivers are upgraded to the new protocol version. Applications stay online during Phase 2. The High Availability daemon (HAD) stops and starts again.