Veritas Access Installation Guide

Last Published:
Product(s): Access (7.4.2)
Platform: Linux
  1. Licensing in Veritas Access
    1. About Veritas Access product licensing
    2. Per-TB licensing model
    3. TB-Per-Core licensing model
    4. Per-Core licensing model
    5. Add-on license for using Veritas Data Deduplication
    6. Notes and functional enforcements for licensing
  2. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Required operating system RPMs and patches
        2. Required kernel RPMs
        3. Required Oracle Linux kernel RPMs
        4. Required operating system RPMs for OL 7.4
        5. Required operating system RPMs for OL 7.5
        6. Required operating system RPMs for RHEL 7.4
        7. Required operating system RPMs for RHEL 7.5
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Required NetBackup versions
      6. Required OpenStack versions
      7. Required Oracle versions and host operating systems
      8. Required IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. OpenDedup ports and disabling the iptable rules
      3. CIFS protocols and firewall ports
    4. Maximum configuration limits
  3. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. About using LLT over the RDMA network for Veritas Access
      1. RDMA over InfiniBand networks in the Veritas Access clustering environment
      2. How LLT supports RDMA for faster interconnections between applications
      3. Configuring LLT over RDMA for Veritas Access
      4. How the Veritas Access installer configures LLT over RDMA
      5. LLT over RDMA sample /etc/llttab
    4. Connecting the network hardware
    5. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    6. About checking the storage configuration
  4. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  5. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the RHEL operating system on the target Veritas Access cluster
      3. Installing the Oracle Linux operating system on the target Veritas Access cluster
        1. Setting up the Oracle Linux yum server repository
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN Tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Reconfiguring the Veritas Access cluster name and network
    13. Configuring a KMS server on the Veritas Access cluster
  6. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  7. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in a mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  8. Upgrading the operating system and Veritas Access
    1. Supported upgrade paths for upgrades on RHEL
    2. Upgrading the operating system and Veritas Access
  9. Performing a rolling upgrade
    1. About rolling upgrade
    2. Performing a rolling upgrade using the installer
  10. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4.2 RPMs
      2. Running uninstall from the Veritas Access 7.4.2 disc
  11. Appendix A. Installation reference
    1. Installation script options
  12. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless secure shell (ssh)
    2. Setting up ssh and rsh connections using the pwdutil.pl utility
  13. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access

Performing a rolling upgrade using the installer

Note:

See the "Known issues > Upgrade issues" section of the Veritas Access Release Notes before you start the rolling upgrade.

Before you start a rolling upgrade, make sure that the Veritas Cluster Server (VCS) is running on all the nodes of the cluster.

Stop all activity for all the VxVM volumes that are not under VCS control. For example, stop any applications such as databases that access the volumes, and unmount any file systems that have been created on the volumes. Then stop all the volumes.

Unmount all the VxFS file systems that are not under VCS control.
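As a rough sketch, the unmount-and-stop preparation above might look like the following shell fragment. The mount-table lines and the disk group name mydg are illustrative examples, and the actual umount and vxvol calls are left commented out so the sketch is safe to read alongside a live cluster:

```shell
# Sketch: unmount VxFS file systems that are not under VCS control, then
# stop the VxVM volumes. All names here are illustrative examples.
mounts="/dev/vx/dsk/mydg/vol1 /mnt/fs1 vxfs
/dev/sda1 /boot ext4"

# Walk the sample mount-table lines (device, mount point, fs type) and
# act only on the VxFS entries.
printf '%s\n' "$mounts" | while read -r dev mp fstype; do
  if [ "$fstype" = "vxfs" ]; then
    echo "would run: umount $mp"
    # umount "$mp"                    # run this on the cluster node
  fi
done

# Stop all volumes in the (hypothetical) disk group mydg:
echo "would run: vxvol -g mydg stopall"
# vxvol -g mydg stopall
```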

Note:

The Veritas Access GUI is not accessible from the time that you start the rolling upgrade on the master node until the rolling upgrade is complete.

Note:

It is recommended that during a rolling upgrade, you use only the list and show commands in the Veritas Access command-line interface. Commands such as create, destroy, add, and remove may update the Veritas Access configuration, which is not recommended during a rolling upgrade.

To perform a rolling upgrade

  1. For an LTR-configured Veritas Access cluster, make sure that the backup or restore jobs from NetBackup are stopped.
  2. Phase 1 of a rolling upgrade begins on the second subcluster. Complete the preparatory steps on the second subcluster.

    Unmount all VxFS file systems not under VCS control:

    # umount mount_point
  3. Complete updates to the operating system, if required.

    Make sure that the existing version of Veritas Access supports the operating system update you apply. If the existing version of Veritas Access does not support the operating system update, first upgrade Veritas Access to a version that supports the operating system update.

    For instructions, see the Red Hat Enterprise Linux (RHEL) operating system documentation.

    Switch applications to the remaining subcluster and upgrade the operating system of the first subcluster.

    The nodes are restarted after the operating system update.

  4. If a cache area is online, you must take the cache area offline before you upgrade the VxVM RPMs. Use the following command to take the cache area offline:
    # sfcache offline cachename
  5. Log on as superuser and mount the Veritas Access 7.4.2 installation media.
  6. From root, start the installer.
    # ./installaccess -rolling_upgrade
  7. The installer checks system communications, release compatibility, version information, and lists the cluster name, ID, and cluster nodes. The installer asks for permission to proceed with the rolling upgrade.
    Would you like to perform rolling upgrade on the cluster? [y,n,q] (y)

    Type y to continue.

  8. Phase 1 of the rolling upgrade begins. Phase 1 must be performed on one node at a time. The installer asks for the system name.

    Enter the system names separated by spaces on which you want to perform rolling upgrade: [q?]

    Enter the name or IP address of one of the slave nodes on which you want to perform the rolling upgrade.

  9. The installer performs further prechecks on the nodes in the cluster and may present warnings. You can type y to continue, or quit the installer and address the precheck warnings.
  10. If the boot disk is encapsulated and mirrored, you can create a backup boot disk.

    If you choose to create a backup boot disk, type y. Provide a backup name for the boot disk group or accept the default name. The installer then creates a backup copy of the boot disk group.

  11. After the installer detects the online service groups, it prompts you to do one of the following:
    • Manually switch service groups

    • Use the CPI to automatically switch service groups

    The downtime is the time that it takes for the failover of the service group.

    Note:

    Veritas recommends that you manually switch the service groups. Automatic switching of service groups does not resolve dependency issues.

  12. The installer prompts you to stop the applicable processes. Type y to continue.

    The installer evacuates all service groups to the node or nodes that are not upgraded at this time. The installer stops parallel service groups on the nodes that are to be upgraded.

    The installer stops all the related processes, uninstalls the old kernel RPMs, and installs the new RPMs.

  13. The installer performs the upgrade configuration and starts the processes. If the boot disk is encapsulated before the upgrade, the installer prompts you to restart the node after performing the upgrade configuration.
  14. Complete the preparatory steps on the nodes that you have not yet upgraded.

    Unmount all the VxFS file systems that are not under VCS control on all the nodes:

    # umount mount_point
  15. If operating system updates are not required, skip this step and go to step 16.

    Otherwise, complete the operating system updates on the nodes that you have not yet upgraded. For instructions, see the Red Hat Enterprise Linux (RHEL) operating system documentation.

    Repeat steps 4 to 13 for each node.

  16. After phase 1 of the upgrade is done on a node, make sure that the node has rejoined the cluster.

    Run the following command:

    # vxclustadm nidmap

    If the upgraded node is out of the cluster, wait for the node to join the cluster before you start phase 1 of the upgrade on the next node.
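    The rejoin check in this step can be sketched as follows. The nidmap output below is a hypothetical sample; on a live cluster you would capture the real output with `state=$(vxclustadm nidmap)` instead:

```shell
# Sketch: check that the upgraded node has rejoined the cluster.
# The membership lines below are an illustrative stand-in for
# `vxclustadm nidmap` output.
node="node2"
state="node1 0 0 Joined: Master
node2 1 1 Joined: Slave"

if printf '%s\n' "$state" | grep -q "^$node .*Joined"; then
  echo "$node has joined the cluster"
else
  echo "$node is still out of the cluster; wait before continuing"
fi
```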

  17. Phase 1 of the rolling upgrade is complete for the first node. You can start phase 1 of the upgrade for the next slave node. The installer asks for the system name again.

    Before you start phase 1 of the rolling upgrade for the next node, check whether any recovery task is still in progress. Wait for any such task to complete.

    On the master node, enter the following command:

    # vxtask list
    Check if any of the following keywords are present:
    ECREBUILD/ATCOPY/ATCPY/PLXATT/VXRECOVER/RESYNC/RECOV

    If any recovery task is in progress, wait for the task to complete, and then start phase 1 of the upgrade for the next node.
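    The recovery-task check in this step can be sketched as a small shell fragment. The vxtask output line is a hypothetical sample; on the master node you would substitute the real output:

```shell
# Keywords that indicate a VxVM recovery task is still running.
pattern='ECREBUILD|ATCOPY|ATCPY|PLXATT|VXRECOVER|RESYNC|RECOV'

# Hypothetical sample of `vxtask list` output; on the master node use:
#   tasks=$(vxtask list)
tasks="163  -  RESYNC/R 00.00% 0/20971520/0 RESYNC vol01 mydg"

if printf '%s\n' "$tasks" | grep -Eq "$pattern"; then
  echo "recovery in progress; wait before upgrading the next node"
else
  echo "no recovery tasks; continue with the next node"
fi
```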

  18. Take all cache areas offline on the remaining node or nodes:
    # sfcache offline cachename

    The installer asks for a node name on which upgrade is to be performed.

  19. Enter the system names separated by spaces on which you want to perform the rolling upgrade: [q,?]

    Type the cluster node name, or type q to quit.

    The installer repeats step 8 through step 13.

    For clusters with a larger number of nodes, this process may repeat several times. Service groups are taken down and brought back up to accommodate the upgrade.

  20. When phase 1 of the rolling upgrade completes, manually mount all the VxFS file systems that are not under VCS control.

    Before you start phase 2 of rolling upgrade, make sure that all the nodes have joined the cluster and all recovery tasks are complete.

    Begin phase 2 of the upgrade. Phase 2 of the upgrade includes downtime for the VCS engine (HAD), but does not include application downtime. Type y to continue. Phase 2 of the rolling upgrade begins here.

  21. The installer determines the remaining RPMs to upgrade. Type y to continue.
  22. The installer stops the Veritas Cluster Server (VCS) processes but the applications continue to run. Type y to continue.

    The installer performs a prestop, uninstalls the old RPMs, and installs the new RPMs. It performs post-installation tasks, and the configuration for the upgrade.

  23. If you have a network connection to the Internet, the installer checks for updates.

    If updates are discovered, you can apply them now.

  24. Verify the cluster's status:
    # hastatus -sum
  25. Post-upgrade steps, only for an LTR-configured Veritas Access cluster:

    Offline all the OpenDedup volumes by using the following command:

    cluster2> opendedup volume offline <vol-name>

    Update each OpenDedup configuration file:

    /etc/sdfs/<vol-name>-volume-cfg.xml

    by adding the following parameter to the <extended-config> tag:

    dist-layout="false"

    Note:

    This parameter should not be set on existing OpenDedup volumes because they may contain data that was written with the default layout. Setting it on existing OpenDedup volumes may result in data corruption.

    Online all the OpenDedup volumes by using the following command:

    cluster2> opendedup volume online <vol-name>
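The config.xml edit in the post-upgrade step above can be sketched as follows, assuming the opening <extended-config> tag already carries at least one attribute. A temporary file with a hypothetical sample attribute stands in for the real /etc/sdfs/<vol-name>-volume-cfg.xml:

```shell
# Sketch: add dist-layout="false" to the <extended-config> tag of an
# OpenDedup volume configuration file. The file and its sample attribute
# are illustrative stand-ins.
cfg=$(mktemp)
printf '<extended-config sample-attr="x">\n</extended-config>\n' > "$cfg"

# Insert the attribute only if it is not already present.
grep -q 'dist-layout=' "$cfg" || \
  sed -i 's/<extended-config /<extended-config dist-layout="false" /' "$cfg"

grep 'dist-layout' "$cfg"   # shows the updated opening tag
rm -f "$cfg"
```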

More Information

About rolling upgrade