Veritas Access Installation Guide

Last Published:
Product(s): Access (7.4)
Platform: Linux
  1. Introducing Veritas Access
    1. About Veritas Access
  2. Licensing in Veritas Access
    1. About Veritas Access product licensing
  3. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Operating system RPM installation requirements and operating system patching
        2. Kernel RPMs that are required to be installed with exact predefined RPM versions
        3. OL kernel RPMs that are required to be installed with exact predefined RPM versions
        4. Required operating system RPMs for OL 7.3
        5. Required operating system RPMs for OL 7.4
        6. Required operating system RPMs for RHEL 7.3
        7. Required operating system RPMs for RHEL 7.4
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Supported NetBackup versions
      6. Supported OpenStack versions
      7. Supported Oracle versions and host operating systems
      8. Supported IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. OpenDedup ports and disabling the iptable rules
      3. CIFS protocols and firewall ports
    4. Maximum configuration limits
  4. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. Connecting the network hardware
    4. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    5. About checking the storage configuration
  5. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  6. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the operating system on the target Veritas Access cluster
      3. Installing the Oracle Linux operating system on the target Veritas Access cluster
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Reconfiguring the Veritas Access cluster name and network
    13. Configuring a KMS server on the Veritas Access cluster
  7. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  8. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in a mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  9. Upgrading Veritas Access and the operating system
    1. Upgrading the operating system and Veritas Access
  10. Upgrading Veritas Access using a rolling upgrade
    1. About rolling upgrades
    2. Supported rolling upgrade paths for upgrades on RHEL and Oracle Linux
    3. Performing a rolling upgrade using the installer
  11. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4 RPMs
      2. Running uninstall from the Veritas Access 7.4 disc
  12. Appendix A. Installation reference
    1. Installation script options
  13. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless SSH
    2. Setting up the SSH and the RSH connections
  14. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access

Performing a rolling upgrade using the installer

Before you start a rolling upgrade, make sure that Cluster Server (VCS) is running on all the nodes of the cluster.

Stop all activity on the VxVM volumes that are not under VCS control. For example, stop any applications, such as databases, that access the volumes, and unmount any file systems that were created on the volumes. Then stop all the volumes.

Unmount all the VxFS file systems that are not under VCS control.
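The preparatory steps above can be sketched as a small helper. This is a minimal sketch, assuming the mounted file systems are visible in /proc/mounts with type "vxfs"; the disk group name datadg is a placeholder, and file systems under VCS control are handled by the installer and should be left alone.

```shell
# List VxFS mount points from a mounts table (defaults to /proc/mounts).
# The optional file argument only makes the function easy to test.
list_vxfs_mounts() {
    awk '$3 == "vxfs" { print $2 }' "${1:-/proc/mounts}"
}

# Usage (as root), with "datadg" as a placeholder disk group name:
#   for m in $(list_vxfs_mounts); do umount "$m"; done
#   vxvol -g datadg stopall
```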

To perform a rolling upgrade

  1. For an LTR-configured Veritas Access cluster, make sure that the backup or restore jobs from NetBackup are stopped.
  2. Phase 1 of a rolling upgrade begins on the second subcluster. Complete the preparatory steps on the second subcluster.

    Unmount all VxFS file systems not under VCS control:

    # umount mount_point
  3. Complete the updates to the OS, if required.

    Make sure that the existing version of Veritas Access supports the OS updates that you apply. If the existing version of Veritas Access does not support the OS update, first upgrade Veritas Access to a version that supports the OS update.

    For more information, see the RHEL OS documentation.

    Switch the applications to the remaining subcluster and upgrade the OS of the first subcluster.

    The nodes are restarted after the OS updates are completed.

  4. If a cache area is online, you must take the cache area offline before you upgrade the VxVM RPMs. Use the following command to take the cache area offline:
    # sfcache offline cachename
  5. Disable I/O fencing before you perform the rolling upgrade by using the storage fencing off command.
  6. Log on as the root user and mount the Veritas Access 7.4 installation media.
  7. From root, start the installer.
    # ./installaccess -rolling_upgrade
  8. The installer checks system communications, release compatibility, version information, and lists the cluster name, ID, and cluster nodes. The installer asks for permission to proceed with the rolling upgrade.
    Would you like to perform rolling upgrade on the cluster? [y,n,q] (y)

    Type y to continue.

  9. Phase 1 of the rolling upgrade begins. Phase 1 must be performed on one node at a time. The installer asks for the system name.
    Enter the system names separated by spaces on which you want to perform rolling upgrade: [q,?]
    Enter the name or IP address of one of the slave nodes on which you want to perform the rolling upgrade.
  10. The installer performs further prechecks on the nodes in the cluster and may display warnings. You can type y to continue, or quit the installer and address the precheck warnings.
  11. If the boot disk is encapsulated and mirrored, you can create a backup boot disk.

    If you choose to create a backup boot disk, type y. Provide a backup name for the boot disk group or accept the default name. The installer then creates a backup copy of the boot disk group.

  12. After the installer detects the online service groups, the installer prompts you to do one of the following:
    • Manually switch service groups

    • Use the CPI to automatically switch service groups

    The downtime is the time that it takes for the service group to fail over.

    Note:

    Veritas recommends that you switch the service groups manually. Automatic switching of service groups does not resolve dependency issues.

  13. The installer prompts you to stop the applicable processes. Type y to continue.

    The installer evacuates all service groups to the node or nodes that are not upgraded at this time. The installer stops parallel service groups on the nodes that are to be upgraded.

    The installer stops all the related processes, uninstalls the old kernel RPMs, and installs the new RPMs.

  14. The installer performs the upgrade configuration and starts the processes. If the boot disk is encapsulated before the upgrade, the installer prompts you to restart the node after performing the upgrade configuration.
  15. Complete the preparatory steps on the nodes that you have not yet upgraded.

    Unmount all the VxFS file systems that are not under VCS control on all the nodes.

    # umount mount_point
  16. If OS updates are not required, skip this step and go to step 4.

    Otherwise, complete the OS updates on the nodes that you have not yet upgraded. For instructions, see the RHEL OS documentation.

    Repeat steps 4 through 14 for each node.

  17. Phase 1 of the rolling upgrade is complete for the first node. You can start phase 1 of the upgrade for the next slave node. The installer asks for the system name again.

    Before you start phase 1 of the upgrade for the next node, check whether a recovery task is in progress. A recovery task may take a few minutes to start.

    On the master node, enter the following command:

    # vxtask list

    Check whether any of the following keywords are present in the output:

    ECREBUILD/ATCOPY/ATCPY/PLXATT/VXRECOVER/RESYNC/RECOV

    If any recovery task is in progress, wait for the task to complete, and then start the upgrade of phase 1 for the next node.
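    The wait-for-recovery check above can be scripted as a small helper. A sketch, assuming the keyword list shown in this step; recovery_in_progress is a hypothetical name, and the polling loop is shown only as a comment because it must run on the master node.

```shell
# Return success (0) if vxtask lists any in-progress recovery task.
recovery_in_progress() {
    vxtask list | grep -Eq 'ECREBUILD|ATCOPY|ATCPY|PLXATT|VXRECOVER|RESYNC|RECOV'
}

# Usage on the master node: poll until all recovery tasks complete.
#   while recovery_in_progress; do sleep 60; done
```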

  18. After phase 1 of the upgrade is done on the node, make sure that the node has not left the cluster.

    Enter the following command:

    # vxclustadm nidmap

    If the upgraded node is out of the cluster, wait for the node to join the cluster before you start the upgrade of phase 1 for the next node.
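    The membership check above can also be scripted. A sketch, assuming typical vxclustadm nidmap output in which a joined node's row contains the string "Joined"; node_joined and the node name access_02 are hypothetical.

```shell
# Return success (0) if the named node appears in the cluster
# membership map with a "Joined" state.
node_joined() {
    vxclustadm nidmap | awk -v node="$1" '$1 == node && /Joined/ { ok=1 } END { exit !ok }'
}

# Usage: wait, with a bounded number of retries, for the upgraded node
# to rejoin the cluster before upgrading the next node.
#   for i in 1 2 3 4 5; do node_joined access_02 && break; sleep 60; done
```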

  19. Take all the cache areas offline on the remaining node or nodes:
    # sfcache offline cachename

    The installer asks for a node name on which the upgrade is to be performed.

  20. Type the system names on which you want to perform the rolling upgrade.
    Enter the system names separated by spaces on which you want to perform rolling upgrade: [q,?]
    
  21. Type the cluster node name.
    Type cluster node name or q to quit.

    The installer repeats step 9 through step 14.

    For clusters with a large number of nodes, this process may repeat several times. Service groups come down and are brought up to accommodate the upgrade.

  22. When phase 1 of the rolling upgrade completes, manually mount all the VxFS file systems that are not under VCS control. Type y to continue; phase 2 of the rolling upgrade begins. Phase 2 includes downtime for the VCS engine (HAD), but it does not include application downtime.
  23. The installer determines the remaining RPMs to upgrade. Type y to continue.
  24. The installer stops Cluster Server (VCS) processes but the applications continue to run. Type y to continue.

    The installer performs a prestop, uninstalls the old RPMs, and installs the new RPMs. It performs post-installation tasks and the configuration for the upgrade.

  25. If you have a network connection to the Internet, the installer checks for updates.

    If any updates are discovered, you can apply them now.

  26. Verify the cluster's status:
    # hastatus -sum
  27. Post-upgrade steps only for an LTR-configured Veritas Access cluster:

    Take all the OpenDedup volumes offline by using the following command:

    cluster2> opendedup volume offline <vol-name>

    Update each of the OpenDedup configuration files:

    /etc/sdfs/<vol-name>-volume-cfg.xml

    by adding the following parameter to the extended-config tag:

    dist-layout="false"

    Note:

    Do not add this parameter for existing OpenDedup volumes because they might have data that was written with the default layout. Using the parameter with existing OpenDedup volumes might result in data corruption.

    Bring online all the OpenDedup volumes by using the following command:

    cluster2> opendedup volume online <vol-name>
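As an illustration of the configuration edit in step 27, the extended-config tag for a hypothetical volume named vol1 might look like the following after the change. This excerpt is an assumption about the file layout and shows only where the attribute goes; all other attributes, which vary by configuration, are omitted.

```xml
<!-- Excerpt from /etc/sdfs/vol1-volume-cfg.xml; other attributes omitted -->
<extended-config dist-layout="false" />
```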