Veritas Access Installation Guide

Last Published:
Product(s): Access (7.3.1)
Platform: Linux
  1. Introducing Veritas Access
    1. About Veritas Access
  2. Licensing in Veritas Access
    1. About Veritas Access product licensing
  3. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Operating system RPM installation requirements and operating system patching
        2. Kernel RPMs that are required to be installed with exact predefined RPM versions
        3. OL kernel RPMs that are required to be installed with exact predefined RPM versions
        4. Required operating system RPMs for OL 6.8
        5. Required operating system RPMs for OL 7.3
        6. Required operating system RPMs for OL 7.4
        7. Required operating system RPMs for RHEL 6.6
        8. Required operating system RPMs for RHEL 6.7
        9. Required operating system RPMs for RHEL 6.8
        10. Required operating system RPMs for RHEL 7.3
        11. Required operating system RPMs for RHEL 7.4
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Supported NetBackup versions
      6. Supported OpenStack versions
      7. Supported Oracle versions and host operating systems
      8. Supported IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. OpenDedup ports and disabling the iptable rules
      3. CIFS protocols and firewall ports
    4. Maximum configuration limits
  4. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. About using LLT over the RDMA network for Veritas Access
      1. RDMA over InfiniBand networks in the Veritas Access clustering environment
      2. How LLT supports RDMA for faster interconnections between applications
      3. Configuring LLT over RDMA for Veritas Access
      4. How the Veritas Access installer configures LLT over RDMA
      5. LLT over RDMA sample /etc/llttab
    4. Connecting the network hardware
    5. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    6. About checking the storage configuration
  5. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  6. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the operating system on the target Veritas Access cluster
      3. Installing the Oracle Linux operating system on the target Veritas Access cluster
      4. Obtaining the Red Hat Enterprise Linux compatible kernels
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About NIC bonding and NIC exclusion
      1. Excluding a NIC
      2. Including a NIC
      3. Creating a new NIC bond
      4. Removing a NIC bond
      5. Removing a NIC from the bond list
    7. About VLAN Tagging
      1. Adding a VLAN device on a particular NIC
      2. Limitations of VLAN Tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Reconfiguring the Veritas Access cluster name and network
    13. Configuring a KMS server on the Veritas Access cluster
  7. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  8. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Deleting a node from the cluster
    6. Shutting down the cluster nodes
  9. Upgrading Veritas Access and operating system
    1. Upgrading the operating system and Veritas Access
  10. Upgrading Veritas Access using a rolling upgrade
    1. About rolling upgrades
    2. Supported rolling upgrade paths for upgrades on RHEL
    3. Performing a rolling upgrade using the installer
  11. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.3.1 RPMs
      2. Running uninstall from the Veritas Access 7.3.1 disc
  12. Appendix A. Installation reference
    1. Installation script options
  13. Appendix B. Troubleshooting the LTR upgrade
    1. Locating the log files for troubleshooting the LTR upgrade
    2. Troubleshooting pre-upgrade issues for LTR
    3. Troubleshooting post-upgrade issues for LTR
  14. Appendix C. Configuring the secure shell for communications
    1. Manually configuring passwordless secure shell (ssh)
    2. Setting up ssh and rsh connections using the pwdutil.pl utility

Adding a node to the cluster

The operating system must be installed on each node before you add the node to a cluster.

If you use disk-based fencing, the coordinator disks must be visible on the newly added node before I/O fencing can be configured successfully. Without the coordinator disks, I/O fencing does not load properly and the node cannot obtain cluster membership.

If you use majority-based fencing, the newly added node does not require shared disks.

If you want to add a new node and exclude some unique PCI IDs, manually add the unique PCI IDs to the /opt/VRTSsnas/conf/net_exclusion_dev.conf file on each cluster node. For example:

[root@bob_01 ~]# cat /opt/VRTSsnas/conf/net_exclusion_dev.conf 
0000:42:00.0 0000:42:00.1
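The exclusion file is a single line of space-separated PCI IDs, as shown above. The following sketch writes such a file; the path below is a temporary stand-in for /opt/VRTSsnas/conf/net_exclusion_dev.conf so the sketch can run anywhere, and the node names in the comment are placeholders for your cluster.

```shell
# Sketch: populate a NIC exclusion file with the PCI IDs to exclude.
# conf is a temporary stand-in for /opt/VRTSsnas/conf/net_exclusion_dev.conf.
conf=/tmp/net_exclusion_dev.conf
pci_ids="0000:42:00.0 0000:42:00.1"

# Write the space-separated PCI IDs as a single line.
printf '%s\n' "$pci_ids" > "$conf"

# On a live cluster you would copy the file to every node, for example:
#   for node in node1 node2; do
#     scp "$conf" "$node":/opt/VRTSsnas/conf/net_exclusion_dev.conf
#   done
cat "$conf"
```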

Note:

Writeback caching is supported for two-node clusters only, so adding nodes to a two-node cluster changes the caching to read-only.

Note:

Newly added nodes must have the same InfiniBand NIC configuration. See About using LLT over the RDMA network for Veritas Access.

If your cluster has a configured FSS pool, and the FSS pool's node group is missing a node, the newly added node is added into the FSS node group, and the installer adds the new node's local data disks into the FSS pool.
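When the installer adds the new node's local disks into an FSS pool (see step 6 of the procedure below), it expects at least two local data disks for a striped volume layout, with a total size no less than the pool's capacity. A minimal sketch of that check, with placeholder sizes in GB:

```shell
# Sketch of the FSS disk-selection rules: at least two local data disks
# (striped volume layout) whose total size is no less than the pool's
# capacity. The capacity and disk sizes in GB are placeholders.
pool_capacity_gb=10
selected_sizes_gb="5 5"

count=0
total=0
for size in $selected_sizes_gb; do
  count=$((count + 1))
  total=$((total + size))
done

if [ "$count" -ge 2 ] && [ "$total" -ge "$pool_capacity_gb" ]; then
  echo "selection OK"
else
  echo "selection insufficient"
fi
```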

To add the new node to the cluster

  1. Log in to Veritas Access using the master or the system-admin account.
  2. In CLISH, enter the Cluster command to enter the Cluster> mode.
  3. To add the new nodes to the cluster, enter the following:
    Cluster> add node1ip,node2ip,...

    where node1ip, node2ip, ... are the IP addresses of the additional nodes that are used for the ssh connection.

    It is important to note that:

    • The node IPs must not be IPs that are allocated to the new nodes as physical IPs or virtual IPs.

    • The physical IPs of new nodes are usable IPs found from the configured public IP starting addresses.

    • The virtual IPs are re-balanced to the new node but additional virtual IPs are not assigned.

      Go to step 7 to add new virtual IP addresses to the cluster after adding a node.

    • Specify IPs that are accessible on the new nodes.

    • The accessible IPs of the new nodes must be in the public network, and they must be able to ping the public network's gateway successfully.

    For example:

    Cluster> add 10.200.114.56
  4. When you add nodes to a two-node cluster and writeback caching is enabled, the installer asks the following question before adding the node:
    CPI WARNING V-9-30-2164 Adding a node to a two-node cluster 
    that has writeback caching enabled will change the caching 
    to read-only. Writeback caching is only supported for two nodes.
    Do you want to continue adding new node(s)? [y,n,q](n)

    Enter y to continue adding the node. Enter n to exit from the add node procedure.

  5. If a cache exists on the original cluster, the installer prompts you to choose the SSD disks on which to create the cache on the new node when CFS is mounted.
    1) emc_clariion1_242
    2) emc_clariion1_243
    b) Back to previous menu
    Choose disks separate by spaces to create cache on 10.198.89.164
    [1-2,b,q] 1
    Create cache on snas_02 .....................Done
    
  6. If an FSS pool has been created on the cluster nodes, and there are more than two local data disks on the new node, the installer asks you to select the disks to add into the FSS pool. Make sure that you select at least two disks for a striped volume layout. The total size of the selected disks must be no less than the FSS pool's capacity.
    Following storage pools need to add disk from the new node:
         1)  fsspool1
         2)  fsspool2
         3)  Skip this step
    
    Choose a pool to add disks [1-3,q] 1
         1)  emc_clariion0_1570 (5.000 GB)
         2)  installres_03_sdc (5.000 GB)
         3)  installres_03_sde (5.000 GB)
         4)  sdd (5.000 GB)
         b)  Back to previous menu
    
    
    Choose at least 2 local disks with minimum capacity of 10 GB [1-4,b,q] 2 4
    Format disk installres_03_sdc,sdd ................................ Done
    
    The disk name changed to installres_03_sdc,installres_03_sdd
        Add disk installres_03_sdc,installres_03_sdd to storage pool fsspool1  Done
  7. If required, add virtual IP addresses to the cluster. Adding a node does not add new virtual IP addresses or service groups to the cluster.

    To add additional virtual IP addresses, use the following command in the Network mode:

    Network> ip addr add ipaddr virtual

    For example:

    Network> ip addr add 10.200.58.66 255.255.252.0 virtual
    ACCESS ip addr SUCCESS V-288-1031 ip addr add successful.
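As noted in step 3, the physical IPs of new nodes are usable IPs found from the configured public IP starting addresses. A minimal sketch of how consecutive candidate addresses can be derived, assuming simple last-octet increments within one subnet; the starting address and node count are placeholders:

```shell
# Sketch: derive candidate physical IPs for new nodes from a public
# starting address, assuming last-octet increments within one subnet.
# start_ip and count are placeholders for your environment.
start_ip=10.200.114.60
count=2

base=${start_ip%.*}      # network prefix: 10.200.114
last=${start_ip##*.}     # starting host octet: 60

i=0
while [ "$i" -lt "$count" ]; do
  echo "$base.$((last + i))"
  i=$((i + 1))
done
```

On a real cluster the installer also skips addresses that are already in use, which this sketch does not model.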

If a problem occurs while you are adding a node to a cluster (for example, if the node is temporarily disconnected from the network), do the following to fix the problem:

To recover the node:

  • Power off the node.

  • Use the Cluster> del nodename command to delete the node from the cluster.

  • Power on the node.

  • Use the Cluster> add nodeip command to add the node to the cluster.
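The recovery steps above can be sketched as a dry run. The function below only prints the commands in order; on a real cluster you would power the node off and on out of band and run the Cluster> commands from CLISH. The node name and IP are placeholders.

```shell
# Dry-run sketch of the node recovery sequence: power off, delete the
# node, power on, then re-add it. This only echoes the commands; it
# does not run them. Node name and IP are placeholders.
recover_node() {
  node_name=$1
  node_ip=$2
  echo "power off $node_name"
  echo "Cluster> del $node_name"
  echo "power on $node_name"
  echo "Cluster> add $node_ip"
}

recover_node snas_02 10.200.114.56
```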