Veritas Access Installation Guide

Last Published:
Product(s): Access (7.4)
Platform: Linux
  1. Introducing Veritas Access
    1. About Veritas Access
  2. Licensing in Veritas Access
    1. About Veritas Access product licensing
  3. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Operating system RPM installation requirements and operating system patching
        2. Kernel RPMs that are required to be installed with exact predefined RPM versions
        3. OL kernel RPMs that are required to be installed with exact predefined RPM versions
        4. Required operating system RPMs for OL 7.3
        5. Required operating system RPMs for OL 7.4
        6. Required operating system RPMs for RHEL 7.3
        7. Required operating system RPMs for RHEL 7.4
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Supported NetBackup versions
      6. Supported OpenStack versions
      7. Supported Oracle versions and host operating systems
      8. Supported IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. OpenDedup ports and disabling the iptable rules
      3. CIFS protocols and firewall ports
    4. Maximum configuration limits
  4. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. Connecting the network hardware
    4. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    5. About checking the storage configuration
  5. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  6. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the operating system on the target Veritas Access cluster
      3. Installing the Oracle Linux operating system on the target Veritas Access cluster
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Reconfiguring the Veritas Access cluster name and network
    13. Configuring a KMS server on the Veritas Access cluster
  7. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  8. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  9. Upgrading Veritas Access and operating system
    1. Upgrading the operating system and Veritas Access
  10. Upgrading Veritas Access using a rolling upgrade
    1. About the rolling upgrades
    2. Supported rolling upgrade paths for upgrades on RHEL and Oracle Linux
    3. Performing a rolling upgrade using the installer
  11. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4 RPMs
      2. Running uninstall from the Veritas Access 7.4 disc
  12. Appendix A. Installation reference
    1. Installation script options
  13. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless SSH
    2. Setting up the SSH and the RSH connections
  14. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access

Adding a node to the cluster

You must install the operating system on the nodes before you add nodes to a cluster.
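
Before you proceed, you can confirm that the new node runs the same operating system release and kernel version as the existing cluster nodes. The following commands are a minimal check on a RHEL or Oracle Linux system; the node name shown in the prompt is hypothetical.

[root@newnode ~]# cat /etc/redhat-release
[root@newnode ~]# uname -r

Compare the output with the output of the same commands on an existing cluster node.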

If you use disk-based fencing, the coordinator disks must be visible on the newly added node for I/O fencing to be configured successfully. Without access to the coordinator disks, I/O fencing does not load properly and the node cannot obtain cluster membership.

If you use majority-based fencing, the newly added node does not need to have shared disks.
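
If you are unsure whether the new node can see the coordinator disks, you can check before you run the add operation. The following is a minimal sketch only; the node names are hypothetical, the lsscsi and device-mapper-multipath packages must be installed for the first two commands, and the exact device names depend on your array and multipath configuration.

[root@newnode ~]# lsscsi
[root@newnode ~]# multipath -ll

On an existing cluster node, the vxfenadm -d command shows whether disk-based (SCSI-3) fencing is currently running:

[root@bob_01 ~]# vxfenadm -d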

If you want to add a new node and exclude some unique PCI IDs, manually add those PCI IDs to the /opt/VRTSsnas/conf/net_exclusion_dev.conf file on each cluster node. For example:

[root@bob_01 ~]# cat /opt/VRTSsnas/conf/net_exclusion_dev.conf 
0000:42:00.0 0000:42:00.1
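
Because the same PCI IDs must be present in the file on each cluster node, one way to keep the file consistent (shown here as an illustration with hypothetical node names) is to copy it from the first node to the remaining nodes over SSH:

[root@bob_01 ~]# for node in bob_02 bob_03; do
    scp /opt/VRTSsnas/conf/net_exclusion_dev.conf root@${node}:/opt/VRTSsnas/conf/
done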

Note:

Writeback cache is supported for two-node clusters only, so adding nodes to a two-node cluster changes the caching to read-only.

Newly added nodes should have the same InfiniBand NIC configuration as the existing nodes.

If your cluster has an FSS pool configured and the pool's node group does not have a node, the newly added node is added to that FSS node group. The installer adds the new node's local data disks to the FSS pool.

To add the new node to the cluster

  1. Log on to Veritas Access using the master or the system-admin account.
  2. In CLISH, enter the Cluster command to enter the Cluster> mode.
  3. To add the new nodes to the cluster, enter the following:
    Cluster> add node1ip,node2ip,...

    Where node1ip,node2ip,... are the IP addresses of the additional nodes that are used for the SSH connection.

    Note:

    • The node IPs are preserved, and any additional required IPs are assigned from the pool of unused physical IPs.

    • The physical IPs for the new nodes are usable IPs that are found from the configured public starting IP addresses.

    • The existing virtual IPs are rebalanced to include the new node, but additional virtual IPs are not assigned.

      Go to step 6 to add new virtual IP addresses to the cluster after you add a node.

    • The IPs that you provide must be accessible on the new nodes.

    • The accessible IPs of the new nodes must be in the public network, and they must be able to ping the public network's gateway successfully.

    For example:

    Cluster> add 192.168.30.10

    Note:

    You cannot add nodes to a two-node cluster when writeback caching is enabled. Before you add a node, change the cache mode to read, and then try again.

  4. If a cache exists on the original cluster, the installer prompts you to choose the SSD disks to create cache on the new node when CFS is mounted.
    1) emc_clariion1_242
    2) emc_clariion1_243
    b) Back to previous menu
    Choose disks separate by spaces to create cache on 192.168.30.11
    [1-2,b,q] 1
    Create cache on snas_02 .....................Done
    
  5. If an FSS pool is configured on the cluster nodes, and there are more than two local data disks on the new node, the installer asks you to select the disks to add to the FSS pool. Make sure that you select at least two disks for a striped volume layout. The total size of the selected disks should be no less than the FSS pool's capacity.
    Following storage pools need to add disk from the new node:
         1)  fsspool1
         2)  fsspool2
         3)  Skip this step
    
    Choose a pool to add disks [1-3,q] 1
         1)  emc_clariion0_1570 (5.000 GB)
         2)  installres_03_sdc (5.000 GB)
         3)  installres_03_sde (5.000 GB)
         4)  sdd (5.000 GB)
         b)  Back to previous menu
    
    
    Choose at least 2 local disks with minimum capacity of 10 GB [1-4,b,q] 2 4
    Format disk installres_03_sdc,sdd ................................ Done
    
    The disk name changed to installres_03_sdc,installres_03_sdd
        Add disk installres_03_sdc,installres_03_sdd to storage pool fsspool1  Done
  6. If required, add virtual IP addresses to the cluster. Adding a node does not add new virtual IP addresses or service groups to the cluster.

    To add additional virtual IP addresses, use the following command in the Network mode:

    Network> ip addr add ipaddr netmask virtual

    For example:

    Network> ip addr add 192.168.30.14 255.255.252.0 virtual
    ACCESS ip addr SUCCESS V-288-1031 ip addr add successful.
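
After the add operation completes, you can confirm that the new node has joined the cluster from the Cluster> mode. The listing below is illustrative only (the node names are hypothetical), and the exact columns that the command displays may differ in your release; see the "Displaying the nodes in the cluster" topic for details.

Cluster> show
Node          State
----          -----
snas_01       RUNNING
snas_02       RUNNING
snas_03       RUNNING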

If a problem occurs when you add a node to a cluster (for example, if the node is temporarily disconnected from the network), do the following to fix the problem (see the example after these steps):

To recover the node:

  • Turn off the node.

  • Use the Cluster> del nodename command to delete the node from the cluster.

  • Turn on the node.

  • Use the Cluster> add nodeip command to add the node to the cluster.
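
For example, if the affected node is named snas_03 (a hypothetical name) and is reachable at 192.168.30.10, the sequence of CLISH commands is:

Cluster> del snas_03
Cluster> add 192.168.30.10

Turn the node off before you run the del command and turn it back on before you run the add command, as described in the steps above.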