Veritas Access Installation Guide

Product(s): Access (7.4)
Platform: Linux
  1. Introducing Veritas Access
    1. About Veritas Access
  2. Licensing in Veritas Access
    1. About Veritas Access product licensing
  3. System requirements
    1. Important release information
    2. System requirements
      1. Linux requirements
        1. Operating system RPM installation requirements and operating system patching
        2. Kernel RPMs that are required to be installed with exact predefined RPM versions
        3. OL kernel RPMs that are required to be installed with exact predefined RPM versions
        4. Required operating system RPMs for OL 7.3
        5. Required operating system RPMs for OL 7.4
        6. Required operating system RPMs for RHEL 7.3
        7. Required operating system RPMs for RHEL 7.4
      2. Software requirements for installing Veritas Access in a VMware ESXi environment
      3. Hardware requirements for installing Veritas Access virtual machines
      4. Management Server Web browser support
      5. Supported NetBackup versions
      6. Supported OpenStack versions
      7. Supported Oracle versions and host operating systems
      8. Supported IP version 6 Internet standard protocol
    3. Network and firewall requirements
      1. NetBackup ports
      2. OpenDedup ports and disabling the iptable rules
      3. CIFS protocols and firewall ports
    4. Maximum configuration limits
  4. Preparing to install Veritas Access
    1. Overview of the installation process
    2. Hardware requirements for the nodes
    3. Connecting the network hardware
    4. About obtaining IP addresses
      1. About calculating IP address requirements
      2. Reducing the number of IP addresses required at installation time
    5. About checking the storage configuration
  5. Deploying virtual machines in VMware ESXi for Veritas Access installation
    1. Setting up networking in VMware ESXi
    2. Creating a datastore for the boot disk and LUNs
    3. Creating a virtual machine for Veritas Access installation
  6. Installing and configuring a cluster
    1. Installation overview
    2. Summary of the installation steps
    3. Before you install
    4. Installing the operating system on each node of the cluster
      1. About the driver node
      2. Installing the operating system on the target Veritas Access cluster
      3. Installing the Oracle Linux operating system on the target Veritas Access cluster
    5. Installing Veritas Access on the target cluster nodes
      1. Installing and configuring the Veritas Access software on the cluster
      2. Veritas Access Graphical User Interface
    6. About managing the NICs, bonds, and VLAN devices
      1. Selecting the public NICs
      2. Selecting the private NICs
      3. Excluding a NIC
      4. Including a NIC
      5. Creating a NIC bond
      6. Removing a NIC bond
      7. Removing a NIC from the bond list
    7. About VLAN tagging
      1. Creating a VLAN device
      2. Removing a VLAN device
      3. Limitations of VLAN tagging
    8. Replacing an Ethernet interface card
    9. Configuring I/O fencing
    10. About configuring Veritas NetBackup
    11. About enabling kdump during a Veritas Access configuration
    12. Reconfiguring the Veritas Access cluster name and network
    13. Configuring a KMS server on the Veritas Access cluster
  7. Automating Veritas Access installation and configuration using response files
    1. About response files
    2. Performing a silent Veritas Access installation
    3. Response file variables to install and configure Veritas Access
    4. Sample response file for Veritas Access installation and configuration
  8. Displaying and adding nodes to a cluster
    1. About the Veritas Access installation states and conditions
    2. Displaying the nodes in the cluster
    3. Before adding new nodes in the cluster
    4. Adding a node to the cluster
    5. Adding a node in mixed mode environment
    6. Deleting a node from the cluster
    7. Shutting down the cluster nodes
  9. Upgrading Veritas Access and operating system
    1. Upgrading the operating system and Veritas Access
  10. Upgrading Veritas Access using a rolling upgrade
    1. About the rolling upgrades
    2. Supported rolling upgrade paths for upgrades on RHEL and Oracle Linux
    3. Performing a rolling upgrade using the installer
  11. Uninstalling Veritas Access
    1. Before you uninstall Veritas Access
    2. Uninstalling Veritas Access using the installer
      1. Removing Veritas Access 7.4 RPMs
      2. Running uninstall from the Veritas Access 7.4 disc
  12. Appendix A. Installation reference
    1. Installation script options
  13. Appendix B. Configuring the secure shell for communications
    1. Manually configuring passwordless SSH
    2. Setting up the SSH and the RSH connections
  14. Appendix C. Manual deployment of Veritas Access
    1. Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
    2. Enabling internal sudo user communication in Veritas Access

Deploying Veritas Access manually on a two-node cluster in a non-SSH environment

This section describes the manual steps for deploying a two-node Veritas Access cluster when SSH communication is disabled.

Prerequisites

  • You need to have a two-node cluster.

  • The supported operating system version is RHEL 7.4.

  • Verify that the Veritas Access image is present in your local system at the /access_build_dir/rhel7_x86_64/ location.

  • The cluster is named clus, and the cluster nodes are named clus_01 and clus_02. Node names must be unique across all the nodes.

  • You need to stop the SSH service on all the nodes.

  • Verify that the public NICs are pubeth0 and pubeth1, and that the private NICs are priveth0 and priveth1. NIC names must be consistent across all the nodes: the public and private NIC names must be the same on every node.

  • Use 172.16.0.3 as the private IP address for clus_01 and 172.16.0.4 as the private IP address for clus_02.

To deploy Veritas Access manually on a two-node cluster

  1. Copy the Veritas Access image on all the nodes of the desired cluster.
  2. Stop the SSH daemon on all the nodes.
    # systemctl stop sshd
  3. Verify that the following RPMs are installed. If they are not, install them from the RHEL repository. (A quick check loop is shown after the list.)
    bash-4.2.46-28.el7.x86_64
    lsscsi-0.27-6.el7.x86_64
    initscripts-9.49.39-1.el7.x86_64
    iproute-3.10.0-87.el7.x86_64
    kmod-20-15.el7.x86_64
    coreutils-8.22-18.el7.x86_64
    binutils-2.25.1-31.base.el7.x86_64
    python-requests-2.6.0-1.el7_1.noarch
    python-urllib3-1.10.2-3.el7.noarch
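
    The following is a minimal check loop, assuming the package names listed above (rpm -q accepts the bare package name and reports any package that is not installed):

    # for pkg in bash lsscsi initscripts iproute kmod coreutils binutils python-requests python-urllib3
      do rpm -q "$pkg" || echo "MISSING: $pkg"
      done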
  4. Install the required operating system RPMs.
    • Create a repo file.

      # cat /etc/yum.repos.d/os.repo
      [veritas-access-os-rpms]
      name=Veritas Access OS RPMS
      baseurl=file:///access_build_dir/rhel7_x86_64/os_rpms/
      enabled=1
      gpgcheck=0
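
      If the file does not exist yet, one way to create it with this content is a here-document (a sketch; adjust the baseurl if your image is stored in a different location):

      # cat > /etc/yum.repos.d/os.repo << 'EOF'
      [veritas-access-os-rpms]
      name=Veritas Access OS RPMS
      baseurl=file:///access_build_dir/rhel7_x86_64/os_rpms/
      enabled=1
      gpgcheck=0
      EOF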
    • Run the following command:

      # yum updateinfo
    • Run the following command:

      # cd /access_build_dir/rhel7_x86_64/os_rpms/
    • Before you run the following command, make sure that there is no RHEL subscription attached to the system. The yum repolist output should list only veritas-access-os-rpms.

    # /usr/bin/yum -y install --setopt=protected_multilib=false 
    perl-5.16.3-292.el7.x86_64.rpm nmap-ncat-6.40-7.el7.x86_64.rpm 
    perl-LDAP-0.56-5.el7.noarch.rpm perl-Convert-ASN1-0.26-4.el7.noarch.rpm 
    net-snmp-5.7.2-28.el7_4.1.x86_64.rpm 
    net-snmp-utils-5.7.2-28.el7_4.1.x86_64.rpm 
    openldap-2.4.44-5.el7.x86_64.rpm nss-pam-ldapd-0.8.13-8.el7.x86_64.rpm 
    rrdtool-1.4.8-9.el7.x86_64.rpm wireshark-1.10.14-14.el7.x86_64.rpm 
    vsftpd-3.0.2-22.el7.x86_64.rpm openssl-1.0.2k-12.el7.x86_64.rpm 
    openssl-devel-1.0.2k-12.el7.x86_64.rpm 
    iscsi-initiator-utils-6.2.0.874-4.el7.x86_64.rpm 
    libpcap-1.5.3-9.el7.x86_64.rpm libtirpc-0.2.4-0.10.el7.x86_64.rpm 
    nfs-utils-1.3.0-0.48.el7_4.2.x86_64.rpm 
    kernel-debuginfo-common-x86_64-3.10.0-693.el7.x86_64.rpm 
    kernel-debuginfo-3.10.0-693.el7.x86_64.rpm 
    kernel-headers-3.10.0-693.el7.x86_64.rpm 
    krb5-devel-1.15.1-8.el7.x86_64.rpm 
    krb5-libs-1.15.1-8.el7.x86_64.rpm 
    krb5-workstation-1.15.1-8.el7.x86_64.rpm 
    perl-JSON-2.59-2.el7.noarch.rpm telnet-0.17-64.el7.x86_64.rpm 
    apr-devel-1.4.8-3.el7_4.1.x86_64.rpm 
    apr-util-devel-1.5.2-6.el7.x86_64.rpm 
    glibc-common-2.17-196.el7_4.2.x86_64.rpm 
    glibc-headers-2.17-196.el7_4.2.x86_64.rpm 
    glibc-2.17-196.el7_4.2.x86_64.rpm glibc-2.17-196.el7_4.2.i686.rpm 
    glibc-devel-2.17-196.el7_4.2.x86_64.rpm 
    glibc-utils-2.17-196.el7_4.2.x86_64.rpm 
    nscd-2.17-196.el7_4.2.x86_64.rpm sysstat-10.1.5-12.el7.x86_64.rpm 
    libibverbs-utils-13-7.el7.x86_64.rpm libibumad-13-7.el7.x86_64.rpm 
    opensm-3.3.19-1.el7.x86_64.rpm opensm-libs-3.3.19-1.el7.x86_64.rpm 
    infiniband-diags-1.6.7-1.el7.x86_64.rpm 
    sg3_utils-libs-1.37-12.el7.x86_64.rpm sg3_utils-1.37-12.el7.x86_64.rpm 
    libyaml-0.1.4-11.el7_0.x86_64.rpm 
    memcached-1.4.15-10.el7_3.1.x86_64.rpm 
    python-memcached-1.59-1.noarch.rpm 
    python-paramiko-2.1.1-4.el7.noarch.rpm 
    python-backports-1.0-8.el7.x86_64.rpm 
    python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch.rpm 
    python-chardet-2.2.1-1.el7_1.noarch.rpm 
    python-six-1.9.0-2.el7.noarch.rpm 
    python-setuptools-0.9.8-7.el7.noarch.rpm 
    python-ipaddress-1.0.16-2.el7.noarch.rpm 
    targetcli-2.1.fb46-1.el7.noarch.rpm 
    fuse-2.9.2-8.el7.x86_64.rpm fuse-devel-2.9.2-8.el7.x86_64.rpm 
    fuse-libs-2.9.2-8.el7.x86_64.rpm PyYAML-3.10-11.el7.x86_64.rpm 
    arptables-0.0.4-8.el7.x86_64.rpm ipvsadm-1.27-7.el7.x86_64.rpm 
    ntpdate-4.2.6p5-25.el7_3.2.x86_64.rpm ntp-4.2.6p5-25.el7_3.2.x86_64.rpm 
    autogen-libopts-5.18-5.el7.x86_64.rpm ethtool-4.8-1.el7.x86_64.rpm 
    net-tools-2.0-0.22.20131004git.el7.x86_64.rpm 
    cups-libs-1.6.3-29.el7.x86_64.rpm avahi-libs-0.6.31-17.el7.x86_64.rpm 
    psmisc-22.20-15.el7.x86_64.rpm strace-4.12-4.el7.x86_64.rpm 
    vim-enhanced-7.4.160-2.el7.x86_64.rpm at-3.1.13-22.el7_4.2.x86_64.rpm 
    rsh-0.17-76.el7_1.1.x86_64.rpm unzip-6.0-16.el7.x86_64.rpm 
    zip-3.0-11.el7.x86_64.rpm bzip2-1.0.6-13.el7.x86_64.rpm 
    mlocate-0.26-6.el7.x86_64.rpm lshw-B.02.18-7.el7.x86_64.rpm 
    jansson-2.10-1.el7.x86_64.rpm ypbind-1.37.1-9.el7.x86_64.rpm 
    yp-tools-2.14-5.el7.x86_64.rpm perl-Net-Telnet-3.03-19.el7.noarch.rpm 
    tzdata-java-2018d-1.el7.noarch.rpm 
    perl-XML-Parser-2.41-10.el7.x86_64.rpm 
    lsof-4.87-4.el7.x86_64.rpm cairo-1.14.8-2.el7.x86_64.rpm 
    pango-1.40.4-1.el7.x86_64.rpm libjpeg-turbo-1.2.90-5.el7.x86_64.rpm 
    sos-3.4-13.el7_4.noarch.rpm traceroute-2.0.22-2.el7.x86_64.rpm 
    openldap-clients-2.4.44-5.el7.x86_64.rpm
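
    To spot-check that the installation succeeded, you can query a few of the packages by their base names; rpm reports any name that is not installed (a minimal sketch, not an exhaustive check):

    # rpm -q perl net-snmp openldap nfs-utils krb5-libs glibc sysstat targetcli ntp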
  5. Install the third-party RPMs:
    # cd /access_build_dir/rhel7_x86_64/third_party_rpms/
    # /bin/rpm -U -v --oldpackage --nodeps --replacefiles --replacepkgs 
    ctdb-4.6.6-1.el7.x86_64.rpm 
    perl-Template-Toolkit-2.24-5.el7.x86_64.rpm  
    perl-Template-Extract-0.41-1.noarch.rpm 
    perl-AppConfig-1.66-20.el7.noarch.rpm 
    perl-File-HomeDir-1.00-4.el7.noarch.rpm 
    samba-common-4.6.6-1.el7.x86_64.rpm 
    samba-common-libs-4.6.6-1.el7.x86_64.rpm 
    samba-client-4.6.6-1.el7.x86_64.rpm 
    samba-client-libs-4.6.6-1.el7.x86_64.rpm 
    samba-4.6.6-1.el7.x86_64.rpm 
    samba-winbind-4.6.6-1.el7.x86_64.rpm 
    samba-winbind-clients-4.6.6-1.el7.x86_64.rpm 
    samba-winbind-krb5-locator-4.6.6-1.el7.x86_64.rpm 
    libsmbclient-4.6.6-1.el7.x86_64.rpm 
    samba-krb5-printing-4.6.6-1.el7.x86_64.rpm 
    samba-libs-4.6.6-1.el7.x86_64.rpm 
    libwbclient-4.6.6-1.el7.x86_64.rpm 
    samba-winbind-modules-4.6.6-1.el7.x86_64.rpm 
    libnet-1.1.6-7.el7.x86_64.rpm lmdb-libs-0.9.13-2.el7.x86_64.rpm 
    nfs-ganesha-2.2.0-0.el7.x86_64.rpm 
    nfs-ganesha-vxfs-2.2.0-0.el7.x86_64.rpm gevent-1.0.2-1.x86_64.rpm 
    python-msgpack-0.4.6-1.el7ost.x86_64.rpm 
    python-flask-0.10.1-4.el7.noarch.rpm 
    python-itsdangerous-0.23-2.el7.noarch.rpm 
    libevent-libs-2.0.22-1.el7.x86_64.rpm 
    python-werkzeug-0.9.1-2.el7.noarch.rpm 
    python-jinja2-2.7.2-2.el7.noarch.rpm sdfs-7.4.0.0-1.x86_64.rpm 
    psutil-4.3.0-1.x86_64.rpm 
    python-crontab-2.2.4-1.noarch.rpm libuv-1.9.1-1.el7.x86_64.rpm

    In this command, update the RPM versions to match the RPMs that are present in the /access_build_dir/rhel7_x86_64/third_party_rpms/ directory. An alternative that installs everything in the directory is sketched below.
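
    As an alternative to listing each version explicitly, the following sketch installs every RPM that is present in the directory; it assumes that the directory contains only the RPMs that you want to install:

    # cd /access_build_dir/rhel7_x86_64/third_party_rpms/
    # /bin/rpm -U -v --oldpackage --nodeps --replacefiles --replacepkgs *.rpm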

  6. Install the Veritas Access RPMs.
    • Run the following commands:

      # cd /access_build_dir/rhel7_x86_64/rpms/repodata/
      # cat access73.repo > /etc/yum.repos.d/access73.repo
    • Update the baseurl and gpgkey entries in the /etc/yum.repos.d/access73.repo file so that they point to the yum repository directory.

      • baseurl=file:///access_build_dir/rhel7_x86_64/rpms/
      • gpgkey=file:///access_build_dir/rhel7_x86_64/rpms/RPM-GPG-KEY-veritas-access7
    • Run the following commands to refresh the yum repository.

      • # yum repolist
      • # yum grouplist
    • Run the following command.

      # yum -y groupinstall ACCESS73
    • Run the following command.

      # /opt/VRTS/install/bin/add_install_scripts
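
    • Optionally, confirm that the ACCESS73 group and the Veritas RPMs are now installed (a minimal check):

      # yum groupinfo ACCESS73
      # rpm -qa | grep -i VRTS | sort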
  7. Install the Veritas NetBackup client software.
    # cd /access_build_dir/rhel7_x86_64
    # /opt/VRTSnas/install/image_install/netbackup/install_netbackup.pl 
    /access_build_dir/rhel7_x86_64/netbackup
  8. Create soft links for Veritas Access. Run the following command.
    # /opt/VRTSnas/pysnas/install/install_tasks.py 
    all_rpms_installed parallel
  9. License the product.
    • Register the permanent VLIC key.

      # /opt/VRTSvlic/bin/vxlicinstupgrade -k <Key>
    • Verify that the VLIC key is installed properly:

      # /opt/VRTSvlic/bin/vxlicrep
    • Register the SLIC key file:

      # /opt/VRTSslic/bin/vxlicinstupgrade -k $keyfile
    • Verify that the SLIC key is installed properly:

      # /opt/VRTSslic/bin/vxlicrep
  10. Take a backup of the following files:
    • /etc/sysconfig/network

    • /etc/sysconfig/network-scripts/ifcfg-*

    • /etc/resolv.conf
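
    For example, a minimal way to take the backup (the /root/network-backup destination used here is only an example):

    # mkdir -p /root/network-backup
    # cp -p /etc/sysconfig/network /root/network-backup/
    # cp -p /etc/sysconfig/network-scripts/ifcfg-* /root/network-backup/
    # cp -p /etc/resolv.conf /root/network-backup/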

  11. Configure the private NIC:
    # cd /etc/sysconfig/network-scripts/
    • Configure the first private NIC.

      • Run the following command.

        # ip link set down priveth0
      • Update the ifcfg-priveth0 file with the following:

        DEVICE=priveth0
        NAME=priveth0
        BOOTPROTO=none
        TYPE=Ethernet
        ONBOOT=yes
      • Add entries in the ifcfg-priveth0 file.

        HWADDR=<MAC address>
        IPADDR=172.16.0.3    (use IPADDR=172.16.0.4 for the second node)
        NETMASK=<netmask>
        NM_CONTROLLED=no

        For example:

        HWADDR=00:0c:29:0c:8d:69
        IPADDR=172.16.0.3
        NETMASK=255.255.248.0
        NM_CONTROLLED=no
      • Run the following command.

        # ip link set up priveth0
    • Configure the second private NIC.

      You can configure the second private NIC in the same way. Use priveth1 instead of priveth0 for the second private NIC. You do not need to provide an IPADDR entry for priveth1; a sample ifcfg-priveth1 file is shown below.
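
      For reference, the resulting ifcfg-priveth1 file might look like the following (the MAC address shown is hypothetical; note that IPADDR and NETMASK are omitted):

        DEVICE=priveth1
        NAME=priveth1
        BOOTPROTO=none
        TYPE=Ethernet
        ONBOOT=yes
        HWADDR=00:0c:29:0c:8d:70
        NM_CONTROLLED=no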

  12. Configure the public NIC.
    # cd /etc/sysconfig/network-scripts/
    • Configure the second public NIC, pubeth1 (the NIC on which the host IP is not already configured).

      • Run the following command:

        # ip link set down pubeth1
      • Update the ifcfg-pubeth1 file with the following:

        DEVICE=pubeth1
        NAME=pubeth1
        TYPE=Ethernet
        BOOTPROTO=none
        ONBOOT=yes
      • Add entries in the ifcfg-pubeth1 file.

        HWADDR=<MAC address>
        IPADDR=<pubeth1_pub_ip>
        NETMASK=<netmask>
        NM_CONTROLLED=no
      • Run the following command.

        # ip link set up pubeth1
    • Configure the first public NIC, pubeth0.

      • Because the first public NIC goes down during this step, make sure that you access the system directly from its console.

      • Run the following command:

        # ip link set down pubeth0
      • Update the ifcfg-pubeth0 file with the following:

        DEVICE=pubeth0
        NAME=pubeth0
        TYPE=Ethernet
        BOOTPROTO=none
        ONBOOT=yes
      • Add entries in the ifcfg-pubeth0 file.

        HWADDR=<MAC address>
        IPADDR=<pubeth0_pub_ip>
        NETMASK=<netmask>
        NM_CONTROLLED=no
      • Run the following command.

        # ip link set up pubeth0
      • Verify whether pubeth1 is down. If it is, bring it online.

        # ip link set up pubeth1
      • Verify the changes.

        # ip a
      • Run the following command.

        # service network restart

        SSH to the IP addresses configured above should work after you start the sshd service.
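
        For reference, a completed ifcfg-pubeth0 file might look like the following (the MAC address and IP address shown are hypothetical; substitute your own values):

        DEVICE=pubeth0
        NAME=pubeth0
        TYPE=Ethernet
        BOOTPROTO=none
        ONBOOT=yes
        HWADDR=00:0c:29:0c:8d:5a
        IPADDR=10.182.128.201
        NETMASK=255.255.248.0
        NM_CONTROLLED=no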

  13. Configure the DNS.

    Update the /etc/resolv.conf file by adding the following entries:

    nameserver <DNS>
    domain <master node name>

    For example:

    nameserver 10.182.128.134
    domain clus_01
  14. Configure the gateway.

    Update the /etc/sysconfig/network file.

    GATEWAY=$gateway
    NOZEROCONF=yes
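
    For example (the gateway address shown is hypothetical):

    GATEWAY=10.182.128.1
    NOZEROCONF=yes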
  15. Update the configfileTemplate file.
    • Enter the following command:

      # cd /access_build_dir/rhel7_x86_64/manual_install/network
    • Update the configfileTemplate file with the current system details:

      • Use master as the mode for the master node and slave as the mode for the other nodes.

      • The configuration utility script uses this template file to create configuration files.

      • Provide the same name (current host name) in old_hostname and new_hostname.

  16. Generate the network configuration files.
    • The configuration utility script named configNetworkHelper.pl creates the required configuration files.

      # cd /access_build_dir/rhel7_x86_64/manual_install/network
      # chmod +x configNetworkHelper.pl
    • Run the configuration utility script.

      # ./configNetworkHelper.pl -f configfileTemplate
    • # cat /opt/VRTSnas/scripts/net/network_options.conf > 
      /opt/VRTSnas/conf/network_options.conf
    • # sed -i -e '$a\' /opt/VRTSnas/conf/net_console_ip.conf
    • Update the /etc/hosts file.

      # echo "172.16.0.3 	<master hostname>" >> /etc/hosts
      # echo "172.16.0.4 	<slave node name>" >> /etc/hosts

      For example:

      # echo "172.16.0.3 	clus_01" >> /etc/hosts
      # echo "172.16.0.4 	clus_02" >> /etc/hosts
  17. Create the S3 configuration file.
    # cat /opt/VRTSnas/conf/ssnas.yml
    ObjectAccess:
      config: {admin_port: 8144, s3_port: 8143, server_enable: 'no', 
      ssl: 'no'}
      defaults:
        fs_blksize: '8192'
        fs_encrypt: 'off'
        fs_nmirrors: '2'
        fs_options: ''
        fs_pdirenable: 'yes'
        fs_protection: disk
        fs_sharing: 'no'
        fs_size: 20G
        fs_type: mirrored
        poollist: []
      filesystems: {}
      groups: {}
      pools: {}
  18. Set up the Storage Foundation cluster.
    • # cd /access_build_dir/rhel7_x86_64/manual_install/
      network/SetupClusterScripts
    • # mkdir -p /opt/VRTSperl/lib/site_perl/UXRT72/CPIR/Module/veritas/
    • # cp sfcfsha_ctrl.sh /opt/VRTSperl/lib/site_perl/UXRT72/CPIR/
      Module/veritas/sfcfsha_ctrl.sh
    • # cp module_script.pl /tmp/
    • # chmod +x /tmp/module_script.pl
    • Update the cluster name, system name, and NIC name in the following command and execute it:

      # /tmp/module_script.pl veritas::sfcfsha_config '{"cluster_name" => 
      "<Provide cluster name here>","component" => "sfcfsha","state" => 
      "present","vcs_users" => "admin:password:Administrators,user1:
      passwd1:Operators","vcs_clusterid" => 14865,"cluster_uuid" => 
      "1391a-443ab-2b34c","method" => "ethernet","systems" => 
      "<Provide hostnames separated by comma>","private_link" => 
      "<provide private nic name separated by comma>"}'

      For example, if the cluster name is clus and the host names are clus_01 and clus_02:

      /tmp/module_script.pl veritas::sfcfsha_config '
      {"cluster_name" => "clus","component" => "sfcfsha",
      "state" => "present","vcs_users" => 
      "admin:password:Administrators,user1:passwd1:Operators",
      "vcs_clusterid" => 14865,"cluster_uuid" => "1391a-443ab-2b34c",
      "method" => "ethernet","systems" => "clus_01,clus_02",
      "private_link" => "priveth0,priveth1"}'
    • Update and configure the following files:

      • # rpm -q --queryformat '%{VERSION}|%{BUILDTIME:date}|%{INSTALLTIME:date}|%{VERSION}\n'
        VRTSnas > /opt/VRTSnas/conf/version.conf
      • # echo NORMAL > /opt/VRTSnas/conf/cluster_type
      • # echo 'path /opt/VRTSsnas/core/kernel/' >> /etc/kdump.conf
      • # sed -i '/^core_collector\b/d;' /etc/kdump.conf
      • # echo 'core_collector makedumpfile -c --message-level 1 -d 31' >> 
        /etc/kdump.conf
  19. Start the Veritas Access product processes.
    • Provide the current host name in the following command and execute it.

      # /tmp/module_script.pl veritas::process '{"state" => "present",
      "seednode" => "<provide current hostname here>","component"
       => "sfcfsha"}'

      For example, if the host name is clus_01:

      # /tmp/module_script.pl veritas::process '{"state" => 
      "present","seednode" => "clus_01","component" => "sfcfsha"}'

      If you are running it on clus_02, then you have to provide "seednode" => "clus_02".

    • Run the following command.

      # /opt/VRTSnas/pysnas/install/install_tasks.py 
      all_services_running serial
  20. Create the CVM group.

    If the /etc/vx/reconfig.d/state.d/install-db file exists, then execute the following command.

    # mv /etc/vx/reconfig.d/state.d/install-db 
    /etc/vx/reconfig.d/state.d/install-db.a

    If CVM is not configured already, run the following command on the master node.

    # /opt/VRTS/bin/cfscluster config -t 200 -s
  21. Enable hacli.

    Check the /etc/VRTSvcs/conf/config/main.cf file. If HacliUserLevel = COMMANDROOT already exists, move on to step 22. Otherwise, follow the steps below to enable hacli on your system.

    # /opt/VRTS/bin/hastop -local

    Update the /etc/VRTSvcs/conf/config/main.cf file.

    If it does not already exist, add the following line inside the cluster <cluster name> ( ) definition:

    HacliUserLevel = COMMANDROOT

    For example:

    cluster clus (
        UserNames = { admin = aHIaHChEIdIIgQIcHF, user1 = aHIaHChEIdIIgFEb }
        Administrators = { admin }
        Operators = { user1 }
        HacliUserLevel = COMMANDROOT
        )
    # /opt/VRTS/bin/hastart

    Verify that hacli is working.

    # /opt/VRTS/bin/hacli -cmd "ls /" -sys clus_01
  22. Verify that the HAD daemon is running.
    # /opt/VRTS/bin/hastatus -sum
  23. Configure Veritas Access on the second node by following steps 1 to 22.
  24. Verify that the system is configured correctly.
    • Verify that LLT is configured correctly.

      # lltconfig -a list

      For example:

      [root@clus_02 SetupClusterScripts]# lltconfig -a list
      Link 0 (priveth0):
        Node   0 clus_01   :   00:0C:29:0C:8D:69
        Node   1 clus_02   :   00:0C:29:F0:CC:B6  permanent

      Link 1 (priveth1):
        Node   0 clus_01   :   00:0C:29:0C:8D:5F
        Node   1 clus_02   :   00:0C:29:F0:CC:AC  permanent
    • Verify that GAB is configured properly.

      # gabconfig -a

      For example:

      [root@clus_01 network]# gabconfig -a
      GAB Port Memberships
      ===============================================================
      Port a gen   43b804 membership 01
      Port b gen   43b807 membership 01
      Port h gen   43b821 membership 01
      
    • Verify the LLT state.

      # lltstat -nvv

      For example:

      [root@clus_01 network]# lltstat -nvv
      LLT node information:
          Node         State     Link      Status    Address
         * 0 clus_01   OPEN
                                 priveth0  UP        00:0C:29:0C:8D:69
                                 priveth1  UP        00:0C:29:0C:8D:5F
           1 clus_02   OPEN
                                 priveth0  UP        00:0C:29:F0:CC:B6
                                 priveth1  UP        00:0C:29:F0:CC:AC
           2           CONNWAIT
                                 priveth0  DOWN
                                 priveth1  DOWN
    • The vxconfigd daemon should be online on both nodes.

      # ps -ef | grep vxconfigd

      For example:

      # ps -ef | grep vxconfigd
      root   13393 1  0 01:33 ?  00:00:00 vxconfigd -k -m disable -x syslog
  25. Run the Veritas Access post-start actions.
    • Make sure that HAD is running on all the nodes.

      # /opt/VRTS/bin/hastatus
    • On all the nodes, create a communication.conf file to enable hacli instead of ssh.

      vim /opt/VRTSnas/conf/communication.conf
      {
      	"WorkingVersion": "1",
      	"Version": "1",
      	"CommunicationType": "HACLI"
      }
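
      A non-interactive alternative that writes the same content is a here-document (a sketch equivalent to editing the file in vim):

      # cat > /opt/VRTSnas/conf/communication.conf << 'EOF'
      {
          "WorkingVersion": "1",
          "Version": "1",
          "CommunicationType": "HACLI"
      }
      EOF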
    • Run the installer to install Veritas Access. Run the following command only on the master node.

      # /opt/VRTSnas/install/image_install/installer -m master
  26. Run the join operation on the slave node.
    # /opt/VRTSnas/install/image_install/installer -m join
  27. Run the following command on both the nodes.
    # echo "<first private nic name>" >
    /opt/VRTSnas/conf/net_priv_dev.conf

    For example:

    # echo "priveth0" > /opt/VRTSnas/conf/net_priv_dev.conf
  28. Enable NFS resources. Run the following commands on the master node.
    # /opt/VRTS/bin/haconf -makerw 
    # /opt/VRTS/bin/hares -modify ssnas_nfs Enabled 1
    # /opt/VRTS/bin/haconf -dump -makero
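
    To confirm the change, you can query the resource attribute with the VCS command line (a minimal check; the expected value is 1):

    # /opt/VRTS/bin/hares -value ssnas_nfs Enabled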

    You can now use the two-node Veritas Access cluster.