Veritas Access Installation Guide
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading the operating system and Veritas Access
- Performing a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
Deploying Veritas Access manually on a two-node cluster in a non-SSH environment
This section describes the manual steps for deploying a two-node Veritas Access cluster when SSH communication is disabled.
Prerequisites
Consider a two-node cluster.
The supported operating system version is RHEL 7.4.
It is assumed that the Veritas Access image is present on your local system at the
/access_build_dir/rhel7_x86_64/
location. The cluster is named clus and the cluster nodes are named clus_01 and clus_02. The cluster name should be unique for all nodes.
SSH service is stopped on all nodes.
Assume that the public NICs are pubeth0 and pubeth1, and that the private NICs are priveth0 and priveth1. NIC names must be consistent across all nodes: the public NIC names and the private NIC names must be the same on every node.
Use 172.16.0.3 as the private IP address for clus_01 and 172.16.0.4 as the private IP address for clus_02.
To deploy Veritas Access manually on a two-node cluster
- Copy the Veritas Access image to all the nodes of the desired cluster.
- Stop the SSH daemon on all the nodes.
# systemctl stop sshd
- Verify that the following rpms are installed. If they are not, install them from the RHEL repository.
bash-4.2.46-28.el7.x86_64 lsscsi-0.27-6.el7.x86_64 initscripts-9.49.39-1.el7.x86_64 iproute-3.10.0-87.el7.x86_64 kmod-20-15.el7.x86_64 coreutils-8.22-18.el7.x86_64 binutils-2.25.1-31.base.el7.x86_64 python-requests-2.6.0-1.el7_1.noarch python-urllib3-1.10.2-3.el7.noarch
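To spot any missing packages quickly, a short shell loop such as the following can help (a sketch; it checks by package name only, not the exact versions listed above):
# for p in bash lsscsi initscripts iproute kmod coreutils binutils python-requests python-urllib3; do rpm -q $p >/dev/null 2>&1 || echo "missing: $p"; done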
- Install the required operating system rpms.
Create a repo file with the following content:
# cat /etc/yum.repos.d/os.repo
[veritas-access-os-rpms]
name=Veritas Access OS RPMS
baseurl=file:///access_build_dir/rhel7_x86_64/os_rpms/
enabled=1
gpgcheck=0
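One way to create the file in a single step is with a heredoc (a sketch that writes exactly the contents shown above):
# cat > /etc/yum.repos.d/os.repo <<'EOF'
[veritas-access-os-rpms]
name=Veritas Access OS RPMS
baseurl=file:///access_build_dir/rhel7_x86_64/os_rpms/
enabled=1
gpgcheck=0
EOF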
Run the following command:
# yum updateinfo
Run the following command:
# cd /access_build_dir/rhel7_x86_64/os_rpms/
Before running the following command, make sure that there is no RHEL subscription in the system. The yum repolist should point to veritas-access-os-rpms only.
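As an optional sanity check (not part of the official procedure), confirm the repository state before you install:
# yum repolist enabled
The output should list veritas-access-os-rpms as the only enabled repository.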
# /usr/bin/yum -y install --setopt=protected_multilib=false perl-5.16.3-292.el7.x86_64.rpm nmap-ncat-6.40-7.el7.x86_64.rpm perl-LDAP-0.56-5.el7.noarch.rpm perl-Convert-ASN1-0.26-4.el7.noarch.rpm net-snmp-5.7.2-28.el7_4.1.x86_64.rpm net-snmp-utils-5.7.2-28.el7_4.1.x86_64.rpm openldap-2.4.44-5.el7.x86_64.rpm nss-pam-ldapd-0.8.13-8.el7.x86_64.rpm rrdtool-1.4.8-9.el7.x86_64.rpm wireshark-1.10.14-14.el7.x86_64.rpm vsftpd-3.0.2-22.el7.x86_64.rpm openssl-1.0.2k-12.el7.x86_64.rpm openssl-devel-1.0.2k-12.el7.x86_64.rpm iscsi-initiator-utils-6.2.0.874-4.el7.x86_64.rpm libpcap-1.5.3-9.el7.x86_64.rpm libtirpc-0.2.4-0.10.el7.x86_64.rpm nfs-utils-1.3.0-0.48.el7_4.2.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-693.el7.x86_64.rpm kernel-debuginfo-3.10.0-693.el7.x86_64.rpm kernel-headers-3.10.0-693.el7.x86_64.rpm krb5-devel-1.15.1-8.el7.x86_64.rpm krb5-libs-1.15.1-8.el7.x86_64.rpm krb5-workstation-1.15.1-8.el7.x86_64.rpm perl-JSON-2.59-2.el7.noarch.rpm telnet-0.17-64.el7.x86_64.rpm apr-devel-1.4.8-3.el7_4.1.x86_64.rpm apr-util-devel-1.5.2-6.el7.x86_64.rpm glibc-common-2.17-196.el7_4.2.x86_64.rpm glibc-headers-2.17-196.el7_4.2.x86_64.rpm glibc-2.17-196.el7_4.2.x86_64.rpm glibc-2.17-196.el7_4.2.i686.rpm glibc-devel-2.17-196.el7_4.2.x86_64.rpm glibc-utils-2.17-196.el7_4.2.x86_64.rpm nscd-2.17-196.el7_4.2.x86_64.rpm sysstat-10.1.5-12.el7.x86_64.rpm libibverbs-utils-13-7.el7.x86_64.rpm libibumad-13-7.el7.x86_64.rpm opensm-3.3.19-1.el7.x86_64.rpm opensm-libs-3.3.19-1.el7.x86_64.rpm infiniband-diags-1.6.7-1.el7.x86_64.rpm sg3_utils-libs-1.37-12.el7.x86_64.rpm sg3_utils-1.37-12.el7.x86_64.rpm libyaml-0.1.4-11.el7_0.x86_64.rpm memcached-1.4.15-10.el7_3.1.x86_64.rpm python-memcached-1.59-1.noarch python-paramiko-2.1.1-4.el7.noarch.rpm python-backports-1.0-8.el7.x86_64.rpm python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch.rpm python-chardet-2.2.1-1.el7_1.noarch.rpm python-six-1.9.0-2.el7.noarch.rpm python-setuptools-0.9.8-7.el7.noarch.rpm python-ipaddress-1.0.16-2.el7.noarch.rpm targetcli-2.1.fb46-1.el7.noarch.rpm fuse-2.9.2-8.el7.x86_64.rpm fuse-devel-2.9.2-8.el7.x86_64.rpm fuse-libs-2.9.2-8.el7.x86_64.rpm PyYAML-3.10-11.el7.x86_64.rpm arptables-0.0.4-8.el7.x86_64.rpm ipvsadm-1.27-7.el7.x86_64.rpm ntpdate-4.2.6p5-25.el7_3.2.x86_64.rpm ntp-4.2.6p5-25.el7_3.2.x86_64.rpm autogen-libopts-5.18-5.el7.x86_64.rpm ethtool-4.8-1.el7.x86_64.rpm net-tools-2.0-0.22.20131004git.el7.x86_64.rpm cups-libs-1.6.3-29.el7.x86_64.rpm avahi-libs-0.6.31-17.el7.x86_64.rpm psmisc-22.20-15.el7.x86_64.rpm strace-4.12-4.el7.x86_64.rpm vim-enhanced-7.4.160-2.el7.x86_64.rpm at-3.1.13-22.el7_4.2.x86_64.rpm rsh-0.17-76.el7_1.1.x86_64.rpm unzip-6.0-16.el7.x86_64.rpm zip-3.0-11.el7.x86_64.rpm bzip2-1.0.6-13.el7.x86_64.rpm mlocate-0.26-6.el7.x86_64.rpm lshw-B.02.18-7.el7.x86_64.rpm jansson-2.10-1.el7.x86_64.rpm ypbind-1.37.1-9.el7.x86_64.rpm yp-tools-2.14-5.el7.x86_64.rpm perl-Net-Telnet-3.03-19.el7.noarch.rpm tzdata-java-2018d-1.el7.noarch.rpm perl-XML-Parser-2.41-10.el7.x86_64.rpm lsof-4.87-4.el7.x86_64.rpm cairo-1.14.8-2.el7.x86_64.rpm pango-1.40.4-1.el7.x86_64.rpm libjpeg-turbo-1.2.90-5.el7.x86_64.rpm sos-3.4-13.el7_4.noarch.rpm traceroute-2.0.22-2.el7.x86_64.rpm openldap-clients-2.4.44-5.el7.x86_64.rpm
- Install the following third-party rpms:
# cd /access_build_dir/rhel7_x86_64/third_party_rpms/
# /bin/rpm -U -v --oldpackage --nodeps --replacefiles --replacepkgs ctdb-4.6.6-1.el7.x86_64.rpm perl-Template-Toolkit-2.24-5.el7.x86_64.rpm perl-Template-Extract-0.41-1.noarch.rpm perl-AppConfig-1.66-20.el7.noarch.rpm perl-File-HomeDir-1.00-4.el7.noarch.rpm samba-common-4.6.11-1.el7.x86_64.rpm samba-common-libs-4.6.11-1.el7.x86_64.rpm samba-client-4.6.11-1.el7.x86_64.rpm samba-client-libs-4.6.11-1.el7.x86_64.rpm samba-4.6.11-1.el7.x86_64.rpm samba-winbind-4.6.11-1.el7.x86_64.rpm samba-winbind-clients-4.6.11-1.el7.x86_64.rpm samba-winbind-krb5-locator-4.6.11-1.el7.x86_64.rpm libsmbclient-4.6.6-1.el7.x86_64.rpm samba-krb5-printing-4.6.11-1.el7.x86_64.rpm samba-libs-4.6.11-1.el7.x86_64.rpm libwbclient-4.6.6-1.el7.x86_64.rpm samba-winbind-modules-4.6.11-1.el7.x86_64.rpm libnet-1.1.6-7.el7.x86_64.rpm lmdb-libs-0.9.13-2.el7.x86_64.rpm nfs-ganesha-2.2.0-0.el7.x86_64.rpm nfs-ganesha-vxfs-2.2.0-0.el7.x86_64.rpm gevent-1.0.2-1.x86_64.rpm python-msgpack-0.4.6-1.el7ost.x86_64.rpm python-flask-0.10.1-4.el7.noarch.rpm python-itsdangerous-0.23-2.el7.noarch.rpm libevent-libs-2.0.22-1.el7.x86_64.rpm python-werkzeug-0.9.1-2.el7.noarch.rpm python-jinja2-2.7.2-2.el7.noarch.rpm sdfs-7.4.0.0-1.x86_64.rpm psutil-4.3.0-1.x86_64.rpm python-crontab-2.2.4-1.noarch.rpm libuv-1.9.1-1.el7.x86_64.rpm
In this command, you can update the rpm versions based on the rpms that are present in the
/access_build_dir/rhel7_x86_64/third_party_rpms/
directory.
- Install the Veritas Access rpms.
Run the following command:
# cd /access_build_dir/rhel7_x86_64/rpms/repodata/
# cat access73.repo > /etc/yum.repos.d/access73.repo
Update the baseurl and gpgkey entries in the /etc/yum.repos.d/access73.repo file so that they point to the yum repository directory:
baseurl=file:///access_build_dir/rhel7_x86_64/rpms/
gpgkey=file:///access_build_dir/rhel7_x86_64/rpms/RPM-GPG-KEY-veritas-access7
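If you want to script this edit, sed substitutions along the following lines work (a sketch; it assumes the repo file already contains baseurl= and gpgkey= lines to replace):
# sed -i 's|^baseurl=.*|baseurl=file:///access_build_dir/rhel7_x86_64/rpms/|' /etc/yum.repos.d/access73.repo
# sed -i 's|^gpgkey=.*|gpgkey=file:///access_build_dir/rhel7_x86_64/rpms/RPM-GPG-KEY-veritas-access7|' /etc/yum.repos.d/access73.repo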
Run the following commands to refresh the yum repository.
# yum repolist
# yum grouplist
Run the following command.
# yum -y groupinstall ACCESS73
Run the following command.
# /opt/VRTS/install/bin/add_install_scripts
- Install the Veritas NetBackup client software.
# cd /access_build_dir/rhel7_x86_64
# /opt/VRTSnas/install/image_install/netbackup/install_netbackup.pl /access_build_dir/rhel7_x86_64/netbackup
- Create soft links for Veritas Access. Run the following command.
# /opt/VRTSnas/pysnas/install/install_tasks.py all_rpms_installed parallel
- License the product.
Register the permanent VLIC key.
# /opt/VRTSvlic/bin/vxlicinstupgrade -k <Key>
Verify that the VLIC key is installed properly:
# /opt/VRTSvlic/bin/vxlicrep
Register the SLIC key file:
# /opt/VRTSslic/bin/vxlicinstupgrade -k $keyfile
Verify that the SLIC key is installed properly:
# /opt/VRTSslic/bin/vxlicrep
- Take a backup of the following files:
/etc/sysconfig/network
/etc/sysconfig/network-scripts/ifcfg-*
/etc/resolv.conf
- Configure the private NIC:
# cd /etc/sysconfig/network-scripts/
Configure the first private NIC.
Run the following command.
# ip link set down priveth0
Update the ifcfg-priveth0 file with the following:
DEVICE=priveth0
NAME=priveth0
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
Add entries in the ifcfg-priveth0 file:
HWADDR=<MAC address>
IPADDR=172.16.0.3 (use IPADDR=172.16.0.4 for the second node)
NETMASK=<netmask>
NM_CONTROLLED=no
For example:
HWADDR=00:0c:29:0c:8d:69
IPADDR=172.16.0.3
NETMASK=255.255.248.0
NM_CONTROLLED=no
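Equivalently, you can write the whole file in one step with a heredoc (a sketch; the HWADDR, IPADDR, and NETMASK values are the example values from above and must be replaced with the ones for your system):
# cat > /etc/sysconfig/network-scripts/ifcfg-priveth0 <<'EOF'
DEVICE=priveth0
NAME=priveth0
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
HWADDR=00:0c:29:0c:8d:69
IPADDR=172.16.0.3
NETMASK=255.255.248.0
NM_CONTROLLED=no
EOF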
Run the following command.
# ip link set up priveth0
Configure the second private NIC.
You can configure the second private NIC in the same way. Instead of priveth0, use priveth1. You do not need to provide an IPADDR for priveth1.
- Configure the public NIC.
# cd /etc/sysconfig/network-scripts/
Configure the second public NIC, pubeth1 (the NIC on which the host IP is not already configured), first.
Run the following command:
# ip link set down pubeth1
Update the ifcfg-pubeth1 file with the following:
DEVICE=pubeth1
NAME=pubeth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
Add entries in the ifcfg-pubeth1 file:
HWADDR=<MAC address>
IPADDR=<pubeth1_pub_ip>
NETMASK=<netmask>
NM_CONTROLLED=no
Run the following command.
# ip link set up pubeth1
Configure the first public NIC, pubeth0.
As the first public NIC will go down, make sure that you access the system directly from its console.
Run the following command:
# ip link set down pubeth0
Update the ifcfg-pubeth0 file with the following:
DEVICE=pubeth0
NAME=pubeth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
Add entries in the ifcfg-pubeth0 file:
HWADDR=<MAC address>
IPADDR=<pubeth0_pub_ip>
NETMASK=<netmask>
NM_CONTROLLED=no
Run the following command.
# ip link set up pubeth0
Verify whether pubeth1 is down. If it is, bring it online:
# ip link set up pubeth1
Verify the changes.
# ip a
Run the following command.
# service network restart
SSH to the IP addresses configured above should work once you start the sshd service.
- Configure the DNS.
Update the /etc/resolv.conf file by adding the following entries:
nameserver <DNS>
domain <master node name>
For example:
nameserver 10.182.128.134
domain clus_01
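A scripted equivalent that appends the same entries (a sketch; substitute your DNS server address and master node name):
# cat >> /etc/resolv.conf <<'EOF'
nameserver 10.182.128.134
domain clus_01
EOF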
- Configure the gateway.
Update the /etc/sysconfig/network file:
GATEWAY=$gateway
NOZEROCONF=yes
- Update the configfileTemplate file.
Enter the following command:
# cd /access_build_dir/rhel7_x86_64/manual_install/network
Update the configfileTemplate file with the current system details:
Use master as the mode for the master node and slave as the mode for the other nodes.
This template file is used by the configuration utility script to create configuration files.
Provide the same name (current host name) in old_hostname and new_hostname.
- Generate the network configuration files.
The configuration utility script named configNetworkHelper.pl creates the required configuration files.
# cd /access_build_dir/rhel7_x86_64/manual_install/network
# chmod +x configNetworkHelper.pl
Run the configuration utility script.
# ./configNetworkHelper.pl -f configfileTemplate
# cat /opt/VRTSnas/scripts/net/network_options.conf > /opt/VRTSnas/conf/network_options.conf
# sed -i -e '$a\' /opt/VRTSnas/conf/net_console_ip.conf
Update the /etc/hosts file:
# echo "172.16.0.3 <master hostname>" >> /etc/hosts
# echo "172.16.0.4 <slave node name>" >> /etc/hosts
For example:
# echo "172.16.0.3 clus_01" >> /etc/hosts
# echo "172.16.0.4 clus_02" >> /etc/hosts
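Because SSH is still disabled at this stage, a simple ping over the private network is a convenient, optional way to confirm that the new host entries resolve and the private links work (run from clus_01):
# ping -c 1 clus_02    # expect a reply from 172.16.0.4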
- Create the S3 configuration file.
# cat /opt/VRTSnas/conf/ssnas.yml
ObjectAccess:
  config: {admin_port: 8144, s3_port: 8143, server_enable: 'no', ssl: 'no'}
  defaults:
    fs_blksize: '8192'
    fs_encrypt: 'off'
    fs_nmirrors: '2'
    fs_options: ''
    fs_pdirenable: 'yes'
    fs_protection: disk
    fs_sharing: 'no'
    fs_size: 20G
    fs_type: mirrored
    poollist: []
  filesystems: {}
  groups: {}
  pools: {}
- Set up the Storage Foundation cluster.
# cd /access_build_dir/rhel7_x86_64/manual_install/network/SetupClusterScripts
# mkdir -p /opt/VRTSperl/lib/site_perl/UXRT72/CPIR/Module/veritas/
# cp sfcfsha_ctrl.sh /opt/VRTSperl/lib/site_perl/UXRT72/CPIR/Module/veritas/sfcfsha_ctrl.sh
# cp module_script.pl /tmp/
# chmod +x /tmp/module_script.pl
Update the cluster name, system name, and NIC name in the following command and execute it:
# /tmp/module_script.pl veritas::sfcfsha_config '{"cluster_name" => "<Provide cluster name here>","component" => "sfcfsha","state" => "present","vcs_users" => "admin:password:Administrators,user1:passwd1:Operators","vcs_clusterid" => 14865,"cluster_uuid" => "1391a-443ab-2b34c","method" => "ethernet","systems" => "<Provide hostnames separated by comma>","private_link" => "<provide private nic names separated by comma>"}'
For example, if the cluster name is clus and the host names are clus_01 and clus_02:
# /tmp/module_script.pl veritas::sfcfsha_config '{"cluster_name" => "clus","component" => "sfcfsha","state" => "present","vcs_users" => "admin:password:Administrators,user1:passwd1:Operators","vcs_clusterid" => 14865,"cluster_uuid" => "1391a-443ab-2b34c","method" => "ethernet","systems" => "clus_01,clus_02","private_link" => "priveth0,priveth1"}'
Update and configure the following files:
# rpm -q --queryformat '%{VERSION}|%{BUILDTIME:date}|%{INSTALLTIME:date}|%{VERSION}\n' VRTSnas > /opt/VRTSnas/conf/version.conf
# echo NORMAL > /opt/VRTSnas/conf/cluster_type
# echo 'path /opt/VRTSsnas/core/kernel/' >> /etc/kdump.conf
# sed -i '/^core_collector\b/d;' /etc/kdump.conf
# echo 'core_collector makedumpfile -c --message-level 1 -d 31' >> /etc/kdump.conf
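To confirm that the kdump.conf edits landed as intended, you can grep for the two lines (an optional check):
# grep -E '^(path|core_collector)' /etc/kdump.conf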
- Start the Veritas Access product processes.
Provide the current host name in the following command and execute it.
# /tmp/module_script.pl veritas::process '{"state" => "present", "seednode" => "<provide current hostname here>","component" => "sfcfsha"}'
For example, if the host name is clus_01:
# /tmp/module_script.pl veritas::process '{"state" => "present","seednode" => "clus_01","component" => "sfcfsha"}'
If you are running it on clus_02, then you have to provide "seednode" => "clus_02".
Run the following command.
# /opt/VRTSnas/pysnas/install/install_tasks.py all_services_running serial
- Create the CVM group.
If the /etc/vx/reconfig.d/state.d/install-db file exists, then execute the following command:
# mv /etc/vx/reconfig.d/state.d/install-db /etc/vx/reconfig.d/state.d/install-db.a
If CVM is not already configured, run the following command on the master node.
# /opt/VRTS/bin/cfscluster config -t 200 -s
- Enable hacli.
Check the /etc/VRTSvcs/conf/config/main.cf file. If HacliUserLevel = COMMANDROOT already exists, move on to step 22; otherwise, follow the steps below to enable hacli on your system.
# /opt/VRTS/bin/hastop -local
Update the /etc/VRTSvcs/conf/config/main.cf file.
If it does not already contain the line, add HacliUserLevel = COMMANDROOT inside the cluster <cluster name> ( ... ) definition.
For example:
cluster clus (
    UserNames = { admin = aHIaHChEIdIIgQIcHF,
        user1 = aHIaHChEIdIIgFEb }
    Administrators = { admin }
    Operators = { user1 }
    HacliUserLevel = COMMANDROOT
    )
Then start VCS again:
# /opt/VRTS/bin/hastart
Verify that hacli is working.
# /opt/VRTS/bin/hacli -cmd "ls /" -sys clus_01
- Verify that the HAD daemon is running.
# /opt/VRTS/bin/hastatus -sum
- Configure Veritas Access on the second node by following steps 1 to 22.
- Verify that the system is configured correctly.
Verify that LLT is configured correctly.
# lltconfig -a list
For example:
[root@clus_02 SetupClusterScripts]# lltconfig -a list
Link 0 (priveth0):
    Node   0 clus_01  :  00:0C:29:0C:8D:69
    Node   1 clus_02  :  00:0C:29:F0:CC:B6 permanent
Link 1 (priveth1):
    Node   0 clus_01  :  00:0C:29:0C:8D:5F
    Node   1 clus_02  :  00:0C:29:F0:CC:AC permanent
Verify that GAB is configured properly.
# gabconfig -a
For example:
[root@clus_01 network]# gabconfig -a
GAB Port Memberships
==================================
Port a gen   43b804 membership 01
Port b gen   43b807 membership 01
Port h gen   43b821 membership 01
Verify the LLT state.
# lltstat -nvv
For example:
[root@clus_01 network]# lltstat -nvv
LLT node information:
    Node        State     Link      Status   Address
  * 0 clus_01   OPEN
                          priveth0  UP       00:0C:29:0C:8D:69
                          priveth1  UP       00:0C:29:0C:8D:5F
    1 clus_02   OPEN
                          priveth0  UP       00:0C:29:F0:CC:B6
                          priveth1  UP       00:0C:29:F0:CC:AC
    2           CONNWAIT
                          priveth0  DOWN
                          priveth1  DOWN
The vxconfigd daemon should be online on both nodes.
# ps -ef | grep vxconfigd
For example:
# ps -ef | grep vxconfigd
root  13393  1  0 01:33 ?  00:00:00 vxconfigd -k -m disable -x syslog
- Run the Veritas Access post-start actions.
Make sure that HAD is running on all the nodes.
# /opt/VRTS/bin/hastatus
On all the nodes, create a communication.conf file to enable hacli instead of ssh:
# vim /opt/VRTSnas/conf/communication.conf
{
    "WorkingVersion": "1",
    "Version": "1",
    "CommunicationType": "HACLI"
}
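If you would rather not open an editor, the same file can be written non-interactively (a sketch that produces exactly the contents shown above):
# cat > /opt/VRTSnas/conf/communication.conf <<'EOF'
{
    "WorkingVersion": "1",
    "Version": "1",
    "CommunicationType": "HACLI"
}
EOF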
Run the installer to install Veritas Access. Run the following command only on the master node.
# /opt/VRTSnas/install/image_install/installer -m master
- Run the join operation on the slave node.
# /opt/VRTSnas/install/image_install/installer -m join
- Run the following command on both the nodes.
# echo "<first private nic name>" > /opt/VRTSnas/conf/net_priv_dev.conf
For example:
# echo "priveth0" > /opt/VRTSnas/conf/net_priv_dev.conf
- Enable NFS resources. Run the following commands on the master node.
# /opt/VRTS/bin/haconf -makerw
# /opt/VRTS/bin/hares -modify ssnas_nfs Enabled 1
# /opt/VRTS/bin/haconf -dump -makero
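As an optional check, verify that the resource is now enabled (hares -value prints a single attribute value):
# /opt/VRTS/bin/hares -value ssnas_nfs Enabled    # expect 1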
You can now use the two-node Veritas Access cluster.