Veritas Access Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access and operating system
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
Upgrading the operating system and Veritas Access
Veritas Access supports the following upgrade paths on RHEL.
Table: Supported upgrade paths on RHEL
From product version | From operating system versions | To operating system versions | To product version |
---|---|---|---|
7.3.0.1 | RHEL 7 Update 3 | RHEL 7 Update 4 | 7.4 |
7.3.1 | RHEL 7 Update 3 | RHEL 7 Update 4 | 7.4 |
Upgrading the operating system and Veritas Access includes the following steps:
- Pre-upgrade steps only for the LTR-configured Veritas Access cluster
- Export the Veritas Access configurations by using the script provided by Veritas Access
- Copy the configuration file
- Install RHEL 7.3 or 7.4
- Install Veritas Access 7.4
- Import the Veritas Access configurations
- Verify the imported Veritas Access configurations
- Post-upgrade steps only for the LTR-configured Veritas Access cluster
Pre-upgrade steps only for the LTR-configured Veritas Access cluster
Note:
These steps are required when OpenDedup volumes are provisioned on the Veritas Access cluster.
- Ensure that the backup or restore jobs from NetBackup are stopped.
- If the upgrade is from 7.3.0.1, copy the upgrade_scripts/odd_config_export_va7301.py script from the ISO to the management console node. If the upgrade is from 7.3.1, copy the upgrade_scripts/odd_config_export_va731.py script from the ISO to the management console node.
- Execute the respective script to export the OpenDedup configuration:
For 7.3.0.1:
python odd_config_export_va7301.py [filename]
For 7.3.1:
python odd_config_export_va731.py [filename]
Note:
If no file name is provided, the default config file name odd_config.exp is used.
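The version check and export-script invocation above can be sketched as follows. This is a minimal illustration, not part of the product: the mapping comes from the steps above, and the helper names are invented for this sketch.

```python
import subprocess

# Export scripts shipped in the ISO's upgrade_scripts/ directory,
# keyed by the installed Veritas Access version (per the steps above).
EXPORT_SCRIPTS = {
    "7.3.0.1": "odd_config_export_va7301.py",
    "7.3.1": "odd_config_export_va731.py",
}


def export_script_for(version):
    """Return the OpenDedup export script that matches the source version."""
    try:
        return EXPORT_SCRIPTS[version]
    except KeyError:
        raise ValueError("unsupported upgrade source version: %s" % version)


def run_export(version, filename="odd_config.exp"):
    # Hypothetical invocation: run on the management console node after
    # the script has been copied there from the ISO. If no file name is
    # given, the documented default odd_config.exp is used.
    subprocess.run(["python", export_script_for(version), filename], check=True)
```

Only the two versions in the upgrade-path table are supported sources, so anything else is rejected early rather than silently running the wrong script.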
To export the Veritas Access configurations
- Prerequisites:
  - Install the RHEL 7.3 version.
  - Verify that Veritas Access version 7.3.0.1 or 7.3.1 is installed.
  - Make sure that you have stopped all I/Os and Veritas Access services (such as CIFS, NFS, and FTP) by using the CLISH.
  - Stop all services by using the hastop -all command.
- From the ISO, copy the upgrade_scripts/config_export directory to the cluster node on which the management console service group is online.
- From that directory, run the following command on the shell (terminal) as the root user to export the Veritas Access configurations:
/bin/bash -f export_lib.sh export local <filename>
To verify the Veritas Access configuration export
- Run the following command on CLISH to see the list of available configurations:
system config list
The configuration files can be found in:
/opt/VRTSnas/conf/backup
Note:
Store these configuration files on a system outside the cluster nodes so that they are not damaged or lost during the upgrade.
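Copying the exported files off the cluster, as recommended above, can be sketched as follows. The destination host and directory are placeholders, not product defaults.

```python
import glob
import subprocess

# Location where Veritas Access stores exported configuration files.
BACKUP_DIR = "/opt/VRTSnas/conf/backup"


def build_scp_command(files, dest_host, dest_dir):
    """Build an scp command that copies the exported configuration files
    to a host outside the cluster (host and path are placeholders)."""
    return ["scp"] + list(files) + ["%s:%s" % (dest_host, dest_dir)]


def copy_configs_off_cluster(dest_host="backup-host", dest_dir="/safe/location"):
    # Collect every exported file in the backup directory and push it
    # to the external host in a single scp invocation.
    files = sorted(glob.glob(BACKUP_DIR + "/*"))
    if files:
        subprocess.run(build_scp_command(files, dest_host, dest_dir), check=True)
```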
To install RHEL 7.4
- Prerequisites:
Make sure that you stop all the running modules on CLISH and no I/O is running.
Run the network ip addr show command and the cluster show command on CLISH before you install RHEL 7.4. Make a note of these IP addresses and cluster node names. Make sure to use the same IP addresses and cluster name while installing the Veritas Access cluster after RHEL 7.4 is installed.
Examples:
upgrade> network ip addr show
IP              Netmask/Prefix  Device   Node        Type      Status
--              --------------  ------   ----        ----      ------
192.168.10.151  255.255.255.0   pubeth0  upgrade_01  Physical
192.168.10.158  255.255.255.0   pubeth1  upgrade_01  Physical
192.168.10.152  255.255.255.0   pubeth0  upgrade_02  Physical
192.168.10.159  255.255.255.0   pubeth1  upgrade_02  Physical
192.168.10.174  255.255.255.0   pubeth0  upgrade_01  Virtual   ONLINE (Con IP)
192.168.10.160  255.255.255.0   pubeth0  upgrade_01  Virtual   ONLINE
192.168.10.161  255.255.255.0   pubeth1  upgrade_01  Virtual   ONLINE
upgrade> cluster show
Node        State    CPU(15 min)  pubeth0(15 min)     pubeth1(15 min)
                     %            rx(MB/s)  tx(MB/s)  rx(MB/s)  tx(MB/s)
----        -----    -----------  --------  --------  --------  --------
upgrade_01  RUNNING  11.52        0.67      0.06      0.60      0.00
upgrade_02  RUNNING  4.19         0.61      0.05      0.60      0.00
Note:
In this example, the cluster name is upgrade and the cluster node names are upgrade_01 and upgrade_02.
- Restart all the nodes of the cluster.
- Install RHEL 7.4 on the desired nodes.
See Installing the operating system on the target Veritas Access cluster.
Note:
It is recommended to install on the same disk or disks on which RHEL 7.3 was installed. Do not select any other disk: other disks may be part of a storage pool, and installing on them can result in data loss.
To install Veritas Access 7.4
- After a restart when the nodes are up, start the Veritas Access 7.4 installation by using the CPI.
Note:
Make sure to use the same IP addresses and cluster name that were used for the Veritas Access installation on RHEL 7.3.
To verify the Veritas Access installation
- By using the console IP, check whether the CLISH is accessible.
- Run the following command in CLISH to see whether the disks are accessible:
storage disk list
Note:
If the disks are not visible in the CLISH output, run the storage scanbus force command in CLISH.
- Run the following command in CLISH to see whether the pools are accessible:
storage pool list
Note:
If the pools are not visible in the CLISH output, run the storage scanbus force command in CLISH.
- Run the following command in CLISH to see whether the file systems are accessible:
storage fs list
Note:
If the file systems are not visible in the CLISH output, run the storage scanbus force command in CLISH.
- Make sure that the file systems are online. If the file systems are not online, run the following command in CLISH to bring them online:
storage fs online <fs name>
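The check-and-online step above can be sketched as a small parser over the `storage fs list` output. The column layout assumed here (file system name first, status in the second column) is an assumption for illustration; the real CLISH output may have additional columns.

```python
def offline_filesystems(fs_list_output):
    """Given `storage fs list` output (assumed layout: file system name in
    the first column, a status such as online/offline in the second),
    return the names of file systems that are not online."""
    offline = []
    for line in fs_list_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1].lower() == "offline":
            offline.append(fields[0])
    return offline


def online_commands(fs_list_output):
    # Emit the CLISH command from the step above for each offline file system.
    return ["storage fs online %s" % name
            for name in offline_filesystems(fs_list_output)]
```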
To import the Veritas Access configuration
- Prerequisites:
Make sure that the file systems are online. If the file systems are not online, you need to run the following command in CLISH to bring them online:
storage fs online <fs name>
Note:
Make sure that the cluster uses the same IP addresses and cluster name that were used for the Veritas Access installation on RHEL 7.3.
If the virtual IP addresses that were used for Veritas Access on RHEL 7.3 were not added during installation, add them from CLISH after Veritas Access is installed on RHEL 7.4, and then import the configuration.
- Copy the exported configuration files to the cluster nodes in the following location:
/opt/VRTSnas/conf/backup/
- Run the following command in CLISH to see the available exported configuration:
system config list
- Log in to CLISH and import the module configuration by using the following command:
system config import local <config-filename> <module-to-import>
The following modules can be imported:
upgrade> system config import local
system config import local <file_name> [config-type]
    -- Import the configuration which is stored locally
    file_name   : configuration file name
    config-type : input type of configuration to import
                  (network/admin/all/report/system/support/cluster_specific/
                  all_except_cluster_specific/nfs/cifs/ftp/backup/replication/
                  storage_schedules/storage_quota/storage_fs_alert/
                  storage_fs_policy/compress_schedules/defrag_schedules/
                  storage_dedup/smartio/target/object_access/loadbalance/
                  opendedup) [all]
upgrade> system config import local
Note:
The module names are auto-suggested in CLISH.
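Validating a module name before building the import command can be sketched as follows; the set of accepted names is taken from the CLISH help listing above, and the helper function is illustrative only.

```python
# Module names accepted by `system config import local`, as shown in the
# CLISH help output above.
IMPORTABLE_MODULES = {
    "network", "admin", "all", "report", "system", "support",
    "cluster_specific", "all_except_cluster_specific", "nfs", "cifs",
    "ftp", "backup", "replication", "storage_schedules", "storage_quota",
    "storage_fs_alert", "storage_fs_policy", "compress_schedules",
    "defrag_schedules", "storage_dedup", "smartio", "target",
    "object_access", "loadbalance", "opendedup",
}


def import_command(config_file, module="all"):
    """Build the CLISH import command, rejecting unknown module names
    before they reach the cluster."""
    if module not in IMPORTABLE_MODULES:
        raise ValueError("unknown config-type: %s" % module)
    return "system config import local %s %s" % (config_file, module)
```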
Post-upgrade steps only for the LTR-configured Veritas Access cluster
Note:
These steps are required in addition to the above steps when OpenDedup volumes are provisioned on the Veritas Access cluster.
- Enable or start the required authentication services (AD, LDAP, or NIS) that are used by the ObjectAccess service.
Note:
If the upgrade is from Veritas Access 7.3.0.1, set the pool for ObjectAccess, and enable the ObjectAccess as follows.
Cluster1> objectaccess set pools pool1
ACCESS ObjectAccess INFO V-493-10-0 Set pools successful. Please make sure the storage is provisioned as per the requirements of the layout.
Cluster1> objectaccess server enable
100% [********************] Enabling ObjectAccess server.
ACCESS ObjectAccess SUCCESS V-493-10-4 ObjectAccess server enabled.
- Start the ObjectAccess service by using the following command:
cluster2> objectaccess server start
ACCESS ObjectAccess SUCCESS V-493-10-4 ObjectAccess started successfully.
- Import the OpenDedup configuration by using the following command.
cluster2> system config import remote <file location> opendedup
Note:
You can import the OpenDedup configuration that you have exported by using the steps provided in the section Pre-upgrade steps only for the LTR-configured Veritas Access cluster.
- Offline all the OpenDedup volumes by using the following command:
cluster2> opendedup volume offline <vol-name>
- Update each OpenDedup volume configuration file /etc/sdfs/<vol-name>-volume-cfg.xml by adding the following parameter to the <extended-config> tag:
dist-layout="false"
Note:
Do not add this parameter to existing OpenDedup volumes, because they may contain data written with the default layout; adding it to an existing volume can result in data corruption.
- To bring all the OpenDedup volumes online, use the following command:
cluster2> opendedup volume online <vol-name>
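The XML edit described in the update step above can be sketched with the standard library. The element name comes from the documentation; the surrounding XML structure of the volume file is an assumption for this example.

```python
import xml.etree.ElementTree as ET


def add_dist_layout(xml_text):
    """Add dist-layout="false" to the <extended-config> tag of an OpenDedup
    volume configuration. Per the note above, this must only be done for
    newly provisioned volumes, never for existing ones."""
    root = ET.fromstring(xml_text)
    # The tag may be the document root or nested inside it (assumed layout).
    ext = root if root.tag == "extended-config" else root.find(".//extended-config")
    if ext is None:
        raise ValueError("no <extended-config> element found")
    ext.set("dist-layout", "false")
    return ET.tostring(root, encoding="unicode")
```

Using a real XML parser, rather than string substitution, keeps any existing attributes on the tag intact and fails loudly if the element is missing.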