Veritas Access Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About NIC bonding and NIC exclusion
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access and operating system
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Troubleshooting the LTR upgrade
- Appendix C. Configuring the secure shell for communications
Upgrading the operating system and Veritas Access
Veritas Access supports the following upgrade paths on RHEL.
Table: Supported upgrade paths on RHEL

| From product version | From operating system versions | To operating system versions | To product version |
|---|---|---|---|
| 7.2.1.1 | RHEL 6 Update 6, 7, and 8 | RHEL 7 Update 3 and 4 | 7.3.1 |
| 7.3 | RHEL 6 Update 6, 7, and 8 | RHEL 7 Update 3 and 4 | 7.3.1 |
This process is required for upgrading a Veritas Access cluster that is configured for Long-term Retention (LTR). Stop all backup and restore jobs from NetBackup before you start the upgrade process.
For the LTR upgrade scenarios, you need to use the following scripts:
- `preUpgrade_ltr_access731.py`
- `postUpgrade_ltr_access731.py`
You need to execute the `preUpgrade_ltr_access731.py` script, which creates the `odd_cache_fs` file system to back up the cache data of the OpenDedup volumes. The size of this file system is determined by the current cache size (`/opt/sdfs`). The file system is created in the pool(s) configured as the default pool for ObjectAccess, so sufficient space must be available in those pool(s). After the `odd_cache_fs` file system is provisioned, all the OpenDedup volumes are taken offline, and the configuration and the cache data are backed up.
After the upgrade of the cluster is completed, you need to execute the `postUpgrade_ltr_access731.py` script, which restores all the configurations and then brings all the OpenDedup volumes back online.
A one-time tier policy is created for the configured cloud tiers to move the OpenDedup metadata files (those with the `.6442` extension) from the cloud tier to on-premises storage. OpenDedup needs these metadata files to verify and restore configurations; if they remain on a cloud tier, the performance of these operations may degrade.
Upgrading the operating system and Veritas Access includes the following steps:
- Pre-upgrade steps only for the LTR configured Veritas Access cluster
- Export the Veritas Access configurations by using the script provided by Veritas Access
- Copy the configuration file
- Install RHEL 7.3 or 7.4
- Install Veritas Access 7.3.1
- Import the Veritas Access configurations
- Verify the imported Veritas Access configurations
- Post-upgrade steps only for the LTR configured Veritas Access cluster
Pre-upgrade steps only for the LTR configured Veritas Access cluster
Note:
These steps are required when OpenDedup volumes are provisioned on the Veritas Access cluster.
- Ensure that the backup and/or restore jobs from NetBackup are stopped.
- From the ISO, copy the `upgrade_scripts/preUpgrade_ltr_access731.py` script to `/` on each node where an OpenDedup volume is online.
- Execute the `preUpgrade_ltr_access731.py` script one-by-one on each node where an OpenDedup volume is online.
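The two steps above can be sketched as a small shell helper. The node names, the ISO mount point (`/mnt/iso`), and the use of `scp`/`ssh` are assumptions; the helper only prints the commands (a dry run) so the sequence can be reviewed before executing it on a real cluster.

```shell
# Dry-run sketch of the LTR pre-upgrade steps. Node names and the ISO
# mount point are assumptions; adjust them for your cluster.
preupgrade_commands() {
  iso_mount=$1; shift
  for node in "$@"; do
    # Copy the script to / on each node with an online OpenDedup volume ...
    printf 'scp %s/upgrade_scripts/preUpgrade_ltr_access731.py root@%s:/\n' \
      "$iso_mount" "$node"
    # ... then run it on that node, one node at a time.
    printf 'ssh root@%s python /preUpgrade_ltr_access731.py\n' "$node"
  done
}

preupgrade_commands /mnt/iso upgrade_01 upgrade_02
```

Printing instead of executing keeps the sketch safe to run anywhere; replace the `printf` calls with the real `scp`/`ssh` commands once the node list is confirmed.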
To export the Veritas Access configurations
- Prerequisites:
  - Any supported RHEL 6 version is installed.
  - Veritas Access version 7.2.1.1 or 7.3 is installed.
  - Make sure that you have stopped all I/Os and all Veritas Access services (CIFS, NFS, FTP, and so on) by using CLISH.
- From the ISO, copy the `upgrade_scripts/config_export` directory to `/` on the cluster node on which the ManagementConsole service group is online.
- From that directory, run the following command in a shell (terminal) as the `root` user to export the Veritas Access configurations:
  `/bin/bash -f export_lib.sh export local <filename>`
To verify the Veritas Access configuration export
- Run the following command in CLISH to see the list of available configurations:
  `system config list`
  The configuration files can be found in `/opt/VRTSnas/conf/backup`.
Note:
Store these configuration files on a system outside the cluster so that they are not lost if the cluster nodes are damaged.
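Copying the exported files off the cluster can be sketched as follows. The configuration name `access_pre_upgrade` and the destination `admin@backup-host:/safe/veritas-configs/` are assumptions; the command is printed (a dry run) so it can be reviewed before running it.

```shell
# Sketch: copy the exported configuration files to a system outside the
# cluster. CONFIG and DEST are placeholders; substitute your configuration
# name and a real destination host and path.
CONFIG=access_pre_upgrade
DEST=admin@backup-host:/safe/veritas-configs/
# Exported configurations land under /opt/VRTSnas/conf/backup.
printf 'scp /opt/VRTSnas/conf/backup/%s* %s\n' "$CONFIG" "$DEST"
```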
To install RHEL 7.3 or 7.4
- Prerequisites:
Make sure that you stop all the running modules from CLISH and that no I/O is in progress.
Run the `network ip addr show` and `cluster show` commands in CLISH before you install RHEL 7. Make a note of the IP addresses and the cluster node names, and make sure to use the same IP addresses and cluster name when you install Veritas Access after RHEL 7 is installed. Examples:
```
upgrade> network ip addr show
IP              Netmask/Prefix  Device   Node        Type      Status
--              --------------  ------   ----        ----      ------
192.168.10.151  255.255.255.0   pubeth0  upgrade_01  Physical
192.168.10.158  255.255.255.0   pubeth1  upgrade_01  Physical
192.168.10.152  255.255.255.0   pubeth0  upgrade_02  Physical
192.168.10.159  255.255.255.0   pubeth1  upgrade_02  Physical
192.168.10.174  255.255.255.0   pubeth0  upgrade_01  Virtual   ONLINE (Con IP)
192.168.10.160  255.255.255.0   pubeth0  upgrade_01  Virtual   ONLINE
192.168.10.161  255.255.255.0   pubeth1  upgrade_01  Virtual   ONLINE
```
```
upgrade> cluster show
Node        State    CPU(15 min)  pubeth0(15 min)      pubeth1(15 min)
                     %            rx(MB/s)  tx(MB/s)   rx(MB/s)  tx(MB/s)
----        -----    -----------  --------  --------   --------  --------
upgrade_01  RUNNING  11.52        0.67      0.06       0.60      0.00
upgrade_02  RUNNING  4.19         0.61      0.05       0.60      0.00
```
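The physical addresses you must record can be pulled out of the `network ip addr show` output mechanically. A minimal sketch, assuming the output has been saved to `/tmp/net.txt` (the sample below mirrors the example output above):

```shell
# Sketch: extract node, device, and IP for the Physical entries from saved
# `network ip addr show` output, so the same addresses can be reused after
# the reinstall. Fields: IP, netmask, device, node, type[, status].
cat > /tmp/net.txt <<'EOF'
192.168.10.151 255.255.255.0 pubeth0 upgrade_01 Physical
192.168.10.158 255.255.255.0 pubeth1 upgrade_01 Physical
192.168.10.152 255.255.255.0 pubeth0 upgrade_02 Physical
192.168.10.159 255.255.255.0 pubeth1 upgrade_02 Physical
192.168.10.174 255.255.255.0 pubeth0 upgrade_01 Virtual
EOF
# Keep only the Physical entries: node name, device, IP address.
awk '$5 == "Physical" {print $4, $3, $1}' /tmp/net.txt
```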
Note:
In this example, the cluster name is `upgrade` and the cluster node names are `upgrade_01` and `upgrade_02`.
- Restart all the nodes of the cluster.
- Install RHEL 7.3 or 7.4 on the desired nodes.
See Installing the operating system on the target Veritas Access cluster.
Note:
It is recommended to select the same disk or disks on which RHEL 6 was installed. Do not select any other disks; they may be part of a storage pool, and installing on them results in data loss.
To install Veritas Access 7.3.1
- After the restart, when the nodes are up, start the Veritas Access 7.3.1 installation by using CPI.
Note:
Make sure to use the same IP addresses and cluster name which were used for the Veritas Access on RHEL 6.
To verify the Veritas Access installation
- By using the console IP, check whether the CLISH is accessible.
- Run the following command in CLISH to see whether the disks are accessible:
  `storage disk list`
  Note: If the disks are not visible in the CLISH output, run the `storage scanbus force` command in CLISH.
- Run the following command in CLISH to see whether the pools are accessible:
  `storage pool list`
  Note: If the pools are not visible in the CLISH output, run the `storage scanbus force` command in CLISH.
- Run the following command in CLISH to see whether the file systems are accessible:
  `storage fs list`
  Note: If the file systems are not visible in the CLISH output, run the `storage scanbus force` command in CLISH.
- Make sure that the file systems are online. If a file system is not online, run the following command in CLISH to bring it online:
  `storage fs online <fs name>`
To import the Veritas Access configuration
- Prerequisites:
Make sure that the file systems are online. If the file systems are not online, you need to run the following command in CLISH to bring them online:
storage fs online <fs name>
Note:
Make sure that the cluster uses the same IP addresses and cluster name which were used for the Veritas Access on RHEL 6.
If the virtual IP addresses that were used for Veritas Access on RHEL 6 were not added during installation, add them from CLISH after Veritas Access is installed on RHEL 7, and then import the configuration.
- Copy the exported configuration files to the cluster nodes in the following location:
/opt/VRTSnas/conf/backup/
- Run the following command in CLISH to see the available exported configuration:
system config list
- Log in to CLISH and import the module configuration by using the following command:
system config import local <config-filename> <module-to-import>
The following modules can be imported:
```
upgrade> system config import local
system config import local <file_name> [config-type]
    -- Import the configuration which is stored locally
       file_name   : configuration file name
       config-type : input type of configuration to import
                     (network/admin/all/report/system/support/
                      cluster_specific/all_except_cluster_specific/nfs/cifs/
                      ftp/backup/replication/storage_schedules/storage_quota/
                      storage_fs_alert/storage_fs_policy/compress_schedules/
                      defrag_schedules/storage_dedup/smartio/target/
                      object_access/loadbalance/opendedup) [all]
upgrade> system config import local
```
Note:
The module names are auto-suggested in CLISH.
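When several modules are imported one after another, the CLISH commands can be generated with a small helper. A sketch, where the configuration file name `access_pre_upgrade` and the module list are assumptions; pick the modules you actually exported:

```shell
# Sketch: print one `system config import local` command per module.
# The commands are printed for review, then pasted or typed into CLISH.
import_commands() {
  config=$1; shift
  for module in "$@"; do
    printf 'system config import local %s %s\n' "$config" "$module"
  done
}

import_commands access_pre_upgrade nfs cifs network
```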
Post-upgrade steps only for the LTR configured Veritas Access cluster
Note:
These steps are required when OpenDedup volumes are provisioned on the Veritas Access cluster.
- From the ISO, copy the `upgrade_scripts/postUpgrade_ltr_access731.py` script to `/` on the Management Console node.
- Execute the `postUpgrade_ltr_access731.py` script.
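As with the pre-upgrade steps, the post-upgrade sequence can be sketched as printed commands. The node name `upgrade_01` (the Management Console node) and the ISO mount point `/mnt/iso` are assumptions; review the output before running the real commands.

```shell
# Dry-run sketch of the LTR post-upgrade step: copy the script to / on the
# Management Console node and run it there.
NODE=upgrade_01
ISO=/mnt/iso
printf 'scp %s/upgrade_scripts/postUpgrade_ltr_access731.py root@%s:/\n' "$ISO" "$NODE"
printf 'ssh root@%s python /postUpgrade_ltr_access731.py\n' "$NODE"
```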