Veritas Access Release Notes

Last Published: 2020-10-16
Product(s): Access (7.4.2)
Platform: Linux

- Overview of Veritas Access
- Fixed issues
- Software limitations
  - Limitations on using shared LUNs
  - Flexible Storage Sharing limitations
  - Limitations related to installation and upgrade
  - Limitations in the Backup mode
  - Veritas Access IPv6 limitations
  - FTP limitations
  - Intel Spectre Meltdown limitation
  - Samba ACL performance-related issues
  - Limitations on using InfiniBand NICs in the Veritas Access cluster
  - Limitations related to commands in a non-SSH environment
  - Limitation on using Veritas Access in a virtual machine environment
  - Limitations related to Veritas Data Deduplication
  - NFS-Ganesha limitations
  - Kernel-based NFS v4 limitations
  - File system limitation
  - Veritas Access S3 server limitation
  - Long-term data retention (LTR) limitations
  - Limitation related to replication
- Known issues
  - Veritas Access known issues
    - Admin issues
    - Backup issues
    - CIFS issues
      - Cannot enable the quota on a file system that is appended or added to the list of homedir
      - Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
      - Listing of CIFS shares created on a Veritas Access cluster fails on Windows server or client
      - Default CIFS share has owner other than root
      - CIFS> mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
      - Windows client displays incorrect CIFS home directory share size
      - CIFS share may become unavailable when the CIFS server is in normal mode
      - CIFS share creation does not authenticate AD users
    - Deduplication issues
    - FTP issues
      - If a file system is used as homedir or anonymous_login_dir for FTP, this file system cannot be destroyed
      - The FTP> server start command reports the FTP server to be online even when it is not online
      - The FTP> session showdetails user=<AD username> command does not work
      - If the security in CIFS is not set to Active Directory (AD), you cannot log on to FTP through the AD user
      - If security is set to local, FTP does not work in case of a fresh operating system and Veritas Access installation
      - If the LDAP and local FTP user have the same user name, then the LDAP user cannot perform PUT operations when the security is changed from local to nis-ldap
      - FTP with LDAP as security is not accessible to a client who connects from the console node using virtual IPs
      - The FTP server starts even if the home directory is offline and if the security is changed to local, the FTP client writes on the root file system
    - General issues
    - GUI issues
      - When both continuous and episodic replication links are set up, provisioning of storage using High Availability and Data Protection policies does not work
      - When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
      - When an earlier version of the Veritas Access cluster is upgraded, the GUI shows stale and incomplete data
      - Restarting the server as part of the command to add and remove certificates gives an error on RHEL 7
      - While provisioning an S3 bucket for NetBackup, the bucket creation fails if the device protection is selected as erasurecoded and the failure domain is selected as disk
      - Client certificate validation using OpenSSL ocsp does not work on RHEL 7
      - When you perform the set LDAP operation using the GUI, the operation fails with an error
      - GUI does not support segregated IPv6 addresses while creating CIFS shares using the Enterprise Vault policy
    - Installation and configuration issues
      - After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
      - Running individual Veritas Access scripts may return inconsistent return codes
      - Configuring Veritas Access with the installer fails when the SSH connection is lost
      - Excluding PCIs from the configuration fails when you configure Veritas Access using a response file
      - Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
      - If the same driver node is used for two installations at the same time, then the second installation shows the progress status of the first installation
      - If the same driver node is used for two or more installations at the same time, then the first installation session is terminated
      - If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
      - If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
      - If installing using a response file is started from the cluster node, then the installation session gets terminated after the configuring NICs section
      - After finishing system verification checks, the installer displays a warning message about missing third-party RPMs
      - Phantomgroup for the VLAN device does not come online if you create another VLAN device from the Veritas Access command-line interface after cluster configuration is done
      - Veritas Access fails to install if LDAP or the autofs home directories are preconfigured on the system
      - After the Veritas Access installation is complete, the installer does not clean the SSH keys of the driver node on the Veritas Access nodes from where the installation is triggered
      - Veritas Access installation fails if the nodes have older yum repositories and do not have Internet connectivity to reach RHN repositories
      - Installing Veritas Access with a preconfigured VLAN and a preconfigured bond fails
      - When you configure Veritas Access, the common NICs may not be listed
      - In a mixed mode Veritas Access cluster, after the execution of the Cluster> add node command, one type of unused IP does not get assigned as a physical IP to public NICs
      - NLMGroup service goes into a FAULTED state when the private IP (x.x.x.2) is not free
      - The Cluster> show command does not detect all the nodes of the cluster
    - Internationalization (I18N) issues
    - Networking issues
      - CVM service group goes into faulted state unexpectedly
      - In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
      - The netgroup search does not continue to search in NIS if the entry is not found in LDAP
      - The IPs hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
      - After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
      - Unable to import the network module after an operating system upgrade
      - LDAP with SSL on option does not work if you upgrade Veritas Access
      - Network load balancer does not get configured with IPv6
      - Unable to add an IPv6-default gateway on an IPv4-installed cluster
      - LDAP over SSL may not work in Veritas Access 7.4.2
      - The Network> swap command hangs if any node other than the console node is specified
    - NFS issues
      - Slow performance with Solaris 10 clients with NFS-Ganesha version 4
      - Random-write performance drop of NFS-Ganesha with Linux clients
      - Latest directory content of server is not visible to the client if time is not synchronized across the nodes
      - NFS> share show may list the shares as faulted for some time if you restart the cluster node
      - NFS-Ganesha shares fault after the NFS configuration is imported
      - NFS-Ganesha shares may not come online when the number of shares is more than 500
      - Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha
      - For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time
      - NFS client application may fail with the stale file handle error on node reboot
      - NFS> share show command does not distinguish offline versus online shares
      - Difference in output between NFS> share show and Linux showmount commands
      - NFS mount on client is stalled after you switch the NFS server
      - Kernel-NFS v4 lock failover does not happen correctly in case of a node crash
      - Kernel-NFS v4 export mount for Netgroup does not work correctly
      - NFS-Ganesha share for IPv6 subnet does not work and NFS share becomes faulted
      - When a file system goes into the FAULTED or OFFLINE state, the NFS share groups associated with the file system do not become offline on all the nodes
    - ObjectAccess issues
      - When trying to connect to the S3 server over SSL, the client application may give a warning like "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"
      - If you have upgraded to Veritas Access 7.4.2 from an earlier release, access to the S3 server fails if the cluster name has uppercase letters
      - If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
      - Bucket creation may fail with time-out error
      - Bucket deletion may fail with "No such bucket" or "No such key" error
      - Group configuration does not work in ObjectAccess if the group name contains a space
    - OpenDedup issues
      - The file system storage is not reclaimed after deletion of an OpenDedup volume
      - The Storage> fs online command fails with an EBUSY error
      - Output mismatch in the df -h command for OpenDedup volumes that are backed by a single bucket and mounted on two different media servers
      - The OpenDedup> volume create command does not revert the changes if the command fails during execution
      - Some of the OpenDedup volume stats reset to zero after upgrade
      - OpenDedup volume mount operation fails with an error
      - Restore of data from AWS Glacier fails
      - OpenDedup volumes are not online after an OpenDedup upgrade if there is a change in the cluster name
      - If the Veritas Access master node is restarted when a restore job is in progress and OpenDedup resides on the media server, the restored files may be in an inconsistent state
      - The OpenDedup> volume list command may not show the node IP for a volume
      - When Veritas Access is configured in mixed mode, the configure LTR script randomly chooses a virtual IP from the available Veritas Access virtual IPs
    - OpenStack issues
      - Cinder and Manila shares cannot be distinguished from the Veritas Access command-line interface
      - Cinder volume creation fails after a failure occurs on the target side
      - Cinder volume may fail to attach to the instance
      - Bootable volume creation for an iSCSI driver fails with an I/O error when a qcow image is used
    - Replication issues
      - When running episodic replication and dedup over the same source, the episodic replication file system fails in certain scenarios
      - The System> config import command does not import episodic replication keys and jobs
      - The job uses the schedule on the target after episodic replication failover
      - Episodic replication fails with error "connection reset by peer" if the target node fails over
      - Episodic replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after an upgrade
      - Setting the bandwidth through the GUI is not enabled for episodic replication
      - Episodic replication job with encryption fails after job remove and add link with SSL certificate error
      - Episodic replication job status shows the entry for a link that was removed
      - Episodic replication job modification fails
      - Episodic replication failover does not work
      - Continuous replication fails when the 'had' daemon is restarted on the target manually
      - Continuous replication is unable to go to the replicating state if the Storage Replicated Log becomes full
      - Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
      - Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
      - If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
    - SDS known issues
    - SmartIO issues
    - Storage issues
      - Snapshot mount can fail if the snapshot quota is set
      - Sometimes the Storage> pool rmdisk command does not print a message
      - The Storage> pool rmdisk command sometimes can give an error where the file system name is not printed
      - Not able to enable quota for file system that is newly added in the list of CIFS home directories
      - Destroying the file system may not remove the /etc/mtab entry for the mount point
      - The Storage> fs online command returns an error, but the file system is online after several minutes
      - Removing disks from the pool fails if a DCO exists
      - Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system
      - Rollback refresh fails when running it after running Storage> fs growby or growto commands
      - If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
      - Inconsistent cluster state with management service down when disabling I/O fencing
      - Storage> tier move command failover of node is not working
      - Storage> scanbus operation hangs at the time of I/O fencing operation
      - Rollback service group goes in faulted state when respective cache object is full and there is no way to clear the state
      - Event messages are not generated when cache objects get full
      - The Veritas Access command-line interface should not allow uncompress and compress operations to run on the same file at the same time
      - Storage device fails with SIGBUS signal causing the abnormal termination of the scale-out file system daemon
      - Storage> tier move list command fails if one of the cluster nodes is rebooted
      - Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
      - When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
      - Storage> fs addcolumn operation fails but error notification is not sent
      - Storage> fs-growto and Storage> fs-growby commands give error with isolated disks
      - Unable to create space-optimized rollback when tiering is present
      - Enabling I/O fencing on a setup with Volume Manager objects present fails to import the disk group
      - File system creation fails when the pool contains only one disk
      - After starting the backup service, BackupGrp goes into FAULTED state on some nodes
      - A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command
      - A file system created with a largefs-striped or largefs-mirrored-stripe layout may show incorrect number of columns in the Storage> fs list command
      - File system creation fails with SSD pool
      - A scale-out file system may go into faulted state after the execution of the Storage> fencing off/on command
      - After an Azure tier is added to a scale-out file system, you cannot move files to the Azure tier and the Storage> tier stats command may fail
      - The CVM service group goes into faulted state after you restart the management console node
      - The Storage> fs create command does not display the output correctly if one of the nodes of the cluster is in unknown state
      - Storage> fs growby and growto commands fail if the size of the file system or bucket is full
      - The operating system names of fencing disks are not consistent across the Veritas Access cluster, which may lead to issues
      - The disk group import operation fails and all the services go into failed state when fencing is enabled
      - While creating an erasure-coded file system, a misleading message leads to issues in the execution of the Storage> fs create command
      - The Veritas Access cluster node can get explicitly ejected or aborted from the cluster during recovery when another node joins the cluster after a restart
      - Error while creating a file system stating that the CVM master and management console are not on the same node
      - When you configure disk-based fencing, the cluster does not come online after you restart the node
      - In an erasure-coded file system, when the nodes are restarted, some of the file systems do not get unmounted
      - After a node is restarted, the vxdclid process may generate a core dump
      - The Veritas Access command-line interface may be inaccessible after some nodes in the cluster are restarted
      - The Cluster> shutdown command does not shut down the node
    - System issues
    - Target issues
      - Storage provisioning commands hang on the Veritas Access initiator when LUNs from the Veritas Access target are being used
      - After the Veritas Access cluster recovers from a storage disconnect, the iSCSI LUNs exported from Veritas Access as an iSCSI target may show the wrong content on the initiator side
    - Upgrade issues
      - Some vulnerabilities are present in the python-requests RPM, which impact rolling upgrade when you try to upgrade from 7.4.x to 7.4.2
      - During rolling upgrade, Veritas Access shutdown does not complete successfully
      - CVM is in FAULTED state after you perform a rolling upgrade
      - If rolling upgrade is performed when NFS v4 is configured using NFS lease, the system may hang
    - Veritas Data Deduplication issues
      - The Veritas Data Deduplication storage server does not come online on a newly added node in the cluster if the node was offline when you configured deduplication
      - The Veritas Data Deduplication server goes offline after destroying the bond interface on which the deduplication IP was online
      - If you grow the deduplication pool using the fs> grow command, and then try to grow it further using the dedupe> grow command, the dedupe> grow command fails
      - The Veritas Data Deduplication server goes offline after bond creation using the interface of the deduplication IP
- Getting help

In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
In a mixed IPv4 and IPv6 setup, IP balancing does not consider the IP type, so a node in the cluster might end up with no IPv6 VIP on it. IP balancing should take the IP type into account.
Workaround:
If required, manually bring a VIP of the appropriate IP type online on the affected node.
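For example, if a node is left without an IPv6 VIP, you can bring one online on that node from the Veritas Access command-line interface. This is only a sketch: the IPv6 address and node name below are placeholders, and the exact Network> ip addr syntax may vary between releases, so confirm it against the command reference for your version.

    network> ip addr show
    network> ip addr online 2001:db8::101 node_02

The first command lists the virtual IPs along with the node and status for each, which shows where the IPv6 VIPs are currently online; the second brings the chosen IPv6 VIP online on the node that has none.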