Veritas Access Release Notes
Last Published: 2018-07-23
Product(s): Access (7.3.1)
Platform: Linux
- Overview of Veritas Access
- About this release
- Important release information
- Changes in this release
- IP load balancing
- Veritas Access as an iSCSI target for RHEL 7.3 and 7.4
- Changes to the GUI
- Support for RHEL and OL operating systems
- Erasure-coding support in a cluster file system (CFS) for NFS use case
- Episodic and continuous replication in Veritas Access
- Active-active support for scale-out file system
- Replication on a scale-out file system
- Veritas Access 7.3.0.1 is certified as primary NAS storage for VMware
- Change in documentation
- Partial support for internationalization (I18N)
- Technical preview features
- Fixed issues
- Software limitations
- Limitations on using shared LUNs
- Flexible Storage Sharing limitations
- Limitations related to installation and upgrade
- Limitations in the Backup mode
- Veritas Access IPv6 limitations
- FTP limitations
- Samba ACL performance-related issues
- Veritas Access language support
- Limitations on using InfiniBand NICs in the Veritas Access cluster
- Limitation on using Veritas Access in a virtual machine environment
- NFS-Ganesha limitations
- Kernel-based NFS v4 limitations
- File system limitation
- Veritas Access S3 server limitation
- Long-term data retention limitations
- Limitation related to replication
- Known issues
- Veritas Access known issues
- Backup issues
- CIFS issues
- Cannot enable the quota on a file system that is appended or added to the list of CIFS home directories
- Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
- Default CIFS share has owner other than root
- Listing of CIFS shares created on a Veritas Access cluster fails on Windows server or client
- CIFS> mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
- Deduplication issues
- Enterprise Vault Attach known issues
- FTP issues
- GUI issues
- When both continuous and episodic replication links are set up, provisioning of storage using High Availability and Data Protection policies does not work
- When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
- When an earlier version of the Veritas Access cluster is upgraded, the GUI shows stale and incomplete data
- Restarting the server as part of the command to add and remove certificates gives an error on RHEL 7
- Client certificate validation using OpenSSL ocsp does not work on RHEL7
- Installation and configuration issues
- After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
- Running individual Veritas Access scripts may return inconsistent return codes
- Configuring Veritas Access with the installer fails when the SSH connection is lost
- Excluding PCIs from the configuration fails when you configure Veritas Access using a response file
- Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
- If the same driver node is used for two installations at the same time, then the second installation shows the status of progress of the first installation
- If the same driver node is used for two or more installations at the same time, then the first installation session is terminated
- If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
- If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
- If installing using a response file is started from the cluster node, then the installation session gets terminated after the configuring NICs section
- After finishing system verification checks, the installer displays a warning message about missing third-party RPMs
- Installer appears to hang when you use the installaccess command to install and configure the product from a node of the cluster
- After phase 1 of rolling upgrade is complete on the first node, a panic occurs on the second node
- Argparse module does not get installed during OS installation in RHEL 6.6
- Phantomgroup for the VLAN device does not come online if you create another VLAN device from CLISH after cluster configuration is done
- Veritas Access fails to install if LDAP or the autofs home directories are preconfigured on the system
- When performing a rolling upgrade from Veritas Access 7.3.0.1 to 7.3.1 on RHEL 7.3, CIFS services get into a faulted state after the nodes are upgraded to Veritas Access 7.3.1
- Veritas Access installation fails if the nodes have older yum repositories and do not have Internet connectivity to reach RHN repositories
- Networking issues
- CVM service group goes into faulted state unexpectedly
- In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
- The netgroup search does not continue to search in NIS if the entry is not found in LDAP
- VIP and PIP hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
- After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
- NFS issues
- Slow performance with Solaris 10 clients with NFS-Ganesha version 4
- Random-write performance drop of NFS-Ganesha with Linux clients
- Latest directory content of server is not visible to the client if time is not synchronized across the nodes
- NFS> share show may list the shares as faulted for some time if you restart the cluster node
- NFS-Ganesha shares fault after the NFS configuration is imported
- NFS-Ganesha shares may not come online when the number of shares is more than 500
- Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha
- For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time
- NFS client application may fail with the stale file handle error on node reboot
- NFS> share show command does not distinguish offline versus online shares
- Difference in output between NFS> share show and Linux showmount commands
- NFS mount on client is stalled after you switch the NFS server
- Kernel NFS v4 lock failover does not happen correctly in case of a node crash
- Kernel NFS v4 export mount for Netgroup does not work correctly
- ObjectAccess issues
- ObjectAccess server goes into a faulted state while doing multi-part upload of a 10-GB file with a chunk size of 5 MB
- When trying to connect to the S3 server over SSL, the S3 client application may give a warning like "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"
- If you have upgraded to Veritas Access 7.3.1 from an earlier release, access to S3 server fails if the cluster name has uppercase letters
- If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
- ObjectAccess operations do not work correctly in virtual hosted-style addressing when SSL is enabled
- Bucket creation may fail with time-out error
- Bucket deletion may fail with "No such bucket" or "No such key" error
- Temporary objects may be present in the bucket in case of multi-part upload
- Group configuration does not work in ObjectAccess if the group name contains a space
- OpenDedup issues
- The file system storage is not reclaimed after deletion of an OpenDedup volume
- Removing or modifying the virtual IP associated to an OpenDedup volume leads to the OpenDedup volume going into an inconsistent state
- OpenDedup port is blocked if the firewall is disabled and then enabled again
- The Storage> fs online command fails with an EBUSY error
- The OpenDedup volume is not mounted automatically by the /etc/fstab on the media server after a restart operation
- Output mismatch in the df -h command for OpenDedup volumes that are backed by a single bucket and mounted on two different media servers
- OpenStack issues
- Replication issues
- When running episodic replication and dedup over the same source, the episodic replication file system fails in certain scenarios
- The System> config import command does not import episodic replication keys and jobs
- The job uses the schedule on the target after episodic replication failover
- Episodic replication fails with error "connection reset by peer" if the target node fails over
- Episodic replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after an upgrade
- Setting the bandwidth through the GUI is not enabled for episodic replication
- Episodic replication job with encryption fails after job remove and add link with SSL certificate error
- Episodic replication job status shows the entry for a link that was removed
- Episodic replication job modification fails
- Episodic replication failover does not work
- Continuous replication fails when the 'had' daemon is restarted on the target manually
- Continuous replication is unable to come in replicating state if the Storage Replicated Log becomes full
- Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
- Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
- If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
- SmartIO issues
- Storage issues
- Snapshot mount can fail if the snapshot quota is set
- Sometimes the Storage> pool rmdisk command does not print a message
- The Storage> pool rmdisk command sometimes can give an error where the file system name is not printed
- Not able to enable quota for file system that is newly added in the list of CIFS home directories
- Destroying the file system may not remove the /etc/mtab entry for the mount point
- The Storage> fs online command returns an error, but the file system is online after several minutes
- Removing disks from the pool fails if a DCO exists
- Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system
- Rollback refresh fails when running it after running Storage> fs growby or growto commands
- If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
- Inconsistent cluster state with management service down when disabling I/O fencing
- Storage> tier move command failover of node is not working
- Storage> scanbus operation hangs at the time of I/O fencing operation
- Rollback service group goes into a faulted state when the respective cache object is full and there is no way to clear the state
- Event messages are not generated when cache objects get full
- Veritas Access CLISH interface should not allow uncompress and compress operations to run on the same file at the same time
- Storage device fails with SIGBUS signal causing the abnormal termination of the scale-out file system daemon
- Storage> tier move list command fails if one of the cluster nodes is rebooted
- Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
- When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
- Storage> fs addcolumn operation fails but error notification is not sent
- Storage> fs growto and Storage> fs growby commands give an error with isolated disks
- Unable to create space-optimized rollback when tiering is present
- Enabling I/O fencing on a set up with Volume Manager objects present fails to import the disk group
- File system creation fails when the pool contains only one disk
- After starting the backup service, BackupGrp goes into FAULTED state on some nodes
- A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command
- A file system created with a largefs-striped or largefs-mirrored-stripe layout may show incorrect number of columns in the Storage> fs list command
- File system creation fails with SSD pool
- A scale-out file system may go into faulted state after the execution of Storage> fencing off/on command
- The CVM service group goes into a faulted state after you restart the management console node
- After an Azure tier is added to a scale-out file system, you cannot move files to the Azure tier and the Storage> tier stats command may fail
- System issues
- Target issues
- Getting help
In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
In a mixed IPv4 and IPv6 setup, IP balancing does not take the IP type into account when it distributes virtual IPs across the cluster nodes, although it should. As a result, a node in the cluster might end up with no IPv6 VIP on it.
Workaround:
If required, manually bring a VIP of the appropriate IP type online on the node that is missing it.
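For example, a minimal sketch of this workaround, assuming the Network> ip addr show and Network> ip addr online commands in the Veritas Access CLISH; the node name access_02 and the IPv6 address 2001:db8::101 are placeholders, so substitute the values from your own configuration:
    Network> ip addr show
    Network> ip addr online 2001:db8::101 access_02
    Network> ip addr show
The first ip addr show identifies the node that has no IPv6 VIP online, ip addr online brings the chosen IPv6 VIP online on that node, and the final ip addr show confirms the new placement.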