Veritas Access Appliance Release Notes
Last Published: 2022-12-29
Product(s): Appliances (7.4.2)
Platform: 3340
- Overview of Veritas Access
- Software limitations
- Limitations on using shared LUNs
- Limitations related to installation and upgrade
- Limitations in the Backup mode
- Veritas Access IPv6 limitations
- FTP limitations
- Limitations related to commands in a non-SSH environment
- Limitations related to Veritas Data Deduplication
- NFS-Ganesha limitations
- Kernel-based NFS v4 limitations
- File system limitation
- Veritas Access S3 server limitation
- Long-term data retention (LTR) limitations
- Cloud tiering limitation
- Limitation related to replication
- Known issues
- Veritas Access known issues
- Access Appliance issues
- Mongo service does not start after a new node is added successfully
- File systems that are already created cannot be mapped as S3 buckets for local users using the GUI
- The Veritas Access management console is not available after a node is deleted and the remaining node is restarted
- When provisioning the Veritas Access GUI, the option to generate S3 keys is not available after the LTR policy is activated
- Unable to add an Appliance node to the cluster again after the Appliance node is turned off and removed from the Veritas Access cluster
- Setting retention on a directory path does not work from the Veritas Access command-line interface
- During the Access Appliance upgrade, I/O gets paused with an error message
- When provisioning storage, the Access web interface or the command-line interface displays storage capacity in MB, GB, TB, or PB
- Access Appliance operational notes
- Admin issues
- CIFS issues
- Cannot enable the quota on a file system that is appended or added to the list of homedir
- Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
- Default CIFS share has owner other than root
- CIFS mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
- CIFS share may become unavailable when the CIFS server is in normal mode
- CIFS share creation does not authenticate AD users
- General issues
- GUI issues
- When both continuous and episodic replication links are set up, provisioning of storage using High Availability and Data Protection policies does not work
- When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
- When an earlier version of the Veritas Access cluster is upgraded, the GUI shows stale and incomplete data
- Restarting the server as part of the command to add and remove certificates gives an error on RHEL 7
- While provisioning an S3 bucket for NetBackup, the bucket creation fails if the device protection is selected as erasurecoded and the failure domain is selected as disk
- Client certificate validation using OpenSSL ocsp does not work on RHEL 7
- When you perform the set LDAP operation using the GUI, the operation fails with an error
- GUI does not support segregated IPv6 addresses while creating CIFS shares using the Enterprise Vault policy
- Installation and configuration issues
- After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
- Running individual Veritas Access scripts may return inconsistent return codes
- Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
- If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
- If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
- Phantomgroup for the VLAN device does not come online if you create another VLAN device from the Veritas Access command-line interface after cluster configuration is done
- Veritas Access fails to install if LDAP or the autofs home directories are preconfigured on the system
- Configuring Veritas Access with a preconfigured VLAN and a preconfigured bond fails
- In a mixed mode Veritas Access cluster, after the execution of the Cluster> add node command, one type of unused IP does not get assigned as a physical IP to public NICs
- NLMGroup service goes into a FAULTED state when the private IP (x.x.x.2) is not free
- The cluster> show command does not detect all the nodes of the cluster
- Internationalization (I18N) issues
- Networking issues
- CVM service group goes into faulted state unexpectedly
- In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
- The netgroup search does not continue to search in NIS if the entry is not found in LDAP
- The IPs hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
- After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
- Unable to import the network module after an operating system upgrade
- LDAP with SSL on option does not work if you upgrade Veritas Access
- Network load balancer does not get configured with IPv6
- Unable to add an IPv6-default gateway on an IPv4-installed cluster
- LDAP over SSL may not work in Veritas Access 7.4.2
- The network> swap command hangs if any node other than the console node is specified
- NFS issues
- Slow performance with Solaris 10 clients with NFS-Ganesha version 4
- Random-write performance drop of NFS-Ganesha with Linux clients
- Latest directory content of server is not visible to the client if time is not synchronized across the nodes
- NFS> share show may list the shares as faulted for some time if you restart the cluster node
- NFS-Ganesha shares fault after the NFS configuration is imported
- NFS-Ganesha shares may not come online when the number of shares is more than 500
- Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha
- For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time
- NFS client application may fail with the stale file handle error on node reboot
- NFS> share show command does not distinguish offline versus online shares
- Difference in output between NFS> share show and Linux showmount commands
- NFS mount on client is stalled after you switch the NFS server
- Kernel-NFS v4 lock failover does not happen correctly in case of a node crash
- Kernel-NFS v4 export mount for Netgroup does not work correctly
- NFS-Ganesha share for IPv6 subnet does not work and NFS share becomes faulted
- When a file system goes into the FAULTED or OFFLINE state, the NFS share groups associated with the file system do not become offline on all the nodes
- ObjectAccess issues
- When trying to connect to the S3 server over SSL, the client application may give a warning
- If you have upgraded to Veritas Access 7.4.2 from an earlier release, access to S3 server fails if the cluster name has uppercase letters
- If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
- Bucket creation may fail with time-out error
- Bucket deletion may fail with "No such bucket" or "No such key" error
- Group configuration does not work in ObjectAccess if the group name contains a space
- OpenDedup issues
- The file system storage is not reclaimed after deletion of an OpenDedup volume
- The Storage> fs online command fails with an EBUSY error
- Output mismatch in the df -h command for OpenDedup volumes that are backed by a single bucket and mounted on two different media servers
- The OpenDedup> volume create command does not revert the changes if the command fails during execution
- Some of the OpenDedup volume stats reset to zero after upgrade
- OpenDedup volume mount operation fails with an error
- Restore of data from AWS glacier fails
- OpenDedup volumes are not online after an OpenDedup upgrade if there is a change in the cluster name
- If the Veritas Access master node is restarted when a restore job is in progress and OpenDedup resides on the media server, the restored files may be in inconsistent state
- The OpenDedup> volume list command may not show the node IP for a volume
- When Veritas Access is configured in mixed mode, the configure LTR script randomly chooses a virtual IP from the available Veritas Access virtual IPs
- OpenStack issues
- Cinder and Manila shares cannot be distinguished from the Veritas Access command-line interface
- Cinder volume creation fails after a failure occurs on the target side
- Cinder volume may fail to attach to the instance
- Bootable volume creation for an iSCSI driver fails with an I/O error when a qcow image is used
- Replication issues
- When running episodic replication and deduplication on the same cluster node, the episodic replication job fails in certain scenarios
- The System> config import command does not import episodic replication keys and jobs
- The job uses the schedule on the target after episodic replication failover
- Episodic replication fails with error "connection reset by peer" if the target node fails over
- Episodic replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after an upgrade
- Setting the bandwidth through the GUI is not enabled for episodic replication
- Episodic replication job with encryption fails after job remove and add link with SSL certificate error
- Episodic replication job status shows the entry for a link that was removed
- Episodic replication job modification fails
- Episodic replication failover does not work
- Continuous replication fails when the 'had' daemon is restarted on the target manually
- Continuous replication is unable to go to the replicating state if the Storage Replicated Log becomes full
- Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
- Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
- If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
- SDS known issues
- Storage issues
- Snapshot mount can fail if the snapshot quota is set
- Sometimes the Storage> pool rmdisk command does not print a message
- The Storage> pool rmdisk command can sometimes give an error in which the file system name is not printed
- Unable to enable quota for a file system that is newly added to the list of CIFS home directories
- Destroying the file system may not remove the /etc/mtab entry for the mount point
- The Storage> fs online command returns an error, but the file system is online after several minutes
- Removing disks from the pool fails if a DCO exists
- Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system
- Rollback refresh fails when running it after running Storage> fs growby or growto commands
- If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
- Inconsistent cluster state with management service down when disabling I/O fencing
- Storage> tier move command failover of node is not working
- Rollback service group goes into a faulted state when the respective cache object is full and there is no way to clear the state
- Event messages are not generated when cache objects get full
- The Veritas Access command-line interface does not block uncompress and compress operations from running on the same file at the same time
- Storage device fails with SIGBUS signal causing the abnormal termination of the scale-out file system daemon
- Storage> tier move list command fails if one of the cluster nodes is rebooted
- Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
- When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
- Storage> fs addcolumn operation fails but error notification is not sent
- Storage> fs growto and Storage> fs growby commands give an error with isolated disks
- Unable to create space-optimized rollback when tiering is present
- Enabling I/O fencing on a setup with Volume Manager objects present fails to import the disk group
- File system creation fails when the pool contains only one disk
- After starting the backup service, BackupGrp goes into FAULTED state on some nodes
- A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command
- A file system created with a largefs-striped or largefs-mirrored-stripe layout may show incorrect number of columns in the Storage> fs list command
- File system creation fails with SSD pool
- A scale-out file system may go into faulted state after the execution of Storage> fencing off/on command
- After an Azure tier is added to a scale-out file system, you cannot move files to the Azure tier and the Storage> tier stats command may fail
- The CVM service group goes into faulted state after you restart the management console node
- The Storage> fs create command does not display the output correctly if one of the nodes of the cluster is in unknown state
- Storage> fs growby and growto commands fail if the file system or bucket is full
- The operating system names of fencing disks are not consistent across the Veritas Access cluster, which may lead to issues
- The disk group import operation fails and all the services go into failed state when fencing is enabled
- While creating an erasure-coded file system, a misleading message leads to issues in the execution of the Storage> fs create command
- The Veritas Access cluster node can get explicitly ejected or aborted from the cluster during recovery when another node joins the cluster after a restart
- Error while creating a file system stating that the CVM master and management console are not on the same node
- When you configure disk-based fencing, the cluster does not come online after you restart the node
- In an erasure-coded file system, when the nodes are restarted, some of the file systems do not get unmounted
- After a node is restarted, the vxdclid process may generate core dump
- The Veritas Access command-line interface may be inaccessible after some nodes in the cluster are restarted
- The cluster> shutdown command does not shut down the node
- System issues
- Target issues
- Storage provisioning commands hang on the Veritas Access initiator when LUNs from the Veritas Access target are being used
- After the Veritas Access cluster recovers from a storage disconnect, the iSCSI LUNs exported from Veritas Access as an iSCSI target may show the wrong content on the initiator side
- Upgrade issues
- Veritas Data Deduplication issues
- The Veritas Data Deduplication storage server does not come online on a newly added node in the cluster if the node was offline when you configured deduplication
- The Veritas Data Deduplication server goes offline after destroying the bond interface on which the deduplication IP was online
- If you grow the deduplication pool using the fs> grow command, and then try to grow it further using the dedupe> grow command, the dedupe> grow command fails
- The Veritas Data Deduplication server goes offline after bond creation using the interface of the deduplication IP
- Getting help
CIFS mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
While mapping all the CIFS users from Active Directory (AD) to NIS/LDAP users, the CIFS> mapuser command does not accept the wildcard character '*'.
(IA-8108)
Workaround:
Use one-to-one user mappings from Active Directory (AD) users to NIS/LDAP users.
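For example, each AD user can be mapped to a NIS/LDAP user individually instead of using the '*' wildcard. This is a minimal sketch: the user and domain names are placeholders, and the command forms (CIFS> mapuser add <CIFSusername> <domainname> <NISusername> and CIFS> mapuser show) are assumptions to be verified against the CIFS mode documentation for your release.

CIFS> mapuser add aduser1 ADDOMAIN nisuser1
CIFS> mapuser add aduser2 ADDOMAIN nisuser2
CIFS> mapuser show

Repeat the mapuser add step for each AD user that needs a corresponding NIS/LDAP identity.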