Veritas Access Release Notes

Product(s): Appliances (7.3.2)
Platform: 3340
  1. Overview of Veritas Access
    1. About this release
    2. Important release information
    3. Changes in this release
      1. IP load balancing
      2. Veritas Access as an iSCSI target for RHEL 7.3 and 7.4
      3. Changes to the GUI
      4. Support for operating systems
      5. Episodic and continuous replication in Veritas Access
      6. Active-active support for scale-out file system
      7. Replication on a scale-out file system
      8. Changes to the documentation set
    4. Technical preview features
      1. Veritas Access as an iSCSI Target for RHEL 6.x
  2. Software limitations
    1. Limitations related to installation and upgrade
      1. If required VIPs are not configured, then services like NFS, CIFS, and S3 do not function properly
      2. Upgrade is not supported from CLISH
      3. Rolling upgrade is not supported from CLISH
    2. Limitations in the Backup mode
    3. Veritas Access IPv6 limitations
    4. FTP limitations
    5. Samba ACL performance-related issues
    6. Veritas Access language support
      1. Veritas Access does not support non-English characters when using the CLISH
    7. NFS-Ganesha limitations
    8. Kernel-based NFS v4 limitations
    9. File system limitation
      1. Any direct NLM operations from CLISH can lead to system instability
    10. Veritas Access S3 server limitation
    11. Long-term data retention limitations
    12. Cloud tiering limitation
    13. Limitation related to replication
      1. Limitation related to episodic replication authentication
      2. Limitation related to continuous replication
  3. Known issues
    1. Veritas Access known issues
      1. Backup issues
        1. Backup or restore status may show invalid status after the BackupGrp is switched or failed over to the other node when the SAN client is enabled
      2. CIFS issues
        1. Cannot enable the quota on a file system that is appended or added to the list of homedir
        2. Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
        3. Default CIFS share has owner other than root
        4. Listing of CIFS shares created on a Veritas Access cluster fails on Windows server or client
        5. CIFS> mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
      3. Enterprise Vault Attach known issues
        1. Error while setting full access permission to Enterprise Vault user for archival directory
      4. GUI issues
        1. When both continuous and episodic replication links are set up, provisioning of storage using High Availability and Data Protection policies does not work
        2. When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
        3. When an earlier version of the Veritas Access cluster is upgraded, the GUI shows stale and incomplete data
        4. Restarting the server as part of the command to add and remove certificates gives an error on RHEL 7
        5. Client certificate validation using OpenSSL ocsp does not work on RHEL7
      5. Installation and configuration issues
        1. Running individual Veritas Access scripts may return inconsistent return codes
        2. Configuring Veritas Access with the installer fails when the SSH connection is lost
        3. Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
        4. If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
        5. Phantomgroup for the VLAN device does not come online if you create another VLAN device from CLISH after cluster configuration is done
        6. Veritas Access installation fails if the nodes have older yum repositories and do not have Internet connectivity to reach RHN repositories
      6. Networking issues
        1. CVM service group goes into faulted state unexpectedly
        2. The netgroup search does not continue to search in NIS if the entry is not found in LDAP
        3. After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
      7. NFS issues
        1. Slow performance with Solaris 10 clients with NFS-Ganesha version 4
        2. Random-write performance drop of NFS-Ganesha with Linux clients
        3. Latest directory content of server is not visible to the client if time is not synchronized across the nodes
        4. NFS> share show may list the shares as faulted for some time if you restart the cluster node
        5. NFS-Ganesha shares faults after the NFS configuration is imported
        6. NFS-Ganesha shares may not come online when the number of shares are more than 500
        7. Exporting a single path to multiple clients through multiple exports does not work with NFS-Ganesha
        8. For the NFS-Ganesha server, bringing a large number of shares online or offline takes a long time
        9. NFS client application may fail with the stale file handle error on node reboot
        10. NFS> share show command does not distinguish offline versus online shares
        11. Difference in output between NFS> share show and Linux showmount commands
        12. NFS mount on client is stalled after you switch the NFS server
        13. Kernel NFS v4 lock failover does not happen correctly in case of a node crash
        14. Kernel NFS v4 export mount for Netgroup does not work correctly
      8. ObjectAccess issues
        1. ObjectAccess server goes in to faulted state while doing multi-part upload of a 10-GB file with a chunk size of 5 MB
        2. When trying to connect to the S3 server over SSLS3, the client application may give a warning like "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"
        3. If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
        4. ObjectAccess operations do not work correctly in virtual hosted-style addressing when SSL is enabled
        5. Bucket creation may fail with time-out error
        6. Bucket deletion may fail with "No such bucket" or "No such key" error
        7. Temporary objects may be present in the bucket in case of multi-part upload
        8. Group configuration does not work in ObjectAccess if the group name contains a space
      9. OpenDedup issues
        1. The file system storage is not reclaimed after deletion of an OpenDedup volume
        2. Removing or modifying the virtual IP associated to an OpenDedup volume leads to the OpenDedup volume going into an inconsistent state
        3. OpenDedup port is blocked if the firewall is disabled and then enabled again
        4. The Storage> fs online command fails with an EBUSY error
        5. The OpenDedup volume is not mounted automatically by the /etc/fstab on the media server after a restart operation
        6. Output mismatch in the df -h command for OpenDedup volumes that are backed by a single bucket and mounted on two different media servers
      10. OpenStack issues
        1. Cinder and Manila shares cannot be distinguished from the CLISH
        2. The Veritas Access Manila driver which is upstreamed in the OpenStack Manila repository is not compatible with Veritas Access 7.3.1
      11. Replication issues
        1. When running episodic replication and dedup over the same source, the episodic replication file system fails in certain scenarios
        2. The System> config import command does not import episodic replication keys and jobs
        3. The job uses the schedule on the target after episodic replication failover
        4. Episodic replication fails with error "connection reset by peer" if the target node fails over
        5. Episodic replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after an upgrade
        6. Setting the bandwidth through the GUI is not enabled for episodic replication
        7. Episodic replication job with encryption fails after job remove and add link with SSL certificate error
        8. Episodic replication job status shows the entry for a link that was removed
        9. Episodic replication job modification fails
        10. Episodic replication failover does not work
        11. Continuous replication fails when the 'had' daemon is restarted on the target manually
        12. Continuous replication is unable to come in replicating state if the Storage Replicated Log becomes full
        13. Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
        14. Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
        15. If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
      12. SmartIO issues
        1. SmartIO writeback cachemode for a file system changes to read mode after taking the file system offline and then online
      13. Storage issues
        1. Snapshot mount can fail if the snapshot quota is set
        2. Sometimes the Storage> pool rmdisk command does not print a message
        3. The Storage> pool rmdisk command sometimes can give an error where the file system name is not printed
        4. Not able to enable quota for file system that is newly added in the list of CIFS home directories
        5. Destroying the file system may not remove the /etc/mtab entry for the mount point
        6. The Storage> fs online command returns an error, but the file system is online after several minutes
        7. Removing disks from the pool fails if a DCO exists
        8. Scale-out file system returns an ENOSPC error even if the df command shows there is space available in the file system
        9. Rollback refresh fails when running it after running Storage> fs growby or growto commands
        10. If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
        11. Inconsistent cluster state with management service down when disabling I/O fencing
        12. Storage> tier move command failover of node is not working
        13. Storage> scanbus operation hangs at the time of I/O fencing operation
        14. Rollback service group goes in faulted state when respective cache object is full and there is no way to clear the state
        15. Event messages are not generated when cache objects get full
        16. Veritas Access CLISH interface should not allow uncompress and compress operations to run on the same file at the same time
        17. Storage device fails with SIGBUS signal causing the abnormal termination of the scale-out file system daemon
        18. Storage> tier move list command fails if one of the cluster nodes is rebooted
        19. Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
        20. When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
        21. Storage> fs addcolumn operation fails but error notification is not sent
        22. Storage> fs-growto and Storage> fs-growby commands give error with isolated disks
        23. Unable to create space-optimized rollback when tiering is present
        24. Enabling I/O fencing on a set up with Volume Manager objects present fails to import the disk group
        25. File system creation fails when the pool contains only one disk
        26. After starting the backup service, BackupGrp goes into FAULTED state on some nodes
        27. A scale-out file system created with a simple layout using thin LUNs may show layered layout in the Storage> fs list command
        28. A file system created with a largefs-striped or largefs-mirrored-stripe layout may show incorrect number of columns in the Storage> fs list command
        29. A scale-out file system may go into faulted state after the execution of Storage> fencing off/on command
        30. The CVM service group goes in to faulted state after you restart the management console node
        31. After an Azure tier is added to a scale-out file system, you cannot move files to the Azure tier and the Storage> tier stats command may fail
      14. System issues
        1. The System> ntp sync command without any argument does not appear to work correctly
      15. Target issues
        1. If a user is added on the target side, the initiator cannot see the LUNs
        2. LIO does not support target name in uppercase
      16. Access Appliance issues
        1. Mongo service does not start after a new node is added successfully
        2. File systems that are already created cannot be mapped as S3 buckets for local users using the GUI
        3. The Veritas Access management console is not available after a node is deleted and the remaining node is restarted
        4. When provisioning the Veritas Access GUI, the option to generate S3 keys is not available after the LTR policy is activated
        5. Unable to add an Appliance node to the cluster again after the Appliance node is turned off and removed from the Veritas Access cluster
        6. Setting retention on a directory path does not work from Veritas Access CLISH
        7. Access Appliance operational notes
          1. Access services do not restart properly after storage shelf restart
  4. Getting help
    1. Displaying the Online Help
    2. Displaying the man pages
    3. Using the Veritas Access product documentation

Mongo service does not start after a new node is added successfully

After you add a new node, the installer does not bring the Cluster Volume Manager (CVM) and its dependent service groups (appdb_data and appdb_svc) online on the newly added node. Hence, the Mongo service does not start on the newly added node.
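
You can confirm the symptom from a shell on any cluster node with the standard VCS hagrp utility (a minimal sketch; the group names are the ones listed above, and <new_node> is a placeholder for the name of the newly added node):

    # Check whether the Mongo-related service groups are online on the new node
    hagrp -state appdb_data -sys <new_node>
    hagrp -state appdb_svc -sys <new_node>

If the groups report OFFLINE on the new node, apply the workaround that follows.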

Workaround:

After you add a node, log on to the Veritas Access CLISH using the admin user credentials, and then run the cluster reboot <new_node> command to restart the newly added node.
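
Sketched as a short session (the console IP address is a placeholder, the reboot command is the one given above, and the exact CLISH prompt may differ on your cluster):

    ssh admin@<console_IP>        (log on to the Veritas Access CLISH as the admin user)
    cluster reboot <new_node>     (restart the newly added node from the CLISH)

After the node restarts, the CVM, appdb_data, and appdb_svc service groups should come online on it and the Mongo service should start.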