Veritas InfoScale™ 7.4.2 Release Notes - Solaris

Last Published:
Product(s): InfoScale & Storage Foundation (7.4.2)
Platform: Solaris
  1. Introduction
    1. About this document
  2. Requirements
    1. VCS system requirements
    2. Supported Solaris operating systems
    3. Supported Oracle VM Server for SPARC
    4. Storage Foundation for Databases features supported in database environments
    5. Storage Foundation memory requirements
    6. Supported database software
    7. Supported hardware and software
    8. Number of nodes supported
  3. Changes introduced in 7.4.2
    1. Changes related to installation and upgrades
      1. Change in upgrade path
      2. Changes in the VRTSperl package
    2. Changes related to security features
      1. Improved password encryption for VCS users and agents
    3. Changes related to supported configurations
      1. Support for Oracle 19c
      2. Deprecated support for Oracle 11g R2
    4. Changes related to the Cluster Server engine
      1. Support for starting VCS in a customized environment
      2. Ability to stop VCS without evacuating service groups
      3. Ability to disable CmdServer
    5. Changes related to Veritas File System
      1. Changes in VxFS Disk Layout Versions (DLV)
    6. Changes related to replication
      1. DCM logging in DCO
  4. Fixed issues
    1. Cluster Server and Cluster Server agents fixed issues
    2. Storage Foundation Cluster File System High Availability fixed issues
  5. Limitations
    1. Storage Foundation software limitations
      1. Dynamic Multi-Pathing software limitations
        1. DMP support for the Solaris format command (2043956)
        2. DMP settings for NetApp storage attached environment
        3. ZFS pool in unusable state if last path is excluded from DMP (1976620)
        4. When an I/O domain fails, the vxdisk scandisks or vxdctl enable commands take a long time to complete (2791127)
      2. Veritas Volume Manager software limitations
        1. Snapshot configuration with volumes in shared disk groups and private disk groups is not supported (2801037)
        2. SmartSync is not supported for Oracle databases running on raw VxVM volumes
        3. Veritas InfoScale does not support thin reclamation of space on a linked mirror volume (2729563)
        4. A 1 TB disk that is not labeled using operating system commands goes into an error state after the vxconfigd daemon is restarted
        5. Converting a multi-pathed disk
        6. Thin reclamation requests are not redirected even when the ioship policy is enabled (2755982)
        7. Veritas Operations Manager does not support disk, disk group, and volume state information related to CVM I/O shipping feature (2781126)
      3. Veritas File System software limitations
        1. Recommended limit of number of files in a directory
        2. The vxlist command cannot correctly display numbers greater than or equal to 1 EB
        3. Limitations with delayed allocation for extending writes feature
        4. Compressed files that are backed up using NetBackup 7.1 or prior become uncompressed when you restore the files
      4. SmartIO software limitations
        1. Cache is not online after a reboot
        2. The sfcache operations may display error messages in the caching log when the operation completed successfully (3611158)
    2. Replication software limitations
      1. VVR Replication in a shared environment
      2. VVR IPv6 software limitations
      3. VVR support for replicating across Storage Foundation versions
    3. Cluster Server software limitations
      1. Limitations related to bundled agents
        1. Programs using networked services may stop responding if the host is disconnected
        2. Volume agent clean may forcibly stop volume resources
        3. False concurrency violation when using PidFiles to monitor application resources
        4. Volumes in a disk group start automatically irrespective of the value of the StartVolumes attribute in VCS [2162929]
        5. Online for LDom resource fails [2517350]
        6. Zone agent registered to IMF for Directory Online event
        7. LDom resource calls clean entry point when primary domain is gracefully shut down
        8. Application agent limitations
        9. Interface object name must match net<x>/v4static for VCS network reconfiguration script in Solaris 11 guest domain [2840193]
        10. Share agent limitation (2717636)
        11. Campus cluster fire drill does not work when DSM sites are used to mark site boundaries [3073907]
        12. Mount agent reports resource state as OFFLINE if the configured mount point does not exist [3435266]
      2. Limitations related to VCS engine
        1. Loads fail to consolidate and optimize when multiple groups fault [3074299]
        2. Preferred fencing ignores the forecasted available capacity [3077242]
        3. Failover occurs within the SystemZone or site when BiggestAvailable policy is set [3083757]
        4. Load for Priority groups is ignored in groups with BiggestAvailable and Priority in the same group [3074314]
      3. Veritas cluster configuration wizard limitations
        1. Environment variable used to change log directory cannot redefine the log path of the wizard [3609791]
        2. Cluster configuration wizard takes a long time to configure a cluster on Solaris systems [3582495]
      4. Limitations related to the VCS database agents
        1. DB2 RestartLimit value [1234959]
        2. Sybase agent does not perform qrmutil based checks if Quorum_dev is not set (2724848)
        3. Pluggable database (PDB) online may timeout when started after container database (CDB) [3549506]
      5. Systems in a cluster must have the same system locale setting
      6. Limitations with DiskGroupSnap agent [1919329]
      7. Cluster Manager (Java console) limitations
        1. VCS Simulator does not support I/O fencing
      8. Limitations related to LLT
        1. Limitation of LLT support over UDP using alias IP [3622175]
      9. Limitations related to I/O fencing
        1. Preferred fencing limitation when VxFEN activates RACER node re-election
        2. Stopping systems in clusters with I/O fencing configured
        3. Uninstalling VRTSvxvm causes issues when VxFEN is configured in SCSI3 mode with dmp disk policy (2522069)
        4. Node may panic if HAD process is stopped by force and then node is shut down or restarted [3640007]
      10. Limitations related to global clusters
      11. CP Server 6.0.5 client fails to communicate with CP Server 7.0 with certificates having 2048-bit keys and SHA256 hashing [IIP-5803]
      12. Clusters must run on VCS 6.0.5 and later to be able to communicate after upgrading to 2048-bit key and SHA256 signature certificates [3812313]
    4. Storage Foundation Cluster File System High Availability software limitations
      1. cfsmntadm command does not verify the mount options (2078634)
      2. Stale SCSI-3 PR keys remain on disk after stopping the cluster and deporting the disk group
      3. Unsupported FSS scenarios
    5. Storage Foundation for Oracle RAC software limitations
      1. Supportability constraints for normal or high redundancy ASM disk groups with CVM I/O shipping and FSS (3600155)
      2. Limitations of CSSD agent
      3. Oracle Clusterware/Grid Infrastructure installation fails if the cluster name exceeds 14 characters
      4. Policy-managed databases not supported by CRSResource agent
      5. Health checks may fail on clusters that have more than 10 nodes
      6. Cached ODM not supported in Veritas InfoScale environments
    6. Storage Foundation for Databases (SFDB) tools software limitations
      1. Parallel execution of vxsfadm is not supported (2515442)
      2. Creating point-in-time copies during database structural changes is not supported (2496178)
      3. Oracle Data Guard in an Oracle RAC environment
  6. Known issues
    1. Issues related to installation and upgrade
      1. Switch fencing in enable or disable mode may not take effect if VCS is not reconfigured [3798127]
      2. After the upgrade to version 7.4.2, the installer may fail to stop the Asynchronous Monitoring Framework (AMF) process [3781993]
      3. LLT may fail to start after upgrade on Solaris 11 (3770835)
      4. On SunOS, drivers may not be loaded after a reboot [3798849]
      5. On Oracle Solaris, drivers may not be loaded after stop and then reboot [3763550]
      6. During an upgrade process, the AMF_START or AMF_STOP variable values may be inconsistent [3763790]
      7. Uninstallation fails on global zone on Solaris 11 if product packages are installed on both global zone and local zone [3762814]
      8. On Solaris 11, when you install the operating system together with SFHA products using Automated Installer, the local installer scripts do not get generated (3640805)
      9. Stopping the installer during an upgrade and then resuming the upgrade might freeze the service groups (2574731)
      10. Installing VRTSvlic package during live upgrade on Solaris system non-global zones displays error messages [3623525]
      11. VCS installation with CPI fails when a non-global zone is in installed state and zone root is not mounted on the node (2731178)
      12. Log messages are displayed when VRTSvcs is uninstalled on Solaris 11 [2919986]
      13. Cluster goes into STALE_ADMIN_WAIT state during upgrade from VCS 5.1 to 6.1 or later [2850921]
      14. Flash Archive installation not supported if the target system's root disk is encapsulated
      15. The Configure Sybase ASE CE Instance in VCS option creates duplicate service groups for Sybase binary mount points (2560188)
      16. The installer fails to unload the GAB module during installation of SF packages [3560458]
      17. On Solaris 11, non-default ODM mount options will not be preserved across package upgrade (2745100)
      18. Upgrade fails because there is a zone installed on the VxFS file system which is offline. The packages in the zone are not updated. (3319753)
      19. If you choose to upgrade nodes without zones first, the rolling upgrade or phased upgrade is not blocked in the beginning, but fails later (3319961)
      20. Upgrades from previous SF Oracle RAC versions may fail on Solaris systems (3256400)
      21. After a locale change, restart the vxconfig daemon (2417547, 2116264)
      22. Verification of Oracle binaries incorrectly reports as failed during Oracle Grid Infrastructure installation
      23. Live upgrade of the InfoScale product may detect the wrong product or ask for the license repeatedly (3870685)
      24. In an RHEV environment, if you stop the SF service and then start it by installer, the permission on dmpnode will get lost (3870111)
      25. The installer fails to upgrade the product packages on Solaris 11 during an upgrade to InfoScale 7.4.2 (3896530)
      26. When using the response file, the installer must not proceed with the installation or upgrade, if you have not provided edge server details (3964335)
      27. Collector service does not start automatically on Solaris 11 servers (3963406)
      28. Warning message is displayed on Solaris and AIX even though telemetry.veritas.com (VCR) is reachable from the host (3961631)
      29. Unable to update edge server details by running the installer (3964611)
    2. Storage Foundation known issues
      1. Dynamic Multi-Pathing known issues
        1. In a CVM environment, adding a relabelled LUN to a shared disk group causes the I/O requests to fail until the LUN fails and disables the filesystem (3979198)
        2. Vxconfigd may core dump after suppressing paths of a PowerPath device (3869111)
      2. Veritas Volume Manager known issues
        1. vradmin delsec fails to remove a secondary RVG from its RDS (3983296)
        2. FSS disk group creation fails for clusters with eight or more nodes that have several directly attached disks (3986110)
        3. Core dump issue after restoration of disk group backup (3909046)
        4. Failed verifydata operation leaves residual cache objects that cannot be removed (3370667)
        5. LUNs claimed but not in use by VxVM may report "Device Busy" when they are accessed outside VxVM (3667574)
        6. Unable to set master on the secondary site in VVR environment if any pending I/Os are on the secondary site (3874873)
        7. vxdisksetup -if fails on PowerPath disks of sizes 1T to 2T [3752250]
        8. VRAS verifydata command fails without cleaning up the snapshots created [3558199]
        9. Root disk encapsulation fails for root volume and swap volume configured on thin LUNs (3538594)
        10. The vxdisk resize command does not claim the correct LUN size on Solaris 11 during expansion of the LUN from array side (2858900)
        11. SmartIO VxVM cache invalidated after relayout operation (3492350)
        12. Disk greater than 1TB goes into error state [3761474, 3269099]
        13. Importing an exported zpool can fail when DMP native support is on (3133500)
        14. Server panic after losing connectivity to the voting disk (2787766)
        15. Performance impact when a large number of disks are reconnected (2802698)
        16. device.map must be up to date before doing root disk encapsulation (2202047)
        17. Veritas Volume Manager (VxVM) might report false serial split brain under certain scenarios (1834513)
        18. Suppressing the primary path of an encapsulated SAN boot disk from Veritas Volume Manager causes the system reboot to fail (1933631)
        19. After changing the preferred path from the array side, the secondary path becomes active (2490012)
        20. Disk group import of BCV LUNs using -o updateid and -o useclonedev options is not supported if the disk group has mirrored volumes with DCO or has snapshots (2831658)
        21. After devices that are managed by EMC PowerPath lose access to storage, Veritas Volume Manager commands are delayed (2757198)
        22. vxresize does not work with layered volumes that have multiple plexes at the top level (3301991)
        23. In a clustered configuration with Oracle ASM and DMP and AP/F array, when all the storage is removed from one node in the cluster, the Oracle DB is unmounted from other nodes of the cluster (3237696)
        24. When all Primary/Optimized paths between the server and the storage array are disconnected, ASM disk group dismounts and the Oracle database may go down (3289311)
        25. Running the vxdisk disk set clone=off command on imported clone disk group LUNs results in a mix of clone and non-clone disks (3338075)
        26. The administrator must explicitly enable and disable support for a clone device created from an existing root pool (3110589)
        27. Restarting the vxconfigd daemon on the slave node after a disk is removed from all nodes may cause the disk groups to be disabled on the slave node (3591019)
        28. Failback to primary paths does not occur if the node that initiated the failover leaves the cluster (1856723)
        29. Issues if the storage connectivity to data disks is lost on a CVM slave node while vxconfigd was not running on the node (2562889)
        30. The vxcdsconvert utility is supported only on the master node (2616422)
        31. Re-enabling connectivity if the disks are in local failed (lfailed) state (2425977)
        32. Issues with the disk state on the CVM slave node when vxconfigd is restarted on all nodes (2615680)
        33. Plex synchronization is not completed after resuming synchronization on a new master when the original master lost connectivity (2788077)
        34. A master node is not capable of doing recovery if it cannot access the disks belonging to any of the plexes of a volume (2764153)
        35. CVM fails to start if the first node joining the cluster has no connectivity to the storage (2787713)
        36. CVMVolDg agent may fail to deport CVM disk group when CVMDeportOnOffline is set to 1
        37. The vxsnap print command shows incorrect value for percentage dirty [2360780]
        38. For Solaris 11.1 or later, uninstalling DMP or disabling DMP native support requires steps to enable booting from alternate root pools (3178642)
        39. For Solaris 11.1 or later, after enabling DMP native support for ZFS, only the current boot environment is bootable (3157394)
        40. When dmp_native_support is set to on, commands hang for a long time on SAN failures (3084656)
        41. vxdisk export operation fails if length of hostprefix and device name exceeds 30 characters (3543668)
        42. Systems may panic after GPT disk resize operation (3930664)
      3. Veritas File System known issues
        1. Upgrade from InfoScale Enterprise 7.3.1 to 7.4.2 may appear incomplete as the product installer fails to stop the VxFS process (4002728)
        2. The VxFS file system with local scope enabled may hang if two or more nodes are restarted simultaneously (3944891)
        3. Docker does not recognize VxFS backend file system
        4. Warning message sometimes appears in the console during system startup (2354829)
        5. vxresize may fail when you shrink a file system with the "blocks are currently in use" error (3762935)
        6. On Solaris 11U2, /dev/odm may show 'Device busy' status when the system mounts ODM [3661567]
        7. Delayed allocation may be turned off automatically when one of the volumes in a multi-volume file system nears 100% (2438368)
        8. The file system deduplication operation fails with the error message "DEDUP_ERROR Error renaming X checkpoint to Y checkpoint on filesystem Z error 16" (3348534)
        9. Oracle Disk Manager (ODM) may fail to start after upgrade to 7.4.2 on Solaris 11 [3739102]
        10. On the cluster file system, clone dispose may fail [3754906]
        11. VRTSvxfs verification reports error after upgrading to 7.4.2 [3463479]
        12. spfile created on VxFS and ODM may contain uninitialized blocks at the end (3760262)
        13. Taking a FileSnap over NFS multiple times with the same target name can result in the 'File exists' error (2353352)
        14. On the online cache device, you should not perform the mkfs operation, because any subsequent fscache operation panics (3643800)
        15. Deduplication can fail with error 110 (3741016)
        16. A restored volume snapshot may be inconsistent with the data in the SmartIO VxFS cache (3760219)
        17. When in-place and relocate compression rules are in the same policy file, file relocation is unpredictable (3760242)
        18. The file system may hang when it has compression enabled (3331276)
    3. Replication known issues
      1. The secondary vradmind may appear hung and the vradmin commands may fail (3940842, 3944301)
      2. Data corruption may occur if you perform a rolling upgrade of InfoScale Storage or InfoScale Enterprise from 7.3.1 or earlier to 7.4 or later during replication (3951527)
      3. vradmind may appear hung or may fail for the role migrate operation (3968642, 3968641)
      4. After the product upgrade on secondary site, replication may fail to resume with "Secondary SRL missing" error [3931763]
      5. vradmin repstatus command reports secondary host as "unreachable" (3896588)
      6. RVGPrimary agent operation to start replication between the original Primary and the bunker fails during failback (2036605)
      7. A snapshot volume created on the Secondary, containing a VxFS file system may not mount in read-write mode and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail [3761497]
      8. In an IPv6-only environment, RVG, data volume, or SRL names cannot contain a colon (1672410, 1672417)
      9. vradmin functionality may not work after a master switch operation [2158679]
      10. Cannot relayout data volumes in an RVG from concat to striped-mirror (2129601)
      11. vradmin verifydata may report differences in a cross-endian environment (2834424)
      12. vradmin verifydata operation fails if the RVG contains a volume set (2808902)
      13. Bunker replay does not occur with volume sets (3329970)
      14. SmartIO does not support write-back caching mode for volumes configured for replication by Volume Replicator (3313920)
      15. During moderate to heavy I/O, the vradmin verifydata command may falsely report differences in data (3270067)
      16. While vradmin commands are running, vradmind may temporarily lose heartbeats (3347656, 3724338)
      17. Write I/Os on the primary logowner may take a long time to complete (2622536)
      18. DCM logs on a disassociated layered data volume result in configuration changes or CVM node reconfiguration issues (3582509)
      19. After performing a CVM master switch on the secondary node, both rlinks detach (3642855)
      20. The RVGPrimary agent may fail to bring the application service group online on the new Primary site because of a previous primary-elect operation not being run or not completing successfully (3761555, 2043831)
      21. A snapshot volume created on the Secondary, containing a VxFS file system may not mount in read-write mode and performing a read-write mount of the VxFS file systems on the new Primary after a global clustering site failover may fail (1558257)
      22. DCM plex becomes inaccessible and goes into DISABLED(SPARSE) state in case of node failure (3931775)
      23. Initial autosync operation takes a long time to complete for data volumes larger than 3 TB (3966713)
    4. Cluster Server known issues
      1. Operational issues for VCS
        1. On Solaris 11.4, Oracle and Netlsnr agents fail to perform intelligent monitoring (4001565)
        2. The hastop -all command on VCS cluster node with AlternateIO resource and StorageSG having service groups may leave the node in LEAVING state
        3. Missing characters in system messages [2334245]
        4. CP server does not allow adding and removing HTTPS virtual IP or ports when it is running [3322154]
        5. System encounters multiple VCS resource timeouts and agent core dumps [3424429]
        6. Some VCS components do not work on the systems where a firewall is configured to block TCP traffic [3545338]
      2. Issues related to the VCS engine
        1. Extremely high CPU utilization may cause HAD to fail to heartbeat to GAB [1744854]
        2. Missing host names in engine_A.log file (1919953)
        3. The hacf -cmdtocf command generates a broken main.cf file [1919951]
        4. Character corruption observed when executing the uuidconfig.pl -clus -display -use_llthost command [2350517]
        5. Trigger does not get executed when there is more than one leading or trailing slash in the triggerpath [2368061]
        6. Service group is not auto started on the node having incorrect value of EngineRestarted [2653688]
        7. Group is not brought online if top level resource is disabled [2486476]
        8. NFS resource goes offline unexpectedly and reports errors when restarted [2490331]
        9. Parent group does not come online on a node where child group is online [2489053]
        10. Cannot modify temp attribute when VCS is in LEAVING state [2407850]
        11. Oracle service group faults on secondary site during failover in a disaster recovery scenario [2653704]
        12. Service group may fail to come online after a flush and a force flush operation [2616779]
        13. Elevated TargetCount prevents the online of a service group with the hagrp -online -sys command [2871892]
        14. Auto failover does not happen in case of two successive primary and secondary cluster failures [2858187]
        15. GCO clusters remain in INIT state [2848006]
        16. The ha commands may fail for non-root user if cluster is secure [2847998]
        17. Startup trust failure messages in system logs [2721512]
        18. Running -delete -keys for any scalar attribute causes core dump [3065357]
        19. Veritas InfoScale enters into admin_wait state when Cluster Statistics is enabled with load and capacity defined [3199210]
        20. Agent reports incorrect state if VCS is not set to start automatically and utmp file is empty before VCS is started [3326504]
        21. VCS crashes if feature tracking file is corrupt [3603291]
        22. RemoteGroup agent and non-root users may fail to authenticate after a secure upgrade [3649457]
        23. If you disable security before upgrading VCS to version 7.0.1 or later on secured clusters, the security certificates will not be upgraded to 2048-bit SHA2 [3812313]
        24. Java console and CLI do not allow adding VCS user names starting with '_' character (3870470)
      3. Issues related to the bundled agents
        1. Entry points that run inside a zone are not cancelled cleanly [1179694]
        2. Solaris mount agent fails to mount Linux NFS exported directory
        3. The zpool command runs into a loop if all storage paths from a node are disabled
        4. Zone remains stuck in down state if tried to halt with file system mounted from global zone [2326105]
        5. Process and ProcessOnOnly agent rejects attribute values with white spaces [2303513]
        6. The zpool commands hang and remain in memory till reboot if storage connectivity is lost [2368017]
        7. Offline of zone resource may fail if zoneadm is invoked simultaneously [2353541]
        8. Password changed while using hazonesetup script does not apply to all zones [2332349]
        9. RemoteGroup agent does not failover in case of network cable pull [2588807]
        10. CoordPoint agent remains in faulted state [2852872]
        11. Prevention of Concurrency Violation (PCV) is not supported for applications running in a container [2536037]
        12. Share resource goes offline unexpectedly causing service group failover [1939398]
        13. Mount agent does not support all scenarios of loopback mounts
        14. Invalid Netmask value may display code errors [2583313]
        15. Zone root configured on ZFS with ForceAttach attribute enabled causes zone boot failure (2695415)
        16. Error message is seen for Apache resource when zone is in transient state [2703707]
        17. Monitor falsely reports NIC resource as offline when zone is shutting down (2683680)
        18. Apache resource does not come online if the directory containing Apache pid file gets deleted when a node or zone restarts (2680661)
        19. Online of LDom resource may fail due to incompatibility of LDom configuration file with host OVM version (2814991)
        20. Online of IP or IPMultiNICB resource may fail if its IP address specified does not fit within the values specified in the allowed-address property (2729505)
        21. Application resource running in a container with PidFiles attribute reports offline on upgrade to VCS 6.0 or later [2850927]
        22. NIC resource may fault during group offline or failover on Solaris 11 [2754172]
        23. NFS client reports error when server is brought down using shutdown command [2872741]
        24. NFS client reports I/O error because of network split brain [3257399]
        25. Mount resource does not support spaces in the MountPoint and BlockDevice attribute values [3335304]
        26. IP Agent fails to detect the online state for the resource in an exclusive-IP zone [3592683]
        27. SFCache Agent fails to enable caching if cache area is offline [3644424]
        28. RemoteGroup agent may stop working on upgrading the remote cluster in secure mode [3648886]
        29. (Solaris 11 x64) Application does not come online after the ESX server crashes or is isolated [3838654]
        30. (Solaris 11 x64) Application may not failover when a cable is pulled off from the ESX host [3842833]
        31. (Solaris 11 x64) Disk may not be visible on VM even after the VMwareDisks resource is online [3838644]
        32. (Solaris 11 x64) Virtual machine may hang when the VMwareDisks resource is trying to come online [3849480]
        33. SambaServer agent does not come online after upgrading to Oracle Solaris x86 SRU 11.3.15.4.0 (3915235)
      4. Issues related to the VCS database agents
        1. VCS ASMDG resource status does not match the Oracle ASMDG resource status (3962416)
        2. ASMDG agent does not go offline if the management DB is running on the same (3856460)
        3. ASMDG on a particular node does not go offline if its instances are being used by other database instances (3856450)
        4. Sometimes ASMDG reports as offline instead of faulted (3856454)
        5. Netlsnr agent monitoring cannot detect tnslsnr running on Solaris if the entire process name exceeds 79 characters [3784547]
        6. The ASMInstAgent does not support having pfile/spfile for the ASM Instance on the ASM diskgroups
        7. VCS agent for ASM: Health check monitoring is not supported for ASMInst agent
        8. NOFAILOVER action specified for certain Oracle errors
        9. ASMInstance resource monitoring offline resource configured with OHASD as application resource logs error messages in VCS logs [2846945]
        10. Oracle agent fails to offline pluggable database (PDB) resource with PDB in backup mode [3592142]
        11. Clean succeeds for PDB even as PDB status is UNABLE to OFFLINE [3609351]
        12. Second level monitoring fails if user and table names are identical [3594962]
        13. Monitor entry point times out for Oracle PDB resources when CDB is moved to suspended state in Oracle 12.1.0.2 [3643582]
        14. Oracle agent fails to come online and monitor Oracle instance if threaded_execution parameter is set to true (3644425)
      5. Issues related to the agent framework
        1. The agent framework does not detect if service threads hang inside an entry point [1442255]
        2. IMF related error messages while bringing a resource online and offline [2553917]
        3. Delayed response to VCS commands observed on nodes with several resources and system has high CPU usage or high swap usage [3208239]
        4. CFSMount agent may fail to heartbeat with VCS engine and logs an error message in the engine log on systems with high memory load [3060779]
        5. Logs from the script executed other than the agent entry point go into the engine logs [3547329]
        6. VCS fails to process the hares -add command resource if the resource is deleted and subsequently added just after the VCS process or the agent's process starts (3813979)
      6. Issues related to Intelligent Monitoring Framework (IMF)
        1. Registration error while creating a Firedrill setup [2564350]
        2. IMF does not fault zones if zones are in ready or down state [2290883]
        3. IMF does not detect the zone state when the zone goes into a maintenance state [2535733]
        4. IMF does not provide notification for a registered disk group if it is imported using a different name (2730774)
        5. Direct execution of linkamf displays syntax error [2858163]
        6. Error messages displayed during reboot cycles [2847950]
        7. Error message displayed when ProPCV prevents a process from coming ONLINE to prevent concurrency violation does not have I18N support [2848011]
        8. AMF displays StartProgram name multiple times on the console without a VCS error code or logs [2872064]
        9. VCS engine shows error for cancellation of reaper when Apache agent is disabled [3043533]
        10. Terminating the imfd daemon orphans the vxnotify process [2728787]
        11. Agent cannot become IMF-aware with agent directory and agent file configured [2858160]
        12. ProPCV fails to prevent a script from running if it is run with relative path [3617014]
      7. Issues related to global clusters
        1. The engine log file receives too many log messages on the secure site in global cluster environments [1919933]
        2. Application group attempts to come online on primary site before fire drill service group goes offline on the secondary site (2107386)
      8. Issues related to the Cluster Manager (Java Console)
        1. Some Cluster Manager features fail to work in a firewall setup [1392406]
      9. VCS Cluster Configuration wizard issues
        1. IPv6 verification fails while configuring generic application using VCS Cluster Configuration wizard [3614680]
        2. InfoScale Enterprise: Unable to configure clusters through the VCS Cluster Configuration wizard (3911694)
        3. Cluster Configuration Wizard fails to configure a cluster due to missing telemetry data (4002133)
      10. LLT known issues
        1. Cannot configure LLT if full device path is not used in the llttab file (2858159)
        2. Fast link failure detection is not supported on Solaris 11 (2954267)
      11. I/O fencing known issues
        1. Fencing port b is visible for a few seconds even if cluster nodes have not registered with CP server (2415619)
        2. The cpsadm command fails if LLT is not configured on the application cluster (2583685)
        3. When I/O fencing is not up, the svcs command shows VxFEN as online (2492874)
        4. The vxfenswap utility does not detect failure of coordination points validation due to an RSH limitation (2531561)
        5. The vxfenswap utility deletes comment lines from the /etc/vxfenmode file, if you run the utility with the hacli option (3318449)
        6. The vxfentsthdw utility may not run on systems installed with partial SFHA stack [3333914]
        7. When a client node goes down, for reasons such as node panic, I/O fencing does not come up on that client node after node restart (3341322)
        8. The vxfenconfig -l command output does not list Coordinator disks that are removed using the vxdmpadm exclude dmpnodename=<dmp_disk/node> command [3644431]
        9. Stale .vxfendargs file lets hashadow restart vxfend in Sybase mode (2554886)
        10. CP server configuration fails while setting up secure credentials for CP server hosted on an SFHA cluster (2621029)
        11. The CoordPoint agent faults after you detach or reattach one or more coordination disks from a storage array (3317123)
      12. GAB known issues
        1. GAB may fail to stop during a phased upgrade on Oracle Solaris 11 (2858157)
        2. Cannot run pfiles or truss files on gablogd (2292294)
        3. (Oracle Solaris 11) On virtual machines, sometimes the common product installer (CPI) may report that GAB failed to start and may exit (2879262)
        4. During upgrade, GAB kernel module fails to unload [3560458]
    5. Storage Foundation and High Availability known issues
      1. Cache area is lost after a disk failure (3158482)
      2. NFS issues with VxFS Storage Checkpoints (2027492)
      3. Some SmartTier for Oracle commands do not work correctly in non-POSIX locales (2138030)
      4. In an IPv6 environment, db2icrt and db2idrop commands return a segmentation fault error during instance creation and instance removal (1602444)
      5. Not all the objects are visible in the VOM GUI (1821803)
      6. An error message is received when you perform off-host clone for RAC and the off-host node is not part of the CVM cluster (1834860)
      7. A volume's placement class tags are not visible in the Veritas Enterprise Administrator GUI when creating a dynamic storage tiering placement policy (1880081)
    6. Storage Foundation Cluster File System High Availability known issues
      1. Master node in an FSS cluster may panic or behave unexpectedly if 'vol_taskship' is set to 1 (4003796)
      2. On Solaris 11, the vxfen driver may panic the system after upgrading SFHA 6.2.1, or SFCFSHA 6.2.1, or later InfoScale versions to 7.4.2 (4003278)
      3. Older VxFS modules may fail to unload after upgrading an earlier InfoScale version to 7.4.2 on Solaris 11.4 (4003395)
      4. Transaction hangs when multiple plex-attach or add-mirror operations are triggered on the same volume (3969500)
      5. In an FSS environment, creation of mirrored volumes may fail for SSD media [3932494]
      6. Mount command may fail to mount the file system (3913246)
      7. After the local node restarts or panics, the FSS service group cannot be online successfully on the local node and the remote node when the local node is up again (3865289)
      8. In the FSS environment, if DG goes to the dgdisable state and deep volume monitoring is disabled, successive node joins fail with error 'Slave failed to create remote disk: retry to add a node failed' (3874730)
      9. DG creation fails with error "V-5-1-585 Disk group punedatadg: cannot create: SCSI-3 PR operation failed" on the VSCSI disks (3875044)
      10. CVMVOLDg agent does not go into the FAULTED state [3771283]
      11. CFS commands might hang when run by non-root (3038283)
      12. The fsappadm subfilemove command moves all extents of a file (3258678)
      13. Certain I/O errors during clone deletion may lead to system panic (3331273)
      14. Panic due to null pointer de-reference in vx_bmap_lookup() (3038285)
      15. In a CFS cluster that has a multi-volume file system of a small size, the fsadm operation may hang (3348520)
    7. Storage Foundation for Oracle RAC known issues
      1. Oracle RAC known issues
        1. Oracle Grid Infrastructure installation may fail with internal driver error
        2. During installation or system startup, Oracle Grid Infrastructure may fail to start
      2. Storage Foundation Oracle RAC issues
        1. Oracle database or grid installation using the product installer fails (4004808)
        2. ASM configuration fails if OCR and voting disk volumes are configured on VxFS or CFS for Oracle 19c during the grid installation (4003844)
        3. CSSD configuration fails if OCR and voting disk volumes are located on Oracle ASM (3914497)
        4. ASM disk groups configured with normal or high redundancy are dismounted if the CVM master panics due to network failure in FSS environment or if CVM I/O shipping is enabled (3600155)
        5. PrivNIC and MultiPrivNIC agents not supported with Oracle RAC 11.2.0.2 and later versions
        6. CSSD agent forcibly stops Oracle Clusterware if Oracle Clusterware fails to respond (3352269)
        7. Intelligent Monitoring Framework (IMF) entry point may fail when IMF detects resource state transition from online to offline for CSSD resource type (3287719)
        8. The vxconfigd daemon fails to start after machine reboot (3566713)
        9. Health check monitoring fails with policy-managed databases (3609349)
        10. CVMVolDg agent may fail to deport CVM disk group
        11. PrivNIC resource faults in IPMP environments on Solaris 11 systems (2838745)
        12. Warning message displayed on taking cssd resource offline if LANG attribute is set to "eucJP" (2123122)
        13. Error displayed on removal of VRTSjadba language package (2569224)
        14. Veritas Volume Manager cannot identify Oracle Automatic Storage Management (ASM) disks (2771637)
        15. vxdisk resize from slave nodes fails with "Command is not supported for command shipping" error (3140314)
        16. Oracle Universal Installer fails to start on Solaris 11 systems (2784560)
        17. CVM requires the T10 vendor provided ID to be unique (3191807)
        18. FSS Disk group creation with 510 exported disks from master fails with Transaction locks timed out error (3311250)
        19. vxdisk export operation fails if length of hostprefix and device name exceeds 30 characters (3543668)
        20. Change in naming scheme is not reflected on nodes in an FSS environment (3589272)
        21. When you upgrade SFRAC version from 6.2.1 to 7.2 on Solaris 11 Update 2, the vxglm process fails to stop [3876778]
    8. Storage Foundation for Databases (SFDB) tools known issues
      1. Clone operations fail for instant mode snapshot (3916053)
      2. Sometimes SFDB may report the following error message: SFDB remote or privileged command error (2869262)
      3. SFDB commands do not work in IPV6 environment (2619958)
      4. When you attempt to move all the extents of a table, the dbdst_obj_move(1M) command fails with an error (3260289)
      5. Attempt to use SmartTier commands fails (2332973)
      6. Attempt to use certain names for tiers results in error (2581390)
      7. Clone operation failure might leave clone database in unexpected state (2512664)
      8. Clone command fails if PFILE entries have their values spread across multiple lines (2844247)
      9. Data population fails after datafile corruption, rollback, and restore of offline checkpoint (2869259)
      10. Flashsnap clone fails under some unusual archivelog configuration on RAC (2846399)
      11. vxdbd process is online after Flash archive installation (2869269)
      12. On Solaris 11.1 SPARC, setting up the user-authentication process using the sfae_auth_op command fails with an error message (3556996)
      13. In the cloned database, the seed PDB remains in the mounted state (3599920)
      14. Cloning of a container database may fail after a reverse resync commit operation is performed (3509778)
      15. If one of the PDBs is in the read-write restricted state, then cloning of a CDB fails (3516634)
      16. Cloning of a CDB fails for point-in-time copies when one of the PDBs is in the read-only mode (3513432)
      17. If a CDB has a tablespace in the read-only mode, then the cloning fails (3512370)
      18. SFDB commands fail when an SFDB installation with authentication configured is upgraded to InfoScale 7.4.2 (3644030)
      19. Benign message displayed upon execution of vxsfadm -a oracle -s filesnap -o destroyclone (3901533)

Trigger does not get executed when there is more than one leading or trailing slash in the triggerpath [2368061]

The path specified in the TriggerPath attribute must not contain more than one leading or trailing '/' character.

Workaround: Remove the extra leading or trailing '/' characters from the path.
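
The following minimal Python sketch (not part of the product) illustrates the workaround: it normalizes a candidate path so that it keeps at most one leading '/' and no trailing '/' characters before you assign it to the TriggerPath attribute. The helper name normalize_trigger_path and the example path are illustrative only.

    # Minimal sketch: keep at most one leading '/' and drop trailing '/'
    # characters from a candidate TriggerPath value.
    def normalize_trigger_path(path: str) -> str:
        """Collapse extra leading slashes to one and strip trailing slashes."""
        had_leading_slash = path.startswith("/")
        trimmed = path.strip("/")  # removes all leading and trailing '/' characters
        return ("/" + trimmed) if had_leading_slash else trimmed

    if __name__ == "__main__":
        # "//opt/VRTSvcs/bin/triggers//" -> "/opt/VRTSvcs/bin/triggers"
        print(normalize_trigger_path("//opt/VRTSvcs/bin/triggers//"))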