Veritas InfoScale™ 7.4.2 Release Notes - Solaris
- Introduction
- Requirements
- Changes introduced in 7.4.2
- Fixed issues
- Limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to LLT
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Known issues
- Issues related to installation and upgrade
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
Issues related to installation and upgrade
Restarting the vxconfigd daemon on the slave node after a disk is removed from all nodes may cause the disk groups to be disabled on the slave node (3591019)
This issue occurs when a disk loses storage connectivity from all the nodes of the cluster and the vxconfigd daemon is restarted on the slave node before the disk is detached on that node. All the disk groups then show the dgdisabled state on the slave node, but appear enabled on the other nodes.
If the disk is detached before the vxconfigd daemon is restarted, the issue does not occur.
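For example, one way to verify the condition is to list the disk groups on each node and compare the reported state; the affected slave node shows the disk groups as disabled while the other nodes show them as enabled:
# vxdg list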
In a Flexible Storage Sharing (FSS) environment, removing the storage connectivity on a node that contributes DAS storage to a shared disk group results in global connectivity loss because the storage is not connected elsewhere.
Workaround:
To prevent this issue:
If a disk in a shared disk group has lost connectivity to all the nodes in the cluster, make sure that the disk is in the detached state before you restart the vxconfigd daemon. If the disk needs to be detached, use the following command:
# vxdisk check diskname
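To confirm that the disk is in the detached state before you restart the vxconfigd daemon, you can display its status (diskname here is a placeholder for the affected disk):
# vxdisk list diskname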
To resolve the issue after it has occurred:
If the vxconfigd daemon was restarted before the disks were detached, remove the affected slave node from the cluster and then rejoin it to the cluster.
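For example, on a cluster where CVM is under VCS control, one way to remove and rejoin the slave node is to stop and restart the cluster framework locally on that node (a sketch, assuming a VCS-managed configuration; adapt it to your cluster setup):
# hastop -local
# hastart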