Veritas InfoScale™ 7.4.2 Release Notes - Solaris
- Introduction
- Requirements
- Changes introduced in 7.4.2
- Fixed issues
- Limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Cluster Manager (Java console) limitations
- Limitations related to LLT
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Known issues
- Issues related to installation and upgrade
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Issues related to installation and upgrade
The cpsadm command fails if LLT is not configured on the application cluster (2583685)
The cpsadm command fails to communicate with the coordination point server (CP server) if LLT is not configured on the application cluster node where you run the cpsadm command. You may see errors similar to the following:
# cpsadm -s 10.209.125.200 -a ping_cps
CPS ERROR V-97-1400-729 Please ensure a valid nodeid using
environment variable CPS_NODEID
CPS ERROR V-97-1400-777 Client unable to communicate with CPS.
However, if you run the cpsadm command on the CP server, this issue does not arise even if LLT is not configured on the node that hosts the CP server. The cpsadm command on the CP server node always assumes an LLT node ID of 0 if LLT is not configured.
According to the protocol between the CP server and the application cluster, when you run the cpsadm command on an application cluster node, cpsadm must send the LLT node ID of the local node to the CP server. If LLT is temporarily unconfigured, or if the node is part of a single-node VCS configuration where LLT is not configured, the cpsadm command cannot retrieve the LLT node ID. In such situations, the cpsadm command fails.
Workaround: Set the value of the CPS_NODEID environment variable to 255. The cpsadm command reads the CPS_NODEID variable and proceeds if it is unable to get the LLT node ID from LLT.
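For example, on an application cluster node where LLT is not configured, you can export the variable in the same shell session before running the command again. The CP server address below is the same example address shown in the error output above; substitute the address of your CP server:
# CPS_NODEID=255
# export CPS_NODEID
# cpsadm -s 10.209.125.200 -a ping_cps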