Cluster Server 7.4.2 Configuration Guide for Custom Applications - Windows
- Introducing the Veritas High Availability solution for VMware
- Configuring application monitoring using the Veritas High Availability solution
- Administering application monitoring
- About the various interfaces available for performing application monitoring tasks
- Administering application monitoring using the Veritas High Availability tab
- Understanding the Veritas High Availability tab work area
- To configure or unconfigure application monitoring
- To start or stop applications
- To switch an application to another system
- To add or remove a failover system
- To suspend or resume application monitoring
- To clear Fault state
- To resolve a held-up operation
- To determine application state
- To remove all monitoring configurations
- To remove VCS cluster configurations
- Administering application monitoring settings
- Administering application availability using Veritas High Availability dashboard
- Understanding the dashboard work area
- Monitoring applications across a data center
- Monitoring applications across an ESX cluster
- Searching for application instances by using filters
- Selecting multiple applications for batch operations
- Starting an application using the dashboard
- Stopping an application by using the dashboard
- Entering an application into maintenance mode
- Bringing an application out of maintenance mode
- Switching an application
- Resolving dashboard alerts
- Appendix A. Troubleshooting
- Troubleshooting application monitoring configuration issues
- Veritas High Availability Configuration Wizard displays the "hadiscover is not recognized as an internal or external command" error
- Running the 'hastop -all' command detaches virtual disks
- Validation may fail when you add a failover system
- Adding a failover system may fail if you configure a cluster with communication links over UDP
- Troubleshooting Veritas High Availability view issues
- Veritas High Availability tab not visible from a cluster node
- Veritas High Availability tab does not display the application monitoring status
- Veritas High Availability tab may freeze due to special characters in application display name
- Veritas High Availability view may fail to load or refresh
- Operating system commands to unmount a resource may fail
Managing storage
Configure the storage disks to save the application data.
In a VMware virtualization environment, application data is stored either on SAN LUNs presented as raw device mappings (RDM files), or on virtual disks created on local or networked storage that is attached to the ESX host over iSCSI, the network, or Fibre Channel. The virtual disks reside on a datastore or on a raw disk on the underlying storage.
For more information, refer to the VMware documentation.
To configure application monitoring in a VMware environment, the disks must use the RDM or VMDK format. During a failover, these disks can be deported from one system and imported on another.
Consider the following guidelines when you manage the storage disks:
- Use networked storage, and create the virtual disks on datastores that are accessible to all the ESX servers that host the VCS cluster systems.
- In the case of virtual disks, create non-shared virtual disks (Thick Provision Lazy Zeroed).
- Add the virtual disks to the virtual machine on which you want to start the configured application.
- Create volumes on the virtual disks (see the sketch after this list).
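For example, if the storage is managed using LDM, you might bring a newly attached virtual disk online and create an NTFS volume on it with a diskpart script (run with diskpart /s). The following is a minimal sketch; the disk number, volume label, and drive letter are placeholders for illustration. SFW-managed disks are instead typically added to a dynamic disk group, with the volumes created through the SFW console.

    rem Minimal diskpart script; hypothetical values: disk 1, label AppData, letter E:
    select disk 1
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs label="AppData" quick
    assign letter=E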
Note:
If your storage configuration involves NetApp filers that are directly connected to the systems using the iSCSI initiator, you cannot configure application monitoring in a virtual environment with non-shared disks.
The following VCS storage agents are used to monitor the storage components involving non-shared storage:
- If the storage is managed using SFW, the MountV, VMNSDg, and VMwareDisks agents are used.
- If the storage is managed using LDM, the Mount, NativeDisks, and VMwareDisks agents are used.
Before configuring the storage, you can review the resource types and attribute definitions of these VCS storage agents. For details, refer to the Cluster Server Bundled Agents Reference Guide.
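As an illustration, a typical resource dependency chain for SFW-managed, non-shared storage might look like the following main.cf sketch. All resource names and attribute values here are hypothetical placeholders, and the attribute lists are abbreviated; the authoritative attribute definitions are in the Cluster Server Bundled Agents Reference Guide.

    // Hypothetical sketch: the MountV resource depends on the VMNSDg resource,
    // which depends on the VMwareDisks resource, so that during a failover the
    // virtual disks are attached to the target system before the disk group is
    // imported and the volume is mounted.
    VMwareDisks AppSG_VMwareDisks (
        ESXDetails = { "esxhost1.example.com" = "root=encrypted_password" }
        DiskPaths = { "[datastore1] vm1/vm1_1.vmdk" = "0:1" }
        )
    VMNSDg AppSG_DG (
        DiskGroupName = AppDG
        DGGuid = "{00000000-0000-0000-0000-000000000000}"  // placeholder GUID
        )
    MountV AppSG_MountV (
        MountPath = "E:"
        VolumeName = AppVolume
        VMDGResName = AppSG_DG
        )

    AppSG_MountV requires AppSG_DG
    AppSG_DG requires AppSG_VMwareDisks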