Storage Foundation and High Availability Solutions 7.4.2 Solutions Guide - Windows
Overview of campus clustering with VCS
This overview focuses on recovery in a VCS campus cluster. Automated recovery is handled differently in a VCS campus cluster than in a VCS local cluster.
The following table lists failure situations and the outcomes that occur with the two settings of the ForceImport attribute of the VMDg resource. This attribute can be set to 1 (automatically force the import of the disk groups to another node) or 0 (do not force the import).
For information on how to set the ForceImport attribute, see Setting the ForceImport attribute.
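For example, the current setting can be checked and changed from the VCS command line on a cluster node. The following is a minimal sketch that assumes a VMDg resource named SQL_VMDg; substitute the resource name from your own service group.

```
rem Display the current ForceImport setting of the VMDg resource
rem (SQL_VMDg is an assumed resource name, used here for illustration only)
hares -display SQL_VMDg -attribute ForceImport

rem Make the configuration writable, allow forced imports, and save the change
haconf -makerw
hares -modify SQL_VMDg ForceImport 1
haconf -dump -makero
```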
Table: Failure situations
| Failure situation | ForceImport set to 0 (import not forced) | ForceImport set to 1 (automatic force import) |
|---|---|---|
| 1. Application fault: the services for an application stopped, a NIC failed, or a database table went offline. | The application automatically moves to the other site. | Service Group failover is automatic to the standby or preferred system or node. |
| 2. Server failure: a power cord was unplugged, a system hang occurred, or another failure caused the system to stop responding. | The application automatically moves to the other site. 100% of the disks are still available. | Service Group failover is automatic to the standby or preferred system or node. 100% of the mirrored disks are still available. |
| 3. Failure of the disk array or of all disks: the remaining disks in the mirror are still accessible from the other site. | No interruption of service. The remaining disks in the mirror are still accessible from the other site. | The Service Group does not fail over. 50% of the mirrored disks are still available at the remaining site. |
| 4. Site failure: all access to the server and storage is lost. | Manual intervention is required to move the application; the disk group cannot be imported with only 50% of the disks available. | The application automatically moves to the other site. |
| 5. Split-brain situation (loss of both heartbeats): if the public network link is used as a low-priority heartbeat, that link is assumed to be lost as well. | No interruption of service. The disks cannot be imported because the original site still holds the SCSI reservation. | No interruption of service. Failover does not occur because the Service Group resources remain online on the original nodes (for example, the online node holds the SCSI reservation to its own disks). |
| 6. Storage interconnect lost: the Fibre Channel interconnect is severed. | No interruption of service. The disks on the same node keep functioning, but mirroring is not working. | No interruption of service. The Service Group resources remain online, but 50% of the mirrored disks become detached. |
| 7. Split-brain situation and storage interconnect lost: this can occur if a single pipe between buildings is used for both the Ethernet and storage links. | No interruption of service. The disk group cannot be imported with only 50% of the disks available. The disks on the same node keep functioning, but mirroring is not working. | The disk groups are automatically imported at the secondary site. Disks are now online in both locations, and data can be kept from only one of them. |
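As an illustration of the manual intervention required in failure situation 4 (site failure with ForceImport set to 0), the following sketch shows one way an administrator might temporarily permit a forced import and bring the service group online at the surviving site. The resource name SQL_VMDg, service group name SQL_SG, and node name SYSTEM2 are assumptions used for illustration only.

```
rem Site failure with ForceImport = 0: the disk group is not imported
rem automatically because only 50% of the disks are visible.
rem Temporarily allow a forced import (assumed resource: SQL_VMDg)
haconf -makerw
hares -modify SQL_VMDg ForceImport 1
haconf -dump -makero

rem Bring the service group online at the surviving site (assumed names: SQL_SG, SYSTEM2)
hagrp -online SQL_SG -sys SYSTEM2
```

Once the failed site is restored and the mirrors have resynchronized, ForceImport would normally be set back to 0 so that a later split-brain situation cannot trigger an import with only half of the disks.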