Storage Foundation and High Availability Solutions 7.4.2 Solutions Guide - Windows
- Section I. Introduction
- Introducing Storage Foundation and High Availability Solutions
- Using the Solutions Configuration Center
- SFW best practices for storage
- Section II. Quick Recovery
- Section III. High Availability
- High availability: Overview
- How VCS monitors storage components
- Deploying InfoScale Enterprise for high availability: New installation
- Notes and recommendations for cluster and application configuration
- Configuring disk groups and volumes
- Configuring the cluster using the Cluster Configuration Wizard
- About modifying the cluster configuration
- About installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- About configuring the Oracle service group using the wizard
- Modifying the application service groups
- Adding DMP to a clustering configuration
- Section IV. Campus Clustering
- Introduction to campus clustering
- Deploying InfoScale Enterprise for campus cluster
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- Configuring the cluster using the Cluster Configuration Wizard
- Creating disk groups and volumes
- Installing the application on cluster nodes
- Section V. Replicated Data Clusters
- Introduction to Replicated Data Clusters
- Deploying Replicated Data Clusters: New application installation
- Notes and recommendations for cluster and application configuration
- Configuring the cluster using the Cluster Configuration Wizard
- Configuring disk groups and volumes
- Installing and configuring the application or server role
- Configuring the service group
- About configuring file shares
- About configuring IIS sites
- About configuring applications using the Application Configuration Wizard
- Configuring a RVG service group for replication
- Configuring the resources in the RVG service group for RDC replication
- Configuring the VMDg or VMNSDg resources for the disk groups
- Configuring the RVG Primary resources
- Adding the nodes from the secondary zone to the RDC
- Verifying the RDC configuration
- Section VI. Disaster Recovery
- Disaster recovery: Overview
- Deploying disaster recovery: New application installation
- Notes and recommendations for cluster and application configuration
- Reviewing the configuration
- About managing disk groups and volumes
- Setting up the secondary site: Configuring SFW HA and setting up a cluster
- Setting up your replication environment
- About configuring disaster recovery with the DR wizard
- Installing and configuring the application or server role (secondary site)
- Configuring replication and global clustering
- Configuring the global cluster option for wide-area failover
- Possible task after creating the DR environment: Adding a new failover node to a Volume Replicator environment
- Maintaining: Normal operations and recovery procedures (Volume Replicator environment)
- Testing fault readiness by running a fire drill
- About the Fire Drill Wizard
- Prerequisites for a fire drill
- Preparing the fire drill configuration
- Deleting the fire drill configuration
- Section VII. Microsoft Clustering Solutions
- Microsoft clustering solutions overview
- Deploying SFW with Microsoft failover clustering
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating SFW disk groups and volumes
- Implementing a dynamic quorum resource
- Deploying SFW with Microsoft failover clustering in a campus cluster
- Reviewing the configuration
- Establishing a Microsoft failover cluster
- Tasks for installing InfoScale Foundation or InfoScale Storage for Microsoft failover clustering
- Creating disk groups and volumes
- Implementing a dynamic quorum resource
- Installing the application on the cluster nodes
- Deploying SFW and VVR with Microsoft failover clustering
- Part 1: Setting up the cluster on the primary site
- Reviewing the prerequisites and the configuration
- Part 2: Setting up the cluster on the secondary site
- Part 3: Adding the Volume Replicator components for replication
- Part 4: Maintaining normal operations and recovery procedures
- Section VIII. Server Consolidation
- Server consolidation overview
- Server consolidation configurations
- Typical server consolidation configuration
- Server consolidation configuration 1 - many to one
- Server consolidation configuration 2 - many to two: Adding clustering and DMP
- About this configuration
- SFW features that support server consolidation
Reviewing the configuration
This configuration overview highlights the active/passive high availability within a cluster and disaster recovery between two sites. In an active/passive configuration, one or more application virtual servers can exist in a cluster, but each server must be managed by a service group configured with a distinct set of nodes in the cluster.
Active/passive clusters provide one-to-one failover capabilities. For example, with two nodes on each site (SYSTEM1 and SYSTEM2 on the primary site, SYSTEM5 and SYSTEM6 on the secondary site), SYSTEM1 can fail over to SYSTEM2, and SYSTEM5 can fail over to SYSTEM6. The figure that follows illustrates the cluster configuration on the primary site. For a view of the configuration that includes both sites, refer to the illustration in the topic:
See About a disaster recovery solution.
This configuration does not include DMP. For information about DMP and clustering:
See Overview of configuration tasks for adding DMP DSMs.
The following are some other key points about the configuration:
A Microsoft failover cluster must be running before you install InfoScale Storage.
Installing InfoScale Storage requires a reboot, and rebooting the active cluster node causes a failover. Veritas therefore recommends a "rolling install" procedure: install InfoScale Storage on the inactive cluster node first, move the cluster resources over to that node, and then install on the node that is now inactive.
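The rolling install sequence can be sketched with the Microsoft FailoverClusters PowerShell module. The node names (SYSTEM1, SYSTEM2) and the group name "AppGroup" are placeholders for your environment; run the InfoScale Storage installer itself as documented for your release.

```shell
# Rolling install sketch. Assumptions: the FailoverClusters PowerShell
# module is available, SYSTEM1 is currently active, SYSTEM2 is passive,
# and "AppGroup" stands in for your clustered resource group name.

# 1. Confirm the failover cluster is running before installing
#    InfoScale Storage.
Get-Cluster
Get-ClusterNode            # both nodes should report state "Up"

# 2. Install InfoScale Storage on the passive node (SYSTEM2) and let it
#    reboot. (Installer invocation omitted; use the product installer.)

# 3. Move the clustered resources to the freshly installed node:
Move-ClusterGroup -Name "AppGroup" -Node SYSTEM2

# 4. Install InfoScale Storage on SYSTEM1, which is now passive, and let
#    it reboot. Because the rebooting node is always the passive one, no
#    unplanned failover occurs during the installation.
```

The point of the ordering is that the reboot required by the installer only ever happens on a node that holds no online cluster resources.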
SFW adds the advantage of the dynamic mirrored quorum. The quorum resource maintains the cluster database and critical recovery information in a recovery log.
Microsoft clustering supports only a basic physical disk as the quorum resource and does not enable you to mirror it. One advantage of SFW is that it provides a dynamic mirrored quorum resource for Microsoft clustering: if a quorum disk fails, a mirror on another disk (another plex) takes over and the resource remains online. For this configuration, Veritas recommends creating a three-way mirror for the quorum to provide additional fault tolerance. If possible, do not use the disks assigned to the quorum for any other purpose.
After InfoScale Storage is installed on the cluster nodes, the next task is to create one or more cluster disk groups with SFW and set up the volumes for your application. At the same time, you can create the disk group and mirrored volume for the dynamic quorum resource.
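The disk group and quorum volume setup described above can be done in the VEA GUI or from the SFW command line. The following is a sketch only: the disk, disk group, and volume names are examples, and you should verify the exact `vxdg` and `vxassist` syntax (including how to mark a disk group as a cluster disk group) against the SFW command-line reference for your release.

```shell
# Sketch, not verified syntax -- names below (QuorumDG, AppDG, Harddisk3,
# etc.) are examples for illustration.

# Create a disk group for the quorum from three disks, so that the
# recommended three-way mirrored quorum volume can be built on it:
vxdg -gQuorumDG init Harddisk3 Harddisk4 Harddisk5

# Create a small three-way mirrored volume for the dynamic quorum resource:
vxassist -gQuorumDG make QuorumVol 500M mirror=3

# Create a separate disk group and a mirrored volume for application data,
# keeping the quorum disks dedicated to the quorum:
vxdg -gAppDG init Harddisk6 Harddisk7
vxassist -gAppDG make AppVol 50G mirror=2
```

Keeping the quorum in its own disk group matches the note below: each cluster owns its quorum, so the quorum disk group is never replicated between sites.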
The quorum disk group on each site does not get replicated because each cluster has its own quorum.