Veritas InfoScale™ 7.4.2 Solutions in Cloud Environments

Last Published:
Product(s): InfoScale & Storage Foundation (7.4.2)
Platform: Linux, Windows
  1. Overview and preparation
    1. Overview of InfoScale solutions in cloud environments
    2. InfoScale agents for monitoring resources in cloud environments
    3. InfoScale feature for storage sharing in cloud environments
    4. About SmartIO in AWS environments
    5. Preparing for InfoScale installations in cloud environments
    6. Installing the AWS CLI package
    7. VPC security groups example
  2. Configurations for Amazon Web Services - Linux
    1. Replication configurations in AWS - Linux
      1. Replication from on-premises to AWS - Linux
      2. Replication across AZs within an AWS region - Linux
      3. Replication across AWS regions - Linux
      4. Replication across multiple AWS AZs and regions (campus cluster) - Linux
    2. HA and DR configurations in AWS - Linux
      1. Failover within a subnet of an AWS AZ using virtual private IP - Linux
      2. Failover across AWS subnets using overlay IP - Linux
      3. Public access to InfoScale cluster nodes in AWS using elastic IP - Linux
      4. DR from on-premises to AWS and across AWS regions or VPCs - Linux
  3. Configurations for Amazon Web Services - Windows
    1. Replication configurations in AWS - Windows
      1. Replication from on-premises to AWS - Windows
      2. Replication across AZs in an AWS region - Windows
      3. Replication across AWS regions - Windows
    2. HA and DR configurations in AWS - Windows
      1. Failover within a subnet of an AWS AZ using virtual private IP - Windows
      2. Failover across AWS subnets using overlay IP - Windows
      3. Public access to InfoScale cluster nodes in AWS using elastic IP - Windows
      4. DR from on-premises to AWS and across AWS regions or VPCs - Windows
      5. DR from on-premises to AWS - Windows
  4. Configurations for Microsoft Azure - Linux
    1. Replication configurations in Azure - Linux
      1. Replication from on-premises to Azure - Linux
      2. Replication within an Azure region - Linux
      3. Replication across Azure regions - Linux
      4. Replication across multiple Azure sites and regions (campus cluster) - Linux
      5. About identifying a temporary resource disk - Linux
    2. HA and DR configurations in Azure - Linux
      1. Failover within an Azure subnet using private IP - Linux
      2. Failover across Azure subnets using overlay IP - Linux
      3. Public access to cluster nodes in Azure using public IP - Linux
      4. DR from on-premises to Azure and across Azure regions or VNets - Linux
  5. Configurations for Microsoft Azure - Windows
    1. Replication configurations in Azure - Windows
      1. Replication from on-premises to Azure - Windows
      2. Replication within an Azure region - Windows
      3. Replication across Azure regions - Windows
    2. HA and DR configurations in Azure - Windows
      1. Failover within an Azure subnet using private IP - Windows
      2. Failover across Azure subnets using overlay IP - Windows
      3. Public access to cluster nodes in Azure using public IP - Windows
      4. DR from on-premises to Azure and across Azure regions or VNets - Windows
  6. Configurations for Google Cloud Platform - Linux
    1. Replication configurations in GCP - Linux
      1. Replication across GCP regions - Linux
      2. Replication across multiple GCP zones and regions (campus cluster) - Linux
    2. HA and DR configurations in GCP - Linux
      1. Failover within a subnet of a GCP zone using virtual private IP - Linux
      2. Failover across GCP subnets using overlay IP - Linux
      3. DR across GCP regions or VPC networks - Linux
      4. Shared storage within a GCP zone or across GCP zones - Linux
  7. Configurations for Google Cloud Platform - Windows
    1. Replication configurations in GCP - Windows
      1. Replication from on-premises to GCP - Windows
      2. Replication across zones in a GCP region - Windows
      3. Replication across GCP regions - Windows
    2. HA and DR configurations in GCP - Windows
      1. Failover within a subnet of a GCP zone using virtual private IP - Windows
      2. Failover across GCP subnets using overlay IP - Windows
      3. DR across GCP regions or VPC networks - Windows
  8. Replication to and across cloud environments
    1. Data replication in supported cloud environments
    2. Supported replication scenarios
    3. Setting up replication across AWS and Azure environments
  9. Migrating files to the cloud using Cloud Connectors
    1. About cloud connectors
    2. About InfoScale support for cloud connectors
    3. How InfoScale migrates data using cloud connectors
    4. Limitations for file-level tiering
    5. About operations with Amazon Glacier
    6. Migrating data from on-premises to cloud storage
    7. Reclaiming object storage space
    8. Removing a cloud volume
    9. Examining in-cloud storage usage
    10. Sample policy file
    11. Replication support with cloud tiering
  10. Troubleshooting issues in cloud deployments
    1. In an Azure environment, exporting a disk for Flexible Storage Sharing (FSS) may fail with "Disk not supported for FSS operation" error

InfoScale feature for storage sharing in cloud environments

InfoScale supports Flexible Storage Sharing (FSS) in cloud environments for a cluster that is located within a single region. The nodes in the cluster may be located within the same zone or across zones (Availability Zones in AWS, user-defined sites in Azure). FSS leverages cloud block storage to provide shared storage capability.

Storage devices that are under VxVM control are prefixed with the private IP address of the node. You can override the default behavior with the vxdctl set hostprefix command. For details, see the Storage Foundation Cluster File System High Availability Administrator's Guide - Linux.
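For example, you might set a custom host prefix as follows. The prefix value below is illustrative, and the exact option syntax should be verified against the Administrator's Guide for your release:

```
# Replace the default private-IP prefix with a custom host prefix
# (the value "node01" is an illustrative placeholder)
vxdctl set hostprefix=node01

# Display the current node configuration to verify the setting
vxdctl list
```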

In cloud environments, FSS in campus cluster configurations can be used as a disaster recovery mechanism across data centers within a single region. For example, in AWS, the nodes within one AZ can be configured as one campus cluster site, while the nodes in another AZ can be configured as the second site. For details, see the Veritas InfoScale Disaster Recovery Implementation Guide - Linux.

Figure: Typical FSS configuration in a supported cloud environment


Note:

(Azure only) By default, every virtual machine that is provisioned contains a temporary resource disk in addition to the storage disks that you attach. Do not use the temporary resource disk as a data disk. It is ephemeral storage and must not be used for persistent data; the disk may change after the machine is redeployed or restarted, and any data on it is lost.

See About identifying a temporary resource disk - Linux.

For details on how Azure uses a temporary disk, see the Microsoft Azure documentation.
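As a quick sanity check before assigning disks to FSS, you can distinguish the ephemeral resource disk from attached data disks. On many Azure-endorsed Linux images, platform udev rules expose stable symlinks under /dev/disk/azure/; the paths and mount point below depend on those rules and the image in use, and are shown as assumptions rather than guarantees:

```
# List Azure-provided device symlinks; "resource" points to the temporary disk,
# while attached data disks appear under /dev/disk/azure/scsi1/
ls -l /dev/disk/azure/

# The temporary resource disk is commonly mounted at /mnt/resource (or /mnt)
mount | grep -i resource
```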

Note:

(GCP only) When VCS is stopped and started on VM instances, or after a node restarts, the import and recovery operations on FSS disk groups may take longer than expected. The master node cannot import a disk group until all the nodes have joined the cluster, and some nodes may join with a delay, so the import operation takes longer to succeed. Even if the master initially fails to import the disk group because of such a delay, the operation completes successfully on a later retry.

Considerations for LLT in Azure and GCP

FSS in cloud environments is supported with LLT over UDP only.

The MTU size of a network path in Azure and in GCP is 1500 bytes by default, and it cannot be changed. On such slow networks, LLT uses a single UDP socket for each high-priority link.

To achieve better LLT performance in such high-latency cloud networks:

  • Set the following tunable values before you start LLT or the LLT services:

    • set-flow window:10

    • set-flow highwater:10000

    • set-flow lowwater:8000

    • set-flow rporthighwater:10000

    • set-flow rportlowwater:8000

    • set-flow ackval:5

    • set-flow linkburst:32

  • Disable the LLT adaptive window in Azure and in GCP by setting the following value in the /etc/sysconfig/llt file:

    LLT_ENABLE_AWINDOW=0
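The flow-control tunables listed above are set-flow directives in the LLT configuration; they are typically appended to the /etc/llttab file on each node so that they take effect when LLT starts. A minimal sketch of the lines to add, assuming an otherwise complete LLT-over-UDP configuration already exists in the file:

```
# /etc/llttab additions: flow-control tuning for high-latency cloud networks
set-flow window:10
set-flow highwater:10000
set-flow lowwater:8000
set-flow rporthighwater:10000
set-flow rportlowwater:8000
set-flow ackval:5
set-flow linkburst:32
```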

For details on the usage of these tunables, refer to the Cluster Server Administrator's Guide.