Storage Foundation and High Availability 7.4.2 Configuration and Upgrade Guide - Linux

Last Published:
Product(s): InfoScale & Storage Foundation (7.4.2)
Platform: Linux
  1. Section I. Introduction to SFHA
    1. Introducing Storage Foundation and High Availability
      1. About Storage Foundation High Availability
        1. About Veritas Replicator Option
      2. About Veritas InfoScale Operations Manager
      3. About Storage Foundation and High Availability features
        1. About LLT and GAB
        2. About I/O fencing
        3. About global clusters
      4. About Veritas Services and Operations Readiness Tools (SORT)
      5. About configuring SFHA clusters for data integrity
        1. About I/O fencing for SFHA in virtual machines that do not support SCSI-3 PR
        2. About I/O fencing components
          1. About data disks
          2. About coordination points
          3. About preferred fencing
  2. Section II. Configuration of SFHA
    1. Preparing to configure
      1. I/O fencing requirements
        1. Coordinator disk requirements for I/O fencing
        2. CP server requirements
        3. Non-SCSI-3 I/O fencing requirements
    2. Preparing to configure SFHA clusters for data integrity
      1. About planning to configure I/O fencing
        1. Typical SFHA cluster configuration with server-based I/O fencing
        2. Recommended CP server configurations
      2. Setting up the CP server
        1. Planning your CP server setup
        2. Installing the CP server using the installer
        3. Configuring the CP server cluster in secure mode
        4. Setting up shared storage for the CP server database
        5. Configuring the CP server using the installer program
        6. Configuring the CP server manually
          1. Configuring the CP server manually for HTTPS-based communication
          2. Generating the key and certificates manually for the CP server
          3. Completing the CP server configuration
        7. Configuring CP server using response files
          1. Response file variables to configure CP server
          2. Sample response file for configuring the CP server on single node VCS cluster
          3. Sample response file for configuring the CP server on SFHA cluster
        8. Verifying the CP server configuration
    3. Configuring SFHA
      1. Configuring Storage Foundation High Availability using the installer
        1. Overview of tasks to configure SFHA using the product installer
        2. Required information for configuring Storage Foundation and High Availability Solutions
        3. Starting the software configuration
        4. Specifying systems for configuration
        5. Configuring the cluster name
        6. Configuring private heartbeat links
        7. Configuring the virtual IP of the cluster
        8. Configuring SFHA in secure mode
        9. Configuring a secure cluster node by node
          1. Configuring the first node
          2. Configuring the remaining nodes
          3. Completing the secure cluster configuration
        10. Adding VCS users
        11. Configuring SMTP email notification
        12. Configuring SNMP trap notification
        13. Configuring global clusters
        14. Completing the SFHA configuration
          1. Verifying the NIC configuration
        15. About Veritas License Audit Tool
        16. Verifying and updating licenses on the system
          1. Checking licensing information on the system
          2. Replacing a SFHA keyless license with another keyless license
          3. Replacing a SFHA keyless license with a permanent license
      2. Configuring SFDB
    4. Configuring SFHA clusters for data integrity
      1. Setting up disk-based I/O fencing using installer
        1. Initializing disks as VxVM disks
        2. Checking shared disks for I/O fencing
          1. Verifying Array Support Library (ASL)
          2. Verifying that the nodes have access to the same disk
          3. Testing the disks using vxfentsthdw utility
        3. Configuring disk-based I/O fencing using installer
        4. Refreshing keys or registrations on the existing coordination points for disk-based fencing using the installer
      2. Setting up server-based I/O fencing using installer
        1. Refreshing keys or registrations on the existing coordination points for server-based fencing using the installer
        2. Setting the order of existing coordination points for server-based fencing using the installer
          1. About deciding the order of existing coordination points
          2. Setting the order of existing coordination points using the installer
      3. Setting up non-SCSI-3 I/O fencing in virtual environments using installer
      4. Setting up majority-based I/O fencing using installer
      5. Enabling or disabling the preferred fencing policy
    5. Manually configuring SFHA clusters for data integrity
      1. Setting up disk-based I/O fencing manually
        1. Removing permissions for communication
        2. Identifying disks to use as coordinator disks
        3. Setting up coordinator disk groups
        4. Creating I/O fencing configuration files
        5. Modifying VCS configuration to use I/O fencing
        6. Verifying I/O fencing configuration
      2. Setting up server-based I/O fencing manually
        1. Preparing the CP servers manually for use by the SFHA cluster
        2. Generating the client key and certificates manually on the client nodes
        3. Configuring server-based fencing on the SFHA cluster manually
          1. Sample vxfenmode file output for server-based fencing
        4. Configuring CoordPoint agent to monitor coordination points
        5. Verifying server-based I/O fencing configuration
      3. Setting up non-SCSI-3 fencing in virtual environments manually
        1. Sample /etc/vxfenmode file for non-SCSI-3 fencing
      4. Setting up majority-based I/O fencing manually
        1. Creating I/O fencing configuration files
        2. Modifying VCS configuration to use I/O fencing
        3. Verifying I/O fencing configuration
    6. Performing an automated SFHA configuration using response files
      1. Configuring SFHA using response files
      2. Response file variables to configure SFHA
      3. Sample response file for SFHA configuration
    7. Performing an automated I/O fencing configuration using response files
      1. Configuring I/O fencing using response files
      2. Response file variables to configure disk-based I/O fencing
      3. Sample response file for configuring disk-based I/O fencing
      4. Response file variables to configure server-based I/O fencing
        1. Sample response file for configuring server-based I/O fencing
      5. Sample response file for configuring non-SCSI-3 I/O fencing
      6. Response file variables to configure non-SCSI-3 I/O fencing
      7. Response file variables to configure majority-based I/O fencing
      8. Sample response file for configuring majority-based I/O fencing
  3. Section III. Upgrade of SFHA
    1. Planning to upgrade SFHA
      1. About the upgrade
      2. Supported upgrade paths
      3. Considerations for upgrading SFHA to 7.4.2 on systems configured with an Oracle resource
      4. Preparing to upgrade SFHA
        1. Getting ready for the upgrade
        2. Creating backups
        3. Determining if the root disk is encapsulated
        4. Pre-upgrade planning when VVR is configured
          1. Considerations for upgrading SFHA to 7.4 or later on systems with an ongoing or a paused replication
          2. Planning an upgrade from the previous VVR version
            1. Planning and upgrading VVR to use IPv6 as connection protocol
        5. Preparing to upgrade VVR when VCS agents are configured
          1. Freezing the service groups and stopping all the applications
            1. Determining the nodes on which disk groups are online
          2. Preparing for the upgrade when VCS agents are configured
        6. Upgrading the array support
      5. Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
    2. Upgrading Storage Foundation and High Availability
      1. Upgrading Storage Foundation and High Availability from previous versions to 7.4.2
        1. Upgrading Storage Foundation and High Availability using the product installer
      2. Upgrading Volume Replicator
        1. Upgrading VVR without disrupting replication
          1. Upgrading VVR on the Secondary
          2. Upgrading VVR on the Primary
      3. Upgrading SFDB
    3. Performing a rolling upgrade of SFHA
      1. About rolling upgrade
      2. Performing a rolling upgrade using the product installer
    4. Performing a phased upgrade of SFHA
      1. About phased upgrade
        1. Prerequisites for a phased upgrade
        2. Planning for a phased upgrade
        3. Phased upgrade limitations
        4. Phased upgrade example
        5. Phased upgrade example overview
      2. Performing a phased upgrade using the product installer
        1. Moving the service groups to the second subcluster
        2. Upgrading the operating system on the first subcluster
        3. Upgrading the first subcluster
        4. Preparing the second subcluster
        5. Activating the first subcluster
        6. Upgrading the operating system on the second subcluster
        7. Upgrading the second subcluster
        8. Finishing the phased upgrade
    5. Performing an automated SFHA upgrade using response files
      1. Upgrading SFHA using response files
      2. Response file variables to upgrade SFHA
      3. Sample response file for full upgrade of SFHA
      4. Sample response file for rolling upgrade of SFHA
    6. Performing post-upgrade tasks
      1. Optional configuration steps
      2. Re-joining the backup boot disk group into the current disk group
      3. Reverting to the backup boot disk group after an unsuccessful upgrade
      4. Recovering VVR if automatic upgrade fails
      5. Post-upgrade tasks when VCS agents for VVR are configured
        1. Unfreezing the service groups
        2. Restoring the original configuration when VCS agents are configured
        3. CVM master node needs to assume the logowner role for VCS managed VVR resources
      6. Resetting DAS disk names to include host name in FSS environments
      7. Upgrading disk layout versions
      8. Upgrading VxVM disk group versions
      9. Updating variables
      10. Setting the default disk group
      11. About enabling LDAP authentication for clusters that run in secure mode
        1. Enabling LDAP authentication for clusters that run in secure mode
      12. Verifying the Storage Foundation and High Availability upgrade
  4. Section IV. Post-installation tasks
    1. Performing post-installation tasks
      1. Switching on Quotas
      2. About configuring authentication for SFDB tools
        1. Configuring vxdbd for SFDB tools authentication
  5. Section V. Adding and removing nodes
    1. Adding a node to SFHA clusters
      1. About adding a node to a cluster
      2. Before adding a node to a cluster
      3. Adding a node to a cluster using the Veritas InfoScale installer
      4. Adding the node to a cluster manually
        1. Starting Veritas Volume Manager (VxVM) on the new node
        2. Configuring cluster processes on the new node
        3. Setting up the node to run in secure mode
          1. Setting up SFHA related security configuration
        4. Starting fencing on the new node
        5. Configuring the ClusterService group for the new node
      5. Adding a node using response files
        1. Response file variables to add a node to a SFHA cluster
        2. Sample response file for adding a node to a SFHA cluster
      6. Configuring server-based fencing on the new node
        1. Adding the new node to the vxfen service group
      7. After adding the new node
      8. Adding nodes to a cluster that is using authentication for SFDB tools
      9. Updating the Storage Foundation for Databases (SFDB) repository after adding a node
    2. Removing a node from SFHA clusters
      1. Removing a node from a SFHA cluster
        1. Verifying the status of nodes and service groups
        2. Deleting the departing node from SFHA configuration
        3. Modifying configuration files on each remaining node
        4. Removing the node configuration from the CP server
        5. Removing security credentials from the leaving node
        6. Unloading LLT and GAB and removing Veritas InfoScale Availability or Enterprise on the departing node
        7. Updating the Storage Foundation for Databases (SFDB) repository after removing a node
  6. Section VI. Configuration and upgrade reference
    1. Appendix A. Installation scripts
      1. Installation script options
      2. About using the postcheck option
    2. Appendix B. SFHA services and ports
      1. About InfoScale Enterprise services and ports
    3. Appendix C. Configuration files
      1. About the LLT and GAB configuration files
      2. About the AMF configuration files
      3. About the VCS configuration files
        1. Sample main.cf file for VCS clusters
        2. Sample main.cf file for global clusters
      4. About I/O fencing configuration files
      5. Sample configuration files for CP server
        1. Sample main.cf file for CP server hosted on a single node that runs VCS
        2. Sample main.cf file for CP server hosted on a two-node SFHA cluster
        3. Sample CP server configuration (/etc/vxcps.conf) file output
    4. Appendix D. Configuring the secure shell or the remote shell for communications
      1. About configuring secure shell or remote shell communication modes before installing products
      2. Manually configuring passwordless ssh
      3. Setting up ssh and rsh connection using the installer -comsetup command
      4. Setting up ssh and rsh connection using the pwdutil.pl utility
      5. Restarting the ssh session
      6. Enabling rsh for Linux
    5. Appendix E. Sample SFHA cluster setup diagrams for CP server-based I/O fencing
      1. Configuration diagrams for setting up server-based I/O fencing
        1. Two unique client clusters served by 3 CP servers
        2. Client cluster served by highly available CPS and 2 SCSI-3 disks
        3. Two node campus cluster served by remote CP server and 2 SCSI-3 disks
        4. Multiple client clusters served by highly available CP server and 2 SCSI-3 disks
    6. Appendix F. Configuring LLT over UDP
      1. Using the UDP layer for LLT
        1. When to use LLT over UDP
      2. Manually configuring LLT over UDP using IPv4
        1. Broadcast address in the /etc/llttab file
        2. The link command in the /etc/llttab file
        3. The set-addr command in the /etc/llttab file
        4. Selecting UDP ports
        5. Configuring the netmask for LLT
        6. Configuring the broadcast address for LLT
        7. Sample configuration: direct-attached links
        8. Sample configuration: links crossing IP routers
      3. Using the UDP layer of IPv6 for LLT
        1. When to use LLT over UDP
      4. Manually configuring LLT over UDP using IPv6
        1. Sample configuration: direct-attached links
        2. Sample configuration: links crossing IP routers
      5. About configuring LLT over UDP multiport
        1. Manually configuring LLT over UDP multiport
        2. Enabling LLT ports in firewall
        3. Disabling the UDP multiport feature
    7. Appendix G. Using LLT over RDMA
      1. Using LLT over RDMA
      2. About RDMA over RoCE or InfiniBand networks in a clustering environment
      3. How LLT supports RDMA capability for faster interconnects between applications
      4. Using LLT over RDMA: supported use cases
      5. Configuring LLT over RDMA
        1. Choosing supported hardware for LLT over RDMA
        2. Installing RDMA, InfiniBand or Ethernet drivers and utilities
        3. Configuring RDMA over an Ethernet network
          1. Enable RDMA over Converged Ethernet (RoCE)
          2. Configuring RDMA and Ethernet drivers
          3. Configuring IP addresses over Ethernet Interfaces
        4. Configuring RDMA over an InfiniBand network
          1. Configuring RDMA and InfiniBand drivers
          2. Configuring the OpenSM service
          3. Configuring IP addresses over InfiniBand Interfaces
        5. Tuning system performance
          1. Tuning the CPU frequency
          2. Tuning the boot parameter settings
        6. Manually configuring LLT over RDMA
          1. Broadcast address in the /etc/llttab file
          2. The link command in the /etc/llttab file
          3. Selecting UDP ports
          4. Configuring the netmask for LLT
          5. Sample configuration: direct-attached links
        7. LLT over RDMA sample /etc/llttab
        8. Verifying LLT configuration
      6. Troubleshooting LLT over RDMA
        1. IP addresses associated to the RDMA NICs do not automatically plumb on node restart
        2. Ping test fails for the IP addresses configured over InfiniBand interfaces
        3. After a node restart, by default the Mellanox card with Virtual Protocol Interconnect (VPI) gets configured in InfiniBand mode
        4. The LLT module fails to start

Performing a rolling upgrade using the product installer

Note:

Root Disk Encapsulation (RDE) is not supported on Linux from 7.3.1 onwards.

Before you start the rolling upgrade, make sure that Cluster Server (VCS) is running on all the nodes of the cluster.
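
For example, you can verify that VCS is running on all the nodes with the following command, run on any node (the summary output lists the state of each cluster node):

# hastatus -sum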

Stop all activity for all the VxVM volumes that are not under VCS control. For example, stop any applications such as databases that access the volumes, and unmount any file systems that have been created on the volumes. Then stop all the volumes.
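
For example, to stop all the volumes in a disk group that is not under VCS control, you can use a command like the following (the disk group name appdg is a hypothetical example):

# vxvol -g appdg stopall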

Unmount all VxFS file systems that are not under VCS control.

To perform a rolling upgrade

  1. Phase 1 of the rolling upgrade begins on the first subcluster. Complete the preparatory steps on the first subcluster.

    Unmount all VxFS file systems not under VCS control:

    # umount mount_point
  2. Complete updates to the operating system, if required.

    Make sure that the existing version of SFHA supports the operating system update you apply. If the existing version of SFHA does not support the operating system update, first upgrade SFHA to a version that supports the operating system update.

    For instructions, see the operating system documentation.

    Switch the applications to the remaining subcluster, and then upgrade the operating system of the first subcluster.

    The nodes are restarted after the operating system update.

  3. If a cache area is online, you must take the cache area offline before you upgrade the VxVM RPM. Use the following command to take the cache area offline:
    # sfcache offline cachename
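    If you are not sure which cache areas exist and whether they are online, you can list them with the sfcache utility (the output format varies by release):
    # sfcache list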
  4. Log in as superuser and mount the SFHA 7.4.2 installation media.
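    For example, assuming the installation media is a DVD and the mount point /mnt/cdrom already exists (the device and mount point are examples, not fixed values):
    # mount -o ro /dev/cdrom /mnt/cdrom
    # cd /mnt/cdrom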
  5. From the root directory of the installation media, start the installer.
    # ./installer
  6. From the menu, select Upgrade a Product and from the sub menu, select Rolling Upgrade.
  7. The installer suggests system names for the upgrade. Press Enter to upgrade the suggested systems, or enter the name of any one system in the cluster on which you want to perform a rolling upgrade and then press Enter.
  8. The installer checks system communications, release compatibility, version information, and lists the cluster name, ID, and cluster nodes. Type y to continue.
  9. The installer inventories the running service groups and determines the node or nodes to upgrade in phase 1 of the rolling upgrade. Type y to continue. If you choose to specify the nodes, type n and enter the names of the nodes.
  10. The installer performs further prechecks on the nodes in the cluster and may present warnings. You can type y to continue, or quit the installer and address the warnings that the prechecks report.
  11. Review the end-user license agreement, and type y if you agree to its terms.
  12. If the boot disk is encapsulated and mirrored, you can create a backup boot disk.

    If you choose to create a backup boot disk, type y. Provide a backup name for the boot disk group or accept the default name. The installer then creates a backup copy of the boot disk group.

  13. After the installer detects the online service groups, it prompts you to do one of the following:
    • Manually switch service groups

    • Use the CPI to automatically switch service groups

    The downtime is the time that the service group normally takes to fail over.

    Note:

    It is recommended that you manually switch the service groups. Automatic switching of service groups does not resolve dependency issues if any dependent resource is not under VCS control.
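
    For example, to manually switch a failover service group to a node in the subcluster that is not being upgraded (the group name appsg and system name sys3 are hypothetical):
    # hagrp -switch appsg -to sys3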

  14. The installer prompts you to stop the applicable processes. Type y to continue.

    The installer evacuates all service groups to the node or nodes that are not upgraded at this time. The installer stops parallel service groups on the nodes that are to be upgraded.

  15. The installer stops relevant processes, uninstalls old kernel RPMs, and installs the new RPMs. The installer asks if you want to update your licenses to the current version. Select Yes or No. Veritas recommends that you update your licenses to fully use the new features in the current release.
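    To review the licenses that are currently installed on a node, you can run the vxlicrep report utility (the output varies by release and license type):
    # vxlicrep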
  16. If the cluster is configured with Coordination Point Server-based fencing, the installer may prompt you during the upgrade to provide a new HTTPS-based Coordination Point Server.

    The installer performs the upgrade configuration and starts the processes. If the boot disk was encapsulated before the upgrade, the installer prompts you to reboot the node after it performs the upgrade configuration.

  17. Complete the preparatory steps on the nodes that you have not yet upgraded.

    Unmount all VxFS file systems not under VCS control on all the nodes.

    # umount mount_point
  18. If operating system updates are not required, skip this step and go to step 19.

    Otherwise, complete the operating system updates on the nodes that you have not yet upgraded. For instructions, see the operating system documentation.

    Repeat steps 3 to 16 for each node.

    Phase 1 of the rolling upgrade is complete on the first subcluster, and it now begins on the second subcluster.

  19. Offline all cache areas on the remaining node or nodes:
    # sfcache offline cachename
  20. The installer begins phase 1 of the upgrade on the remaining node or nodes. Type y to continue the rolling upgrade. If the installer was invoked on the upgraded (rebooted) nodes, you must invoke the installer again.

    Note:

    In case of an FSS environment, phase 1 of the rolling upgrade is performed on one node at a time.

    The installer repeats step 9 through step 16.

    For clusters with a larger number of nodes, this process may repeat several times. Service groups are taken offline and brought back online to accommodate the upgrade.

  21. When Phase 1 of the rolling upgrade completes, manually mount all the VxFS file systems that are not under VCS control. Phase 2 of the upgrade includes downtime for the VCS engine (HAD), but does not include application downtime. Type y to continue; Phase 2 of the rolling upgrade begins here.
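    For example, to manually mount a VxFS file system (the disk group, volume, and mount point names are hypothetical):
    # mount -t vxfs /dev/vx/dsk/appdg/appvol /app/data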
  22. The installer determines the remaining RPMs to upgrade. Press Enter to continue.
  23. If the cluster was configured in secure mode and was running a version earlier than 6.2 before the upgrade, the installer displays the following questions before it stops the product processes.
    • Do you want to grant read access to everyone? [y,n,q,?]

      • To grant read access to all authenticated users, type y.

      • To grant usergroup specific permissions, type n.

    • Do you want to provide any usergroups that you would like to grant read access?[y,n,q,?]

      • To specify usergroups and grant them read access, type y.

      • To grant read access only to root users, type n. The installer grants read access to the root users.

    • Enter the usergroup names separated by spaces that you would like to grant read access. If you would like to grant read access to a usergroup on a specific node, enter like 'usrgrp1@node1', and if you would like to grant read access to usergroup on any cluster node, enter like 'usrgrp1'. If some usergroups are not created yet, create the usergroups after configuration if needed. [b]

  24. The installer stops Cluster Server (VCS) processes but the applications continue to run. Type y to continue.

    The installer performs the prestop tasks, uninstalls the old RPMs, and installs the new RPMs. It then performs the post-installation tasks and the configuration for the upgrade.
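
    To confirm that the VCS engine (HAD) has restarted on each node after Phase 2, you can check GAB port membership; port h corresponds to HAD (this assumes GAB is configured, which is standard for a VCS cluster):
    # gabconfig -a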

  25. If you have a network connection to the Internet, the installer checks for updates.

    If updates are discovered, you can apply them now.

  26. The installer prompts you to indicate whether you want to read the summary file. Type y to read the installation summary file.