Storage Foundation 7.4.2 Configuration and Upgrade Guide - Linux

Product(s): InfoScale & Storage Foundation (7.4.2)
Platform: Linux
  1. Section I. Introduction and configuration of Storage Foundation
    1. Introducing Storage Foundation
      1. About Storage Foundation
        1. About Veritas Replicator Option
      2. About Veritas InfoScale Operations Manager
      3. About Veritas Services and Operations Readiness Tools (SORT)
    2. Configuring Storage Foundation
      1. Configuring Storage Foundation using the installer
      2. Configuring SF manually
        1. Configuring Veritas Volume Manager
        2. Configuring Veritas File System
          1. Loading and unloading the file system module
      3. Configuring SFDB
  2. Section II. Upgrade of Storage Foundation
    1. Planning to upgrade Storage Foundation
      1. About the upgrade
      2. Supported upgrade paths
      3. Preparing to upgrade SF
        1. Getting ready for the upgrade
        2. Creating backups
        3. Determining if the root disk is encapsulated
        4. Pre-upgrade planning when VVR is configured
          1. Considerations for upgrading SF to 7.4 or later on systems with an ongoing or a paused replication
          2. Planning an upgrade from the previous VVR version
            1. Planning and upgrading VVR to use IPv6 as connection protocol
        5. Upgrading the array support
      4. Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
    2. Upgrading Storage Foundation
      1. Upgrading Storage Foundation from previous versions to 7.4.2
        1. Upgrading Storage Foundation using the product installer
      2. Upgrading Volume Replicator
        1. Upgrading VVR without disrupting replication
          1. Upgrading VVR on the Secondary
          2. Upgrading VVR on the Primary
      3. Upgrading SFDB
    3. Performing an automated SF upgrade using response files
      1. Upgrading SF using response files
      2. Response file variables to upgrade SF
      3. Sample response file for SF upgrade
    4. Performing post-upgrade tasks
      1. Optional configuration steps
      2. Re-joining the backup boot disk group into the current disk group
      3. Reverting to the backup boot disk group after an unsuccessful upgrade
      4. Recovering VVR if automatic upgrade fails
      5. Resetting DAS disk names to include host name in FSS environments
      6. Upgrading disk layout versions
      7. Upgrading VxVM disk group versions
      8. Updating variables
      9. Setting the default disk group
      10. Verifying the Storage Foundation upgrade
  3. Section III. Post configuration tasks
    1. Performing configuration tasks
      1. Switching on Quotas
      2. Enabling DMP support for native devices
      3. About configuring authentication for SFDB tools
        1. Configuring vxdbd for SFDB tools authentication
  4. Section IV. Configuration and Upgrade reference
    1. Appendix A. Installation scripts
      1. Installation script options
      2. About using the postcheck option
    2. Appendix B. Configuring the secure shell or the remote shell for communications
      1. About configuring secure shell or remote shell communication modes before installing products
      2. Manually configuring passwordless ssh
      3. Setting up ssh and rsh connection using the installer -comsetup command
      4. Setting up ssh and rsh connection using the pwdutil.pl utility
      5. Restarting the ssh session
      6. Enabling rsh for Linux

About using the postcheck option

You can use the installer's postcheck option to identify installation-related problems and to aid in troubleshooting.

Note:

This command option requires downtime for the node.
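
For example, you run the post-installation checks by invoking the installer with the -postcheck option (see "Installation script options"). A minimal invocation, where sys1 and sys2 are placeholder system names:

  # ./installer -postcheck sys1 sys2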

The postcheck option can help you troubleshoot the following VCS-related issues; a few manual verification commands are sketched after the list:

  • The heartbeat link does not exist.

  • The heartbeat link cannot communicate.

  • The heartbeat link is a part of a bonded or aggregated NIC.

  • A duplicate cluster ID exists (if LLT is not running at the time of the check).

  • The VRTSllt package version is not consistent on the nodes.

  • The llt-linkinstall value is incorrect.

  • The /etc/llthosts and /etc/llttab configuration is incorrect.

  • The /etc/gabtab file is incorrect.

  • The GAB linkinstall value is incorrect.

  • The VRTSgab package version is not consistent on the nodes.

  • The main.cf file or the types.cf file is invalid.

  • The /etc/VRTSvcs/conf/sysname file is not consistent with the hostname.

  • The cluster UUID does not exist.

  • The uuidconfig.pl file is missing.

  • The VRTSvcs package version is not consistent on the nodes.

  • The /etc/vxfenmode file is missing or incorrect.

  • The /etc/vxfendg file is invalid.

  • The vxfen link-install value is incorrect.

  • The VRTSvxfen package version is not consistent on the nodes.
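
If postcheck flags any of these LLT or GAB issues, you can verify the state manually with the stock VCS utilities. A minimal sketch, run as root on each node:

Check that LLT is running and that all links are up:

  # lltstat -n

Check GAB port membership:

  # gabconfig -a

Compare the package versions across the nodes:

  # rpm -q VRTSllt VRTSgab VRTSvcs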

The postcheck option can help you troubleshoot the following SFHA or SFCFSHA issues; an example recovery for the most common case follows the list:

  • Volume Manager cannot start because the /etc/vx/reconfig.d/state.d/install-db file has not been removed.

  • Volume Manager cannot start because the volboot file is not loaded.

  • Volume Manager cannot start because no license exists.

  • Cluster Volume Manager cannot start because the CVM configuration is incorrect in the main.cf file. For example, the AutoStartList value is missing on the nodes.

  • Cluster Volume Manager cannot come online because the node ID in the /etc/llthosts file is not consistent.

  • Cluster Volume Manager cannot come online because vxfen is not started.

  • Cluster Volume Manager cannot start because GAB is not configured.

  • Cluster Volume Manager cannot come online because of a CVM protocol mismatch.

  • Cluster Volume Manager group name has changed from "cvm", which causes CVM to go offline.
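
For instance, when Volume Manager cannot start because the install-db file is still present, one common remedy is to remove the file and restart the configuration daemon. A hedged sketch, assuming no other startup blockers exist (run as root):

Remove the leftover installation state file:

  # rm -f /etc/vx/reconfig.d/state.d/install-db

Restart the VxVM configuration daemon and confirm that it is enabled:

  # vxconfigd -k -m enable
  # vxdctl mode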

You can use the installer's postcheck option to perform the following checks; manual command-line equivalents are sketched after each list:

General checks for all products:

  • All the required RPMs are installed.

  • The versions of the required RPMs are correct.

  • There are no verification issues for the required RPMs.
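
You can approximate these checks manually with the standard rpm tooling. For example:

List the installed Veritas RPMs and their versions:

  # rpm -qa | grep -i VRTS

Report verification issues for a specific RPM, such as VRTSvxvm:

  # rpm -V VRTSvxvm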

Checks for Volume Manager (VM):

  • Lists the daemons which are not running (vxattachd, vxconfigbackupd, vxesd, vxrelocd ...).

  • Lists the disks which are not in 'online' or 'online shared' state (vxdisk list).

  • Lists the diskgroups which are not in 'enabled' state (vxdg list).

  • Lists the volumes which are not in 'enabled' state (vxprint -g <dgname>).

  • Lists the volumes which are in 'Unstartable' state (vxinfo -g <dgname>).

  • Lists the volumes which are not configured in /etc/fstab.
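
The same information is available directly from the VxVM utilities named above. For example, where dgname is a placeholder disk group name:

Review disk and disk group states:

  # vxdisk list
  # vxdg list

Review volume states and identify Unstartable volumes in a disk group:

  # vxprint -g dgname
  # vxinfo -g dgname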

Checks for File System (FS):

  • Lists the VxFS kernel modules which are not loaded (vxfs, fdd, vxportal).

  • Whether all VxFS file systems present in the /etc/fstab file are mounted.

  • Whether all VxFS file systems present in /etc/fstab use disk layout version 12 or higher.

  • Whether all mounted VxFS file systems use disk layout version 12 or higher.
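
To perform these checks by hand, compare the /etc/fstab entries against the mounted file systems, and query the disk layout version of each mount point. For example, where /mnt1 is a placeholder mount point:

List the mounted VxFS file systems and the VxFS entries in /etc/fstab:

  # mount -t vxfs
  # grep vxfs /etc/fstab

Report the disk layout version of a mounted VxFS file system:

  # /opt/VRTS/bin/vxupgrade /mnt1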

Checks for Cluster File System:

  • Whether FS and ODM are running at the latest protocol level.

  • Whether all mounted CFS file systems are managed by VCS.

  • Whether the cvm service group is online.
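
You can confirm the last two checks from the VCS side. For example:

Verify that the cvm service group is online:

  # hagrp -state cvm

List the CFS mount resources that VCS manages:

  # hares -list Type=CFSMount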