Storage Foundation 7.4.2 Configuration and Upgrade Guide - Linux

Product(s): InfoScale & Storage Foundation (7.4.2)
Platform: Linux
  1. Section I. Introduction and configuration of Storage Foundation
    1. Introducing Storage Foundation
      1. About Storage Foundation
        1. About Veritas Replicator Option
      2. About Veritas InfoScale Operations Manager
      3. About Veritas Services and Operations Readiness Tools (SORT)
    2. Configuring Storage Foundation
      1. Configuring Storage Foundation using the installer
      2. Configuring SF manually
        1. Configuring Veritas Volume Manager
        2. Configuring Veritas File System
          1. Loading and unloading the file system module
      3. Configuring SFDB
  2. Section II. Upgrade of Storage Foundation
    1. Planning to upgrade Storage Foundation
      1. About the upgrade
      2. Supported upgrade paths
      3. Preparing to upgrade SF
        1. Getting ready for the upgrade
        2. Creating backups
        3. Determining if the root disk is encapsulated
        4. Pre-upgrade planning when VVR is configured
          1. Considerations for upgrading SF to 7.4 or later on systems with an ongoing or a paused replication
          2. Planning an upgrade from the previous VVR version
            1. Planning and upgrading VVR to use IPv6 as connection protocol
        5. Upgrading the array support
      4. Using Install Bundles to simultaneously install or upgrade full releases (base, maintenance, rolling patch), and individual patches
    2. Upgrading Storage Foundation
      1. Upgrading Storage Foundation from previous versions to 7.4.2
        1. Upgrading Storage Foundation using the product installer
      2. Upgrading Volume Replicator
        1. Upgrading VVR without disrupting replication
          1. Upgrading VVR on the Secondary
          2. Upgrading VVR on the Primary
      3. Upgrading SFDB
    3. Performing an automated SF upgrade using response files
      1. Upgrading SF using response files
      2. Response file variables to upgrade SF
      3. Sample response file for SF upgrade
    4. Performing post-upgrade tasks
      1. Optional configuration steps
      2. Re-joining the backup boot disk group into the current disk group
      3. Reverting to the backup boot disk group after an unsuccessful upgrade
      4. Recovering VVR if automatic upgrade fails
      5. Resetting DAS disk names to include host name in FSS environments
      6. Upgrading disk layout versions
      7. Upgrading VxVM disk group versions
      8. Updating variables
      9. Setting the default disk group
      10. Verifying the Storage Foundation upgrade
  3. Section III. Post configuration tasks
    1. Performing configuration tasks
      1. Switching on Quotas
      2. Enabling DMP support for native devices
      3. About configuring authentication for SFDB tools
        1. Configuring vxdbd for SFDB tools authentication
  4. Section IV. Configuration and Upgrade reference
    1. Appendix A. Installation scripts
      1. Installation script options
      2. About using the postcheck option
    2. Appendix B. Configuring the secure shell or the remote shell for communications
      1. About configuring secure shell or remote shell communication modes before installing products
      2. Manually configuring passwordless ssh
      3. Setting up ssh and rsh connection using the installer -comsetup command
      4. Setting up ssh and rsh connection using the pwdutil.pl utility
      5. Restarting the ssh session
      6. Enabling rsh for Linux

Upgrading Storage Foundation using the product installer

Note:

Root Disk Encapsulation (RDE) is not supported on Linux from 7.3.1 onwards.

Use this procedure to upgrade Storage Foundation (SF).

To upgrade SF from previous versions to 7.4.2

  1. Log in as superuser.
  2. Use the following command to check if any VxFS file systems or Storage Checkpoints are mounted:
    # df -k | grep vxfs
  3. Unmount all Storage Checkpoints and file systems:
    # umount /checkpoint_name
    # umount /filesystem
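    If there are many VxFS mounts, you can also unmount all of them in a single pass; this sketch assumes that no process is holding any of the mount points busy:

    # umount -a -t vxfs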
  4. Verify that all file systems have been cleanly unmounted:
    # echo "8192B.p S" | fsdb -t vxfs filesystem | grep clean
    flags 0 mod 0 clean clean_value

    A clean_value value of 0x5a indicates the file system is clean, 0x3c indicates the file system is dirty, and 0x69 indicates the file system is dusty. A dusty file system has pending extended operations.

    Perform the following steps in the order listed:

    • If a file system is not clean, enter the following commands for that file system:

      # fsck -t vxfs filesystem
      # mount -t vxfs filesystem mountpoint
      # umount mountpoint

      This should complete any extended operations that were outstanding on the file system and unmount the file system cleanly.

      There may be a pending large fileset clone removal extended operation if the umount command fails with the following error:

            file system device busy

      You know for certain that an extended operation is pending if the following message is generated on the console:

            Storage Checkpoint asynchronous operation on file_system
            file system still in progress.
    • If an extended operation is pending, you must leave the file system mounted for a longer time to allow the operation to complete. Removing a very large fileset clone can take several hours.

    • Repeat this step to verify that the unclean file system is now clean.
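    For example, to run the clean check against several VxFS devices in one pass, you can use a small loop such as the following; the device paths under /dev/vx/dsk are placeholders for your own volumes:

    # for fs in /dev/vx/dsk/datadg/datavol /dev/vx/dsk/appdg/appvol
      do
          echo "$fs:"
          echo "8192B.p S" | fsdb -t vxfs $fs | grep clean
      done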

  5. If a cache area is online, you must take the cache area offline before you upgrade the VxVM RPM. Use the following command to take the cache area offline:
    # sfcache offline cachename
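    If you are not sure whether a cache area exists or is online, you can list the configured cache areas first and check their state:

    # sfcache list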
  6. Stop activity to all VxVM volumes. For example, stop any applications such as databases that access the volumes, and unmount any file systems that have been created on the volumes.
  7. Stop all the volumes by entering the following command for each disk group:
    # vxvol -g diskgroup stopall

    To verify that no volumes remain open, use the following command:

    # vxprint -Aht -e v_open
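    If many disk groups are imported, a loop such as the following can stop the volumes in all of them; this sketch assumes the default vxdg list output, with the disk group name in the first column after the header line:

    # for dg in $(vxdg list | sed '1d' | awk '{print $1}')
      do
          vxvol -g $dg stopall
      done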
  8. Make a record of the mount points for VxFS file systems and VxVM volumes that are defined in the /etc/fstab file. You will need to recreate these entries in the /etc/fstab file on the freshly installed system.
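    For example, you can keep a copy of the current file and extract the VxFS entries for reference; the backup file name shown here is arbitrary:

    # cp /etc/fstab /etc/fstab.pre-upgrade
    # grep vxfs /etc/fstab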
  9. Perform any necessary preinstallation checks.
  10. To invoke the installer, run the installer command on the disc as shown in this example:
    # cd /cdrom/cdrom0
    # ./installer
  11. Enter G to upgrade and press Return.
  12. You are prompted to enter the system names (in the following example, "host1" and "host2") on which the software is to be installed. Enter the system name or names and then press Return.
    Enter the 64 bit <platform> system names separated 
    by spaces : [q, ?] host1 host2

    where <platform> is the platform on which the system runs, such as RHEL6.

    Depending on your existing configuration, various messages and prompts may appear. Answer the prompts appropriately.

    During the system verification phase, the installer checks whether the boot disk is encapsulated and verifies the upgrade path. If the upgrade path is not supported, you need to un-encapsulate the boot disk.

  13. The installer asks if you agree with the terms of the End User License Agreement. Press y to agree and continue.
  14. The installer discovers whether any of the systems that you are upgrading have mirrored and encapsulated boot disks. For each system that has a mirrored boot disk, you have the option to create a backup of the system's boot disk group before the upgrade proceeds. If you want to split the boot disk group to create a backup, answer y.
  15. The installer then prompts you to name the backup boot disk group. Enter the name for it or press Enter to accept the default.
  16. You are prompted to start the split operation. Press y to continue.

    Note:

    The split operation can take some time to complete.

  17. Stop the product's processes.
    Do you want to stop SF processes now? [y,n,q] (y) y

    If you select y, the installer stops the product processes and makes some configuration updates before upgrading.

  18. The installer stops, uninstalls, reinstalls, and starts specified RPMs.
  19. If necessary, reinstate any missing mount points in the /etc/fstab file on each node that you recorded in step 8.
  20. Restart all the volumes by entering the following command for each disk group:
    # vxvol -g diskgroup startall
  21. Remount all VxFS file systems and Storage Checkpoints on all nodes:
    # mount /filesystem
    # mount /checkpoint_name
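    Alternatively, if you reinstated the /etc/fstab entries in step 19, you can remount all VxFS file systems in one pass:

    # mount -a -t vxfs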
  22. You can perform the following optional configuration steps:

    • If you want to use features of Storage Foundation 7.4.2 for which you do not currently have an appropriate license installed, obtain the license and run the vxlicinst command to add it to your system.

    • To upgrade VxFS Disk Layout versions and VxVM Disk Group versions, follow the upgrade instructions.
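    For reference, these tasks typically use commands of the following form, where the license key, disk layout version, mount point, and disk group name are placeholders for your own values:

      # vxlicinst -k <license_key>
      # vxupgrade -n <disk_layout_version> /mount_point
      # vxdg upgrade <diskgroup>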

  23. Only perform this step if you have split the mirrored root disk to back it up. After a successful reboot, verify the upgrade and re-join the backup disk group. If the upgrade fails, revert to the backup disk group.

    See Re-joining the backup boot disk group into the current disk group.

    See Reverting to the backup boot disk group after an unsuccessful upgrade.