Veritas™ Volume Manager Administrator's Guide
- Understanding Veritas Volume Manager
- About Veritas Volume Manager
- VxVM and the operating system
- How VxVM handles storage management
- Volume layouts in VxVM
- Online relayout
- Volume resynchronization
- Dirty region logging
- Volume snapshots
- FastResync
- Hot-relocation
- Volume sets
- Provisioning new usable storage
- Administering disks
- About disk management
- Disk devices
- Discovering and configuring newly added disk devices
- Partial device discovery
- Discovering disks and dynamically adding disk arrays
- Third-party driver coexistence
- How to administer the Device Discovery Layer
- Listing all the devices including iSCSI
- Listing all the Host Bus Adapters including iSCSI
- Listing the ports configured on a Host Bus Adapter
- Listing the targets configured from a Host Bus Adapter or a port
- Listing the devices configured from a Host Bus Adapter and target
- Getting or setting the iSCSI operational parameters
- Listing all supported disk arrays
- Excluding support for a disk array library
- Re-including support for an excluded disk array library
- Listing excluded disk arrays
- Listing supported disks in the DISKS category
- Displaying details about a supported array library
- Adding unsupported disk arrays to the DISKS category
- Removing disks from the DISKS category
- Foreign devices
- Disks under VxVM control
- Changing the disk-naming scheme
- About the Array Volume Identifier (AVID) attribute
- Discovering the association between enclosure-based disk names and OS-based disk names
- About disk installation and formatting
- Displaying or changing default disk layout attributes
- Adding a disk to VxVM
- RAM disk support in VxVM
- Veritas Volume Manager co-existence with Oracle Automatic Storage Management (ASM) disks
- Rootability
- Displaying disk information
- Controlling Powerfail Timeout
- Removing disks
- Removing a disk from VxVM control
- Removing and replacing disks
- Enabling a disk
- Taking a disk offline
- Renaming a disk
- Reserving disks
- Administering Dynamic Multi-Pathing
- How DMP works
- Disabling multi-pathing and making devices invisible to VxVM
- Enabling multi-pathing and making devices visible to VxVM
- About enabling and disabling I/O for controllers and storage processors
- About displaying DMP database information
- Displaying the paths to a disk
- Setting customized names for DMP nodes
- Administering DMP using vxdmpadm
- Retrieving information about a DMP node
- Displaying consolidated information about the DMP nodes
- Displaying the members of a LUN group
- Displaying paths controlled by a DMP node, controller, enclosure, or array port
- Displaying information about controllers
- Displaying information about enclosures
- Displaying information about array ports
- Displaying extended device attributes
- Suppressing or including devices for VxVM or DMP control
- Gathering and displaying I/O statistics
- Setting the attributes of the paths to an enclosure
- Displaying the redundancy level of a device or enclosure
- Specifying the minimum number of active paths
- Displaying the I/O policy
- Specifying the I/O policy
- Disabling I/O for paths, controllers or array ports
- Enabling I/O for paths, controllers or array ports
- Renaming an enclosure
- Configuring the response to I/O failures
- Configuring the I/O throttling mechanism
- Configuring Subpaths Failover Groups (SFG)
- Configuring Low Impact Path Probing
- Displaying recovery option values
- Configuring DMP path restoration policies
- Stopping the DMP path restoration thread
- Displaying the status of the DMP path restoration thread
- Displaying information about the DMP error-handling thread
- Configuring array policy modules
- Online dynamic reconfiguration
- About online dynamic reconfiguration
- Reconfiguring a LUN online that is under DMP control
- Removing LUNs dynamically from an existing target ID
- Adding new LUNs dynamically to a new target ID
- About detecting target ID reuse if the operating system device tree is not cleaned up
- Scanning an operating system device tree after adding or removing LUNs
- Cleaning up the operating system device tree after removing LUNs
- Upgrading the array controller firmware online
- Replacing a host bus adapter
- Creating and administering disk groups
- About disk groups
- Displaying disk group information
- Creating a disk group
- Adding a disk to a disk group
- Removing a disk from a disk group
- Moving disks between disk groups
- Deporting a disk group
- Importing a disk group
- Handling of minor number conflicts
- Moving disk groups between systems
- Handling cloned disks with duplicated identifiers
- Renaming a disk group
- Handling conflicting configuration copies
- Reorganizing the contents of disk groups
- Disabling a disk group
- Destroying a disk group
- Upgrading the disk group version
- About the configuration daemon in VxVM
- Backing up and restoring disk group configuration data
- Using vxnotify to monitor configuration changes
- Working with existing ISP disk groups
- Creating and administering subdisks and plexes
- About subdisks
- Creating subdisks
- Displaying subdisk information
- Moving subdisks
- Splitting subdisks
- Joining subdisks
- Associating subdisks with plexes
- Associating log subdisks
- Dissociating subdisks from plexes
- Removing subdisks
- Changing subdisk attributes
- About plexes
- Creating plexes
- Creating a striped plex
- Displaying plex information
- Attaching and associating plexes
- Taking plexes offline
- Detaching plexes
- Reattaching plexes
- Moving plexes
- Copying volumes to plexes
- Dissociating and removing plexes
- Changing plex attributes
- Creating volumes
- About volume creation
- Types of volume layouts
- Creating a volume
- Using vxassist
- Discovering the maximum size of a volume
- Disk group alignment constraints on volumes
- Creating a volume on any disk
- Creating a volume on specific disks
- Creating a mirrored volume
- Creating a volume with a version 0 DCO volume
- Creating a volume with a version 20 DCO volume
- Creating a volume with dirty region logging enabled
- Creating a striped volume
- Mirroring across targets, controllers or enclosures
- Mirroring across media types (SSD and HDD)
- Creating a RAID-5 volume
- Creating tagged volumes
- Creating a volume using vxmake
- Initializing and starting a volume
- Accessing a volume
- Using rules and persistent attributes to make volume allocation more efficient
- Administering volumes
- About volume administration
- Displaying volume information
- Monitoring and controlling tasks
- About SF Thin Reclamation feature
- Reclamation of storage on thin reclamation arrays
- Monitoring Thin Reclamation using the vxtask command
- Using SmartMove with Thin Provisioning
- Admin operations on an unmounted VxFS thin volume
- Stopping a volume
- Starting a volume
- Resizing a volume
- Adding a mirror to a volume
- Removing a mirror
- Adding logs and maps to volumes
- Preparing a volume for DRL and instant snapshots
- Specifying storage for version 20 DCO plexes
- Using a DCO and DCO volume with a RAID-5 volume
- Determining the DCO version number
- Determining if DRL is enabled on a volume
- Determining if DRL logging is active on a volume
- Disabling and re-enabling DRL
- Removing support for DRL and instant snapshots from a volume
- Adding traditional DRL logging to a mirrored volume
- Upgrading existing volumes to use version 20 DCOs
- Setting tags on volumes
- Changing the read policy for mirrored volumes
- Removing a volume
- Moving volumes from a VM disk
- Enabling FastResync on a volume
- Performing online relayout
- Converting between layered and non-layered volumes
- Adding a RAID-5 log
- Creating and administering volume sets
- Configuring off-host processing
- Administering hot-relocation
- About hot-relocation
- How hot-relocation works
- Configuring a system for hot-relocation
- Displaying spare disk information
- Marking a disk as a hot-relocation spare
- Removing a disk from use as a hot-relocation spare
- Excluding a disk from hot-relocation use
- Making a disk available for hot-relocation use
- Configuring hot-relocation to use only spare disks
- Moving relocated subdisks
- Modifying the behavior of hot-relocation
- Administering cluster functionality (CVM)
- Overview of clustering
- Multiple host failover configurations
- About the cluster functionality of VxVM
- CVM initialization and configuration
- Dirty region logging in cluster environments
- Administering VxVM in cluster environments
- Requesting node status and discovering the master node
- Changing the CVM master manually
- Determining if a LUN is in a shareable disk group
- Listing shared disk groups
- Creating a shared disk group
- Importing disk groups as shared
- Handling cloned disks in a shared disk group
- Converting a disk group from shared to private
- Moving objects between shared disk groups
- Splitting shared disk groups
- Joining shared disk groups
- Changing the activation mode on a shared disk group
- Setting the disk detach policy on a shared disk group
- Setting the disk group failure policy on a shared disk group
- Creating volumes with exclusive open access by a node
- Setting exclusive open access to a volume by a node
- Displaying the cluster protocol version
- Displaying the supported cluster protocol version range
- Recovering volumes in shared disk groups
- Obtaining cluster performance statistics
- Administering CVM from the slave node
- Administering sites and remote mirrors
- About sites and remote mirrors
- Making an existing disk group site consistent
- Configuring a new disk group as a Remote Mirror configuration
- Fire drill - testing the configuration
- Changing the site name
- Administering the Remote Mirror configuration
- Examples of storage allocation by specifying sites
- Displaying site information
- Failure and recovery scenarios
- Performance monitoring and tuning
- Appendix A. Using Veritas Volume Manager commands
- Appendix B. Configuring Veritas Volume Manager
- Glossary
How hot-relocation works
Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again.
When a partial disk failure occurs (that is, a failure affecting only some subdisks on a disk), redundant data on the failed portion of the disk is relocated. Existing volumes on the unaffected portions of the disk remain accessible.
Hot-relocation is only performed for redundant (mirrored or RAID-5) subdisks on a failed disk. Non-redundant subdisks on a failed disk are not relocated, but the system administrator is notified of their failure.
Hot-relocation is enabled by default and takes effect without the intervention of the system administrator when a failure occurs.
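You can confirm that the hot-relocation daemon is active with an ordinary process listing; the exact process arguments and startup script location vary by platform, so treat this as a generic sketch:

```shell
# Check whether the hot-relocation daemon is running.
# (Output format varies by operating system.)
ps -ef | grep vxrelocd | grep -v grep
```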
The hot-relocation daemon, vxrelocd, detects and reacts to VxVM events that signify the following types of failures:
- Disk failure. This is normally detected as a result of an I/O failure from a VxVM object. VxVM attempts to correct the error. If the error cannot be corrected, VxVM tries to access configuration information in the private region of the disk. If it cannot access the private region, it considers the disk failed.
- Plex failure. This is normally detected as a result of an uncorrectable I/O error in the plex (which affects subdisks within the plex). For mirrored volumes, the plex is detached.
- Subdisk failure. This is normally detected as a result of an uncorrectable I/O error. The subdisk is detached.
When vxrelocd detects such a failure, it performs the following steps:
- vxrelocd informs the system administrator (and other nominated users) by electronic mail of the failure and of which VxVM objects are affected.
  See Partial disk failure mail messages.
- vxrelocd next determines whether any subdisks can be relocated. vxrelocd looks for suitable space on disks that have been reserved as hot-relocation spares (marked spare) in the disk group where the failure occurred, and relocates the subdisks to use this space.
- If no spare disks are available or additional space is needed, vxrelocd uses free space on disks in the same disk group, except those disks that have been excluded from hot-relocation use (marked nohotuse). When vxrelocd has relocated the subdisks, it reattaches each relocated subdisk to its plex.
- Finally, vxrelocd initiates appropriate recovery procedures. For example, recovery includes mirror resynchronization for mirrored volumes or data recovery for RAID-5 volumes. vxrelocd also notifies the system administrator of the hot-relocation and recovery actions that have been taken.
If relocation is not possible, vxrelocd notifies the system administrator and takes no further action.
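As described above, vxrelocd prefers disks marked spare and skips disks marked nohotuse. These flags are set with the vxedit command; the disk group and disk media names below (mydg, mydg01, mydg02) are placeholders for your own configuration:

```shell
# Mark mydg01 as a hot-relocation spare in disk group mydg
vxedit -g mydg set spare=on mydg01

# Exclude mydg02 from hot-relocation use
vxedit -g mydg set nohotuse=on mydg02

# Verify the settings (spare disks are flagged in the output)
vxdisk -g mydg list
```

See Marking a disk as a hot-relocation spare and Excluding a disk from hot-relocation use for full procedures.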
Warning:
Hot-relocation does not guarantee the same layout of data or the same performance after relocation. An administrator should check whether any configuration changes are required after hot-relocation occurs.
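One way to review the layout after a relocation is to display the volume's record hierarchy with vxprint; mydg and vol01 here are placeholder names:

```shell
# Display the plex and subdisk layout of a volume after relocation
vxprint -g mydg -ht vol01
```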
Relocation of failing subdisks is not possible in the following cases:
- The failing subdisks are on non-redundant volumes (that is, volumes of types other than mirrored or RAID-5).
- There are insufficient spare disks or insufficient free disk space in the disk group.
- The only available space is on a disk that already contains a mirror of the failing plex.
- The only available space is on a disk that already contains the RAID-5 log plex or one of its healthy subdisks. Failing subdisks in the RAID-5 plex cannot be relocated.
- A mirrored volume has a dirty region logging (DRL) log subdisk as part of its data plex. Failing subdisks belonging to that plex cannot be relocated.
- A RAID-5 volume log plex or a mirrored volume DRL log plex fails. In this case, a new log plex is created elsewhere, so there is no need to relocate the failed subdisks of the log plex.
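If the original disk is later repaired or replaced, subdisks that were hot-relocated away from it can be moved back with the vxunreloc command (see Moving relocated subdisks); mydg and mydg01 below are placeholders for the disk group and the original disk media name:

```shell
# Move subdisks that were hot-relocated away from mydg01 back to it
vxunreloc -g mydg mydg01
```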
See the vxrelocd(1M) manual page.
Figure: Example of hot-relocation for a subdisk in a RAID-5 volume shows the hot-relocation process in the case of the failure of a single subdisk of a RAID-5 volume.