Storage Foundation 7.4.1 Administrator's Guide - Linux
Migrating to thin provisioning
The SmartMove™ feature enables migration from traditional LUNs to thinly provisioned LUNs, removing unused space in the process.
To migrate to thin provisioning
- Check if the SmartMove feature is enabled.
# vxdefault list
KEYWORD              CURRENT-VALUE   DEFAULT-VALUE
usefssmartmove       all             all
...
If the output shows that the current value is none, configure SmartMove for all disks or thin disks.
See Configuring SmartMove.
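For example, the following command enables SmartMove for all disks; use thinonly instead of all to restrict SmartMove to thin disks. This is a sketch of typical vxdefault usage; see Configuring SmartMove for the full procedure.
# vxdefault set usefssmartmove all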
- Add the new, thin LUNs to the existing disk group. Enter the following commands:
# vxdisksetup -i da_name
# vxdg -g datadg adddisk da_name
where da_name is the disk access name in VxVM.
- To identify LUNs with the thinonly or thinrclm attributes, enter:
# vxdisk -o thin list
- Add the new, thin LUNs as a new plex to the volume. On a thin LUN, when you create a mirrored volume or add a mirror to an existing volume, VxVM creates a Data Change Object (DCO) by default. The DCO helps prevent the thin LUN from becoming thick by eliminating the need for a full resynchronization of the mirror.
NOTE: The VxFS file system must be mounted to get the benefits of the SmartMove feature.
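As a quick check that the file system is mounted, you can list the mounted VxFS file systems with a standard Linux command; this is only an illustrative check, not a VxVM step:
# mount -t vxfs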
The following methods are available to add the LUNs:
Use the default settings for the vxassist command:
# vxassist -g datadg mirror datavol da_name
Specify the vxassist command options for faster completion. The -b option copies blocks in the background. The following command improves I/O throughput:
# vxassist -b -oiosize=1m -t thinmig -g datadg mirror \
datavol da_name
To view the status of the command, use the vxtask command:
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
211           ATCOPY/R    10.64%  0/20971520/2232320  PLXATT vol1 vol1-02 xivdg smartmove
212           ATCOPY/R    09.88%  0/20971520/2072576  PLXATT vol1 vol1-03 xivdg smartmove
219           ATCOPY/R    00.27%  0/20971520/57344    PLXATT vol1 vol1-04 xivdg smartmove
# vxtask monitor 211
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
211           ATCOPY/R    50.00%  0/20971520/10485760  PLXATT vol1 vol1-02 xivdg smartmove
211           ATCOPY/R    50.02%  0/20971520/10489856  PLXATT vol1 vol1-02 xivdg smartmove
211           ATCOPY/R    50.04%  0/20971520/10493952  PLXATT vol1 vol1-02 xivdg smartmove
211           ATCOPY/R    50.06%  0/20971520/10498048  PLXATT vol1 vol1-02 xivdg smartmove
211           ATCOPY/R    50.08%  0/20971520/10502144  PLXATT vol1 vol1-02 xivdg smartmove
211           ATCOPY/R    50.10%  0/20971520/10506240  PLXATT vol1 vol1-02 xivdg smartmove
Specify the vxassist command options to reduce the effect on system performance. The following command takes longer to complete:
# vxassist -oslow -g datadg mirror datavol da_name
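After the mirror attach completes, you can optionally confirm that a DCO was attached to the volume by listing the disk group configuration. The DCO typically appears as a dc record with a default name such as datavol_dco, together with its DCO volume; exact record names vary by configuration.
# vxprint -g datadg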
- Optionally, test the performance of the new LUNs before removing the old LUNs.
To test the performance, use the following steps:
Determine which plex corresponds to the thin LUNs:
# vxprint -g datadg
TY NAME              ASSOC          KSTATE   LENGTH    PLOFFS  STATE     TUTIL0  PUTIL0
dg datadg            datadg         -        -         -       -         -       -
dm THINARRAY0_02     THINARRAY0_02  -        83886080  -       -         -       -
dm STDARRAY1_01      STDARRAY1_01   -        41943040  -       -OHOTUSE  -       -
v  datavol           fsgen          ENABLED  41943040  -       ACTIVE    -       -
pl datavol-01        datavol        ENABLED  41943040  -       ACTIVE    -       -
sd STDARRAY1_01-01   datavol-01     ENABLED  41943040  0       -         -       -
pl datavol-02        datavol        ENABLED  41943040  -       ACTIVE    -       -
sd THINARRAY0_02-01  datavol-02     ENABLED  41943040  0       -         -       -
The example output indicates that the thin LUN corresponds to plex datavol-02.
Direct all reads to come from those LUNs:
# vxvol -g datadg rdpol prefer datavol datavol-02
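When the performance testing is complete, you can switch the volume back from the preferred-plex policy, for example to round-robin reads across all plexes; choose the policy that is appropriate for your configuration:
# vxvol -g datadg rdpol round datavol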
- Remove the original non-thin LUNs.
Note:
The ! character is a special character in some shells. This example shows how to escape it in a bash shell.
# vxassist -g datadg remove mirror datavol \!STDARRAY1_01
# vxdg -g datadg rmdisk STDARRAY1_01
# vxdisk rm STDARRAY1_01
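To verify that only the thin plex remains and that the original disk has been removed from the disk group, you can re-run the listing commands used earlier in this procedure:
# vxprint -g datadg
# vxdisk -o thin list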
- Grow the file system and volume to use all of the larger thin LUN:
# vxresize -g datadg -x datavol 40g da_name
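To confirm the new volume and file system size, you can display the volume record and check the mounted file system; /mountpoint below is a hypothetical mount point:
# vxprint -g datadg datavol
# df -h /mountpoint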