InfoScale™ 9.0 Storage Foundation Administrator's Guide - Windows
- Overview
- Setup and configuration
- Setup and configuration overview
- Function overview
- About the client console for Storage Foundation
- Recommendations for caching-enabled disks
- Review the Veritas Enterprise Administrator GUI
- Configure basic disks (Optional)
- About creating dynamic disk groups
- About creating dynamic volumes
- Set desired preferences
- Protecting your SFW configuration with vxcbr
- Using the GUI to manage your storage
- Working with disks, partitions, and volumes
- Overview
- Adding storage
- Disk tasks
- Remove a disk from a dynamic disk group
- Remove a disk from the computer
- Offline a disk
- Update disk information by using rescan
- Set disk usage
- Evacuate disk
- Replace disk
- Changing the internal name of a disk
- Renaming an enclosure
- Work with removable media
- Working with disks that support thin provisioning
- View disk properties
- Veritas Disk ID (VDID)
- General Partition/Volume tasks
- Delete a volume
- Delete a partition or logical drive
- Shredding a volume
- Refresh drive letter, file system, and partition or volume information
- Add, change, or remove a drive letter or path
- Renaming a mirror (plex)
- Changing the internal name of a volume
- Mount a volume at an empty folder (Drive path)
- View all drive paths
- Format a partition or volume with the file system command
- Cancel format
- Change file system options on a partition or volume
- Set a volume to read only
- Check partition or volume properties
- Expand a dynamic volume
- Expand a partition
- Safeguarding the expand volume operation in SFW against limitations of NTFS
- Safeguarding the expand volume operation in SFW against limitations of ReFS
- Shrink a dynamic volume
- Dynamic LUN expansion
- Basic disk and volume tasks
- Automatic discovery of SSD devices and manual classification as SSD
- Disk media types
- Supported Solid-State Devices
- Icon for SSD
- Enclosure and VDID for automatically discovered On-Host Fusion-IO disks
- Enclosure and VDID for automatically discovered On-Host Intel disks
- Enclosure and VDID for automatically discovered Violin disks
- Classifying disks as SSD
- Limitations for classifying SSD devices
- Volume Manager space allocation is SSD aware
- Dealing with disk groups
- Disk groups overview
- Delete a dynamic disk group
- Upgrading the dynamic disk group version
- Converting a Microsoft Disk Management Disk Group
- Importing a dynamic disk group to a cluster disk group
- Rename a dynamic disk group
- Detaching and attaching dynamic disks
- Importing and deporting dynamic disk groups
- Importing a cloned disk group
- Partitioned shared storage with private dynamic disk group protection
- Dynamic disk group properties
- Troubleshooting problems with dynamic disk groups
- Fast failover in clustered environments
- iSCSI SAN support
- Settings for monitoring objects
- Overview
- Event monitoring and notification
- Event notification
- Disk monitoring
- Capacity monitoring
- Configuring Automatic volume growth
- SMTP configuration for email notification
- Standard features for adding fault tolerance
- Performance tuning
- FlashSnap
- FlashSnap overview
- FlashSnap components
- FastResync
- Snapshot commands
- Dynamic Disk Group Split and Join
- About Dynamic Disk Group Split and Join
- Dynamic disk group split
- Recovery for the split command
- Dynamic disk group join
- Using Dynamic Disk Group Split and Join with a cluster on shared storage
- Limitations when using dynamic disk group split and join with Volume Replicator
- Dynamic Disk Group Split and Join troubleshooting tips
- CLI FlashSnap commands
- Fast File Resync
- Volume Shadow Copy Service (VSS)
- Using the VSS snapshot wizards with Enterprise Vault
- Using the VSS snapshot wizards with Microsoft SQL
- Copy on Write (COW)
- Using the VSS COW snapshot wizards with Microsoft SQL
- Configuring data caching with SmartIO
- About SmartIO
- Typical deployment scenarios
- How SmartIO works
- SmartIO benefits
- SmartIO limitations
- About cache area
- About SmartIO caching support
- Configuring SmartIO
- Frequently asked questions about SmartIO
- How to configure a volume to use a non-default cache area?
- What is write-through I/O caching?
- Does SmartIO with SFW support write caching?
- Are there any logs that I can refer to, if caching fails for a particular volume?
- I have deleted a cache area, but the disk is still present in the Cachepool. How can I remove it from the Cachepool?
- Is the VxVM cached data persistent?
- Is an application's performance affected if the cache device becomes inaccessible while caching is enabled?
- Are there any tools available to measure SmartIO performance?
- Will there be a performance drop after vMotion?
- Will the cache area recreation fail, if the SmartDisk assigned has insufficient space?
- A cache area recreation is in process in a VMware environment with vMotion, does it affect the sfcache operations?
- Does SmartIO continue to use the previous cache area if the VM is moved back to the previous host?
- How does SmartIO behave if the vxsvc service fails during vMotion?
- Dynamic Multi-Pathing
- Configuring Cluster Volume Manager (CVM)
- Overview
- Configuring a CVM cluster
- Administering CVM
- Configuring CVM links for multi-subnet cluster networks
- Access modes for cluster-shared volumes
- Storage disconnectivity and CVM disk detach policy
- Unconfiguring a CVM cluster
- Command shipping
- About I/O Fencing
- Administering site-aware allocation for campus clusters
- SFW for Hyper-V virtual machines
- Introduction to Storage Foundation solutions for Hyper-V environments
- Live migration support for SFW dynamic disk group
- About implementing Hyper-V virtual machine live migration on SFW storage
- Tasks for deploying live migration support for Hyper-V virtual machines
- Installing Windows Server
- Preparing the host machines
- Installing the SFW option for Microsoft failover cluster option
- Using the SFW Configuration Wizard for Microsoft Failover Cluster for Hyper-V live migration support
- Configuring the SFW storage
- Creating a virtual machine service group
- Setting the dependency of the virtual machine on the VMDg resource
- Administering storage migration for SFW and Hyper-V virtual machine volumes
- About storage migration
- About performance tunables for storage migration
- Setting performance tunables for storage migration
- About performing online storage migration
- Storage migration limitations
- About changing the layout while performing volume migration
- Migrating volumes belonging to SFW dynamic disk groups
- Migrating volumes belonging to Hyper-V virtual machines
- Migrating data from SFW dynamic disks of one enclosure to another
- Converting your existing Hyper-V configuration to live migration supported configuration
- Optional Storage Foundation features for Hyper-V environments
- Microsoft Failover Clustering support
- Configuring a quorum in a Microsoft Failover Cluster
- Implementing disaster recovery with Volume Replicator
- Volume encryption
- Secure file system (SecureFS) for protection against ransomware
- Troubleshooting and recovery
- Overview
- Using disk and volume status information
- SFW error symbols
- Resolving common problem situations
- Bring an offline dynamic disk back to an imported state
- Bring a basic disk back to an online state
- Remove a disk from the computer
- Bring a foreign disk back to an online state
- Bring a basic volume back to a healthy state
- Bring a dynamic volume back to a healthy state
- Repair a volume with degraded data after moving disks between computers
- Deal with a provider error on startup
- Commands or procedures used in troubleshooting and recovery
- Refresh command
- Rescan command
- Replace disk command
- Merge foreign disk command
- Reactivate disk command
- Reactivate volume command
- Repair volume command for dynamic RAID-5 volumes
- Repair volume command for dynamic mirrored volumes
- Starting and stopping the Storage Foundation Service
- Accessing the CLI history
- Additional troubleshooting issues
- Disk issues
- Volume issues
- After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume
- Cannot create a RAID-5 volume
- Cannot create a mirror
- Cannot extend a volume
- When creating a spanned volume over multiple disks within a disk group, you cannot customize the size of subdisks on each disk
- Disk group issues
- Sometimes, creating dynamic disk group operation fails even if disk is connected to a shared bus
- Unknown group appears after upgrading a basic disk to dynamic and immediately deporting its dynamic disk group
- Cannot use SFW disk groups in disk management after uninstalling InfoScale Storage management software
- After uninstalling and reinstalling InfoScale Storage management software, the private dynamic disk group protection is removed
- Cannot import a cluster dynamic disk group or a secondary disk group with private dynamic disk group protection when SCSI reservations have not been released
- Connection issues
- Issues related to boot or restart
- During restart, a message may appear about a "Corrupt drive" and suggest that you run autocheck
- Error that the boot device is inaccessible, bugcheck 7B
- Error message "vxboot- failed to auto-import disk group repltest_dg. all volumes of the disk group are not available."
- Error message "Bugcheck 7B, Inaccessible Boot Device"
- When Attempting to Boot from a Stale or Damaged Boot Plex
- Cluster issues
- Dynamic Multi-Pathing issues
- vxsnap issues
- Other issues
- Live migration fails if VM VHD is hosted on an SFW volume mounted as a folder mount
- Disk group deletion fails if ReFS volume is marked as read-only
- ReFS volume deletion from VEA GUI fails if Symantec Endpoint Protection (SEP) is installed
- An option is grayed out
- Disk view on a mirrored volume does not display the DCO volume
- CVM issues
- After a storage disconnect, unable to bring volume resources online on the CVM cluster nodes
- Error may occur while unconfiguring a node from CVM cluster
- Shutdown of all the nodes except one causes CVM to hang
- Sometimes, CSDG Deport causes Master node to hang due to IRP getting stuck in QLogic driver
- Unknown disk groups seen on nodes after splitting a cluster-shared disk group into cluster disk groups from Slave node
- In some cases, missing disks are seen on target Secondary dynamic disk groups after splitting a cluster-shared disk group from Slave node
- Cannot stop VxSVC if SFW resources are online on the node
- Cluster-shared volume fails to come online on Slave if a stale CSDG of the same name is present on it
- CVM does not start if all cluster nodes are shut down and then any of the nodes are not restarted
- Incorrect errors shown while creating a CSDG if Volume Manager Shared Volume is not registered
- After splitting or joining disk group having mirrored volume with DRL, VEA GUI shows incorrect volume file system if volumes move to another disk group
- Enclosure-level storage migration fails, but adds disks if a cluster-shared volume is offline
- Volume Manager Shared Volume resource fails to come online or cannot be deleted from Failover Cluster Manager
- Sometimes, source cluster-shared volumes are missing after joining two cluster-shared disk groups
- If private CVM links are removed, then nodes may remain out of cluster after network reconnect
- Format dialog box appears after storage disconnect
- Volume Manager Shared Volume resources fail to come online on failover nodes if VxSVC is stopped before stopping clussvc
- One or more nodes have invalid configuration or are not running or reachable
- After node crash or network disconnect, volume resources failover to other node but the drive letters are left behind mounted on the failing node even after it joins cluster successfully
- Shutdown of Master node in a CVM cluster makes the Slave nodes to hang in "Joining" state while joining to new Master
- CVM stops if Microsoft Failover Clustering and CVM cluster networks are not in sync because of multiple, independent network failures or disconnect
- Restarting CVM
- Administering CVM using the CLI
- Tuning the VDS software provider logging
- Appendix A. Command line interface
- Overview of the command line interface
- vxclustadm
- vxvol
- vxdg
- vxclus
- vxdisk
- vxassist
- vxassist make
- vxassist growby
- vxassist querymax
- vxassist shrinkby
- vxassist shrinkabort
- vxassist mirror
- vxassist break
- vxassist remove
- vxassist delete
- vxassist shred
- vxassist addlog
- vxassist online (read/write)
- vxassist offline
- vxassist prepare
- vxassist snapshot
- vxassist snapback
- vxassist snapclear
- vxassist snapabort
- vxassist rescan
- vxassist refresh
- vxassist resetbus
- vxassist version
- vxassist (Windows-specific)
- vxevac
- vxsd
- vxstat
- vxtask
- vxedit
- vxunreloc
- vxdmpadm
- vxdmpadm dsminfo
- vxdmpadm arrayinfo
- vxdmpadm deviceinfo
- vxdmpadm pathinfo
- vxdmpadm arrayperf
- vxdmpadm deviceperf
- vxdmpadm pathperf
- vxdmpadm allperf
- vxdmpadm iostat
- vxdmpadm cleardeviceperf
- vxdmpadm cleararrayperf
- vxdmpadm clearallperf
- vxdmpadm setdsmscsi3
- vxdmpadm setarrayscsi3
- vxdmpadm setattr dsm
- vxdmpadm setattr array
- vxdmpadm setattr device
- vxdmpadm setattr path
- vxdmpadm set isislog
- vxdmpadm rescan
- vxdmpadm disk list
- vxdmpadm getdsmattrib
- vxdmpadm getmpioparam
- vxdmpadm setmpioparam
- vxcbr
- vxsnap
- vxfsync
- vxscrub
- vxverify
- vxprint
- vxschadm
- sfcache
- Tuning SFW
- Appendix B. VDID details for arrays
- Appendix C. InfoScale event logging
Prepare
Prepare creates a snapshot mirror or plex, which is attached to and synchronized with a volume. Alternatively, if you apply the command to a volume that already has one or more normal mirrors, you can designate an existing mirror to be used for the snapshot mirror. The advantage of selecting an existing mirror is that it saves time, since it is not necessary to resynchronize the mirror to the volume.
Note:
The Prepare command replaces the Snap Start command in the VEA GUI.
The mirror synchronization process can take a while, but it does not interfere with use of the volume. If the Prepare operation fails, a snapshot mirror that was created from scratch is deleted and its space is released. If you selected a normal mirror to be used as the snapshot mirror, that mirror reverts to its normal state when the Prepare operation fails.
When the Prepare operation completes, the status of the snapshot mirror displays as Snap Ready on the Mirrors tab in the right pane of the VEA GUI. The snapshot mirror can then be associated with a snapshot volume by using the Snap Shot command. Once the snapshot mirror is created, it continues to be updated until it is detached.
Note:
Dynamic disks belonging to a Microsoft Disk Management Disk Group do not support the Prepare or Snap Start commands.
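The same operation is also available from the command line through vxassist prepare, which is covered in the CLI reference in Appendix A. The session below is only an illustrative sketch: the volume name E:, the disk name Harddisk2, and the plex name Volume1-01 are hypothetical, and the exact attribute syntax may vary by release, so verify it against the vxassist reference.

```shell
# Prepare a snapshot mirror for the volume mounted at E:, placing it on a
# specific disk (hypothetical names; check "vxassist prepare" in Appendix A).
vxassist prepare E: Harddisk2

# Alternatively, designate an existing normal mirror (plex) as the snapshot
# mirror, which avoids a full resynchronization:
vxassist prepare E: plex=Volume1-01
```

As in the GUI, preparing from an existing mirror is faster because the plex is already synchronized with the volume.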
To create a snapshot mirror
- Right-click on the volume that you want to take a snapshot of.
A context menu is displayed.
- Select Snap > Prepare.
The Prepare volume for FlashSnap wizard welcome screen appears.
Click Next to continue.
- The screen that appears next depends on whether or not the volume is already mirrored.
The various screens are as follows:
Mirrored volume: If you have a mirrored volume, a screen appears to let you select an existing mirror to be used for the snapshot mirror.
If you have a mirrored volume and there is also a disk available on your system to create an additional mirror, the screen lets you choose either to use an existing mirror for the snapshot or to have a new mirror created.
If you have a mirrored volume and there is no disk available for creating a new snapshot mirror, the screen lets you select from existing mirrors in the volume.
If you select an existing mirror, click Next to continue to the summary screen and click Finish to complete the Prepare command.
If you do not select an existing mirror, click Next to continue and follow the instructions for an unmirrored volume.
Unmirrored volume: If the volume is unmirrored, or you have not selected an existing mirror to use for the snapshot mirror, select the disk to be used for the snapshot mirror in the disk selection window.
By default, the program automatically selects the disks on which the mirror is created.
Alternatively, you can specify the disks to be used for the snapshot mirror by clicking the Manually select disks radio button. If you select the manual setting, use the Add or Add All option to move the selected disks to the right pane of the window. The Remove and Remove All options move selected disks back to the left pane.
You may also check Disable Track Alignment to disable track alignment on the snapshot mirror volume.
Click Next to continue to specify attributes.
Specify attributes
On this screen select one of the following volume layout types:
Concatenated
Striped
If you create a striped volume, the Columns and Stripe unit size boxes must have entries; defaults are provided.
For a concatenated or striped volume, you can also specify that the mirror be created across disks according to one of the following characteristics:
Port
Target
Enclosure
Channel
The Prepare operation fails if the resources needed to satisfy the selected mirror-across attributes are not available.
After the Prepare command completes, a new snapshot mirror is attached to the volume. See the sample screen below. In that screen, the volume Flash has a snapshot mirror attached to it.
The new mirror is added to the Mirrors tab for the volume. In the sample screen, the mirror is identified as a snapshot mirror and has the Snapshot icon. After the snapshot mirror is synchronized with the volume, its status becomes Snap Ready.
It is important to make sure that the snapshot mirror (or snap plex) has completed its resynchronization and displays the status of Snap Ready before continuing with the Snap Shot command or doing any other operations on the snapshot mirror. Also, if you shut down the server or deport the disk group containing the volume being prepared for a snapshot before resynchronization completes, the snapshot mirror is deleted when the disk group with the original volume comes online again.
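Once the snapshot mirror reaches Snap Ready, the snapshot itself can be taken from the GUI with the Snap Shot command or from the command line with vxassist snapshot, which is listed in the CLI reference. The commands below are a hedged sketch: the volume name E: and the snapshot volume name MySnapVolume are hypothetical, and the exact arguments should be checked against the vxassist snapshot and vxassist snapback entries in Appendix A.

```shell
# Detach the Snap Ready mirror as an independent snapshot volume
# (hypothetical names; verify syntax in the vxassist snapshot reference).
vxassist snapshot E: MySnapVolume

# Later, reattach the snapshot volume so it resynchronizes with the original:
vxassist snapback MySnapVolume
```

Because FastResync tracks changed regions through the DCO log, the snapback resynchronization copies only the regions that changed while the snapshot was detached.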
The DCO (Disk Change Object) volume is created to track the regions on a volume that are changed while a mirror is detached.
The DCO volume is not included in the tree view of the VEA GUI. To view the DCO volume, you must use the Disk View. To access the Disk View, click the Disk View tab in the right pane or select Disk View from a disk's or volume's context menu.
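Besides the Disk View, the vxprint command listed in the CLI reference can enumerate the records in a disk group, which may reveal the DCO log that Prepare creates. This is an illustrative sketch: MyDg is a hypothetical disk group name, and record naming and the available options are release-dependent, so consult the vxprint reference in Appendix A.

```shell
# List the volume, plex, and log records in a disk group; the DCO log
# created by Prepare appears among them (hypothetical disk group name).
vxprint -g MyDg
```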
The sample Disk View screen that follows shows the DCO log that the Prepare command creates.
Note:
The Break Mirror and Remove Mirror commands do not work with the snapshot mirror.