Veritas InfoScale™ 7.4 Solutions Guide - Solaris
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Section III. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Quick I/O
- About Quick I/O
- Tasks for setting up Quick I/O in a database environment
- Creating DB2 database containers as Quick I/O files using qiomkfile
- Creating Sybase files as Quick I/O files using qiomkfile
- Preallocating space for Quick I/O files using the setext command
- Accessing regular VxFS files as Quick I/O files
- Extending a Quick I/O file
- Disabling Quick I/O
- Improving database performance with Veritas Cached Quick I/O
- Improving database performance with Veritas Concurrent I/O
- Section IV. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Storage Foundation and High Availability Solutions backup and recovery methods
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section V. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- About SmartTier
- About VxFS multi-volume file systems
- About VxVM volume sets
- About volume tags
- SmartTier use cases for Sybase
- Setting up a filesystem for storage tiering with SmartTier
- Relocating old archive logs to tier two storage using SmartTier
- Relocating inactive tablespaces or segments to tier two storage
- Relocating active indexes to premium storage
- Relocating all indexes to premium storage
- Optimizing storage with Flexible Storage Sharing
- Section VI. Migrating data
- Understanding data migration
- Offline migration from Solaris Volume Manager to Veritas Volume Manager
- About migration from Solaris Volume Manager
- How Solaris Volume Manager objects are mapped to VxVM objects
- Overview of the conversion process
- Planning the conversion
- Preparing a Solaris Volume Manager configuration for conversion
- Setting up a Solaris Volume Manager configuration for conversion
- Converting from the Solaris Volume Manager software to VxVM
- Post conversion tasks
- Converting a root disk
- Online migration of a native file system to the VxFS file system
- About online migration of a native file system to the VxFS file system
- Administrative interface for online migration of a native file system to the VxFS file system
- Migrating a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v3
- Backing out an online migration of a native file system to the VxFS file system
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Changing the alignment of a disk group during disk encapsulation
- Changing the alignment of a non-CDS disk group
- Splitting a CDS disk group
- Moving objects between CDS disk groups and non-CDS disk groups
- Moving objects between CDS disk groups
- Joining disk groups
- Changing the default CDS setting for disk group creation
- Creating non-CDS disk groups
- Upgrading an older version non-CDS disk group
- Replacing a disk in a CDS disk group
- Setting the maximum number of devices for CDS disk groups
- Changing the DRL map and log size
- Creating a volume with a DRL log
- Setting the DRL map length
- Displaying information
- Determining the setting of the CDS attribute on a disk group
- Displaying the maximum number of devices in a CDS disk group
- Displaying map length and map alignment of traditional DRL logs
- Displaying the disk group alignment
- Displaying the log map length and alignment
- Displaying offset and length information in units of 512 bytes
- Default activation mode of shared disk groups
- Additional considerations when importing CDS disk groups
- File system considerations
- Considerations about data in the file system
- File system migration
- Specifying the migration target
- Using the fscdsadm command
- Checking that the metadata limits are not exceeded
- Maintaining the list of target operating systems
- Enforcing the established CDS limits on a file system
- Ignoring the established CDS limits on a file system
- Validating the operating system targets for a file system
- Displaying the CDS status of a file system
- Migrating a file system one time
- Migrating a file system on an ongoing basis
- When to convert a file system
- Converting the byte order of a file system
- Alignment value and block size
- Disk group alignment and encapsulated disks
- Disk group import between Linux and non-Linux machines
- Migrating a snapshot volume
- Migrating from Oracle ASM to Veritas File System
- Section VII. Veritas InfoScale 4K sector device support solution
Relocating inactive tablespaces or segments to tier two storage
It is general practice to use partitions in databases. Each partition maps to a unique tablespace. For example, in a shopping goods database, the orders table can be partitioned by quarter: Q1 orders are organized into the Q1_order_tbs tablespace, Q2 orders into Q2_order_tbs, and so on.
As the quarters go by, activity on the older quarter data decreases. Relocating old quarter data to Tier-2 storage can therefore save significant storage costs, and the relocation can be performed while the database is online.
The following example use case illustrates how to relocate Q1 order data to Tier-2 storage at the beginning of Q3. The example steps assume that all the database data is in the /DBdata file system.
To prepare to relocate Q1 order data into Tier-2 storage for DB2
- Find the tablespace ID for the tablespace Q1_order_tbs.
db2inst1$ db2 list tablespaces
- Obtain the list of containers belonging to Q1_order_tbs, using the tablespace ID found in the previous step.
db2inst1$ db2 list tablespace containers for <tablespace-id>
- Find the path names for the containers and store them in the file Q1_order_files.txt.
# cat Q1_order_files.txt
NODE0000/Q1_order_file1.f
NODE0000/Q1_order_file2.f
...
NODE0000/Q1_order_fileN.f
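The DB2 lookup above can be scripted. The following sketch extracts the tablespace ID for a named tablespace from `db2 list tablespaces` output; the `Tablespace ID = n` / `Name = ...` layout shown in the captured sample is an assumption based on typical DB2 output and may differ between DB2 versions.

```shell
#!/bin/sh
# Sketch: pull the tablespace ID for a named tablespace out of
# `db2 list tablespaces` output. The "Tablespace ID = n" / "Name = ..."
# layout is an assumption -- verify against your DB2 version.

tbs_id_for() {
    # $1 = tablespace name; stdin = output of `db2 list tablespaces`
    awk -v name="$1" '
        /Tablespace ID/       { id = $NF }   # remember the most recent ID
        /Name/ && $NF == name { print id }   # emit it when the name matches
    '
}

# Normally: db2 list tablespaces | tbs_id_for Q1_ORDER_TBS
# Demonstrated here against captured sample output (prints 5):
tbs_id_for Q1_ORDER_TBS <<'EOF'
 Tablespace ID                        = 2
 Name                                 = USERSPACE1
 Tablespace ID                        = 5
 Name                                 = Q1_ORDER_TBS
EOF
```

The ID printed by the function is what the subsequent `db2 list tablespace containers for <tablespace-id>` step expects.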
To prepare to relocate Q1 order data into Tier-2 storage for Sybase
- Obtain a list of datafiles belonging to the segment Q1_order_tbs. The system procedures sp_helpsegment and sp_helpdevice can be used for this purpose.
sybsadmin$ sp_helpsegment Q1_order_tbs
Note:
In Sybase terminology, a "tablespace" is the same as a "segment."
- Note down the device names for the segment Q1_order_tbs.
- For each device name use the sp_helpdevice system procedure to get the physical path name of the datafile.
sybsadmin$ sp_helpdevice <device name>
- Save all the datafile path names in Q1_order_files.txt.
# cat Q1_order_files.txt
NODE0000/Q1_order_file1.f
NODE0000/Q1_order_file2.f
...
NODE0000/Q1_order_fileN.f
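The per-device sp_helpdevice lookups can also be scripted. The sketch below separates the parsing step so it can be demonstrated against captured output; the column layout of sp_helpdevice output (physical_name as the second column, an absolute path) and the device and server names in the comments are assumptions, so adjust them for your site.

```shell
#!/bin/sh
# Sketch: extract the physical path of a device from sp_helpdevice output.
# Assumes physical_name is the second column and is an absolute path --
# check the output format on your server before relying on this.

physical_name() {
    # stdin = sp_helpdevice output; print the first physical_name found
    awk 'NF >= 2 && $2 ~ /^\// { print $2; exit }'
}

# Normally, for each device name noted from sp_helpsegment:
#   isql -U sa -S SYBSERVER <<SQL | physical_name >> Q1_order_files.txt
#   sp_helpdevice Q1_order_dev1
#   go
#   SQL
# Demonstrated here against captured sample output:
physical_name <<'EOF'
 device_name   physical_name               description
 -----------   -------------------------   -----------
 Q1_order_dev1 /sybdata/Q1_order_file1.f   special, physical disk
EOF
```

Running the loop over every device collects all the datafile path names into Q1_order_files.txt, matching the step above.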
To relocate Q1 order data into Tier-2
- Prepare a placement policy XML file. In this example, the policy file name is Q1_order_policy.xml. A sample policy is shown below.
The policy performs unconditional relocation, so there is no WHEN clause. There are multiple PATTERN statements in the SELECT clause; each PATTERN selects a different file.
<?xml version="1.0"?>
<!DOCTYPE PLACEMENT_POLICY SYSTEM "/opt/VRTSvxfs/etc/placement_policy.dtd">
<PLACEMENT_POLICY Version="5.0" Name="selected files">
  <RULE Flags="data" Name="Key-Files-Rule">
    <COMMENT> This rule deals with key important files. </COMMENT>
    <SELECT Flags="Data">
      <DIRECTORY Flags="nonrecursive">NODE0000</DIRECTORY>
      <PATTERN> Q1_order_file1.f </PATTERN>
      <PATTERN> Q1_order_file2.f </PATTERN>
      <PATTERN> Q1_order_fileN.f </PATTERN>
    </SELECT>
    <RELOCATE>
      <COMMENT> Note that there is no WHEN clause. </COMMENT>
      <TO>
        <DESTINATION>
          <CLASS> tier2 </CLASS>
        </DESTINATION>
      </TO>
    </RELOCATE>
  </RULE>
</PLACEMENT_POLICY>
- Validate the policy Q1_order_policy.xml.
# fsppadm validate /DBdata Q1_order_policy.xml
- Assign the policy.
# fsppadm assign /DBdata Q1_order_policy.xml
- Enforce the policy.
# fsppadm enforce /DBdata
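The validate, assign, and enforce steps above can be wrapped in one function so that a failed validation stops the sequence before the policy is assigned or enforced. The function name below is illustrative; the paths match the example.

```shell
#!/bin/sh
# Sketch: run the validate -> assign -> enforce sequence as one unit,
# stopping at the first failing step.

relocate_q1() {
    fsppadm validate /DBdata Q1_order_policy.xml &&  # syntax-check the policy
    fsppadm assign   /DBdata Q1_order_policy.xml &&  # make it active on /DBdata
    fsppadm enforce  /DBdata                         # relocate matching files now
}

# relocate_q1   # run on a host with VxFS and /DBdata mounted
```

Because the three commands are chained with &&, an invalid policy file never reaches the assign or enforce stage.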