Veritas InfoScale™ 7.4 Solutions Guide - Solaris
- Section I. Introducing Veritas InfoScale
- Section II. Solutions for Veritas InfoScale products
- Section III. Improving database performance
- Overview of database accelerators
- Improving database performance with Veritas Quick I/O
- About Quick I/O
- Improving database performance with Veritas Cached Quick I/O
- Improving database performance with Veritas Concurrent I/O
- Section IV. Using point-in-time copies
- Understanding point-in-time copy methods
- Backing up and recovering
- Preserving multiple point-in-time copies
- Online database backups
- Backing up on an off-host cluster file system
- Database recovery using Storage Checkpoints
- Backing up and recovering in a NetBackup environment
- Off-host processing
- Creating and refreshing test environments
- Creating point-in-time copies of files
- Section V. Maximizing storage utilization
- Optimizing storage tiering with SmartTier
- Optimizing storage with Flexible Storage Sharing
- Section VI. Migrating data
- Understanding data migration
- Offline migration from Solaris Volume Manager to Veritas Volume Manager
- How Solaris Volume Manager objects are mapped to VxVM objects
- Overview of the conversion process
- Planning the conversion
- Preparing a Solaris Volume Manager configuration for conversion
- Setting up a Solaris Volume Manager configuration for conversion
- Converting from the Solaris Volume Manager software to VxVM
- Post conversion tasks
- Online migration of a native file system to the VxFS file system
- Migrating a source file system to the VxFS file system over NFS v3
- VxFS features not available during online migration
- Migrating storage arrays
- Migrating data between platforms
- Overview of the Cross-Platform Data Sharing (CDS) feature
- CDS disk format and disk groups
- Setting up your system to use Cross-platform Data Sharing (CDS)
- Maintaining your system
- Disk tasks
- Disk group tasks
- Displaying information
- File system considerations
- Specifying the migration target
- Using the fscdsadm command
- Maintaining the list of target operating systems
- Migrating a file system on an ongoing basis
- Converting the byte order of a file system
- Migrating from Oracle ASM to Veritas File System
- Section VII. Veritas InfoScale 4K sector device support solution
Making an off-host backup of an online Sybase database
The procedure for off-host database backup is designed to minimize copy-on-write operations that can impact system performance. You can use this procedure whether the database volumes are in a cluster-shareable disk group or in a private disk group on a single host. If the disk group is cluster-shareable, you can use a node in the cluster as the off-host processing (OHP) host. In that case, you can omit the steps that split off the snapshot disk group and deport it to the OHP host, because the disk group is already accessible to the OHP host. Similarly, when you refresh the snapshot, you do not need to reimport the snapshot disk group and rejoin it with the original disk group on the primary host.
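Before you choose an OHP host, you can confirm whether the disk group is cluster-shareable by listing its attributes; a shared disk group reports the shared flag in the output. This sketch uses the example disk group name from the procedure below:
# vxdg list database_dg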
To make an off-host backup of an online Sybase database
1. On the primary host, add one or more snapshot plexes to the volume using this command:
# vxsnap -g database_dg addmir database_vol [nmirror=N] \
      [alloc=storage_attributes]
By default, one snapshot plex is added unless you specify a number using the nmirror attribute. For a backup, one plex is usually sufficient. You can specify storage attributes (such as a list of disks) to determine where the plexes are created.
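For example, the following command adds a single snapshot plex and confines it to two named disks; the disk names disk03 and disk04 are illustrative and should be replaced with disks from your own configuration:
# vxsnap -g database_dg addmir database_vol nmirror=1 \
      alloc=disk03,disk04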
2. Suspend updates to the volumes. As the Sybase database administrator, put the database in quiesce mode by using a script such as that shown in the example.
#!/bin/ksh
#
# script: backup_start.sh
#
# Sample script to quiesce example Sybase ASE database.
#
# Note: The "for external dump" clause was introduced in Sybase
# ASE 12.5 to allow a snapshot database to be rolled forward.
# See the Sybase ASE 12.5 documentation for more information.

isql -Usa -Ppassword -SFMR <<!
quiesce database tag hold database1[, database2]... [for external dump]
go
quit
!
3. Use the following command to make a full-sized snapshot, snapvol, of the tablespace volume by breaking off the plexes that you added in step 1 from the original volume:
# vxsnap -g database_dg make \
      source=database_vol/newvol=snapvol/nmirror=N \
      [alloc=storage_attributes]
The nmirror attribute specifies the number of mirrors, N, in the snapshot volume.
If a database spans more than one volume, specify all the volumes and their snapshot volumes as separate tuples on the same line, for example:
# vxsnap -g database_dg make source=database_vol1/newvol=snapvol1 \
      source=database_vol2/newvol=snapvol2 \
      source=database_vol3/newvol=snapvol3 alloc=ctlr:c3,ctlr:c4
This step sets up the snapshot volumes ready for the backup cycle, and starts tracking changes to the original volumes.
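Change tracking depends on FastResync being active on the original volumes. To confirm that FastResync is enabled on a volume, you can query the volume record; the command prints on or off:
# vxprint -g database_dg -F%fastresync database_vol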
4. Release all the tablespaces or databases from quiesce mode. As the Sybase database administrator, use a script such as that shown in the example.
#!/bin/ksh
#
# script: backup_end.sh
#
# Sample script to release example Sybase ASE database from quiesce
# mode.

isql -Usa -Ppassword -SFMR <<!
quiesce database tag release
go
quit
!
5. If the primary host and the snapshot host are in the same cluster, and the disk group is shared, the snapshot volume is already accessible to the OHP host. Skip to step 9.
If the OHP host is not in the cluster, perform the following steps to make the snapshot volume accessible to the OHP host.
On the primary host, split the disks containing the snapshot volumes into a separate disk group, snapvoldg, from the original disk group, database_dg, using the following command:
# vxdg split database_dg snapvoldg snapvol ...
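For example, to split off the three snapshot volumes created in the multi-volume example in step 3:
# vxdg split database_dg snapvoldg snapvol1 snapvol2 snapvol3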
6. On the primary host, deport the snapshot volume's disk group using the following command:
# vxdg deport snapvoldg
7. On the OHP host where the backup is to be performed, use the following command to import the snapshot volume's disk group:
# vxdg import snapvoldg
8. VxVM recovers the volumes automatically after the disk group import unless automatic recovery is disabled. Check whether the snapshot volume is initially disabled and not recovered following the split.
If a volume is in the DISABLED state, use the following command on the OHP host to recover and restart the snapshot volume:
# vxrecover -g snapvoldg -m snapvol ...
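You can confirm the volume state (for example, DISABLED) before and after the recovery with the standard hierarchical listing:
# vxprint -g snapvoldg -ht snapvol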
9. On the OHP host, back up the snapshot volumes. If you need to remount the file system in the volume to back it up, first run fsck on the volumes. The following are sample commands for checking and mounting a file system:
# fsck -F vxfs /dev/vx/rdsk/snapvoldg/snapvol
# mount -F vxfs /dev/vx/dsk/snapvoldg/snapvol mount_point
Back up the file system using a command such as bpbackup in Veritas NetBackup (a sketch follows this step). After the backup is complete, use the following command to unmount the file system.
# umount mount_point
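As an illustration, a client-initiated NetBackup backup of the mounted snapshot might look like the following; the policy name ohp_policy and schedule name full_sched are hypothetical and must match your NetBackup configuration:
# /usr/openv/netbackup/bin/bpbackup -w -p ohp_policy \
      -s full_sched mount_point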
10. If the primary host and the snapshot host are in the same cluster, and the disk group is shared, the snapshot volume is already accessible to the primary host. Skip to step 14.
If the OHP host is not in the cluster, perform the following steps to make the snapshot volume accessible to the primary host.
On the OHP host, use the following command to deport the snapshot volume's disk group:
# vxdg deport snapvoldg
11. On the primary host, re-import the snapshot volume's disk group using the following command:
# vxdg [-s] import snapvoldg
Note:
Specify the -s option if you are reimporting the disk group to be rejoined with a shared disk group in a cluster.
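For example, if database_dg is a shared disk group in a cluster, you would import the snapshot disk group as shared before rejoining it:
# vxdg -s import snapvoldg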
12. On the primary host, use the following command to rejoin the snapshot volume's disk group with the original volume's disk group:
# vxdg join snapvoldg database_dg
13. VxVM recovers the volumes automatically after the join unless automatic recovery is disabled. Check whether the snapshot volumes are initially disabled and not recovered following the join.
If a volume is in the DISABLED state, use the following command on the primary host to recover and restart the snapshot volume:
# vxrecover -g database_dg -m snapvol
14. On the primary host, reattach the snapshot volumes to their original volumes using the following command:
# vxsnap -g database_dg reattach snapvol source=database_vol \
      [snapvol2 source=database_vol2]...
For example, to reattach the snapshot volumes snapvol1, snapvol2, and snapvol3:
# vxsnap -g database_dg reattach snapvol1 source=database_vol1 \
      snapvol2 source=database_vol2 snapvol3 source=database_vol3
While the reattached plexes are being resynchronized from the data in the parent volume, they remain in the SNAPTMP state. After resynchronization is complete, the plexes are placed in the SNAPDONE state. You can use the vxsnap print command to check on the progress of synchronization.
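For example, to check on the resynchronization of the reattached plexes for the example volume:
# vxsnap -g database_dg print database_vol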
Repeat steps 2 through 14 each time that you need to back up the volume.