Veritas InfoScale™ 8.0.2 Storage and Availability Management for Oracle Databases - AIX, Linux, Solaris
- Section I. Storage Foundation High Availability (SFHA) management solutions for Oracle databases
- Overview of Storage Foundation for Databases
- About Veritas File System
- Section II. Deploying Oracle with Veritas InfoScale products
- Deployment options for Oracle in a Storage Foundation environment
- Deploying Oracle with Storage Foundation
- Setting up disk group for deploying Oracle
- Creating volumes for deploying Oracle
- Creating VxFS file system for deploying Oracle
- Deploying Oracle in an off-host configuration with Storage Foundation
- Deploying Oracle with High Availability
- Deploying Oracle with Volume Replicator (VVR) for disaster recovery
- Section III. Configuring Storage Foundation for Database (SFDB) tools
- Configuring and managing the Storage Foundation for Databases repository database
- Configuring the Storage Foundation for Databases (SFDB) tools repository
- Configuring authentication for Storage Foundation for Databases (SFDB) tools
- Section IV. Improving Oracle database performance
- About database accelerators
- Improving database performance with Veritas Extension for Oracle Disk Manager
- About Oracle Disk Manager in the Veritas InfoScale products environment
- Improving database performance with Veritas Cached Oracle Disk Manager
- About Cached ODM in SFHA environment
- Configuring Cached ODM in SFHA environment
- Administering Cached ODM settings with Cached ODM Advisor in SFHA environment
- Generating reports of candidate datafiles by using Cached ODM Advisor in SFHA environment
- Generating summary reports of historical activity by using Cached ODM Advisor in SFHA environment
- Improving database performance with Quick I/O
- About Quick I/O
- Improving database performance with Cached Quick I/O
- Section V. Using point-in-time copies
- Understanding point-in-time copy methods
- Volume-level snapshots
- About Reverse Resynchronization in volume-level snapshots (FlashSnap)
- Storage Checkpoints
- About FileSnaps
- Considerations for Oracle point-in-time copies
- Administering third-mirror break-off snapshots
- Administering space-optimized snapshots
- Creating a clone of an Oracle database by using space-optimized snapshots
- Administering Storage Checkpoints
- Database Storage Checkpoints for recovery
- Administering FileSnap snapshots
- Backing up and restoring with NetBackup in an SFHA environment
- Section VI. Optimizing storage costs for Oracle
- Understanding storage tiering with SmartTier
- Configuring and administering SmartTier
- Configuring SmartTier for Oracle
- Optimizing database storage using SmartTier for Oracle
- Extent balancing in a database environment using SmartTier for Oracle
- SmartTier use cases for Oracle
- Compressing files and databases to optimize storage costs
- Using the Compression Advisor tool
- Section VII. Managing Oracle disaster recovery
- Section VIII. Storage Foundation for Databases administrative reference
- Storage Foundation for Databases command reference
- Tuning for Storage Foundation for Databases
- About tuning Veritas Volume Manager (VxVM)
- About tuning VxFS
- About tuning Oracle databases
- About tuning Solaris for Oracle
- Troubleshooting SFDB tools
- About troubleshooting Storage Foundation for Databases (SFDB) tools
- About the vxdbd daemon
- Resources for troubleshooting SFDB tools
- Manual recovery of Oracle database
- Storage Foundation for Databases command reference for the releases prior to 6.0
- Preparing storage for Database FlashSnap
- About creating database snapshots
- FlashSnap commands
- Creating a snapplan (dbed_vmchecksnap)
- Validating a snapplan (dbed_vmchecksnap)
- Displaying, copying, and removing a snapplan (dbed_vmchecksnap)
- Creating a snapshot (dbed_vmsnap)
- Backing up the database from snapshot volumes (dbed_vmclonedb)
- Cloning a database (dbed_vmclonedb)
- Guidelines for Oracle recovery
- Database Storage Checkpoint Commands
- Section IX. Reference
- Appendix A. VCS Oracle agents
- Appendix B. Sample configuration files for clustered deployments
- Appendix C. Database FlashSnap status information
- Appendix D. Using third party software to back up files
About tunable VxFS I/O parameters
The following are tunable VxFS I/O parameters:
- read_pref_io: The preferred read request size. The file system uses this parameter in conjunction with the read_nstream value to determine how much data to read ahead. The default value is 64K.
- write_pref_io: The preferred write request size. The file system uses this parameter in conjunction with the write_nstream value to determine how to do flush behind on writes. The default value is 64K.
- read_nstream: The number of parallel read requests of size read_pref_io that you can have outstanding at one time. The file system uses the product of read_nstream multiplied by read_pref_io to determine its read ahead size. The default value for read_nstream is 1.
- write_nstream: The number of parallel write requests of size write_pref_io that you can have outstanding at one time. The file system uses the product of write_nstream multiplied by write_pref_io to determine when to do flush behind on writes. The default value for write_nstream is 1.
- default_indir_size: On VxFS, files can have up to ten variably sized direct extents stored in the inode. After these extents are used, the file must use indirect extents that are a fixed size. The size is set when the file first uses indirect extents. These indirect extents are 8K by default. The file system does not use larger indirect extents because it must fail a write and return ENOSPC if there are no extents available that are the indirect extent size. For file systems with many large files, the 8K indirect extent size is too small. Large files that require indirect extents use many smaller extents instead of a few larger ones. By using this parameter, the default indirect extent size can be increased so that large files in indirects use fewer large extents. Be careful using this tunable. If it is too large, then writes fail when they are unable to allocate extents of the indirect extent size to a file. In general, the fewer and the larger the files on a file system, the larger the default_indir_size parameter can be. The value of this parameter is generally a multiple of the read_pref_io parameter. This tunable is not applicable on Version 4 disk layouts.
- discovered_direct_iosz: Any file I/O requests larger than the discovered_direct_iosz are handled as discovered direct I/O. A discovered direct I/O is unbuffered similar to direct I/O, but does not require a synchronous commit of the inode when the file is extended or blocks are allocated. For larger I/O requests, the CPU time for copying the data into the page cache and the cost of using memory to buffer the I/O data becomes more expensive than the cost of doing the disk I/O. For these I/O requests, using discovered direct I/O is more efficient than regular I/O. The default value of this parameter is 256K.
- initial_extent_size: Changes the default initial extent size. VxFS determines the size of the first extent to be allocated to the file based on the first write to a new file. Normally, the first extent is the smallest power of 2 that is larger than the size of the first write. If that power of 2 is less than 8K, the first extent allocated is 8K. After the initial extent, the file system increases the size of subsequent extents (see max_seqio_extent_size) with each allocation. Since most applications write to files using a buffer size of 8K or less, the increasing extents start doubling from a small initial extent. initial_extent_size can change the default initial extent size to be larger, so the doubling policy starts from a much larger initial size and the file system does not allocate a set of small extents at the start of a file. Use this parameter only on file systems that will have a very large average file size. On these file systems, it results in fewer extents per file and less fragmentation. initial_extent_size is measured in file system blocks.
- max_direct_iosz: The maximum size of a direct I/O request that will be issued by the file system. If a larger I/O request comes in, then it is broken up into max_direct_iosz chunks. This parameter defines how much memory an I/O request can lock at once, so it should not be set to more than 20 percent of memory.
- max_diskq: Limits the maximum disk queue generated by a single file. When the file system is flushing data for a file and the number of pages being flushed exceeds max_diskq, processes will block until the amount of data being flushed decreases. Although this does not limit the actual disk queue, it prevents flushing processes from making the system unresponsive. The default value is 1MB.
- max_seqio_extent_size: Increases or decreases the maximum size of an extent. When the file system is following its default allocation policy for sequential writes to a file, it allocates an initial extent that is large enough for the first write to the file. When additional extents are allocated, they are progressively larger (the algorithm tries to double the size of the file with each new extent) so each extent can hold several writes' worth of data. This is done to reduce the total number of extents in anticipation of continued sequential writes. When the file stops being written, any unused space is freed for other files to use. Normally, this allocation stops increasing the size of extents at 2048 blocks, which prevents one file from holding too much unused space. max_seqio_extent_size is measured in file system blocks.
- qio_cache_enable: Enables or disables caching on Quick I/O files. The default behavior is to disable caching. To enable caching, set qio_cache_enable to 1. On systems with large memories, the database cannot always use all of the memory as a cache. By enabling file system caching as a second level cache, performance may be improved. If the database is performing sequential scans of tables, the scans may run faster by enabling file system caching so the file system will perform aggressive read-ahead on the files.
- write_throttle: Warning: The write_throttle parameter is useful in special situations where a computer system has a combination of a large amount of memory and slow storage devices. In this configuration, sync operations (such as fsync()) may take so long to complete that the system appears to hang. This behavior occurs because the file system is creating dirty pages (in-memory updates) faster than they can be asynchronously flushed to disk without slowing system performance. Lowering the value of write_throttle limits the number of dirty pages per file that a file system will generate before flushing the pages to disk. After the number of dirty pages for a file reaches the write_throttle threshold, the file system starts flushing pages to disk even if free memory is still available. The default value of write_throttle typically generates a lot of dirty pages, but maintains fast user writes. Depending on the speed of the storage device, if you lower write_throttle, user write performance may suffer, but the number of dirty pages is limited, so sync operations will complete much faster. Because lowering write_throttle can delay write requests (for example, lowering write_throttle may increase the file disk queue to the max_diskq value, delaying user writes until the disk queue decreases), it is recommended that you avoid changing the value of write_throttle unless your system has a large amount of physical memory and slow storage devices.
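On a mounted VxFS file system, these parameters are typically displayed and changed with the vxtunefs command, and can be made persistent across mounts through the /etc/vx/tunefstab file. The following is a minimal sketch only; the mount point /oradata, the disk group and volume names, and the values shown are illustrative assumptions, so confirm the exact syntax against the vxtunefs(1M) manual page for your platform.

```
# Display the current VxFS I/O tunables for a mounted file system.
/opt/VRTS/bin/vxtunefs -p /oradata

# Change tunables at run time (takes effect immediately, but is lost
# when the file system is unmounted).
/opt/VRTS/bin/vxtunefs -o read_pref_io=65536 /oradata
/opt/VRTS/bin/vxtunefs -o read_nstream=4 /oradata
```

To keep such settings across remounts, an entry for the underlying volume can be added to /etc/vx/tunefstab, for example a line of the form /dev/vx/dsk/oradg/oravol read_pref_io=65536,read_nstream=4; the file system then picks up those values each time it is mounted.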
If the file system is being used with VxVM, it is recommended that you let the VxFS I/O parameters default to values based on the volume geometry.
If the file system is being used with a hardware disk array or volume manager other than VxVM, align the parameters to match the geometry of the logical disk. With striping or RAID-5, it is common to set read_pref_io to the stripe unit size and read_nstream to the number of columns in the stripe. For striping arrays, use the same values for write_pref_io and write_nstream, but for RAID-5 arrays, set write_pref_io to the full stripe size and write_nstream to 1.
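As a hypothetical illustration of these rules, suppose one file system sits on a striped volume with a 64K stripe unit and four columns, and another on a RAID-5 volume with a 64K stripe unit and four data columns (a 256K full stripe). The mount points /stripe_fs and /raid5_fs and all values below are assumptions made for the example, not recommended settings.

```
# Striped volume: 64K stripe unit, 4 columns.
/opt/VRTS/bin/vxtunefs -o read_pref_io=65536 /stripe_fs
/opt/VRTS/bin/vxtunefs -o read_nstream=4 /stripe_fs
/opt/VRTS/bin/vxtunefs -o write_pref_io=65536 /stripe_fs
/opt/VRTS/bin/vxtunefs -o write_nstream=4 /stripe_fs

# RAID-5 volume: 64K stripe unit, 4 data columns (full stripe = 256K).
# Reads follow the stripe unit; writes use the full stripe with one stream.
/opt/VRTS/bin/vxtunefs -o read_pref_io=65536 /raid5_fs
/opt/VRTS/bin/vxtunefs -o read_nstream=4 /raid5_fs
/opt/VRTS/bin/vxtunefs -o write_pref_io=262144 /raid5_fs
/opt/VRTS/bin/vxtunefs -o write_nstream=1 /raid5_fs
```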
For an application to do efficient disk I/O, it should issue read requests that are equal to the product of read_nstream multiplied by read_pref_io. Generally, any multiple or factor of read_nstream multiplied by read_pref_io should be a good size for performance. For writing, the same rule of thumb applies to the write_pref_io and write_nstream parameters. When tuning a file system, the best approach is to try the tuning parameters under a real-life workload.
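For example, with read_pref_io at its 64K default and read_nstream raised to 4, a well-aligned application read size would be 4 x 64K = 256K. The dd command below is only a crude, hypothetical way to drive sequential reads at that size while observing throughput; the datafile name is an assumption.

```
# 4 streams x 64K preferred read size = 256K per application read.
dd if=/oradata/system01.dbf of=/dev/null bs=256k
```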
If an application is doing sequential I/O to large files, it should issue requests larger than the discovered_direct_iosz. This causes the I/O requests to be performed as discovered direct I/O requests, which are unbuffered like direct I/O but do not require synchronous inode updates when extending the file. If the file is too large to fit in the cache, then using unbuffered I/O avoids throwing useful data out of the cache and lessens CPU overhead.
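As a hypothetical continuation of the example above, reads issued in 1 MB requests exceed the default 256K discovered_direct_iosz and are therefore handled as discovered direct I/O; the file name below is an assumption, and the vxtunefs invocation simply reports the threshold currently in effect.

```
# 1 MB requests exceed the default discovered_direct_iosz (256K) and are
# handled as discovered direct I/O, bypassing the page cache.
dd if=/oradata/full_export.dmp of=/dev/null bs=1024k

# Check the discovered_direct_iosz value in effect for the file system.
/opt/VRTS/bin/vxtunefs -p /oradata | grep discovered_direct_iosz
```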