NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring Performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
I/O considerations
MSDP performs best with multiple file systems configured to provide the disk space for its data and metadata. Ideally, each file system should be created on an independent storage volume of equal size, with no disk or LUN sharing, so that I/O operations can run in parallel. If possible, store the MSDP metadata on a file system separate from those holding the data containers, because metadata has a different I/O pattern.
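As an illustration, a mount layout along these lines follows the guidance above: equal-sized data volumes on independent LUNs, plus a separate volume for metadata. All device names, sizes, mount points, and file system choices here are hypothetical, not NetBackup requirements:

```
# /etc/fstab excerpt -- hypothetical devices and mount points
# Four equal-sized 32 TB data volumes, one independent LUN each
/dev/mapper/msdp_data1  /msdp/data1  xfs  defaults,noatime  0 0
/dev/mapper/msdp_data2  /msdp/data2  xfs  defaults,noatime  0 0
/dev/mapper/msdp_data3  /msdp/data3  xfs  defaults,noatime  0 0
/dev/mapper/msdp_data4  /msdp/data4  xfs  defaults,noatime  0 0
# Metadata on its own file system, separate from the data containers
/dev/mapper/msdp_meta   /msdp/meta   xfs  defaults,noatime  0 0
```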
The size of a file system configured for an MSDP pool typically ranges from several tens of terabytes up to 100 TB. For each file system, MSDP dedicates worker threads and data buffers to data writes, compaction, CRC checks, and other operations. If the data-container file systems vary significantly in size, the smaller ones fill up before the larger ones. A full file system stops receiving data, which reduces the aggregate I/O bandwidth and eventually degrades server performance. The impact is most significant for I/O-intensive workloads such as low-deduplication-ratio backups, optimized duplication, and restores; the last two are read-intensive because the data they need is unlikely to be found in the file system cache.
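Because smaller file systems fill first, it can help to verify at provisioning time that the data file systems are near-uniform in size. The following is a minimal sketch of such a check; the `/msdp/data*` mount-point convention, the 10% threshold, and the `check_fs_sizes` name are illustrative assumptions, not part of NetBackup:

```shell
# Flags data file systems whose size deviates more than 10% from the
# largest, since those fill first and cut the pool's I/O bandwidth.
check_fs_sizes() {
  # Expects "mountpoint size_kb" pairs on stdin, one per line.
  awk '
    { size[$1] = $2; if ($2 > max) max = $2 }
    END {
      for (m in size)
        if (size[m] < max * 0.9)
          printf "WARNING: %s is %d%% smaller than the largest data FS\n",
                 m, (1 - size[m] / max) * 100
    }'
}
```

For example, on Linux the input pairs could come from `df -k --output=target,size /msdp/data*` with the header line stripped; no output means the data file systems are within the threshold.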