NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring Performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
Adjusting the batch size for sending metadata to the NetBackup catalog
You can change the batch size that is used to send metadata to the NetBackup catalog during backups. You can also change the batch size for sending metadata to the catalog specifically for catalog backups.
A change to the batch size can help in the following cases:
- If backups fail because a query to add files to the catalog takes more than 10 minutes to complete. In this case, the backup job fails, and the bpbrm log contains a message that indicates a failed attempt to add files to the catalog. Note that the bpdbm log does not contain a similar message.
- To reduce CPU usage on the primary server. When many files are backed up and the batch size is small, too many bpdbm processes may run simultaneously and consume CPU cycles on the primary server. Increasing the batch size can significantly reduce CPU utilization. For backups of large numbers of small files, set the value to at least 90,000. On NetBackup Flex Scale (NBFS), the default is 100,000.
- To improve backup performance when the folders to back up contain a large number of small files or subfolders.
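To see why a larger batch size reduces bpdbm activity, it helps to look at the arithmetic: the number of catalog-insert batches is the file count divided by the batch size, rounded up. The following is an illustration only (not a NetBackup command), using a hypothetical backup of 1,800,000 small files:

```shell
# Illustration: number of catalog-insert batches for 1,800,000 files
# at a small batch size (5,000) versus the recommended minimum (90,000).
files=1800000
for batch in 5000 90000; do
  # Ceiling division: round up so a partial batch still counts.
  batches=$(( (files + batch - 1) / batch ))
  echo "batch size $batch -> $batches catalog batches"
done
# batch size 5000 -> 360 catalog batches
# batch size 90000 -> 20 catalog batches
```

Fewer batches means fewer round trips into the catalog, which is where the CPU savings on the primary server come from.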
To adjust the batch size for sending metadata to the catalog for NBU-Catalog backups
- Create the following file:
/usr/openv/netbackup/CAT_BU_MAX_FILES_PER_ADD
- In the file, enter a value for the number of metadata entries to send to the catalog in each batch for catalog backups. The allowed values are 1 to 100,000.
The default is the maximum, 100,000 entries per batch. Veritas recommends that you experiment with lower values to find the best performance for your backups.
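The two steps above can be sketched as follows. The NBU_DIR override is an assumption added so the sketch can be tried outside a real NetBackup install; on an actual primary server, leave it unset so the file lands in /usr/openv/netbackup as the procedure requires:

```shell
# Sketch of the procedure above. NBU_DIR is a hypothetical override for
# testing outside a NetBackup install; the real location is /usr/openv/netbackup.
NBU_DIR="${NBU_DIR:-/usr/openv/netbackup}"
CONF_FILE="$NBU_DIR/CAT_BU_MAX_FILES_PER_ADD"

# Write the desired batch size (allowed range: 1 to 100000).
echo 50000 > "$CONF_FILE"

# Show the value that will be read on the next catalog backup.
cat "$CONF_FILE"
```

The value 50000 here is only an example starting point for experimentation, not a recommendation.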