NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Size guidance for the NetBackup primary server and domain
- Factors that limit job scheduling
- More than one backup job per second
- Stagger the submission of jobs for better load distribution
- NetBackup job delays
- Selection of storage units: performance considerations
- About file system capacity and NetBackup performance
- About the primary server NetBackup catalog
- Guidelines for managing the primary server NetBackup catalog
- Adjusting the batch size for sending metadata to the NetBackup catalog
- Methods for managing the catalog size
- Performance guidelines for NetBackup policies
- Legacy error log fields
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- Data segmentation
- Fingerprint lookup for deduplication
- Predictive and sampling cache scheme
- Data store
- Space reclamation
- System resource usage and tuning considerations
- Memory considerations
- I/O considerations
- Network considerations
- CPU considerations
- OS tuning considerations
- MSDP tuning considerations
- MSDP sizing considerations
- Cloud tier sizing and performance
- Accelerator performance considerations
- Media configuration guidelines
- About dedicated versus shared backup environments
- Suggestions for NetBackup media pools
- Disk versus tape: performance considerations
- NetBackup media not available
- About the threshold for media errors
- Adjusting the media_error_threshold
- About tape I/O error handling
- About NetBackup media manager tape drive selection
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup SAN Client
- Best practices: NetBackup AdvancedDisk
- Best practices: Disk pool configuration - setting concurrent jobs and maximum I/O streams
- Best practices: About disk staging and NetBackup performance
- Best practices: Supported tape drive technologies for NetBackup
- Best practices: NetBackup tape drive cleaning
- Best practices: NetBackup data recovery methods
- Best practices: Suggestions for disaster recovery planning
- Best practices: NetBackup naming conventions
- Best practices: NetBackup duplication
- Best practices: NetBackup deduplication
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Best practices: NetBackup NAS-Data-Protection (D-NAS)
- Best practices: NetBackup for Nutanix AHV
- Best practices: NetBackup Sybase database
- Best practices: Avoiding media server resource bottlenecks with Oracle VLDB backups
- Best practices: Avoiding media server resource bottlenecks with MSDPLB+ prefix policy
- Best practices: Cloud deployment considerations
- Measuring Performance
- Measuring NetBackup performance: overview
- How to control system variables for consistent testing conditions
- Running a performance test without interference from other jobs
- About evaluating NetBackup performance
- Evaluating NetBackup performance through the Activity Monitor
- Evaluating NetBackup performance through the All Log Entries report
- Table of NetBackup All Log Entries report
- Evaluating system components
- About measuring performance independent of tape or disk output
- Measuring performance with bpbkar
- Bypassing disk performance with the SKIP_DISK_WRITES touch file
- Measuring performance with the GEN_DATA directive (Linux/UNIX)
- Monitoring Linux/UNIX CPU load
- Monitoring Linux/UNIX memory use
- Monitoring Linux/UNIX disk load
- Monitoring Linux/UNIX network traffic
- Monitoring Linux/UNIX system resource usage with dstat
- About the Windows Performance Monitor
- Monitoring Windows CPU load
- Monitoring Windows memory use
- Monitoring Windows disk load
- Increasing disk performance
- Tuning the NetBackup data transfer path
- About the NetBackup data transfer path
- About tuning the data transfer path
- Tuning suggestions for the NetBackup data transfer path
- NetBackup client performance in the data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- Default number of shared data buffers
- Default size of shared data buffers
- Amount of shared memory required by NetBackup
- How to change the number of shared data buffers
- Notes on number data buffers files
- How to change the size of shared data buffers
- Notes on size data buffer files
- Size values for shared data buffers
- Note on shared memory and NetBackup for NDMP
- Recommended shared memory settings
- Recommended number of data buffers for SAN Client and FT media server
- Testing changes made to shared memory
- About NetBackup wait and delay counters
- Changing parent and child delay values for NetBackup
- About the communication between NetBackup client and media server
- Processes used in NetBackup client-server communication
- Roles of processes during backup and restore
- Finding wait and delay counter values
- Note on log file creation
- About tunable parameters reported in the bptm log
- Example of using wait and delay counter values
- Issues uncovered by wait and delay counter values
- Estimating the effect of multiple copies on backup performance
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- NetBackup storage device performance in the data transfer path
- Tuning other NetBackup components
- When to use multiplexing and multiple data streams
- Effects of multiplexing and multistreaming on backup and restore
- How to improve NetBackup resource allocation
- Encryption and NetBackup performance
- Compression and NetBackup performance
- How to enable NetBackup compression
- Effect of encryption plus compression on NetBackup performance
- Information on NetBackup Java performance improvements
- Information on NetBackup Vault
- Fast recovery with Bare Metal Restore
- How to improve performance when backing up many small files
- How to improve FlashBackup performance
- Veritas NetBackup OpsCenter
- Tuning disk I/O performance
Configuring universal shares
To optimize the use of this feature, it is important to consider some key configuration points. How the shares are configured directly affects scalability and performance.
As a guideline, it is recommended that no more than 50 shares be created per NetBackup media server or NetBackup Flex Instance. This recommendation is a guideline only and not a hard limit. That said, significant performance testing has shown that performance can degrade when more than 50 shares are active concurrently. For clarity, the term "concurrent" in this context refers to actively executing read and write operations. Testing also showed that performance tends to peak at 25 concurrent shares.
For maximum flexibility, the NetBackup Flex Appliance can host multiple MSDP instances, each with up to the recommended 50 shares per MSDP instance.
As with all solution design, it is important to be mindful of the amount of compute and I/O resources available on the target hardware. Furthermore, all best-practice recommendations for optimizing MSDP performance still apply, because the underlying technology of the storage target is MSDP. The recommendation of 1 GB of memory for every 1 TB of MSDP storage also applies here.
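As a rough illustration of these sizing guidelines, the following Python sketch estimates the memory to reserve for a given MSDP pool size (the 1 GB per 1 TB rule) and how many MSDP instances keep the share count within the 50-share guideline. The constants, function names, and example values are illustrative only and are not NetBackup interfaces.

```python
# Illustrative sizing helper based on the guidelines above: roughly 1 GB of RAM
# per 1 TB of MSDP storage, and no more than 50 universal shares per media
# server or MSDP instance (a guideline, not a hard limit).
import math

MAX_SHARES_PER_INSTANCE = 50
GB_RAM_PER_TB_MSDP = 1

def msdp_memory_gb(msdp_pool_tb: float) -> float:
    """Estimate the memory (GB) to reserve for an MSDP pool of the given size."""
    return msdp_pool_tb * GB_RAM_PER_TB_MSDP

def instances_needed(total_shares: int) -> int:
    """Estimate how many MSDP instances keep each within the 50-share guideline."""
    return math.ceil(total_shares / MAX_SHARES_PER_INSTANCE)

if __name__ == "__main__":
    # Hypothetical environment: 120 universal shares, 250 TB MSDP pool per instance.
    print(instances_needed(120))   # -> 3 MSDP instances
    print(msdp_memory_gb(250))     # -> 250 GB of memory per instance
```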
When leveraging Flex Appliances with universal shares, the same principles for avoiding I/O bottlenecks apply. For example, avoid sharing LUNs across MSDP instances.
Best practices for Flex Appliances, traditional NetBackup Appliances, and BYO still apply as the universal share feature leverages the same underlying MSDP technology.
Universal share size is limited to 960 TB.
Each individual share can be used by multiple hosts. However, it is recommended that one share not be assigned to more than a few host clients, especially if each host client is frequently dumping data to the share. A share that is mapped to many host clients can experience performance bottlenecks that affect the success of universal share backups and secondary operations that are executed thereafter. For very busy environments, a 1:1 ratio of share to host client is optimal.
Any data that is ingested into the universal share resides in the MSDP storage pool that is located on the appliance-based or BYO media server that hosts the universal share. Although data ingested into the universal share is deduplicated and stored in MSDP immediately, that data is not referenced in the NetBackup catalog and no retention is enforced until a universal share backup is run. Without a universal share backup, the data that is placed in the universal share is not searchable and cannot be restored using standard NetBackup procedures. Before the backup, control of the data in the share is entirely managed by the host that mounts the share. If the owner of the share deletes the share data, or if the share itself is removed, the data that existed in the share is not recoverable by NetBackup. Therefore, the universal share protection point backup, a special backup type, was designed to facilitate management and restorability through traditional NetBackup methods.
For clarity, a universal share backup and a universal share protection point are the same thing: both refer to the special NetBackup policy type that indexes the data in the share and sets retention enforcement, making the data available for other activities such as secondary operations.
A single NetBackup policy can be configured to protect every universal share within a NetBackup domain, or multiple NetBackup policies can be configured so that each protects an individual universal share. When a protection point is executed, no data movement occurs. Furthermore, the performance of this special backup is not based on the size of the file data; it correlates more closely with the number of files in the specific universal share. As part of the special backup activity, each file in the share is indexed within the NetBackup catalog, and retention enforcement is set.
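Because the cost of a protection point backup tracks the number of files rather than the amount of data, a quick file count per share can help predict which shares take the longest to index. The following Python sketch is a minimal example, assuming the shares are mounted locally at hypothetical paths; it is not a NetBackup interface.

```python
# Minimal sketch: compare universal shares by file count, since protection point
# backup time correlates with the number of files indexed, not the data size.
import os

def count_files(mount_point: str) -> int:
    """Walk a mounted universal share and count the files it contains."""
    total = 0
    for _root, _dirs, files in os.walk(mount_point):
        total += len(files)
    return total

# Hypothetical mount points for shares exported by the media server.
for share in ("/mnt/ushare_db01", "/mnt/ushare_db02"):
    print(f"{share}: {count_files(share)} files to index during the protection point backup")
```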
The timing of a universal share protection point backup is important for two reasons:
- Ensure that the database dump is complete before initiating the protection point backup. Performance suffers if the backup runs while the database dump is still in progress, and the resulting backup may be incomplete.
- NetBackup administrators should meet with the DBAs to understand the workload size per host client, the dump frequency, and the time that is required to complete the dump. This information helps determine the optimal quiet period in which to schedule the backup of each share, as well as any subsequent secondary operations such as replication and optimized duplication.
Running a universal share protection point backup during a quiet period, when no dumps are occurring on the share, helps ensure that the complete dump is captured and avoids I/O contention between extensive read and write activities.
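As an illustration of how the dump schedule information gathered from the DBAs can be used, the Python sketch below finds the gaps between known dump windows on a share and reports the quiet periods in which the protection point backup and any secondary operations could be scheduled. The dump windows and hour-of-day granularity are hypothetical inputs.

```python
# Sketch: given the dump windows reported by the DBAs for one share (start hour,
# end hour), find the quiet gaps where a protection point backup and any
# secondary operations (replication, opt-dup) can run without I/O contention.

def quiet_periods(dump_windows, day_start=0, day_end=24):
    """Return the (start, end) gaps of the day not covered by any dump window."""
    gaps = []
    cursor = day_start
    for start, end in sorted(dump_windows):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        gaps.append((cursor, day_end))
    return gaps

# Hypothetical dump windows for the host client mapped to this share.
dumps = [(1, 4), (12, 13), (20, 22)]   # 01:00-04:00, 12:00-13:00, 20:00-22:00
print(quiet_periods(dumps))            # -> [(0, 1), (4, 12), (13, 20), (22, 24)]
```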
The recommended 1:1 ratio of host client to share, combined with scheduling the backup and any secondary operations during a quiet period, helps prevent a scenario in which too many host clients write to a single share, which makes it difficult to find a quiet period and creates inevitable I/O contention.
Extensive testing in which each NetBackup protection point policy backed up a small number of shares (for example, ~10 shares), and each host client was mapped to one share, produced favorable results and allowed time for secondary operations.
It is also important to note that the NetBackup Accelerator feature does not apply here, nor is it supported.
Any functionality that is available with storage lifecycle policies (SLPs) can be applied to data managed by a universal share protection point backup. This functionality includes transitioning data to tape or cloud, optimized duplication (opt-dup) to other media servers, and replication to other NetBackup domains via Auto Image Replication (A.I.R.).
The guideline of a maximum of 50 concurrent universal shares covers all read and write activities, including secondary operations.
To optimize the performance of secondary operations, schedule these activities when no other read or write activity to the same share is occurring, for example, after the dump and the backup are completed.
As previously highlighted, the data characteristics that affect deduplication efficacy also apply here because the underlying technology is MSDP. If a DBA chooses to use third-party encryption with the database dumps, the deduplication rate is negatively affected; data protected with third-party encryption does not deduplicate well. Certain types of database dump compression can also negatively affect deduplication efficacy. In both cases, decreased deduplication efficacy reduces space optimization and increases the time and storage that secondary operations require.
It is also important to note that performance is affected when the dumps placed in the universal share consist of millions of tiny files, due to the overhead in read and write activities.
For all the aforementioned data characteristics, it is important to run real performance benchmark tests to measure speed and deduplication efficacy before moving the solution into production.
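As a starting point for such a benchmark, the following Python sketch times a sequential write of synthetic data into a mounted universal share and reports the effective ingest rate. The mount point and test size are hypothetical; for a realistic measure of deduplication efficacy, write representative database dumps instead of synthetic data and review the MSDP pool statistics after the test.

```python
# Minimal benchmark sketch: measure sequential write throughput into a mounted
# universal share before moving the solution into production.
import os
import time

def write_throughput_mb_s(mount_point: str, size_mb: int = 1024, chunk_mb: int = 8) -> float:
    """Write size_mb of data to the share and return the observed MB/s."""
    path = os.path.join(mount_point, "ushare_benchmark.dat")
    chunk = os.urandom(chunk_mb * 1024 * 1024)  # synthetic data; use real dumps for dedup testing
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    os.remove(path)
    return size_mb / elapsed

# Hypothetical mount point of the universal share under test.
print(f"{write_throughput_mb_s('/mnt/ushare_db01'):.1f} MB/s")
```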
For clarity, deduplication occurs at the time of the dump, not at the time of the universal share protection point backup.