NetBackup™ Backup Planning and Performance Tuning Guide

Last Published:
Product(s): NetBackup & Alta Data Protection (10.4, 10.3.0.1, 10.3, 10.2.0.1, 10.2, 10.1.1, 10.1, 10.0.0.1, 10.0, 9.1.0.1, 9.1, 9.0.0.1, 9.0, 8.3.0.2, 8.3.0.1, 8.3)
Leveraging requirements and best practices


When the data gathering phase is complete, the next step is to use that data to calculate the capacity, I/O, and compute requirements, which yields three key numbers: BETB, IOPS, and compute (memory and CPU) resources. Veritas recommends that customers engage the Veritas Presales Team to assist with these calculations and determine the sizing of the solution. It is also important to consider some best practices around sizing and performance, and to ensure that the solution has some flexibility and headroom.
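The capacity calculation can be illustrated with a simplified sketch. The formula below is an assumption for illustration only, not the Veritas sizing methodology: it models BETB as a deduplicated initial full plus retained daily unique data, with an assumed initial deduplication ratio. Actual deduplication rates vary widely by data type, so real sizing should go through the Presales Team.

```python
# Illustrative BETB (back-end TB) estimate from FETB (front-end TB),
# daily change rate, and retention. The formula and the default
# deduplication ratio are assumptions for illustration, not an
# official Veritas sizing method.

def estimate_betb(fetb_tb, daily_change_rate, retention_days,
                  initial_dedupe_ratio=0.5):
    """Rough BETB: a deduplicated first full plus retained daily changes."""
    initial_full = fetb_tb * initial_dedupe_ratio    # first full, after dedupe
    daily_increment = fetb_tb * daily_change_rate    # new unique data per day
    return initial_full + daily_increment * retention_days

# Example: 100TB FETB, 2% daily change, 30-day retention
betb = estimate_betb(100, 0.02, 30)
print(f"Estimated BETB: {betb:.0f} TB")
```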

Best practice guidelines

Due to the nature of MSDP, memory requirements are driven by the cache and by the spoold and spad processes. The guideline is 1GB of memory per 1TB of MSDP storage: for a 500TB MSDP pool, the recommendation is a minimum of 500GB of memory. Note also that features such as Accelerator can be memory intensive, so memory sizing is important.
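The 1GB-per-1TB guideline can be expressed directly. The optional multiplier for Accelerator-heavy configurations is an illustrative assumption, not an official factor:

```python
# The 1GB of memory per 1TB of MSDP storage guideline from the text.
# accelerator_factor > 1.0 is an illustrative bump for memory-intensive
# features like Accelerator; it is an assumption, not an official multiplier.

def min_memory_gb(msdp_pool_tb, accelerator_factor=1.0):
    return msdp_pool_tb * 1.0 * accelerator_factor

print(min_memory_gb(500))   # 500TB pool -> 500GB minimum memory
```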

For workloads with very high job counts, smaller disk drives are recommended to increase IOPS performance; 4TB drives are sometimes a better fit than 8TB drives. Weigh this suggestion alongside the workload type, data characteristics, retention, and secondary operations.
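The reasoning is spindle count: for a fixed pool capacity, smaller drives mean more drives and therefore more aggregate IOPS. The per-drive IOPS figure below is an illustrative assumption for a nearline drive, not a vendor specification:

```python
# Why smaller drives help IOPS: fixed capacity over more spindles.
# iops_per_drive=150 is an illustrative assumption for a 7.2K NL drive.

def pool_iops(pool_tb, drive_tb, iops_per_drive=150):
    drives = -(-pool_tb // drive_tb)          # ceiling division: drive count
    return drives, drives * iops_per_drive

for drive_tb in (4, 8):
    drives, iops = pool_iops(500, drive_tb)
    print(f"{drive_tb}TB drives: {drives} drives, ~{iops} aggregate IOPS")
```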

Where MSDP storage servers are virtual, whether on VMware, in Docker, or in the cloud, do not share physical LUNs between instances. Significant performance issues have been observed in MSDP storage servers deployed in AWS, Azure, VMware, and Docker when physical LUNs are shared between instances.

Customers often believe, mistakenly, that setting a high number of data streams on an MSDP pool increases backup performance. The goal, however, is to set the number of streams that satisfies the workload without creating a bottleneck from too many concurrent streams contending for resources. For example, a single MSDP storage server with a 500TB pool protecting Oracle workloads exclusively at 60K jobs per day was configured with a maximum concurrent stream count of 275; the count was initially set to 200 and then gradually increased to 275.

One way to determine whether the stream count is too low is to measure how long a single job waits in the queue during the busiest times of the day. If many jobs wait in the queue for lengthy periods, the stream count may be too low.
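That check can be sketched as follows. The job records here are hypothetical; in practice the queued and started timestamps would come from NetBackup job data, such as Activity Monitor exports:

```python
# Illustrative "is the stream count too low" check: compute queue waits
# from (queued, started) timestamps during the busiest hour.
# The sample records and the 30-minute threshold are hypothetical.

from datetime import datetime

jobs = [  # (time queued, time started) -- hypothetical sample
    (datetime(2024, 1, 1, 1, 0), datetime(2024, 1, 1, 1, 2)),
    (datetime(2024, 1, 1, 1, 0), datetime(2024, 1, 1, 1, 45)),
    (datetime(2024, 1, 1, 1, 5), datetime(2024, 1, 1, 2, 10)),
]

waits = [(start - queued).total_seconds() / 60 for queued, start in jobs]
long_waits = [w for w in waits if w > 30]    # illustrative 30-minute threshold
print(f"avg wait {sum(waits)/len(waits):.0f} min; "
      f"{len(long_waits)} of {len(jobs)} jobs waited > 30 min")
```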

That said, it is important to gather performance data, such as sar output, from the storage server to see how compute and I/O resources are used. If those resources are already heavily used at the current stream count, and large numbers of jobs still wait in the queue for lengthy periods, then additional MSDP storage servers may be required to meet the customer's windows for backups and secondary operations.

For secondary operations, the goal should be to process the entire SLP backlog within the same 24 hours in which it was queued. For example, if 40K backup images per day must be replicated and duplicated, the goal is to process those images consistently within a 24-hour period to prevent a significant SLP backlog.
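The arithmetic behind that goal: the daily image count divided by the 24-hour window gives the sustained hourly rate at which secondary operations must complete.

```python
# Sustained rate needed to clear the daily SLP backlog within 24 hours,
# per the text's 40K images/day example.

def required_images_per_hour(images_per_day, window_hours=24):
    return images_per_day / window_hours

rate = required_images_per_hour(40_000)
print(f"~{rate:.0f} images/hour must complete secondary operations")
```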

Customers often oversubscribe the Maximum Concurrent Jobs settings on their storage units (STUs), so that the settings add up to a number larger than the Maximum Concurrent Streams on the MSDP pool. This is not a correct way to use STUs. Customers may also incorrectly create multiple STUs that reference the same MSDP storage server with stream counts that individually do not exceed the pool's Maximum Concurrent Streams, but that add up to a higher number when all STUs referencing that storage server are combined. This too is an improper use of STUs.

The Maximum Concurrent Jobs settings of all concurrently active STUs that reference a single MSDP storage server must total less than or equal to the Maximum Concurrent Streams on the MSDP pool. STUs are used to throttle workloads that reference a single storage resource. For example, if Maximum Concurrent Streams for an MSDP pool is set to 200 and two storage units each have Maximum Concurrent Jobs set to 150, the maximum number of jobs that can be processed at any given time is still 200, even though the sum of the two STUs is 300. This type of configuration is not recommended. It is also important to question why more than one STU should reference the same MSDP pool at all: a clean, concise NetBackup configuration is easier to manage and highly recommended, and it is rare that a client needs more than one STU referencing the same MSDP storage server and its associated pool.

Also consider that SLPs need one or more streams to process secondary operations. Duplications and replications cannot always be scheduled in a window when no backups are running. Therefore, the sum of Maximum Concurrent Jobs across all STUs referencing a specific MSDP storage server should be 7-10% less than the Maximum Concurrent Streams on the MSDP pool, to accommodate secondary operations while backup jobs are running. For example, with Maximum Concurrent Streams on the MSDP pool set to 275 and the sum of Maximum Concurrent Jobs on the referencing STUs set to 250, up to 25 streams remain available for other activities, such as restores, replications, and duplications, while backup jobs are running.
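The stream-budget rule above can be sketched as a simple check. The function name and the 9% default (from the 7-10% range in the text) are illustrative:

```python
# Check that the combined STU job limits stay 7-10% below the pool's
# Maximum Concurrent Streams, leaving headroom for secondary operations.
# The 9% default is chosen from the text's 7-10% range for illustration.

def stu_budget_ok(max_concurrent_streams, stu_job_limits,
                  headroom_fraction=0.09):
    """True if combined STU Maximum Concurrent Jobs leave enough headroom."""
    budget = max_concurrent_streams * (1 - headroom_fraction)
    return sum(stu_job_limits) <= budget

# Example from the text: pool streams = 275, STU job limits total 250.
print(stu_budget_ok(275, [150, 100]))    # within budget
print(stu_budget_ok(200, [150, 150]))    # oversubscribed
```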

Pool sizes

Although it is tempting to minimize the number of MSDP storage servers and size pools to the 960TB maximum, there are performance implications worth considering. Heavy mixed workloads sent to a single 960TB MSDP pool have been observed to perform worse than the same workloads split across two 480TB pools, with each workload type directed consistently to one pool. For example, consider two large workload types, VMware and Oracle. Sending both to a single large pool can affect performance, especially because VMware and Oracle are both resource-intensive and generate high job counts. In this scenario, creating one 480TB MSDP pool as the target for VMware workloads and another 480TB MSDP pool for Oracle workloads often delivers better performance.

Some customers incorrectly believe that alternating MSDP pools as the target for the same data is a good idea. It is not: this approach decreases deduplication efficacy. Veritas recommends against sending the same client data, or the same workloads, to two different pools. Doing so negatively affects solution performance and capacity.

The only exceptions are when the target MSDP pool is unavailable due to maintenance and the backup jobs cannot wait until it returns, or when the MSDP pool is short on space and workloads must be juggled temporarily while additional storage resources are added.

Fingerprint media servers

Many customers believe that minimizing the number of MSDP pools while maximizing the number of fingerprint media servers (FPMS) significantly increases performance. In the past, there was some evidence that FPMSs could increase performance by offloading some compute activity from the storage server. While scenarios remain where they might still help, those scenarios are less frequent; in fact, the opposite is often true. Repeated evidence shows that large numbers of FPMSs in front of a small number of storage servers can waste resources, increase complexity, and hurt performance by overwhelming the storage server. More storage servers with MSDP pools in the range of 500TB have consistently performed better than a handful of FPMSs directing workloads to a single MSDP storage server. Therefore, use FPMSs deliberately and conservatively, if they are required at all.

Flexibility

The larger the pool, the larger the MSDP cache, and the longer it takes to run an MSDP check when the need arises. The fewer the pools, the greater the effect that taking a single pool offline for maintenance has on the overall capability of the solution. Therefore, choosing more pools of a smaller size, instead of a minimal number of large pools, can provide flexibility in the solution design and increase performance.

For virtual platforms such as Flex, there is value in creating MSDP pools, and associated storage server instances, that act as the target for a specific workload type. With multiple MSDP pools that do not share physical LUNs, the result is less I/O contention while minimizing the physical footprint.

Headroom

Customers who run their environments very close to full capacity tend to put themselves in a difficult position when a single MSDP pool becomes unavailable for any reason. When designing a solution that defines the size and number of MSDP pools, it is important to minimize single points of failure, whether due to capacity, maintenance, or component failure. Furthermore, where there are many secondary activities such as duplications or replications, additional capacity headroom is important, because certain maintenance activities can lead to a short-term SLP backlog. A guideline of 25% headroom in each MSDP pool is recommended for these purposes, whether to absorb SLP backlog or to juggle workloads temporarily.
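The 25% headroom guideline translates directly into the capacity to plan against: treat 75% of each pool's raw size as its usable target.

```python
# The 25% headroom guideline: plan usable capacity per MSDP pool at
# 75% of its raw size, leaving room for SLP backlog and maintenance.

def usable_capacity_tb(pool_tb, headroom_fraction=0.25):
    return pool_tb * (1 - headroom_fraction)

print(usable_capacity_tb(480))   # 480TB pool -> plan for 360TB of data
```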