NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
Conclusions
When specifying and building systems, understanding the use case is imperative. The following recommendations depend on that use case.
The large number of concurrent streams needed for nightly backups requires a higher number of cores per processor. For an enterprise-level backup environment, 40 to 60 cores per compute node are recommended. More is not necessarily better, but if the user is backing up very large numbers of highly deduplicable files, a high core count is required.
Mid-range stream requirements indicate a 12- to 36-core system, assuming the workload is approximately 20 to 70% of the enterprise environment described above.
Small environments should look at 8- to 18-core systems on single-processor motherboards, which reduce cost while still accommodating today's processor core counts.
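These tiers can be summarized in a small sizing helper. The sketch below is illustrative only: the CORE_GUIDANCE table and recommend_cores function are hypothetical and not part of NetBackup; they simply encode the core-count ranges discussed above.

```python
# Minimal sizing sketch (hypothetical helper, not NetBackup tooling):
# maps a deployment tier to the per-compute-node core-count ranges above.

CORE_GUIDANCE = {
    "enterprise": (40, 60),   # large numbers of concurrent nightly streams
    "mid-range": (12, 36),    # roughly 20 to 70% of the enterprise workload
    "small": (8, 18),         # single-processor boards usually suffice
}

def recommend_cores(tier: str) -> str:
    """Return the recommended core-count range for a deployment tier."""
    low, high = CORE_GUIDANCE[tier]
    return f"{tier}: {low} to {high} cores per compute node"

if __name__ == "__main__":
    for tier in CORE_GUIDANCE:
        print(recommend_cores(tier))
```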
Quality dynamic RAM (DRAM) is extremely important to ensure reliable operation. Because of the number of concurrent backups that users look to run, Error-Correcting Code (ECC) registered (R) DRAM is required for trouble-free operation. Current systems use DDR4 SDRAM (Double Data Rate Synchronous Dynamic Random-Access Memory, fourth generation). With processors that are current as of this writing, users must use DDR4 ECC RDIMMs. The DRAM frequency and generation must align with the processor recommendations, and the DIMMs should come from the same manufacturing lot to ensure smooth operation.
Current RAM requirements in backup solutions are tied to the amount of MSDP data stored on the solution. To ensure proper, performant operation, 1 GB of RAM for every terabyte of MSDP data is recommended. For instance, a system with 96TB of MSDP capacity requires at least 96GB of RAM. DDR4 ECC RDIMMs come in 8, 16, 32, 64, and 128GB capacities. For this example, twelve 8GB DIMMs would suffice but may not be the most cost effective. Production volumes of the different sizes change the cost per GB, and the user may find that a population of six 16GB DIMMs (96GB), or even eight 16GB DIMMs (128GB total), is more cost effective and provides a future path to larger MSDP pools as the need increases.
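The 1 GB of RAM per 1 TB of MSDP guideline and the DIMM-population arithmetic can be checked with a short sketch. The figures below follow the 96TB example; the candidate DIMM populations are assumptions chosen for illustration.

```python
# Sketch of the 1 GB RAM per 1 TB MSDP guideline applied to the 96 TB example.
# The candidate DDR4 ECC RDIMM populations are illustrative assumptions.

msdp_tb = 96                       # MSDP pool capacity in TB
required_ram_gb = msdp_tb * 1      # guideline: 1 GB of RAM per TB of MSDP data

candidates = [(12, 8), (6, 16), (8, 16)]   # (DIMM count, DIMM size in GB)

for count, dimm_gb in candidates:
    total = count * dimm_gb
    verdict = "meets" if total >= required_ram_gb else "falls short of"
    print(f"{count} x {dimm_gb} GB = {total} GB -> {verdict} the "
          f"{required_ram_gb} GB requirement")
```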
When selecting a system or motherboard, it is recommended that a PCIe 4 compliant system be chosen. In addition to the doubling of PCIe lane speed, the number of lanes per processor increases, creating more than a 2X performance enhancement. PCIe 4 Ethernet NICs (up to 200Gb), Fibre Channel HBAs (up to 32Gb), and SAS HBAs and RAID controllers (at 4x10Gb per port), all with up to four ports or port groups, can take advantage of this higher bandwidth. This level of system will be applicable for 7 to 10 years, whereas PCIe 3 level systems will likely disappear in the 2023 time frame. Users can continue to utilize PCIe 3 based components because PCIe 4 is backward compatible. However, PCIe 4 components appear to be in the same price range as PCIe 3, so the user is encouraged to adopt the newer protocol.
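To make the "more than 2X" claim concrete, the back-of-the-envelope sketch below compares per-lane and aggregate PCIe bandwidth. The per-lane rates (8 GT/s for PCIe 3.0, 16 GT/s for PCIe 4.0, both with 128b/130b encoding) are published specification values; the per-socket lane counts are illustrative assumptions, not tied to a specific processor model.

```python
# Back-of-the-envelope PCIe bandwidth comparison. Per-lane transfer rates and
# 128b/130b encoding come from the PCIe 3.0/4.0 specifications; the per-socket
# lane counts are illustrative assumptions.

def lane_bandwidth_gbytes(gen: int) -> float:
    """Approximate usable GB/s per lane after 128b/130b encoding overhead."""
    transfer_rate_gt = {3: 8.0, 4: 16.0}[gen]
    return transfer_rate_gt * (128 / 130) / 8

for gen, lanes in [(3, 48), (4, 64)]:
    per_lane = lane_bandwidth_gbytes(gen)
    print(f"PCIe {gen}.0: {per_lane:.2f} GB/s per lane, "
          f"~{per_lane * lanes:.0f} GB/s across {lanes} lanes")
```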
Disk drives have the potential to reach very large capacities in the future. HAMR and MAMR, as noted earlier, are technologies poised to enable petabyte- to exabyte-scale repositories with drives of up to 50TB. Assuming consumption continues to expand at 30% per year, these sizes will fulfill the needs of backup storage for the foreseeable future.
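The 30% per-year expansion assumption is easy to project. The sketch below compounds an assumed starting consumption of 256TB (matching the BYO example that follows) over a ten-year horizon; both the starting size and the horizon are assumptions for illustration.

```python
# Compound-growth sketch for the 30% per-year consumption expansion noted
# above. The 256 TB starting capacity and ten-year horizon are assumptions.

start_tb = 256       # assumed present-day backup storage consumption in TB
growth_rate = 0.30   # 30% year-over-year expansion

for year in range(0, 11, 2):
    projected_tb = start_tb * (1 + growth_rate) ** year
    print(f"Year {year:2d}: ~{projected_tb:,.0f} TB")
```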
For build-your-own (BYO) systems with present-day 256TB capacity, the best solution is to design storage that brackets the 32 TiB volumes. For instance, when using RAID 6 volumes with a hot spare, as the Veritas NetBackup and Flex appliances do, it is wise to create RAID groups that can contain volumes of that size efficiently. As an example, the NetBackup and Flex 5250 appliances utilize a 12-drive JBOD connected to a RAID controller in the main node. With 8TB drives and a RAID 6 group of 11 drives plus 1 hot spare, the resultant capacity is 72 TB (65.5 TiB). Two 32 TiB volumes fit well into the JBOD and can easily be stacked to arrive at the maximum capacity.
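The JBOD arithmetic in the 5250 example works out as shown in the sketch below, which simply reproduces the RAID 6 capacity calculation and checks how many 32 TiB volumes fit.

```python
# RAID 6 capacity arithmetic for the 12-drive JBOD example: 11 drives in the
# RAID 6 group (1 hot spare), two drives' worth of capacity used for parity.

drive_tb = 8          # raw drive size in decimal TB
raid_drives = 11      # drives in the RAID 6 group (12-drive JBOD minus 1 spare)
parity_drives = 2     # RAID 6 double parity

usable_tb = (raid_drives - parity_drives) * drive_tb
usable_tib = usable_tb * 1e12 / 2**40

print(f"Usable capacity: {usable_tb} TB ({usable_tib:.1f} TiB)")
print(f"32 TiB volumes that fit: {int(usable_tib // 32)}")
```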
SSDs introduce a new variable into the solution: they act like disk drives but are not mechanical devices. They offer lower power, high capacity, smaller size, significantly better access times than disk, and higher field reliability. The one downside compared to disks is cost. For certain implementations, though, they are the best solution. Customers who require speed are finding that SSDs used for tape out of deduplicated data are 2.7 times faster than disk storage. If concurrent operations are required, such as backup followed by immediate replication to an off-site location, the access time of SSDs used as the initial target makes this possible within the necessary time window. Another use case is to use the SSDs as an AdvancedDisk pool and then, when the user feels the time is appropriate, deduplicate the data to a disk pool for medium- or long-term retention.
As noted earlier, NVMe should be the choice for the best performance. Expectations are that the Whitley version of the Intel reference design, due for release in 2021, will be the best Intel platform, as it will feature PCIe 4. With the incremental doubling of speed, only two lanes would be necessary per SSD, allowing for an architecture that can handle a large number of SSDs (24 in a 2U chassis) and still accommodate the requisite Ethernet NICs and Fibre Channel HBAs to connect to clients.
As the predominant transport for backup, Ethernet NICs are of critical importance. Fortunately, there are a number of quality manufacturers of these NICs. For the time being, greater than 90% of the ports used will be 10GBASE-T, 10Gb optical or direct-attached copper (DAC), and 25Gb optical/DAC. Broadcom and Marvell have NICs that support all three configurations. Intel and NVIDIA have 25/10Gb optical/DAC NICs as well as 10GBASE-T equipped NICs. Any of these can be used to accommodate the user's particular needs. Forecasts show that 50 and 100Gb Ethernet, and to a lesser extent 200 and 400Gb, will grow quickly as the technology advances.
Fibre Channel (FC) will continue to exist for the foreseeable future, but much of its differentiation from other transports is lessening as NVMe over Fabrics becomes more prevalent. FC is one of the available transports, but it appears that Ethernet will have the speed advantage and will likely win out as the favored transport. For customers with FC SANs, Marvell and Broadcom are the two choices for host bus adapters, both as initiators and targets. Both are very good initiators, and the choice is up to the user, as many sites have settled on a single vendor.