NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring Performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
Monitoring Linux/UNIX disk load
You can use the iostat utility to check device I/O performance, such as average wait time and percentage of disk utilization.
Try the following:
iostat -ktxN 5
where 5 specifies a five-second refresh rate.
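You can also restrict the report to specific devices and limit the number of samples. The following is an illustrative sketch; the device names sda and sdb and the sample count of 12 are placeholders, and argument order may vary slightly between sysstat versions (check the iostat man page on your system):

    # Extended statistics for sda and sdb only: 5-second interval, 12 samples
    iostat -ktxN sda sdb 5 12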
Sample output from a Red Hat 7 Linux system:
    iostat -ktxN 5
    Time: 07:39:14 AM
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               5.02    0.00    8.84    0.84    0.00   85.30

    Device:    rrqm/s  wrqm/s    r/s    w/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
    sda          0.00    0.40   0.00   3.80   0.00   48.80    25.68     0.11  29.89   3.37   1.28
    sdb          0.00    7.60   0.80   9.40   3.20   69.60    14.27     0.52  50.90   3.53   3.60
    sys-r        0.00    0.00   0.00   2.00   0.00    8.00     8.00     0.02  10.80   2.40   0.48
    sys-swap     0.00    0.00   0.00   0.00   0.00    0.00     0.00     0.00   0.00   0.00   0.00
    sys-usr      0.00    0.00   0.00   0.00   0.00    0.00     0.00     0.00   0.00   0.00   0.00
    sys-h        0.00    0.00   0.80  27.20   3.20  108.80     8.00     1.33  47.63   1.11   3.12
    sdc          0.00    2.60   0.00   3.60   0.00   25.60    14.22     0.01   3.56   0.22   0.08
    sdd          0.00    0.00   0.00   0.00   0.00    0.00     0.00     0.00   0.00   0.00   0.00
Note:
This example is from a Red Hat 7 Linux system. Other operating systems may use different options; refer to the iostat man page for your operating system for details.
Helpful report values are the following:
await: The average time (in milliseconds) for device I/O requests to complete, reported for both virtual devices and physical disks. This includes the time that requests spend waiting in the disk queue plus the time spent servicing them. In general, lower await values indicate better throughput.
%util: The percentage of elapsed time during which I/O requests were issued to the device. As the value approaches 100%, the device approaches saturation. A lower percentage is better.
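To scan a sample for devices that look overloaded, you can filter the extended output with awk. The following is a rough sketch that assumes the Red Hat 7 column layout shown above (await in column 10, %util in column 12); column positions differ across sysstat versions, so verify them against your own header line. The 50 ms and 80% thresholds are arbitrary examples, and note that a single report reflects averages since boot rather than current load:

    # Print devices whose await exceeds 50 ms or whose %util exceeds 80%
    iostat -ktxN 1 1 | awk 'NR > 6 && NF && ($10+0 > 50 || $12+0 > 80) { print $1, "await=" $10, "%util=" $12 }'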
If NetBackup runs on Red Hat 8 Linux and Veritas InfoScale is used for MSDP storage management, the VxVM device names may no longer appear in the iostat output. You can check the VxVM I/O mode with the command vxtune vol_use_rq. If the current value of vol_use_rq is '0', BIO mode is enabled; otherwise, Request mode is enabled. The mode change is needed to bypass a known Red Hat 8 Linux bug. To analyze I/O statistics for VxVM devices when BIO mode is enabled, use vxstat instead of iostat, as sketched below.
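As a minimal sketch, the mode check and the vxstat alternative might look like the following; the disk group name msdpdg is a placeholder for illustration, and the exact vxtune output format varies by InfoScale version:

    # Display the current value of the vol_use_rq tunable (0 means BIO mode)
    vxtune vol_use_rq

    # Report VxVM volume I/O statistics every 5 seconds for the MSDP disk group
    # (replace msdpdg with the name of your disk group)
    vxstat -g msdpdg -i 5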