Veritas NetBackup™ Cloud Administrator's Guide
- About NetBackup cloud storage
- About the cloud storage
- About the Amazon S3 cloud storage API type
- About protecting data in Amazon for long-term retention
- Protecting data using Amazon's cloud tiering
- About using Amazon IAM roles with NetBackup
- Protecting data with Amazon Snowball and Amazon Snowball Edge
- About Microsoft Azure cloud storage API type
- About OpenStack Swift cloud storage API type
- Configuring cloud storage in NetBackup
- Scalable Storage properties
- Cloud Storage properties
- About the NetBackup CloudStore Service Container
- About the NetBackup media servers for cloud storage
- Configuring a storage server for cloud storage
- NetBackup cloud storage server properties
- Configuring a storage unit for cloud storage
- Changing cloud storage disk pool properties
- Monitoring and Reporting
- Operational notes
- Troubleshooting
- About unified logging
- About legacy logging
- Troubleshooting cloud storage configuration issues
- Troubleshooting cloud storage operational issues
About object size for cloud storage
During backup, NetBackup divides the backup image data into chunks called objects. A PUT request is made for each object to move it to the cloud storage.
By setting a custom Object Size, you can control the number of PUT and GET requests that are sent to and from the cloud storage. A reduced number of PUT and GET requests helps lower the total charges that are incurred for the requests.
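As a rough illustration (this is not NetBackup code, and the helper name put_requests is hypothetical), the following sketch shows how the object size drives the PUT request count for a single backup image:

```python
import math

def put_requests(image_size_mb: int, object_size_mb: int) -> int:
    """Number of PUT requests needed to upload one backup image."""
    return math.ceil(image_size_mb / object_size_mb)

# Example: a 1 TB backup image (1,048,576 MB).
for object_size_mb in (4, 16, 64):
    print(f"{object_size_mb} MB objects -> "
          f"{put_requests(1_048_576, object_size_mb):,} PUT requests")
```

Quadrupling the object size cuts the request count to a quarter, which is the basis of the cost discussion later in this section.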
During the creation of a cloud storage server, you can specify a custom value for the Object Size. Consider the cloud storage provider, hardware, infrastructure, expected performance, and other factors when deciding the value. After you set the Object Size for a cloud storage server, you cannot change the value. If you want to set a different Object Size, you must re-create the cloud storage server.
The combination of the object size, the number of parallel connections, and the read or write buffer size contributes to the performance of NetBackup in the cloud.
To enhance the performance of backup and restore operations, NetBackup uses multiple parallel connections to the cloud storage. The performance of NetBackup depends on the number of parallel connections, which is derived from the read or write buffer size and the object size:
Read or Write buffer size (user set) ÷ Object Size (user set) = Number of parallel connections (derived)
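The following is a minimal sketch (not NetBackup code; the function name parallel_connections is hypothetical) of this derivation. It plugs in the default values from the table at the end of this section:

```python
def parallel_connections(buffer_size_mb: int, object_size_mb: int) -> int:
    """Derive the number of parallel connections from the formula above."""
    return buffer_size_mb // object_size_mb

print(parallel_connections(400, 16))  # 25  (Amazon S3 defaults)
print(parallel_connections(400, 4))   # 100 (Azure defaults)
```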
Consider the following factors when deciding the number of parallel connections:
- The maximum number of parallel connections that the cloud storage provider permits.
- Network bandwidth availability between NetBackup and the cloud storage environment.
- System memory availability on the NetBackup host.
If you increase the object size, the number of parallel connections is reduced. The number of parallel connections affects the upload and the download rates.
If you increase the read or write buffer size, the number of parallel connections increases. Similarly, if you want fewer parallel connections, you can reduce the read or write buffer size. However, you must consider the available network bandwidth and system memory.
Cloud providers charge for the number of PUT and GET requests that are initiated during a backup or restore process. The smaller the object size, the higher the number of PUT or GET requests, and therefore, the higher the charges that are incurred.
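To make the trade-off concrete, the following sketch estimates the PUT charges for one image. The per-request price is a hypothetical placeholder, not an actual rate; check your provider's current pricing:

```python
import math

PRICE_PER_1000_PUTS = 0.005  # hypothetical placeholder price, in USD

def put_cost_usd(image_size_mb: int, object_size_mb: int) -> float:
    """Approximate PUT request charges for uploading one backup image."""
    requests = math.ceil(image_size_mb / object_size_mb)
    return requests / 1000 * PRICE_PER_1000_PUTS

# A 10 TB backup image: smaller objects mean more requests, higher charges.
for object_size_mb in (4, 16, 64):
    print(f"{object_size_mb} MB objects: "
          f"${put_cost_usd(10 * 1_048_576, object_size_mb):.2f}")
```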
In case of temporary data transfer failures, NetBackup retries the transfer of the failed objects multiple times. If the failures persist, the complete object is transferred again. Also, with higher latency and higher packet loss, performance might degrade. Increasing the number of parallel connections can help to handle latency and packet loss issues.
NetBackup has some time-outs on the client side. If the upload operation takes more time (due to a large object size) than the minimum derived NetBackup data transfer rate allows, NetBackup operations can fail.
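The following sketch illustrates this time-out consideration; the effective transfer rate and object sizes are illustrative assumptions, not NetBackup settings. The longer a single object takes to upload, the more likely a client-side time-out becomes:

```python
def seconds_per_object(object_size_mb: float, effective_mb_per_s: float) -> float:
    """Time to upload one object at a given effective transfer rate."""
    return object_size_mb / effective_mb_per_s

# At an assumed effective rate of 0.5 MB/s, each PUT takes:
for object_size_mb in (16, 64, 256):
    print(f"{object_size_mb} MB object: "
          f"{seconds_per_object(object_size_mb, 0.5):.0f} s per PUT")
```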
For legacy environments without deduplication support, a smaller number of connections means fewer parallel downloads than were possible with the earlier number of connections.
For example, when restoring from back-level images (8.0 and earlier), where the object size is 1 MB, the 16 MB buffer for one connection is not used completely, while it still consumes memory. With the increased object size, the number of connections is restricted by the memory that is available for the read or write buffer.
The default settings are as follows:
Table: Current default settings
Cloud storage provider | Object size | Default read or write buffer size
---|---|---
Amazon S3/Amazon GovCloud | 16 MB (fixed) | 400 MB (configurable between 16 MB and 1 GB)
Azure | 4 MB (fixed) | 400 MB (configurable between 4 MB and 1 GB)