Veritas NetBackup™ Administrator's Guide, Volume II
- NetBackup licensing models and usage reporting
- How capacity licensing works
- Creating and viewing the licensing report
- Reviewing a capacity licensing report
- Reconciling the capacity licensing report results
- Reviewing a traditional licensing report
- Reviewing an NEVC licensing report
- Additional configuration
- About dynamic host name and IP addressing
- About busy file processing on UNIX clients
- About the Shared Storage Option
- About configuring the Shared Storage Option in NetBackup
- Viewing SSO summary reports
- About the vm.conf configuration file
- Holds Management
- Menu user interfaces on UNIX
- About the tpconfig device configuration utility
- About the NetBackup Disk Configuration Utility
- Reference topics
- Host name rules
- About reading backup images with nbtar or tar32.exe
- Factors that affect backup time
- NetBackup notify scripts
- Media and device management best practices
- About TapeAlert
- About tape drive cleaning
- How NetBackup reserves drives
- About SCSI persistent reserve
- About the SPC-2 SCSI reserve process
- About checking for data loss
- About checking for tape and driver configuration errors
- How NetBackup selects media
- About Tape I/O commands on UNIX
BigData plug-ins for NetBackup
Capacity licensing for BigData plug-ins is specific to the BigData policy type for the following plug-ins:
Hadoop plug-in for NetBackup
Nutanix plug-in for NetBackup
After the nbdeployutil utility runs with the capacity licensing option, the report displays the policy type on the Itemization sheet as follows:
BigData:hadoop
BigData:Nutanix-AHV
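For reference, a typical sequence for gathering data and generating the capacity report is shown below. The exact options and the name of the gathered-data directory vary by NetBackup release and environment, so verify them with nbdeployutil -help on your primary server before running the commands:
nbdeployutil --gather
nbdeployutil --report --capacity <gathered_data_directory>
The Itemization sheet of the resulting report then shows the BigData:hadoop and BigData:Nutanix-AHV policy types that are listed above.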
With the BigData policy, Hadoop (HDFS) data is backed up for the directories or backup selections that are defined in the policy. The protected data for the policy is defined as the size of the directories that have the allow snapshot option enabled on the HDFS file system. To view the size of a defined directory, you can browse the file system using the Hadoop web console.
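Because only the directories with the allow snapshot option enabled count toward the protected data, it can help to confirm which directories are snapshottable before reviewing the report. The following are standard Apache Hadoop commands rather than NetBackup commands, and /<name_of_the_directory> is a placeholder for a directory in the backup selection; enabling snapshots typically requires HDFS superuser privileges:
hdfs lsSnapshottableDir
hdfs dfsadmin -allowSnapshot /<name_of_the_directory>
The first command lists the snapshottable directories that are visible to the current user; the second enables snapshots on a directory.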
Administrators can also run the following HDFS command, which is available by default, to verify the size that capacity licensing reports:
hdfs dfs -ls -R -h /<name_of_the_directory>
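If a single summarized total is easier to compare against the report than a recursive listing, the standard du subcommand of HDFS can be used instead; again, the directory name is only a placeholder:
hdfs dfs -du -s -h /<name_of_the_directory>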
The front-end data size that is reported for the Nutanix Acropolis Hypervisor is the consumed storage size. You can verify the consumed storage size on the Nutanix AHV console by navigating to the Storage column. The Storage column displays the consumed storage size as compared to the total allocated storage size.
The Nutanix-AHV VM can be backed up using the BigData policy and the Hypervisor policy (all drives included). If the same VM is backed up using both the BigData and Hypervisor policies, you are charged once for the VM backup, under the Hypervisor policy.