Veritas NetBackup for Hadoop Administrator's Guide
- Introduction
- Installing and deploying Hadoop plug-in for NetBackup
- Configuring NetBackup for Hadoop
- Managing backup hosts
- Configuring the Hadoop plug-in using the Hadoop configuration file
- Configuring NetBackup policies for Hadoop plug-in
- Performing backups and restores of Hadoop
- Troubleshooting
- Troubleshooting backup issues for Hadoop data
- Troubleshooting restore issues for Hadoop data
Backing up Hadoop data
Hadoop data is backed up in parallel streams: the Hadoop DataNodes stream data blocks simultaneously to multiple backup hosts.
Note:
All the directories specified in the Hadoop backup selection must be snapshot-enabled before the backup.
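For reference, a directory is made snapshottable with the HDFS superuser command hdfs dfsadmin -allowSnapshot <path>. The same operation is available through the public HDFS Java API. The following is a minimal sketch, assuming a hypothetical NameNode URI (hdfs://namenode.example.com:8020) and backup selection path (/data/sales):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class EnableSnapshots {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumption: NameNode URI of the cluster to be backed up.
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
            try (FileSystem fs = FileSystem.get(conf)) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                // Requires HDFS superuser privileges, like 'hdfs dfsadmin -allowSnapshot'.
                dfs.allowSnapshot(new Path("/data/sales"));
            }
        }
    }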
The following steps provide an overview of the backup flow:
A scheduled backup job is triggered from the master server.
The backup job for Hadoop data is a compound job. When the backup job is triggered, a discovery job is run first.
During discovery, the first backup host connects to the NameNode to get details of the data that needs to be backed up.
A workload discovery file is created on the backup host. It contains the details of the data that needs to be backed up from the different DataNodes.
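NetBackup's workload discovery file format is internal to the plug-in. Purely as an illustration, the following sketch gathers the same kind of information through the public HDFS API: it freezes a point-in-time view of the data with a snapshot, then records each file's blocks and the DataNodes that hold them. The snapshot name (backup-snap) and output file name (workload-discovery.txt) are hypothetical:

    import java.io.PrintWriter;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class DiscoverWorkload {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // assumption
            try (FileSystem fs = FileSystem.get(conf);
                 PrintWriter out = new PrintWriter("workload-discovery.txt")) {
                // Freeze a point-in-time view; the backup reads from the snapshot.
                Path snap = fs.createSnapshot(new Path("/data/sales"), "backup-snap");
                // Walk the snapshot recursively; record each block's offset,
                // length, and the DataNodes that hold a replica of it.
                RemoteIterator<LocatedFileStatus> it = fs.listFiles(snap, true);
                while (it.hasNext()) {
                    LocatedFileStatus file = it.next();
                    for (BlockLocation block : file.getBlockLocations()) {
                        out.printf("%s %d %d %s%n", file.getPath(),
                                block.getOffset(), block.getLength(),
                                String.join(",", block.getHosts()));
                    }
                }
            }
        }
    }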
The backup host uses the workload discovery file to decide how the workload is distributed among the backup hosts. A workload distribution file is created for each backup host.
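How NetBackup splits the workload is internal to the plug-in. One plausible strategy, shown below only as a sketch, is a greedy distribution that always assigns the next-largest item to the least-loaded backup host, so that all hosts finish at roughly the same time. The Item record and host names are hypothetical:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class DistributeWorkload {
        // Hypothetical record of one backup item: a path and its size in bytes.
        record Item(String path, long bytes) {}

        static Map<String, List<Item>> distribute(List<Item> items, List<String> hosts) {
            Map<String, List<Item>> plan = new HashMap<>();
            Map<String, Long> load = new HashMap<>();
            for (String h : hosts) { plan.put(h, new ArrayList<>()); load.put(h, 0L); }
            // Largest items first, each to the host with the least data so far.
            items.sort(Comparator.comparingLong(Item::bytes).reversed());
            for (Item i : items) {
                String h = Collections.min(load.entrySet(),
                        Map.Entry.comparingByValue()).getKey();
                plan.get(h).add(i);
                load.merge(h, i.bytes(), Long::sum);
            }
            return plan;
        }

        public static void main(String[] args) {
            List<Item> items = new ArrayList<>(List.of(
                    new Item("/data/sales/part-0000", 900L),
                    new Item("/data/sales/part-0001", 400L),
                    new Item("/data/sales/part-0002", 300L)));
            System.out.println(distribute(items, List.of("backuphost1", "backuphost2")));
        }
    }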
An individual child job is run for each backup host, and data is backed up as specified in that host's workload distribution file.
Data blocks are streamed simultaneously from different DataNodes to multiple backup hosts.
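The transport between the backup hosts and storage is NetBackup-internal. The sketch below only illustrates the read side of this parallelism: one thread per assigned file, reading from the hypothetical snapshot created earlier, with a placeholder sendToStorage() standing in for the backup transport:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ParallelStreams {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // assumption
            FileSystem fs = FileSystem.get(conf);
            // Hypothetical assignment from this host's workload distribution file.
            List<Path> assigned = List.of(
                    new Path("/data/sales/.snapshot/backup-snap/part-0000"),
                    new Path("/data/sales/.snapshot/backup-snap/part-0001"));
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (Path p : assigned) {
                pool.submit(() -> {
                    byte[] buf = new byte[4 * 1024 * 1024];
                    try (FSDataInputStream in = fs.open(p)) {
                        int n;
                        while ((n = in.read(buf)) > 0) {
                            sendToStorage(p, buf, n); // placeholder for the transport
                        }
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            fs.close();
        }

        static void sendToStorage(Path p, byte[] buf, int len) { /* hypothetical */ }
    }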
The compound backup job is not completed until all the child jobs are completed. After the child jobs complete, NetBackup cleans up all the snapshots from the NameNode. The compound backup job is completed only after this cleanup activity finishes.
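Snapshot cleanup maps onto the standard HDFS operation hdfs dfs -deleteSnapshot <dir> <name>. A minimal Java equivalent, continuing the hypothetical snapshot name from the earlier sketches:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CleanupSnapshots {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // assumption
            try (FileSystem fs = FileSystem.get(conf)) {
                // Drop the per-backup snapshot once every child job has finished;
                // equivalent to 'hdfs dfs -deleteSnapshot /data/sales backup-snap'.
                fs.deleteSnapshot(new Path("/data/sales"), "backup-snap");
            }
        }
    }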