Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Configuring the cloud gateway
- Configuring cloud as a tier
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Deduplicating data
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
Manually running deduplication
To create a deduplication dryrun
- To create a deduplication dryrun, enter the following command:
Storage> dedup dryrun fs_name [threshold]
The Storage> dedup dryrun command is useful for estimating the statistics and potential space savings on the file system data if an actual deduplication were performed. The most accurate statistics are obtained when the file system block size and the deduplication block size are the same.
Note:
You cannot perform a dryrun on a file system that has already been deduplicated.
fs_name
Specify the name of the file system for which you want to create a dryrun.
threshold
Specify the threshold percentage, in the range 0-100.
A dryrun is automatically converted to an actual deduplication if the dryrun meets the threshold value. For example, if you specify a threshold value of 40%, and deduplication would result in a space savings of 40% or more, then the dryrun is automatically converted to an actual deduplication.
To check whether the deduplication dryrun reaches a threshold value of 60%, enter the following:
Storage> dedup dryrun fs1 60
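The threshold argument is optional. To report the deduplication statistics without setting a conversion threshold, you can omit it. For example (the file system name fs1 is used here only for illustration):
Storage> dedup dryrun fs1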
To start the deduplication process
- To manually start the deduplication process, enter the following:
Storage> dedup start fs_name [nodename]
where fs_name is the name of the file system on which you want to start the deduplication process, and nodename is the node in the cluster on which you want to start it. You can run deduplication on any node in the cluster.
Note:
If the node where you started deduplication crashes, the deduplication job fails over to one of the other nodes in the cluster. Run the Storage> dedup status fs_name command to find out the status. The dedup status command can temporarily show the status as "FAILOVER", which means that the deduplication job is currently being failed over and resumes shortly. Deduplication failover applies only to jobs started with the dedup start command; it does not apply to scheduled deduplication jobs.
When the deduplication process is started for the first time, a full scan of the file system is performed. Any subsequent attempt to run deduplication requires an incremental scan only.
For example:
Storage> dedup start fs1 node_01
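After the job starts, you can check its progress with the dedup status command described in the note above. For example, for the file system used in the previous example (the output depends on your configuration):
Storage> dedup status fs1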
Note:
If you run the Storage> fs offline or Storage> fs destroy command while deduplication is running on a file system, the operation can proceed only after deduplication is stopped by using the Storage> dedup stop command.
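For example, a possible sequence for taking a file system offline while deduplication is still running on it (fs1 is illustrative) is to stop the deduplication job first:
Storage> dedup stop fs1
Storage> fs offline fs1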
To stop the deduplication process
- To stop the deduplication process running on a file system, enter the following command:
Storage> dedup stop fs_name
where fs_name is the name of the file system where you want to stop the deduplication process.
Note:
The deduplication process may not stop immediately because a consistent state is ensured while stopping. Use the Storage> dedup status command to verify whether the deduplication process has stopped.
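For example, after issuing Storage> dedup stop fs1, you can confirm that the job has stopped by entering:
Storage> dedup status fs1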