Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
Moving files between tiers in a scale-out file system
By default, a scale-out file system has a single tier (also known as the primary tier), which is the on-premises storage for the file system. You can add a cloud service as an additional tier. After a cloud tier is configured, you can move data between the tiers of the scale-out file system as needed. A scale-out file system can have up to eight cloud tiers. For example, you can configure Azure and AWS Glacier as two tiers and move data between these clouds.
Use the commands in this section to move data as a one-time operation; for example, to move some older data to a cloud tier that you have just set up.
If you want to specify repeatable rules for maintaining data on the tiers, you can set up a policy for the file system.
See Creating and scheduling a policy for a scale-out file system.
You can specify the following criteria to indicate which files or directories to move between tiers:
file or directory name pattern to match
last accessed time (atime)
last modified time (mtime)
Because a scale-out file system can be large, and the amount of data to be moved can also be large, the Storage> tier move command lets you perform a dry run first.
See the storage_tier(1) man page.
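As an illustration of how these selection criteria combine, the following Python sketch selects files by a glob-style name pattern and an access-time age threshold. The function name and arguments are hypothetical helpers for reasoning about the behavior; the actual selection logic is internal to Veritas Access.

```python
import fnmatch
import time

def file_matches(path, atime, pattern, atime_older_than_days=None):
    """Return True if a file would be selected for a tier move.

    path:    file path, matched against the glob-style pattern
    atime:   last-access time of the file (epoch seconds)
    atime_older_than_days: if set, select only files NOT accessed
             within this many days (the 'atime >Nd' case)
    """
    if not fnmatch.fnmatch(path, pattern):
        return False
    if atime_older_than_days is not None:
        age_days = (time.time() - atime) / 86400
        if age_days <= atime_older_than_days:
            return False
    return True
```

A file is moved only if it satisfies both the name pattern and every time condition given; omitting a condition means it does not restrict the selection.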
To move data between storage tiers in a scale-out file system
- (Optional) Perform a dry run to see which files would be moved and some statistics about the move.
Storage> tier move dryrun fs_name source_tier destination_tier pattern [atime condition] [mtime condition]
The dry run starts in the background. The command output shows the job ID.
- Move the files that match pattern from source_tier to destination_tier based on the last accessed time (atime) or the last modified time (mtime).
Storage> tier move start fs_name source_tier destination_tier pattern [atime condition] [mtime condition]
pattern is required. To include all the files, specify * for pattern.
The condition for atime or mtime consists of an operator, a value, and a unit. Possible operators are <, <=, >, and >=. Possible units are m, h, and d, indicating minutes, hours, and days.
The name of the default tier is primary. The name of the cloud tier is specified when you add the tier to the file system.
The move job starts in the background. The command output shows the job ID.
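The condition syntax described above (operator, value, unit) can be validated before you build a command line. The sketch below is a hypothetical scripting helper, not part of the product; it accepts exactly the operators and units the command documents.

```python
import re

# Units accepted by the tier move commands, in seconds.
_UNITS = {"m": 60, "h": 3600, "d": 86400}
_COND = re.compile(r"^(<=|>=|<|>)(\d+)([mhd])$")

def parse_condition(cond):
    """Parse a condition such as '>100d' into (operator, seconds)."""
    match = _COND.match(cond)
    if not match:
        raise ValueError(f"bad condition: {cond!r}")
    op, value, unit = match.groups()
    return op, int(value) * _UNITS[unit]
```

For example, `parse_condition(">100d")` returns the `>` operator with 100 days expressed in seconds, which mirrors how `atime >100d` selects files whose last access is more than 100 days old.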
Examples:
Move the files that match pattern and that have not been accessed within the past 100 days to the cloud tier.
Storage> tier move start lfs1 primary cloudtier1 pattern atime >100d
Move the files that match pattern and that have been accessed within the past 30 days to the primary tier.
Storage> tier move start lfs1 cloud_tier primary pattern atime <=30d
Move the files that match pattern and that have not been modified within the past 100 days to the cloud tier.
Storage> tier move start lfs1 primary cloud_tier pattern mtime >=100d
Move only the files that match pattern and that have not been modified within the last three days from the cloud tier to the primary tier.
Storage> tier move start lfs2 cloud_tier primary pattern mtime >=3d
Move all files to the primary tier.
Storage> tier move start lfs2 cloud_tier primary *
- View the move jobs that are running in the background. This command lists the job IDs and the status of each job.
Storage> tier move list
Job        Fs name  Source Tier Destination Tier Pattern        Atime Mtime State
========== ======== =========== ================ ============== ===== ===== ===========
1473684478 largefs1 cloudtier   primary          /vx/largefs1/* >120s -     not running
1473684602 largefs1 cloudtier   primary          /vx/largefs1/* -     -     scanning
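If you wrap the CLI in scripts, the list output can be turned into records. The sketch below is a hypothetical parser: it assumes the header and separator occupy the first two lines, and that State, which may contain spaces (as in `not running`), is the last column.

```python
def parse_move_list(output):
    """Parse 'Storage> tier move list' output into a list of dicts."""
    jobs = []
    lines = [l for l in output.splitlines() if l.strip()]
    for line in lines[2:]:  # skip the header and the ===== separator
        # State is last and may contain spaces, so cap the split at 7.
        job, fs, src, dst, pattern, atime, mtime, state = line.split(None, 7)
        jobs.append({
            "job": job, "fs": fs, "source": src, "destination": dst,
            "pattern": pattern, "atime": atime, "mtime": mtime,
            "state": state,
        })
    return jobs
```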
- View the detailed status of the data movement for the specified job ID.
Storage> tier move status jobid
For example:
Storage> tier move status 1473682883
Job run type:        normal
Job Status:          running
Total Data (Files):  4.0 G (100)
Moved Data (Files):  100.0 M (10)
Last file visited:   /vx/fstfs/10.txt
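Because the status output is a set of `key: value` lines, a script polling a job can parse it with a short hypothetical helper such as:

```python
def parse_move_status(output):
    """Parse 'Storage> tier move status <jobid>' output into a dict."""
    status = {}
    for line in output.splitlines():
        if ":" in line:
            # Split on the first colon only; values such as file
            # paths may themselves contain further text.
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status
```

A wrapper could call this in a loop and stop when `Job Status` is no longer `running`.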
- If required, you can abort a move job.
Storage> tier move abort jobid