Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
About the maximum number of parallel episodic replication jobs
The maximum number of episodic replication jobs is 64, but the number of jobs that can run in parallel at the same time is limited more strictly. Episodic replication stores its transit messages in a RAM-based file system, and each GB of this file system can accommodate up to eight parallel running jobs. The default size of this file system depends on the amount of physical memory of the node on which episodic replication is running:

- Less than 5 GB of physical memory: episodic replication limits its maximum usage for storing messages to 1 GB of memory, so you can run up to 8 episodic replication jobs in parallel at the same time.
- Between 5 GB and 10 GB of physical memory: episodic replication limits its maximum usage for storing messages to 2 GB of memory, so you can run up to 16 episodic replication jobs in parallel.
- More than 10 GB of physical memory: episodic replication limits its maximum usage for storing messages to 4 GB of memory, so you can run up to 32 episodic replication jobs in parallel at the same time.
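To make the arithmetic concrete, the sizing rules above can be restated as a minimal Python sketch. The function name, parameter, and thresholds below simply encode the documented limits for illustration; they are not part of any actual Veritas Access interface.

```python
def max_parallel_episodic_jobs(physical_memory_gb: float) -> int:
    """Illustrative only: restates the documented sizing rules.

    Each GB of the RAM-based transit file system accommodates up to
    eight parallel episodic replication jobs; the file system size is
    capped based on the node's physical memory.
    """
    JOBS_PER_GB = 8   # parallel jobs per GB of the transit file system
    HARD_LIMIT = 64   # overall maximum number of episodic replication jobs

    if physical_memory_gb < 5:
        transit_fs_gb = 1   # less than 5 GB RAM  -> 1 GB message store
    elif physical_memory_gb <= 10:
        transit_fs_gb = 2   # 5 GB to 10 GB RAM   -> 2 GB message store
    else:
        transit_fs_gb = 4   # more than 10 GB RAM -> 4 GB message store

    return min(transit_fs_gb * JOBS_PER_GB, HARD_LIMIT)


# Example: a node with 8 GB of physical memory gets a 2 GB message
# store, so it can run up to 16 episodic replication jobs in parallel.
print(max_parallel_episodic_jobs(8))   # -> 16
```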