Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
Recommended tuning for NFS-Ganesha version 3 and version 4
Veritas Access supports both the NFS kernel-based server and the NFS-Ganesha server in a mutually exclusive way. The NFS kernel-based server supports NFS version 3 and version 4. The NFS-Ganesha server also supports both NFS version 3 and NFS version 4.
See Using the NFS-Ganesha server.
The NFS-Ganesha server does not run in the kernel; instead, it runs in user space on the NFS server. This means that the NFS-Ganesha server processes can be affected by system resource limitations, just like any other user space process. You should modify certain operating system tuning values on the NFS server to ensure that NFS-Ganesha server performance is not unduly affected. You use the NFS client mount option vers to determine whether NFS version 3 or NFS version 4 is used. On the NFS client, you can specify either the vers=3 or the vers=4 mount option. The NFS client is unaware of whether the NFS server is running kernel-based NFS or NFS-Ganesha.
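For example, the protocol version can be selected explicitly on the NFS client at mount time. In the sketch below, the server name accessnode and the export and mount paths are placeholders, not values from this guide:

```shell
# Mount an export using NFS version 3 (server and paths are hypothetical):
mount -t nfs -o vers=3 accessnode:/vx/fs1 /mnt/fs1

# Mount the same export using NFS version 4:
mount -t nfs -o vers=4 accessnode:/vx/fs1 /mnt/fs1
```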
When you start a system, kswapd_init() starts a kernel thread called kswapd, which continuously executes the function kswapd() in mm/vmscan.c and usually sleeps. The kswapd daemon is responsible for reclaiming pages when memory is running low. kswapd performs most of the tasks that are needed to maintain the page cache correctly, shrink slab caches, and swap out processes if necessary. kswapd keeps freeing pages until the pages_high watermark is reached. Under extreme memory pressure, processes do the work of kswapd synchronously by calling balance_classzone(), which calls try_to_free_pages_zone().
When there is memory pressure, pages are reclaimed using two different methods:

- pgscank/s - The kswapd kernel daemon periodically wakes up and reclaims (frees) memory in the background when free memory is low. pgscank/s records this activity.
- pgscand/s - When kswapd fails to free up enough memory, memory is also reclaimed directly in the process context, which blocks user program execution. pgscand/s records this activity.

The total number of pages being reclaimed (also known as page stealing) is therefore a combination of both. pgsteal/s records the total activity, so pgsteal/s = pgscank/s + pgscand/s.
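You can observe these counters with the sar -B command from the sysstat package. As a sketch, the awk one-liner below sums pgscank/s and pgscand/s from a sar -B data line and compares the result with pgsteal/s. The sample line is fabricated, and the field positions assume the standard sysstat column order for sar -B output:

```shell
# Fabricated sar -B sample line; columns: time pgpgin/s pgpgout/s fault/s
# majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
echo "12:00:01 0.00 25.00 300.00 0.00 500.00 120.00 30.00 150.00 100.00" |
awk '{ printf "pgscank+pgscand = %.2f, pgsteal = %.2f\n", $7 + $8, $9 }'
```

On a healthy system, pgscand/s stays near zero; a sustained nonzero value indicates that user processes (including NFS-Ganesha) are reclaiming memory directly.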
The NFS-Ganesha user process can be affected when kswapd fails to free up enough memory. To reduce the likelihood of the NFS-Ganesha process having to do the work of kswapd, Veritas recommends increasing the value of the Linux virtual memory (VM) tunable min_free_kbytes.
Example of a default auto-tuned value:

```
# sysctl -a | grep vm.min_free
vm.min_free_kbytes = 90112
```
You use min_free_kbytes to force the Linux VM (virtual memory management) to keep a minimum number of kilobytes free. The VM uses this number to compute a watermark value for each lowmem zone in the system.
Table: Recommended tuning parameters for NFS version 3 and version 4

Option | Description
---|---
NFS mount options | File system mount options for the NFS client:
NFS server export options | NFS server export options:
Jumbo frames | A jumbo frame is an Ethernet frame with a payload greater than the standard maximum transmission unit (MTU) of 1,500 bytes. Enabling jumbo frames improves network performance in I/O-intensive workloads. If your network supports jumbo frames and you wish to use them, Veritas recommends a jumbo frame size of 5000.
min_free_kbytes | On server nodes with 96 GB RAM or more, the recommended value of min_free_kbytes is 1048576 (= 1 GB). On server nodes with the minimum of 32 GB RAM, the minimum recommended value of min_free_kbytes is 524288 (= 512 MB).
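The min_free_kbytes and jumbo frame recommendations above might be applied as follows. This is a sketch, not a verified procedure: the interface name eth0 is a placeholder, the commands must run as root, and you should confirm that your NICs and switches support an MTU of 5000 before changing it:

```shell
# Set min_free_kbytes to 1 GB on a node with 96 GB RAM or more:
sysctl -w vm.min_free_kbytes=1048576

# Persist the setting across reboots:
echo "vm.min_free_kbytes = 1048576" >> /etc/sysctl.conf

# Enable jumbo frames (MTU 5000) on a supported network interface:
ip link set dev eth0 mtu 5000
```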