Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
Configuring Veritas Access with OpenStack Cinder
To show all your NFS shares
- To show all your NFS shares that are exported from Veritas Access, enter the following:
OPENSTACK> cinder share show
For example:
OPENSTACK> cinder share show
/vx/fs1 *(rw,no_root_squash)
/vx/o_fs 2001:21::/120 (rw,sync,no_root_squash)
To share and export a file system
- To share and export a file system, enter the following:
OPENSTACK> cinder share add export-dir world|client
After issuing this command, OpenStack Cinder will be able to mount the exported file system using NFS.
export-dir
Specifies the path of the directory that needs to be exported to the client.
The directory path should start with /vx and only the following characters are allowed:
'a-zA-Z0-9_/@+=.:-'
world
Specifies if the NFS export directory is intended for everyone.
client
Exports the directory with the specified options.
Clients may be specified in the following ways:
Single host
Specify a host either by an abbreviated name recognized by the resolver, the fully qualified domain name, or an IP address.
Netgroups
Netgroups may be given as @group. Only the host part of each netgroup member is considered when checking for membership.
IP networks
You can simultaneously export directories to all hosts on an IP (sub-network). This is done by specifying an IP address and netmask pair as address/netmask where the netmask can be specified as a contiguous mask length. IPv4 or IPv6 addresses can be used.
If you run the command again on a directory that is already exported, the share is re-exported with the updated options.
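The export-dir character rule and the client forms described above can be checked with a small validation sketch. The helper names below are illustrative only and are not part of Veritas Access:

```python
import re

# Documented character set for export-dir: 'a-zA-Z0-9_/@+=.:-',
# and the path must start with /vx.
EXPORT_DIR_RE = re.compile(r'^/vx[a-zA-Z0-9_/@+=.:-]*$')

def is_valid_export_dir(path):
    """Return True if the path starts with /vx and uses only allowed characters."""
    return bool(EXPORT_DIR_RE.match(path))

def classify_client(spec):
    """Rough classification of the 'client' argument forms described above."""
    if spec == 'world':
        return 'everyone'          # export to all hosts
    if spec.startswith('@'):
        return 'netgroup'          # @group form
    if '/' in spec:
        return 'ip-network'        # address/masklen, IPv4 or IPv6
    return 'single-host'           # hostname, FQDN, or bare IP address
```

For example, classify_client('2001:21::/120') returns 'ip-network', matching the IPv6 subnet export shown earlier.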
For example:
OPENSTACK> cinder share add /vx/fs1 world
Exporting /vx/fs1 with options rw,no_root_squash
OPENSTACK> cinder share add /vx/o_fs 2001:21::/120
Exporting /vx/o_fs with options rw,sync,no_root_squash
Success.
To delete the exported file system
- To delete (or unshare) the exported file system, enter the following:
OPENSTACK> cinder share delete export-dir client
For example:
OPENSTACK> cinder share delete /vx/fs1 world
Removing export path *:/vx/fs1
Success.
To start or display the status of the OpenStack Cinder service
- To start the OpenStack Cinder service, enter the following:
OPENSTACK> cinder service start
The OPENSTACK> cinder service start command requires the NFS service to be running, because any exported mount point is served over NFS. If the NFS service has not been started, OPENSTACK> cinder service start starts it internally by running the NFS> server start command. There is no OPENSTACK> cinder service stop command. If you need to stop NFS mounts from being exported, use the NFS> server stop command.
For example:
OPENSTACK> cinder service start
..Success.
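The implicit NFS dependency described above can be modeled in a short sketch. The function and its action strings are illustrative only; the real interface is the OPENSTACK> and NFS> console commands:

```python
def start_cinder_service(nfs_running):
    """Model of the documented startup behavior: 'cinder service start'
    first brings up the NFS server if it is not already running, then
    starts the Cinder service. Returns the ordered list of actions."""
    actions = []
    if not nfs_running:
        actions.append('NFS> server start')   # started internally
    actions.append('start Cinder service')
    return actions
```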
- To display the status of the OpenStack Cinder service, enter the following:
OPENSTACK> cinder service status
For example:
OPENSTACK> cinder service status
NFS Status on access_01 : ONLINE
NFS Status on access_02 : ONLINE
To display configuration changes that need to be done on the OpenStack controller node
- To display all the configuration changes that need to be done on the OpenStack controller node, enter the following:
OPENSTACK> cinder configure export-dir
export-dir
Specifies the path of the directory that needs to be exported to the client.
The directory path should start with /vx and only the following characters are allowed:
'a-zA-Z0-9_/@+=.:-'
For example:
OPENSTACK> cinder configure /vx/fs1
To create a new volume backend named ACCESS_HDD in OpenStack Cinder
- Add the following configuration block at the bottom of the /etc/cinder/cinder.conf file on your OpenStack controller node:
enabled_backends=access-1
[access-1]
volume_driver=cinder.volume.drivers.veritas_cnfs.VeritasCNFSDriver
volume_backend_name=ACCESS_HDD
nfs_shares_config=/etc/cinder/access_share_hdd
nfs_mount_point_base=/cinder/cnfs/cnfs_sata_hdd
nfs_sparsed_volumes=True
nfs_disk_util=df
nfs_mount_options=nfsvers=3
volume_driver
Name of the Veritas Access Cinder driver.
volume_backend_name
For this example, ACCESS_HDD is used.
This name can be different for each NFS share.
If several backends have the same name, the OpenStack Cinder scheduler decides in which backend to create the volume.
nfs_shares_config
This file has the share details in the form of vip:/exported_dir.
nfs_mount_point_base
Mount point where the share will be mounted on OpenStack Cinder.
If the directory does not exist, create it. Make sure that the Cinder user has write permission on this directory.
nfs_sparsed_volumes
Specifies whether volumes are created as sparse files (True) or preallocated (False).
nfs_disk_util
Specifies the method used to calculate free space (df).
nfs_mount_options
The mount options that OpenStack Cinder uses when it NFS-mounts the share.
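As a quick sanity check, the backend stanza can be parsed with Python's configparser. The sketch below mirrors the block shown earlier; the [DEFAULT] section header is added here because the enabled_backends key lives in [DEFAULT] in a real cinder.conf:

```python
import configparser

# Sample mirroring the configuration block above; [DEFAULT] is added
# because enabled_backends belongs there in a real cinder.conf.
SAMPLE = """
[DEFAULT]
enabled_backends=access-1

[access-1]
volume_driver=cinder.volume.drivers.veritas_cnfs.VeritasCNFSDriver
volume_backend_name=ACCESS_HDD
nfs_shares_config=/etc/cinder/access_share_hdd
nfs_mount_point_base=/cinder/cnfs/cnfs_sata_hdd
nfs_sparsed_volumes=True
nfs_disk_util=df
nfs_mount_options=nfsvers=3
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
backend = cfg['DEFAULT']['enabled_backends']     # 'access-1'
print(cfg[backend]['volume_backend_name'])       # prints: ACCESS_HDD
```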
The same configuration information for the /etc/cinder/cinder.conf file can be obtained by running the OPENSTACK> cinder configure export-dir command.
- Append the following to the /etc/cinder/access_share_hdd file on your OpenStack controller node:
vip:/vx/fs1
Use one of the virtual IPs for vip:
192.1.1.190
192.1.1.191
192.1.1.192
192.1.1.193
192.1.1.199
You can obtain Veritas Access virtual IPs using the OPENSTACK> cinder configure export-dir option.
- Create the /etc/cinder/access_share_hdd file at the root prompt, and update it with the NFS share details:
cnfs_sata_hdd(keystone_admin)]# cat /etc/cinder/access_share_hdd
192.1.1.190:/vx/fs1
- The Veritas Access package includes the Veritas Access OpenStack Cinder driver, which is a Python script. The OpenStack Cinder driver is located at /opt/VRTSnas/scripts/OpenStack/veritas_cnfs.py on the Veritas Access node. If you are using the Python 2.6 release, copy the veritas_cnfs.py file to /usr/lib/python2.6/site-packages/cinder/volume/drivers/veritas_cnfs.py. If you are using the OpenStack Kilo version of RDO, the file is located at /usr/lib/python2.7/site-packages/cinder/volume/drivers/veritas_cnfs.py.
- Make sure that the NFS mount point on the OpenStack controller node has the right permission for the cinder user. The cinder user should have write permission on the NFS mount point. Set the permission using the following command.
# setfacl -m u:cinder:rwx /cinder/cnfs/cnfs_sata_hdd
# sudo chmod -R 777 /cinder/cnfs/cnfs_sata_hdd
- Give the required permissions to the /etc/cinder/access_share_hdd file:
# sudo chmod -R 777 /etc/cinder/access_share_hdd
- Restart the OpenStack Cinder driver.
cnfs_sata_hdd(keystone_admin)]# /etc/init.d/openstack-cinder-volume restart
Stopping openstack-cinder-volume: [ OK ]
Starting openstack-cinder-volume: [ OK ]
Restarting the OpenStack Cinder driver picks up the latest configuration file changes.
After restarting the OpenStack Cinder driver, /vx/fs1 is NFS-mounted as per the instructions provided in the /etc/cinder/access_share_hdd file:
cnfs_sata_hdd(keystone_admin)]# mount | grep /vx/fs1
192.1.1.190:/vx/fs1 on cnfs_sata_hdd/e6c0baa5fb02d5c6f05f964423feca1f type nfs (rw,nfsvers=3,addr=10.182.98.20)
You can obtain OpenStack Cinder log files by navigating to:
/var/log/cinder/volume.log
- If you are using OpenStack RDO, use these steps to restart the OpenStack Cinder driver.
Log in to the OpenStack controller node and source the admin credentials. For example:
source /root/keystonerc_admin
Restart the services using the following command:
(keystone_admin)]# openstack-service restart openstack-cinder-volume
For more information, refer to the OpenStack Administration Guide.
- On the OpenStack controller node, create a volume type named va_vol_type.
This volume type is used to link to the volume backend.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder type-create va_vol_type
+--------------------------------------+-------------+
|                  ID                  |     Name    |
+--------------------------------------+-------------+
| d854a6ad-63bd-42fa-8458-a1a4fadd04b7 | va_vol_type |
+--------------------------------------+-------------+
- Link the volume type with the ACCESS_HDD back end.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder type-key va_vol_type set volume_backend_name=ACCESS_HDD
- Create a volume of size 1 GB.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder create --volume-type va_vol_type --display-name va_vol1 1
+---------------------+----------------------------+
|       Property      |            Value           |
+---------------------+----------------------------+
|     attachments     |             []             |
|  availability_zone  |            nova            |
|       bootable      |            false           |
|      created_at     | 2014-02-08T01:47:25.726803 |
| display_description |            None            |
|     display_name    |           va_vol1          |
|          id         |          disk ID 1         |
|       metadata      |             {}             |
|         size        |              1             |
|     snapshot_id     |            None            |
|     source_volid    |            None            |
|        status       |          creating          |
|     volume_type     |         va_vol_type        |
+---------------------+----------------------------+
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder list
+-----------+-----------+--------------+------+-------------+----------+-------------+
|     ID    |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------+-----------+--------------+------+-------------+----------+-------------+
| disk ID 1 | available |    va_vol1   |  1   | va_vol_type |  false   |             |
+-----------+-----------+--------------+------+-------------+----------+-------------+
- Extend the volume to 2 GB.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder extend va_vol1 2
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder list
+-----------+-----------+--------------+------+-------------+----------+-------------+
|     ID    |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------+-----------+--------------+------+-------------+----------+-------------+
| disk ID 1 | available |    va_vol1   |  2   | va_vol_type |  false   |             |
+-----------+-----------+--------------+------+-------------+----------+-------------+
- Create a snapshot.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder snapshot-create --display-name va_vol1-snap va_vol1
+---------------------+--------------------------------------+
|       Property      |                 Value                |
+---------------------+--------------------------------------+
|      created_at     |      2014-02-08T01:51:17.362501      |
| display_description |                 None                 |
|     display_name    |             va_vol1-snap             |
|          id         |               disk ID 1              |
|       metadata      |                  {}                  |
|         size        |                  2                   |
|        status       |               creating               |
|      volume_id      | 52145a91-77e5-4a68-b5e0-df66353c0591 |
+---------------------+--------------------------------------+
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder snapshot-list
+-----------+--------------------------------------+-----------+--------------+------+
|     ID    |               Volume ID              |   Status  | Display Name | Size |
+-----------+--------------------------------------+-----------+--------------+------+
| disk ID 1 | 52145a91-77e5-4a68-b5e0-df66353c0591 | available | va_vol1-snap |  2   |
+-----------+--------------------------------------+-----------+--------------+------+
- Create a volume from a snapshot.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder create --snapshot-id e9dda50f-1075-407a-9cb1-3ab0697d274a --display-name va-vol2 2
+---------------------+--------------------------------------+
|       Property      |                 Value                |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-08T01:57:11.558339      |