Feature Category
|
Feature
|
Details
|
Installation and Upgrades
|
Ansible Support
|
Ansible is a popular configuration management tool that automates configuration and deployment operations in your environment. Ansible playbooks are YAML files that contain human-readable code defining the operations to be performed in your environment.
Veritas now provides Ansible modules that can be used in playbooks to install or upgrade Veritas InfoScale, deploy clusters, or configure features such as Flexible Storage Sharing (FSS), Cluster File System (CFS), and Disk Group Volume.
For the Ansible modules, playbook templates, and the user's guide for using Ansible in an InfoScale environment, visit:
https://sort.veritas.com/utility/ansible
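As an illustration, a playbook that drives such a module might look like the following sketch. The module name and parameters here are hypothetical; the actual modules and playbook templates are available from the SORT page above.

```yaml
# Hypothetical playbook sketch -- module name and parameters are
# illustrative only; see the SORT page for the real modules.
- hosts: infoscale_nodes
  become: yes
  tasks:
    - name: Install InfoScale Enterprise 7.4.1 (illustrative module name)
      infoscale_install:              # hypothetical module
        image_path: /mnt/infoscale    # mounted product image
        product: ENTERPRISE
        version: "7.4.1"
```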
|
Installation and Upgrades
|
Upgrade Path
|
You can upgrade to Veritas InfoScale 7.4.1 only if the base version of your currently installed product is 6.2.1 or later.
|
Installation and Upgrades
|
Deprecated support for co-existence of Veritas InfoScale products
|
Support for co-existence of the following Veritas InfoScale products has been deprecated in 7.4.1:
- InfoScale Availability and InfoScale Storage
- InfoScale Availability and InfoScale Foundation
Veritas no longer supports co-existence of more than one InfoScale product on a system.
|
Licensing
|
Misc
|
Veritas collects licensing and platform-related information from InfoScale products as part of the Veritas Product Improvement Program. The information collected helps identify how customers deploy and use the product, and enables Veritas to manage customer licenses more efficiently.
The Veritas Telemetry Collector gathers this information and sends it to an edge server.
The Veritas Cloud Receiver (VCR) is a pre-configured, cloud-based edge server deployed by Veritas. While installing or upgrading InfoScale, ensure that you configure the Veritas Cloud Receiver (VCR) as your edge server.
For more information about setting up and configuring telemetry data collection, see the Veritas InfoScale Installation Guide or the Veritas InfoScale Configuration and Upgrade Guide.
|
Security
|
Support for third-party certificate for entity validation in SSL/TLS Server
|
InfoScale supports using a third-party certificate for entity validation by the SSL/TLS server in VxAT on a Linux host.
Note: Third-party certificates are not supported on Windows hosts.
In prior InfoScale releases, the SSL/TLS server used a self-signed certificate. Because a self-signed certificate is not verified by a trusted Certificate Authority, it poses a security risk.
With support for third-party trusted certificates, you can now generate a certificate for the SSL/TLS server by providing an encrypted passphrase to InfoScale. InfoScale then issues a certificate signing request, which is used to generate a certificate for the SSL/TLS server.
For more information, see the Veritas InfoScale Installation Guide - Linux.
|
Security
|
Discontinuation of SSL/TLS Server support for TLSv1.0 and TLSv1.1
|
To reduce security vulnerabilities, the TLSv1.0 and TLSv1.1 protocols are not supported by default. However, you can enable these protocols by setting the value of the AT_CLIENT_ALLOW_TLSV1 attribute to 1.
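The same policy can be expressed in any OpenSSL-based stack. As a general illustration (not the VxAT configuration mechanism, which uses the AT_CLIENT_ALLOW_TLSV1 attribute), Python's ssl module refuses TLSv1.0 and TLSv1.1 handshakes when the minimum protocol version is raised:

```python
import ssl

# Create a server-side TLS context and refuse TLSv1.0/TLSv1.1 handshakes
# by requiring at least TLSv1.2. This is a general OpenSSL-based
# illustration, not the VxAT configuration mechanism.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```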
|
Security
|
Discontinued support
|
The following features are no longer supported in this release:
- The AllowV2 attribute to enable or disable SSLv2 protocol.
- The medium strength ciphers for SSL communication.
|
Security
|
OpenSSL 1.0.2o for enhanced security
|
The VxAT server now uses OpenSSL 1.0.2o for SSL communication.
|
Supported Configurations
|
Support for Oracle 18c
|
InfoScale now supports single-instance configurations with Oracle 18c.
|
Supported Configurations
|
Support for Oracle Enterprise Manager 13c
|
InfoScale now provides an OEM plugin for Oracle 13c.
|
Cloud Environments
|
New high availability agents for Google Cloud Platform (GCP)
|
InfoScale has introduced the GoogleIP and the GoogleDisk agents for GCP environments.
These agents are bundled with the product.
GoogleIP agent
The GoogleIP agent manages the networking resources in the Google Cloud.
The agent performs the following tasks:
- Gets the NIC details, creates the configuration, and associates or disassociates private IP addresses with VM instances
- Manages the routing of overlay IPs for failover across subnets
The GoogleIP resource depends on the IP resource.
GoogleDisk agent
The GoogleDisk agent works with zonal persistent disks in the Google Cloud. The agent brings the disks online, monitors their state, and takes them offline. It can attach a disk to a VM instance in the same resource group or a different one. The agent uses the GCP Python SDK to determine whether the disks are attached to VM instances.
The GoogleDisk resource does not depend on any other resources.
For more information, see Cluster Server Bundled Agents Reference Guide - Linux.
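The attachment check the agent performs can be sketched as follows, assuming the instance resource shape returned by the GCP Compute Engine API, where each attached disk appears under `disks[]` with a `source` URL ending in the disk name. The function name and logic here are illustrative, not the agent's actual implementation.

```python
# Minimal sketch of a disk-attachment check against the instance resource
# returned by the GCP Compute Engine API. Illustrative only.
def is_disk_attached(instance: dict, disk_name: str) -> bool:
    """Return True if disk_name appears among the instance's attached disks."""
    for disk in instance.get("disks", []):
        # The "source" field is a URL ending in the disk name, e.g.
        # .../zones/us-central1-a/disks/data-disk-1
        if disk.get("source", "").rsplit("/", 1)[-1] == disk_name:
            return True
    return False

instance = {"disks": [{"source": ".../zones/us-central1-a/disks/data-disk-1"}]}
print(is_disk_attached(instance, "data-disk-1"))  # True
print(is_disk_attached(instance, "data-disk-2"))  # False
```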
|
Cloud Environments
|
Support for file-level tiering to migrate data using cloud connectors
|
InfoScale supports file-level tiering to migrate data using cloud connectors.
In file-level tiering, a single file is broken into fixed-size chunks, and each chunk is stored as a single object. A single file can thus map to multiple objects. Relevant metadata is associated with each object, which makes it easy to access the file directly from the cloud.
Because a file is broken into individual objects, read-write performance is improved. Also, the large object size facilitates the migration of large files with minimal chunking.
For details about migrating data using cloud connectors, refer to the InfoScale Solutions in Cloud Environments document.
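The chunking scheme described above can be sketched as follows. The chunk size, object-key layout, and metadata fields here are illustrative, not InfoScale's actual on-cloud format.

```python
# Minimal sketch of file-level tiering's chunking: a file is split into
# fixed-size chunks, and each chunk becomes one cloud object carrying
# metadata that locates it within the file. Illustrative only.
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per object (illustrative size)

def chunk_file(name: str, data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split data into fixed-size chunks, one cloud object per chunk."""
    objects = []
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        objects.append({
            "key": f"{name}.{offset // chunk_size}",  # one object per chunk
            "metadata": {"file": name, "offset": offset, "length": len(chunk)},
            "body": chunk,
        })
    return objects

objs = chunk_file("db.dat", b"x" * (10 * 1024 * 1024))  # 10 MiB file
print(len(objs))                       # 3 objects (4 + 4 + 2 MiB)
print(objs[-1]["metadata"]["length"])  # 2097152
```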
|
Cloud Environments
|
Support for InfoScale configurations in Google Cloud
|
InfoScale lets you configure applications for HA and DR in Google Cloud environments. The GoogleIP and GoogleDisk agents are provided to support IP and disk resources in GCP.
The following replication configurations are supported:
- Replication across GCP regions
- Replication across multiple GCP zones and regions (campus cluster)
The following HA and DR configurations are supported:
- Failover within a subnet of a GCP zone using virtual private IP
- Failover across GCP subnets using overlay IP
- DR across GCP regions or VPC networks
- Shared storage within a GCP zone or across GCP zones
For details, refer to the InfoScale Solutions in Cloud Environments document.
|
Cluster Server Agents
|
Support for cloned Application Agent
|
The Application agent is used to make applications highly available when an appropriate ISV agent is not available. To make multiple applications highly available using a cluster, you must create a service group for each application. InfoScale lets you clone the Application agent so that you can configure a different service group for each application. You must then assign the appropriate operator permissions to each service group for it to function as expected.
Note: A cloned Application agent is also IMF-aware.
For details, see the Cluster Server Bundled Agents Reference Guide for your platform.
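A per-application service group with its own Application resource might be configured with the VCS CLI as in the following sketch. The group, resource, and program paths are hypothetical, and the agent-cloning procedure itself is described in the Bundled Agents Reference Guide.

```shell
# Illustrative VCS CLI sketch -- names and paths are hypothetical.
haconf -makerw
hagrp -add app1_sg
hagrp -modify app1_sg SystemList node1 0 node2 1
hares -add app1 Application app1_sg
hares -modify app1 StartProgram "/opt/app1/bin/start"
hares -modify app1 StopProgram "/opt/app1/bin/stop"
hares -modify app1 MonitorProcesses "/opt/app1/bin/app1d"
hares -modify app1 Enabled 1
haconf -dump -makero
```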
|
Cluster Server Agents
|
IMF-aware SambaShare agent
|
The SambaShare agent is now IMF-aware.
|
Cluster Server Agents
|
New optional attributes in the SambaServer Agent
|
The SambaServer agent now supports the Interfaces and BindInterfaceOnly attributes. These attributes enable the agent to listen on the interface strings that are supported by the Samba server.
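These attributes correspond to Samba's own `interfaces` and `bind interfaces only` smb.conf parameters; an illustrative fragment (interface names are examples):

```ini
# Illustrative smb.conf fragment showing the Samba parameters that the
# agent's Interfaces and BindInterfaceOnly attributes correspond to.
[global]
   interfaces = lo eth0 192.168.12.2/24
   bind interfaces only = yes
```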
|
Veritas Volume Manager
|
Enhanced performance of the vradmind daemon for collecting consolidated statistics
|
You can configure VVR to collect statistics of the VVR components. The collected statistics can be used to monitor the system and diagnose problems with the VVR setup. By default, VVR collects the statistics automatically when the vradmind daemon starts.
The vradmind daemon is now a multi-threaded process, with one thread reserved specifically for collecting periodic statistics.
Note: If the vradmind daemon is not running, VVR stops collecting the statistics.
For details, see the Veritas InfoScale Replication Administrator's Guide.
|
Veritas Volume Manager
|
Changes in hot-relocation in FSS environment
|
In FSS environments, hot-relocation employs a policy-based mechanism to heal storage failures. Storage failures include disk media failures and node failures that render storage inaccessible. Previously, VxVM could not differentiate between disk media failures and node failures, and therefore used the same value for both the node_reloc_timeout and storage_reloc_timeout tunables.
The hot-relocation daemon is now enhanced to differentiate between disk media failures and node failures. You can set different values for the node_reloc_timeout and storage_reloc_timeout tunables for hot-relocation in FSS environments. The default value is 30 minutes for the storage_reloc_timeout tunable and 120 minutes for the node_reloc_timeout tunable. You can modify these values to suit your business needs.
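Viewing and setting the tunables might look like the following sketch, assuming they are managed with the vxtune utility like other VxVM tunables; verify the exact syntax against the documentation for your release.

```shell
# Sketch only -- assumes vxtune manages these tunables; verify syntax.
vxtune storage_reloc_timeout        # display the current value
vxtune storage_reloc_timeout 30     # relocate 30 min after a disk media failure
vxtune node_reloc_timeout 120       # relocate 120 min after a node failure
```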
|
Veritas File System
|
Changes in VxFS Disk Layout Versions (DLV)
|
The following DLV changes are now applicable:
- Added support for DLV 15
- The default DLV is now 15
- Deprecated support for DLV 10
With this change, you can create and mount a VxFS file system only on DLV 11 and later. DLV 6 through 10 can be used for local mounts only.
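Checking and upgrading the disk layout version of an existing file system might look like the following sketch; verify the options against the vxupgrade(1M) man page for your release.

```shell
# Sketch only -- verify against the vxupgrade(1M) man page.
vxupgrade /mnt1          # report the current DLV of the mounted file system
vxupgrade -n 15 /mnt1    # upgrade the mounted file system to DLV 15
```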
|
Veritas File System
|
Support for SELinux security extended attributes
|
The SELinux policy for RHEL 7.6 and later now includes support for VxFS file system as persistent storage of SELinux security extended attributes. With this support, you can now use SELinux security functionalities and features on VxFS files and directories on RHEL 7.6 and later.
|
Replication
|
Added support to assign a slave node as a logowner
|
In a disaster recovery environment, VVR maintains write-order fidelity for the application I/Os received. When replicating in a shared disk group environment, VVR designates one cluster node as a logowner to maintain the order of writes.
By default, VVR designates the master node as a logowner.
To optimize the master node's workload, VVR now enables you to assign any other cluster node (a slave node) as the logowner.
Note: In the following cases, the change in the logowner role is not preserved, and the master node takes over as the logowner:
- Product upgrade
- Cluster upgrade or reboot
- Failure of the slave node that is assigned as the logowner
For more details about assigning a slave node as a logowner, refer to the Veritas InfoScale™ 7.4.1 Replication Administrator's Guide.
|
Replication
|
Technology preview: Adaptive synchronous mode in VVR
|
When the synchronous attribute of the RLINK in VVR is set to override, the system temporarily switches the replication mode from synchronous to asynchronous whenever RLINK is disconnected. The override option allows VVR to continue receiving writes from the application even when RLINK is disconnected. However, in case of high network latency, replication continues to run in synchronous mode with degraded application performance.
The adaptive synchronous mode in VVR is an enhancement to the existing synchronous override mode. In adaptive synchronous mode, replication switches between synchronous and asynchronous based on cross-site network latency: replication takes place in synchronous mode when network conditions are good, and automatically switches to asynchronous mode when cross-site network latency increases. You can configure the following parameters to control this behavior:
- The threshold for switching to asynchronous mode (the percentage of timed-out occurrences)
- The time interval over which the threshold is calculated
- The time interval for which the system must remain in asynchronous mode before switching back to synchronous mode
You can also set alerts for when the system undergoes prolonged periods of network deterioration. For more information, see the Veritas InfoScale Replication Administrator's Guide - Linux.
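The switching policy described above can be sketched as follows. All names, thresholds, and the interval handling are invented for clarity; they are not VVR's actual tunables or implementation.

```python
# Illustrative sketch of an adaptive synchronous switching policy.
# Names and logic are invented, not VVR's implementation.
def choose_mode(current_mode: str, timed_out: int, total: int,
                threshold_pct: float, async_elapsed: int, hold_secs: int) -> str:
    """Decide the replication mode for the next measurement interval.

    timed_out/total: write acknowledgements that timed out during the last
    interval; threshold_pct: percentage of time-outs above which replication
    switches to asynchronous; hold_secs: minimum time to remain asynchronous
    before switching back.
    """
    pct = (100.0 * timed_out / total) if total else 0.0
    if current_mode == "synchronous":
        return "asynchronous" if pct >= threshold_pct else "synchronous"
    # Already asynchronous: switch back only after the hold interval has
    # passed and latency has recovered below the threshold.
    if async_elapsed >= hold_secs and pct < threshold_pct:
        return "synchronous"
    return "asynchronous"

print(choose_mode("synchronous", 30, 100, 20.0, 0, 300))    # asynchronous
print(choose_mode("asynchronous", 5, 100, 20.0, 600, 300))  # synchronous
```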
|
InfoScale 7.4.1 Linux Release Notes