Veritas Access Installation Guide
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About managing the NICs, bonds, and VLAN devices
- About VLAN tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading the operating system and Veritas Access
- Performing a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Configuring the secure shell for communications
- Appendix C. Manual deployment of Veritas Access
Before adding new nodes to the cluster
After you have installed the operating system, you can install and configure a multi-node Veritas Access cluster in a single operation. If you later want to add nodes to the cluster, complete the following procedures:
Install the appropriate operating system software on the additional nodes.
Disable SELinux on the new node.
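On Red Hat-compatible systems, disabling SELinux involves switching the running system to permissive mode and making the change persistent in /etc/selinux/config. A minimal sketch, run as root on the new node:

```shell
# Put SELinux into permissive mode immediately (no reboot required).
setenforce 0

# Disable SELinux persistently so it stays off after a reboot.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Confirm the change took effect in the config file.
grep '^SELINUX=' /etc/selinux/config
```

The persistent setting takes full effect on the next reboot; `setenforce 0` only covers the interval until then.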
You do not need to install the Veritas Access software on the additional nodes before you add them. The Veritas Access software is installed when you add the nodes. If the Veritas Access software is already installed on a node, it is uninstalled and the product (the same version as the cluster) is then installed. This uninstall-and-reinstall ensures that the new node runs exactly the same version and patch level (if any) as the other cluster nodes. Because the packages are stored on the cluster nodes, the product image is not needed when you add a new node.
Verify that the existing cluster has enough physical IP addresses for the new nodes. You can add IP addresses by using the following command:
Network> ip addr add
For example:
Network> ip addr add 192.168.30.107 255.255.252.0 physical
ACCESS ip addr SUCCESS V-288-1031 ip addr add successful.
Network> ip addr show
IP             Netmask/Prefix  Device      Node     Type      Status
--             --------------  ------      ----     ----      ------
192.168.30.10  255.255.252.0   pubeth0     snas_01  Physical
192.168.30.11  255.255.252.0   pubeth1     snas_01  Physical
192.168.30.12  255.255.252.0   pubeth0     snas_02  Physical
192.168.30.13  255.255.252.0   pubeth1     snas_02  Physical
192.168.30.14  255.255.252.0   ( unused )           Physical
192.168.30.15  255.255.252.0   ( unused )           Physical
192.168.30.16  255.255.252.0   pubeth0     snas_01  Virtual   ONLINE (Con IP)
192.168.30.17  255.255.252.0   pubeth1     snas_01  Virtual   ONLINE
192.168.30.18  255.255.252.0   pubeth1     snas_01  Virtual   ONLINE
192.168.30.19  255.255.252.0   pubeth1     snas_01  Virtual
In the example, the unused IP addresses 192.168.30.14 and 192.168.30.15 can be used by the new node as physical IP addresses.
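If you script this check, you can count the unused physical addresses in saved command output. A small sketch, using sample lines in the format shown above (the filename ipaddr.txt and the sample data are illustrative):

```shell
# Sample "Network> ip addr show" output saved to a file (illustrative data).
cat > ipaddr.txt <<'EOF'
192.168.30.14  255.255.252.0  ( unused )  Physical
192.168.30.15  255.255.252.0  ( unused )  Physical
192.168.30.16  255.255.252.0  pubeth0  snas_01  Virtual  ONLINE (Con IP)
EOF

# Count physical IP addresses that are not yet assigned to a device.
unused=$(grep -c '( unused )' ipaddr.txt)
echo "Unused physical IP addresses: $unused"
```

Each new node needs as many physical IP addresses as the existing nodes use; in the example above, each node uses two.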
Note:
The network configuration on the new nodes should be the same as that of the existing cluster nodes; that is, the NICs should have the same names and the same connectivity.
Bonds and VLANs are created automatically to match the cluster configuration if they do not already exist.
If you want to add nodes to a cluster that has RDMA-based LLT links, disable iptables on the cluster nodes using the service iptables stop command.
For example:
# service iptables stop
iptables: Setting chains to policy ACCEPT: filter    [  OK  ]
iptables: Flushing firewall rules:                   [  OK  ]
iptables: Unloading modules:                         [  OK  ]
Note:
Before proceeding, make sure that all of the nodes are physically connected to the private and public networks.
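A quick reachability check from an existing cluster node can confirm the public-network connectivity before you proceed. A sketch, where the addresses in NODES are examples to replace with your new nodes' IPs:

```shell
# Check that each new node answers on the public network before adding it.
# The addresses below are examples; substitute your new nodes' IPs.
NODES="192.168.30.14 192.168.30.15"
for ip in $NODES; do
    if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip NOT reachable"
    fi
done
```

This does not verify the private (LLT) links; those should be checked separately on each interconnect interface.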
Add the node to your existing cluster.
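In the Veritas Access command-line shell, nodes are added in Cluster mode; the general shape of the session is sketched below (the IP address is an example, and the exact syntax and output can vary by release, so consult the command reference for your version):

```
Cluster> add 192.168.30.14
```

The add operation installs the Veritas Access software on the new node and assigns it the available physical IP addresses, as described earlier in this section.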