Veritas Access Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About NIC bonding and NIC exclusion
- About VLAN Tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access and operating system
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Troubleshooting the LTR upgrade
- Appendix C. Configuring the secure shell for communications
Before adding new nodes in the cluster
After you have installed the operating system, you can install and configure a multi-node Veritas Access cluster in a single operation. If you want to add nodes to the cluster later, you must complete the following procedures:
Install the appropriate operating system software on the additional nodes.
See Installing the operating system on each node of the cluster.
Disable SELinux on the new node.
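Disabling SELinux typically means running setenforce 0 on the running system and setting SELINUX=disabled in /etc/selinux/config so the change persists across reboots. The following sketch demonstrates the persistent edit on a temporary copy of the file rather than the live system path:

```shell
# Sketch: disable SELinux persistently. On a real node you would edit
# /etc/selinux/config; here the same edit is applied to a sample copy.
conf=$(mktemp)
cat > "$conf" <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Rewrite the SELINUX= line so SELinux stays disabled after reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$conf"
grep '^SELINUX=' "$conf"
```

On the node itself, also run setenforce 0 (or reboot) so the change takes effect immediately.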
You do not need to install the Veritas Access software on the additional nodes before you add them; the software is installed automatically when you add the nodes. If the Veritas Access software is already installed on a node, it is uninstalled, and the product (the same version as the cluster) is then reinstalled. This uninstall-and-reinstall step ensures that the new node runs exactly the same version and patch level (if any) as the other cluster nodes. The packages are stored on the cluster nodes, so the product image is not needed while you add the new node.
Verify that the existing cluster has sufficient physical IP addresses for the new nodes. You can add additional IP addresses with the following CLISH command:
Network> ip addr add
For example:
Network> ip addr add 10.200.58.107 255.255.252.0 physical
ACCESS ip addr SUCCESS V-288-1031 ip addr add successful.
Network> ip addr show
IP            Netmask/Prefix Device     Node    Type     Status
--            -------------- ------     ----    ----     ------
10.200.58.101 255.255.252.0  pubeth0    snas_01 Physical
10.200.58.102 255.255.252.0  pubeth1    snas_01 Physical
10.200.58.103 255.255.252.0  pubeth0    snas_02 Physical
10.200.58.104 255.255.252.0  pubeth1    snas_02 Physical
10.200.58.105 255.255.252.0  ( unused )         Physical
10.200.58.107 255.255.252.0  ( unused )         Physical
10.200.58.231 255.255.252.0  pubeth0    snas_01 Virtual  ONLINE (Con IP)
10.200.58.62  255.255.252.0  pubeth1    snas_01 Virtual  ONLINE
10.200.58.63  255.255.252.0  pubeth1    snas_01 Virtual  ONLINE
10.200.58.64  255.255.252.0  pubeth1    snas_01 Virtual
In the example, the unused IP addresses 10.200.58.105 and 10.200.58.107 can be used by the new node as physical IP addresses.
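If you script this pre-check, the unused addresses can be extracted from saved ip addr show output by filtering for the ( unused ) marker. The following sketch uses sample lines copied from the listing above:

```shell
# Sketch: list physical IP addresses marked "( unused )" in saved
# "Network> ip addr show" output (sample data from the example above).
show=$(mktemp)
cat > "$show" <<'EOF'
10.200.58.101 255.255.252.0 pubeth0 snas_01 Physical
10.200.58.103 255.255.252.0 pubeth0 snas_02 Physical
10.200.58.105 255.255.252.0 ( unused ) Physical
10.200.58.107 255.255.252.0 ( unused ) Physical
EOF

# Print only the address column of lines with no assigned device
unused=$(awk '/\( unused \)/ { print $1 }' "$show")
echo "$unused"
```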
If you want to add nodes to a cluster that has RDMA-based LLT links, disable iptables on the cluster nodes using the service iptables stop command.
For example:
# service iptables stop
iptables: Setting chains to policy ACCEPT: filter    [ OK ]
iptables: Flushing firewall rules:                   [ OK ]
iptables: Unloading modules:                         [ OK ]
Note:
Before proceeding, make sure that all of the nodes are physically connected to the private and public networks.
Add the node to your existing cluster.
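In the CLISH, node addition is driven from the Cluster> mode. The form below is an assumption based on typical usage (the node address is a placeholder taken from the unused addresses in the earlier example); confirm the exact syntax against the command reference for your release:

Cluster> add 10.200.58.105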