Veritas Access Installation Guide
- Introducing Veritas Access
- Licensing in Veritas Access
- System requirements
- Linux requirements
- Network and firewall requirements
- Preparing to install Veritas Access
- Deploying virtual machines in VMware ESXi for Veritas Access installation
- Installing and configuring a cluster
- Installing the operating system on each node of the cluster
- Installing Veritas Access on the target cluster nodes
- About NIC bonding and NIC exclusion
- About VLAN Tagging
- Automating Veritas Access installation and configuration using response files
- Displaying and adding nodes to a cluster
- Upgrading Veritas Access and operating system
- Upgrading Veritas Access using a rolling upgrade
- Uninstalling Veritas Access
- Appendix A. Installation reference
- Appendix B. Troubleshooting the LTR upgrade
- Appendix C. Configuring the secure shell for communications
Adding a node to the cluster
The operating system must be installed on the nodes before you add them to a cluster.
If you use disk-based fencing, the coordinator disks must be visible on the newly added node for I/O fencing to be configured successfully. Without the coordinator disks, I/O fencing does not load properly, and the node cannot obtain cluster membership.
If you use majority-based fencing, the newly added node does not require shared disks.
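Before you add a node to a cluster that uses disk-based fencing, you can verify from the new node that the coordinator disks are visible to it. A minimal check using the VxVM disk listing, assuming the coordinator disks belong to a disk group named vxfencoorddg (your coordinator disk group name may differ):
[root@newnode ~]# vxdisk -o alldgs list | grep vxfencoorddg
If the coordinator disks do not appear in the output, resolve the storage connectivity issue before you add the node.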
If you want to add a new node and exclude some unique PCI IDs, manually add the unique PCI IDs to the /opt/VRTSsnas/conf/net_exclusion_dev.conf file on each cluster node. For example:
[root@bob_01 ~]# cat /opt/VRTSsnas/conf/net_exclusion_dev.conf
0000:42:00.0
0000:42:00.1
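If you are not sure which PCI IDs correspond to the NICs that you want to exclude, one way to look them up on the new node is with the operating system's lspci command (the -D option includes the PCI domain prefix that the configuration file expects). The controller names shown are illustrative:
[root@newnode ~]# lspci -D | grep -i ethernet
0000:42:00.0 Ethernet controller: Intel Corporation ...
0000:42:00.1 Ethernet controller: Intel Corporation ...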
Note:
Writeback cache is supported for two-node clusters only, so adding nodes to a two-node cluster changes the caching to read-only.
Note:
Newly added nodes must have the same InfiniBand NIC configuration as the existing nodes. See About using LLT over the RDMA network for Veritas Access.
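One way to compare the InfiniBand NIC configuration of an existing node with that of the node you plan to add is the ibstat command from the operating system (available when the infiniband-diags package is installed). Run it on both nodes and compare the port state, rate, and link layer:
[root@newnode ~]# ibstat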
If your cluster has a configured FSS pool, and the FSS pool's node group is missing a node, the newly added node is added into the FSS node group, and the installer adds the new node's local data disks into the FSS pool.
To add the new node to the cluster
- Log in to Veritas Access using the master or the system-admin account.
- In CLISH, enter the Cluster command to enter the Cluster> mode.
- To add the new nodes to the cluster, enter the following:
Cluster> add node1ip, node2ip.....
where node1ip, node2ip, .... is the list of IP addresses of the additional nodes that are used for the ssh connection.
It is important to note the following:
- The node IPs should not be IPs that are allocated to the new nodes as physical IPs or virtual IPs.
- The physical IPs of the new nodes are usable IPs found from the configured public IP starting addresses.
- The virtual IPs are re-balanced to the new node, but additional virtual IPs are not assigned. Go to step 7 to add new virtual IP addresses to the cluster after you add a node.
- The IPs that you specify must be accessible on the new nodes.
- The accessible IPs of the new nodes must be in the public network, and they must be able to ping the public network's gateway successfully.
For example:
Cluster> add 10.200.114.56
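Before you run the add command, you can confirm from each node to be added that its IP is in the public network and can reach the gateway. A minimal check from the operating system, using an illustrative gateway address:
[root@newnode ~]# ping -c 3 10.200.112.1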
- When you add nodes to a two-node cluster and writeback caching is enabled, the installer asks the following question before adding the node:
CPI WARNING V-9-30-2164 Adding a node to a two-node cluster that has writeback caching enabled will change the caching to read-only. Writeback caching is only supported for two nodes. Do you want to continue adding new node(s)? [y,n,q](n)
Enter y to continue adding the node. Enter n to exit from the add node procedure.
- If a cache exists on the original cluster, the installer prompts you to choose the SSD disks to create the cache on the new node when CFS is mounted.
1) emc_clariion1_242
2) emc_clariion1_243
b) Back to previous menu

Choose disks separate by spaces to create cache on 10.198.89.164 [1-2,b,q] 1

Create cache on snas_02 .....................Done
- If the cluster nodes have a configured FSS pool, and there are more than two local data disks on the new node, the installer asks you to select the disks to add into the FSS pool. Make sure that you select at least two disks for the striped volume layout. The total size of the selected disks must be no less than the FSS pool's capacity.
Following storage pools need to add disk from the new node:
1) fsspool1
2) fsspool2
3) Skip this step

Choose a pool to add disks [1-3,q] 1

1) emc_clariion0_1570 (5.000 GB)
2) installres_03_sdc (5.000 GB)
3) installres_03_sde (5.000 GB)
4) sdd (5.000 GB)
b) Back to previous menu

Choose at least 2 local disks with minimum capacity of 10 GB [1-4,b,q] 2 4

Format disk installres_03_sdc,sdd ................................ Done
The disk name changed to installres_03_sdc,installres_03_sdd
Add disk installres_03_sdc,installres_03_sdd to storage pool fsspool1 Done
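After the installer adds the disks, you can confirm the pool membership from CLISH. This sketch assumes the Storage> pool list command, which displays the configured storage pools:
Storage> pool list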
- If required, add the virtual IP addresses to the cluster. Adding the node does not add new virtual IP addresses or service groups to the cluster.
To add additional virtual IP addresses, use the following command in the Network mode:
Network> ip addr add ipaddr netmask virtual
For example:
Network> ip addr add 10.200.58.66 255.255.252.0 virtual
ACCESS ip addr SUCCESS V-288-1031 ip addr add successful.
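To confirm that the virtual IP address was added and to see the node it is currently assigned to, you can display the configured addresses. This sketch assumes the Network> ip addr show command, which lists the cluster's IP addresses:
Network> ip addr show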
If a problem occurs while you are adding a node to a cluster (for example, if the node is temporarily disconnected from the network), do the following to recover the node:
- Power off the node.
- Use the Cluster> del nodename command to delete the node from the cluster.
- Power on the node.
- Use the Cluster> add nodeip command to add the node to the cluster.
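For example, if the node that failed to join is named snas_02 and is reachable again at 10.200.114.56 (both values are illustrative):
Cluster> del snas_02
Cluster> add 10.200.114.56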