Cluster Server 7.4.2 Configuration and Upgrade Guide - Linux
Enabling LLT ports in firewall
You can use any firewall tool to enable the network ports.
While enabling the ports, make sure that:
- No other application is using the network ports that LLT consumes (50000 to 50006).
- These ports are enabled in the security groups if you are installing InfoScale in the cloud (see the sketch after this list).
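For example, in AWS you might open the LLT port range in a security group with a rule like the following; the group ID and source CIDR here are hypothetical placeholders, not values from this guide:

```
# Allow inbound UDP on the LLT port range (group ID and CIDR are placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol udp \
  --port 50000-50006 \
  --cidr 192.168.10.0/24
```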
By default, LLT uses the port range 50000 to 50001 for clustering and 50002 to 50006 for I/O shipping sockets.
Ingress rules (INPUT chain):

```
iptables -A INPUT -p udp -m udp --dport 50000 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 50001 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 50002 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 50003 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 50004 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 50005 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 50006 -j ACCEPT
```
Egress rules (OUTPUT chain):

```
iptables -A OUTPUT -p udp -m udp --sport 50000 -j ACCEPT
iptables -A OUTPUT -p udp -m udp --sport 50001 -j ACCEPT
iptables -A OUTPUT -p udp -m udp --sport 50002 -j ACCEPT
iptables -A OUTPUT -p udp -m udp --sport 50003 -j ACCEPT
iptables -A OUTPUT -p udp -m udp --sport 50004 -j ACCEPT
iptables -A OUTPUT -p udp -m udp --sport 50005 -j ACCEPT
iptables -A OUTPUT -p udp -m udp --sport 50006 -j ACCEPT
```
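If your systems run firewalld rather than raw iptables rules, a minimal equivalent sketch (assuming the ports should be opened in the default zone) is:

```
# Open the default LLT port range for UDP and make the change persistent
firewall-cmd --permanent --add-port=50000-50006/udp
firewall-cmd --reload
```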
The ports that you open must match the UDP ports configured for the LLT links in /etc/llttab, as in the following link directives:

```
link eth1 udp - udp 50000 - 192.168.10.1 -
link eth2 udp - udp 50001 - 192.168.11.1 -
```
You can also use the following tunables while enabling ports.
| Tunable | Description |
|---|---|
| set-udpports | Changes the starting port number for I/O shipping if you do not want to use the default range of 50002 onwards. Usage: `set-udpports <initial_port_number>`. Example: `set-udpports 60000`. In this case, LLT uses ports 50000 and 50001 for clustering, and port 60000 and the subsequent port numbers for I/O shipping. |
| set-udpthreads | Specifies how many threads to create per socket. Usage: `set-udpthreads <number_of_threads_per_socket>`. Example: `set-udpthreads 2` |
| set-udpsockets | Specifies how many sockets to create per link. Usage: `set-udpsockets <number_of_sockets_per_link>`. Example: `set-udpsockets 4` |
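As a sketch, assuming these tunables are placed as directives in /etc/llttab alongside the link entries (the node name and cluster ID below are placeholders):

```
# Hypothetical /etc/llttab: UDP links plus the port tunables described above
set-node node01
set-cluster 1
link eth1 udp - udp 50000 - 192.168.10.1 -
link eth2 udp - udp 50001 - 192.168.11.1 -
# Move I/O shipping to ports 60000 onwards; 50000-50001 stay reserved for clustering
set-udpports 60000
# Create 4 sockets per link, each serviced by 2 threads
set-udpsockets 4
set-udpthreads 2
```

If you change set-udpports, open the new I/O shipping range (here, 60000 onwards) in the firewall and security groups instead of 50002 to 50006.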