Cluster Server 7.3.1 Configuration and Upgrade Guide - Solaris
Last Published: 2019-04-17
Product(s): InfoScale & Storage Foundation (7.3.1)
Platform: Solaris
- Section I. Configuring Cluster Server using the script-based installer
  - I/O fencing requirements
  - Preparing to configure VCS clusters for data integrity
    - About planning to configure I/O fencing
    - Setting up the CP server
  - Configuring VCS
    - Configuring a secure cluster node by node
    - Verifying and updating licenses on the system
  - Configuring VCS clusters for data integrity
    - Setting up disk-based I/O fencing using installer
    - Setting up server-based I/O fencing using installer
- Section II. Automated configuration using response files
  - Performing an automated VCS configuration
  - Performing an automated I/O fencing configuration using response files
- Section III. Manual configuration
  - Manually configuring VCS
    - Configuring LLT manually
    - Configuring VCS manually
    - Configuring VCS in single node mode
    - Modifying the VCS configuration
  - Manually configuring the clusters for data integrity
    - Setting up disk-based I/O fencing manually
    - Setting up server-based I/O fencing manually
    - Configuring server-based fencing on the VCS cluster manually
    - Setting up non-SCSI-3 fencing in virtual environments manually
    - Setting up majority-based I/O fencing manually
- Section IV. Upgrading VCS
  - Planning to upgrade VCS
  - Performing a VCS upgrade using the installer
    - Tasks to perform after upgrading to 2048 bit key and SHA256 signature certificates
  - Performing an online upgrade
  - Performing a rolling upgrade of VCS
  - Performing a phased upgrade of VCS
    - About phased upgrade
    - Performing a phased upgrade using the product installer
  - Performing an automated VCS upgrade using response files
  - Upgrading VCS using Live Upgrade and Boot Environment upgrade
- Section V. Adding and removing cluster nodes
  - Adding a node to a single-node cluster
  - Adding a node to a multi-node VCS cluster
    - Manually adding a node to a cluster
    - Setting up the node to run in secure mode
    - Configuring I/O fencing on the new node
    - Adding a node using response files
  - Removing a node from a VCS cluster
- Section VI. Installation reference
  - Appendix A. Services and ports
  - Appendix B. Configuration files
  - Appendix C. Configuring LLT over UDP
    - Using the UDP layer for LLT
    - Manually configuring LLT over UDP using IPv4
    - Manually configuring LLT over UDP using IPv6
  - Appendix D. Configuring the secure shell or the remote shell for communications
  - Appendix E. Installation script options
  - Appendix F. Troubleshooting VCS configuration
  - Appendix G. Sample VCS cluster setup diagrams for CP server-based I/O fencing
  - Appendix H. Reconciling major/minor numbers for NFS shared disks
  - Appendix I. Upgrading the Steward process
Sample /etc/vxfenmode file for non-SCSI-3 fencing
#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
# scsi3      - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# disabled   - run the driver but don't do any actual fencing
#
vxfen_mode=customized

# vxfen_mechanism determines the mechanism for customized I/O
# fencing that should be used.
#
# available options:
# cps        - use a coordination point server with optional script
#              controlled scsi3 disks
#
vxfen_mechanism=cps

#
# scsi3_disk_policy determines the way in which I/O fencing
# communicates with the coordination disks. This field is
# required only if customized coordinator disks are being used.
#
# available options:
# dmp - use dynamic multipathing
#
scsi3_disk_policy=dmp

#
# Seconds for which the winning subcluster waits to allow the
# losing subcluster to panic & drain I/Os. Useful in the absence of
# SCSI3 based data disk fencing.
loser_exit_delay=55

#
# Seconds for which the vxfend process waits for a customized fencing
# script to complete. Only used with vxfen_mode=customized.
#
vxfen_script_timeout=25

#
# vxfen_honor_cp_order determines the order in which vxfen
# should use the coordination points specified in this file.
#
# available options:
# 0 - vxfen uses a sorted list of coordination points specified
#     in this file; the order in which coordination points are
#     specified does not matter. (default)
# 1 - vxfen uses the coordination points in the same order they are
#     specified in this file
#
# Specify an odd number (3 or more) of coordination points in this
# file, each one on its own line. They can be all CP servers, all
# SCSI-3 compliant coordinator disks, or a combination of CP servers
# and SCSI-3 compliant coordinator disks.
# Please ensure that the CP server coordination points are
# numbered sequentially and in the same order on all the cluster
# nodes.
#
# A Coordination Point Server (CPS) is specified as follows:
#
# cps<number>=[<vip/vhn>]:<port>
#
# If a CPS supports multiple virtual IPs or virtual hostnames
# over different subnets, all of the IPs/names can be specified
# in a comma separated list as follows:
#
# cps<number>=[<vip_1/vhn_1>]:<port_1>,[<vip_2/vhn_2>]:<port_2>,
# ...,[<vip_n/vhn_n>]:<port_n>
#
# Where,
# <number>
#   is the serial number of the CPS as a coordination point; must
#   start with 1.
# <vip>
#   is the virtual IP address of the CPS, must be specified in
#   square brackets ("[]").
# <vhn>
#   is the virtual hostname of the CPS, must be specified in square
#   brackets ("[]").
# <port>
#   is the port number bound to a particular <vip/vhn> of the CPS.
#   It is optional to specify a <port>. However, if specified, it
#   must follow a colon (":") after <vip/vhn>. If not specified, the
#   colon (":") must not exist after <vip/vhn>.
#
# For all the <vip/vhn>s which do not have a specified <port>,
# a default port can be specified as follows:
#
# port=<default_port>
#
# Where <default_port> is applicable to all the <vip/vhn>s for which
# a <port> is not specified. In other words, specifying <port> with a
# <vip/vhn> overrides the <default_port> for that <vip/vhn>.
# If the <default_port> is not specified, and there are <vip/vhn>s
# for which <port> is not specified, then port number 14250 will be
# used for such <vip/vhn>s.
#
# Example of specifying CP Servers to be used as coordination points:
# port=57777
# cps1=[192.168.0.23],[192.168.0.24]:58888,[cps1.company.com]
# cps2=[192.168.0.25]
# cps3=[cps2.company.com]:59999
#
# In the above example,
# - port 58888 will be used for vip [192.168.0.24]
# - port 59999 will be used for vhn [cps2.company.com], and
# - default port 57777 will be used for all remaining <vip/vhn>s:
#     [192.168.0.23]
#     [cps1.company.com]
#     [192.168.0.25]
# - if default port 57777 were not specified, port 14250 would be
#   used for all remaining <vip/vhn>s:
#     [192.168.0.23]
#     [cps1.company.com]
#     [192.168.0.25]
#
# SCSI-3 compliant coordinator disks are specified as:
#
# vxfendg=<coordinator disk group name>
# Example:
# vxfendg=vxfencoorddg
#
# Examples of different configurations:
# 1. All CP server coordination points
# cps1=
# cps2=
# cps3=
#
# 2. A combination of CP server and a disk group having two SCSI-3
#    coordinator disks
# cps1=
# vxfendg=
# Note: The disk group specified in this case should have two disks
#
# 3. All SCSI-3 coordinator disks
# vxfendg=
# Note: The disk group specified in this case should have three disks
#
# cps1=[cps1.company.com]
# cps2=[cps2.company.com]
# cps3=[cps3.company.com]
# port=443
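Taken together, the sample's active settings and its commented CP server entries describe a pure CP server (non-SCSI-3) configuration. As an illustration only, a minimal active /etc/vxfenmode assembled from the values shown in the sample above would look like the following; the cps*.company.com hostnames and port 443 are the sample's placeholders, to be replaced with the virtual hostnames or IPs and port of your own CP servers. The scsi3_disk_policy line is omitted because no coordinator disks are used in this configuration.

vxfen_mode=customized
vxfen_mechanism=cps
loser_exit_delay=55
vxfen_script_timeout=25
cps1=[cps1.company.com]
cps2=[cps2.company.com]
cps3=[cps3.company.com]
port=443

After the file is in place on every cluster node and fencing is started, you can run the vxfenadm -d command to confirm that I/O fencing is running in Customized mode with the cps mechanism.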