Cluster Server 7.3.1 Configuration and Upgrade Guide - Solaris
Sample vxfenmode file output for server-based fencing
The following is a sample vxfenmode file for server-based fencing:
```
#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
# scsi3      - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# disabled   - run the driver but don't do any actual fencing
#
vxfen_mode=customized

# vxfen_mechanism determines the mechanism for customized I/O
# fencing that should be used.
#
# available options:
# cps - use a coordination point server with optional script
#       controlled scsi3 disks
vxfen_mechanism=cps

#
# scsi3_disk_policy determines the way in which I/O fencing
# communicates with the coordination disks. This field is
# required only if customized coordinator disks are being used.
#
# available options:
# dmp - use dynamic multipathing
#
scsi3_disk_policy=dmp

#
# vxfen_honor_cp_order determines the order in which vxfen
# should use the coordination points specified in this file.
#
# available options:
# 0 - vxfen uses a sorted list of coordination points specified
#     in this file; the order in which coordination points are
#     specified does not matter. (default)
# 1 - vxfen uses the coordination points in the same order they are
#     specified in this file

# Specify 3 or more odd number of coordination points in this file,
# each one in its own line. They can be all-CP servers,
# all-SCSI-3 compliant coordinator disks, or a combination of
# CP servers and SCSI-3 compliant coordinator disks.
# Please ensure that the CP server coordination points
# are numbered sequentially and in the same order
# on all the cluster nodes.
#
# Coordination Point Server(CPS) is specified as follows:
#
# cps<number>=[<vip/vhn>]:<port>
#
# If a CPS supports multiple virtual IPs or virtual hostnames
# over different subnets, all of the IPs/names can be specified
# in a comma separated list as follows:
#
# cps<number>=[<vip_1/vhn_1>]:<port_1>,[<vip_2/vhn_2>]:<port_2>,
# ...,[<vip_n/vhn_n>]:<port_n>
#
# Where,
# <number>
# is the serial number of the CPS as a coordination point; must
# start with 1.
# <vip>
# is the virtual IP address of the CPS, must be specified in
# square brackets ("[]").
# <vhn>
# is the virtual hostname of the CPS, must be specified in square
# brackets ("[]").
# <port>
# is the port number bound to a particular <vip/vhn> of the CPS.
# It is optional to specify a <port>. However, if specified, it
# must follow a colon (":") after <vip/vhn>. If not specified, the
# colon (":") must not exist after <vip/vhn>.
#
# For all the <vip/vhn>s which do not have a specified <port>,
# a default port can be specified as follows:
#
# port=<default_port>
#
# Where <default_port> is applicable to all the <vip/vhn>s for
# which a <port> is not specified. In other words, specifying
# <port> with a <vip/vhn> overrides the <default_port> for that
# <vip/vhn>. If the <default_port> is not specified, and there
# are <vip/vhn>s for which <port> is not specified, then port
# number 14250 will be used for such <vip/vhn>s.
#
# Example of specifying CP Servers to be used as coordination points:
# port=57777
# cps1=[192.168.0.23],[192.168.0.24]:58888,[cps1.company.com]
# cps2=[192.168.0.25]
# cps3=[cps2.company.com]:59999
#
# In the above example,
# - port 58888 will be used for vip [192.168.0.24]
# - port 59999 will be used for vhn [cps2.company.com], and
# - default port 57777 will be used for all remaining <vip/vhn>s:
#   [192.168.0.23]
#   [cps1.company.com]
#   [192.168.0.25]
# - if default port 57777 were not specified, port 14250
#   would be used for all remaining <vip/vhn>s:
#   [192.168.0.23]
#   [cps1.company.com]
#   [192.168.0.25]
#
# SCSI-3 compliant coordinator disks are specified as:
#
# vxfendg=<coordinator disk group name>
# Example:
# vxfendg=vxfencoorddg
#
# Examples of different configurations:
# 1. All CP server coordination points
# cps1=
# cps2=
# cps3=
#
# 2. A combination of CP server and a disk group having two SCSI-3
# coordinator disks
# cps1=
# vxfendg=
# Note: The disk group specified in this case should have two disks
#
# 3. All SCSI-3 coordinator disks
# vxfendg=
# Note: The disk group specified in this case should have three disks

cps1=[cps1.company.com]
cps2=[cps2.company.com]
cps3=[cps3.company.com]
port=443
```
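After you edit /etc/vxfenmode, restart the fencing driver so that the new settings take effect, as described in the manual configuration procedures in this section. As a quick sanity check, the vxfenadm utility that ships with VCS reports the active fencing mode; a minimal invocation (run as root, output omitted here) is:

```
# Display the current I/O fencing mode and cluster membership
vxfenadm -d
```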
The following table, vxfenmode file parameters, defines the vxfenmode parameters that must be edited.
Table: vxfenmode file parameters
| vxfenmode File Parameter | Description |
| --- | --- |
| vxfen_mode | Fencing mode of operation. This parameter must be set to "customized". |
| vxfen_mechanism | Fencing mechanism. This parameter defines the mechanism that is used for fencing. If one of the three coordination points is a CP server, then this parameter must be set to "cps". |
| scsi3_disk_policy | Disk policy for the coordinator disks. Set this parameter to "dmp" to configure the vxfen module to use DMP devices. Note: The configured disk policy is applied on all the nodes. |
| cps1, cps2, or vxfendg | Coordination point parameters. Enter either the virtual IP address or the FQHN (whichever is accessible) of the CP server: cps<number>=[virtual_ip_address/virtual_host_name]:port, where the port is optional; the default port value is 443. If you have configured multiple virtual IP addresses or host names over different subnets, you can specify them as comma-separated values. For example: cps1=[192.168.0.23],[192.168.0.24]:58888,[cps1.company.com] Note: Whenever coordinator disks are used in an I/O fencing configuration, you must create a disk group (vxfencoorddg) and specify it in the /etc/vxfenmode file. Additionally, the customized fencing framework generates the /etc/vxfentab file, which specifies the security setting and the coordination points (all the CP servers and the disks from the disk group specified in the /etc/vxfenmode file). |
| port | Default port for the CP server to listen on. If you have not specified port numbers for individual virtual IP addresses or host names, the CP server uses the default port 443 for each of them. You can change this default port value by using the port parameter. |
| single_cp | A value of 1 indicates that server-based fencing uses a single highly available CP server as its only coordination point. A value of 0 indicates that server-based fencing uses at least three coordination points. |
| vxfen_honor_cp_order | Set the value to 1 for the vxfen module to use a specific order of coordination points during a network partition scenario. This parameter is disabled by default; the default value is 0. |
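To tie the parameters in the table together, here are two hypothetical /etc/vxfenmode fragments. They are sketches only: the host name cpserver.example.com is a placeholder, and only the disk group name vxfencoorddg comes from this guide. The first sketch combines one CP server with a two-disk coordinator disk group and a preferred coordination-point order:

```
# Sketch: one CP server plus a coordinator disk group, with ordered preference
vxfen_mode=customized
vxfen_mechanism=cps
scsi3_disk_policy=dmp             # required because coordinator disks are used
vxfen_honor_cp_order=1            # use the coordination points in the order listed
cps1=[cpserver.example.com]:443   # the CP server is tried first
vxfendg=vxfencoorddg              # disk group containing two SCSI-3 coordinator disks
```

The second sketch shows the single-coordination-point case that the single_cp parameter enables, again with a placeholder host name:

```
# Sketch: a single highly available CP server as the only coordination point
vxfen_mode=customized
vxfen_mechanism=cps
single_cp=1
cps1=[cpserver.example.com]:443
```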