Storage Foundation for Sybase ASE CE 7.4.1 Configuration and Upgrade Guide - Linux
- Section I. Configuring SF Sybase ASE CE
- Preparing to configure SF Sybase CE
- Configuring SF Sybase CE
- Configuring the SF Sybase CE components using the script-based installer
- Configuring the SF Sybase CE cluster
- Configuring SF Sybase CE in secure mode
- Configuring a secure cluster node by node
- Configuring the SF Sybase CE cluster
- Configuring SF Sybase CE clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Performing an automated SF Sybase CE configuration
- Performing an automated I/O fencing configuration using response files
- Configuring a cluster under VCS control using a response file
- Section II. Post-installation and configuration tasks
- Section III. Upgrade of SF Sybase CE
- Planning to upgrade SF Sybase CE
- Performing a full upgrade of SF Sybase CE using the product installer
- Performing an automated full upgrade of SF Sybase CE using response files
- Performing a phased upgrade of SF Sybase CE
- Performing a phased upgrade of SF Sybase CE from version 6.2.1 and later release
- Performing a rolling upgrade of SF Sybase CE
- Performing post-upgrade tasks
- Section IV. Installation and upgrade of Sybase ASE CE
- Installing, configuring, and upgrading Sybase ASE CE
- Preparing to configure the Sybase instances under VCS control
- Installing, configuring, and upgrading Sybase ASE CE
- Section V. Adding and removing nodes
- Adding a node to SF Sybase CE clusters
- Adding the node to a cluster manually
- Setting up the node to run in secure mode
- Adding the new instance to the Sybase ASE CE cluster
- Removing a node from SF Sybase CE clusters
- Section VI. Configuration of disaster recovery environments
- Section VII. Installation reference
- Appendix A. Installation scripts
- Appendix B. Sample installation and configuration values
- Appendix C. Tunable files for installation
- Appendix D. Configuration files
- Sample main.cf files for Sybase ASE CE configurations
- Appendix E. Configuring the secure shell or the remote shell for communications
- Appendix F. High availability agent information
About using the postcheck option
You can use the installer's postcheck option to identify installation-related problems and to aid in troubleshooting.
Note:
This command option requires downtime for the node.
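For example, a minimal sketch of running the check, where sys1 and sys2 are placeholder system names; run the installer script from your installation media, or the product installation script under /opt/VRTS/install:
# ./installer -postcheck sys1 sys2
The installer reports the problems it finds on the specified systems.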
The postcheck option can help you troubleshoot the following VCS-related issues:
The heartbeat link does not exist.
The heartbeat link cannot communicate.
The heartbeat link is a part of a bonded or aggregated NIC.
A duplicated cluster ID exists (if LLT is not running at the check time).
The VRTSllt pkg version is not consistent on the nodes.
The llt-linkinstall value is incorrect.
The /etc/llthosts and /etc/llttab configuration is incorrect.
The /etc/gabtab file is incorrect.
The GAB linkinstall value is incorrect.
The VRTSgab pkg version is not consistent on the nodes.
The main.cf file or the types.cf file is invalid.
The /etc/VRTSvcs/conf/sysname file is not consistent with the hostname.
The cluster UUID does not exist.
The uuidconfig.pl file is missing.
The VRTSvcs pkg version is not consistent on the nodes.
The /etc/vxfenmode file is missing or incorrect.
The /etc/vxfendg file is invalid.
The vxfen link-install value is incorrect.
The VRTSvxfen pkg version is not consistent.
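Many of these LLT, GAB, and fencing conditions can also be checked manually with the standard utilities; the output details vary by release. For example, to verify that the heartbeat links exist and can communicate:
# lltstat -nvv
To verify GAB port membership and that GAB is configured:
# gabconfig -a
To display the I/O fencing mode and the state of the vxfen driver:
# vxfenadm -d
To compare package versions, run the following on each node:
# rpm -q VRTSllt VRTSgab VRTSvcs VRTSvxfen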
The postcheck option can help you troubleshoot the following SFHA or SFCFSHA issues:
Volume Manager cannot start because the /etc/vx/reconfig.d/state.d/install-db file has not been removed.
Volume Manager cannot start because the volboot file is not loaded.
Volume Manager cannot start because no license exists.
Cluster Volume Manager cannot start because the CVM configuration is incorrect in the main.cf file. For example, the AutoStartList value is missing on the nodes.
Cluster Volume Manager cannot come online because the node ID in the /etc/llthosts file is not consistent.
Cluster Volume Manager cannot come online because Vxfen is not started.
Cluster Volume Manager cannot start because GAB is not configured.
Cluster Volume Manager cannot come online because of a CVM protocol mismatch.
The Cluster Volume Manager group name has changed from "cvm", which causes CVM to go offline.
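If the postcheck option flags one of these CVM conditions, you can inspect the corresponding state by hand with standard VxVM and VCS commands. For example, to confirm whether the install-db file is still present (Volume Manager does not start while it exists):
# ls -l /etc/vx/reconfig.d/state.d/install-db
To show whether the node has joined the cluster as a CVM master or slave:
# vxdctl -c mode
To display the CVM protocol version in use:
# vxdctl protocolversion
To display the state of the cvm service group on each node:
# hagrp -state cvm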
You can use the installer's post-check option to perform the following checks:
General checks for all products:
All the required RPMs are installed.
The versions of the required RPMs are correct.
There are no verification issues for the required RPMs.
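You can reproduce the general RPM checks with the native rpm utility; the package names below are representative examples, not an exhaustive list. To confirm that required RPMs are installed and to read their versions:
# rpm -q VRTSvxvm VRTSvxfs VRTSvcs
To report verification issues for a package, such as modified or missing files:
# rpm -V VRTSvxvm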
Checks for Volume Manager (VM):
Lists the daemons which are not running (vxattachd, vxconfigbackupd, vxesd, vxrelocd, ...).
Lists the disks which are not in 'online' or 'online shared' state (vxdisk list).
Lists the diskgroups which are not in 'enabled' state (vxdg list).
Lists the volumes which are not in 'enabled' state (vxprint -g <dgname>).
Lists the volumes which are in 'Unstartable' state (vxinfo -g <dgname>).
Lists the volumes which are not configured in /etc/fstab.
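These Volume Manager checks map to the commands that the list above names. For example, where mydg is a placeholder disk group name:
# vxdisk list
# vxdg list
# vxprint -g mydg
# vxinfo -g mydg
To see which VxVM volumes have entries in /etc/fstab:
# grep /dev/vx/dsk /etc/fstab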
Checks for File System (FS):
Lists the VxFS kernel modules which are not loaded (vxfs/fdd/vxportal).
Whether all VxFS file systems present in the /etc/fstab file are mounted.
Whether all VxFS file systems present in /etc/fstab are in disk layout 9 or higher.
Whether all mounted VxFS file systems are in disk layout 9 or higher.
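You can confirm the same file system conditions manually. To list the VxFS kernel modules that are loaded:
# lsmod | grep -E 'vxfs|fdd|vxportal'
To list the VxFS file systems that are currently mounted:
# mount -t vxfs
To display the disk layout version of a mounted file system, where /mnt/data is a placeholder mount point:
# /opt/VRTS/bin/vxupgrade /mnt/data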
Checks for Cluster File System:
Whether FS and ODM are running at the latest protocol level.
Whether all mounted CFS file systems are managed by VCS.
Whether the cvm service group is online.
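To check whether all mounted CFS file systems are managed by VCS, you can compare the CFSMount resources that VCS knows about against the currently mounted VxFS file systems; the cvm group state itself can be read with hagrp, as shown earlier:
# hares -list Type=CFSMount
# mount -t vxfs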