Storage Foundation for Sybase ASE CE 7.4.1 Configuration and Upgrade Guide - Linux
- Section I. Configuring SF Sybase ASE CE
- Preparing to configure SF Sybase CE
- Configuring SF Sybase CE
- About configuring SF Sybase CE
- Configuring the SF Sybase CE components using the script-based installer
- Configuring the SF Sybase CE cluster
- Configuring the cluster name
- Configuring private heartbeat links
- Configuring the virtual IP of the cluster
- Configuring SF Sybase CE in secure mode
- Configuring a secure cluster node by node
- Adding VCS users
- Configuring SMTP email notification
- Configuring SNMP trap notification
- Configuring global clusters
- Configuring SF Sybase CE clusters for data integrity
- Setting up disk-based I/O fencing using installer
- Performing an automated SF Sybase CE configuration
- Performing an automated I/O fencing configuration using response files
- Configuring a cluster under VCS control using a response file
- Section II. Post-installation and configuration tasks
- Section III. Upgrade of SF Sybase CE
- Planning to upgrade SF Sybase CE
- Performing a full upgrade of SF Sybase CE using the product installer
- Performing an automated full upgrade of SF Sybase CE using response files
- Performing a phased upgrade of SF Sybase CE
- About phased upgrade
- Performing a phased upgrade of SF Sybase CE from version 6.2.1 and later releases
- Step 1: Performing pre-upgrade tasks on the first half of the cluster
- Step 2: Upgrading the first half of the cluster
- Step 3: Performing pre-upgrade tasks on the second half of the cluster
- Step 4: Performing post-upgrade tasks on the first half of the cluster
- Step 5: Upgrading the second half of the cluster
- Step 6: Performing post-upgrade tasks on the second half of the cluster
- Performing a rolling upgrade of SF Sybase CE
- Performing post-upgrade tasks
- Section IV. Installation and upgrade of Sybase ASE CE
- Installing, configuring, and upgrading Sybase ASE CE
- Before installing Sybase ASE CE
- Preparing for local mount point on VxFS for Sybase ASE CE binary installation
- Preparing for shared mount point on CFS for Sybase ASE CE binary installation
- Installing Sybase ASE CE software
- Preparing to create a Sybase ASE CE cluster
- Creating the Sybase ASE CE cluster
- Preparing to configure the Sybase instances under VCS control
- Configuring a Sybase ASE CE cluster under VCS control using the SF Sybase CE installer
- Upgrading Sybase ASE CE
- Section V. Adding and removing nodes
- Adding a node to SF Sybase CE clusters
- About adding a node to a cluster
- Before adding a node to a cluster
- Adding the node to a cluster manually
- Starting Veritas Volume Manager (VxVM) on the new node
- Configuring cluster processes on the new node
- Setting up the node to run in secure mode
- Starting fencing on the new node
- Configuring Cluster Volume Manager (CVM) and Cluster File System (CFS) on the new node
- After adding the new node
- Configuring the ClusterService group for the new node
- Adding a node to a cluster using the Veritas InfoScale installer
- Adding the new instance to the Sybase ASE CE cluster
- Removing a node from SF Sybase CE clusters
- Section VI. Configuration of disaster recovery environments
- Section VII. Installation reference
- Appendix A. Installation scripts
- Appendix B. Sample installation and configuration values
- Appendix C. Tunable files for installation
- About setting tunable parameters using the installer or a response file
- Setting tunables for an installation, configuration, or upgrade
- Setting tunables with no other installer-related operations
- Setting tunables with an un-integrated response file
- Preparing the tunables file
- Setting parameters for the tunables file
- Tunables value parameter definitions
- Appendix D. Configuration files
- About sample main.cf files
- Sample main.cf files for Sybase ASE CE configurations
- Sample main.cf for a basic Sybase ASE CE cluster configuration under VCS control with shared mount point on CFS for Sybase binary installation
- Sample main.cf for a basic Sybase ASE CE cluster configuration with local mount point on VxFS for Sybase binary installation
- Sample main.cf for a primary CVM VVR site
- Sample main.cf for a secondary CVM VVR site
- Appendix E. Configuring the secure shell or the remote shell for communications
- Appendix F. High availability agent information
Manually configuring passwordless ssh
The ssh program enables you to log into and execute commands on a remote system. ssh enables encrypted communications and an authentication process between two untrusted hosts over an insecure network.
In this procedure, you first create a DSA key pair. From the key pair, you append the public key from the source system to the authorized_keys file on the target systems.
Figure: Creating the DSA key pair and appending it to target systems illustrates this procedure.
Read the ssh documentation and online manual pages before enabling ssh. Contact your operating system support provider for issues regarding ssh configuration.
Visit the OpenSSH website at http://www.openssh.com/ to access online manuals and other resources.
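Before you begin, an optional sanity check that is not part of the documented procedure is to confirm that OpenSSH is installed on each node and to note its version:
sys1 # ssh -V
sys2 # ssh -V
The ssh -V command prints the OpenSSH version string; any release that ships with supported Linux distributions includes the ssh, ssh-keygen, sftp, ssh-agent, and ssh-add commands that this procedure uses.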
To create the DSA key pair
- On the source system (sys1), log in as root, and navigate to the root directory.
sys1 # cd /root
- To generate a DSA key pair on the source system, type the following command:
sys1 # ssh-keygen -t dsa
System output similar to the following is displayed:
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
- Press Enter to accept the default location of /root/.ssh/id_dsa.
- When the program asks you to enter the passphrase, press the Enter key twice.
Enter passphrase (empty for no passphrase):
Do not enter a passphrase. Press Enter.
Enter same passphrase again:
Press Enter again.
- Output similar to the following lines appears.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
1f:00:e0:c2:9b:4e:29:b4:0b:6e:08:f8:50:de:48:d2 root@sys1
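As an optional check that is not part of the documented steps, you can confirm that the key pair was generated by listing the files in the root user's .ssh directory:
sys1 # ls -l /root/.ssh/id_dsa /root/.ssh/id_dsa.pub
Both files should exist, and the private key file id_dsa should be readable only by root.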
To append the public key from the source system to the authorized_keys file on the target system, using secure file transfer
- From the source system (sys1), move the public key to a temporary file on the target system (sys2).
Use the secure file transfer program.
In this example, the file name id_dsa.pub in the root directory is the name for the temporary file for the public key.
Use the following command for secure file transfer:
sys1 # sftp sys2
If the secure file transfer is set up for the first time on this system, output similar to the following lines is displayed:
Connecting to sys2 ...
The authenticity of host 'sys2 (10.182.00.00)' can't be established.
DSA key fingerprint is fb:6f:9f:61:91:9d:44:6b:87:86:ef:68:a6:fd:88:7d.
Are you sure you want to continue connecting (yes/no)?
- Enter yes.
Output similar to the following is displayed:
Warning: Permanently added 'sys2,10.182.00.00' (DSA) to the list of known hosts.
root@sys2 password:
- Enter the root password of sys2.
- At the sftp prompt, type the following command:
sftp> put /root/.ssh/id_dsa.pub
The following output is displayed:
Uploading /root/.ssh/id_dsa.pub to /root/id_dsa.pub
- To quit the SFTP session, type the following command:
sftp> quit
- Add the id_dsa.pub keys to the authorized_keys file on the target system.
To begin the ssh session on the target system (sys2 in this example), type the following command on sys1:
sys1 # ssh sys2
Enter the root password of sys2 at the prompt:
password:
Type the following commands on sys2:
sys2 # cat /root/id_dsa.pub >> /root/.ssh/authorized_keys
sys2 # rm /root/id_dsa.pub
- Run the following commands on the source installation system. If your ssh session has expired or terminated, you can also run these commands to renew the session. These commands bring the private key into the shell environment and make the key globally available to the user root:
sys1 # exec /usr/bin/ssh-agent $SHELL
sys1 # ssh-add
Identity added: /root/.ssh/id_dsa
This shell-specific step is valid only while the shell is active. You must execute the procedure again if you close the shell during the session.
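Note that sshd, with its default StrictModes setting, ignores keys in the authorized_keys file if that file or the /root/.ssh directory is writable by users other than root. If key-based login still prompts for a password after you complete these steps, an optional troubleshooting step that is not part of the documented procedure is to tighten the permissions on the target system:
sys2 # chmod 700 /root/.ssh
sys2 # chmod 600 /root/.ssh/authorized_keys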
To verify that you can connect to a target system
- On the source system (sys1), enter the following command:
sys1 # ssh -l root sys2 uname -a
where sys2 is the name of the target system.
- The command should execute from the source system (sys1) to the target system (sys2) without the system requesting a passphrase or password.
- Repeat this procedure for each target system.
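On systems where the ssh-copy-id utility is included with OpenSSH, you can optionally use it as a shortcut that combines the transfer and append steps described above into a single command. This is a convenience alternative to, not a replacement for, the documented procedure:
sys1 # ssh-copy-id -i /root/.ssh/id_dsa.pub root@sys2
You are prompted once for the root password of sys2. After the command completes, verify the passwordless connection with the ssh -l root sys2 uname -a command as described above.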