These release notes support Version 1.1.2 of the ptx/CTC software intended for use with IBM NUMA-Q Symmetry® 2000, Symmetry 5000, and NUMA-Q™ 2000 computer systems. ptx/CTC manages mission-critical applications during and after cluster transitions.
Read this document before you install or run this release of ptx/CTC. If you encounter problems with ptx/CTC, refer to the "Problem Report Summary" section for descriptions of known problems with this version of the software.
This version of ptx/CTC can be used with the following systems:
The following products must be installed on your system to use V1.1.2 of ptx/CTC:
DYNIX/ptx® V4.4.4 or later.
ptx/CLUSTERS V2.1.0 or later.
ptx/SVM V2.1.0 or later, if using ptx/CTC for ptx/SVM failover.
ptx/TCP/IP V4.5.1 or later, if using ptx/CTC for IP failover.
ptx/CFS V1.0.1, if using ptx/CTC for CFS object failover.
With V1.1.2, you can now configure and maintain your ptx/CTC database from the CommandPoint Clusters™ graphical user interface. For information on how to set up CommandPoint Clusters, refer to the CommandPoint Clusters Release Notes. The ptx/CTC Administration Guide contains a new chapter called "Using CommandPoint Clusters." In addition, each ptx/CTC screen and dialog box in CommandPoint Clusters contains online help.
ptx/CTC V1.1.2 now supports user configuration of the netmask value in the IPADDR method. This flexibility allows the IPADDR method to obtain a netmask value from the ptx/CTC Interface File (/usr/ctc/etc/ctc_ip.tab) for use in the ifconfig command.
The netmask value must be the sixth field in the ctc_ip.tab file; if no value is present, the default of 255.255.255.0 is used as the netmask.
Following is an example of the ctc_ip.tab file with the netmask value included:
#NOTE: Field 6 is optional. The default Netmask value is 255.255.255.0
#Field1    Field2        Field3           Field4                   Field5      Field6
#Node      CTC Object    IP Address       DomainName               Interface   Netmask
#====================================================================================
#buster    spot          148.10.172.50    spot.org.company.com     pe0         255.255.0.0
#buster    fido          148.10.172.26    fido.org.company.com     pe0         255.255.0.0
#brown     spot          148.10.172.50    spot.org.company.com     pe0         255.255.0.0
#brown     fido          148.10.172.26    fido.org.company.com     pe0         255.255.0.0
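For reference, with the first example entry above, the IPADDR method would fold the netmask into an ifconfig invocation along these general lines (an illustrative sketch using standard ifconfig syntax; the exact options the method passes are internal to ptx/CTC):

# ifconfig pe0 148.10.172.50 netmask 255.255.0.0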
The database must be accessible to all nodes in the cluster and should be stored on a raw shared ptx/SVM volume or a raw shared partition. It is strongly recommended that the database reside on a mirrored, shared ptx/SVM volume as a safeguard against disk failure. Each object in the database uses approximately 2,000 bytes of space. In general, the database should be large enough to hold the maximum number of objects in your configuration, plus room for future growth.
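For example, a hypothetical configuration of 100 objects would need roughly 100 x 2,000 bytes = 200,000 bytes (about 200 KB), plus whatever headroom you expect to need for objects added later.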
For installation instructions, refer to the DYNIX/ptx and Layered Products Software Installation Release Notes.
Note that ptx/CLUSTERS V2.1.0 or later must already be installed or must be installed at the same time as ptx/CTC V1.1.2.
The following problem reports are outstanding for V1.1.2 of ptx/CTC.
Workaround. Do not give two ptx/CTC objects the same name.
Workaround. Since the ctcadm command is run as root, ctcadm inherits the root environment when performing operations such as start, stop, and verify. Perform failover tests manually with the root environment by first typing su - root.
Workaround. Always install ptx/CLUSTERS and ptx/CTC together. If you have already installed ptx/CLUSTERS, re-install ptx/CTC from the CD-ROM so that the menus will reappear.
Workaround. Stop ptx/CTC, set the database location, and then restart ptx/CTC. This procedure is described in the ptx/CTC Administration Guide, in the section titled "Creating Your ptx/CTC Configuration."
ptx/CTC V1.1.x is a port of ptx/CTC V1.0 to DYNIX/ptx V4.4.2 (or later), ptx/CLUSTERS V2.0.2 (or later), and ptx/SVM V2.0.2 (or later).
Note that if you are currently using non-ptx/CTC cluster failover scripts, IBM NUMA-Q recommends that you first install ptx/CTC V1.0. Once all failover scripts have been converted and verified using ptx/CTC V1.0, then perform the migration to ptx/CTC V1.1.2.
ptx/CTC maintains a database that must be accessible to all nodes in the cluster. IBM NUMA-Q recommends that it be stored on a raw shared ptx/SVM volume or a raw shared partition. If the database is stored on a ptx/SVM volume, then you will need to migrate to the new volume naming convention used by ptx/SVM V2.1.1.
To update the path to your ptx/CTC database, select the Set Database Location option located in the Administrative Operations menu. You will need to perform this step on each node in the cluster.
If you use any customized ptx/CTC failover scripts that contain references to immbroker, CMA, or Active Monitor, you will need to rewrite these scripts to use the new ptx/CLUSTERS V2.1.1 commands. In addition, these scripts were previously located in the trans.d directory; they are now located in /etc/avail_trans.d.
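As a purely illustrative sketch (the script name is hypothetical, and the old directory is assumed here to be /etc/trans.d, since the release notes identify it only as trans.d), relocating a rewritten script might look like this:

# mv /etc/trans.d/S50myapp /etc/avail_trans.d/S50myapp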
If any of your failover scripts contain references to ptx/SVM volumes using the ptx/SVM V1.x naming conventions, you will need to rewrite these scripts to use the naming conventions used in ptx/SVM V2.1.1.
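As a purely illustrative sketch (the volume and disk group names here are hypothetical; see the ptx/SVM V2.1.1 Release Notes for the actual convention), a script reference such as /dev/vx/rdsk/ctcvol under the V1.x naming might become a disk-group-qualified path such as /dev/vx/rdsk/ctcdg/ctcvol under the V2.x naming.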
For more information on migrating to ptx/CLUSTERS V2.1.1 and ptx/SVM V2.1.1, refer to the ptx/CLUSTERS V2.1.1 Release Notes and the ptx/SVM V2.1.1 Release Notes.
Perform the following steps using V1.0 of ptx/CTC.
Stop all ptx/CTC objects on all nodes in the cluster using the following command:
# /usr/ctc/bin/ctcadm
This step is not mandatory; however, the migration process does not maintain the dynamic status information of objects. If you do not perform this step, you may have to set each object's correct status manually after you have installed the new version of ptx/CTC.
Create a temporary file that contains a copy of the object definitions in your ptx/CTC V1.0 database using the following command:
# /usr/ctc/bin/ctcadm -m list > filename
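For example, to save the definitions to a hypothetical file named /tmp/ctc_v10_objects:

# /usr/ctc/bin/ctcadm -m list > /tmp/ctc_v10_objects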
This step can be performed on any node that is currently in the cluster and that will remain in the cluster after you have installed ptx/CTC. You may also perform this step on multiple nodes in the cluster if you want more than one copy of the temporary database file.
Stop ptx/CTC on all nodes in your cluster. You can do this using the Stop ptx/CTC option, which is located on the CTC Administration menu. If possible, perform this step on all nodes that are part of the cluster. If for some reason you are unable to perform this step on a particular node (for example, because the node is not in the cluster), you can skip this step for that node for the time being.
Perform the following steps after installing ptx/CTC V1.1.2 on all nodes in the cluster.
Start ptx/CTC V1.1.2 on one of the nodes in the cluster using the Start ptx/CTC option, which is located in the CTC Configuration menu.
Answer yes when ptx/CTC asks you if you want to recreate the ptx/CTC database. (The actual message will say the database may be corrupt.)
Start ptx/CTC on all other nodes that are members of the cluster. If there are nodes that are not currently in the cluster, then you will need to start ptx/CTC on those nodes when they do join the cluster.
Load the ptx/CTC database from the temporary file you created in the previous section. Perform this step from the node where the file resides using the following command:
# /usr/ctc/bin/ctcadm loaddb filename
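For example, if you saved the definitions to the hypothetical file /tmp/ctc_v10_objects used in the earlier example:

# /usr/ctc/bin/ctcadm loaddb /tmp/ctc_v10_objects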
The restoration from filename restores only the static object definitions into the new ptx/CTC database. It does not restore the dynamic status information for each object (whether the object is disabled or running).
Set each object's status using the following command:
# /usr/ctc/bin/ctcadm setstatus object status
where object is the name of the object and status is either STARTED or STOPPED.
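For example, to mark a hypothetical object named ora_db as started:

# /usr/ctc/bin/ctcadm setstatus ora_db STARTED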
Or you can issue the following command on each node in the cluster and let ptx/CTC automatically update the database:
# /usr/ctc/bin/ctcadm verify
To set the status for a disabled object on a particular node, use the following command:
# /usr/ctc/bin/ctcadm disable object_name node=node_name
To set the status of objects on nodes that have had ptx/CTC disabled, use the following command:
# /usr/ctc/bin/ctcnode disable node=node_name
For more information about the ctcadm and ctcnode commands, refer to the ptx/CTC Administration Guide or the respective man pages.
The following IBM NUMA-Q manual is available with the ptx/CTC V1.1.2 release:
ptx/CTC Administration Guide