These release notes support CommandPoint™ Clusters V2.2.1. They contain information on product compatibility, release-specific installation instructions, an overview of the product, how to get started with the product, and known problems.
Read this document before you install or run this release of CommandPoint Clusters. For complete information on ptx®/CLUSTERS and ptx/CTC, consult the ptx/CLUSTERS Administration Guide and the ptx/CTC Administration Guide, as well as the online help for CommandPoint Clusters.
This release of CommandPoint Clusters includes the following additions and modifications.
This release of CommandPoint Clusters lets you create a ptx/CTC database by selecting one of the shared devices or ptx/SVM volumes listed on the "Set CTC Database Location" form. This form appears when you right-click the CTC Database icon, and then left-click the "Set CTC Database Location" option. For more information on this option, see the section "Create a ptx/CTC Database."
The menu bar now includes a pull-down for Options, which includes the Set Refresh Interval option. When you select this option, the "Set Data Refresh Interval" window appears, which lets you change the default value for the clusters and EES data polling intervals. The "Set Data Refresh Interval" window appears when any cluster object is selected. For more information on this option, see the section "Change the Data Refresh Interval."
CommandPoint Clusters is a standalone DYNIX/ptx® or Windows NT® application for administering ptx/CLUSTERS and ptx/CTC. It is an optional application that provides the same level of administrative capabilities as the ptx/ADMIN® menus and forms for ptx/CLUSTERS and ptx/CTC. CommandPoint Clusters can be used instead of, or in conjunction with, ptx/ADMIN and the command-line utilities for administering a cluster and managing failover.
A cluster must be installed and running before you can begin to use CommandPoint Clusters; however, you can create new ptx/CTC failover rules for the cluster (or modify existing ptx/CTC rules) using CommandPoint Clusters.
CommandPoint Clusters provides all the functionality of the ptx/CLUSTERS and ptx/CTC command-line utilities and ptx/ADMIN menu forms. It is a Java-based graphical user interface that is easy to use and lets you administer all nodes from one interface. It also provides the following functionality that is not available in the standard ptx/CLUSTERS or ptx/CTC administration utilities:
Real-time monitoring. Every 30 seconds, CommandPoint Clusters checks the cluster and the ptx/CTC databases for any changes, and updates its screens accordingly. You can also force CommandPoint Clusters to check for changes and update its screens before the 30-second interval.
Command broadcasting across cluster nodes. Through CommandPoint Clusters, you can issue DYNIX/ptx commands on all nodes simultaneously; CommandPoint Clusters then displays the command results. You can also choose to issue a DYNIX/ptx command on any subset or combination of nodes.
Viewing of EES events. CommandPoint Clusters lets you view EES (error event subsystem) output at any time during a CommandPoint Clusters session. All serious and critical events from all nodes are interleaved based on timestamp. EES must be installed for this feature to be activated.
Viewing of "behind the scenes" commands. CommandPoint Clusters can display the behind-the-scenes commands it uses when you perform an operation.
Issuing commands in "dry-run mode". "Dry-run mode" generates the commands to perform the specified operations, but does not run the commands. Dry-run mode lets you review command syntax, verify that the operation you wish to perform is correct, or generate syntax for later use in scripts. A user logged in with administrative privileges who enables dry-run mode effectively becomes a non-administrative user. (A user logged in with non-administrative privileges cannot access the dry-run mode option, because non-administrative access is equivalent to dry-run mode.)
Command viewing and dry-run mode together provide a learning environment for new or inexperienced system administrators.
The following software products are required, unless specified otherwise, on the IBM NUMA-Q host that will be running the CommandPoint Clusters server or client software:
DYNIX/ptx V4.5.1 or later (required)
ptx/CLUSTERS V2.2.1 or later (required)
ptx/SVM V2.2.1 or later (required)
ptx/LAN V4.7.1 or later (required)
ptx/TCP/IP V4.6.1 or later (required)
ptx/CTC V1.1.4 or later (required)
ptx/XWM V4.6.1 (required only if you will be running CommandPoint Clusters as a DYNIX/ptx application)
CommandPoint Base V4.5.1 (required if you will be running the client (GUI) on DYNIX/ptx; without CommandPoint Base, the client will not run on DYNIX/ptx)
ptx/NFS® V4.7.1 or later (optional)
EES V1.1.1 or later (required if you want to utilize the Event Viewing capability of CommandPoint Clusters)
If you plan to use Windows NT to administer a cluster from a PC, then the following requirements apply to the PC that you will be using. These requirements are in addition to the DYNIX/ptx requirements listed in the previous section, except for ptx/XWM and EES, which are needed on DYNIX/ptx platforms only.
CommandPoint Clusters is compatible with the following software products running on a PC:
Windows NT 4.0
eXceed V5.1.3.0 or V6.0.1.15 (This is required if you plan to launch CommandPoint Clusters from CommandPoint Admin or run CommandPoint Clusters on NT as a DYNIX/ptx application; see the section entitled "Launching CommandPoint Clusters or CommandPoint SVM" in the CommandPoint Admin Overview and Release Notes for more information.)
ATTENTION You will need about 1.4 MB free space on your PC to install CommandPoint Clusters, plus an additional 2.6 MB if the Java Runtime Environment (JRE) is not already installed. If CommandPoint SVM or CommandPoint Clusters is already installed on your PC, then the JRE is already installed. You will also need about 7 MB for the CommandPointClusters2.2.1.exe file.
ATTENTION Do not use the VCS console on a NUMA-Q system to run CommandPoint Clusters.
If you plan to use the Windows NT-compatible client to administer your IBM NUMA-Q system, then the following minimum requirements apply to the PC that you will be using. (These requirements are in addition to the DYNIX/ptx requirements listed in the previous section.)
ATTENTION The minimum configuration will allow you to run all three CommandPoint products (CommandPoint Admin, CommandPoint Clusters, and CommandPoint SVM) simultaneously. However, performance will be better if you use the recommended hardware configuration.
Whichever configuration you choose, system performance will be affected by the number and size of the applications that you run simultaneously.
The minimum hardware requirements are:
The recommended hardware requirements are:
266MHz (Pentium) (or faster) CPU
64 MB Memory
1280x1024 Video
21" Display
CD-ROM drive
Network Interface Card
The CommandPoint Clusters software must be installed on each cluster node, and can also be installed on a PC running Windows NT.
The CommandPoint Clusters software for DYNIX/ptx is located on the DYNIX/ptx Layered Products Software CD, Volume 2. Other required software for use with CommandPoint Clusters (including DYNIX/ptx, ptx/CLUSTERS, ptx/CTC and ptx/XWM) is located on the DYNIX/ptx Operating System and Layered Products Software CD, Volume 1. You can install all of these software products through the installation process described in the DYNIX/ptx and Layered Products Software Installation Release Notes. The product to select for installation is: cpclusters.
As shipped, CommandPoint Clusters is pre-configured to allow members of group cpclus and group cpgroup to run CommandPoint Clusters and the other CommandPoint products, respectively.
As root, you need to create the group cpclus (and cpgroup if you are going to run other CommandPoint products and cpgroup does not already exist), and add yourself to the group(s).
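For example, the resulting /etc/group entries might look like the following; the group IDs (105, 106) and member names (jsmith, akumar) are illustrative only, so substitute values appropriate to your site:

cpclus::105:jsmith,akumar
cpgroup::106:jsmith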
ATTENTION CommandPoint Clusters requires 6MB of free disk space.
To load the CommandPoint Clusters client software on a Windows NT system, load the DYNIX/ptx Layered Products Software CD, Volume 2, and run cpclusters\setup.exe.
Follow the standard installation procedure.
ATTENTION During the Windows NT installation of CommandPoint Clusters, you will be prompted to respond to a licensing query. In order to install CommandPoint Clusters, you will need to agree to the terms of the electronic license by clicking on the Yes button which will allow the installation to proceed. (If you answer No, the installation will stop).
Answering Yes allows you to install and use the Windows NT Client component of CommandPoint Clusters on one or more workstations which will be used for the purpose of administering IBM NUMA-Q systems. In this regard, the Windows NT Client component is an exception to Paragraph 1.2 of the Software License Agreement for CommandPoint Clusters.
To install CommandPoint Clusters software for Windows NT clients via ftp, perform the following steps:
ATTENTION You must set the ftp file transfer type to binary to support binary image transfer. Do not use ascii file transfer type.
Use ftp to transfer the file /opt/commandpoint/cpclusters/CommandPointClusters2.2.1.exe to a temporary directory on your system (see the example session following these steps).
Double click the CommandPointClusters2.2.1.exe file.
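As an illustration, a complete transfer session from a Windows NT command prompt might look like the following; the cluster node name numaq1 and the C:\temp directory are hypothetical:

C:\temp> ftp numaq1
ftp> binary
ftp> get /opt/commandpoint/cpclusters/CommandPointClusters2.2.1.exe
ftp> bye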
CommandPoint Clusters uses the following authentication mechanisms:
All users must have an account on the DYNIX/ptx machine.
The user name and password are authenticated on the server prior to allowing access on the client.
Both the UNIX crypt algorithm and the DES encryption algorithm are used.
During authentication, the clear-text password is not sent over the network.
Direct root login is not allowed in order to preserve traceability.
A user must be explicitly authorized through special privilege groups to run CommandPoint Clusters. A special configuration file, /opt/commandpoint/cpclusters/cpclustersd.config, is installed with CommandPoint Clusters and contains entries for default privilege groups. The administrator must add users to these privilege groups before they can access CommandPoint Clusters.
CommandPoint Clusters provides two levels of authorized access:
Administrative access is available to users who have a valid account on all nodes of the cluster (with the same login name) and who are also members of one of the administrative privilege groups (by default, cpclus or cpgroup).
Monitoring (or read-only) access is available only to those users who have a valid account on all nodes of the cluster (with the same login name) and who are also members of the cpclusro privilege group.
The /opt/commandpoint/cpclusters/cpclustersd.config file is installed on the system when CommandPoint Clusters is installed. It contains the following default entries:
# define access to CommandPoint Clusters
cpclusters.adminGroup=cpgroup,cpclus
cpclusters.readOnlyGroup=cpclusro
The privilege groups specified in the default config file and their associated access levels are:

cpgroup     Administrative access
cpclus      Administrative access
cpclusro    Monitoring (read-only) access
Users have no access to CommandPoint Clusters or other CommandPoint applications until they are added to a privilege group that is listed in the config file. Use the ptx/ADMIN menu system or CommandPoint Admin to add users to the privilege groups, as you would add users to other DYNIX/ptx groups.
You can create privilege groups other than those specified in the cpclustersd.config file that is installed with CommandPoint Clusters. Once you create the groups, add them to the /opt/commandpoint/cpclusters/cpclustersd.config file on each cluster node.
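For example, if you created a hypothetical administrative group named clusadm and a hypothetical read-only group named clusview, the updated entries in the config file (using the same comma-separated list format as the default file) would read:

# define access to CommandPoint Clusters
cpclusters.adminGroup=cpgroup,cpclus,clusadm
cpclusters.readOnlyGroup=cpclusro,clusview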
Before you can use CommandPoint Clusters to monitor and administer ptx/CLUSTERS, the CommandPoint Clusters server must be running on each node, and then you must start CommandPoint Clusters on one of the cluster nodes running DYNIX/ptx or from Windows NT on a remote PC.
Verify that the CommandPoint Clusters server is running on each node by entering the following command (as root) on each node:
$ /opt/commandpoint/cpclusters/cpcSvr.sh query
If the server is running, you will be returned the message:
CommandPoint Clusters server (cpclustersd) currently running
If the server is not running, you will be returned the message:
CommandPoint Clusters server (cpclustersd) not running
To start the server, issue the following command (as root) on each node:
$ /opt/commandpoint/cpclusters/cpcSvr.sh
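If you manage many nodes, a Bourne-shell fragment such as the following (a sketch keyed to the query messages shown above) starts the server only when it is not already running; run it as root on each node:

if /opt/commandpoint/cpclusters/cpcSvr.sh query | grep 'not running' > /dev/null
then
    # server is not running; start it
    /opt/commandpoint/cpclusters/cpcSvr.sh
fi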
ATTENTION Before starting CommandPoint Clusters, you must correctly set your DISPLAY environment variable.
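For example, to direct the display to a workstation with the hypothetical host name mypc, enter the following Bourne-shell commands before starting the application:

$ DISPLAY=mypc:0.0
$ export DISPLAY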
To start CommandPoint Clusters on a node running DYNIX/ptx, enter the following command:
$ /usr/bin/cpclusters
ATTENTION It can take a few seconds for the Login dialog box to appear regardless of the platform from which you are running CommandPoint Clusters. Note that if you are running CommandPoint Clusters on a Symmetry platform, it can take even longer, possibly a minute or more.
You will then be presented with the Cluster Login window. See the section below entitled "Getting Started" for information on how to proceed.
To start CommandPoint Clusters on Windows NT to administer a remote DYNIX/ptx cluster, choose Start -> Programs -> Sequent -> CommandPoint Clusters V2.2.1 from the Windows NT task bar or from wherever you installed the application. Alternatively, you can double-click the CommandPoint Clusters program icon.
ATTENTION It can take a few seconds for the Login dialog box to appear regardless of the platform from which you are running CommandPoint Clusters.
You will then be presented with the Cluster Login window. See the section below entitled "Getting Started" for information on how to proceed.
This section explains how to log in to CommandPoint Clusters, describes the forms and dialog boxes of CommandPoint Clusters, and tells how to navigate through the interface.
Once you have started CommandPoint Clusters from DYNIX/ptx or as a Windows NT application, the following Cluster Login window appears:
At the Host Name prompt, enter the complete name of the cluster node to which you are connected.
Enter your login at the Login prompt and password at the Password prompt.
When you have completed the Cluster Login form, click the OK button. CommandPoint Clusters will then attempt to log you in on all the cluster nodes.
If you successfully log in to all cluster nodes, the CommandPoint Clusters splash screen will appear, followed by the CommandPoint Clusters main screen.
Error conditions that can occur during the login process are:
Connection Lost. Connection has been lost to all cluster nodes and CommandPoint Clusters is automatically terminated when you click OK. Possible causes of this error are:
Connection Error. Connection to one or more cluster nodes could not be made. The CommandPoint Clusters client will come up but will display the "no connection" icon on the node(s). Possible causes of this error are:
The CommandPoint Clusters server was not started. See "Start CommandPoint Clusters" for information on starting the server.
The node listed in the error message was not at run-level 2.
The network may be having problems.
Authentication Error. The login authentication process failed on one or more nodes. CommandPoint Clusters will be automatically terminated when you click OK. Possible causes of this error are:
The node joined the cluster after the client was started. The user does not have proper credentials (login and password) on this node.
The node was in the cluster and was rebooted (or somehow lost connection to the CommandPoint Clusters client). Upon rejoining the cluster or reconnecting to the CommandPoint Clusters client, the login authentication failed. The user account on the system may have changed.
ATTENTION The user must have the same login and password on all nodes in the cluster in order to run CommandPoint Clusters.
The following is an example of a CommandPoint Clusters main screen.
A list of cluster objects is located on the left of the CommandPoint Clusters main screen. Cluster objects consist of the cluster itself, each cluster node, and ptx/CTC database information configured through ptx/CTC. Each object is designated by an icon. The object that is outlined by a bold rectangle is the current object, and the information to the right of the list of cluster objects pertains to the current object.
By default, when you first log in to CommandPoint Clusters, the selected object is the cluster, and cluster information is displayed. To select a different object, left-click the object's icon or name. Information about that object will appear to the right of the cluster objects list.
If you have administrative privileges, you can change cluster and node settings, as well as create and modify ptx/CTC entries. To change cluster, node, or ptx/CTC entries, left-click the desired object, then right-click. A pop-up menu appears. Select the Properties entry from the list by left clicking on it.
The menu bar, which includes pull-downs for File, View, Tasks, Options, and Help, lets you select some CommandPoint Clusters tasks. These tasks vary, depending on which cluster object is selected. Also, some tasks may not be available to you if you are not logged in as an administrative user.
The File pull-down menu provides the Exit item. Choose this item to log out of CommandPoint Clusters.
The View pull-down menu presents these four functions when any cluster object is selected:
Task Window. This option brings up the Tasks Window, which lists all tasks and their status for the current CommandPoint Clusters session.
Event Log. This function lets you view EES (error event subsystem) output any time during a CommandPoint Clusters session. The EES viewer reports all serious and critical events from all nodes. The events from the nodes are interleaved based on timestamp. The events that are displayed are those that occurred since the window was opened; no prior events are displayed. EES must be installed for this feature to be activated.
Refresh Data. This option forces CommandPoint Clusters to immediately check the cluster and the ptx/CTC databases for changes and to update its screens before the normal 30-second interval for performing these tasks.
Dry Run Mode. This option tells CommandPoint Clusters to generate the commands to perform specified operations, but not run the commands. From an administrative user's point of view, it is the same as running CommandPoint Clusters with non-administrative privileges. Used in conjunction with the Commands option, Dry Run Mode lets you review command syntax, verify that the operation you wish to perform is correct, or generate syntax for later use in scripts.
When dry-run mode is enabled or a user invokes CommandPoint Clusters with non-administrative privileges, each dialog will include "Dry Run mode" in the status bar on the bottom of the Main Window. When commands are run, the Tasks window will show them with the "i" icon instead of the "check" icon, representing information only.
Items that appear in the Tasks pull-down menu vary depending on the cluster object that is selected.
The only cluster task available is Command Broadcast. The task is available when the cluster icon is selected. It lets you issue DYNIX/ptx commands on all nodes simultaneously. CommandPoint Clusters then displays the command results. Command Broadcast also lets you choose to issue a DYNIX/ptx command on one node only. This feature is limited to non-interactive commands (for example, it will not work with vi). It also does not support commands that have more than 30 KB of output.
The following functions are available from the Tasks pull-down menu when a node icon is selected:
Stop All CTC Objects. Stops all ptx/CTC objects on the selected node.
Update All CTC Objects. Checks the status of all ptx/CTC objects on the selected node and then updates the status information for those objects in the ptx/CTC database.
Enable CTC. Enables ptx/CTC on the selected node.
Disable CTC. Disables ptx/CTC on a node so that you can prevent ptx/CTC from automatically taking any actions on that node as a result of an availability transition.
Start CTC. Turns ptx/CTC on. Also starts the ctcd daemon.
Stop CTC. Turns ptx/CTC off.
Load CTC Database. Initializes the database and builds it from the specified file.
Save CTC Database. Saves the ptx/CTC database to a file that can be used for editing the database or as a backup copy.
Command Broadcast. Lets you issue DYNIX/ptx commands on all nodes simultaneously. CommandPoint Clusters then displays the command results. Also lets you choose to issue a DYNIX/ptx command on one node only. This feature works only with non-interactive commands.
The following functions are available from the Tasks pull-down menu when the CTC Database object is selected:
New CTC Object. Lets you create a new CTC object. See the ptx/CTC Administration Guide for details.
Command Broadcast. Lets you issue DYNIX/ptx commands on all nodes simultaneously. CommandPoint Clusters then displays the command results. Also lets you choose to issue a DYNIX/ptx command on one node only. This feature works only with non-interactive commands.
The Options pull-down menu provides the "Set Refresh Interval" function when any cluster object is selected. You can modify the default refresh intervals for cluster and EES data through the Set Data Refresh Interval window. The default data polling interval for the cluster and EES data is 30 seconds. You can reduce the polling time to a minimum of 5 seconds; there is no maximum bound.
When you click on OK, the change takes effect immediately.
The following functions are available from the Help pull-down menu when any CommandPoint Clusters object is selected:
Contents. Provides a listing of the help files for CommandPoint Clusters.
Getting Started. Describes the components of CommandPoint Clusters and how to use the product.
Icon Legend. Lists and defines the icons used in CommandPoint Clusters.
Index. Lists the index entries of the help files for CommandPoint Clusters.
Glossary. Lists and defines the ptx/CLUSTERS and ptx/CTC terms used throughout CommandPoint Clusters.
About CommandPoint Clusters. Provides copyright, trademark, version number, and product release date.
The following icons may appear during a CommandPoint Clusters session:
When you select the cluster object, information about the cluster is displayed in the right-hand portion of your screen. General cluster data, as well as information about the quorum disk, quorum votes, cluster interfaces, and cluster nodes, are displayed.
The following screen shows how the cluster information is displayed.
In the area near the bottom of the screen called "Cluster Nodes," information about one of the nodes is always highlighted. To view the property sheets for the highlighted node, right-click anywhere on the highlighted entry, and left-click on Properties from the drop-down menu that appears. The node property sheets are described in more detail in the section entitled Node Properties.
ATTENTION If you are logged in as an administrative user, you will be able to modify node properties; otherwise, you will only be able to view information about node properties.
To view or modify cluster properties, right-click the cluster object, and then left click on Properties.
There are two cluster property sheets: General and Lock Manager Domains.
The General Properties dialog box is shown below. It lets you view or modify entries for permanent cluster parameters, active cluster parameters, and the quorum disk.
If you modify entries on the General Properties dialog box, you can click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.
If you are logged in as a non-administrative user, none of the modifications will take effect.
The Lock Manager Domains form is shown below. It lets you view information about existing Lock Manager domains. If you have logged in with administrative privileges, the modifications you have made will take effect when you click the OK or Apply button.
To add a Lock Manager domain from the Lock Manager Domains screen, follow these steps. The domain will be added to all current member nodes in the cluster.
Left-click the Add button. The Add New Lock Manager Domain screen appears.
Enter the name of the lock domain you want to add, its owner, and its group. The owner must have a valid entry in the /etc/passwd file, and the group must have a valid entry in the /etc/group file (see the example entries following these steps).
Choose the permissions for the new domain by left-clicking in the appropriate Read, Write, and Execute check boxes.
Choose to instantiate the new domain or wait until the system reboots. By default, the checkbox to instantiate is already checked; to de-select it, left-click the checkbox.
Click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.
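For illustration, and continuing the hypothetical names used earlier in these notes, valid entries for a domain owner jsmith and group cpclus might look like this:

jsmith:*:2001:105:Joe Smith:/home/jsmith:/bin/sh     (/etc/passwd)
cpclus::105:jsmith,akumar                            (/etc/group)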
To delete a Lock Manager domain from the Lock Manager Domains screen, follow these steps. The domain will be deleted only from the selected node in the Lock Manager Domains property sheet.
Select the domain you wish to delete by left-clicking on it. The domain you select will become highlighted.
Delete the domain by left-clicking the Delete button.
Click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.
To modify a Lock Manager domain from the Lock Manager Domains screen, follow these steps.
Select the domain you wish to modify by left-clicking on it. The domain you select will become highlighted. The Modify Lock Manager Domain form appears.
Make modifications, such as permissions, owner, or group name, to the domain.
Click the OK button when you are done modifying the form. You will be returned to the Lock Manager Domains screen.
Click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.
To display information about a particular node, left-click the icon for the node in the Cluster Objects list. Information about the node will be displayed on the right-hand side of your screen.
The following sample screen shows the type of information displayed. This information includes node data, CTC node status, and CTC objects on the node.
With the node icon selected, you can also select a ptx/CTC object from the CTC Objects list on the right-hand portion of the screen, and then right-click to perform one of the following options:
The pop-up menu allows you to perform an action on the selected object on the specified node. For example, you can start or stop ptx/CTC objects on the specified node. For more information about these options, click the online Help button at the bottom of each option's dialog box.
Once you click the OK button to perform an action on the selected object, a Tasks window will appear. This window provides information on the action you requested. Each task in the Tasks window is preceded by an icon that represents the status of the task. The icon for each task corresponds to the information in each task's Status column. For more information about the status of a task (such as why a task failed), click anywhere on the row for the task in the Tasks window, and then click on Details.
To view or modify node properties, either right-click the node object or, if the cluster object is selected, right-click anywhere on a highlighted node entry in the Cluster Nodes area and then left-click on Properties.
There are two node property sheets: General and Network Interfaces.
The General dialog box displays the ID of the node. If you are logged in as the administrative user, you can change the value of the node's ID through the dialog box.
The node ID, or index, must be an integer between 0 and 7, inclusive. Run this form for each node whose ID you wish to change.
Normally, when changing node IDs in a cluster, you need to reboot only the node whose ID you are changing. However, because of a ptx/CLUSTERS software defect, after changing the ID of one, two, or all nodes, you need to reboot all nodes.
The Network Interfaces dialog box displays information about CCIs, including the name and address of each CCI and its state (activated or deactivated).
If you are the administrative user, you can activate or deactivate a CCI. Note, however, that you can add or remove a CCI only by following the ptx/TCP/IP and ptx/LAN procedures described in the ptx/CLUSTERS Administration Guide.
To display information about the ptx/CTC configuration on cluster nodes, left-click the resource failover icon or the words CTC Database. General ptx/CTC data, ptx/CTC node status, and information about ptx/CTC objects on all nodes are displayed.
With the CTC Database icon selected, you can also select a ptx/CTC object from the CTC Objects section in the right-hand part of the screen, and then right click to perform one of the following options:
The pop-up menu allows you to perform an action on the selected ptx/CTC object. For example, the Copy option allows you to copy the selected ptx/CTC object. For more information about these options, click the online Help button at the bottom of each option's dialog box.
The only CTC database property that you can change is the global timeout value. The global timeout value is the amount of time ptx/CTC will wait for an object's start, stop, or verify procedure to complete. The default value for the global timeout period is 3600 seconds (one hour). You can change the global timeout by entering a new value in the global timeout field.
You can also set the refresh interval and designate the ptx/CTC database location by right clicking on the CTC database icon and then left clicking on the appropriate option.
To create a ptx/CTC database, right-click the CTC Database icon and then left-click the "Set CTC Database Location" option. The following form appears:
The database must be accessible by all nodes in the cluster and should be stored on a raw, shared ptx/SVM volume or a raw shared partition. It is strongly recommended that the database reside on a mirrored, shared ptx/SVM volume as a safeguard against disk failure.
Each object in the database takes approximately 2,000 bytes. In general, the database should be large enough to hold the maximum number of objects in your configuration, while leaving enough room for future growth. For example, a configuration of 500 objects requires roughly 1 MB (500 x 2,000 bytes).
The "Nodes" section of the form lists the nodes that belong to the cluster. Click the checkbox for each node on which you want the database to be located.
The "Shared Devices/SVM Volumes" section of the form lists all shared raw devices and shared ptx/SVM volumes on the nodes. Pick one of the devices or volumes and double-click on it. The device name or volume should now appear in the "DB Location" field.
ATTENTION All shared raw devices and shared ptx/SVM volumes on the nodes are listed in the "Shared Devices/SVM Volumes" section of the form, regardless of whether they contain existing data. Use caution when selecting one of the listed devices so that you do not destroy existing data that you need.
By default, CommandPoint Clusters updates cluster data and EES (event logging) data every 30 seconds. However, you can change the default interval values by selecting "Set Refresh Interval" from the Options pull-down menu. When you select the "Set Refresh Interval" option, the following form appears:
The minimum value is 5 (seconds); there is no maximum. When you have made the change(s), click the OK button; the new value(s) will go into effect immediately.
Product documentation for CommandPoint Clusters, in addition to these release notes, includes the online help available with the product, the ptx/CLUSTERS Administration Guide, and the ptx/CTC Administration Guide.
This section lists problem report summaries for problems fixed in this release and for known problems.
The numbers in parentheses identify the problems in the problem-tracking system.
The following problems have been fixed in CommandPoint Clusters.
(249839, 238964) CommandPoint Clusters could not handle newline characters in CTC object definitions.
(249674) There was no way to set up or change the ptx/CTC database. This functionality has been added to CommandPoint Clusters.
(249203) An NT client running CommandPoint Clusters received a "jrew.exe Application Error."
(247850) The CommandPoint Clusters server did not start automatically at boot time, even though this option was specified during CommandPoint Clusters setup.
(246795) When a running task was terminated, the status should have returned "terminated" instead of "failed."
(246791) When a command was run that generated a large amount of output, the Command Broadcast window on the client received an OutOfMemory exception error.
(246267) When creating a new CTC object, a list of "stock" types was displayed. However, no explanation of the types was included in the online help or in the documentation.
(246266) When a non-WLU object was stopped, no warning was displayed alerting the user that stopping a potentially dependent object could have disastrous results on the WLU during failover.
(244281) During a Windows NT installation of CommandPoint Clusters, the software was always installed in the Sequent folder, even if a different folder was selected for the installation and that folder appeared correctly in the final summary of selections.
(244159) The wrong file appeared when Help -> Index was selected while CommandPoint Clusters was run as a DYNIX/ptx application.
(243901) The online help for the "Icon Legend" category was missing the "CTC UP," "CTC DOWN," and "Transition" icons.
(239649) CommandPoint Clusters online help could not be printed from a Windows NT client.
(242664) When CommandPoint Clusters was run as a Windows NT application, the JRE crashed after CommandPoint Clusters ran for several hours.
(239330) When CommandPoint Clusters was run as a DYNIX/ptx application, the splash screen did not cover the main window and the login dialog was also displaced.
(239105) An unnecessary input error dialog appeared on the radio button group.
(238964) There was no way to set up or change the ptx/CTC database. A new option, "Set CTC Database Location," lets you create a ptx/CTC database by selecting from the presented list of shared devices and ptx/SVM volumes.
The following problems have been reported against CommandPoint Clusters.
When CommandPoint Clusters is run as a DYNIX/ptx application, the windowing system's icon is used as the icon when the application is iconified, instead of the CommandPoint Clusters icon.
Workaround: None.
When CommandPoint Clusters is run as a DYNIX/ptx application, a Java exception occurs and is displayed in the window where CommandPoint Clusters is started.
Workaround: None.
When CommandPoint Clusters is run as a DYNIX/ptx application, the command broadcast feature does not display tabs in the result text (output of the issued command). When run as a Windows NT application, the command broadcast feature displays the '?' character instead of tabs.
Workaround: None.