CommandPoint Clusters V2.3.0 Release Notes


Introduction

These release notes support CommandPoint™ Clusters V2.3.0. They contain information on product compatibility, release-specific installation instructions, an overview of the product, how to get started with the product, and known problems.

Read this document before you install or run this release of CommandPoint Clusters. For complete information on ptx®/CLUSTERS and ptx/CTC, consult the ptx/CLUSTERS Administration Guide and the ptx/CTC Administration Guide, as well as the online help for CommandPoint Clusters.


What Is New in this Release

This release of CommandPoint Clusters adds support for 4-node cluster configurations and fixes the software defects listed in "Fixed Problems in CommandPoint Clusters V2.3.0."


About CommandPoint Clusters

CommandPoint Clusters is a standalone DYNIX/ptx® or Windows NT® application for administering ptx/CLUSTERS and ptx/CTC. It is an optional application that provides the same level of administrative capabilities as the ptx/ADMIN® menus and forms for ptx/CLUSTERS and ptx/CTC. CommandPoint Clusters can be used instead of, or in conjunction with, ptx/ADMIN and the command-line utilities for administering a cluster and managing failover.

A cluster must be installed and running before you can begin to use CommandPoint Clusters; however, you can create new ptx/CTC failover rules for the cluster (or modify existing ptx/CTC rules) using CommandPoint Clusters.


Why Use CommandPoint Clusters?

CommandPoint Clusters provides all the functionality of the ptx/CLUSTERS and ptx/CTC command-line utilities and ptx/ADMIN menu forms. It is a Java-based graphical user interface that is easy to use and lets you administer all nodes from one interface. It also provides the following functionality that is not available in the standard ptx/CLUSTERS or ptx/CTC administration utilities:

Command viewing and dry-run mode together provide a learning environment for new or inexperienced system administrators.


Software and Hardware Compatibility Information


DYNIX/ptx Server and Client Requirements

The following software products are required, unless specified otherwise, on the IBM NUMA-Q host that will be running the CommandPoint Clusters server or client software:


Windows NT Client Requirements

If you plan to use Windows NT to administer a cluster from a PC, then the following requirements apply to the PC that you will be using. These requirements are in addition to the DYNIX/ptx requirements listed in the previous section, except for ptx/XWM and EES, which are needed on DYNIX/ptx platforms only.


Software Requirements

CommandPoint Clusters is compatible with the following software products running on a PC:


ATTENTION

You will need about 1.4 MB of free space on your PC to install CommandPoint Clusters, plus an additional 2.6 MB if the Java Runtime Environment (JRE) is not already installed. If CommandPoint SVM or CommandPoint Clusters is already installed on your PC, then the JRE is already installed. You will also need about 9 MB for the CommandPointClusters2.3.0.exe file.



Hardware Requirements


ATTENTION

Do not use the VCS console on a NUMA-Q system to run CommandPoint Clusters.


If you plan to use the Windows NT-compatible client to administer your IBM NUMA-Q system, then the following minimum requirements apply to the PC that you will be using. (These requirements are in addition to the DYNIX/ptx requirements listed in the previous section.)


ATTENTION

The minimum configuration will allow you to run all three CommandPoint products (CommandPoint Admin, CommandPoint Clusters, and CommandPoint SVM) simultaneously. However, performance will be better if you use the recommended hardware configuration.

Whichever configuration you choose to use, the performance of the system will be impacted by the number and size of the applications that you run simultaneously.



Minimum Hardware Configuration

The minimum hardware requirements are:


Recommended Hardware Configuration

The recommended hardware requirements are:


Software Installation

The CommandPoint Clusters software must be installed on each cluster node, and can also be installed on a PC running Windows NT.


DYNIX/ptx Installation

The CommandPoint Clusters software for DYNIX/ptx is located on the DYNIX/ptx Layered Products Software CD, Volume 2. Other required software for use with CommandPoint Clusters (including DYNIX/ptx, ptx/CLUSTERS, ptx/CTC and ptx/XWM) is located on the DYNIX/ptx Operating System and Layered Products Software CD, Volume 1. You can install all of these software products through the installation process described in the DYNIX/ptx and Layered Products Software Installation Release Notes. The product to select for installation is: cpclusters.

As shipped, CommandPoint Clusters is pre-configured to allow members of group cpclus and group cpgroup to run CommandPoint Clusters and the other CommandPoint products, respectively.

As root, you need to create the group cpclus (and cpgroup if you are going to run other CommandPoint products and cpgroup does not already exist), and add yourself to the group(s).
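The group setup described above amounts to ordinary /etc/group entries. A minimal sketch, assuming a hypothetical administrator login admin1 and illustrative group IDs; it operates on a local copy of the file so it can be run safely (on a real node, use ptx/ADMIN or CommandPoint Admin to create the groups):

```shell
# Illustrative /etc/group entries granting CommandPoint access.
# Group IDs (101, 102) and the user "admin1" are assumptions, not defaults.
cat > /tmp/group.example <<'EOF'
cpclus::101:admin1
cpgroup::102:admin1
EOF

# Confirm the administrator is a member of both privilege groups.
grep 'admin1' /tmp/group.example
```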


Windows NT Installation from CD-ROM (optional)


ATTENTION

CommandPoint Clusters requires 6 MB of free disk space.


To load the CommandPoint Clusters client software on a Windows NT system, load the DYNIX/ptx Layered Products Software CD, Volume 2 and click on cpclusters\CommandPointClustersV2.3.0.exe.

Follow the standard installation.


ATTENTION

During the Windows NT installation of CommandPoint Clusters, you will be prompted to respond to a licensing query. To install CommandPoint Clusters, you must agree to the terms of the electronic license by clicking the Yes button, which allows the installation to proceed. (If you answer No, the installation stops.)

Answering Yes allows you to install and use the Windows NT Client component of CommandPoint Clusters on one or more workstations which will be used for the purpose of administering IBM NUMA-Q systems. In this regard, the Windows NT Client component is an exception to Paragraph 1.2 of the Software License Agreement for CommandPoint Clusters.



Windows NT Installation via ftp

To install CommandPoint Clusters software for Windows NT clients via ftp, perform the following steps:


ATTENTION

You must set the ftp file transfer type to binary to support binary image transfer. Do not use the ascii transfer type.


  1. ftp the file /opt/commandpoint/cpclusters/CommandPointClusters2.3.0.exe to a temporary directory on your system.

  2. Double click the CommandPointClusters2.3.0.exe file.
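The transfer in step 1 can be scripted so that binary mode is always set before the file is fetched. A sketch that builds an ftp command file; the cluster node name is an assumption, so the script is only displayed here rather than run against a live server:

```shell
# ftp command script: "binary" must precede the "get".
cat > /tmp/getcpc.ftp <<'EOF'
binary
get /opt/commandpoint/cpclusters/CommandPointClusters2.3.0.exe
bye
EOF

# On a real client you would run (host name is illustrative):
#   ftp numaq-node1 < /tmp/getcpc.ftp
cat /tmp/getcpc.ftp
```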


Security

CommandPoint Clusters uses the following authentication mechanisms:


Set Up a CommandPoint Clusters Configuration File

A user must be explicitly authorized through special privilege groups to run CommandPoint Clusters. A special configuration file, /opt/commandpoint/cpclusters/cpclustersd.config, is installed with CommandPoint Clusters and contains entries for default privilege groups. The administrator must add users to these privilege groups before they can access CommandPoint Clusters.


Levels of Authorized Access

CommandPoint Clusters provides two levels of authorized access:

Administrative Access
This access level provides complete access to administer a cluster with CommandPoint Clusters. To attain this level of administrative access, a user must have a valid login account (with the same user name) on all nodes in the cluster, and must also be a member of either the cpclus or cpgroup privilege groups.
Monitoring (or Read-Only) Access
This access level provides information about the cluster, such as which nodes are up and the status of shared devices. Users with read-only access cannot execute tasks or commands that perform an action on any cluster node. They can, however, view commands run in dry-run mode.

Monitoring (or read-only) access is available only to those users who have a valid account on all nodes of the cluster (with the same login name) and who are also members of the cpclusro privilege group.


Privilege Groups

The /opt/commandpoint/cpclusters/cpclustersd.config file is installed on the system when CommandPoint Clusters is installed. It contains the following default entries:

# define access to CommandPoint Clusters
cpclusters.adminGroup=cpgroup,cpclus
cpclusters.readOnlyGroup=cpclusro

The privilege groups specified in the default config file and their associated access levels are:

cpgroup
Members of this group are allowed complete administration and monitoring access to an IBM NUMA-Q DYNIX/ptx system or cluster using CommandPoint Clusters, CommandPoint SVM, or CommandPoint Admin.
cpclus
Members of this group are allowed complete administration and monitoring access to an IBM NUMA-Q cluster using only the CommandPoint Clusters application. These members do not have access to CommandPoint SVM or CommandPoint Admin.
cpclusro
Members of this group are allowed only monitoring access to an IBM NUMA-Q cluster using CommandPoint Clusters. These members do not have access to CommandPoint SVM or CommandPoint Admin.

Users have no access to CommandPoint Clusters or other CommandPoint applications until they are added to a privilege group that is listed in the config file. Use the ptx/ADMIN menu system or CommandPoint Admin to add users to the privilege groups, as you would add users to other DYNIX/ptx groups.

You can create privilege groups other than those specified in the cpclustersd.config file that is installed with CommandPoint Clusters. Once you create the groups, add them to the /opt/commandpoint/cpclusters/cpclustersd.config file on each cluster node.
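As an illustration, adding a hypothetical site-specific group named siteadm to the administrative list could look like this; the sketch edits a local copy of the config file so it is safe to run as-is:

```shell
cfg=/tmp/cpclustersd.config

# Start from the default entries shipped with CommandPoint Clusters.
cat > "$cfg" <<'EOF'
cpclusters.adminGroup=cpgroup,cpclus
cpclusters.readOnlyGroup=cpclusro
EOF

# Append the new group to the admin list ("siteadm" is illustrative).
sed 's/^cpclusters.adminGroup=.*/&,siteadm/' "$cfg"
```

On a real cluster, the edited line would be written back to /opt/commandpoint/cpclusters/cpclustersd.config on each node.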


Start CommandPoint Clusters

Before you can use CommandPoint Clusters to monitor and administer ptx/CLUSTERS, the CommandPoint Clusters server must be running on each node, and then you must start CommandPoint Clusters on one of the cluster nodes running DYNIX/ptx or from Windows NT on a remote PC.


Verify the CommandPoint Clusters Server Is Running on Each Node

Verify that the CommandPoint Clusters server is running on each node by entering the following command (as root) on each node:

$ /opt/commandpoint/cpclusters/cpcSvr.sh query

If the server is running, the following message is returned:

CommandPoint Clusters server (cpclustersd) currently running

If the server is not running, the following message is returned:

CommandPoint Clusters server (cpclustersd) not running

To start the server, issue the following command (as root) on each node:

$ /opt/commandpoint/cpclusters/cpcSvr.sh
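Checking every node by hand is tedious, so the query can be scripted. A sketch, assuming four nodes named node1 through node4 and rsh access between them; echo is used here so the loop prints the remote commands rather than executing them:

```shell
# Print the query command for each node; drop "echo" to run it via rsh.
for node in node1 node2 node3 node4; do
  echo rsh "$node" /opt/commandpoint/cpclusters/cpcSvr.sh query
done
```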


Start CommandPoint Clusters on DYNIX/ptx


ATTENTION

Before starting CommandPoint Clusters, you must correctly set your DISPLAY environment variable.


To start CommandPoint Clusters on a node running DYNIX/ptx, enter the following command:

$ /usr/bin/cpclusters
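Setting DISPLAY and launching can be combined in a few lines. A sketch, in which the workstation name mypc is an assumption and the actual launch is left commented out:

```shell
# Direct X output to your workstation ("mypc" is illustrative).
DISPLAY=mypc:0.0
export DISPLAY
echo "DISPLAY is $DISPLAY"   # sanity check before launching
# /usr/bin/cpclusters &
```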


ATTENTION

It can take a few seconds for the Login dialog box to appear regardless of the platform from which you are running CommandPoint Clusters.


You will then be presented with the Cluster Login window. See the section below entitled "Getting Started" for information on how to proceed.


Start CommandPoint Clusters on Windows NT

To start CommandPoint Clusters on Windows NT to administer a remote DYNIX/ptx cluster, choose Start->Programs-> CommandPoint->CommandPoint Clusters V2.3.0 from the Windows NT task bar or from wherever you installed the application. Alternatively, you can double-click the CommandPoint Clusters program icon.


ATTENTION

It can take a few seconds for the Login dialog box to appear regardless of the platform from which you are running CommandPoint Clusters.


You will then be presented with the Cluster Login window. See the section below entitled "Getting Started" for information on how to proceed.


Getting Started

This section explains how to log in to CommandPoint Clusters, describes the forms and dialog boxes of CommandPoint Clusters, and tells how to navigate through the interface.


Log In to CommandPoint Clusters

Once you have started CommandPoint Clusters from DYNIX/ptx or as a Windows NT application, the following Cluster Login window appears:

At the Host Name prompt, enter the complete name of the cluster node to which you are connected.

Enter your login at the Login prompt and password at the Password prompt.

When you have completed the Cluster Login form, click the OK button. CommandPoint Clusters will then attempt to log you in on all the cluster nodes.

If you successfully logged in to all cluster nodes, the CommandPoint Clusters splash screen will appear, followed by the CommandPoint Clusters main screen.

Error conditions that can occur during the login process are:


CommandPoint Clusters Objects

The following is an example of a CommandPoint Clusters main screen.

A list of cluster objects is located on the left of the CommandPoint Clusters main screen. Cluster objects consist of the cluster itself, each cluster node, and ptx/CTC database information configured through ptx/CTC. Each object is designated by an icon. The object that is outlined by a bold rectangle is the current object, and the information to the right of the list of cluster objects pertains to the current object.

By default, when you first log in to CommandPoint Clusters, the selected object is the cluster, and cluster information is displayed. To select a different object, left-click the object's icon or name. Information about that object will appear to the right of the cluster objects list.

If you have administrative privileges, you can change cluster and node settings, as well as create and modify ptx/CTC entries. To change cluster, node, or ptx/CTC entries, left-click the desired object, then right-click. A pop-up menu appears. Select the Properties entry from the list by left clicking on it.


The Menu Bar

The menu bar, which includes pull-down menus for File, View, Tasks, Options, and Help, lets you select some CommandPoint Clusters tasks. These tasks vary depending on which cluster object is selected. Also, some tasks may not be available if you are not logged in as an administrative user.


File

The File pull-down menu provides a single item, Exit. Choose this item to log out of CommandPoint Clusters.


View

The View pull-down menu presents these four functions when any cluster object is selected:


Tasks

Items that appear in the Tasks pull-down menu vary depending on the cluster object that is selected.


Cluster Tasks

The only cluster task available is Command Broadcast. The task is available when the cluster icon is selected. It lets you issue DYNIX/ptx commands on all nodes simultaneously; CommandPoint Clusters then displays the command results. Command Broadcast also lets you issue a DYNIX/ptx command on one node only. This feature is limited to non-interactive commands (it will not work with vi, for example). It also does not support commands that produce more than 30 KB of output.


Node Tasks

The following functions are available from the Tasks pull-down menu when a node icon is selected:


CTC Database Tasks

The following functions are available from the Tasks pull-down menu when the CTC Database object is selected:


Options

The Options pull-down menu provides the "Set Refresh Interval" function when any cluster object is selected. You can modify the default refresh intervals for cluster and EES data through the Set Data Refresh Interval window. The default data polling interval for both cluster and EES data is 30 seconds. You can reduce the polling interval to a minimum of 5 seconds; there is no maximum bound.

When you click on OK, the change takes effect immediately.


Help

The following functions are available from the Help pull-down menu when any CommandPoint Clusters object is selected:


CommandPoint Clusters Icons

The following icons may appear during a CommandPoint Clusters session:

Table 1. CommandPoint Clusters Icons
Icon Description
The cluster is up; the nodes have quorum.
The cluster is down; the nodes do not have quorum.
The node is a healthy cluster member and is at run-level 2 (multiuser mode).
The node is a healthy cluster member and is at run-level 1 (single-user mode); however, CommandPoint Clusters and ptx/CTC are not running.
The node is no longer a member of the cluster.
ptx/CTC is enabled and started on all cluster members.
ptx/CTC is disabled. It may be started or enabled on one node but not on all nodes of the cluster.
Confirm that the command to be issued is correct.
Warning about messages returned from the system after a command has been issued.
An error has occurred or a command has failed.
Information that is returned from a command. This icon also appears when a command is run in dry run mode.
A transition has occurred.
Connection to the node has been lost or never established. This could be due to a failed login, a node entering the cluster after CommandPoint Clusters has already started, or failure to connect to the CPserver on the node.
The specified task completed successfully.
The specified task is executing.
The specified task has failed.
The specified task did not complete.


Cluster Information

When you select the cluster object, information about the cluster is displayed in the right-hand portion of your screen. General cluster data, as well as information about the quorum disk, quorum votes, cluster interfaces, and cluster nodes, are displayed.

The following screen shows how the cluster information is displayed.

In the area near the bottom of the screen called "Cluster Nodes," information about one of the nodes is always highlighted. To view the property sheets for the highlighted node, right-click anywhere on the highlighted entry, and left-click on Properties from the drop-down menu that appears. The node property sheets are described in more detail in the section entitled Node Properties.


ATTENTION

If you are logged in as an administrative user, you will be able to modify node properties; otherwise, you will only be able to view information about node properties.



Cluster Properties

To view or modify cluster properties, right-click the cluster object, and then left click on Properties.

There are two cluster property sheets: General and Lock Manager Domains.


General Properties

The General Properties dialog box is shown below. It lets you view or modify entries for permanent cluster parameters, active cluster parameters, and the quorum disk.

If you modify entries on the General Properties dialog box, you can click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.

If you are logged in as a non-administrative user, none of the modifications will take effect.


Lock Manager Properties

The Lock Manager Domains form is shown below. It lets you view information about existing Lock Manager domains and, if you have logged in with administrative privileges, modify them. Modifications take effect when you click the OK or Apply button.


Add a Lock Manager Domain

To add a Lock Manager domain from the Lock Manager Domains screen, follow these steps. The domain will be added to all current member nodes in the cluster.

  1. Left-click the Add button. The Add New Lock Manager Domain screen appears.

  2. Enter the name of the lock domain you want to add, its owner and its group. The owner must have a valid entry in the /etc/passwd file, and the group must have a valid entry in the /etc/group file.

  3. Choose the permissions for the new domain by left-clicking in the appropriate Read, Write, and Execute check boxes.

  4. Choose to instantiate the new domain or wait until the system reboots. By default, the checkbox to instantiate is already checked; to de-select it, left-click the checkbox.

  5. Click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.
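The requirement in step 2 can be checked from a shell before filling in the form. A sketch using the illustrative names dbadmin and dba; it checks local copies of the files so it is self-contained (on a cluster node you would grep /etc/passwd and /etc/group directly):

```shell
# Sample entries standing in for /etc/passwd and /etc/group.
printf 'dbadmin:x:200:200::/home/dbadmin:/bin/sh\n' > /tmp/passwd.example
printf 'dba::200:dbadmin\n' > /tmp/group.example

# A lock-domain owner and group are valid only if both lookups succeed.
grep -q '^dbadmin:' /tmp/passwd.example && echo "owner ok"
grep -q '^dba:' /tmp/group.example && echo "group ok"
```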


Delete a Lock Manager Domain

To delete a Lock Manager domain from the Lock Manager Domains screen, follow these steps. The domain will be deleted only from the selected node in the Lock Manager Domains property sheet.

  1. Select the domain you wish to delete by left-clicking on it. The domain you select will become highlighted.

  2. Delete the domain by left-clicking the Delete button.

  3. Click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.


Modify a Lock Manager Domain

  1. Select the domain you wish to modify by left-clicking on it. The domain you select will become highlighted. The Modify Lock Manager Domain form appears.

  2. Make modifications, such as permissions, owner, or group name, to the domain.

  3. Click the OK button when you are done modifying the form. You will be returned to the Lock Manager Domains screen.

  4. Click the OK button to apply the changes and exit the dialog box, or click the Apply button to apply the changes and not exit the dialog box.


Node Information

To display information about a particular node, left-click the icon for the node in the Cluster Objects list. Information about the node will be displayed on the right-hand side of your screen.

The following sample screen shows the type of information displayed. This information includes node data, CTC node status, and CTC objects on the node.

With the node icon selected, you can also select a ptx/CTC object from the CTC Objects list on the right-hand portion of the screen to perform one of the following options:

The pop-up menu allows you to perform an action on the selected object on the specified node. For example, you can start or stop ptx/CTC objects on the specified node. For more information about these options, click the online Help button at the bottom of each option's dialog box.

Once you click the OK button to perform an action on the selected object, a Tasks window will appear. This window provides information on the action you requested. Each task in the Tasks window is preceded by an icon that represents the status of the task. The icon for each task corresponds to the information in each task's Status column. For more information about the status of a task (such as why a task failed), click anywhere on the row for the task in the Tasks window, and then click on Details.


Node Properties

To view or modify node properties, either right-click the node object or, if the cluster object is selected, right-click anywhere on a highlighted node entry in the Cluster Nodes area and then left-click on Properties.

There are two node property sheets: General and Network Interfaces.


General Properties

The General dialog box displays the ID of the node. If you are logged in as the administrative user, you can change the value of the node's ID through the dialog box.

The node ID, or index, must be an integer between 0 and 7, inclusive. Run this form for each node whose ID you wish to change.

Normally, when changing node IDs in a cluster, you need to reboot only the node whose ID you are changing. However, because of a ptx/CLUSTERS software defect, after changing the ID of one or more nodes, you must reboot all nodes in the cluster.


Network Interfaces Properties

The Network Interfaces dialog box displays information about CCIs, including the name and address of each CCI and its state (activated or deactivated).

If you are the administrative user, you can activate or deactivate a CCI. Note, however, that you can only add or remove a CCI interface following the ptx/TCP/IP and ptx/LAN procedures described in the ptx/CLUSTERS Administration Guide.


ptx/CTC (Failover) Information

To display information about the ptx/CTC configuration on cluster nodes, left-click the resource failover icon or the words CTC Database. General ptx/CTC data, ptx/CTC node status, and information about ptx/CTC objects on all nodes are displayed.

With the CTC Database icon selected, you can also select a ptx/CTC object from the CTC Objects section in the right-hand part of the screen, and then right click to perform one of the following options:

The pop-up menu allows you to perform an action on the selected ptx/CTC object. For example, the Copy option allows you to copy the selected ptx/CTC object. For more information about these options, click the online Help button at the bottom of each option's dialog box.


CTC Database Properties

The only CTC database property that you can change is the global timeout value. The global timeout value is the amount of time ptx/CTC will wait for an object's start, stop, or verify procedure to complete. The default value for the global timeout period is 3600 seconds (one hour). You can change the global timeout by entering a new value in the global timeout field.

You can also set the refresh interval and designate the ptx/CTC database location by right clicking on the CTC database icon and then left clicking on the appropriate option.


Create a ptx/CTC Database

To create a ptx/CTC database, right-click the CTC Database icon and then left-click the "Set CTC Database Location" option. The following form appears:

The database must be accessible by all nodes in the cluster and should be stored on a raw, shared ptx/SVM volume or a raw shared partition. It is strongly recommended that the database reside on a mirrored, shared ptx/SVM volume as a safeguard against disk failure.

Each object in the database takes approximately 2,000 bytes. In general, the database should be large enough to hold the maximum number of objects in your configuration, while leaving enough room for future growth.
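That sizing rule is simple arithmetic. A sketch, with an illustrative object count and a factor of two for future growth:

```shell
objects=150        # expected maximum number of ptx/CTC objects (illustrative)
per_object=2000    # approximate bytes per object, per the text
growth=2           # leave room to double

echo "suggested database size: $((objects * per_object * growth)) bytes"
```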

The "Nodes" section of the form lists the nodes that belong to the cluster. Click the checkbox for each node on which you want the database to be located.

The "Shared Devices/SVM Volumes" section of the form lists all shared raw devices and shared ptx/SVM volumes on the nodes. Pick one of the devices or volumes and double-click on it. The device name or volume should now appear in the "DB Location" field.


ATTENTION

All shared raw devices and shared ptx/SVM volumes on the nodes are listed in the "Shared Devices/SVM Volumes" section of the form, regardless of whether they contain existing data. Use caution when selecting one of the listed devices so that you do not destroy existing data that you need.



Change the Data Refresh Interval

By default, CommandPoint Clusters updates both cluster data and EES (event logging) data every 30 seconds. However, you can change the default interval values by selecting "Set Refresh Interval" from the Options pull-down menu. When you select the "Set Refresh Interval" option, the following form appears:

The minimum value is 5 (seconds); there is no maximum. When you have made the change(s), click the OK button; the new value(s) will go into effect immediately.


Product Documentation

Product documentation for CommandPoint Clusters, in addition to these release notes, includes the online help available with the product, ptx/CLUSTERS Administration Guide, and the ptx/CTC Administration Guide.


Problem Reports

This section lists the following problem report summaries:

The numbers in parentheses identify the problems in the problem-tracking system.


Fixed Problems in CommandPoint Clusters V2.3.0

The following problems have been fixed in this release of CommandPoint Clusters:


Open Problems in CommandPoint Clusters V2.3.0

The following problems have been reported against this release of CommandPoint Clusters:

Some GUI Text Not Visible When Running DYNIX/ptx Client Via eXceed

A problem in eXceed and Java relating to the display of certain fonts causes some text within the CommandPoint Clusters interface to disappear completely when the DYNIX/ptx client is run via eXceed. This problem only occurs when the Windows NT "Color Palette" is set to greater than 256 colors.

Workaround: To change the color palette to use only 256 colors:

  1. From the Control Panel (Start->Settings->Control Panel), double-click on the Display icon.

  2. In the Display Properties dialog, select the Settings tab.

  3. Set the color palette to use 256 colors.

  4. Click OK or Apply.

When Starting CommandPoint Clusters on Microsoft Windows NT 4.0 Terminal Server Edition, Java Generates a Fatal Error and CommandPoint Clusters Will Not Run

A defect in Java generates a fatal error and prevents CommandPoint Clusters from running when it is started on Microsoft Windows NT 4.0 Terminal Server Edition. The problem is documented in the Sun Microsystems Java Bug Database under Bug ID 4193603. The problem is caused by a flaw in the way Java accesses font information in the Terminal Server environment.

Workaround: Edit the batch file that starts CommandPoint Clusters and insert text to define the location of the fonts on the NT system, as follows:

  1. Using the Windows NT Explorer, find the file named CPClusters.bat (if CommandPoint Clusters was installed in the default installation location, the files should be located in C:\Program Files\CommandPoint\CPClusters\CPClusters.bat).

  2. Using the "Properties" option, deselect the "Read-only" option, and then apply the change.

  3. Edit the file CPClusters.bat and insert the following line near the middle of the file:

    SET JAVA_FONTS=C:\wtsrv\fonts

    Set the drive letter as appropriate for your system and verify the location of the wtsrv\fonts directory.

  4. Save the changes to the file.

CommandPoint Clusters Icon Not Used in Window of Iconified DYNIX/ptx Application (242348)

When CommandPoint Clusters is run as a DYNIX/ptx application, the windowing system's icon is used as the icon when the application is iconified, instead of the CommandPoint Clusters icon.

Workaround: None.

Java Exception Occurs During Property Sheet Initialization on DYNIX/ptx (241372, 243983)

When CommandPoint Clusters is run as a DYNIX/ptx application, a Java exception occurs and is displayed in the window where CommandPoint Clusters is started.

Workaround: None.

Command Broadcast Feature Does Not Properly Display Tabs (239099)

When CommandPoint Clusters is run as a DYNIX/ptx application, the command broadcast feature does not display tabs in the result text (output of the issued command). When run as a Windows NT application, the command broadcast feature displays the '?' character instead of tabs.

Workaround: None.