Main Supervisor Reference

Using Keypoints to Maintain System Operations

Control program keypoints are data records used to maintain system operations. These records reflect the current status of the system and are essential to the startup/restart process. A copy of the keypoint records is maintained in the fixed file area on each disk pack in the online system.

File-resident keypoints are retrieved and updated by system ECB-controlled programs: one program retrieves records and another files them. Because these keypoint records reside in file storage, control program macros are used to activate the update sequence (see Initializing ECBs for Entries from Control Transfer Macros).

TPF users can maintain main-storage resident data records in a portion of main storage called the global area. Global records are data records that support user-written applications. A backup copy of the global records is maintained on file; for this reason, global records are often referred to as application keypoint records. Main-storage resident control program and application keypoints are updated through a program called the keypoint update mechanism. Requests to update main-storage resident keypoints (both control program and application) receive a higher priority than requests to update file-resident keypoints. This process is referred to as demand keypointing.

Control program keypoints are 4096 bytes (4 KB) long and are retrieved in 4 KB blocks. Any ECB-controlled program can retrieve or file a keypoint record. When the file copy of a main-storage keypoint is updated, that is, keypointed, the name of the program that initiated the request is placed in the filed record.

The record types used to manage keypoints are the keypoint staging area (#KSAx), the working keypoint area (#KEYPT), and the keypoint backup area (#KBA). The keypoint staging areas are image-unique records; therefore, there is a one-to-one correspondence between the #KSA areas and the #CIMR areas. The auxiliary loader loads to the keypoint staging areas rather than directly to the working keypoint area. The keypoints can then be moved to the working area by issuing the ZIMAG KEYPT MOVE command. There are also other ZIMAG KEYPT command options to manipulate and display the keypoint staging area. The working keypoint area is shared by all images and contains the keypoints that are currently used by the online system. Keypoints can be loaded directly to the working keypoint area with the general file loader. The keypoint backup area maintains backup copies of the active keypoints that have been overlaid by a ZIMAG KEYPT MOVE command. Keypoints can be selectively restored from the keypoint backup area with the ZIMAG KEYPT RESTORE command. See TPF Operations for more information about the ZIMAG command. See TPF System Installation Support Reference for more information about multiple TPF images.
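The flow between the three areas can be pictured with a small model. This is an illustrative sketch only, not TPF code: the areas are modeled as Python dictionaries keyed by keypoint name, and the two functions mimic the effect of the ZIMAG KEYPT MOVE and ZIMAG KEYPT RESTORE commands described above. The keypoint contents shown are placeholders.

```python
# Illustrative model (not TPF code) of the three keypoint areas and the
# effect of ZIMAG KEYPT MOVE and ZIMAG KEYPT RESTORE.
staging = {"CTKB": "new CTKB"}   # loaded by the auxiliary loader (#KSAx)
working = {"CTKB": "old CTKB"}   # used by the online system (#KEYPT)
backup = {}                      # overlaid active keypoints (#KBA)

def zimag_keypt_move(name):
    """Move a keypoint from staging to working, backing up the overlaid copy."""
    backup[name] = working[name]       # save the active keypoint in #KBA
    working[name] = staging.pop(name)  # staged keypoint becomes active

def zimag_keypt_restore(name):
    """Selectively restore an overlaid keypoint from the backup area."""
    working[name] = backup[name]

zimag_keypt_move("CTKB")     # the staged CTKB becomes the active one
zimag_keypt_restore("CTKB")  # fall back to the overlaid CTKB
```

Note how RESTORE acts per keypoint, matching the selective fallback the command provides.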

Note:
Each subsystem in a multiple database function (MDBF) system can maintain unique keypoints or may share keypoints with the basic subsystem. Shared keypoints are filed using information that identifies the filing subsystem.

Time Initiated Keypoint Copy

Copies of the control program keypoints are written, on a rotational basis, to the fixed file area on the first 256 device type A disk packs in the system. The keypoints are copied from a location on one pack that is not used in the copy rotation. This pack is called the prime module. In a duped system, the dupe of the prime module is not in the copy rotation. In a fully duped system, keypoints are written to one disk pack and not to its corresponding dupe pack, because the order in which the keypoints are written is determined by the disk pack's symbolic module number. This function is used to propagate the ICR, CTKX, and IPL areas across the online modules. Using the time initiated keypoint copy function means that when a pack with the most recent keypoints is IPLed, the most recent copies of the ICR, CTKX, and IPL areas are also present on that pack. Note that because the ICR, CTKX, and IPL areas are not altered often, they do not need to be propagated with every time initiated update. Instead, these areas are copied to all modules in a cyclic order when any of them is modified. Because the propagation of the IPL areas is infrequent and is not required when the ICR or CTKX is modified, it is controlled separately.
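The rotation described above can be sketched as follows. This is a simplified illustrative model, not TPF code: it assumes 256 modules identified by symbolic module number, with the prime module (the copy source) excluded from the rotation, and shows that one full cycle touches every other module exactly once.

```python
# Illustrative model (not TPF code): round-robin selection of the next
# keypoint copy target by symbolic module number. The prime module is
# the copy source and is skipped; in a duped system its dupe would be
# skipped as well.
def next_copy_target(last_target, prime_module, num_modules=256):
    """Return the symbolic module number that receives the next copy."""
    module = last_target
    while True:
        module = (module + 1) % num_modules
        if module != prime_module:  # the prime module is not in the rotation
            return module

# Walk one full cycle, assuming module 0 is the prime module:
prime = 0
target = prime
targets = []
for _ in range(255):
    target = next_copy_target(target, prime)
    targets.append(target)
# Every module except the prime receives exactly one copy per cycle.
```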

Note:
In an MDBF environment, the active processor with the lowest ordinal number copies the keypoints for all processors in the complex.

An alternate keypoint copy mechanism called fallback extent keypointing can also be generated. In fallback extent keypointing, keypoints are copied from their prime location to an extent area on the same disk pack. This area is an extension of the prime area (not necessarily contiguous to it). Each fallback copy area (fallback extent) is a separate record definition in the FACE table (FCTB) with record types #KFBXn (where n = 0 - 254). Fallback extents must be defined using contiguous record types starting with record type #KFBX0. If n fallback extents are defined, the record types that are used to define them in the FACE table must be #KFBX0 - #KFBXm, where m equals n - 1. The keypoints are copied to these areas on the same timed rotational basis as described previously. This option is intended for use only in an MDBF system where the basic subsystem (BSS) is generated on a small number of disk packs. Fallback extent keypointing is not necessary when the number of packs that are available is enough for normal rotational keypointing. Keypoint fallback extents that are defined on subsystems other than the BSS are not used.
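The naming rule for fallback extents (contiguous record types starting at #KFBX0, ending at #KFBXm with m = n - 1) can be made concrete with a short sketch. This is an illustrative helper, not part of TPF or the FCTB generation process:

```python
# Illustrative sketch (not TPF code): the FACE table record types that
# must exist when n fallback extents are defined. Extents use contiguous
# record types starting at #KFBX0, so n extents occupy #KFBX0 - #KFBXm,
# where m = n - 1.
def fallback_extent_record_types(n):
    if not 1 <= n <= 255:  # n in #KFBXn ranges from 0 through 254
        raise ValueError("between 1 and 255 fallback extents are supported")
    return ["#KFBX%d" % i for i in range(n)]

# For example, three extents must be defined as #KFBX0, #KFBX1, and #KFBX2.
```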

The time increment for keypoint copying is set in keypoint record A (CTKA) during system initialization. The keypoint copy program reactivates itself with a CRETC (create a time-initiated entry) macro using the increment value in keypoint A. See TPF General Macros for more information about the CRETC macro, or the TPF C/C++ Language Support User's Guide for more information about the cretc function.
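The self-reactivating behavior can be modeled as follows. This is an illustrative sketch only, not the CRETC macro or the cretc function: it simply computes the activation times that result when a program runs once and then re-creates itself using the increment value held in keypoint A.

```python
# Illustrative model (not the TPF CRETC macro): the keypoint copy program
# runs once, then re-creates itself as a time-initiated entry using the
# increment value from keypoint A (CTKA). This sketch computes the
# resulting activation times for a given number of cycles.
def activation_times(start, increment, cycles):
    """Times at which the copy program is activated, given CTKA's increment."""
    times = []
    t = start
    for _ in range(cycles):
        times.append(t)
        t += increment  # the entry is re-created 'increment' later
    return times

# With a (hypothetical) 300-second increment, activations fall every
# 300 seconds from system initialization onward.
```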

Control Program Keypoints


Table 1. Control Program Keypoints

Keypoint        | Macro Name | Function                                                                 | Processor | SS     | Initialized By                        | Residency                              | Demand Keypointable
Record A (CTKA) | CK1KE      | Contains information required for system loading and for the initializer program. | Unique    | Unique | SIP                                   | File                                   | No
Record B (CTKB) | CK9KC      | Miscellaneous initialization and restart values, for example, clock status, VFA status, and DASD error thresholds. | Unique    | Unique | SIP                                   | Main storage                           | Yes
Record C (CTKC) | CK8KE      | Status of Computer Room Agent Set (CRAS) attached terminals, initial Routing Control Application Table (RCAT) and Terminal Address Table (WGTA) control information. | Shared    | Shared | SIP                                   | Main storage                           | No
Record D (CTKD) | CK7KE      | Status used by the synchronous link programs.                            | Unique    | Shared | SIP                                   | Main storage                           | Yes
Record E (CTKE) | CK6KE      | Describes the non-SNA communications network.                            | Unique    | Shared | SIP                                   | File                                   | No
Record I (CTKI) | IC0CK      | Describes the status of all processors in a loosely coupled complex of the HPO feature. | Shared    | Shared | SIP                                   | File                                   | No
Record M (CTKM) | MK0CK      | Describes the status of each subsystem and each subsystem user.          | Shared    | Shared | SIP                                   | Main storage                           | No
Record V (CTKV) | IDSCKV     | Contains volume serial number ranges for the online modules, the copy module, and the loader general file. | Shared    | Unique | SIP                                   | File                                   | No
Record 0 (CTK0) | CK0KE      | Contains legal disk hardware addresses.                                  | Shared    | Shared | SIP                                   | File                                   | No
Record 1 (CTK1) | CK2KC      | Contains the Tape Status Table (TSTB).                                   | Unique    | Shared | N/A                                   | Main storage                           | Yes
Record 2 (CTK2) | CK2SN      | Contains all the information in the system about the SNA configuration and the TCP/IP device parameters. | Unique    | Shared | Source; contains no SIP-provided inputs | Main storage                           | No
Record 3 (CTK3) | None       | This keypoint is available for customer use.                             | Unique    | Shared | Customer                              | File                                   | No
Record 4 (CTK4) | VK4CK      | This keypoint is available for customer use.                             | Shared    | Unique | Customer                              | File                                   | No
Record 5 (CTK5) | None       | This keypoint is reserved for IBM use.                                   | Shared    | Shared | N/A                                   | File                                   | No
Record 6 (CTK6) | CJ6KP      | Contains the DASD module status indicators.                              | Shared    | Unique | SIP                                   | File and main storage (see note 1)     | No
Record 9 (CTK9) | CY1KR      | Contains the status of the DASD pools.                                   | Shared    | Unique | Source; contains no SIP-provided inputs | Main storage                           | No
Notes:
  1. The entire keypoint is file-resident; the first section of the keypoint is also main-storage-resident.
  2. Processor shared means that there is one copy of the keypoint for all processors in a loosely coupled environment.
  3. Processor unique means that there is one copy of the keypoint per processor in a loosely coupled environment.
  4. Subsystem (SS) shared means that there is one copy of the keypoint residing in the BSS in an MDBF environment.
  5. Subsystem (SS) unique means that there is one copy of the keypoint per subsystem in an MDBF environment.