Some CICS® application programming techniques may create an affinity between transactions depending on how they are implemented.
The programming techniques that may be suspect are described in the following sections.
CICS application programs commonly use temporary storage (TS) queues to hold temporary application data, and to act as scratch pads.
Sometimes a TS queue is used to pass data between application programs that execute under one instance of a transaction (for example, between programs that pass control by a LINK or XCTL command in a multi-program transaction). Such use of a TS queue requires that the queue exists only for the lifetime of the transaction instance, and therefore it does not need to be shared between different target regions, because a transaction instance executes in one, and only one, target region.
Sometimes a TS queue holds information that is specific to the target region, or holds read-only data. In this case the TS queue can be replicated in each target region, and no sharing of data between target regions is necessary.
However, in many cases a TS queue is used to pass data between transactions, in which case the queue must be globally accessible to enable the transactions using the queue to run in any dynamically selected target region. It is possible to make a temporary storage queue globally accessible by function shipping TS requests to a queue-owning region (QOR), provided the TS queue can be defined as remote. Shared queues are defined by using a temporary storage pool in a coupling facility. Shared temporary storage applies only to non-recoverable queues. You can make queues in auxiliary storage recoverable, but not queues in main storage.
In a pseudoconversational transaction, you can change the program to use a COMMAREA to pass data between the phases of the conversation. However, using temporary storage data sharing avoids inter-transaction affinity while still allowing dynamic routing to any target region. Shared temporary storage queue requests for specific SYSIDs are routed in the same way as remote queue requests. The SYSID value for shared TS pools is defined by the TST TYPE=SHARED macro.
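As a sketch of the COMMAREA approach (the transaction name NEXT, the data area, and the length are illustrative, not taken from any particular design), each phase of the pseudoconversation returns its state to CICS instead of writing it to a TS queue:

```
* Phase 1 of the pseudoconversation: return control to CICS,
* naming the next transaction and passing the state data in
* a COMMAREA instead of a TS queue
EXEC CICS RETURN
     TRANSID('NEXT') COMMAREA(state-data) LENGTH(200)

* Phase 2: the next transaction instance finds the same data
* addressed by DFHCOMMAREA, with its length in EIBCALEN
```

Because the state travels with the transaction, the next phase can be routed to any target region without affinity.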
The methods for specifying a TS pool make it easy to migrate queues from a QOR to a TS data-sharing pool. You can use the temporary storage global user exit, XTSEREQ, to modify the SYSID on a TS request so that it references a TS data-sharing pool. Figure 62 shows four AORs that use the same TST (TST=XX). The SYSIDNT option on the TST TYPE=SHARED macro maps temporary storage requests to a pool of shared TS queues called DFHXQTS1. You should specify DFHXQTS1 in the poolname parameter of the JCL that starts the TS data-sharing server, DFHXQMN. See the CICS System Definition Guide for more information about setting up a data sharing server.
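As a sketch of that mapping (the SYSIDNT value TS01 is illustrative; the pool name DFHXQTS1 is the one used in the figure), the TST entry might be coded as:

```
* Map TS requests that specify SYSID=TS01 to the shared
* TS pool DFHXQTS1 in the coupling facility
         DFHTST TYPE=SHARED,SYSIDNT=TS01,POOL=DFHXQTS1
```

Application programs then address the pool exactly as they would a remote QOR, by using SYSID(TS01) on their TS requests.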
To define a queue as remote you must include an entry for the queue in a temporary storage table (TST), or use an appropriate TSMODEL. TS queue names are frequently generated dynamically, but they can also be unique fixed names.
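For example, a TST entry along the following lines (the region name QOR1 and the prefix TRN1 are hypothetical) defines every queue whose name begins with a given prefix as remote:

```
* Define every TS queue whose name begins with 'TRN1' as a
* remote queue owned by the queue-owning region QOR1
         DFHTST TYPE=REMOTE,SYSIDNT=QOR1,DATAID=TRN1
```

A generic entry of this kind works only if the dynamically generated names share a predictable prefix, which is why the naming convention matters.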
The usual convention is a 4-character prefix (for example, the transaction identifier) followed by a 4-character terminal identifier as the suffix. This generates queue names that are unique for a given terminal. Such generic queue names can be defined easily as remote queues that are owned, for example, by:
Remote queues and shared queues are defined in the same way as far as application programs are concerned, but requests for specific SYSIDs are routed to a temporary storage data-sharing server by means of the TST TYPE=SHARED macro. However, if the naming convention for dynamically named queues does not follow such a generic pattern, the queue cannot be defined as remote, and all transactions that access the queue must be routed to the target region where the queue was created. Furthermore, a TS queue name cannot be changed from a local to a remote queue name using the global user exits for TS requests.
See Figure 63 for an illustration of the use of a remote queue-owning region.
When you eliminate inter-transaction affinity relating to TS queues by the use of a global QOR, you must also take care to review exception condition handling. This is because some exception conditions can occur that previously were not possible when the transactions and the queue were local in the same region. This situation arises because the target region and QOR can fail independently, causing circumstances where:
Another form of data queue that CICS application programs commonly use is the transient data (TD) queue. The dynamic transaction routing considerations for TD queues have much in common with those for temporary storage. To enable transactions that use a shared TD queue to be dynamically routed to any target region, you must ensure that the TD queues are globally accessible.
All transient data queues must be defined to CICS, and must be installed before the resources become available to a CICS region. These definitions can be changed to support a remote transient data queue-owning region (QOR).
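As a sketch (the queue name Q001, group name TDREMOTE, and region name QOR1 are all hypothetical), a TD queue can be redefined as remote with a resource definition like this:

```
* Redefine the queue as remote: TD requests against Q001 are
* function shipped to the owning region QOR1
CEDA DEFINE TDQUEUE(Q001) GROUP(TDREMOTE)
     REMOTESYSTEM(QOR1) REMOTENAME(Q001)
```

The application programs themselves are unchanged; only the resource definition differs between the AORs and the QOR.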
However, there is a restriction for TD queues that use the trigger function. The transaction to be invoked when the trigger level is reached must be defined as a local transaction in the region where the queue resides (in the QOR). Thus the trigger transaction must execute in the QOR. However, any terminal associated with the queue need not be defined as a local terminal in the QOR. This does not create an inter-transaction affinity.
Figure 64 illustrates the use of a remote transient data queue-owning region.
When you eliminate inter-transaction affinity relating to TD queues by the use of a global QOR, there should not be any new exception conditions (other than SYSIDERR if there is a system definition error or failure).
Some synchronization techniques permit the sharing of task-lifetime storage between two synchronized tasks. For example, the RETRIEVE WAIT and START commands can be used for this purpose, as illustrated in Figure 65.
In this example, TRN1 is designed to retrieve data from an asynchronous task, TRN2, and therefore must wait until TRN2 makes the data available. Note that for this mechanism to work, TRN1 must be a terminal-related transaction.
The steps are as follows:
In the example of task synchronization using RETRIEVE WAIT and START commands shown in Figure 65, the START command that satisfies the RETRIEVE WAIT must:
An application design based on the remote TRANSID technique only works for two target regions. An application design using the SYSID option on the START command only works for multiple target regions if all target regions have connections to all other target regions (which may not be desirable). In either case, the application programs need to be modified: there is no acceptable way to use this programming technique in a dynamic or distributed routing program except by imposing restrictions on the routing program. In general, this means that the dynamic or distributed routing program has to ensure that TRN2 has to execute in the same region as TRN1 to preserve the application design.
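In outline (the terminal identifier T001 is illustrative), the synchronization in Figure 65 looks like this:

```
* TRN1, running at terminal T001: suspend until start data
* for this transaction and terminal becomes available
EXEC CICS RETRIEVE
     INTO(data-area) WAIT

* TRN2, the asynchronous task: satisfy the RETRIEVE WAIT by
* starting TRN1 against the same terminal, passing the data
EXEC CICS START
     TRANSID('TRN1') FROM(data-area) TERMID('T001')
```

The affinity arises because the START must be executed in (or shipped to) the region where TRN1 is waiting.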
Using this CICS application programming technique, one transaction issues a START command to start another transaction after a specified interval. Another transaction (not the one requested on the START command) determines that it is no longer necessary to run the requested transaction (identified by the REQID parameter) and cancels the START request. Note that the cancel is effective only if the specified interval has not yet expired.
A temporary storage queue is one way that the REQID can be passed from task to task.
Figure 66 illustrates this programming technique.
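A sketch of the technique (the transaction name TRNX, queue name REQIDQUE, and the variable reqid-value are illustrative):

```
* TRN1: schedule TRNX to run after the interval, and save the
* request identifier in a TS queue where another task can
* find it
EXEC CICS START
     TRANSID('TRNX') INTERVAL(1000) REQID(reqid-value)
EXEC CICS WRITEQ TS
     QUEUE('REQIDQUE') FROM(reqid-value)

* TRN2: TRNX is no longer needed; read the saved REQID and
* cancel the START request (effective only while the
* interval has not yet expired)
EXEC CICS READQ TS
     QUEUE('REQIDQUE') INTO(reqid-value)
EXEC CICS CANCEL
     REQID(reqid-value)
```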
Using this application programming technique, the CANCEL command that cancels the START request must:
An application design based on the remote TRANSID technique only works for two target regions. An application design using the SYSID option on the cancel command only works for multiple target regions if all target regions have connections to all other target regions. In either case, the application programs need to be modified: there is no acceptable way to use this programming technique in a dynamic or distributed routing program except by imposing restrictions on the routing program.
In general, this means that the dynamic or distributed routing program has to ensure that TRN2 executes in the same region as TRN1 to preserve the application design, and also that TRNX is defined as a local transaction in the same region.
Using this CICS application programming technique, one task can resume another task that has been suspended by a DELAY command. A DELAY request can be canceled only by a task other than the one that issued the DELAY command, and the CANCEL command must specify the REQID associated with the DELAY command. Both the DELAY and CANCEL commands must specify the REQID option to use this technique.
The steps involved in this technique using a temporary storage queue to pass the REQID are as follows:
EXEC CICS WRITEQ TS
QUEUE('DELAYQUE') FROM(reqid_value)
EXEC CICS DELAY
INTERVAL(1000) REQID(reqid_value)
The process using a TS queue is illustrated in Figure 67.
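The canceling side of the technique, using the same queue name (DELAYQUE) and REQID variable as in the commands shown above, might look like this:

```
* TRN2: read the REQID that TRN1 saved in the TS queue, then
* resume TRN1 by canceling its DELAY request
EXEC CICS READQ TS
     QUEUE('DELAYQUE') INTO(reqid_value)
EXEC CICS CANCEL
     REQID(reqid_value)
```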
Another way to pass the REQID when employing this technique would be for TRN1 to start TRN2 using the START command with the FROM and TERMID options. TRN2 could then obtain the REQID with the RETRIEVE command, using the INTO option.
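In outline (the terminal identifier T001 is illustrative), that alternative is:

```
* TRN1: pass the REQID to TRN2 as start data instead of
* writing it to a TS queue
EXEC CICS START
     TRANSID('TRN2') FROM(reqid_value) TERMID('T001')

* TRN2: obtain the REQID from the start data
EXEC CICS RETRIEVE
     INTO(reqid_value)
```

This removes the TS queue, but the inter-transaction affinity remains: the CANCEL must still be directed at the region where the DELAY was issued.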
Using this application programming technique, the CANCEL command that cancels the DELAY request must:
An application design based on the remote TRANSID technique only works for two target regions. An application design using the SYSID option on the cancel command only works for multiple target regions if all target regions have connections to all other target regions. In either case, the application programs need to be modified: there is no acceptable way to use this programming technique in a dynamic or distributed routing environment except by imposing restrictions on the routing program.
If the CANCEL command is issued by a transaction that is initiated from a terminal, it is possible that the transaction could be dynamically routed to the wrong target region.
The CICS POST command is used to request notification that a specified time has expired. Another transaction (TRN2) can force notification, as if the specified time has expired, by issuing a CANCEL of the POST request.
The time limit is signalled (posted) by CICS by setting a bit pattern in the event control block (ECB). To determine whether notification has been received, the requesting transaction (TRN1) must either test the ECB periodically or issue a WAIT command on the ECB.
A temporary storage queue is one way to pass the REQID of the POST request from task to task.
Figure 68 illustrates this technique.
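In outline (the variable names ecb-ptr and reqid_value are illustrative), the technique is:

```
* TRN1: request notification after the interval; CICS sets
* the pointer to the timer event control area (ECB) and
* returns the generated REQID in EIBREQID
EXEC CICS POST
     INTERVAL(1000) SET(ecb-ptr)
* ... save EIBREQID where TRN2 can find it, for example in
* a TS queue ...

* TRN1: suspend until the ECB is posted
EXEC CICS WAIT EVENT
     ECADDR(ecb-ptr)

* TRN2: force early notification by canceling the POST,
* using the REQID saved by TRN1
EXEC CICS CANCEL
     REQID(reqid_value)
```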
If this technique is used, the dynamic or distributed routing program has to ensure that TRN2 executes in the same CICS region as TRN1 to preserve the application design.
The CANCEL command that notifies the task that issued the POST must:
An application design based on the remote TRANSID technique only works for two target regions. An application design using the SYSID option on the cancel command only works for multiple target regions if all target regions have connections to all other target regions. In either case, the application programs need to be modified: there is no acceptable way to use this programming technique in a dynamic or distributed routing program except by imposing restrictions on the routing program.
In general, this means that the dynamic or distributed routing program has to ensure that TRN2 executes in the same region as TRN1 to preserve the application design.
Clearly, there is no problem if the CANCEL is issued by the same task that issued the POST. If a different task cancels the POST command, it must specify REQID to identify the particular instance of that command. Hence a CANCEL command with the REQID option is indicative of an inter-transaction affinity problem. However, REQID need not be specified on the POST command, because CICS automatically generates a REQID and passes it to the application in EIBREQID.