This section describes how to tune your CICS-DBCTL setup to make efficient use of resources and to help you meet your performance objectives.
System design considerations for CICS® with DBCTL are similar to those that applied to local DL/I. For example, do not allow excessive database accesses or updates in a single UOW.
However, there are some differences.
An additional consideration for CICS is that DBCTL is structured to have one TCB per thread. This allows more concurrent processing, but it means you must specify minimum and maximum numbers of threads that are consistent with your system’s needs. For more information, see Specifying numbers of threads.
The storage specified in the CICS system initialization parameters DSALIM and EDSALIM is used for different resources in a CICS-DBCTL environment. DSALIM specifies the upper limit of the total amount of storage within which CICS can allocate the individual DSAs below the 16MB line. EDSALIM specifies the upper limit of the total amount of storage within which CICS can allocate the individual EDSAs above the 16MB line. With local DL/I, DSA storage was used for the PSB and DMB pools; with DBCTL, these blocks are stored outside CICS. Instead, when specifying DSALIM and EDSALIM, you need to allow for the storage that DBCTL requires in the CICS region for DRA code. This storage is allocated in the CICS region, but not from DSA or EDSA storage. See the CICS System Definition Guide and the CICS Performance Guide for information about specifying DSALIM and EDSALIM, and the IMS System Administration Guide or the IMS Administration Guide: System for guidance on DBCTL storage estimates.
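For illustration only, the two limits are coded as system initialization parameters. The values below are invented and must be sized for your own workload, leaving room in the region for the DRA code outside the DSAs:

   DSALIM=5M,
   EDSALIM=100M,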
CICS can use single-phase commit instead of two-phase commit when, for a particular UOW, DBCTL is the only recoverable resource used. Using single-phase commit in these circumstances improves CICS performance with DBCTL by eliminating unnecessary logging, cutting restart time, decreasing transaction cost, and improving response time in both CICS and DBCTL. For information on using single-phase commit, see the CICS Customization Guide.
From an IMS™ point of view, tuning DBCTL is much like tuning an IMS system. Additional considerations are DRA threads, described in Specifying numbers of threads, and DEDBs, described in DEDB performance and tuning considerations.
To minimize response times, we recommend that you assign a higher dispatching priority to the CICS address space than to the DBCTL address spaces (DBCTL, DLISAS, DBRC). Although CICS can be regarded as a "front end" to DBCTL, remember that CICS also has to manage the network and the application environment for transactions that do not use DL/I, such as those that access DB2® or VSAM. This means that it has very different CPU requirements from other front ends to DBCTL, such as a BMP or an MPP. For example, when a CICS transaction is waiting for a response to a DBCTL request, CICS dispatches other CICS transactions.
We recommend that if IRLM is assigned a priority of n, CICS should have a priority of n-1, DBRC a priority of n-2, and DBCTL and DLISAS a priority of n-3.
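For example (figures invented purely for illustration): if IRLM runs at dispatching priority 100, run CICS at 99, DBRC at 98, and DBCTL and DLISAS at 97.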
For further guidance on assigning priorities, see the IMS System Administration Guide or the IMS Administration Guide: System.
The DRA startup parameters MINTHRD and MAXTHRD specify the minimum and maximum numbers of threads that can process DBCTL DL/I or DEDB requests. (See Defining the IMS DRA startup parameter table for information on DRA startup parameters.)
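As a hedged sketch (the DBCTL identifier and thread values are invented, and the other DFSPRP operands are omitted), the thread limits are coded on the DFSPRP macro in the DRA startup parameter table like this:

   DFSPRP DBCTLID=DBC1,MINTHRD=3,MAXTHRD=10,...

Here DBC1 is a placeholder for the DBCTL region to be connected to; three threads are established when CICS connects, and the DRA never runs more than ten concurrently.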
The IMS system generation parameter MAXREGN specifies the number of regions (or threads), allocated at startup, that DBCTL can handle for all connected CICS systems and BMPs. The number can increase dynamically, to a limit of 255, as needed. (See Generating DBCTL for information on system generation parameters.)
The number you specify for MAXREGN should be no less than the sum of the MINTHRD values specified for the active CICS systems, plus the number of regions needed for BMPs.
In Figure 50, the following threads are in use: one from BMPA, one from BMPB, five from CICSA and three from CICSB, making a total of 10 threads. A MAXREGN of 10 has therefore been specified for DBCTLA.
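In that configuration, MAXREGN might be coded on the IMSCTRL system generation macro as follows (a sketch: the region size and class operands are illustrative, and all other IMSCTRL operands are omitted):

   IMSCTRL ...,MAXREGN=(10,512K,A,A),...

The value 10 covers the ten threads shown in use: five for CICSA, three for CICSB, and one each for BMPA and BMPB.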
The maximum number of threads you can specify in DBCTL is 255. One thread is equivalent to one MVS™ TCB. The number you specify must be large enough for your system’s needs, but a number that exceeds those needs has an adverse effect on the performance of the DRA. A minimum thread value that is higher than your system’s actual minimum activity ties up threads unnecessarily, preventing DBCTL from allocating them to other CICS systems or BMPs. A minimum thread value that is too low can also affect performance: if the level of thread activity falls, the DRA may release threads down to the minimum value, and those threads then have to be reestablished when thread requests increase again.
The number you specify for MAXTHRD should reflect what you consider to be the peak load of DBCTL threads needed. The number of threads you specify will affect performance. The larger the number you have preallocated, the more storage is needed. However, if threads are preallocated, the time needed to allocate them on demand is saved, thus improving response time and throughput. So, if your system is storage constrained, specify a lower value for MINTHRD, and use MAXTHRD as a "safety valve". If response time and throughput are more important than storage requirements, specify a higher number for MINTHRD so that more threads are ready to be used.
Also bear DBCTL thread activity in mind when specifying the MXT system initialization parameter. You use MXT to specify the maximum number of tasks that CICS allows to exist at any time. With DBCTL, MXT should be large enough to allow for the number specified in MINTHRD, plus the number you need for "standard" CICS tasks. (With DB2®, by contrast, there is no minimum number of threads.) See the CICS Performance Guide for general help on MXT.
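For example (figures invented purely for illustration): if MINTHRD is 5 and you expect a peak of about 55 concurrent non-DBCTL tasks, a reasonable starting point would be:

   MXT=60

Monitor task activity and adjust the value from there.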
To help you decide on the optimum values for minimum and maximum numbers of DBCTL threads, monitor thread usage and IMS task throughput (to see if tasks are being delayed), and IMS I/O rates. For details of thread statistics produced, including maximum and minimum thread usage, see DBCTL statistics. See DBCTL data returned to IMS log for details of data produced for monitoring IMS I/O rates. You can also use CICS auxiliary trace to check for queueing for threads and PSBs.
If you use DEDBs, you must define the characteristics and usage of the IMS DEDB buffer pool. You do this by specifying parameters (including DRA startup parameters, as described in Defining the IMS DRA startup parameter table) during IMS system definition or execution.
The main concerns in defining DEDB buffer pools are the total number of buffers in the IMS region and how they are shared by CICS threads. You use the following parameters on the IMS FPCTRL macro to define the number of buffers:

- DBBF: the total number of buffers in the DEDB buffer pool.
- DBFX: the number of those buffers that are page-fixed at IMS initialization and reserved for system use.
The number of buffers available for the needs of CICS threads is the number remaining when you subtract the value specified for DBFX from DBBF. In this discussion, we have assumed a fixed number for DBFX. DBBF must therefore be large enough to accommodate all BMPs and CICS systems that you want to connect to a particular DBCTL.
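As a hedged sketch (values invented; other FPCTRL operands, such as the number of output threads, are omitted):

   FPCTRL DBBF=1000,DBFX=100

leaves 1000 - 100 = 900 buffers available for the NBA requests of connecting CICS systems and BMPs.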
When a CICS thread connects to IMS, its DEDB buffer requirements are specified using a normal buffer allocation (NBA) parameter. For a CICS system, there are two NBA parameters in the DRA startup table:

- CNBA: the total number of buffers requested for the whole CICS system when it connects to DBCTL; this allocation is taken from DBBF.
- FPBUF: the number of buffers from CNBA given to each thread when it schedules a PSB that references DEDBs.
A CICS system may fail to connect to DBCTL if its CNBA value is more than that available from DBBF. An application attempting to schedule a PSB that contains references to DEDBs may receive a schedule failure if the FPBUF value is more than that available from CNBA.
When a CICS system has successfully connected to DBCTL, and the application has successfully scheduled a PSB containing DEDBs, the DRA startup parameter FPBOF becomes relevant. FPBOF specifies the number of overflow buffers each thread will get if it exceeds FPBUF. These buffers are not taken from CNBA. Instead, they are buffers that are serially shared by all CICS applications or other dependent regions that are currently exceeding their NBA allocation.
Because overflow buffer allocation (OBA) usage is serialized, thread performance can be affected by NBA and OBA specifications. If FPBUF is too small, more applications need to use OBA, which may cause delays due to contention. If both NBA and OBA are too small, the application fails. If FPBUF is too large, this affects the number of threads that can concurrently access DEDB resources, and increases the number of schedule failures.
In a CICS-DBCTL environment, the main performance concern is the trade-off between speed and concurrent access. The size of this trade-off is dictated by the kind of applications you are running in the CICS system. If the applications have approximately the same NBA requirements, there is no trade-off: you can specify an FPBUF value large enough that the OBA is never needed. This speeds access, and because no buffers in CNBA are wasted, a larger number of threads can use DEDBs concurrently. The more the buffer requirements of your applications vary, the greater the trade-off. If speed of access matters more, increase the value of FPBUF; threads avoid the serialized OBA function, but fewer of them can access DEDBs concurrently. If concurrent access matters more, do not increase the value of FPBUF; speed of access decreases, because this thread, and possibly others, will need to use the OBA function.
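As a hedged numeric illustration (all values invented), suppose the DRA startup table specifies:

   DFSPRP ...,CNBA=60,FPBUF=10,FPBOF=4,...

Up to six threads (60/10) can then hold a full NBA allocation at the same time. A thread that uses up its ten NBA buffers can request up to four overflow buffers, but because OBA usage is serialized, it may have to wait for other threads that are also in overflow. Raising FPBUF to 20 would halve the number of concurrent DEDB threads to three, while making it less likely that any of them needs the OBA.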
For information on specifying the parameters CNBA, FPBOF, and FPBUF, see Defining the IMS DRA startup parameter table. For further guidance on DEDB buffer specification and tuning, see sections on DEDBs in the IMS Administration Guide: Database Manager and the IMS Administration Guide: System.
Using DEDBs can give you performance improvements in the following areas:
DEDB writes are not done during the life of the transaction, but are kept in buffers. Actual update operations are delayed until a synchronization point and are done by asynchronous processing, using output threads in the control region. Each output thread runs under a service request block (SRB), a separately dispatchable MVS unit of work. You can specify up to 255 output threads. This means that transactions are not delayed by DEDB write I/O at synchronization point, and that writes for different areas can proceed concurrently.
The cost of I/O per SDEP segment inserted can be very low because SDEP segments are gathered in one buffer and are written out only when it is full. This means that many transactions can "share the cost" of SDEP CI writes to a DEDB. SDEPs should have larger CIs to reduce I/Os.
DEDB log buffers are written to OLDS only when they are full. This means less I/O than would be needed with full function databases.
Using DBCTL enables you to use high-speed sequential processing (HSSP). HSSP is useful with applications that do large-scale sequential updates to DEDBs, which may require an image copy after the DEDBs are updated. For further guidance on HSSP and the benefits it provides, see the IMS Release Planning Guide.
IMS includes the asynchronous database buffer purge facility. At syncpoint time, when database buffers are to be flushed, buffers that are to be written to different devices are written concurrently, rather than serially, as in earlier releases of IMS. (For further guidance, see the IMS System Administration Guide or the IMS Administration Guide: System).
The asynchronous database buffer purge facility should improve response time for transactions that update databases on multiple devices in a single UOW.
CICS regions that previously used local DL/I can obtain considerable virtual storage constraint relief, because storage areas that formerly resided in the CICS region, such as the DL/I code and the PSB and DMB pools, now reside in the DBCTL address spaces.
However, DBCTL requires some MVS CSA storage, which may lower the maximum available region size in the MVS system. See the IMS System Administration Guide or the IMS Administration Guide: System for details of CSA and other DBCTL storage requirements.
You can obtain throughput improvements on multiprocessors because the CICS-DBCTL interface resides in multiple address spaces and because it uses separate MVS subtasks to manage threads.
If you currently use MRO function shipping, converting the CICS database-owning region (DOR) to use DBCTL should result in improved throughput, due to multiprocessor exploitation and the reduced instruction path length of the CICS-DBCTL interface. DBCTL provides a separate TCB for each CICS application thread, which significantly increases the amount of concurrent processing.
You can obtain further performance improvements by using DEDBs instead of full-function databases. See Access to data entry databases (DEDBs) for introductory guidance on DEDBs, and Using DEDBs for information on the performance aspects.
Migrating your CICS shared database batch jobs and IMS batch jobs to BMPs simplifies log management. Although a BMP may run more slowly than the same job running as an IMS batch job, performance for CICS shared database jobs running as BMPs should improve. Observations show that the elapsed time for a CICS shared database job converted to run as a BMP is considerably shorter, and that the degradation of the CICS online workload, in terms of transaction response and throughput, is significantly less.