The functional benefits that DBCTL offers are in the areas of data availability, batch message processing programs (BMPs), additional system service requests, and data entry databases (DEDBs).
Previously, if you did not use DBCTL, and a database was unavailable
when CICS® tried to schedule a program specification block (PSB), the transaction
received a return code to say that the scheduling had failed. DBCTL enables CICS to take advantage of the data availability that IMS™ provides: you
can successfully schedule a PSB even though some of the databases used in
that PSB are unavailable.
Scheduling for database recovery is more flexible because databases in which
blocks (or CIs) have had read or write errors remain available after a DBCTL
restart.
See Enhanced scheduling for more information on data availability and
the system service requests you can use in connection with it.
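For illustration, the following COBOL fragment is a minimal sketch of how a CICS program using EXEC DLI might exploit this; the PSB name ACCTPSB, the segment name ACCOUNT, and the PCB number are assumptions made for the example, not part of any supplied interface.

       WORKING-STORAGE SECTION.
       01  ACCT-SEG                PIC X(80).
       01  DATA-UNAVAILABLE-FLAG   PIC X VALUE 'N'.
       PROCEDURE DIVISION.
      * Schedule the PSB; with DBCTL this can succeed even when some
      * databases named in the PSB are unavailable.
           EXEC DLI SCHD PSB(ACCTPSB) END-EXEC.
      * Ask IMS to return status codes for unavailable data (status
      * group A) instead of abending the transaction.
           EXEC DLI ACCEPT STATUSGROUP('A') END-EXEC.
      * Attempt a retrieval; DIBSTAT values BA and BB indicate that
      * the data needed by this request was unavailable.
           EXEC DLI GU USING PCB(1)
                SEGMENT(ACCOUNT) INTO(ACCT-SEG) END-EXEC.
           IF DIBSTAT = 'BA' OR DIBSTAT = 'BB'
               MOVE 'Y' TO DATA-UNAVAILABLE-FLAG
           END-IF.
           EXEC DLI TERM END-EXEC.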
Running batch jobs (both CICS shared database and "native" IMS batch
jobs) as BMPs enables you to use system service requests, such as symbolic
checkpoint (CHKP) and extended restart (XRST), and to access GSAM databases,
which you could not do with CICS shared database. With BMPs, all logging
goes to a single log (the IMS log), which eliminates the need for separate batch
logs. BMPs also support automatic backout, and automatic restart from the
last checkpoint (without requiring JCL changes). BMPs communicate directly
with the DBCTL address space instead of accessing databases through CICS, and enable
concurrent access to databases without the need to use IMS data sharing.
Using BMPs gives a performance advantage over running the same programs as
CICS shared database jobs, both in the elapsed time of the batch jobs themselves
and in transaction response and throughput, because BMPs do not delay the
CICS online workload as much. See Batch message processing programs (BMPs) for
more information.
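As a hedged illustration of these requests, the following COBOL fragment sketches a BMP that issues an extended restart call and then a symbolic checkpoint; the program name, checkpoint ID, save area, and simplified PCB masks are assumptions made for the example only.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. ACCTBMP.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       77  XRST-FUNC      PIC X(4)  VALUE 'XRST'.
       77  CHKP-FUNC      PIC X(4)  VALUE 'CHKP'.
      * I/O area length parameter, retained for compatibility.
       77  IO-LEN         PIC S9(9) COMP VALUE 100.
      * Blanks request a normal start; a checkpoint ID requests restart.
       77  XRST-WORK      PIC X(12) VALUE SPACES.
       77  CKPT-ID        PIC X(8)  VALUE 'CKPT0001'.
       77  SAVE-LEN       PIC S9(9) COMP VALUE 100.
       01  SAVE-AREA      PIC X(100).
      * Simplified PCB masks; a real program maps the PCB fields.
       LINKAGE SECTION.
       01  IO-PCB         PIC X(60).
       01  ACCT-PCB       PIC X(60).
       PROCEDURE DIVISION USING IO-PCB ACCT-PCB.
      * XRST must be the first DL/I call; on an automatic restart IMS
      * returns the checkpoint ID, restores SAVE-AREA, and repositions
      * GSAM data sets.
           CALL 'CBLTDLI' USING XRST-FUNC IO-PCB IO-LEN
                                XRST-WORK SAVE-LEN SAVE-AREA.
      *    ... database updates for one unit of work go here ...
      * The symbolic checkpoint commits the updates and writes CKPT-ID
      * and SAVE-AREA to the IMS log for a possible restart.
           CALL 'CBLTDLI' USING CHKP-FUNC IO-PCB IO-LEN
                                CKPT-ID SAVE-LEN SAVE-AREA.
           GOBACK.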
Your CICS application programs can use the following IMS system
service requests in addition to those related to data availability:
- DEQ (in its command or call format) releases segments
that were retrieved using the LOCKCLASS keyword or the Q command code. LOCKCLASS
and Q enable an application program to reserve segments for its use.
- LOG (in its command or call format) can be used to write a record
from an application program to the IMS log. You may prefer to use this instead
of EXEC CICS journal commands so that all your DBCTL information is on the IMS log
instead of the CICS log.
See Application programming for DBCTL for more information on using these requests.
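The following COBOL fragment is a sketch only of these two requests in their EXEC DLI command format; the lock class, segment name, PCB number, and log record layout are illustrative assumptions, so check them (including any record prefix the LOG request requires) against the DEQ and LOG command descriptions.

       WORKING-STORAGE SECTION.
       01  CUST-SEG                PIC X(80).
       01  LOG-REC                 PIC X(32)
           VALUE 'ORDER 000123 PRICE OVERRIDE'.
       PROCEDURE DIVISION.
      * Retrieve a segment and reserve it under lock class B for this
      * program's use.
           EXEC DLI GU USING PCB(1)
                SEGMENT(CUSTOMER) LOCKCLASS('B')
                INTO(CUST-SEG) END-EXEC.
      * Release every segment held under lock class B.
           EXEC DLI DEQ LOCKCLASS('B') END-EXEC.
      * Write an application record to the IMS log rather than to a
      * CICS journal, so that all DBCTL information is in one place.
           EXEC DLI LOG FROM(LOG-REC) LENGTH(32) END-EXEC.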
Data entry databases (DEDBs) provide the same
features as HDAM databases (with the exceptions of secondary indexing and
logical relationships). They also have a number of advantages. Using DEDBs
enables you to have very large databases with high availability. DEDBs are
designed to provide efficient storage and fast online gathering, retrieval,
and update of data, using VSAM entry sequenced data sets (ESDSs).
DEDBs are hierarchic databases that can contain up to 127 segment types.
One of these segment types is always the root segment. The remaining 126
can all be direct dependent (DDEP) segments, or 125 DDEP segments and one
sequential dependent (SDEP) segment. A DEDB structure can have as many as
15 hierarchical levels.
DEDBs are made up of database records stored in a set of up to 240 areas.
Each area contains a range of database records (which you can specify using
the DEDB randomizing routine) that contain the entire logical structure for
a set of root segments and their dependent segments. Areas are independent
of each other, are individually recognized, can be accessed by multiple programs
and DEDB utilities, are the basis for recovery procedures, and are largely
transparent to application programs.
DEDBs provide the following advantages:
- Large databases
- Areas can be as large as 4 gigabytes, and because you can have up to 240
areas in a single database, you can use very large databases, which you would
have to partition if you were not using DEDBs.
- Flexible design
- Each area can be designed to meet your storage, availability, performance,
and application needs. Areas can be separately reorganized and reacquired.
- You use the DEDB direct reorganization utility to physically reorganize
DEDBs to reduce ESDS fragmentation without taking them offline.
- Increased data availability
- If a DEDB area is not available, a PSB that requires that database can still
be scheduled, provided the database itself is available. A PSB that requires
the unavailable area is also scheduled, and the program receives a status code
indicating the condition when it tries to use that area.
You can therefore delay recovery until it is convenient to take the area offline.
- You can have up to seven copies of the same area. Each copy is called
an area data set (ADS) and all are automatically maintained in synchronization.
This is called multiple area data set (MADS) support. Write operations are
done to each ADS, but read operations are done from only one ADS. With MADS,
read and write errors are much less common because, if data cannot be read
from, or written to, the first copy, the next copy will automatically be used.
Read errors are transparent to application programs (except in the rare instance
where a read operation is unsuccessful with all ADSs).
- You can use DEDB utilities, which are run on an area basis and can
run concurrently with online updates. This helps to reduce the time
for which areas have to be taken offline. For example, you can avoid using
offline database recovery by using the DEDB area data set create utility.
This online utility makes a new corrected copy of an area from existing copies
of that area. It creates one or more copies from multiple DEDB ADSs during
online transaction processing, enabling application programs to continue while
the utility is running.
- You use the DEDB initialization utility to initialize one or more data sets or one or more areas of
a DEDB offline.
- You can use the DEDB area data set compare utility if you suspect a problem
with the consistency of the data. It compares control intervals (CIs)
of different copies of an area, and lists all the CIs that do not have equal
content. In the case of unequal comparison, full dumps of up to ten unmatched
CIs are printed out on the device you have specified.
- Efficient data retrieval and entry
- Where possible, DDEP segments are physically stored in hierarchic order in the
same CI as their parent segment, which can make retrieval faster.
- The SDEP segment (located at the end of the ADS) is designed especially
for fast, online, mass insert in applications such as data collection, auditing,
and journaling. This is because SDEP segments for an area are stored rapidly,
regardless of the root on which they are dependent. For example, in a banking
application, transaction data can be collected during the day and inserted
as SDEPs in an account database. At the end of the day, these transactions
can be reprocessed by first retrieving them using the sequential dependent
scan utility. This online utility retrieves SDEP segments en masse and copies them to
a sequential data set. You can then process this data set offline using your
own programs; for example, for a statistical analysis. The area involved remains
available while the utility is running.
- You can delete SDEPs using the DEDB sequential dependent delete utility, which
deletes SDEP segments within a specified limit of a DEDB area.
- You can use high speed sequential processing (HSSP), which is useful
for applications that do large scale sequential updates to DEDBs. HSSP can
reduce DEDB processing time, enable an image copy to be taken during a sequential
update job, and minimize the amount of log data written to the IMS log. For further
guidance, see High speed sequential processing (HSSP).
- Improved performance
- Pathlength is reduced because DEDBs use the Media Manager component of the MVS™ Data Facility Product (MVS/DFP).
- You can improve speed of access, or concurrent access, to DEDBs by tuning
DEDB buffer pool specifications. (See DEDB performance and tuning considerations.)
- Logging overhead is reduced because only after-images are logged and because
logging is done during syncpoint processing only.
- The amount of I/O needed for each SDEP segment inserted can be very low,
because SDEPs are gathered from various transactions, stored in last-in first-out
order in one buffer, and are written out only when that buffer is full. This
means that many transactions "share the cost" of SDEP writes.
- Most DEDB processing is done in parallel to allow multithreading. Writes
to the database are done by parallel processes called output threads; you can
specify up to 255 of them. Furthermore, DEDBs are not updated during application
program processing; instead, the updates are kept in buffers until a syncpoint
occurs. (See When updates are written to databases.) This means that waiting applications
can be processed sooner, which improves throughput on multiprocessors.
- DEDBs have their own resource manager and normally need to interact very
infrequently with program isolation or the IRLM (unless you are using block
level sharing). DEDBs maintain their own buffer pool.
- You can use subset pointers in your application programs to speed up processing. A major problem in some applications is the need to process
long twin chains of segments. Occasionally database design must be modified
because some database records have excessively long twin chains. Subset pointers
give direct access to subsets of long twin chains of segments, which can speed
up application processing because segments located in front of the subset
do not have to be searched. Each pointer points to the first occurrence of
a subset in a range of direct dependent segments. See Command codes to manage subset pointers in DEDBs and Keywords and corresponding command codes for information about using subset pointers in application
programs; a brief sketch also follows this list. (See the IMS Database Administration Guide or the IMS Administration Guide: Database Manager for
guidance on database structure.)
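The following COBOL fragment is a sketch only of retrieving directly at a subset with the GETFIRST keyword (the keyword equivalent of the R command code); the segment name ORDERDTL, the subset pointer number, and the PCB number are assumptions, and the pointer itself must have been defined for the segment in the DBD.

       WORKING-STORAGE SECTION.
       01  DTL-SEG                 PIC X(80).
       PROCEDURE DIVISION.
      * Go straight to the first segment of the subset identified by
      * subset pointer 1, without scanning the twins in front of it.
           EXEC DLI GN USING PCB(1)
                SEGMENT(ORDERDTL) GETFIRST('1')
                INTO(DTL-SEG) END-EXEC.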
