Migration Guide: Program Update Tapes

Changes or New Functions Found on Program Update Tapes (PUTs)

Table 3 includes changes or new functions to the TPF 4.1 system that you will apply as program update tapes (PUTs). See the chapters that follow in this publication for more information about these changes or new functions and the resulting migration considerations.

The information in Table 3 is presented in alphabetic order by the area of change.

Table 3. Areas with Changes or New Functions Found on Program Update Tapes (PUTs)

Area with Changes or New Functions Description of the Changes or New Functions
C++ Class Library Support The TPF 4.1 system now provides support for C++ class libraries. C++ class libraries provide you, the programmer, with more powerful tools for the development and maintenance of object-oriented programs. Built on the solid foundation of the C language, the C++ language adds support for object-oriented programs and many other features without sacrificing any of the power, elegance, or flexibility of the C language. C++ class library support provides specific class libraries that you can use and extends the power of the C++ language, which enables you to take advantage of more powerful C++ features and standards.

Although the TPF 4.1 system does not provide all the C++ class libraries that are available, it does provide the I/O Stream Class Library.

In addition, the TPF 4.1 system provides support for the STLport standard template library.

See C++ Class Library Support (APARs PJ26187 and PJ26173) for more information about C++ class library support.
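
The following is a minimal sketch, not taken from the TPF publications, of the kind of program that the I/O Stream Class Library and STLport make possible. The header names and the program itself are assumptions for illustration only; depending on the compiler level, the standard <iostream> header and the std namespace may apply instead of the classic iostream.h form.

    // Minimal sketch: stream output plus an STL container from STLport.
    // Illustrative only; header choices depend on the compiler level.
    #include <iostream.h>   // I/O Stream Class Library (classic form)
    #include <vector>       // standard template library container (STLport)

    int main()
    {
        std::vector<int> putLevels;        // collect a few PUT numbers
        putLevels.push_back(9);
        putLevels.push_back(13);

        for (std::vector<int>::size_type i = 0; i < putLevels.size(); ++i) {
            cout << "PUT " << putLevels[i] << endl;   // formatted stream output
        }
        return 0;
    }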

C++ Class Library Support for Application Support Class Library With APAR PJ27627, the TPF 4.1 system provides support for a subset of the Application Support Class Library. This library provides specific classes that extend the power of the C++ language. The specific classes supported are: IBinaryCodedDecimal and decimal, IDate, IException, IString, ITime, ITimeStamp, and I0String.

As described in the OS/390 C/C++ IBM Open Class Library Reference, header files idate.hpp, idecimal.hpp, iexcbase.hpp, istring.hpp, itime.hpp, itmstamp.hpp, and i0string.hpp are used by applications to make use of the Application Support Class Library (CPP3). All other header files shipped with APAR PJ27627 are for implementation only.
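
As an illustration only, the following minimal sketch shows an application that includes one of the headers named above and uses the IString class; the program is an assumption and is not taken from the TPF publications. See the OS/390 C/C++ IBM Open Class Library Reference for the complete class interfaces.

    // Minimal sketch of using the IString class from the Application
    // Support Class Library.  Illustrative only.
    #include <iostream.h>
    #include <istring.hpp>   // IString

    int main()
    {
        IString system("TPF");
        IString release = system + IString(" 4.1");   // string concatenation

        cout << release << " (" << release.length()
             << " characters)" << endl;
        return 0;
    }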

The TPF 4.1 system also provides enhanced I/O Stream Classes that support the Application Support Class Library.

A tar file, which is available for this APAR, includes Application Support Class Library source code.

To build applications with a partitioned data set (PDS), the SEARCH option for the compiler must be set to xxx.inl (where xxx is the PDS for the include files). This PDS is shipped with the Application Support Class Library code.

See Appendix A, PUT 2-15 Interface Changes by Authorized Program Analysis Report (APAR) for more information about APAR PJ27627 and see C++ Class Library Support (APARs PJ26187 and PJ26173) for more information about C++ class library support. See the OS/390 C/C++ IBM Open Class Library User's Guide for more information about the Application Support Class Library and the I/O Stream Class Library.

C++ Support The TPF 4.1 system now provides support for the C++ language, which is designed to take advantage of object-oriented (OO) programming concepts. Except for minor details, the C++ language is a superset of the C language. In addition to the facilities provided by the C language, the C++ language provides flexible and efficient facilities for defining new data types. You can partition an application into manageable pieces by defining new data types that closely match the logical design of the application. When used well, these techniques result in programs that are shorter, easier to understand, and easier to maintain.
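
For example, a new data type of the kind described above might look like the following minimal sketch; the class and its members are hypothetical and are shown for illustration only.

    // Illustrative only: a small application-defined data type.
    // The class and its members are hypothetical, not a TPF interface.
    class FlightSegment
    {
    public:
        FlightSegment(int flightNumber, int seatsAvailable)
            : flight(flightNumber), seats(seatsAvailable) {}

        bool book()                       // reserve one seat if any remain
        {
            if (seats == 0) {
                return false;
            }
            --seats;
            return true;
        }

        int seatsRemaining() const { return seats; }

    private:
        int flight;
        int seats;
    };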

Dynamic link libraries (DLLs) are now supported. A DLL is a collection of one or more functions or variables gathered in a load module that can be run or accessed from a separate application load module. The key concept in DLLs is that functions or variables can be dynamically linked while the application is running rather than statically when the application is built. You can, therefore, call a function or use a variable in a load module other than the one that contains the definition. This allows you greater flexibility in accessing library functions or variables.

See C++ Support (APAR PJ25084) for more information about C++ support.

C Function Trace C function trace provides the ability to trace ISO-C programs. When an ISO-C program has been compiled with the TEST option of one of the IBM C/370 family of compilers supported by the TPF 4.1 system, C function trace provides the programmer with relevant information to expedite the analysis of C program problems.

APAR PJ23493 provides trace information in the C function trace table for breakpoints other than program entry breakpoints and program exit breakpoints.

See C Function Trace (APAR PJ19422) for more information about C function trace. See Trace Information in the C Function Trace Table (APAR PJ23493) for more information about APAR PJ23493.

Continuous Data Collection (CDC)

Continuous data collection (CDC) collects real-time TPF 4.1 system performance information. CDC uniquely stores the data in a relational database by using the TPF Application Requester (TPFAR) feature. An offline application is then used to interpret and display the data. This information is available in the database on a continual basis, so you can use CDC as a monitoring tool. You can run CDC with a minimum impact on TPF 4.1 system performance. Data collected in CDC is a subset of the data collected by the system performance and measurement package. User exits assist in recording additional data.

Coupling Facility (CF) Record Lock Support The limited lock facility (LLF) and the concurrency filter lock facility (CFLF), which are two external lock facilities (XLFs) supported by the TPF 4.1 system, were required to control access to data shared by two or more processors in a loosely coupled complex. CF record lock support now provides the option of using one or more CFs as XLFs. See Coupling Facility (CF) Record Lock Support (APAR PJ26707) for more information about CF record lock support.
Coupling Facility (CF) Support Coupling facility (CF) support allows TPF routines to use a CF for high-performance, high-availability data sharing. A coupling facility (CF) is an IBM System/390 processor used to centralize storage for all attached processors in a processor configuration by providing shared storage and shared storage management functions. TPF services support data sharing while maintaining data integrity and consistency.

See Coupling Facility (CF) Support (APAR PJ25781) for more information about CF support.

Coverage Display Tools Coverage display tools provides the following display-type commands. See TPF Operations for more information about these commands.

Command
Description

 ZCHCH 
You can now obtain a list of the file pool addresses chained from a specific location in a specific record, and display each file pool address or only the last address in the chain.

 ZDDSI 
You can now display the status of a subchannel for an input/output (I/O) device.

 ZDEBB 
You can now display blocked tapes online and perform the following functions:
  • Display records on a blocked tape
  • Shift a blocked tape forward or backward
  • Search a blocked tape for a specific item.

 ZDECB 
You can now display information about entry control blocks (ECBs) that are in use.

 ZDFCT 
You can now display the characteristics of a record type as well as the file address and module, cylinder, head and record (MCHR) information for each extent of a record type.

 ZDMOD 
You can now display the following main storage addresses for a module file:
  • File status table
  • Device number
  • Queue length
  • Control status table.

 ZDPLT 
You can now display:
  • Program names from the program allocation table (PAT) and the extra PAT slots with the specified linkage type.
  • The date and time when the PAT was created.

 ZDTOD 
You can now display:
  • The current value of the time-of-day (TOD) clock, including a translation to a date and time. The TOD clock is set to Greenwich Mean Time (GMT).
  • The corresponding date and time for a specific TOD clock value.
  • The corresponding TOD clock value for a specific date and time.

 ZDWGT 
You can now display the terminal control table (WGTA) entry for a specific line number, interchange address, and terminal address (LNIATA) and CPU ID.

 ZFECB 
You can now display a list of active ECBs or a formatted display of a particular ECB.

 ZMPIF 
You can now display the current contents of the Multi-Processor Interconnect Facility (MPIF) I/O trace table.

 ZSTAT 
You can now display the delayed and deferred ECB count.
Enhancements to TPF MQSeries Local Queue Manager Support Enhancements to TPF MQSeries local queue manager support include the following:
  • The TPF MQGET application programming interface (API) now supports the MQGMO_WAIT option, which allows an entry control block (ECB) to be suspended when a queue is empty and to resume when a message arrives. A wait interval is specified by the application to indicate how long the MQGET API should wait for the message. The MQGMO_WAIT option works with processor unique and processor shared queues.
  • TPF local normal queues now support trigger type EVERY, which triggers a new ECB every time a message arrives on the application queue. Trigger type EVERY works with processor unique and processor shared queues.
  • TPF local normal queues now allow you to associate a process object with a queue. This allows you to trigger a program in the process object, when a message arrives on that queue.
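
The first item in the list above describes the MQGMO_WAIT option. The following is a minimal sketch of an MQGET call that waits for a message, written against the standard MQI C interface (cmqc.h); the connection and object handles are assumed to have been obtained already with MQCONN and MQOPEN, and the wait interval and buffer size are assumptions for illustration.

    // Minimal sketch of MQGET with MQGMO_WAIT (standard MQI C interface).
    // Hconn and Hobj are assumed to come from MQCONN and MQOPEN; the
    // 5-second wait interval and the buffer size are illustrative only.
    #include <cmqc.h>

    void getWithWait(MQHCONN Hconn, MQHOBJ Hobj)
    {
        MQMD   md  = {MQMD_DEFAULT};     // message descriptor
        MQGMO  gmo = {MQGMO_DEFAULT};    // get-message options
        MQBYTE buffer[1024];
        MQLONG dataLength;
        MQLONG compCode;
        MQLONG reason;

        gmo.Options      = MQGMO_WAIT;   // suspend the ECB until a message arrives
        gmo.WaitInterval = 5000;         // wait up to 5 seconds (milliseconds)

        MQGET(Hconn, Hobj, &md, &gmo, sizeof(buffer), buffer,
              &dataLength, &compCode, &reason);

        if (compCode == MQCC_OK) {
            // process the message in buffer[0 .. dataLength-1]
        } else if (reason == MQRC_NO_MSG_AVAILABLE) {
            // the wait interval expired without a message arriving
        }
    }
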
Enterprise Storage Server (ESS) disk storage system support ESS disk storage system support for the TPF 4.1 system exploits the functions provided by the ESS. The ESS disk storage system is defined to the TPF 4.1 system as a type of IBM 3990 storage controller with attached IBM 3380 or 3390 device types configured in TPF mode. The ESS disk storage system provides the following performance, scalability, and accessibility improvements:
  • You can add capacity and connectivity and upgrade performance while your complex continues to run. Specifically, additional channel command words (CCWs) improve performance for standard TPF disk input/output (I/O) operations for single record read and write operations as well as full track I/O operations used during TPF transaction services, copy, and capture and restore processing.
  • The ESS disk storage system scales from 420 gigabytes (GB) to 11 terabytes (TB).
  • The ESS disk storage system works with a variety of platforms including IBM System/390, IBM VM/ESA, and TPF to name a few.
Expression Enhancements for the TPF Debuggers Expression enhancements for the TPF debuggers provides the following enhancements for TPF Assembler Debugger for VisualAge Client and TPF C Debugger for VisualAge Client (referred to as the assembler debugger and C debugger, respectively, in the remainder of this information):
  • Assembler debugger symbolic support has been added to the assembler debugger listing view.
  • Global symbol display for both assembler and C has been added to the assembler and C debuggers.
  • Support for the ecbptr function has been added to the C debugger.
  • You can now use expressions with all breakpoint types in the assembler debugger.
  • Local variable support was added to the assembler debugger to display the operands of the current line of execution.
  • You can now view expressions in different data representations in the assembler debugger.

See Expression Enhancements for the TPF Debuggers (APAR PJ27905) for more information about expression enhancements for the TPF debuggers.

Fiber Channel Support Fiber channel support allows the TPF 4.1 system to exploit the performance features associated with IBM System/390 fiber channels. Input/output (I/O) devices attach to the IBM System/390 fiber channels through native channel-attached control units or through a fiber channel bridge device to control units attached to Enterprise Systems Connection (ESCON) Architecture for migration.
FIFO Special File Support

FIFO special file support builds on the infrastructure provided previously with TPF Internet server support (APARs PJ25589 and PJ25703) and open systems infrastructure (APAR PJ26188). FIFO special file support provides the following:

  • Support for FIFO special files. A FIFO special file is a file that is typically used to send data from one process to another so that the receiving process reads the data in first-in-first-out (FIFO) format. A FIFO special file is also known as a named pipe. This support provides a method for independent processes to communicate with each other by using TPF file system functions, such as the read and write functions.
  • Enhancements to the select function to allow the use of file descriptors for named pipes.
  • A syslog daemon to provide a message logging facility for all application and system processes. Internet server applications and components use the syslog daemon for logging purposes and can also send trace information to the syslog daemon. Messages can be logged to file or to tape. Remote syslog daemons can also log messages to the local syslog daemon through remote sockets.

See FIFO Special File Support (APAR PJ27214) for more information about FIFO special file support.
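
The following is a minimal sketch of the named-pipe usage described above, using standard POSIX-style functions (mkfifo, open, write, and close); the path name is an assumption for illustration, and a second, independent process would open the same path and read the data with the read function.

    // Minimal sketch: send data through a FIFO special file (named pipe).
    // The path name is illustrative only.
    #include <fcntl.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int sendToPipe(const char *text)
    {
        const char *path = "/tmp/example.fifo";   // hypothetical path

        mkfifo(path, 0666);                       // create the FIFO if it does not exist

        int fd = open(path, O_WRONLY);            // blocks until a reader opens the pipe
        if (fd < 0) {
            return -1;
        }
        write(fd, text, strlen(text));            // the peer reads this in FIFO order
        close(fd);
        return 0;
    }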

File System Support The TPF 4.1 system now provides support for a file system. The key concepts of file system support are an application programming interface (API) and a C run-time environment that supports the main file system functions. Implementation of file system support eases the porting of applications by providing a standard and open interface.

The file access API contains all of the standard C library functions and part of the Portable Operating System Interface for Computer Environments (POSIX) standards and reduces the complexity of TPF applications suited to flat files by providing a flat-file data model as a simpler alternative to the TPF linked-record architecture.

See File System Support (APAR PJ25089) for more information about the file system.
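
As an illustration of the flat-file data model described above, the following minimal sketch uses the standard C library portion of the file access API; the path name is an assumption.

    // Minimal sketch of flat-file access through the standard C library
    // functions of the file access API.  The path name is illustrative only.
    #include <stdio.h>

    int writeGreeting(void)
    {
        FILE *fp = fopen("/tmp/greeting.txt", "w");   // hypothetical path
        if (fp == NULL) {
            return -1;
        }
        fputs("hello from the TPF file system\n", fp);
        fclose(fp);
        return 0;
    }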

Additional file system support includes:

  • Authorized Program Analysis Report (APAR) PJ26174, which provides additional commands to manage the file system. See Table 1280 for a list of these functional messages.
  • APAR PJ26713, which enhances the performance of the file system.
File System Tools File system tools enhances the file system by allowing you to do the following:
  • Activate scripts and TPF segments from the command line
  • Pass data from the standard output (stdout) of one ZFILE command to another as standard input (stdin) by using a vertical bar (|) as a pipe
  • Use ZFILE commands to support Web hosting and scripting
  • Emulate additional Portable Operating System Interface for Computer Environments (POSIX) functions to process file system files with more than one level of redirection
  • Preserve the meaning of special characters by using a quoting mechanism in the ZFILE commands.

See File System Tools (APAR PJ27277) for more information about file system tools.

File Transfer Protocol (FTP) Server Support The TPF 4.1 system now provides FTP server support, which allows you to transfer files between the TPF 4.1 system and a remote host that supports Transmission Control Protocol/Internet Protocol (TCP/IP) and FTP clients. FTP server support provides the following benefits:
  • Reliable file transfer. FTP server support is built on the transport layer of Transmission Control Protocol services.
  • Features and options such as the following:
    • User authentication
    • Data conversion
    • Directory listings.

See File Transfer Protocol (FTP) Server Support (APAR PJ27028) for more information about FTP server support.

Heap Storage An application may allocate and release system heap storage using the GSYSC and RSYSC macros, respectively. The contiguous storage allocated by the GSYSC macro is in the system virtual memory (SVM) address space and is accessible to all ECBs at the same address. The system heap storage is a convenient way to share data among ECBs but must be used with care. The storage obtained is not attached to the ECB; therefore, the application must provide storage management and cleanup.

See ISO-C File Resident Support (APAR PJ21167) for more information about the system heap storage.

System heap enhancements (APAR PJ28363) allows a TPF 4.1 system with up to 2 GB of storage to have a large system heap area without losing large amounts of real storage. The CORREQ system initialization program (SIP) macro was updated to include the SSPS parameter, which allows you to define the size of the system heap area. In TPF 4.1 systems with 2 GB of storage, the system heap area is permanently backed with real storage. In TPF 4.1 systems with less than 2 GB of storage, if there is no need to remove the real storage to make room for system heap virtual addresses, the system heap area is backed with 4 KB frames as each system heap storage request is made.

See PUT 16 Interface Changes by Authorized Program Analysis Report (APAR) for more information about APAR PJ28363. See TPF System Generation for more information about SIP and the CORREQ macro.

High-Performance Routing (HPR) Support HPR support allows the TPF 4.1 system to connect to a Systems Network Architecture (SNA) network as an HPR rapid transport protocol (RTP) node.

See High-Performance Routing (HPR) Support (APAR PJ25760) for more information about HPR support.

Infrastructure for 32-Way Loosely Coupled Processor Support Infrastructure for 32-way loosely coupled processor support provides necessary prerequisite infrastructure support for future expansion to 32-way loosely coupled processors. This support includes the following:
  • Enhancements to the internal event facility (IEF) so that an application processor can track responses from as many as 32 loosely coupled processors.
  • Changes to the communication control record structure so that additional loosely coupled processors can be added without reorganizing the control record structure.
  • Enhancements to the processor resource ownership table (PROT) to support additional #PRORI ordinals, and enhancements to the E-type loaders to remove the constraint of a maximum of 8 loosely coupled processors.
Note:
Program update tape (PUT) 13 does not remove the constraint of a maximum of 8 loosely coupled processors. Additional functions are required to complete 32-way loosely coupled processor support.

See Infrastructure for 32-Way Loosely Coupled Processor Support (APAR PJ27387) for more information about infrastructure for 32-way loosely coupled processor support.

Integrated Online Pool Maintenance and Recoup Support Integrated online pool maintenance and recoup support enhances pool utilities in a TPF 4.1 system environment by doing the following:
  • Eliminating most offline processing
  • Eliminating recoup and pool general files
  • Increasing performance and data integrity
  • Allowing all phases of recoup to be run in NORM state
  • Providing multiprocessor and multi-I-stream capability
  • Providing online historical data
  • Providing recoup and PDU fallback capability.

See Integrated Online Pool Maintenance and Recoup Support (APAR PJ27469) for more information about integrated online pool maintenance and recoup support.

ISO-C Support With ISO-C support, you can write applications using an ANSI/ISO-conforming implementation of C. This brings TPF applications closer to open systems.

A few highlights of ISO-C support are:

  • Standard support for all ANSI/ISO C freestanding environment language features
  • Removal of the 4 KB limitation for C object modules, stack frames, and static blocks
  • Removal of the 1 MB limitation for heap storage
  • Easy migration of existing and potential TARGET(TPF) C applications

See ISO-C Support (APAR PJ17852) for more information about ISO-C support.

Additional ISO-C support includes:

Link Map Support for C Load Modules Link map support makes it easier to debug problems that occur in C load modules. C load modules loaded to the online TPF 4.1 system will contain link maps that can be displayed using a new command. Link maps will also be included in certain types of dumps that include C load modules.

See Link Map Support for C Load Modules (APAR PJ24845) for more information about link map support.

Logical Record Cache and Coupling Facility (CF) Cache Support Logical record cache and CF cache support further exploits CF support and CF record lock support, which were provided on program update tape (PUT) 9 and 11 respectively. With logical record cache support you can use the logical record cache for data consistency and to keep track of data that resides in the local cache and in permanent storage; you can create processor shared caches and processor unique caches. In contrast, CF cache support supports processor shared caches. See Logical Record Cache and Coupling Facility (CF) Cache Support (APAR PJ27083) for more information about logical record cache and CF cache support.
Mapping of Airline Traffic over Internet Protocol (MATIP) Support MATIP support allows the TPF 4.1 system to receive and transmit airline reservation, ticketing, and messaging traffic over a Transmission Control Protocol/Internet Protocol (TCP/IP) network. MATIP support is provided for the communication of two main types of airline traffic: transactional and messaging. The ZMATP command and seven user exits have been provided to enable you to use MATIP support, which can coexist with your current network configurations.

See Mapping of Airline Traffic over Internet Protocol (MATIP) (APAR PJ26161) for more information about MATIP support.

Mapping of Airline Traffic over Internet Protocol (MATIP) Enhancements Mapping of Airline Traffic over Internet Protocol (MATIP) enhancements expands MATIP support by providing a way to define a host descriptor table for Type-A and Type-B hosts. The ZMATP command is updated with new parameters, there are new error messages, and a new user exit is added.

Agent set control unit (ASCU) information, which was provided through user exits, is also provided through the ZMATP command and can be associated with a specific host name. ASCU information is preserved on file, so if you do an initial program load (IPL) of the TPF 4.1 system you do not need to reenter the information.

See Mapping of Airline Traffic over Internet Protocol (MATIP) Enhancements (APAR PJ26693) for more information about MATIP enhancements.

Message Queue Interface (MQI) Client With the MQI client, you can write applications using the message queue interface (MQI). The MQI client connects to MQSeries queue managers that support the MQSeries function using LU 6.2 sessions. Using a remote procedure call (RPC) type of interface, the MQI client sends the MQI function calls to MQSeries for processing.

See Message Queue Interface (MQI) Client (APAR PJ22434) for more information about MQI client support.

Multiple I-Stream DASD I/O Support Multiple I-stream DASD I/O support allows:
  • The TPF 4.1 system to process most DASD input/output (I/O) requests from any I-stream
  • You to take advantage of processors with more I-streams by not overloading the main I-stream with DASD I/O related work.

See Multiple I-Stream DASD I/O Support (APAR PJ21313) for more information about multiple I-stream DASD I/O.

Multiple Module Copy Support Multiple module copy support allows you to:
  • Run more than one copy function at the same time.
  • Display the status of direct access storage device (DASD) modules that are being copied.
  • Set the maximum number of DASD modules that can be copied concurrently in a processor complex, on a channel, on a control unit, or on a processor.
  • Copy a DASD module to the same channel or control unit as a specified prime DASD module.
  • Cancel copying a DASD module from a processor that is not performing the copy function.
  • Restart all DASD module copies in a processor complex.
Multibyte Character, Wide Character, and Locale (MWL) Support Multibyte and wide-character functions, header files, and support for extended locales are now part of the TPF 4.1 system. Extended locales are the locale definition files based on the localedef utility that is provided with IBM C/C++ compilers on the IBM System/390 platform. The localedef utility processes locale definition files and produces the locale load modules.
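
The following is a minimal sketch of the standard locale and multibyte-to-wide-character functions that this support covers; passing an empty string to setlocale, which selects the installed locale, is an assumption for illustration, and actual locale names depend on the locale load modules built with the localedef utility.

    // Minimal sketch of the standard locale and wide-character functions.
    // The setlocale argument is illustrative only.
    #include <locale.h>
    #include <stdlib.h>
    #include <wchar.h>

    size_t widen(const char *mbText, wchar_t *wide, size_t wideSize)
    {
        setlocale(LC_ALL, "");                    // select the installed locale
        return mbstowcs(wide, mbText, wideSize);  // multibyte to wide characters
    }
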
Open Systems Infrastructure

Open systems infrastructure eases the porting of applications written for other systems to run in the TPF 4.1 system. Infrastructure components include:

  • Shared memory using the X/Open interface to allow sharing of data among processes
  • Pipes using a POSIX interface for interprocess communications
  • Enhanced signal support so that:
    • A signal handler remains installed after a signal is raised.
    • A process can choose to block certain signals.
    • The TPF 4.1 system automatically blocks a signal when that signal is being handled.
  • Enhancements to support password and group files as actual files in the file system.

See Open Systems Infrastructure (APAR PJ26188) for more information about these components.

Pool Expansion (PXP) Support Currently, the TPF 4.1 system is limited to 64 K file pool directories. However, with the new pool expansion (PXP) support you can:
  • Expand your database capacity up to 65 536 file pool directories for each short-term pool section and 16 777 216 file pool directories for each subsystem
  • Exploit the file addressing range provided by file address reference format 4 (FARF4) and file address reference format 5 (FARF5)
  • Have a complex of 2 or more processors (one with PXP support and one without) coexist indefinitely
  • Take advantage of an increase in offline performance because DYOPM now runs above 16 MB.

See Pool Expansion (PXP) Support (APAR PJ17912) for more information about PXP support.

Recoup Follow-On Support

Recoup follow-on support includes the following:

  • Recoup core resident descriptor support, which enhances recoup processing performance because descriptors are loaded to and accessed from memory instead of files.
  • Recoup functional support console (FSC) support, which provides a recoup profile option that routes recoup status messages to the real-time database services (RDBS) console instead of the prime CRAS.
  • Recoup message parsing enhancements, which centralize ZRECP command parsing routines and allow extra spaces to be specified when a ZRECP command is entered with parameters.

Remote Procedure Call In the TPF 4.1 system you can run remote procedure call (RPC) servers through a partial port of the Distributed Computing Environment (DCE) RPC run-time library. RPC allows applications on one workstation to start functions that reside on and are run by another workstation. The RPC run-time library allows you to develop RPC server applications that can be accessed using Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) Internet protocols. The RPC library application programming interfaces (APIs) establish required client/server connections through the use of socket APIs.

Client applications that run on IBM or non-IBM DCE platforms are able to run RPC to a TPF server. All DCE services are available to client applications; however, the TPF 4.1 system supports only a subset of the DCE RPC services.

See Remote Procedure Call (APAR PJ26575) for more information about RPC.

Secure Sockets Layer (SSL) Support The SSL protocol, which was originally developed for Web browsers, is a set of rules governing authenticated and encrypted communication between Transmission Control Protocol/Internet Protocol (TCP/IP) clients and servers. SSL is widely used on the Internet by an increasing number of varied applications, especially for interactions that involve exchanging confidential information such as credit card numbers. SSL evolved into the Transport Layer Security (TLS) Version 1 standard.

SSL is positioned as a protocol layer between the TCP layer and the application to form a secure connection between clients and servers by providing privacy, integrity, and authentication. SSL support on the TPF 4.1 system, which is based on the OpenSSL Version 0.9.6 open source package, supports the following:

  • SSL version 2, SSL version 3, and TLS version 1.
  • Rivest-Shamir-Adelman (RSA) public key cryptography.
  • Rivest's Cipher (RC) 2, RC4, Data Encryption Standard (DES), and Triple-DES ciphers.
  • Message Digest Algorithm 5 (MD5) and Secure Hash Algorithm (SHA) digests.
  • Client and server authentication using digital certificates.
  • A single x509 certificate or chain of x509 certificates.
  • Use of any SSL toolkit to create public and private keys and certificates (including OpenSSL on another platform). You can then use File Transfer Protocol (FTP) to send the key and certificate files to the TPF 4.1 system.
  • Certificate revocation lists (CRLs).

See Secure Sockets Layer (SSL) Support (APAR PJ27863) for more information.
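
The following is a minimal sketch of an SSL client connection written against the OpenSSL-style API on which this support is based; the already-connected TCP socket descriptor passed in, and the choice of SSLv23_client_method, are assumptions for illustration.

    // Minimal sketch of sending data over an SSL connection (OpenSSL-style
    // API).  The TCP socket 'sock' is assumed to be connected already.
    #include <openssl/ssl.h>

    int sendSecurely(int sock, const char *data, int length)
    {
        SSL_library_init();                      // one-time initialization
        SSL_load_error_strings();

        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        if (ctx == NULL) {
            return -1;
        }
        SSL *ssl = SSL_new(ctx);
        if (ssl == NULL) {
            SSL_CTX_free(ctx);
            return -1;
        }

        SSL_set_fd(ssl, sock);                   // attach the TCP socket
        if (SSL_connect(ssl) != 1) {             // SSL handshake
            SSL_free(ssl);
            SSL_CTX_free(ctx);
            return -1;
        }

        int written = SSL_write(ssl, data, length);   // encrypted application data

        SSL_shutdown(ssl);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return written;
    }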

Shared PR/SM Partition Support The TPF 4.1 system now supports tightly coupled systems running in shared PR/SM partitions. This improves customer utilization of processor capacity. Previously, only uniprocessor systems were supported in shared PR/SM partitions; tightly coupled systems required dedicated partitions in a PR/SM environment.

See Shared PR/SM (APAR PJ17778) for more information.

Shared SSL Session Support

Shared SSL session support provides the following enhancements to SSL support:

  • Activate on receipt (AOR) capability for SSL through the SSL_aor function
  • Secure Web server support
  • Shared SSL sessions
  • SSL diagnostic tools.

See Shared SSL Session Support (APAR PJ28118) for more information.

Simple Network Management Protocol (SNMP) Agent Support SNMP is an industry-standard protocol that enables you to monitor and manage diverse and complex Transmission Control Protocol/Internet Protocol (TCP/IP) networks. SNMP is defined by a series of Request for Comments (RFC) documents that describe the flows and information that is communicated between the network management station and the different TCP/IP devices that are being managed. The SNMP architecture defines three entities:
  • SNMP agents, which are network devices such as hosts, gateways, routers, or servers that receive requests from SNMP managers to retrieve or change Management Information Base (MIB) variables. SNMP agents then respond to these requests.
  • An SNMP manager, which runs an application or suite of applications to manage and monitor TCP/IP networks.
  • The MIB, which contains data that provides information about the SNMP agent and the TCP/IP network to the SNMP manager.

The TPF 4.1 system provides agent support for SNMP Version 1 with a standard set of SNMP MIB variables (MIB-II). This allows an SNMP manager to monitor and manage the TPF 4.1 system as an SNMP agent. SNMP agent support provides the following:

  • A program interface to send enterprise-specific traps (unsolicited messages) to notify SNMP managers of significant system events
  • A user exit to provide security by validating SNMP requests
  • A user exit to retrieve your own enterprise-specific MIB variables.

See Simple Network Management Protocol Agent Support (APAR PJ27932) for more information.

SNA Resource Definition You are no longer required to define remote logical unit (LU) resources and adjacent link station (ALS) resources to the TPF 4.1 system using the offline ACF/SNA table generation (OSTG) program. You can use dynamic LU support to automatically create resource definitions for remote LU resources when they log on to applications in the TPF 4.1 system. If the TPF 4.1 system is running in TPF Advanced Peer-to-Peer Networking (TPF/APPN) mode, you can also use dynamic LU support to automatically create resource definitions for ALS resources when the ALS links are activated.

You can also use the ZNDYN ADD command online to create resource definitions for ALS, CDRM, CTC, and NCP resources.

See Dynamic LU Support (APAR PJ21044) for more information about dynamic LU support and the ZNDYN ADD command.

SNA Resource Names SNA network IDs and resource names must both begin with an uppercase letter (A-Z), @, #, or $. The remaining characters can be uppercase letters (A-Z), numbers (0-9), @, #, or $. See TPF ACF/SNA Network Generation for more information.
Tape Record Migration Tape record migration supports the use of additional record types for tape support. Ordinal-based processor unique fixed file record types are replaced with file address compute (FACE) program table processor unique fixed file record types. Using these additional record types removes some of the complexity of adding a processor to your complex. See Tape Record Migration (APAR PJ26577) for more information about the additional #TPLBL, #TDTDR, and #IBMMP4 fixed file record types.
Threads Precursor Threads precursor provides the following in the TPF 4.1 system:
  • An address space change.

    The TPF 4.1 system will allocate enough frames and entry control blocks (ECBs) to cover the total size of the ISO-C stack, the ECB heap, and the ECB private area of 1 MB.

  • The YIELDC macro.

    This macro is used to give up control of the processor and allow processing of other entries. The entry is placed on the specified processor list.

  • longjmp and setjmp enhancements.

    The longjmp and setjmp functions are not restricted to the same dynamic load module (DLM). Additional programming considerations were added.

See Threads Precursor (APAR PJ24530) for more information about Threads Precursor.
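
The following is a minimal sketch of the setjmp and longjmp pattern that the enhancement applies to; with APAR PJ24530 the two calls are no longer restricted to the same DLM. The error-recovery scenario shown is an assumption for illustration.

    // Minimal sketch of the setjmp/longjmp pattern.  The error-recovery
    // scenario is illustrative only.
    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf recoveryPoint;

    static void lowLevelRoutine(int input)
    {
        if (input < 0) {
            longjmp(recoveryPoint, 1);    // unwind back to the setjmp call
        }
        // normal processing
    }

    int process(int input)
    {
        if (setjmp(recoveryPoint) != 0) {
            printf("recovered from error\n");   // reached only through longjmp
            return -1;
        }
        lowLevelRoutine(input);
        return 0;
    }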

TPF Advanced Peer-to-Peer Networking (TPF/APPN) Support With TPF/APPN support, the TPF 4.1 system can connect to an Advanced Peer-to-Peer Networking (APPN) network as an end node (EN).

See TPF Advanced Peer-to-Peer Networking (TPF/APPN) Support (APAR PJ19949) for more information about TPF/APPN support.

TPF Application Requester Enhancements The TPF 4.1 system supports connections to distributed relational database architecture (DRDA) level-1 compliant platforms in addition to the IBM Multiple Virtual Storage (IBM MVS) system for applications using structured query language (SQL). These DRDA platforms include the IBM RISC System/6000 (RS/6000) and IBM Personal System/2 (PS/2) platforms. Run-time binding, dynamic SQL verbs, expanded (double-byte and mixed-byte) character representations, and expanded diagnostics are available as well.

See TPF Application Requester Enhancements (APAR PJ23931) for more information about TPF Application Requester Enhancements.

TPF Assembler Debugger for VisualAge Client TPF Assembler Debugger for VisualAge Client is a workstation development environment that is similar to TPF C Debugger for VisualAge Client with the following two main differences:
  • It debugs programs at the assembler level.
  • The disassembler does not require hooks to start tracing programs.

Like TPF C Debugger for VisualAge Client, this remote development environment offers easy-to-use tools that provide you with an effective means of increasing your programming productivity when developing applications for the TPF 4.1 system. For example, TPF Assembler Debugger for VisualAge Client provides a disassembled view or a corresponding listing view of the program that you are tracing.

Note:
TPF Assembler Debugger for VisualAge Client does not support 24-bit file resident or private programs.

The loaders enhancement for the TPF Assembler Debugger for VisualAge Client gives you the ability to load ADATA files used by the assembler debugger rather than using Trivial File Transfer Protocol (TFTP) to transfer ADATA files to the online TPF system. Loaders enhancement for the TPF Assembler Debugger for VisualAge Client provides the following benefits:

  • Eliminates the need to remember and specify the path and name of the ADATA file in the hierarchical file system (HFS). The debugger finds and uses the ADATA file loaded by the TPF loader.
  • E-type loader support for ADATA files allows the assembler debugger to automatically use the correct ADATA file for any version of a program.
  • Provides a foundation for changes to the assembler debugger that enable tracing in a multiple database function (MDBF) environment by loading ADATA files to a specific subsystem.

See Loaders Enhancement for the TPF Assembler Debugger for VisualAge Client (APAR PJ27422) for more information about loaders enhancement for the TPF Assembler Debugger for VisualAge Client.

TPF C Debugger for VisualAge Client TPF C Debugger for VisualAge Client, which is part of VisualAge TPF for Windows NT, is a workstation development environment that provides you, the C and C++ programmer, with an effective means of increasing your programming productivity when developing applications for the TPF 4.1 system. This remote development environment provides easy-to-use tools that enable you as a TPF developer to improve quality and productivity by writing, debugging, and analyzing the performance of your applications in a team environment.

See TPF C Debugger for VisualAge Client (APAR PJ25632) for more information about TPF C Debugger for VisualAge Client.

TPF C Debugger for VisualAge Client was enhanced by APAR PJ25982 to work with a new version of the VisualAge for TPF user interface code that resides on your workstation. To use APAR PJ25982, you must install VisualAge for TPF corrective service diskette (CSD) 14 before applying the APAR. VisualAge for TPF CSD 14 is compatible with the PUT 9 version of TPF C Debugger for VisualAge Client as well as this APAR.

TPF Collection Support

TPF collection support (TPFCS) is a database manager service that enables application programs running on TPF to create, modify, and access collections. Collections are abstract representations of data. TPFCS provides three collection lifetimes:

  • Persistent long-term
  • Persistent short-term
  • Temporary.

Collections are said to be persistent if they maintain their state beyond the life of the entry control block (ECB) that creates them. Temporary collections maintain their state and are accessible only for the life of the ECB that creates them.

TPFCS transparently integrates database functionality with the application program and eliminates the need for data translation routines.

See TPF Collection Support (APAR PJ25098) for more information about TPFCS.

APAR PJ25332 provides the following TPFCS enhancements:

  • Recoup has been made more robust:
    • Embedded file address support has been added to TPFCS recoup.
    • TPFCS recoup processing now uses an enhanced control mechanism with a user-defined number of entry control blocks (ECBs) activated. If an ECB is available for use by recoup, it will be used immediately without waiting for any other ECBs to end.
  • Performance enhancements have been added.
  • Record information for TPFCS can now be displayed.
  • Two new collection types have been added:
    • Sorted set
    • Key sorted bag.
  • New application programming interfaces (APIs) for processing binary large object (BLOB) collections have been added.

See TPF Collection Support Enhancements (APAR PJ25332) for more information about TPFCS enhancements.

APAR PJ26143 provides the following TPFCS enhancements:

  • New APIs and commands for adding, displaying, using, and removing alternate key paths for persistent keyed and persistent sorted collections have been added.
  • Support for the ZBROW PATH command, which displays the following:
    • The actual path information for keys and relative record numbers (RRNs)
    • The actual starting location of an array element or relative byte address (RBA).
  • The ZBROW DISPLAY command has been updated so the contents of a directory entry for a specific RRN can be displayed.
  • Support for reusing released long-term pool records has been added.
  • The maximum number of bytes that can be managed for a binary large object (BLOB) has been increased from 32 KB to 4 MB.

See TPF Collection Support Enhancements (APAR PJ26143) for more information about TPFCS enhancements.

TPF Collection Support - Continued APAR PJ26887 provides support for the ZBROW RECOUP command, which has been added to help manage recoup indexes.

See TPFCS Recoup Index Command Support (APAR PJ26887) for more information about the TPFCS recoup index command.

APAR PJ27380 provides the following TPFCS enhancements:

  • New APIs and commands for deleting, migrating, and re-creating data stores, and for retrieving and migrating collections have been added.
  • The ZBROW COLLECTION command now allows immediate processing of collections marked for deletion and also allows emptying of collections.

See Appendix A, PUT 2-15 Interface Changes by Authorized Program Analysis Report (APAR) for more information about APAR PJ27380 and see TPF Collection Support Enhancements (APAR PJ26143) for more information about other TPFCS enhancements.

APAR PJ28386 provides the following TPFCS enhancements:

  • The ZBROW ALTER command has been updated to allow you to:
    • Add new elements to a collection in the data store
    • Modify elements in a collection
    • Delete elements from a collection.
  • The ZBROW DISPLAY command has been updated to allow you to display an element of a collection based on the qualification of ZBROW.
  • The ZBROW QUALIFY command has been updated to allow you to:
    • Set additional parameters for subsequent ZBROW ALTER and ZBROW DISPLAY command requests
    • Reset the parameters of the ZBROW qualification.

See PUT 16 Interface Changes by Authorized Program Analysis Report (APAR) for more information about TPFCS enhancements and see TPF Operations for more information about the ZBROW commands.

TPF Data Event Control Block (DECB) Support Before TPF DECB support, the TPF 4.1 system restricted the number of entry control block (ECB) data levels (D0-DF) that were available for use to 16 (the number of data levels defined in the ECB). With TPF DECB support, that restriction has been removed. TPF DECB support also provides the following:
  • 8-byte file addressing in 4x4 format, which allows standard 4-byte file addresses (FARF3, FARF4, or FARF5) to be stored in an 8-byte field
  • New interfaces to allow TPF programs to access file records with a DECB instead of a data level in an ECB
  • New macros for managing DECBs
  • The ability for you to associate symbolic names with each DECB; this allows different components of a program to easily pass information in core blocks attached to a DECB.

See TPF Data Event Control Block Support (APAR PJ27393) for more information about TPF DECB support.

TPF Internet Server Support TPF Internet server support enables the TPF 4.1 system to run Internet servers, such as a Web server, by providing:
  • An Internet daemon that manages inbound Internet traffic for Internet servers on the TPF 4.1 system; Internet servers are referred to as Internet server applications in the TPF publications
  • A Trivial File Transfer Protocol (TFTP) server as a file transfer server to send and receive files, such as Web site contents
  • The ability to retrieve data from the TPF 4.1 system by starting TPF applications from the Internet
  • A process model to assist with the porting of Internet server applications that are compliant with Portable Operating System Interface for Computer Environments (POSIX) standards from other platforms such as UNIX.

See TPF Internet Server Support (APARs PJ25589 and PJ25703) for more information about TPF Internet server support.

TPF Internet Mail Server Support TPF Internet mail server support provides a set of servers that implement the standard Internet mail protocols on the TPF 4.1 system. Users, or mail clients, interact with the TPF Internet mail servers to send and retrieve Internet mail, also known as electronic mail (e-mail).

The TPF 4.1 system supports the following standard Internet protocols:

  • Simple Mail Transfer Protocol (SMTP)
  • Internet Message Access Protocol (IMAP) Version 4
  • Post Office Protocol (POP) Version 3.

See TPF Internet Mail Server Support (APARs PJ27784 and PJ27865) for more information.

TPF Internet mail server enhancements for PUT 15 improve the performance and functionality of TPF Internet mail server support as follows:

  • The number of I/O requests and the path length for processing each piece of mail were reduced significantly, improving the overall performance of the TPF Internet mail servers.
  • The SYSLOG parameter was added to the ZMAIL command to allow you to start or stop logging mail messages to the syslog daemon.
  • The mail function was expanded to allow you to access Internet mail through the use of file addresses on the TPF database. Previously, you could only access mail through the use of files on the TPF file system. These changes make it easier for you to access Internet mail with TPF applications that are written in assembler language.

See TPF Internet Mail Server Enhancements for PUT 15 (APAR PJ27966) for more information.

APAR PJ28396 continues to improve the performance of TPF Internet mail server support as follows:

  • Additional reductions in the number of I/O requests and the path length for processing each piece of mail were made to further improve the overall performance of the TPF Internet mail servers.
  • The MAX_HANGING_RECEIVE_MANAGERS parameter was added to the TPF configuration file /etc/tpf_mail.conf. This parameter reduces overhead by starting a specified number of permanent mail ECBs to accept mail items and put them on the delivery queue.

See PUT 16 Interface Changes by Authorized Program Analysis Report (APAR) for more information about APAR PJ28396. See TPF Transmission Control Protocol/Internet Protocol for more information about TPF Internet mail server support.

TPF MQSeries Clear Queue Support and Display Enhancements

TPF MQSeries clear queue support and display enhancements includes the following:

  • The ZMQSC CLEAR QL command was created to allow you to remove all messages from a local normal queue.
  • The ZMQSC DISPLAY command was updated to allow you to display a channel or queue that has certain characteristics.

See TPF MQSeries Clear Queue Support and Display Enhancements (APAR PJ28339) for more information.

TPF MQSeries Local Queue Manager Support TPF MQSeries local queue manager support implements a local queue manager on the TPF 4.1 system. A message queue interface (MQI) client was implemented previously to allow applications to interact with queue managers that are remote to the TPF 4.1 system. See Message Queue Interface (MQI) Client (APAR PJ22434) for more information about the MQI client. With TPF MQSeries local queue manager support, TPF applications can now interact with the local queue manager or with the remote queue manager server.

See TPF MQSeries Local Queue Manager Support (APAR PJ25780) for more information.

Additional TPF MQSeries local queue manager support enhancements include the following:

  • Support for alias queues was added.
  • The trace function was enhanced.
  • Additional MQSeries application programming interface (API) functions were added.
  • You can now disable and enable TPF MQSeries receiver channels.

See TPF MQSeries Local Queue Manager Support Enhancements (APAR PJ26156) for more information.

Turbo enhancements for TPF support of MQSeries local queue manager include the following:

  • A TPF resource manager to control MQSeries application programming interfaces (APIs) was created.
  • Processor unique queues now reside in memory and provide enhanced performance.

See Turbo Enhancements for TPF Support of MQSeries Local Queue Manager (APAR PJ27023 and APAR PJ27050) for more information.

TPF MQSeries enhancements include the following:

  • TPF MQSeries client Transmission Control Protocol/Internet Protocol (TCP/IP) support (APAR PJ27230)
  • TPF MQSeries user exit support (APAR PJ27231)
  • TPF MQSeries slow queue sweeper and move support (APARs PJ27351 and PJ27431).

See TPF MQSeries Enhancements (APARs PJ27230, PJ27231, PJ27351, and PJ27431) for more information.

TPF MQSeries Server Support

TPF MQSeries server support provides the following:

  • TPF MQSeries local queue manager server support
  • TPF MQSeries database rebuild support.

TPF MQSeries local queue manager server support allows an MQSeries client to connect to a TPF 4.1 system by using a server connection channel. MQSeries clients can now pass MQSeries application programming interfaces (APIs) to the TPF 4.1 system, which can act as the server, run the API, and return the results to the client. TPF MQSeries user exits and APIs have been added and existing APIs have been enhanced as part of this support.

TPF MQSeries database rebuild support provides the ZMQSC DBREBUILD command, which allows you to rebuild TPF MQSeries definitions in the current file address reference format (FARF) on the TPF system without losing those definitions and without losing any messages that are currently on queue.

See TPF MQSeries Server Support (APAR PJ28435) for more information.

TPF Performance Execution Trace Analyzer for VisualAge Client TPF Performance Execution Trace Analyzer for VisualAge Client is a workstation development environment that provides you, the C and C++ programmer, with a means of analyzing performance data for your TPF programs. Performance statistics are available as a detailed table by class, a dynamic call graph, a call nesting structure, and a time line.
TPF Support for VisualAge Client TPF Support for VisualAge Client includes the following three small programming enhancements (SPEs) for program update tape (PUT) 11:

Debug on system error (APAR PJ26600) helps you to recover after getting a system error while running an application program. When you see a problem in the program, debug on system error gives you the opportunity to correct the error and to continue testing the program.

The universal data display (APAR PJ26581) provides a single interface to display entry control block (ECB) data for the TPF Assembler Debugger for VisualAge Client or TPF C Debugger for VisualAge Client. The ECB data is more comprehensive and more readable with the universal data display (UDD) than with displays that were previously available; the UDD provides for views of the ECB work areas, levels, and other selected fields. The UDD shows you a seamless view of the ECB no matter which debugger is active.

Trace on production (APAR PJ26666) offers enhancements to the ZDBUG command, including the ability to disable the TPF Assembler Debugger for VisualAge Client or TPF C Debugger for VisualAge Client. You can also display trace registration information for one or both of the trace-by-program and trace-by-terminal tables whether the entry status is active or nonactive. You can also clear the trace entry for a specified Internet Protocol (IP) address.

See TPF Support for VisualAge Client (APARs PJ26600, PJ26581, and PJ26666) for more information.

Enhancements to TPF Support for VisualAge Client include the following items:

  • Macro breakpoints entered in either the TPF C Debugger for VisualAge Client or the TPF Assembler Debugger for VisualAge Client are in effect for both C and assembler programs.
  • Deferred line breakpoints for the TPF Assembler Debugger for VisualAge Client are saved between debugging sessions so that you need to set them once only and they are available during any following debugging sessions.
  • Enter/back support for the TPF Performance Execution Trace Analyzer for VisualAge Client means that when a transaction is run through the performance analyzer, the analyzer will record information for assembler segments as well as C and C++ programs.
  • PRINT NOGEN support addresses the ability to generate assembler debugger ADATA files, which allow macro expansions to be suppressed.

See Enhancements to TPF Support for VisualAge Client (APAR PJ27383) for more information.

TPF Transaction Services Transaction Processing Facility (TPF) transaction services includes support for a transaction manager (TM), resource managers (RMs), log manager, and recovery log to ensure a consistent view of the database. Applications call a set of assembler macros or C functions to begin, commit, roll back, suspend, or resume a transaction in a commit scope.

See TPF Transaction Services (APAR PJ25094) for more information about TPF transaction services.

Additional TPF transaction services enhancements include the ability for you to define the location of the recovery log to an application subsystem rather than to the basic subsystem (BSS).

Transmission Control Protocol/Internet Protocol (TCP/IP) With TCP/IP support, socket application programs on the TPF 4.1 system can use the socket application programming interface (API) to communicate with remote socket applications. The TPF 4.1 system can connect to a TCP/IP network in the following ways:
  • TCP/IP offload support
  • TCP/IP native stack support.

Functions common to TCP/IP offload support and TCP/IP native stack support include:

  • A socket sweeper program, which closes inactive sockets after a specified period of time
  • Trace tools, such as the PING and TRACEROUTE programs, which are activated by the ZDTCP command
  • User exits to allow you to perform functions such as activating your socket applications and deciding which remote connection requests will be accepted
  • Subsystem support, which allows socket applications running in all subsystems to issue socket API function calls.

TCP/IP native stack support also provides the following additional functions:

  • Improved network throughput
  • Support for more socket options
  • Support for a new TPF-unique socket API function, activate_on_accept
  • Support for a full function IP trace facility.
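
The following is a minimal sketch of a TCP client written against the socket API described above; the host name and port number are assumptions for illustration, and TPF-unique functions such as activate_on_accept are not shown.

    // Minimal sketch of a TCP client using the socket API.  The host name
    // and port number are illustrative only.
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int sendRequest(const char *request)
    {
        struct hostent *host = gethostbyname("remote.example.com");  // resolver
        if (host == NULL) {
            return -1;
        }

        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) {
            return -1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(8000);                                // assumed port
        memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);

        if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(sock);
            return -1;
        }

        send(sock, request, strlen(request), 0);
        close(sock);
        return 0;
    }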

TCP/IP PUT 12 enhancements include IP routing tables (APAR PJ26890) for TPF TCP client applications and TCP/IP network tools (APAR PJ26904):

  • IP routing tables benefits TCP/IP native stack support by enabling TPF client socket applications to connect to remote servers without having to explicitly bind to a specific TPF local IP address or rely on a single default local IP address when TPF is connected to multiple networks. The IP address of the remote server can be associated with any local IP address defined on a given TPF processor. By removing the dependency on a single default local IP address, TPF processors have greater flexibility in connecting to multiple IP networks. The ZTRTE command manages the IP routing tables.
  • TCP/IP network tools benefits TPF TCP/IP native stack support by providing the ZSOCK command to display socket control block information or a summary report of sockets, or to selectively deactivate a specific socket or subset of sockets.

See Transmission Control Protocol/Internet Protocol (TCP/IP) PUT 12 Enhancements (APARs PJ26890 and PJ26904) for more information about TCP/IP PUT 12 enhancements.

Transmission Control Protocol/Internet Protocol (TCP/IP) - Continued

Open Systems Adapter (OSA)-Express support is now enabled on the TPF 4.1 system. An Open Systems Adapter is integrated hardware (the OSA-Express card) that combines the functions of an IBM System/390 (S/390) input/output (I/O) channel with the functions of a network port to provide direct connectivity between IBM S/390 applications and remote Transmission Control Protocol/Internet Protocol (TCP/IP) applications on the attached networks. OSA-Express is the third generation of OSA and provides the following enhancements:

  • You can dynamically configure an OSA-Express card by using the ZOSAE command to manage OSA-Express connections.
  • The queued direct I/O (QDIO) protocol is used to communicate between the TPF 4.1 system and an OSA-Express card by sharing memory and eliminating the need for real I/O operations (channel programs) for data transfer between them. The load on your I/O processor is reduced, path lengths in the TPF 4.1 system are reduced, and throughput is increased.
  • OSA-Express support enables the TPF 4.1 system to connect to high bandwidth TCP/IP networks such as the Gigabit Ethernet (GbE or GENET) network.
  • OSA-Express support provides virtual IP address (VIPA) support to eliminate single points of failure in a TCP/IP network.
  • Movable virtual IP address (VIPA) support provides the ability to balance TCP/IP workloads across processors in the same loosely coupled complex by using the ZVIPA command.

See OSA-Express Support (APAR PJ27333) for more information about OSA-Express support.

Domain Name System (DNS) support provides the following:

  • Allows the TPF 4.1 system to process incoming DNS requests, enabling load balancing of the TCP/IP connections in a loosely coupled complex.
  • Allows you to customize the load balancing algorithms by using the UDNS user exit.
  • Enhances DNS client performance of your TPF 4.1 system by providing a cache to store information received from remote DNS servers (a resolver sketch follows this list).
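
The following resolver sketch (C++, illustration only) assumes that the standard gethostbyname interface is available to your socket applications; with DNS support installed, repeated lookups such as this one can be satisfied from the cache rather than by querying a remote DNS server each time.

    /* Host name resolution sketch (illustration only).                    */
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdio.h>

    void show_host_address(const char *name)        /* hypothetical helper */
    {
        struct hostent *he = gethostbyname(name);
        if (he == 0 || he->h_addr_list[0] == 0)
        {
            printf("%s could not be resolved\n", name);
            return;
        }

        struct in_addr addr;
        memcpy(&addr, he->h_addr_list[0], sizeof(addr));
        printf("%s resolves to %s\n", name, inet_ntoa(addr));
    }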

See Domain Name System (DNS) Support (APAR PJ27268) for more information about Domain Name System (DNS) support.

Transmission Control Protocol/Internet Protocol (TCP/IP) - Continued

TCP/IP enhancements for PUT 14 increase the usability and effectiveness of TCP/IP native stack support and OSA-Express support with the following APARs:

  • Display enhancements (APAR PJ27451), which add the following:
    • The ZTTCP CLEAR command, to clear TCP/IP statistics.
    • The output of the ZTTCP DISPLAY command with the STATS parameter specified now includes the number of TCP sockets that have been cleaned up because of retransmit timeouts.
    • The output of the ZTTCP DISPLAY command with the ALL parameter specified now includes the symbolic device addresses (SDAs) for Open Systems Adapter (OSA)-Express connections.
    • The output of the ZVIPA command with the IP parameter specified now includes which CPUs have the specified movable virtual IP address (VIPA) defined to them.
  • Movable VIPA program interface (APAR PJ27491 and APAR PJ27714), which provides additional ways to move a VIPA from one processor to another under the control of application programs.
  • Individual IP trace support (APAR PJ27617), which provides individual IP trace tables for tracing packets to and from specific remote nodes and also provides the option to turn off the tracing of Routing Information Protocol (RIP) messages.
  • Fast Ethernet OSA-Express support (APAR PJ27625), which allows the TPF 4.1 system to connect to Fast Ethernet (FENET) OSA-Express adapters.
  • Diagnostic tools (APAR PJ27650), which provide the operator with the ability to determine whether a given socket or OSA-Express connection is hung.
  • TCP/IP activate on receipt load balancing (APAR PJ27679), which allows applications that use the activate_on_receipt or activate_on_receipt_with_length function to be load balanced and run on all I-streams.

See TCP/IP Enhancements for PUT 14 (APARs PJ27451, PJ27491, PJ27714, PJ27617, PJ27625, PJ27650, PJ27679, and PJ27859) for more information about TCP/IP enhancements for PUT 14.

Transmission Control Protocol/Internet Protocol (TCP/IP) - Continued

TCP/IP enhancements for PUT 15 increase the usability and functionality of TCP/IP native stack support, OSA-Express support, and DNS support with the following APARs:

  • Internet daemon listen backlog support (APAR PJ28026) allows you to specify a listen backlog value for TCP Internet server applications.
  • An operator interface to resolve host names and IP addresses (APAR PJ28029) allows you to resolve a host name to an IP address or an IP address to a host name with the ZDTCP command.
  • IP packet network prioritization (APAR PJ28034) allows you to define a type of service (TOS) value for the network priority of outbound TPF IP packets.
  • OSA-Express polling enhancements (APAR PJ28064) improve the efficiency of the Open Systems Adapter (OSA) polling process to increase network throughput across OSA-Express connections. These enhancements also allow you to tune the number of OSA read buffers to maximize the message processing capacity of each OSA-Express connection.
  • The OSA-Express gateway selection enhancement (APAR PJ28067) improves the way an OSA-Express gateway is selected during IP routing table processing.
  • DNS server wildcard support (APAR PJ28093) allows you to specify a wildcard character at the beginning of a host name in the /etc/host.txt file that is used to build the TPF host name table. Using a wildcard character allows you to define multiple host names with a single entry in the /etc/host.txt file.
  • Greater than 32 KB socket send support (APAR PJ28087) allows you to send up to 1 GB of data on a TCP socket with each send API call; previously, you could send a maximum of 32 KB of data. (See the sketch after this list.)
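
Most of the PUT 15 items are enabled through TPF configuration and commands. The following sketch (C++, illustration only, with BSD-style header names that may differ on your system) simply shows the socket-level calls that two of them relate to: a single send call larger than 32 KB (APAR PJ28087) and an explicit listen backlog value, which for applications run under the Internet daemon is presumably specified through the daemon configuration rather than coded directly (APAR PJ28026).

    /* PUT 15 socket-level illustration (sketch only; BSD-style headers assumed). */
    #include <sys/socket.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: send one large buffer on a connected TCP socket.     */
    long send_large_reply(int sock)
    {
        const long REPLY_SIZE = 1048576L;          /* 1 MB, well above the former */
        char *reply = (char *)malloc(REPLY_SIZE);  /* 32 KB per-send limit        */
        if (reply == 0)
            return -1;
        memset(reply, 0, REPLY_SIZE);

        /* With APAR PJ28087 applied, a single send call can move up to 1 GB.    */
        long sent = send(sock, reply, REPLY_SIZE, 0);
        free(reply);
        return sent;
    }

    /* Hypothetical helper: listener with an explicit backlog value.             */
    int listen_with_backlog(int lsock, int backlog)
    {
        return listen(lsock, backlog);
    }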

See TCP/IP Enhancements for PUT 15 (APARs PJ28026, PJ28029, PJ28034, PJ28064, PJ28067, PJ28093, and PJ28087) for more information about TCP/IP enhancements for PUT 15.

Transmission Control Protocol/Internet Protocol (TCP/IP) - Continued

TCP/IP enhancements for PUT 16 increase the usability and functionality of TCP/IP native stack support and Simple Network Management Protocol (SNMP) agent support with the following APARs:

  • SNMP MIB display support (APAR PJ28168) allows you to use the ZSNMP command to display Management Information Base (MIB) variables from the TPF 4.1 system. You can also save the display information to a file.
  • TCP/IP network services database support (APAR PJ28195) allows you to:
    • Define TCP/IP server applications so that you can use the getservbyname socket API to retrieve the port number for an application and the getservbyport socket API to retrieve the name of an application (see the lookup sketch after this list).
    • Define a quality of service (QoS) differentiated services codepoint value for each application.
    • Identify the applications for which you want to collect data, such as message, byte, and packet counts.
  • TCP/IP packet filtering firewall support (APAR PJ28213) provides added security for your Internet server applications by allowing you to define a set of rules to filter inbound packets destined for TPF applications. You can also use the IP trace facility to identify packets that violate the packet filtering rules or cause other exception conditions.
  • Fast TCP retransmit support (APAR PJ28344) improves TCP/IP performance by detecting lost messages in the network faster.
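
The following lookup sketch (C++, illustration only) uses the getservbyname and getservbyport socket APIs named above; the service name shown is a hypothetical entry that you would have defined in the TCP/IP network services database.

    /* Network services database lookup sketch (illustration only).        */
    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdio.h>

    void show_service_entries(void)
    {
        /* Look up the port number for a server application by name.       */
        /* "mysrvr" is a hypothetical entry in the services database.      */
        struct servent *se = getservbyname("mysrvr", "tcp");
        if (se != 0)
            printf("mysrvr listens on port %d\n", ntohs(se->s_port));

        /* Look up the application name for a well-known port number       */
        /* (assuming that port is defined in the database).                */
        struct servent *byport = getservbyport(htons(21), "tcp");
        if (byport != 0)
            printf("port 21 is %s\n", byport->s_name);
    }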

See TCP/IP Enhancements for PUT 16 (APARs PJ28168, PJ28195, PJ28213, and PJ28344) for more information about TCP/IP enhancements for PUT 16.

TCP/IP Support for the TPF Application Requester The TPF Application Requester (TPFAR) feature has been enhanced to support connectivity by using Transmission Control Protocol/Internet Protocol (TCP/IP). This adds an additional level of interoperability with relational databases that use Distributed Relational Database Architecture (DRDA) level 3. Data can now be shared between database servers that are compliant with DRDA level 3 and a TPF application that uses the TPFAR feature. The communication manager providing TCP/IP network protocol support (CMNTCPIP) and the security manager (SECMGR) are supported at DRDA level 5. No other features of DRDA level 3 have been added.

An existing TPFAR application continues to run without the need to recompile, reassemble, or reload it. In addition, application programs that currently use the TPFAR feature can take advantage of the new functions without being recompiled; instead, you reconfigure the internal Structured Query Language (SQL) database management system directory (SDD) by using the ZSQLD command to specify connection information.

Hotcon support now includes TCP/IP socket connections; previously, only hot conversations for LU 6.2 were supported. The TPF socket sweeper is disabled for connections while they are in the hotcon table (HCT).

Support is provided for both offload and native stack devices.

See Transmission Control Protocol/Internet Protocol Support for the TPF Application Requester (APAR PJ27079) for more information about TCP/IP support for the TPF Application Requester.

Unlimited Pool Segment Support Unlimited pool segment support enhances the recoup, pool directory update (PDU), pool generation, pool reallocation, and pool deactivation utilities in a TPF 4.1 system environment by doing the following:
  • Eliminating pool segment restrictions
  • Increasing performance
  • Simplifying pool generation and reallocation procedures.
Virtual File Access (VFA) Synchronization Virtual file access (VFA) is a storage management facility that dynamically allocates frequently referenced records to main storage. Adding VFA synchronization to your TPF 4.1 system enhances current VFA support by providing VFA synchronization candidacy support for fixed file records and pool records that are synchronized across processors.

Before VFA synchronization:

  • Very few records (except for read-only data records and processor-unique records) were good candidates for VFA in a loosely coupled environment.
  • Synchronization of records did not exist. If a record located in the VFA area of several processors was updated on one processor, the other processors referred to old data until a FIND and HOLD macro was issued. Although you may find this process acceptable for records that are seldom updated but frequently referenced (such as credit verification files that are updated only once a week), you may not find it acceptable for records that are accessed and updated frequently, such as fare data in an airline application program.

With VFA synchronization:

  • There is now an effective way in a loosely coupled environment to support VFA synchronization candidacy for records that are updated by application programs during normal system operation. VFA synchronization synchronizes updates to frequently referenced records in VFA across all processors in a complex, which allows a wider range of data, such as fare data in an airline application, to be accurately and quickly accessible to application programs on all processors in a loosely coupled environment.

    Synchronization across processors occurs as records are modified: each processor that references a modified record is notified to refresh its copy so that the record is always current. The synchronization is done by using the locking capabilities of the IBM 3990 Model 3 (or later models) with the multi-path lock facility (MPLF) installed.

  • Overall TPF 4.1 system performance is improved by allowing increased use of VFA. This lowers the number of physical input/output (I/O) operations performed from the TPF 4.1 system to attached DASD control units and improves the effectiveness of data access and communication between application programs, which results in quick response times even during periods of high system activity.

See Virtual File Access (VFA) Synchronization (APAR PJ25094) for more information about VFA synchronization.

Virtual Storage Access Method (VSAM) Database Support Virtual storage access method (VSAM) database support for the TPF 4.1 system permits you to access a VSAM database from an IBM multiple virtual storage (MVS) system in read-only mode by using TPF general data set (GDS) support. This allows TPF applications to access VSAM data sets.

See Virtual Storage Access Method (VSAM) Database Support (APAR PJ26150) for more information about VSAM database support.

XML Parser The XML parser allows you to read (parse) Extensible Markup Language (XML) data on the TPF 4.1 system. The XML Parser for C++ (XML4C) Version 3.1.2 was ported to the TPF 4.1 system. This parser is XML Version 1.0 compliant and allows you to do the following (a parsing sketch follows this list):
  • Parse XML documents using the Document Object Model (DOM) Version 1.0 specification
  • Parse XML documents using the Simple API for XML (SAX) Version 1.0 specification
  • Parse XML documents with or without validation against a specified Document Type Definition (DTD).
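
The following DOM sketch (C++, illustration only) shows a validating parse. The class and header names (XMLPlatformUtils, DOMParser, DOM_Document) are assumptions based on the XML4C interfaces of this era; check them, and the recommended error handling, against the parser documentation shipped with the APAR.

    /* DOM parse sketch (illustration only; class and header names assumed). */
    #include <util/PlatformUtils.hpp>
    #include <parsers/DOMParser.hpp>
    #include <dom/DOM.hpp>

    int parse_document(const char *xmlFile)         /* hypothetical helper */
    {
        try
        {
            XMLPlatformUtils::Initialize();         /* one-time parser setup      */

            DOMParser parser;
            parser.setDoValidation(true);           /* validate against the DTD   */
            parser.parse(xmlFile);                  /* build the DOM tree         */

            DOM_Document doc  = parser.getDocument();
            DOM_Element  root = doc.getDocumentElement();
            /* ... walk the tree from root with getFirstChild(), and so on ...    */
        }
        catch (...)                                 /* parser or validation error */
        {
            return -1;
        }
        return 0;
    }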

See XML Parser (APAR PJ27634) for more information about the XML parser.

XML4C Parser 3.5.1 XML4C parser 3.5.1 allows your applications to read (parse) and write Extensible Markup Language (XML) data on the TPF 4.1 system. XML Parser for C++ (XML4C) Version 3.5.1 was ported to the TPF 4.1 system, is XML Version 1.0 compliant, and allows TPF 4.1 applications written in the C++ language to do the following (a SAX sketch follows this list):
  • Parse XML documents using the Document Object Model (DOM) Level 1.0 or 2.0 specification. You can also parse XML documents using the experimental IDOM API, but this is not formally supported by the XML4C parser and, therefore, not formally supported on the TPF 4.1 system.
  • Parse XML documents using the Simple API for XML (SAX) Version 1.0 or 2.0 specification.
  • Parse XML documents with or without validation against a specified Document Type Definition (DTD).
  • Parse XML documents with or without validation against a document written in the XML Schema language.
    Note:
    XML Schema support is experimental and only includes a subset of the W3C Schema language.
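
For event-driven parsing, the following SAX sketch (C++, illustration only) counts the elements in a document by using the SAX Version 1.0 style interface. As with the DOM sketch earlier, the class names (SAXParser, HandlerBase, AttributeList) and the startElement signature are assumptions to be verified against the parser documentation for your level of XML4C.

    /* SAX parse sketch (illustration only; class and header names assumed). */
    #include <util/PlatformUtils.hpp>
    #include <parsers/SAXParser.hpp>
    #include <sax/HandlerBase.hpp>
    #include <stdio.h>

    /* Hypothetical handler that counts the elements seen in the document.  */
    class CountingHandler : public HandlerBase
    {
    public:
        CountingHandler() : count(0) {}
        void startElement(const XMLCh *const, AttributeList &)
        {
            ++count;                                /* one callback per element  */
        }
        int count;
    };

    int count_elements(const char *xmlFile)         /* hypothetical helper */
    {
        try
        {
            XMLPlatformUtils::Initialize();

            CountingHandler handler;
            SAXParser parser;
            parser.setDocumentHandler(&handler);
            parser.parse(xmlFile);                  /* callbacks fire during parse */

            printf("%d elements\n", handler.count);
        }
        catch (...)                                 /* parser error               */
        {
            return -1;
        }
        return 0;
    }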

See XML4C Parser 3.5.1 (APAR PJ28176) for more information about XML4C parser 3.5.1.

8-Byte File Address Support FARF6 exploits 7 of the 8 bytes in the file address field, which expands addressing capacity to a maximum of 64 petabytes (PB); 64 PB equals 72 057 594 037 927 936 records, or 2^56 (a short arithmetic check follows the list below). 8-byte file address support includes the following:

  • Two modes of 8-byte file addressing: 4x4 format and file address reference format 6 (FARF6). FARF6 addresses can be used when the TPF 4.1 system is in either stage FARF3/4 or stage FARF4/5. The FARF6 address format has a spare byte (byte 0), which is reserved for use by IBM and must be 0, and a fixed 2-byte universal format type (UFT) in bytes 1 and 2.
  • An 8-byte standard header (c$std8.h).
  • Updated application programming interfaces (APIs) and macros that handle 8-byte file addresses.
  • The definition of a new 4-K duplicated long-term FARF6 (4D6) pool type.
  • The increase of pool ordinal numbers (PSONs) and counts of available pools to 8 bytes.
  • Changes to commands that handle file addresses and record type ordinals.
  • Updates to recoup for the processing of either 4- or 8-byte file addresses.
  • Updates to TPF collection support (TPFCS) so 8-byte file addresses can be used wherever pool addresses are stored.
  • Changes to the FACE table generator (FCTBG).
  • An optional input card (Path card) can now be included in the load deck portion of the offline loader job control language (JCL) that is used to run ALDR and TLDR. This card specifies the hierarchical file system (HFS) location of the FACE table (FCTB) in program object format. In addition, changes have been made to the Load FCTB card to specify the HFS location of the FCTB in program object format.
  • Changes to offline procedures, including new parameters on the RAMFIL and UFTFTI SIP macros that allow you to define FARF6 file addresses.
  • Changes to fixed file records.
  • Updates to database reorganization (DBR).
  • Updates to continuous data collection (CDC) to add additional columns for 4D6 pools.
  • Updates to exception recording and logging.
  • Support for test tools.
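
As a quick check of the addressing arithmetic described at the start of this item, the following sketch (C++, illustration only) computes the number of addresses that 7 bytes (56 bits) can express and relates it to the 64 PB figure.

    /* FARF6 addressing arithmetic check (illustration only).              */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long addresses = 1ULL << 56;  /* 7 bytes = 56 bits        */
        unsigned long long peta      = 1ULL << 50;  /* one "peta" unit, 2^50    */

        printf("2^56 = %llu records\n", addresses); /* 72 057 594 037 927 936   */
        printf("which is %llu PB of records\n", addresses / peta);   /* 64      */
        return 0;
    }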

See 8-Byte File Address Support (APAR PJ28097) for more information about 8-byte file address support.

16-Way Tightly Coupled Multiprocessor To fully exploit the capacity of 10-way tightly coupled processors as well as future processors, the TPF 4.1 system has been modified to accommodate processors with as many as 16 central processing units (CPUs), or I-streams. The previous restriction of 8 I-streams has been removed. In addition, to support the increase in I-streams, the FACE table (FCTB) has been modified to reduce its size and conserve space.

See 16-Way Tightly Coupled Multiprocessor (APAR PJ26146) for more information about 16-way tightly coupled multiprocessor.

32-Way Loosely Coupled Pool Support Currently, the TPF 4.1 system is limited to 8-way loosely coupled processors. 32-way loosely coupled pool support is another step toward having 32-way loosely coupled processors and provides the following:
  • Pool data structure enhancements:
    • Expands processor unique data fields
    • Reserves additional space for future pool sections
    • Moves keypoint data to new fixed file records
    • Converts the short-term processor control record (STPCR) fixed file records from ordinal-based processor unique allocations to file address compute (FACE) program processor unique fixed file record types.
  • Extensions to pool data structure access functions and user exits.
  • Coexistence of processors running 32-way loosely coupled pool support and pool expansion (PXP).

The basic migration approach used is an extension of the techniques developed by PXP support in PUT 2.

Note:
32-way loosely coupled pool support for PUT 14 does not remove the constraint of a maximum of 8 loosely coupled processors. Additional functions are required to complete support for 32-way loosely coupled processors.

See 32-Way Loosely Coupled Pool Support (APAR PJ27686) for more information.

32-Way Loosely Coupled Processor Support 32-way loosely coupled processor support removes the final restrictions that limit TPF 4.1 to 8-way loosely coupled processors. With 32-way loosely coupled processor support, a TPF 4.1 system can support as many as 32 processors in a loosely coupled configuration. 32-way loosely coupled processor support provides the following:
  • Multi-Processor Interconnect Facility (MPIF) enhancements:
    • #HDREC record type to expand the number of records and fields that contain processor information
    • #PDREU record type to expand path information for additional processors
    • ZMPIF PDR command ALL parameter to initialize all processors.
  • The Internet daemon configuration file (IDCF) has been expanded with the #IDCF1 record type to support additional processors.
  • The CLAW device table (CDT) and the TCP/IP configuration table (ITCPC) have been moved to the processor unique #IBMMP4 fixed file record type to support additional processors.
  • Routing control application table initialization (RCIT) record has been expanded for 32 processors.
  • Node control block (NCB) has been extended for 32 processors.
  • SNA dynamic resource definition processor masks have been extended for 32 processors.
  • Keypoint I (CTKI) subsystem state table has been moved to the #CN1ST record type to support additional processors.
  • Keypoint C (CTKC) table of TPF terminals has been expanded to allow additional entries for functional support consoles and alternate consoles.
  • Super global storage allocation (GOA) records have been extended to support as many as 32 processors with 16 I-streams each.
  • General File (GF) and General Data Set (GDS) control structures have been moved to #IBMMP4 and #DSCRU processor unique fixed file record types.
  • Commit and rollback has been extended by increasing the number of #IBMM4 fixed file records for the control table (CRTB) checkpoint area and the number of log fixed file records to 32 (#RLOG1 - #RLOG32).
  • Interprocessor Communications (IPC) has been changed from using 8-bit masks to represent destination processors to using lists of processors.
  • Keypoint accessing has been extended for 32 processors with the following:
    • #CTKX record type to support the additional processors
    • The keypoint status table has been reformatted so that additional processors can be supported
    • #KFBX0 - nnn fixed file records with a keypoint pointer record to support additional keypoint extents and copies for additional processors.
  • Recoup has increased the number of FC33 records, added the @@32BUSED field, and replaced #SONRPE processor unique fixed file records with #SONRPE0 - #SONRPE7 processor shared records.

See 32-Way Loosely Coupled Processor Support (APAR PJ27785) for more information.

3590 Support 3590 support exploits the functions provided by the IBM 3590 control unit and the IBM 3590 device. IBM 3590 control units provide the following performance, capacity, and error rate improvements:
  • Data rate of 9 MB per second (3 times the data rate of a 3490E device)
  • Cartridge capacity of 10-40 GB (25 times the capacity of a 3490E cartridge)
  • 33 percent improved data compaction algorithm
  • Improved error rates because of improved media, recording techniques, error detection, and error correction methods.

See 3590 Support (APAR PJ24563) for more information about 3590 support.