Before this release, determining the source of a storage overlay error was difficult because the program that detected the error could be totally unrelated to the program that caused it. Furthermore, certain TPF system services allowed an E-type program to do things that were difficult to detect. For example, the MONTC macro (get supervisor state) allowed an E-type program to change anything.
Before this release, E-type programs could hide main storage blocks, pass them to other ECBs, chain them to globals, and so on.
Beginning with this release, the TPF system provides improved program isolation and entry protection with two hardware facilities: the dynamic address translation (DAT) facility and low address protection.
The primary function of the DAT facility is to provide a virtual storage environment for the processing program. The DAT facility works through a set of tables that, in addition to defining the virtual-to-real storage mapping, can define the areas of storage an entry is allowed to address or modify. When used correctly, DAT hardware can detect a program that stores into protected storage at the moment the store is attempted. The TPF 4.1 system uses the DAT facility to provide each ECB with private areas that are now more difficult for other ECBs, or for the TPF system operating on behalf of other ECBs, to corrupt.
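To make the idea concrete, the following C sketch models a DAT-style lookup: each table entry carries the virtual-to-real mapping plus a valid bit and a write-permission bit, and a store is checked at the moment it is attempted. The table layout, entry names, and sizes are illustrative assumptions, not the real ESA/370 table formats or TPF code.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 16u

/* One translation-table entry: where the virtual page lives in real
   storage, plus the flags checked on every reference. */
struct page_entry {
    uint32_t real_frame;    /* real frame number backing this page    */
    unsigned valid    : 1;  /* 0 = addressing exception on any access */
    unsigned writable : 1;  /* 0 = protection exception on a store    */
};

static struct page_entry page_table[NUM_PAGES];

/* Translate a virtual address for a store: the check happens at the
   time the store is attempted, which is what lets DAT catch the
   corrupter rather than its victim. */
static int translate_store(uint32_t vaddr, uint32_t *raddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    if (page >= NUM_PAGES || !page_table[page].valid) {
        printf("addressing exception at %#x (page not mapped)\n",
               (unsigned)vaddr);
        return -1;
    }
    if (!page_table[page].writable) {
        printf("protection exception at %#x (store into protected storage)\n",
               (unsigned)vaddr);
        return -1;
    }
    *raddr = page_table[page].real_frame * PAGE_SIZE + offset;
    return 0;
}

int main(void)
{
    uint32_t raddr;

    /* Page 0: mapped read-only, like a protected system area.   */
    page_table[0] = (struct page_entry){ .real_frame = 7, .valid = 1 };
    /* Page 1: an ECB's private, writable working storage.       */
    page_table[1] = (struct page_entry){ .real_frame = 3, .valid = 1,
                                         .writable = 1 };

    if (translate_store(0x1010, &raddr) == 0)   /* allowed        */
        printf("store to %#x -> real %#x\n", 0x1010u, (unsigned)raddr);
    translate_store(0x0040, &raddr);            /* protected page */
    translate_store(0x5000, &raddr);            /* unmapped page  */
    return 0;
}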
Low address protection is a hardware facility that protects the first 512 bytes of storage against alteration by a program, regardless of the storage key the program uses. The TPF 4.1 system uses this facility to protect the first 512 bytes of each I-stream's page 0 against corruption by either application programs or the TPF system. Not even the TPF control program can modify low storage while low address protection is active. Low address protection guards the part of the system that is most likely to be modified by a storage corrupter, and it has no performance impact.
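Low address protection itself is hardware and has no portable equivalent, but the following POSIX C sketch shows the same effect at page granularity: a region standing in for page 0 is write-protected with mprotect(), and a store into it is caught at the moment it is attempted. The mapping, handler, and messages are illustrative only; the real facility covers just the first 512 bytes and cannot be bypassed even by the control program.

#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <setjmp.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf recover;

static void on_fault(int sig)
{
    (void)sig;
    siglongjmp(recover, 1);   /* skip back past the faulting store */
}

int main(void)
{
    long psize = sysconf(_SC_PAGESIZE);

    /* Stand-in for page 0: a private page that we then write-protect. */
    char *low = mmap(NULL, (size_t)psize, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (low == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(low, "critical low-storage data");   /* still writable    */
    mprotect(low, (size_t)psize, PROT_READ);    /* now protected     */

    signal(SIGSEGV, on_fault);
    if (sigsetjmp(recover, 1) == 0) {
        low[0] = 'X';                           /* a corrupter's store */
        puts("store succeeded (unexpected)");
    } else {
        puts("store into protected storage caught as it was attempted");
    }
    printf("contents intact: %s\n", low);
    return 0;
}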
The TPF 4.1 system also provides two other system support enhancements: run-time program authorization and a block checking mode.
The TPF 4.1 system provides run-time authorization for a set of restricted functions and privileges.
You can specify which of these privileges you want to grant to programs, and you can add restrictions or privileges of your own, when you allocate the programs. See Allocating Programs, Transfer Vectors, and Pools and Adding Your Own Authorization Bits for more information.
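As a rough illustration of the run-time authorization idea, the C sketch below attaches a set of privilege bits to a program when it is allocated and checks them before a restricted service runs. The bit names, program names, and check logic are hypothetical; they are not TPF's allocator records or actual authorization bits.

#include <stdio.h>

/* Hypothetical privilege bits, for illustration only. */
enum priv {
    PRIV_SUPERVISOR = 1 << 0,   /* e.g. may enter supervisor state */
    PRIV_KEY0       = 1 << 1    /* e.g. may use storage key 0      */
};

struct program {
    const char *name;
    unsigned    auth;   /* granted when the program is allocated */
};

/* Run-time check: a restricted service refuses callers that were
   not granted the privilege at allocation time. */
static int authorized(const struct program *p, enum priv need)
{
    if ((p->auth & need) == need)
        return 1;
    printf("%s: not authorized -- system error\n", p->name);
    return 0;
}

int main(void)
{
    struct program trusted = { "PGM1", PRIV_SUPERVISOR | PRIV_KEY0 };
    struct program plain   = { "PGM2", 0 };

    if (authorized(&trusted, PRIV_SUPERVISOR))
        puts("PGM1: restricted service runs");
    authorized(&plain, PRIV_SUPERVISOR);   /* rejected at run time */
    return 0;
}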
See ESA/370 Principles of Operation for more information about the storage protection key.
In the TPF 4.1 system, each ECB virtual memory (EVM) has its own segment and page tables, which map the addresses of the ECB and any frames allocated to it to real storage. If an ECB refers to an address that is not valid, the ECB receives a system error and exits.
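The following C sketch illustrates the per-ECB tables at a toy scale: each ECB carries its own translation table, so the same EVM address resolves to different real storage in different ECBs, and a reference to an unmapped address ends the ECB. The structures and sizes are assumptions for illustration, not TPF's segment and page table formats.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 4u

struct ecb {
    const char *name;
    int32_t frame[NUM_PAGES];   /* real frame per EVM page; -1 = unmapped */
};

/* Resolve an EVM address through this ECB's own table.  A reference
   to an address that is not valid is a system error; the ECB exits. */
static uint32_t evm_to_real(const struct ecb *e, uint32_t vaddr)
{
    uint32_t page = vaddr / PAGE_SIZE;

    if (page >= NUM_PAGES || e->frame[page] < 0) {
        printf("%s: system error, invalid address %#x -- ECB exits\n",
               e->name, (unsigned)vaddr);
        exit(EXIT_FAILURE);
    }
    return (uint32_t)e->frame[page] * PAGE_SIZE + vaddr % PAGE_SIZE;
}

int main(void)
{
    /* Each ECB has its own table, so the same EVM address is private:
       it resolves to different real storage in each ECB. */
    struct ecb a = { "ECB-A", { 10, -1, -1, -1 } };
    struct ecb b = { "ECB-B", { 22, -1, -1, -1 } };

    printf("EVM 0x100 in ECB-A -> real %#x\n", (unsigned)evm_to_real(&a, 0x100));
    printf("EVM 0x100 in ECB-B -> real %#x\n", (unsigned)evm_to_real(&b, 0x100));

    evm_to_real(&a, 0x2000);    /* unmapped: system error, exit */
    puts("never reached");
    return 0;
}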
A major difference between the TPF 3.1 system and the TPF 4.1 system is in the sharing of storage between ECBs. The program isolation provided by the TPF 4.1 system prohibits some of the storage sharing techniques used in previous releases. However, ECBs can still share storage.
All ECBs share a pool of working storage below 16 MB called the common area. This area is carved into 4 KB frames called common blocks, which are visible to all ECBs at the same EVM addresses. Application programs can use common blocks to pass data between ECBs, so existing application programs that do this can run with minimal updates. However, this is not the preferred mechanism because the common area is a limited resource.
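As an analogy for the common area, the POSIX C sketch below creates one shared region before fork(), so two processes see the same storage at the same address and can pass data through it, much as all ECBs see common blocks at the same EVM addresses. The use of mmap() and fork() is a stand-in for illustration; it is not how TPF implements common blocks.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define COMMON_BLOCK_SIZE 4096u   /* one 4 KB "common block" */

int main(void)
{
    /* Mapped before fork(), so both processes see the block at the
       same address. */
    char *common = mmap(NULL, COMMON_BLOCK_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (common == MAP_FAILED) { perror("mmap"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                    /* the "receiving ECB" */
        sleep(1);                      /* crude ordering, demo only */
        printf("reader at %p sees: %s\n", (void *)common, common);
        return 0;
    }

    /* The "sending ECB" passes data through the common block. */
    strcpy(common, "message passed through common storage");
    waitpid(pid, NULL, 0);
    return 0;
}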
Several new macros are used by the control program to manage working storage blocks.
Finally, the TPF 4.1 system provides a block checking mode to flag certain coding errors, such as writing beyond the end of a block, passing blocks that are chained to other blocks, and using storage that has already been released. You can turn block checking mode on and off without an IPL by using the new ZSTRC command. Performance is degraded while block checking mode is active. See Block Checking Mode for more information.
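The C sketch below shows the kind of checks a block checking mode can make: a guard pattern past the end of each block catches writes beyond the block, and a state flag catches releasing a block twice. The block layout, guard value, and function names are assumptions for illustration; they are not the implementation behind ZSTRC.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 128u
#define GUARD_LEN  8u
#define GUARD_BYTE 0xA5

struct block {
    int  in_use;                      /* catches a double release        */
    char data[BLOCK_SIZE];
    unsigned char guard[GUARD_LEN];   /* trampled by end-of-block writes */
};

static struct block *get_block(void)
{
    struct block *b = malloc(sizeof *b);
    if (b == NULL) { perror("malloc"); exit(EXIT_FAILURE); }
    b->in_use = 1;
    memset(b->guard, GUARD_BYTE, GUARD_LEN);
    return b;
}

/* The checks run at release time, one point where an overlay can be
   detected close to the program that caused it. */
static void release_block(struct block *b)
{
    if (!b->in_use) {
        puts("error: block released twice");
        return;
    }
    for (unsigned i = 0; i < GUARD_LEN; i++) {
        if (b->guard[i] != GUARD_BYTE) {
            puts("error: data written beyond the end of the block");
            break;
        }
    }
    b->in_use = 0;
}

int main(void)
{
    struct block *b = get_block();

    memset(b->data, 'x', BLOCK_SIZE + 3);   /* overlay: runs 3 bytes long */
    release_block(b);                       /* flags the overrun          */
    release_block(b);                       /* flags the double release   */
    free(b);
    return 0;
}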