This document supports Version 4.5.1 of the DYNIX/ptx® operating system. Be sure to review this document before you install or run this release of DYNIX/ptx.
DYNIX/ptx V4.5.1 is supported on NUMA-Q® systems and on Symmetry® 5000 systems with Model 5, Model 10, or Model 20 processor boards only.
Following are the minimum version requirements for system software and layered products. Earlier versions of these products are not supported on DYNIX/ptx V4.5.1.
Fibre Channel Bridge Software V1.5.3 (NUMA-Q systems)
NUMA-Q Console Software V1.7.2 (NUMA-Q systems)
CSM software V1.6.1 (Symmetry systems)
QCIC software V3.4.2 (Symmetry systems)
ptx®/Online diagnostics V1.4.1 (Symmetry systems)
Backup Toolkit V4.4.2 (required for SAMS:Alexandria®)
cfwdl-Compatible Firmware Bundle V1.0.2
CommandPoint™ Admin V4.5.1
CommandPoint Base V4.5.1
CommandPoint Clusters V2.2.1
CommandPoint for Unicenter TNG V1.0.0
CommandPoint SVM V2.2.1
EES V1.2.0
edb Debugger V3.4.0
Encryption Software V4.5.1 (U.S. and Canada only)
MQSeries™ for DYNIX/ptx V2.1.0
Micro Focus® COBOL Developer Suite V4.0
Micro Focus Application Server/OSX V4.0
ptx/AGENT V1.4.0
ptx/ATM V4.6.0
ptx/BaseComms V1.2.0
ptx/C++ V5.2.3
ptx/C++ Runtime V5.2.3
ptx/CFS V1.1.1
ptx/Channel Attach V4.5.1
ptx/CLUSTERS V2.2.1
ptx/Configuration Assistant V1.0.0
ptx/CTC V1.1.3
ptx/EFS V1.4.1
ptx/ITX V4.5.0
ptx/JSE V3.0.1
ptx/LAN V4.7.1
ptx/LICENSE V1.1.1
ptx/LIP V1.0.1
ptx/NFS® V4.7.1
ptx/OSI Transport V4.5.0
ptx/PDC V1.4.0
ptx/RAID V2.1.0
ptx/SESMON V1.1.0
ptx/SNA 3270 Terminal Emulator V4.6.0
ptx/SNA APPC V4.6.0
ptx/SNA LU0 API V4.6.0
ptx/SNA LU6.2 V4.6.0
ptx/SNA PU2.1 Base Server V4.6.0
ptx/SNA RJE (3770) V4.6.0
ptx/SNA SDLC & LLCI V4.6.0
ptx/SNA TN3270 Client V4.6.0
ptx/SNA TN3270 Server V4.6.0
ptx/SPDRIVERS V3.1.0
ptx/SVM V2.2.1
ptx/SYNC V4.5.0
ptx/TCP/IP V4.6.1
ptx/X.25 V4.5.0
ptx/XWM V4.6.1
ptx/XWM Contributed V4.6.1
Public Software V4.5.1
SequentLINK™ V4.3.0
Documentation Browser V3.1.0 for Windows®
The following products have been retired and are no longer provided on the DYNIX/ptx and Layered Products CDs.
ptx/Netscape FastTrack Server®.
ptx/NWS and ptx/LDAP. If these products are currently on your system, they must be deinstalled before the upgrade to DYNIX/ptx V4.5.1.
The following products are not supported on DYNIX/ptx V4.5.1:
NetWare® for Sequent Information Servers
ptx/18GB_DISK (incorporated into DYNIX/ptx V4.4.4)
ptx/DNA
ptx/ESBM (replaced by SAMS:Alexandria and Backup Toolkit)
ptx/FTAM
ptx/LAT
ptx/LDAP
ptx/NWS
ptx/OSBM (replaced by SAMS:Alexandria and Backup Toolkit)
ptx/PEP (replaced by ptx/PDC)
ptx/SDI
ptx/VT
ptx/X.400 Base Services
ptx/X.400 Sendmail Gateway
The DYNIX/ptx operating system complies with the following standards:
ISO/IEC 9899:1990 Information technology - Programming Language C.
System V Interface Definition, Third Edition, Volume 1 (base system and kernel extension).
IEEE Standard 1003.1-1990 Portable Operating System Interface for Computer Environments (POSIX™) and the Federal Information Processing Standards Publication (FIPS PUB 151-2) qualifications and extensions to the POSIX specification.
Portable Operating System Interface (POSIX) IEEE 1003.2-1992. Although not yet officially certified, DYNIX/ptx is compliant with this standard.
Threaded interfaces are compliant with those in POSIX 1003.1-1996.
X/Open Portability Guide (XPG4.2).
System V Application Binary Interface Third Edition and ABI+ extensions.
Operating Systems Programming Interface section of the Applications Environment Specification (AES) standard from OSF®.
The following platforms and devices are not supported in DYNIX/ptx V4.5:
Symmetry 2000 systems
MULTIBUS™-based systems and associated peripherals (Systech® terminal driver (st), parallel printer driver (lp), communications cards)
ELS systems
Systems containing i386™ processors (Model B and Model C processor boards)
Systems containing i486 processors (Model D and Model E processor boards)
Model F processor boards
Systems containing the Weitek® FPA
zd and vj disks
300 and 600 MB disks
5.25" Pbays and devices are supported as storage devices on Symmetry systems only. They are supported on NUMA-Q 2000 systems in the boot Pbay only.
The COFF compatibility development environment has been removed. The COFF libraries present in earlier versions of DYNIX/ptx are no longer available.
COFF binaries will continue to work unless one or more of the following conditions exists:
The binary accesses /dev/kmem.
The binary accesses a kernel component.
The binary relies on the DYNIX/ptx V2.x directory structure.
For information about installing this release of DYNIX/ptx, refer to the DYNIX/ptx V4.5.1 and Layered Products Release Notes.
The following restrictions must be followed when running DYNIX/ptx V4.5. In these restrictions, local refers to devices connected directly to the PCI/SCSI interface on NUMA-Q systems, or to devices connected directly to the CSM on Symmetry 5000 systems.
The root, dump, and primary swap partitions must be located on local (not shared) SCSI disks connected to a bootable Pbay or boot bay. They cannot be located on disks attached to the Fibre Channel.
If the root and primary swap partitions are under ptx/SVM control, they must be a single complete plex (that is, they can be mirrored, but not striped or concatenated).
The miniroot partition must be located on a local (not shared) SCSI disk. It must occupy the entire partition.
The /etc/dumplist file, which lists devices that can be used for a memory dump, should not contain the primary swap partition. If you are using swap partitions as dump devices, you must have enough secondary swap partitions to accommodate an entire crash dump.
When powering up a NUMA-Q system, power on all Pbays before powering on the Fibre Channel Bridges. On power-up, the FC Bridge will attempt to spin up all disks. If the Pbay is off, the FC Bridge will not be able to register the disks.
In general, the stand-alone kernel (SAK) should not be used for system maintenance operations, except as noted in the release notes. System maintenance operations should be done while running the standard kernel in single-user mode. The SAK is primarily a boot loader. It has limited memory availability and does not support the full set of services available in a standard kernel. Applications that rely on services that are not available or that use large amounts of memory may hang or panic the SAK. For details about running commands from the SAK, refer to the DYNIX/ptx System Administration Guide.
DYNIX/ptx V4.5.1 is primarily a maintenance release and includes the fixes described in Chapter 2. It also includes the following implementation changes.
DYNIX/ptx V4.5.1 provides support for the following hardware:
New quad. The CQuad (NUMA-Q 2000 systems) and MCQuad (NUMA-Q 1000 systems) contain a new Intel® processor.
Writeable CDR device. See the DYNIX/ptx System Administration Guide for information about copying data to a CDR.
Hitachi Data Systems HDS™ 5800 disk array. A new utility, hdsinfo, displays configuration status, log (FRU status), and parameter settings for the device. See the document NUMA-Q Supplemental Information for Hitachi Disk Arrays for more information about this device, including how to interpret the output from hdsinfo.
Native threads and application regions can now be used on the same system. A light-weight process (LWP) will migrate across regions unless it is in one of the following situations:
If the LWP is doing long duration I/O, the migration will fail.
If the LWP has received a fatal signal or is exiting or dumping core, the migration will fail.
If the LWP has a mandatory attachment to a quad, the migration will fail.
If the LWP is a vfork child process, the migration will be deferred until the child performs an exec or exits.
If the LWP has marked itself as non-swappable, the migration will be postponed until the LWP becomes swappable.
If the LWP is executing remotely from the destination quad, the migration will be postponed until the LWP runs on the destination quad.
If the migration fails or is deferred, the LWP will take memory from either the old region (if it is still active) or the system region.
When the migration fails for a particular LWP, an EES message will be logged. The message will contain information such as the ID of the LWP and the name of the process to which it belongs.
The application region subsystem includes a "borrow" memory policy that allows you to specify the minimum and maximum amounts of memory for a region. The minimum amount is guaranteed to be available to the region. Additional memory up to the maximum is borrowed from the free pool maintained by the system.
This policy is not supported in DYNIX/ptx V4.5.1; however, it will be implemented in a future release. For the V4.5.1 release, we recommend that you configure each region with an adequate minimum amount of memory.
In the DYNIX/ptx V4.4 version of sed, you could search for the < character by specifying either < or the regular expression \< as the search pattern.
For POSIX compliance, this behavior has been changed in DYNIX/ptx V4.5. The regular expression \< now matches the beginning of the word. To search for the < character, use < as the search pattern.
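The difference can be demonstrated with a quick test. (GNU sed, shown here, accepts the same \< word-start syntax; the sample input is illustrative.)

```shell
# \< now anchors to the beginning of a word, so it matches "catalog"
# but not the embedded "cat" in "concat":
printf 'concat\ncatalog\n' | sed -n '/\<cat/p'

# To match a literal < character, use < itself in the pattern:
printf 'a<b\ncd\n' | sed -n '/</p'
```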
DYNIX/ptx now provides support for POSIX threads. The traditional UNIX process model consisted of an address space and exactly one thread. POSIX threads extends this model to be an address space with one or more threads of control. Because of the shared nature of the address space among threads, multi-threaded applications must synchronize access to process global resources.
When compiling a threaded application, you must include the -Kthread option on the cc command line as described in the cc(1) man page.
DYNIX/ptx continues to support libpps, which provides a parallel thread-like programming model. In the libpps model, all memory resources to be shared between "microtasks" must be explicitly declared as such; whereas, in the threads model, they are shared implicitly.
For more information about the DYNIX/ptx implementation of threads, see the DYNIX/ptx Programming Tools Guide.
The Application Region ManagerTM has been enhanced to allow memory and process table resources to be assigned to an application region. Previously, only CPUs could be assigned to a region. When you create a region, you must assign at least one CPU to the region. Assigning memory and process table resources is optional.
Users can now be assigned to an application region. At login time, the user is placed in the specified region and all processes started by the user will execute there.
Application regions can be created and managed through ptx/ADMIN, CP Admin, or from the command line. See the DYNIX/ptx System Configuration and Performance Guide for more information about creating and managing regions. For information about assigning users to application regions, see the DYNIX/ptx System Administration Guide.
DYNIX/ptx is now compliant with the ISO/IEC 9899 Amendment 1, C Integrity standard.
For applications to be compliant with this standard, the __STRICT_ISO_C_AMM1__ macro must be used.
The following changes were made for compliance with this standard:
The API for the wcstok() function has been changed. Previously, wcstok() used the XPG4 interface:
wchar_t *wcstok(wchar_t *, const wchar_t *);
The new ISO C interface is as follows:
wchar_t *wcstok(wchar_t *, const wchar_t *, wchar_t **);
The definitions for fpos_t and fpos64_t in stdio.h have been changed from long to struct to include the mbstate_t object for wide character support.
The following new functions have been added. See the man pages for information about using these functions.
btowc(), fwide(), mbrstring(), mbsinit(), mbstrwocs(), towctrans(), wcscanf(), wcscoll(), wcsstr(), wcsttombs(), wctob(), wctrans(), wcxfrm(), wmemory()
The ISO C standard specifies that streams have an orientation. When a file is opened, the stream is considered unbound (without any orientation). When byte I/O functions are applied to the stream, the stream becomes byte-oriented. Similarly, when wide character I/O functions are applied to a stream that is not yet oriented, the stream becomes wide-oriented.

Wide character I/O functions are not allowed on byte-oriented streams, and byte I/O functions are not allowed on wide-oriented streams; these functions fail with the error code EINVAL. A stream's orientation can be changed with either freopen(), which removes any orientation, or fwide().

Wide-oriented streams also include an mbstate_t object that maintains the current conversion state of the stream. This mbstate_t object is used in locales that support shift-state encoding.
DYNIX/ptx now includes Perl Version 5.005-03, the third maintenance release of V5.005. Perl is installed as /usr/bin/perl. The Perl Programmer's Reference Guide is provided in man-page format. See perl(1) for more information.
Changes have been made in the device autoconfiguration routines to greatly reduce the time required to boot the operating system. The improvement you will see is dependent on your hardware configuration. In general, machines with large Fibre Channel Bridge or Pbay configurations show the most improvement; devices on these machines will be configured several times faster than with DYNIX/ptx V4.4.
When the system is booted with the -v (verbose) option, the kernel messages are now displayed in a different order. The first messages, which begin with a plus sign, indicate the first devices to be configured. The second group of messages describe the available memory on the system. The third group of messages begin with processor numbers and indicate that the remaining devices are being configured.
+qlc0 pci port - (unit 0xb) found on +quad0 pci port 0 (unit -)
+asy0 eisa port - (unit 0x0) found on +quad0 eisa port - (unit -)
+asy1 eisa port - (unit 0x1) found on +quad0 eisa port - (unit -)
+mdc0 eisa port - (unit 0x0) found on +quad0 eisa port - (unit -)
real memory = 24544.00 megabytes.
available memory = 23126.39 megabytes.
using 131072 buffers containing 1024.00 megabytes of memory.
00: Configuring devices, please wait.
20: +scsibus0 mscsi port - (unit -) found on +qlc0 mscsi port 0 (unit 0x70)
16: +scsibus1 mscsi port - (unit -) found on +qlc1 mscsi port 0 (unit 0x70)
12: +scsibus2 mscsi port - (unit -) found on +qlc2 mscsi port 0 (unit 0x70)
16: +sd0 scsi port - (unit 0x0) found on +scsibus1 scsi port - (unit -)
09: +sd3 scsi port - (unit 0x30) found on +scsibus1 scsi port - (unit -)
10: +sd1 scsi port - (unit 0x20) found on +scsibus1 scsi port - (unit -)
11: +sd2 scsi port - (unit 0x10) found on +scsibus1 scsi port - (unit -)
...
Under the following scenarios, the operating system can be rebooted although the system console is not running.
User-initiated reboot
The system has been running normally, but the console or the VCS application has died sometime previously. When the user initiates a reboot with a command such as init 6 or shutdown, the system performs an orderly shutdown and then reboots using the normal reboot sequence:
runtime DYNIX/ptx -> ptxldr -> SAK -> runtime DYNIX/ptx
Reboot after a power failure
A power failure has shut down both the NUMA-Q system and the console. If the console fails to boot or to start VCS when the power comes back on, the operating system will use the normal reboot sequence to reboot without the console.
BIOS -> Lynxer -> ptxldr -> SAK -> runtime DYNIX/ptx
The following types of situations can cause a consoleless reboot to fail:
Unexpected runtime errors during the boot process.
The boot configuration does not allow a reboot to multiuser mode. For example, autoboot is set to 0, or the system is configured to boot to single-user mode.
Deliberate user interactions. For example, an rc script may ask a yes/no question on the console during boot and will not proceed without an answer.
In these cases, a working console will be needed to diagnose and fix any problems, to boot the system, or to perform the necessary user interactions.
In previous releases, /bin/login set the default path for the PATH environment variable to .:/bin:/usr/bin, which caused the home directory to be searched first. To provide better security, the default path is now /bin:/usr/bin:., which causes the current working directory to be searched last.
In previous releases, ksh accepted the syntax "((...)...)", but did not always interpret it correctly. In the V4.5 release, this syntax has been replaced by "( (...)...)". (A space is now required between the left parentheses.) The shell now reports an error for the older syntax.
If your shell scripts rely on the older syntax, you will need to modify them to use the new "( (...)...)" syntax.
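For example (the commands are illustrative; the inner subshell's output is piped to wc):

```shell
# Old, now-rejected form:   ((echo a; echo b) | wc -l)
# New form, with the required space between the left parentheses:
( (echo a; echo b) | wc -l )
```

The space prevents the shell from parsing the leading "((" as the start of an arithmetic expression.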
Each cylinder group now contains additional information that identifies the filesystem to which the cylinder group belongs. This information is used to help detect filesystem inconsistency. The cylinder groups in older filesystems are automatically upgraded with this information the first time that fsck is run on the filesystems.
Previously, memory (mfs) filesystems were temporary; when the filesystem was unmounted, its contents were lost. These filesystems can now be stored in a regular file. A new -I image_file option has been added to the mount_mfs command to allow you to specify the file where the image is to be stored. If image_file already exists, the -I option will mount it. The filesystem utilities have been enhanced as necessary to work with the image_file.
When an optical unit (GBIC or GLM) in a FC Host Adapter, Switch port, or Bridge begins to fail, it can intermittently, but repeatedly, corrupt data transfers to and from Fibre Channel components within the mass storage subsystem.
In previous DYNIX/ptx releases, when this situation occurred, the system could become bottlenecked and stall for long periods of time as it attempted to recover from I/O failures caused by the failing component. Although the system could sometimes eventually recover, critical applications may have timed out and failed by then. These failures were characterized by a series of command sequence timeout messages logged by the Fibre Channel Host Adapter driver (ff).
In DYNIX/ptx V4.5, the affected SCSI bus initiator(s) and/or specific disk ports/routes will be disabled when the failure occurs. This enables the system to quickly route around the point of failure if possible and avoids system stalls. The SCSI bus driver and/or the disk driver will log error messages indicating that a SCSI bus initiator or a specific disk port/route has been disabled. The system will automatically attempt to reintegrate disabled SCSI bus initiators at five minute intervals. Disk ports will be quickly reenabled and reintegrated into the system if they pass a simple driver-provided diagnostic, but will be "permanently disabled" if they exhibit the behavior again within the next 10 minutes.
Once the source of the failure(s) has been determined and corrected, you can use OLR operations (with devctl(1M)) to restore disk access through the "permanently disabled" ports. Alternatively, you can use the revive option to the pbayid/diskid program to restore access. The revive option avoids the overhead of OLR, as you do not need to unmount filesystems, take disks out of SVM control, or destroy VTOCs.
To ensure that the system can route around disabled components, you can use the following configuration methods to provide alternate routes:
SVM mirrored disk volumes
Multiported disk devices with ports on unrelated buses and Fibre Channel fabrics
Dual-initiator SCSI buses with initiators on unrelated Host Adapters and Fibre Channel fabrics
Multiple Fibre Channel fabrics
The NUMA-Q online diagnostics product includes a new utility, /usr/onldiag/fcdcu, that collects diagnostic data from the Fibre Channel devices on the system. The utility is started automatically when the system is booted to multiuser mode. By default, it runs as a daemon and polls the FC devices every 30 minutes. fcdcu stores the data it collects in a log file in the /usr/adm/fclog directory. A shell script, /usr/onldiag/purgefclogs, is also provided to find old log files and remove them from the system. For more information, see the DYNIX/ptx System Administration Guide.
The NUMA-Q online diagnostics product includes a new utility, /usr/onldiag/swutil, that extracts the SES pages from a Switch. This utility makes it unnecessary to telnet into the Switch to obtain similar data through the standard command line interface. For more information about this utility, see the swutil man page.
In previous releases, when a user connected two streams using the ioctl() operations I_LINK or I_PLINK, any other user could unlink them with I_UNLINK or I_PUNLINK. To improve security, the I_UNLINK and I_PUNLINK operations are now restricted to the owner of the link (the user who created the link) and to the root user.
This restriction is enabled by the new configurable parameter muxunlink_restricted. When the parameter is set to 1, which is the default value, the restriction is in effect.
We recommend that you modify your applications to use the new behavior of these ioctls. However, if your site requires the previous unrestricted behavior, you can set the muxunlink_restricted parameter to 0 and then recompile the kernel and reboot. The parameter is located in /etc/conf/uts/kernel/i386_space/param_space.c.
The kernel now maintains files containing vmtune parameters for each user-defined application region defined on your system, allowing you to tune each region separately. The file for the system region contains global vmtune parameters.
The new vmtune -gregion option can be used to view or change the parameters for a specific region.
To maintain parameter changes across system boots, you must create files, one per region, specifying the parameter values that are to be applied to each region. The files must be named vmtune.<region_name>, where region_name is either system (for the system region), or the name of a user-defined region. The files must be placed in the /etc/vmtune.d directory. When the system is booted, the parameters will be adjusted as specified for each activated region having a file in /etc/vmtune.d.
To create a file, redirect the output of the vmtune command. The following example creates a file for the region app1:
$ /etc/vmtune -g app1 > /etc/vmtune.d/vmtune.app1
Starting in the DYNIX/ptx V4.4.4 release, a mechanism called eXtended Kernel Virtual Address space (XKVA) was provided to extend the range of KVA addresses possible on NUMA-Q systems configured with wide 64-bit PTEs. This mechanism overlaid an additional 3 GB of kernel virtual space on top of the standard 3 GB of user virtual space. The kernel then mapped and remapped user virtual space and XKVA as needed.
By default, this mechanism was enabled for all NUMA-Q systems running with wide 64-bit PTEs, providing support for up to 64 GB of physical memory. However, on systems that did not need the extra kernel virtual space (physical memory was 16 GB or less and the system was not running out of kernel virtual space in primary KVA), this mechanism could reduce system performance, as it required extra faults and TLB flushing. Systems that did not need the XKVA feature could disable it by patching the kernel variable xkva_alloc_enabled.
In the V4.5 release, the XKVA implementation has been improved. The operating system now uses XKVA for dynamic kernel virtual space allocations only when all primary KVA is completely used up. The xkva_alloc_enabled parameter is obsolete; systems that do not need the XKVA feature no longer need to disable it. The V4.5 release also includes several improvements in direct I/O page-locking, which improves the performance of direct I/O operations when XKVA usage is forced by the system workload.
The following kernel structures are now allocated dynamically:
Mount structures
CDFS mount structures
File descriptor tables
Session structures
Fifo structures
UFS disk quota structures
In previous releases, the allocation of most of these structures was controlled by tunable kernel parameters. The allocations can now be adjusted as necessary with the kmstune command. See the next section "Changes to kmstune" for information about the memory pools that are used to allocate these structures.
The following memory pools have been added in DYNIX/ptx V4.5:
Site modifications to kernel structure pools can now be easily saved and reinstated at boot time. A new directory, /etc/ktune.d, contains executable scripts that define the parameters to be modified in specific pools. When the system is booted, the startup script /etc/rc2.d/S04ktune runs each script.
You must create a separate script for each pool whose parameters are to be modified at boot time. The name of the script must match the name of the pool, as described in the /etc/ktune.d/README file.
The following sample script, /etc/ktune.d/base.queue, sets the reslow parameter for the base.queue pool to 100 and the yellow zone to 10000.
#! /bin/sh
cat << EOF | /etc/kmstune -F -
base.queue reslow 100
base.queue yellow 10000
EOF
For more information about kmstune, see the kmstune(1M) man page.
The following parameters are now obsolete:
The FDIV_BUG parameter has been renamed FDIV_BUG_FLAG to remove a conflict with the -D FDIV_BUG compiler option. (The conflict could cause error messages when compiling the kernel.) If your site file includes the FDIV_BUG parameter, be sure to rename it to FDIV_BUG_FLAG before compiling your local V4.5 kernel.
The default values have been changed for the following parameters. If your site file includes any of these parameters, you may want to reconfigure them.
The following parameters were added in the V4.5 release:
The existing MAXAIO parameter now provides a soft limit for the maximum number of pending asynchronous I/O requests per process. Its default value is also 50.
muxunlink_restricted. Whether the I_UNLINK and I_PUNLINK operations are restricted to the owner of the link and the root user. By default, the parameter is set to 1, which provides the restricted behavior.
The format used for core files has been changed. A core file now includes the following process information:
Process status
Status of every light-weight process (LWP) in the process
Every loadable and writeable memory segment that was part of the process's address space, including shared library segments
The size of the core file created by a process can be controlled by the user (see getrlimit(2)). Core files for multi-threaded processes are typically much larger than core files for single-threaded processes.
For more information about the new format, see core(4).
The Create a Region option now allows you to include memory and process table resources in an application region. The Modify a Region option allows you to modify those resources.
The Display Regions option now allows you to select the regions for which you want to display attributes.
A new option, Configure Region Options, allows you to specify the default login region.
The Add a User Account and Change a User Account options now allow you to specify the application region into which a user should be placed at login.
In previous releases, the Create a Custom VTOC File option created partitions that could be either larger or smaller than the sizes you specified. The option now allows you to indicate whether partition sizes should be at least, approximately, or exactly the size you specify.
The following options can now be specified on the cc command line:
-K{thread | nothread}
Generates code for threaded or non-threaded applications. The default is -Knothread.
-W0,-xstring_merge_ro
Creates only one copy of identical read-only (const char *) literal strings.
-W0,-xstring_merge_rw
Creates only one copy of identical read/write (char *) literal strings.
-W0,-xstring_merge
Shorthand for specifying both -W0,-xstring_merge_ro and -W0,-xstring_merge_rw.
-W1,-no_red_zone
Suppresses the generation of stack probe code for multi-threaded applications.
Two new predefined macros, __STDC_VERSION__ and __STRICT_ANSI__, are now available. Table 1-2 shows how these macros are defined in each cc compilation mode.
Mode       __STDC_VERSION__   __STRICT_ANSI__
-Wc,-seq   undefined          undefined
-Xs        undefined          undefined
-Xt        199409L            undefined
-Xa        199409L            undefined
-Xc        199409L            1
By default, ptx/C is compliant with most areas of the ISO/IEC 9899 Amendment 1, C Integrity standard. For complete compliance with the standard, specify -D __STRICT_ISO_C_AMM1__ on the cc command line. This macro causes different versions of the following functions to be used:
The assembler now supports the MMX™ and Streaming SIMD instruction set extensions. The MMX instructions are supported on both Pentium II Xeon and Pentium III Xeon processors; the SIMD instructions are supported only on Pentium III Xeon processors. If a system contains a mix of processor types, application writers can use a primitive to ensure that the MMX instructions are run only on processors that support those instructions. The SIMD instructions are restricted to systems containing only Pentium III Xeon processors. Another primitive is available to determine whether all processors on the system support SIMD instructions. For information about these primitives, see the engdata(3SEQ) man page. For descriptions of the instructions, see the DYNIX/ptx Assembly Language User's Manual.
The edb debugger is now provided with DYNIX/ptx. (This debugger was previously provided with ptx/C++.) edb must be used to control and debug multi-threaded programs and to analyze core files from them. The older debug debugger is still available and can be used to debug non-threaded programs only.
The following changes have been made since the previous release of edb.
New functionality:
Thread support on DYNIX/ptx 4.5
The commands continue and halt can use %j to control all threads in a job, or %j.t to control a single thread
The commands cancel, signals, stack, and return can take an optional job argument
The show command allows types and expressions as arguments
Support for the C/C++ long long type
Support for debugging dynamically linked shared objects
The wait command waits for events on background jobs
New options:
Enhancements to the edb graphical user interface:
Three new commands have been added to the popup menu of the Source pane. Right click in the Source pane to see these items:
An X resource toggle to display icons has been added:
Edb*iconsPaneToggle.set: true
The edb control buttons can now be personalized:
Edb*viewer_run_button.labelString: Run
Edb*viewer_continue_button.labelString: Continue
Edb*viewer_step_button.labelString: Step
Edb*viewer_step_over_button.labelString: Step over
Edb*viewer_return_button.labelString: Return
Edb*viewer_halt_button.labelString: Halt
Edb*viewer_detach_button.labelString: Detach
Edb*viewer_kill_button.labelString: Kill
Edb*viewer_make_button.labelString: Make
Edb*viewer_edit_button.labelString: Edit
NAME     CFGTYPE  DEVNUM  UNIT        FLAGS  OnBUS  OnDEVICE
ff0      ff       0       0x00000006  SP     pci    quad0
fabric8  fabric   8       0x00000001  SM     fc     ff0
If you need LUN information, you must use the -m option, which produces output in a machine-readable format. In this format, the units of Fibre Channel devices can contain very large numbers (22 hex-digits). The first six hex-digits are the N-PORT address; the remaining hex-digits are the LUN (NNNNNNLLLLLLLLLLLLLLLL). Programs should treat these unit values as strings because they are too large to fit in any machine-sized number, even the long long type in C.
A new -o modifyroot option has been added to allow fsck to modify the root filesystem at boot time and in certain disaster recovery situations. fsck will not modify the root filesystem if this option is not provided. See fsck_ufs(1M).
flockfile, ftrylockfile, funlockfile
Locks or unlocks a stream. See flockfile(3S).
fwide
Sets or obtains the orientation of the specified stream. See fwide(3C).
getprinfo
Returns process-related data, such as information that may be useful to the ps command. See getprinfo(2SEQ).
getrgnname
Returns the region name of the caller or the specified process. See getrgnname(2SEQ).
lwp_trace
Allows a process to observe and/or control other unrelated processes and light-weight processes within those processes. See lwp_trace(2SEQ).
mbsinit
Determines whether the conversion state described by ps represents an initial conversion state. See mbsinit(3C).
mbsrtowcs
A restartable conversion function that converts a sequence of multibyte characters into the corresponding sequence of wide characters. See mbrstring(3C).
pread64
Reads from a file at a given offset without changing the file pointer. See pread64(2SEQ).
pthread_atfork
Registers fork handlers. See pthread_atfork(3C).
pthread_attr_init, pthread_attr_destroy
Initializes or destroys the threads attribute object. See pthread_attr_init(3C).
pthread_attr_setdetachstate, pthread_attr_getdetachstate
Sets or gets the detachstate attribute. See pthread_attr_getdetachstate(3C).
pthread_attr_setschedparam, pthread_attr_getschedparam
Sets or gets the schedparam attribute. See pthread_attr_getschedparam(3C).
pthread_attr_setstackaddr, pthread_attr_getstackaddr
Sets or gets the stackaddr attribute. See pthread_attr_getstackaddr(3C).
pthread_attr_setstacksize, pthread_attr_getstacksize
Sets or gets the stacksize attribute. See pthread_attr_getstacksize(3C).
pthread_cancel
Cancels the execution of a thread. See pthread_cancel(3C).
pthread_condattr_init, pthread_condattr_destroy
Initializes or destroys the condition variable attributes object. See pthread_condattr_init(3C).
pthread_cond_init, pthread_cond_destroy
Initializes or destroys condition variables. See pthread_cond_init(3C).
pthread_cond_signal, pthread_cond_broadcast
Signals or broadcasts a condition. See pthread_cond_signal(3C).
pthread_cond_wait, pthread_cond_timedwait
Waits on a condition. See pthread_cond_wait(3C).
pthread_cleanup_push, pthread_cleanup_pop
Establishes cancellation handlers. See pthread_cleanup_push(3C).
pthread_create
Creates a thread. See pthread_create(3C).
pthread_detach
Detaches a thread. See pthread_detach(3C).
pthread_equal
Compares thread IDs. See pthread_equal(3C).
pthread_exit
Terminates the calling thread. See pthread_exit(3C).
pthread_getspecific, pthread_setspecific
Thread-specific data management. See pthread_getspecific(3C).
pthread_join
Waits for a thread to terminate. See pthread_join(3C).
pthread_kill
Sends a signal to a thread. See pthread_kill(3C).
pthread_key_create
Creates a thread-specific data key. See pthread_key_create(3C).
pthread_key_delete
Deletes a thread-specific data key. See pthread_key_delete(3C).
pthread_mutexattr_init, pthread_mutexattr_destroy
Initializes or destroys the mutex attributes object. See pthread_mutexattr_init(3C).
pthread_mutex_init, pthread_mutex_destroy
Initializes or destroys a mutex. See pthread_mutex_init(3C).
pthread_mutex_lock, pthread_mutex_trylock, pthread_mutex_unlock
Locks or unlocks a mutex. See pthread_mutex_lock(3C).
pthread_once
Dynamic package initialization. See pthread_once(3C).
pthread_mutexattr_settype, pthread_mutexattr_gettype
Sets or gets a mutex type. See pthread_mutexattr_gettype(3C).
pthread_rwlockattr_init, pthread_rwlockattr_destroy
Initializes or destroys the read-write lock attributes object. See pthread_rwlockattr_init(3C).
pthread_rwlock_init, pthread_rwlock_destroy
Initializes or destroys a read-write lock object. See pthread_rwlock_init(3C).
pthread_rwlock_rdlock, pthread_rwlock_tryrdlock
Locks a read-write lock object for reading. See pthread_rwlock_rdlock(3C).
pthread_rwlock_unlock
Unlocks a read-write lock object. See pthread_rwlock_unlock(3C).
pthread_rwlock_wrlock, pthread_rwlock_trywrlock
Locks a read-write lock object for writing. See pthread_rwlock_wrlock(3C).
pthread_sched_get_priority_max_np, pthread_sched_get_priority_min_np
Gets the maximum or minimum valid thread priority values. See pthread_sched_get_priority_max_np(3C).
pthread_self
Gets the thread identifier of the calling thread. See pthread_self(3C).
pthread_setcancelstate, pthread_setcanceltype, pthread_testcancel
Sets the calling thread's cancelability state. See pthread_setcancelstate(3C).
pthread_sigmask
Examines and changes blocked signals for the current thread. See pthread_sigmask(3C).
pwrite64
Writes to a file at a given offset without changing the file pointer. See pwrite64(2SEQ).
rgnassign
Assigns a process or group of processes to an active region. See rgnassign(2SEQ).
rgn_cpus_online
Returns the number of processors currently configured and online in the caller's region. See rgn_cpus_online(2SEQ).
rgnctl
Activates, modifies, or deactivates a region. See rgnctl(2SEQ).
rgn_engfillset
Initializes the specified set of processors to include all processors that are currently online in the specified region. See engemptyset(2SEQ).
rgn_getkerndata
Provides kernel statistics for the specified region. See rgn_getkerndata(2SEQ).
rgninfo
Provides a snapshot of the attributes and statistics for active regions. See rgninfo(2SEQ).
rgn_quadfillset
Initializes the specified set of Quads to include all Quads that are currently configured in the specified region. See quademptyset(2SEQ).
sched_yield
Yields the processor so that another runnable thread can execute. See sched_yield(3C).
towctrans
Translates a given wide character according to the specified property. See wctrans(3C).
ups_ctl
Queries the system state of an uninterruptible power supply (UPS). See ups_ctl(2SEQ).
vfwprintf
Similar to fwprintf, but is called with an argument list instead of a variable number of arguments. See vwprintf(3S).
virtwin_synctlb, virtwinv_synctlb
Synchronizes virtually-windowed address translations to a mapped object. See virtwin_synctlb(2SEQ).
vm_rgnctl
Examines and changes virtual memory parameters for the caller's region. See vm_ctl(2SEQ).
vswprintf
Similar to swprintf, but is called with an argument list instead of a variable number of arguments. See vwprintf(3S).
vwprintf
Similar to wprintf, but is called with an argument list instead of a variable number of arguments. See vwprintf(3S).
wcscoll
Performs string comparisons using collating information. See wcscoll(3C).
wcsrtombs
A restartable conversion function that converts a sequence of wide characters into a sequence of corresponding multibyte characters. See mbrstring(3C).
wcsstr
Finds the first occurrence of a wide string. See wcsstr(3C).
wctrans
Performs extensible wide character mapping. See wctrans(3C).
wcsxfrm
Performs wide character string transformations. See wcsxfrm(3C).
wmemory
Functions that operate as efficiently as possible on arrays of wide characters stored in memory. See wmemory(3C).
wprintf, swprintf, fwprintf
Prints formatted output. See wprintf(3S).
wscanf, fwscanf, swscanf
Reads multibyte characters, interprets them according to a format, and stores the results. See wscanf(3S).
See rpc_clnt_calls(3N) for more information about clnt_geterr().
Now initializes the specified processor set to include only those processors that are currently online in the caller's region. See engemptyset(2SEQ).
Now assigns a user to the appropriate login region. If the region is inactive, the user's login fails. See grant_login_priv(3X).
When MAP_NORESERVE is specified for a private anonymous mmap, the system does not reserve the anonymous swap space backing the mapping at the time of the call. Instead, swap space is reserved only when, and if, the corresponding pages need to be swapped out. Note, however, that if the pages do need to be swapped out later and sufficient swap space cannot be allocated at that time, the process will receive a SIGKILL.
The paging policy semantics have been changed to restrict the allocation of pages to the Quads whose memory is assigned to the caller's region. See mmap(2).
When virtwin() or virtwinv() is used and the application process is composed of multiple lightweight processes (LWPs), the application may need to take additional steps before using the address translations instantiated by these system calls. See virtwin(2SEQ) for details.
wchar_t *wcstok(wchar_t *, const wchar_t *);
wchar_t *wcstok(wchar_t *, const wchar_t *, wchar_t **);
See wcstring(3C).
If you are updating to DYNIX/ptx V4.5 from a release earlier than V4.4.6, you may want to review the following information.
DYNIX/ptx V4.4.6 contains the following:
Support for 8 GB of physical memory per Quad (0300-Series Quads only). The maximum amount of physical memory per node remains at 64 GB. A node containing a Quad earlier than the 0300 Series is limited to 4 GB of physical memory per Quad.
Support for the Pentium III Xeon Processor.
DYNIX/ptx V4.4.5 provides support for NUMA-Q 1000 systems. These systems operate in the same manner as NUMA-Q 2000 systems; however, the bootbay contains a single local disk that is used for the root filesystem. This disk must be used for software upgrades and for saving crash dumps.
The NUMA-Q 1000 system uses a special VTOC for the disk in the bootbay. The VTOC configures the data partitions on the disk as follows:
Partition   Use
0           Root filesystem
1           Primary swap partition
2           Alternate root partition
3           Crash dump partition
Partitions 0, 1, 2, and 3 are reserved for the root filesystem, primary swap, the alternate root, and crash dumps. You can use other data partitions for any data you desire.
Partition 9 is a miniroot partition that can be used to build a custom miniroot.
By default, the operating system is installed on partition 0. If your system contains a single bootbay, you will need to upgrade the operating system on the alternate root partition (partition 2). See the DYNIX/ptx and Layered Products Software Installation Release Notes for more information.
NUMA-Q 1000 systems can use either of the following methods to save crash dumps:
Copy the crash dump directly to a filesystem, which must be located on the dump partition on the root disk (typically sd0d3).
Copy the crash dump to a dump device on the root disk (typically sd0d3) and then run savecore to copy the dump to /usr/crash.
Copying the dump directly to a filesystem is faster. Refer to the DYNIX/ptx System Recovery and Troubleshooting Guide for details about configuring the system to save crash dumps.
DYNIX/ptx now supports 0300-Series Quads. A NUMA-Q system using only 0300-Series Quads can include up to 16 Quads.
A NUMA-Q system can contain both 0300-Series Quads and the earlier Quads; however, in this configuration the system is limited to eight Quads.
Commands such as /etc/showcfg and /etc/showquads identify the processors in 0300-Series Quads as running at 360 MHz.
The Application Region Manager provides the ability to partition system resources into multiple regions. Applications can then be assigned to run in a specific region, allowing you to balance your system workload.
The Processor Group Affinity scheduler can be used to further partition the workload within a region. You can create run queues containing certain CPUs from the region, assign processes to those run queues, and assign priorities that determine when each run queue's processes will be executed.
When you create an application region, you specify attributes for it, including its name, the resources to be associated with the region, whether the region should be activated now, and whether it should be activated automatically at system boot. Information about regions is stored in the region registry file, /etc/system/region_db.
An application region can be active or inactive. When it is active, the operating system will attach the specified resources to the region and processes can be executed there. When the region is inactive, no resources are associated with the region and processes cannot be executed. If desired, regions can be activated automatically at system boot. You can also activate or deactivate regions as needed.
For more information about the Application Region Manager, see the DYNIX/ptx System Configuration and Performance Guide.
Support for 18-GB disks has been added to DYNIX/ptx V4.4.4. This support was previously provided by the ptx/18GB_DISK layered product. The following VTOCs are available: ibms18w for the IBM® drive or seag118273 for the Seagate® drive.
When the system is booted, the standload program reads the boot strings, including the autoBoot flag, and takes the appropriate action, typically either booting the uptime unix kernel or executing the stand-alone dump program.
In previous releases, if dump was invoked but either failed or could not be started, standload took the system to single-user mode and displayed the SAK shell prompt. To provide more flexibility in the case of a dump failure, a new value has been added to the autoBoot flag to tell standload to continue to boot the unix kernel even if dump fails.
In the V4.4.4 release, the following values can be specified for the autoBoot flag:
NUMA-Q systems can now be configured to copy a memory dump directly to a filesystem. This method can reduce the time needed to recover from a system failure, as it bypasses the savecore program. Also, you do not need to maintain dump devices. The DYNIX/ptx System Recovery and Troubleshooting Guide describes how to implement this option.
The new upsmon daemon can be used to monitor a UPS attached to a serial port. upsmon monitors the port for a simple ON-BATTERY signal. When this condition occurs, upsmon executes the /etc/powerfail script, which is provided with the operating system, with the powerfail argument. upsmon then continues to monitor the serial port. When it sees the OFF-BATTERY condition, it executes /etc/powerfail with the powerok argument.
The upsmon program is generic in nature and will monitor any serial port; however, it was written specifically for NUMA-Q systems. For details about running this daemon, see upsmon(1M).
Locales have been added to support the new EURO currency. The names of the locales match the existing locale names with the extension _EU (for example, fr_EU and es_EU). If your site is located in a country participating in the European Union and you want to use the EURO currency, set your locale to <localename>_EU. To use the local currency, set your locale to <localename>.
The following changes have been made to the kernel:
New BUFCACHE_MAX tunable parameter that specifies the maximum size, in bytes, of the system buffer cache. The parameter is used in the following manner:
If bufpages has been changed from its default value of zero, the system uses that value to compute the size of the system buffer cache. This size can exceed BUFCACHE_MAX.
If bufpages has not been changed, the system uses the BUFPCT and BUFPAGES_INCR parameters to compute the size of the system buffer cache. It then compares this size with the value of BUFCACHE_MAX. If the computed size is larger than BUFCACHE_MAX, the size of the system buffer cache will be set to BUFCACHE_MAX.
For more information about this parameter, see the DYNIX/ptx System Configuration and Performance Guide.
Message queues on NUMA-Q systems.
The scalability of message queue operations on NUMA-Q systems has been greatly improved in this release. As a result of these changes, space for message queue messages and message headers is now allocated separately for each Quad. The total space allocated for these resources is the amount specified by the MSGSEG and MSGTQL kernel parameters, multiplied by the number of Quads in the system. The space for a given message is allocated from the pools on the Quad where the process sending the message is located.
If you have previously increased the values of MSGSEG and MSGTQL, you may find that you can now reduce the values of these parameters without hurting message queue performance. See the DYNIX/ptx System Configuration and Performance Guide for more information about these parameters.
Kernel memory-allocation pool attributes.
Various kernel components create a component-specific pool of identical data structures used by that component. There are several attributes and statistics associated with each pool. A new command, kmstune, displays or sets these attributes and statistics, enabling you to monitor and tune the kernel memory used by a component's pool of data structures. For more information, refer to the DYNIX/ptx System Configuration and Performance Guide and to the kmstune(1M) man page.
Manufacturing kernel now provided.
DYNIX/ptx and certain layered products are now shipped with special kernel components that allow the building of a manufacturing (MFG) kernel. This kernel is intended for debugging purposes only; it contains many checks that can cause the kernel to panic in situations in which the standard kernel would continue to operate. The manufacturing kernel should be built and booted only at the direction of Customer Support.
The cc compiler has been modified as follows:
New __IDENT__ predefined macro.
During compilation, this macro is replaced by the argument string of the most recent #ident directive in the source file. If no #ident directive appears before __IDENT__, the macro expands in the same manner as the __FILE__ macro.
New OFL function ordering algorithm.
The new -Wofl,-quick option tells the compiler to pass the -Oquick option to the linker instead of the -Oreduce option. The -Oquick option specifies that the linker should use an alternate function ordering algorithm that runs in linear time, rather than the default graph reduction algorithm, which can consume a large amount of processor time for large programs. The -Wofl,-quick option can be used with either -Wofl,-static or -Wofl,-dynamic.
C Macros can now support a variable number of arguments.
To enable C macros to allow a variable number of arguments, append ... to the name of the final parameter, which is known as the "rest" parameter. During expansion of the macro, the "rest" parameter is replaced with the corresponding argument plus all following arguments, which must be separated by commas.
If the optional part of the argument list is empty, the macro can be defined by inserting ## before the "rest" parameter name in the replacement list. During macro expansion, if there are no arguments corresponding to the "rest" parameter, then the preprocessor token preceding the "##" is not used. For example,
#define WERROR(format, args...) \
        fprintf(stderr, "line %d: " format, __LINE__ , ## args)
specifies that if WERROR is used with a single argument, then the comma after the format should not be included in the macro expansion. If there is more than one argument, then the comma is used.
WERROR("error\n");
WERROR("%s and %s are not defined\n", n1, n2);
expands to:
fprintf ( stderr , "line %d: " "error\n" , 111 ) ;
fprintf ( stderr , "line %d: " "%s and %s are not defined\n" , 112 , n1, n2 ) ;
The following options have been added:
The following directives have been added:
A new option, -r rgnname, lists processes belonging to the specified region. The new -R option includes information about all regions.
The -o option now accepts the format specifier rgn to list the region to which each process belongs. See ps(1) for details about these options.
You can configure top2 to display processes belonging to a specified region. To do this, enter one or more region names in the region field of the process selection screen. See top2(1).
New shmgetv() routine.
This new interface provides the ability to vectorize shared memory requests on NUMA-Q systems. See shmget(2).
New gettimeofday_mapped() routine for obtaining the time of day.
This routine returns the current time of day without the overhead of a gettimeofday() system call. It consults a special mapped page in which the time of day is readable by all user processes. See gettimeofday_mapped(2SEQ).
New process-to-process attachment facility for NUMA-Q systems.
This facility can be used with qfork(), qexec(), and attach_proc() and provides an alternative to attaching a process to a Quad. When a process is attached to another process, both processes will always be located on the same Quad; however, the system may migrate the processes to another Quad to balance the load. If one of the attached processes is migrated to another Quad, the other process will accompany it.
A new QUAD_ATTACH_TO_PARENT flag is available for qfork() and qexec() to attach a process to its parent process. The R_PID option is used with attach_proc() to attach a process to an arbitrary process. See qfork(2SEQ), qexec(2SEQ), and attach_proc(2SEQ).
The following manuals are available for DYNIX/ptx V4.5: