This document supports Version 4.4.10 of the DYNIX/ptx operating system. Be sure to review this document before you install or run this release of DYNIX/ptx.
DYNIX/ptx V4.4.10 is supported on IBM xSeries 430 systems, IBM NUMA-Q® 2000 systems, and Symmetry® 5000 systems (CSM-based systems). It is not supported on Symmetry 2000 systems (SSM-based systems).
Following are the minimum version requirements for system software and layered products. Earlier versions of these products are not supported on DYNIX/ptx V4.4.10.
Fibre Channel Bridge Software V1.5.5 (IBM xSeries 430 and NUMA-Q 2000 systems).
Fibre Channel Switch Software a2.2.1a (V2.2.1) for IBM 2109/SilkWorm 2000 family of switches and 1.6c3 (V1.6.3) for SilkWorm 1000 family of switches (IBM xSeries 430 and NUMA-Q 2000 systems with switches).
Fibre Channel Host Adapter Software FF2.30 (V2.3.0) (IBM xSeries 430 and NUMA-Q 2000 systems with Emulex 2 and Emulex 3) and SF3.22a0 (V3.2.2) (IBM xSeries 430 and NUMA-Q 2000 systems with Emulex 4).
IBM Fibre Channel JBOD 9GB/18GB Disk Software SQ00 (V0.0.0).
IBM Fibre Channel 36 GB Disk Software V1.1.1.
Seagate Fibre Channel 9GB/18GB Disk Software SQ25 (V2.5.0).
NUMA Console Software V1.7.8.
CSM software V1.6.1 (CSM-based systems).
SSM/VBAD software V4.9.2 (for CSM-based systems with VME Bus Adapter (VBAD) boards).
QCIC software V3.4.1 (CSM-based systems).
NUMA-Q Online Diagnostics V2.6.0 (IBM xSeries 430 and NUMA-Q 2000 only).
ptx®/Online Diagnostics V1.4.1 (CSM-based systems).
Backup Toolkit V4.4.4 (required for SAMS:Alexandria®).
CommandPoint™ Admin V4.4.0.
CommandPoint Clusters V2.1.1.
CommandPoint for Unicenter TNG V1.0.0.
CommandPoint SVM V2.1.0.
edb Debugger V3.4.5.
Encryption Software V4.4.10.
ptx/AGENT V1.3.2.
ptx/ATM V2.0.2.
ptx/BaseComms V1.1.3.
ptx/C++ V5.2.6.
ptx/C++ Runtime V5.2.6.
ptx/CFS V1.0.5.
ptx/CLUSTERS V2.1.4.
ptx/CTC V1.1.3.
ptx/EES V1.0.1.
ptx/EFS V1.3.7.
ptx/ITX V4.4.1.
ptx/JSE V1.1.2 and V1.2.1.
ptx/LAN V4.6.3.
ptx/LICENSE V2.0.0.
ptx/NFS® V4.6.5.
ptx/OSI Transport V4.4.1.
ptx/PDC V1.2.0.
ptx/RAID V2.0.5.
ptx/SESMON V1.0.1.
ptx/SNA 3270 Terminal Emulator V4.5.1.
ptx/SNA APPC V4.5.1.
ptx/SNA LU0 API V4.5.1.
ptx/SNA LU6.2 V4.5.1.
ptx/SNA PU2.1 Base Server V4.5.1.
ptx/SNA RJE (3770) V4.5.1.
ptx/SNA SDLC & LLCI V4.5.1.
ptx/SNA TN3270 Client V4.5.1.
ptx/SNA TN3270 Server V4.5.1.
ptx/SPDRIVERS V2.4.0.
ptx/SVM V2.1.4.
ptx/SYNC V4.4.3.
ptx/TCP/IP V4.5.4.
ptx/X.25 V4.4.2.
ptx/XWM V4.5.4.
SequentLINK™ V4.3.2.
The following products are no longer available from IBM or have been removed from the distribution CD beginning with the "DYNIX/ptx V4.4.8 Operating System and Layered Products Software" CD, August 2001, Revision C. However, if you are upgrading from a previous version of DYNIX/ptx V4.4.x and already have the versions indicated installed on your host, they are supported for use with DYNIX/ptx V4.4.10.
Apache Web Server V1.3.12
Micro Focus® COBOL Developer Suite V4.1.0.
Micro Focus Application Server/OSX V4.1.0.
MQSeries™ for DYNIX/ptx V1.1.0.
ptx/Channel Attach V1.0.1.
ptx/XWM Contributed V4.5.3.
The following products have been retired and are no longer available on the "DYNIX/ptx Operating System and Layered Products Software" or "Systems Management Software" CDs.
CommandPoint Monitor
CommandPoint for Tivoli® TME10™
NetWare for Sequent Information Servers (NSIS)
ptx/DNA
ptx/ESBM (replaced by Backup Toolkit and SAMS:Alexandria)
ptx/FTAM
ptx/LAT
ptx/LDAP
ptx/NFTS
ptx/NWS
ptx/OSBM (replaced by Backup Toolkit and SAMS:Alexandria)
ptx/PEP
ptx/VT
ptx/X.400 Base Services
ptx/X.400 Sendmail Gateway
The ptx/AGENT Subagent Development Kit (SDK) is no longer available with ptx/AGENT. You must now contact SNMP Research International, Inc. to obtain a Subagent Development Kit.
The following products are not supported on DYNIX/ptx V4.4.10:
Annex3®, MicroAnnex (a new version will be available later)
NetWare® for Sequent Information Servers (replaced by ptx/NWS)
ptx/9GB_DISK (incorporated into DYNIX/ptx V4.4.x)
ptx/18GB_DISK (incorporated into DYNIX/ptx V4.4.4)
ptx/AUTOCHANGER
ptx/BACKUP (replaced by SAMS:Alexandria and Backup Toolkit)
ptx/CKM
ptx/DLT (replaced by ptx/SPDRIVERS)
ptx/ESBM (replaced by SAMS:Alexandria and Backup Toolkit)
ptx/JUKEBOX
ptx/MEMDISK
ptx/NQS
ptx/OPTICAL
ptx/OSBM (replaced by SAMS:Alexandria and Backup Toolkit)
ptx/SDI
ptx/SLPT (incorporated into DYNIX/ptx V4.4.x)
ptx/VJ
The DYNIX/ptx operating system complies with the following standards:
ISO/IEC 9899:1990 Information technology - Programming Language C.
System V Interface Definition, Third Edition, Volume 1 (base system and kernel extension).
IEEE Standard 1003.1-1990 Portable Operating System Interface for Computer Environments (POSIX™) and the Federal Information Processing Standards Publication (FIPS PUB 151-2) qualifications and extensions to the POSIX specification.
IEEE Standard 1003.2-1992 Portable Operating System Interface (POSIX). Although not yet officially certified, DYNIX/ptx complies with this standard.
X/Open Portability Guide (XPG4).
System V Application Binary Interface Third Edition and ABI+ extensions.
Operating Systems Programming Interface section of the Applications Environment Specification (AES) standard from OSF®.
The following platforms and devices are not supported in DYNIX/ptx V4.4.10:
Symmetry 2000 systems. (SSM-based)
MULTIBUS™-based systems and associated peripherals (Systech® terminal driver (st), parallel printer driver (lp), communications cards)
ELS systems
Systems containing i386™ processors (Model B and Model C processor boards)
Systems containing the Weitek® FPA
zd and vj disks
300 and 600 MB disks
5.25" Pbays and devices are supported as storage devices on Symmetry systems only. They are supported on IBM xSeries 430 and NUMA-Q 2000 systems in the boot Pbay only.
The COFF compatibility development environment has been removed. The COFF libraries present in earlier versions of DYNIX/ptx are no longer available.
COFF binaries will continue to work unless one or more of the following conditions exists:
The binary accesses /dev/kmem.
The binary accesses a kernel component.
The binary relies on the DYNIX/ptx V2.x directory structure.
For information about installing this release of DYNIX/ptx, refer to the DYNIX/ptx V4.4.10 and Layered Products Software Installation Release Notes.
Observe the following restrictions when running DYNIX/ptx V4.4.10. In these restrictions, local refers to devices connected directly to the PCI/SCSI interface on SCI-based systems, or to devices connected directly to the CSM on CSM-based systems.
The root, dump, and primary swap partitions must be located on local (not shared) SCSI disks connected to a bootable Pbay. They cannot be located on disks attached to the Fibre Channel.
If the root and primary swap partitions are under ptx/SVM control, they must be a single complete plex (that is, they can be mirrored, but not striped or concatenated).
If /usr is a separate filesystem, it must be located on a local (not shared) SCSI disk.
The miniroot partition must be located on a local (not shared) SCSI disk. It must occupy the entire partition.
The optional /etc/dumplist file, which lists devices that can be used for a memory dump, should not contain the primary swap partition. If you are using swap partitions as dump devices, you must have enough secondary swap partitions to accommodate an entire crash dump.
When powering up an IBM xSeries 430 or NUMA-Q 2000 system, power on all Pbays before powering on the Fibre Channel Bridges. On power-up, the FC Bridge attempts to spin up all disks; if the Pbay is off, the FC Bridge will not be able to register the disks.
Do not run commands that use large amounts of memory while the system is in the stand-alone kernel (SAK), particularly when it is in restricted mode. The system may hang or panic if an attempt is made to use too many memory resources. For details about running commands from the SAK, refer to the DYNIX/ptx System Administration Guide.
DYNIX/ptx V4.4.10 is primarily a maintenance release with the fixes identified in "Problems Fixed in the V4.4.10 Release." Other changes are as follows.
The release includes a new version of the edb debugger that provides the fixes described in Chapter 2.
The following additional changes have been made to the edb debugger:
A new command, history, displays the command history for the current edb session.
There are two new control options:
The Public Software distributed with previous versions of DYNIX/ptx V4.4.x is no longer provided, and is no longer available from IBM. However, DYNIX/ptx customers who already have Public Software V4.4.x can use this product with DYNIX/ptx V4.4.10. This software is not supported by Customer Support.
This release provides support for the following hardware:
Intel 900 MHz Pentium® III Xeon processor on Centurion Quads (IBM xSeries 430 and NUMA-Q 2000 systems only)
DLT7000E tape drive as an add-on device in STK libraries or as a replacement for a faulty DLT7000 tape drive in an STK library.
IBM FAStT200™ Storage Servers, which are RAID-enabled, native Fibre-Channel disk subsystems.
DYNIX/ptx V4.4.9 also introduces new software versions for Fibre Channel Bridges, FC Host Adapters, and IBM 2109/SilkWorm 2000-family FC Switches for IBM xSeries 430 and NUMA-Q 2000 systems. This updated FC software recipe enables disk subsystem sharing among NUMA hosts running different versions of DYNIX/ptx.
ATTENTION Because DYNIX/ptx V4.4.9 does not include any software changes beyond those supporting the new hardware, installing it may require you to reinstall fastpatches that you have already installed. Included with the DYNIX/ptx V4.4.9 release is the latest "Best Recipe" CD, which includes level 2 fastpatches. You should install this CD on your system as part of the DYNIX/ptx V4.4.9 release set.
If you received a patch after the publication date of the "Best Recipe" CD or if you have a level 0 or 1 fastpatch on your system, you must explicitly reinstall these patches since they are not included on the "Best Recipe" CD.
This release provides support for the following hardware:
DYNIX/ptx V4.4.7 is primarily a maintenance release and contains the fixes described in Chapter 2.
The release also includes support for the Hitachi Data Systems HDS™ 5800 disk array. A new utility, hdsinfo, displays configuration status, FRU status, and parameter settings for the device. See the document Hitachi Disk Arrays on NUMA Systems for more information about this device, including how to interpret the output from hdsinfo.
A new mechanism called eXtended Kernel Virtual Address space (XKVA) is now available to extend the range of KVA addresses possible on IBM xSeries 430 and NUMA-Q 2000 systems configured with wide 64-bit PTEs. This mechanism overlays an additional 3 GB of kernel virtual space on top of the standard 3 GB of user virtual space. The kernel then maps and remaps user virtual space and XKVA as needed.
ATTENTION XKVA was implemented in DYNIX/ptx V4.4.4; however, it has not been previously documented.
By default, this mechanism is enabled for all IBM xSeries 430 and NUMA-Q 2000 systems running with wide 64-bit PTEs, providing support for up to 64 GB of physical memory. However, on systems that do not need the extra kernel virtual space (physical memory is 16 GB or less and the system is not running out of kernel virtual space in primary KVA), this mechanism can reduce system performance, as it requires extra faults and TLB flushing.
If your system does not need the XKVA feature, you can disable it by patching the xkva_alloc_enabled kernel variable:
# /etc/bp /unix xkva_alloc_enabled 0
You must reboot the system for this change to take effect.
The edb debugger is now provided with DYNIX/ptx. (This debugger was previously provided with ptx/C++.) The following changes have been made since the previous release of edb.
New functionality:
The commands cancel, signals, stack, and return can take an optional job argument.
The show command allows types and expressions as arguments.
Support for the C/C++ long long type.
Support for debugging dynamically linked shared objects.
The wait command waits for events on background jobs.
New options:
Enhancements to the edb graphical user interface:
Three new commands have been added to the popup menu of the Source pane. Right click in the Source pane to see these items:
An X resource toggle to display icons has been added:
Edb*iconsPaneToggle.set: true
The edb control buttons can now be personalized:
Edb*viewer_run_button.labelString: Run
Edb*viewer_continue_button.labelString: Continue
Edb*viewer_step_button.labelString: Step
Edb*viewer_step_over_button.labelString: Step over
Edb*viewer_return_button.labelString: Return
Edb*viewer_halt_button.labelString: Halt
Edb*viewer_detach_button.labelString: Detach
Edb*viewer_kill_button.labelString: Kill
Edb*viewer_make_button.labelString: Make
Edb*viewer_edit_button.labelString: Edit
The DYNIX/ptx V4.4 System Recovery and Troubleshooting Guide, which was published December 1999, document version dprtaa09, should be corrected as follows:
Step 7a on page 7-4 in Chapter 7, "Perform Minimal ptx/SVM Recovery on the Miniroot," incorrectly refers to step 5b. These instructions should reference step 6b instead.
Steps 2a and 2b on pages 10-20 and 10-21 in Chapter 10, "Recover Filesystems," are in reverse order. You should execute devctl -A before you execute devctl -N.
DYNIX/ptx V4.4.6 contains the following:
Support for 8 GB of physical memory per quad (0300-Series quads only). The maximum amount of physical memory per node remains at 64 GB. A node containing a quad earlier than the 0300 Series is limited to 4 GB of physical memory per quad.
Support for the Pentium® III Xeon™ processor.
The release also includes the fix described in Chapter 2.
DYNIX/ptx V4.4.5 is primarily a maintenance release and contains the fixes described in Chapter 2. It also includes support for the new NUMA-Q 1000 system. This system has recently been renamed NUMA-Q 2000 with Direct-Connect interconnects.
NUMA-Q 1000 systems operate in the same manner as NUMA-Q 2000 systems; however, the bootbay contains a single local disk that is used for the root filesystem. This disk must be used for doing software upgrades and saving crash dumps.
The NUMA-Q 1000 system uses a special VTOC for the root disk in the bootbay. The VTOC configures the data partitions on the disk as follows:
Partition   Use
0           Root filesystem
1           Primary swap partition
2           Alternate root partition
3           Crash dump partition
4           User data partition
5           User data partition
Partitions 0, 1, 2, and 3 are reserved for the operating system, primary swap, the alternate root, and crash dumps. You can use partitions 4 and 5 for any data you desire.
Partition 9 is a miniroot partition that can be used to build a custom miniroot.
ATTENTION Do not make any changes to the partition layout of this disk. The partitions are sized to provide adequate space for the operating system.
By default, the operating system is installed on partition 0. If your system contains a single bootbay, you will need to upgrade the operating system on the alternate root partition 2. See the DYNIX/ptx V4.4.6 and Layered Products Software Installation Release Notes for more information.
NUMA-Q 1000 systems can use either of the following methods to save crash dumps:
Copy the crash dump directly to a filesystem, which must be located on the dump partition on the root disk (typically sd0d3).
Copy the crash dump to a dump device on the root disk (typically sd0d3) and then run savecore to copy the dump to /usr/crash.
Copying the dump directly to a filesystem is faster. Refer to the DYNIX/ptx V4.4 System Recovery and Troubleshooting Guide for details about configuring the system to save crash dumps.
DYNIX/ptx V4.4.4 is a maintenance release and contains the fixes described in Chapter 2. The release also includes the following implementation changes.
DYNIX/ptx now supports 0300-Series quads, which contain the Intel® Xeon processor. A NUMA-Q system using only 0300-Series quads can include up to 16 quads.
A NUMA-Q system can contain both 0300-Series quads and the earlier quads; however, in this configuration the system is limited to eight quads.
Commands such as /etc/showcfg and /etc/showquads identify the processors in 0300-Series quads as running at 360 MHz.
NUMA-Q systems now support up to 64 GB of physical memory. The maximum memory per quad is 4 GB.
The Application Region Manager provides the ability to partition system resources (currently only CPUs) into multiple regions. Applications can then be assigned to run in a specific region, allowing you to balance your system workload.
The Processor Group Affinity scheduler can be used to further partition the workload within a region. You can create run queues containing certain CPUs from the region, assign processes to those run queues, and assign priorities that determine when each run queue's processes will be executed.
When you create an application region, you specify attributes for it, including its name, the CPUs to be associated with the region, whether the region should be activated now, and whether it should be activated automatically at system boot. Information about regions is stored in the region registry file, /etc/system/region_db.
An application region can be active or inactive. When it is active, the operating system attaches the specified CPUs to the region and processes can be executed there. When the region is inactive, no CPUs are associated with the region and processes cannot be executed there. If desired, regions can be activated automatically at system boot. You can also activate or deactivate regions as needed.
Changes can be made to a region without terminating the processes running there. If you remove a CPU from a region, any processes running on that CPU will be automatically moved to another CPU in the region. If you deactivate or remove a region, its processes will be moved to the active region that you specify.
You can use the ptx/ADMIN Region Management menu and the commands listed in the reference that follows to manage application regions.
For more information about the Application Region Manager, see the DYNIX/ptx System Configuration and Performance Guide.
REFERENCE region(4), rgnctl(1M), rgnstat(1M), rgnassign(1M), rgnrun(1M)
Support for 18-GB disks has been added to DYNIX/ptx V4.4.4. This support was previously provided by the ptx/18GB_DISK layered product. The following VTOCs are available: ibms18w for the IBM® drive or seag118273 for the Seagate® drive.
When the system is booted, the standload program reads the boot strings, including the autoBoot flag, and takes the appropriate action, typically either booting the uptime unix kernel or executing the stand-alone dump program.
In previous releases, if dump was invoked but either failed or could not be started, standload took the system to single-user mode and displayed the SAK shell prompt. To provide more flexibility in the case of a dump failure, a new value has been added to the autoBoot flag to tell standload to continue to boot the unix kernel even if dump fails.
In the V4.4.4 release, the following values can be specified for the autoBoot flag:
NUMA-Q systems can now be configured to copy a memory dump directly to a filesystem. This method can reduce the time needed to recover from a system failure, as it bypasses the savecore program. Also, you do not need to maintain dump devices. The DYNIX/ptx V4.4 System Recovery and Troubleshooting Guide describes how to implement this option.
Locales have been added to support the new EURO currency. The names of the locales match the existing locale names with the extension _EU (for example, fr_EU and es_EU). If your site is located in a country participating in the European Union and you want to use the EURO currency, set your locale to <localename>_EU. To use the local currency, set your locale to <localename>.
This parameter specifies the maximum size, in bytes, of the system buffer cache. The parameter is used in the following manner:
If bufpages has been changed from its default value of zero, the system uses that value to compute the size of the system buffer cache. This size can exceed BUFCACHE_MAX.
If bufpages has not been changed, the system uses the BUFPCT and BUFPAGES_INCR parameters to compute the size of the system buffer cache. It then compares this size with the value of BUFCACHE_MAX. If the computed size is larger than BUFCACHE_MAX, the size of the system buffer cache will be set to BUFCACHE_MAX.
For more information about this parameter, see the DYNIX/ptx System Configuration and Performance Guide.
The scalability of message queue operations on NUMA-Q systems has been greatly improved in this release. As a result of these changes, space for message queue messages and message headers is now allocated separately for each quad. The total space allocated for these resources is the amount specified by the MSGSEG and MSGTQL kernel parameters, multiplied by the number of quads in the system. The space for a given message is allocated from the pools on the quad where the process sending the message is located.
If you have previously increased the values of MSGSEG and MSGTQL, you may find that you can now reduce the values of these parameters without hurting message queue performance. See the DYNIX/ptx System Configuration and Performance Guide for more information about these parameters.
Various kernel components create a component-specific pool of identical data structures used by that component. There are several attributes and statistics associated with each pool. A new command, kmstune, displays or sets these attributes and statistics, enabling you to monitor and tune the kernel memory used by a component's pool of data structures. For more information, refer to the DYNIX/ptx System Configuration and Performance Guide and to the kmstune(1M) man page.
DYNIX/ptx and certain layered products are now shipped with special kernel components that allow the building of a manufacturing (MFG) kernel. This kernel is intended for debugging purposes only; it contains many checks that can cause the kernel to panic in situations in which the standard kernel would continue to operate. The manufacturing kernel should be built and booted only at the direction of your customer service representative.
The compiler now includes a new __IDENT__ predefined macro. During compilation, this macro is replaced by the argument string from the previous #ident directive within the source file. If there is not an #ident directive before __IDENT__, the macro is expanded in the same manner as a __FILE__ macro.
The new -Wofl,-quick option tells the compiler to pass the -Oquick option to the linker instead of the -Oreduce option. The -Oquick option specifies that the linker should use an alternate function ordering algorithm that runs in linear time, rather than the default graph reduction algorithm, which can consume a large amount of processor time for large programs. The -Wofl,-quick option can be used with either -Wofl,-static or -Wofl,-dynamic.
To enable C macros to allow a variable number of arguments, append ... to the name of the final parameter, which is known as the "rest" parameter. During expansion of the macro, the "rest" parameter is replaced with the corresponding argument plus all following arguments, which must be separated by commas.
If the optional part of the argument list is empty, the macro can be defined by inserting ## before the "rest" parameter name in the replacement list. During macro expansion, if there are no arguments corresponding to the "rest" parameter, then the preprocessor token preceding the "##" is not used. For example,
#define WERROR(format, args...) \
    fprintf(stderr, "line %d: " format, __LINE__ , ## args)
specifies that if WERROR is used with a single argument, then the comma after the format should not be included in the macro expansion. If there is more than one argument, then the comma is used.
WERROR("error\n"); WERROR("%s and %s are not defined\n", n1, n2);
expands to:
fprintf ( stderr , "line %d: " "error\n" , 111 ) ;
fprintf ( stderr , "line %d: " "%s and %s are not defined\n" , 112 , n1, n2 ) ;
The new -Oquick option orders functions using a linear algorithm instead of the NP-complete graph reduction algorithm. The performance of the program is similar to that of a program linked with -Oreduce, but the link time will be much shorter for larger programs.
The following options have been added:
The following directives have been added:
The descriptions of the -g and -e options have been modified as follows:
The new upsmon daemon can be used to monitor a UPS attached to a serial port. upsmon monitors the port for a simple ON-BATTERY signal. When this condition occurs, upsmon executes /etc/powerfail powerfail (the /etc/powerfail script is provided with the operating system). upsmon then continues to monitor the serial port. When it sees the OFF-BATTERY condition, it executes /etc/powerfail powerok.
The upsmon program is generic in nature and will monitor any serial port; however, it was written specifically for NUMA-Q systems. For details about running this daemon, see upsmon(1M).
A new option, -r rgnname, lists processes belonging to the specified region. The new -R option includes information about all regions.
The -o option now accepts the format specifier rgn to list the region to which each process belongs. See ps(1) for details about these options.
You can configure top2 to display processes belonging to a specified region. To do this, enter one or more regions names in the region field of the process selection screen. See top2(1).
A new interface, shmgetv(), provides the ability to vectorize shared memory requests on NUMA-Q systems.
REFERENCE shmget(2)
A new routine, gettimeofday_mapped(), returns the current time of day without the overhead of a gettimeofday() system call. It consults a special mapped page in which the time of day is readable by all user processes.
REFERENCE gettimeofday_mapped(2SEQ)
This release includes a new process-to-process attachment facility that can be used with qfork(), qexec(), and attach_proc(). Process-to-process attachment provides an alternative to attaching a process to a quad. When a process is attached to another process, both processes will always be located on the same quad; however, the system may migrate the processes to another quad to balance the load. If one of the attached processes is migrated to another quad, the other process will accompany it.
A new QUAD_ATTACH_TO_PARENT flag is available for qfork() and qexec() to attach a process to its parent process. The R_PID option is used with attach_proc() to attach a process to an arbitrary process.
REFERENCE qfork(2SEQ), qexec(2SEQ), attach_proc(2SEQ)
DYNIX/ptx V4.4.2 is primarily a maintenance release and contains the fixes described in Chapter 2. The release also includes the following implementation changes.
The following enhancements and changes have been made for C2 security:
The makec2 script now disables the perform_keylogin parameter in the site_secp file. This parameter is used for secure RPC and allows the user's login password to be used as the network password. See the DYNIX/ptx System Administration Guide for details.
A new ptx/ADMIN option has been added to convert your system for C2 security. This option creates or updates the /etc/shadow file, enables you to change the default action specified for the drestart_d daemon in the /etc/inittab file, reports outstanding at and cron jobs that were submitted without auditing information, runs the makec2 script, and increases the size of the small password search space. See the DYNIX/ptx System Administration Guide for details.
A new makeallC2 kernel parameter has been added. This parameter makes auditing compulsory and enables the kernel for C2 functionality. The parameter appears in the unix.std file and is enabled on the ptx/ADMIN "Configure a kernel with site specific parameters" form. See the DYNIX/ptx System Configuration and Performance Guide for more information.
The interface for the password encryption and generation algorithms has been changed. The crypt() and mk_pass() routines now include a username argument. For crypt(), the argument specifies the name of the user or group for whom the key will be encrypted. For mk_pass(), the argument specifies the user for whom the password will be generated. For details, see the DYNIX/ptx System Administration Guide and the crypt(3C) and mk_pass(3X) man pages.
The default normal audit-alias mask now includes successful fork, exit, and chdir operations.
A new nssconfupd command has been added. The command can be used to update the name-service-switch configuration file (nsswitch.conf). See nssconfupd(1M) for details.
Additional audit capabilities have been added to certain system calls. See audit(4) for information about the auditing performed for each system call.
The following changes have been made to the disk subsystem:
Full multiport support. Certain disk storage units provide multiple physical connections (ports) to multiple I/O buses and then configure their logical disks to allow access from those ports simultaneously. Multiporting provides redundant access to individual disks, increasing availability. The operating system also takes advantage of multiporting to better distribute its I/O load. For more information, see the DYNIX/ptx System Configuration and Performance Guide.
The dumpconf and diskid utilities have been modified to support multiported devices. The output from these utilities now lists a device multiple times to represent each of its configured ports.
New sd driver based on the mpt driver. Basing sd on mpt rather than scsidisk, as was the case in previous releases, allows sd to support multiported disk devices as well as the common SCSI disks it has previously supported. The scsidisk driver is still present but is now used only for CD-ROM devices and certain devices provided by layered products.
Support for up to eight logical unit numbers (LUNs) per target ID. Previously, only one LUN was supported for each target ID. Multiple LUNs can be used only on disk storage units that support this addressing scheme.
Improved handling of device errors and improved error messages.
Improved handling of non-error reports from devices.
The DYNIX/ptx kernel stores two priorities for a process: the priority the process has when executing user code, and the priority it has when executing kernel code. The priority value is a small integer that is inversely related to scheduling precedence (that is, larger values mean lower priority).
Processes executing kernel code should always have a higher priority than processes executing user code. In releases earlier than V4.4.2, this was not always the case. In V4.4.2, changes have been made to ensure that processes executing kernel code have a higher priority than processes executing in user mode.
In releases before V4.4.2, the following rules were followed to establish the user-level priority for a process:
The starting priority was 50.
The nice value could alter the priority by -40 to +40.
CPU consumption could add from 0 to 63 to the priority.
The priority range for a user process was 10 to 126.
Any processes with elevated (negative) nice values were inadvertently treated as though noage(1M) had been applied.
In V4.4.2, the following rules apply when establishing the user-level priority of a process:
The starting priority is 50 (this is the highest priority available to a user process).
The nice value adds from 0 to 40 to the priority, causing a normal process to start at 70.
CPU consumption can still add from 0 to 63 to the priority.
The priority range for a user process is from 50 to 126.
A process will not have noage applied (that is, CPU utilization is ignored), unless noage is explicitly used.
These changes enable processes executing kernel code to always obtain and release resources as soon as possible, which should give better performance.
The mx and td drivers, the mc command, and the associated man pages have been moved from the base operating system to a new layered product, ptx/SPDRIVERS. This product provides drivers for devices supported on DYNIX/ptx, including those devices previously supported by ptx/DLT. ptx/DLT is now obsolete and must be deinstalled before the update to V4.4.2.
The ptx/SPDRIVERS product provides support for new devices independently of DYNIX/ptx releases. This product should be installed on your system before configuring the kernel.
Because of its potential negative consequences, the RPC callback feature has been disabled in this release.
RPC callbacks enable RPC servers to service client requests while in the middle of a client call to another server (a call has been made to the server, but a reply has not yet been received). While the server is listening for the reply, it can also listen for and service any new client requests.
The use of RPC callbacks can have the following negative consequences:
Servers that are clients of other RPC servers can dump core because of memory mismanagement (typically freeing already freed data).
Clients who sent the first requests may be responded to last. In addition, those clients may timeout if the load is heavy.
We recommend that you not use the RPC callback feature; it has been disabled in V4.4.2 through the new RPC_CBACKS kernel parameter. By default, this parameter is set to 0 (disabled) in the /etc/conf/uts/kernel/i386_space/config file. Should you choose to use RPC callbacks, the parameter can be enabled through the menu system.
All known two-digit year notation problems (also known as year 2000 problems) have been fixed. In some cases, two-digit years were converted to use the full value of the year (currently four digits). In other cases, two-digit years are now interpreted as follows:
70 - 99 = 19xx
00 - 69 = 20xx
An operating-system workaround has been incorporated for the recently discovered Intel Pentium CMPXCHG8B bug, in which an illegal instruction can cause a Pentium processor to "freeze," resulting in a system crash.
DYNIX/ptx and layered products software and other commercial software do not contain this illegal instruction. However, a malicious user could create a program containing the instruction and run the program to crash the system. This crash can be caused by a user with normal access -- root access is not required.
Because a specific invalid instruction is required to cause the crash, the problem occurs only when someone intentionally creates and runs a program to crash the system. The crash is immediate, and all data in volatile memory may be lost. A system running DYNIX/ptx will either hang, in which case it must be reset and rebooted manually, or detect the failing processor, perform a scan dump, and reboot.
The workaround prevents the Pentium processor "freeze" and resulting system crash. To the best of our knowledge, there is no performance impact or compatibility issue associated with the workaround.
This problem impacts only systems containing an Intel Pentium processor; it does not affect systems containing Pentium II, Pentium Pro, i486™, or earlier processors. The affected systems are Symmetry 5000 systems. NUMA-Q systems are not affected.
On systems with QCIC software V3.4.x installed, the SCSI starvation avoidance algorithm, commonly called Ping-Poll, is enabled by default. SCSI command queueing has also been enabled.
The following changes have been made in this release:
The version of /bin/ksh provided with releases of DYNIX/ptx earlier than V4.2 (pre-POSIX.2 standard) is now available as /bin/ksh88. The /bin/ksh88 version can be used for scripts that are not compatible with the newer (POSIX.2-compliant) /bin/ksh. A ksh88(1) man page has also been added.
The setuid bit on the localedef command has been disabled to prevent non-privileged users from creating public locales in /usr/lib/locale. If necessary, the administrator can re-enable the setuid bit to allow users to create public locales.
A new option, -u user, has been added to the crontab command. This option allows you to specify the user name when there are other users with the same numeric UID as the invoking user. (For example, the users root and sysadm both have UID 0.)
DYNIX/ptx V4.4.1 is primarily a maintenance release and contains the fixes described in Chapter 2. The release also includes the following implementation changes:
A new devctl -A option has been added. When the system is booted, this option is run by an rc script to associate shareable devices with their stored names in the naming database. If ptx/CLUSTERS is installed, shareable entries in the naming database will also be synchronized throughout the cluster.
The default entry for altcon in the /etc/inittab file has been disabled. If you are using the altcon console port on a Symmetry system, you will need to enable the entry so that a getty will be spawned on the port. You can do this either through ptx/ADMIN (use System Operations -> Terminal Management -> Alter Terminal Configuration) or by editing the /etc/inittab file manually. To change the file manually, locate the following line and change the value off to respawn.
ac:234:off:/etc/getty altcon 9600
NUMA-Q systems can now use up to 4 GB of physical memory per quad. The maximum physical memory capacity on a 4-quad system is 16 GB. The ENABLE_VLM kernel parameter must be set to 1 on systems with physical memory above 4 GB. This parameter is enabled by default on NUMA-Q systems.
The maximum physical memory on CSM-based systems remains at 3.5 GB.
The following enhancements have been made to support C2-level security. For more information, refer to the DYNIX/ptx System Administration Guide.
Device allocation. This feature can be used to control user access to tape and CD-ROM devices. When device allocation is enabled, the system administrator must use the allocate command to give users access to these devices. The deallocate command, which can be run by either the administrator or the user, removes the access. By default, device allocation is disabled and all users have read access to CD-ROM and tape devices.
Locking of login accounts after a specific number of consecutive unsuccessful login attempts. This feature is disabled by default.
A makec2 script that enables certain C2 security features.
The ability to require machine-generated passwords. The administrator can require that all passwords be generated by the system. This feature is controlled by the passgenreq script and is disabled by default.
Replaceable password encryption and generation algorithms.
DYNIX/ptx V4.4 supports both automatic and user-requested process migration for NUMA-Q systems. Both forms of process migration involve moving a process' text, data, stack, page tables, and other related memory structures from one quad's memory to another. Process migration is transparent to the process being migrated; the only effect on the process is a delay during the migration. The time required to migrate a process is similar to the time required to do a qexec() to a remote quad.
The goal of automatic process migration is to correct load imbalances among the quads of a system. Load imbalances occur dynamically as a result of processes waiting for I/O, continuing after such waits, or terminating. Using a 2-quad system as an example, we might at some moment find that eight processes are running, six of which reside on quad 0 and two on quad 1.
Rather than remain idle, the two "extra" processors on quad 1 will execute the two "excess" processes on quad 0. Without process migration, they must access those processes' text and data remotely, resulting in reduced performance for those processes. As long as the imbalance persists, the remotely executed processes on quad 0 will perform significantly more poorly than the processes executed locally.
The automatic process migration facility can identify such imbalances and migrate the excess processes from an overloaded quad to an underloaded quad, allowing those processes to be executed locally rather than remotely. To avoid constantly "thrashing" processes between quads, processes are migrated between quads only when the imbalance is large enough and persists long enough to make the migration worthwhile.
The automatic process migration facility determines which quads have excess processes and which have process shortages, identifies suitable processes to migrate, and transparently migrates those processes to their new home quads.
Users can explicitly request that processes be migrated by using either the attach_proc command or the attach_proc() system call. These commands "attach" a process to a quad or set of quads; if the process is not already located on the specified quad, it will be migrated to that quad. If more than one quad is specified, the process is migrated to the "best" quad in that set, taking process load and memory availability into consideration. Also, the process is marked so that automatic process migration will not transparently move the process from that set of quads.
Later, if it is no longer necessary for the process to be attached to a quad, it can be detached using either the detach_proc command or the detach_proc() system call. These commands "unmark" the process, making it eligible for automatic migration.
To determine one or more suitable quads for a process, use either the quad_loc command or the quad_loc() system call. These commands identify suitable quads according to the specific processes already residing on them, the shared memory segments located on them, the devices attached to them, and so on.
REFERENCE attach_proc(1M), detach_proc(1M), quad_loc(1M), attach_proc(2SEQ), detach_proc(2SEQ), quad_loc(2SEQ)
DYNIX/ptx now supports multipath devices, which can be accessed through more than one physical path. Both NUMA-Q and Symmetry systems can be configured to provide multiple paths to an I/O device.
Multipath access is provided in two ways:
Configuring two controllers on the same bus. For example, on a Symmetry system, two QCIC controllers can be attached to the same SCSI bus. On NUMA-Q systems, two PCI FC host adapters can be attached to the same fabric device (the FC Switch or FC-AL Hub).
Multiported devices, where multiple connections can be made to a single device.
Using multipath devices increases both system availability and performance. If a path to a device becomes unavailable because of hardware failure or other system problems, the operating system will use another path to access the device.
On Symmetry systems, the use of multipath devices can improve performance by allowing the system to balance its workload over multiple paths. On NUMA-Q systems, multiple paths enable the operating system to avoid remote disk accesses and thereby speed up processing.
DYNIX/ptx now supports both local and shared devices. Devices connected directly to the PCI/SCSI interface (NUMA-Q systems) or the CSM (CSM-based systems) are considered to be local devices. All other devices are shareable.
The dumpconf output specifies whether devices are local or shareable. The boot disk must be a local device, as must all devices used for primary swap, firmware (CSM-based systems), and dump space.
The 9-GB disk drive is supported in V4.4. The default VTOC for this disk is located in /etc/vtoc/ibms9w.
Device names are stored in a device naming database instead of being specified in the system configuration file. When the system is booted, the autoconfiguration procedure locates each device on the system. If a device is listed in the naming database, the autoconfiguration procedure assigns the appropriate name to the device. If a device is not included in the naming database, the autoconfiguration procedure will assign a temporary name to the device. By default, when the system transitions to multiuser mode, it will assign permanent names to devices having temporary names.
The device naming database provides much more flexibility than the device configuration method used in previous releases. Because device names are no longer determined by the physical location of the device, a device can be moved from one location to another and still retain its original name. Also, administrators can assign their own device names.
The devctl command is used to assign device names and to configure and deconfigure devices. The dumpconf command displays information about the system configuration. For more information about the device naming database and the devctl and dumpconf commands, refer to the DYNIX/ptx System Configuration and Performance Guide. If you are currently running an earlier version of DYNIX/ptx, be sure to review this information. Major changes have been made to the naming database and to the devctl and dumpconf commands.
For normal boots, only the root device must now be specified in the bootstring. A new boot-time configuration file, /etc/system/boot, contains the location of the primary swap partition and the naming database, making it unnecessary to specify these items in the bootstring.
The old stand-alone notation is no longer used to specify the root device. This notation has been replaced by a physical path specifier, which indicates the physical hardware path to the device.
Several new boot options are also available for other booting scenarios.
The DYNIX/ptx System Administration Guide describes system booting in detail. The DYNIX/ptx and Layered Products Software Installation Release Notes also include a summary of the booting procedure. Also refer to the boot(4), physpath(8), and unix(8) man pages.
A new kernel parameter, ENABLE_VLM, has been added to support NUMA-Q systems with very large memories in which physical memory can be located above 4 GB. This parameter does not apply to Symmetry systems.
By default, this parameter is set to 1 to enable large memory support. If your system has physical memory above 4 GB, do not set this parameter to zero.
When ENABLE_VLM is set to 1, the system uses 64-bit wide page-table entries to address system memory, doubling the size of all page tables. If your NUMA-Q system does not have memory above 4 GB, you can set the parameter to zero to save memory. (Use the ptx/ADMIN® Kernel Configuration option to reset the parameter.)
However, if you add memory above 4 GB, you must restore the ENABLE_VLM parameter to its default value of 1, and then rebuild and install the kernel before rebooting the system. If the parameter is set to 0 on a system with memory above 4 GB, the system will panic when you attempt to boot it because the page-table support needed to address the memory has been disabled.
The NMOUNT parameter specifies the maximum number of mounted filesystems that can exist at any one time. The default value for this parameter has been increased to 128. Previously, the default value was 32. If you have modified this parameter in your site file, you may want to modify it again.
The range of acceptable values for the BUFPCT parameter is now 1 to 95. Previously, it was 5 to 95.
The default value for the NABUF parameter is now 256. Previously, it was 200. Two related parameters have also been added:
DYNIX/ptx V4.4.x supports the following fabric protocols:
Previously, device names consisted of the device type followed by a number, such as sd0. In V4.4, you can assign your own device names. A name must consist of a string of alphabetic characters followed by a string of numeric characters and can contain a maximum of 15 characters.
Device nodes exist permanently in /dev. When you add a new device or rename an existing device with the devctl command, it automatically creates or revises the appropriate device nodes. Also, when you use devctl to unname a device or remove device entries from the naming database, the command will automatically remove the appropriate device nodes.
A new set of files, /etc/devinfo/*.info, define how device nodes are to be created for each supported device type.
The /dev/MAKEDEV command is still used to create device nodes for pseudodevices.
The following partition types have been added:
DYNIX/ptx provides two default VTOCs for each supported disk type. The names of the VTOCs have the form <disk_type> and <disk_type>.scan. The .scan version of the VTOC is intended to be used on the root disk only. It includes a SCAN dump partition and a custom miniroot partition. The other version of the VTOC is intended for general use.
A custom miniroot partition (partition 9) has been added to the root VTOCs in this release. This partition should be used when creating a custom miniroot and during system recovery operations that require loading and booting the custom miniroot. A partition 8 for ptx/SVM use has also been added.
The general-use VTOCs have been modified to include a partition 8 for ptx/SVM use.
Also, most default VTOCs have been modified to increase the size of partitions 0, 1, and 2. Partition 3 has been removed.
Two types of core dumps can be saved on NUMA-Q systems: a dump of only CSYS memory pages, or a dump of all memory pages. The default is to dump only CSYS memory pages. This type of dump is faster and requires less disk space. The DYNIX/ptx V4.4 System Recovery and Troubleshooting Guide describes how to set up the system to save core dumps.
The format of the /etc/dumplist file has been changed in V4.4.
Regular dumplist entries now have this form:
dump_device offset [size]
Per-quad dumplist entries (NUMA-Q systems only) now have this form:
quad dump_device offset [size]
On NUMA-Q systems, the dump_device must be a UNIX® filename such as /dev/rdsk/sd5s1. On CSM-based systems, it must be a physical path specifier such as slic(22,1)scsi(5,0)disk(0). The offset and size have the same meaning as in previous releases of DYNIX/ptx and are described in the dump(8) man page.
If the entries in the file are not correct, you can either update them manually or re-enter the dump devices on the ptx/ADMIN "Modify the Kernel and Dumper Bootstrings" form.
Previously, the usrlimit password had to be set in the /etc/passwd and /etc/shadow files to allow more than one user to log into the system. Systems are now shipped with the usrlimit password set to allow unlimited logins. The Change usrlimit Password menu option has been disabled.
For POSIX compliance, the requirements for login names have been modified. Login names can now consist of a string of alphanumeric, period, underscore, and hyphen characters. The first character in a login name cannot be a hyphen.
Previously, login names were restricted to lowercase alphanumeric characters; the first character could not be numeric.
As of the V4.3 release, /usr is a directory in the root filesystem, rather than a separate filesystem. When you perform a scratch installation, root and /usr will be installed in the same partition.
The following security parameters have been added to the /etc/site_secp file.
failed_auth_limit
Specifies the maximum number of consecutive failed login attempts that can be made before a user's account is locked. This parameter is disabled by default.
c2system
Specifies whether certain C2-level security features are enabled. This parameter is set to "off" by default.
perform_keylogin
Specifies whether the login password will be used as the keylogin password, which is needed to access secure RPC services. The default value is no.
The first two parameters can be set on the ptx/ADMIN View and Change Site Security Parameters form. The perform_keylogin parameter must be set manually in the /etc/site_secp file. For more information, see the DYNIX/ptx System Administration Guide.
To increase system security, the permissions have been changed on the following files and directories.
File                               Old Mode   New Mode
/usr/spool/cron/crontabs           0755       0700
/usr/spool/cron/atjobs             0755       0700
/usr/spool/cron/ataudit            0755       0700
/usr/spool/cron/cronaudit          0755       0700
/usr/spool/cron/crontabs/adm       0644       0600
/usr/spool/cron/crontabs/lp        0644       0600
/usr/spool/cron/crontabs/root      0644       0600
/usr/spool/cron/crontabs/sys       0644       0600
/usr/spool/cron/crontabs/sysadm    0644       0600
/usr/spool/cron/cronaudit/adm      0644       0600
/usr/spool/cron/cronaudit/lp       0644       0600
/usr/spool/cron/cronaudit/root     0644       0600
/usr/spool/cron/cronaudit/sys      0644       0600
/usr/spool/cron/cronaudit/sysadm   0644       0600
/usr/lib/cron                      0755       0700
/usr/lib/cron/.proto               0744       0700
/usr/lib/cron/at.allow             0644       0600
/usr/lib/cron/at.deny              0644       0600
/usr/lib/cron/cron.allow           0644       0600
/usr/lib/cron/cron.deny            0644       0600
/usr/lib/cron/queuedefs            0644       0600
/usr/lib/cron/logchecker           0544       0500
The following changes have been made to the ptx/C compiler in the V4.4 release:
The compiler now handles the -g and -O options differently. Previously, it ignored an optimization option when -g was specified. The compiler now generates the same code that it would generate without -g and provides a limited amount of debugging information. For more information, see cc(1). For caveats about using -g with optimization, see debug(1).
Two new pragmas, sequent_assert_field_alignment and sequent_assert_field_offset, have been added.
The sequent_assert_field_alignment pragma verifies that the byte alignment of the following structure member or bit field is align.
The sequent_assert_field_offset pragma verifies that the byte offset of the following structure member or bit field is byteoff, or that the bit offset is byteoff*8 + bitoff.
For more information about these pragmas, see cc(1).
The base operating system and layered products consult several databases for information about hosts, users, groups, and so forth. There can be multiple sources for the information in a particular database. For example, host names and addresses can be found in /etc/hosts, or the NIS or NIS+ databases. A new configuration file, /etc/nsswitch.conf, specifies the sources for each database and the order in which the sources should be consulted. The secure RPC commands and functions have been modified to use this file if it exists. For information about setting up this file, which is provided with ptx/NFS, refer to nsswitch.conf(4).
The following commands have been added in V4.4:
The SI86MEM option to the sysi86() system call is now obsolete. In previous releases, this command returned the total amount of memory in the system, specified in bytes. Because the return value for sysi86() is a 32-bit field, it is not possible to return the full amount of memory for systems having 4 GB or more of memory. Therefore, the SI86MEM command has been removed in V4.4.
Existing binaries using this interface will continue to be supported. For these binaries, if the system has 4 GB or more of memory, the return value will be truncated to specify that there are (4 GB - 1) bytes of memory.
The sysi86(2) man page describes alternatives that can be used in place of SI86MEM. If you have a program that used SI86MEM, you can change the source code to use an alternate interface for retrieving the amount of system memory and then rebuild the program.
This section describes changes that were made in the DYNIX/ptx V4.3.0 and V4.3.1 releases. If you are updating from DYNIX/ptx V4.1 or V4.2, be sure to review this information.
DYNIX/ptx V4.3 supports NUMA-Q systems, which feature distributed shared memory (NUMA architecture), and PCI peripheral devices connected through both SCSI and Fibre Channel host adapters.
The previous installation procedure has been replaced by a new layered product, ptx/INSTALL, that provides a versatile method for installing DYNIX/ptx software packages.
ptx/INSTALL and the installation procedure are described in the DYNIX/ptx and Layered Products Software Installation Release Notes. The ptx/INSTALL Software Installation Guide contains supplementary information about the installation procedure.
The V4.3 release provides a new buildmini utility that can be used to create a custom miniroot. A custom miniroot contains a minimal set of functionality from the base operating system and the layered products that were on the system when the miniroot was built. The custom miniroot can be used to restore the operating system following a system failure. For information about creating a custom miniroot, refer to the DYNIX/ptx System Recovery and Troubleshooting Guide.
The maximum number of partitions that can be created on a disk is now 62. The partitions are numbered starting from 0.
Most default VTOCs have been modified to increase the size of partitions 0, 1, and 2. Partition 3 has been removed.
The V4.3 release provides the following pass-through drivers:
The diskonline, diskoffline, tapeonline, and tapeoffline commands have been replaced by the new devctl command. For details about devctl, see the DYNIX/ptx System Configuration and Performance Guide and the devctl(1M) man page.
Disks on NUMA-Q systems can be formatted only with the online format utility. The stand-alone CCSformat utility is available only for CSM-based systems.
A new directory, /etc/conf/uts/symmetry/sci, contains the kernel files for NUMA-Q systems. This directory includes a system-configuration file (unix_sci.std) and a modules file (unix_sci.mod) that specify modules specific to NUMA-Q systems.
The following tunable parameters have been added:
Changes have been made to the following parameters. If you have modified these parameters in your site file, you may want to modify them again.
The maxRSS virtual-memory parameter, which specifies the maximum resident-set size, can now be modified on a per-process basis. (Previously, maxRSS could be modified, but the effect was system-wide.) A new command, /etc/maxrss, allows a user with the appropriate privileges to reduce the value of maxRSS for a specific command. The maxrss command has this syntax:
/etc/maxrss limit command [arguments]
maxrss executes the specified command with a maximum resident-set size of limit KB.
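For example, the following invocation (bigjob is a hypothetical program) runs the command with a resident-set limit of 4096 KB:

    # /etc/maxrss 4096 /usr/local/bin/bigjob -v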
For more information, see maxrss(1M) and proc_ctl(2).
DYNIX/ptx supports a new pageable, memory-based filesystem type called MFS.
MFS filesystem images are held in virtual memory associated with a process created by the mount operation. The contents of the filesystem are lost when the filesystem is unmounted or the system is shut down. To lock an MFS mount process into physical memory, use the noswap command.
MFS filesystems are constructed and mounted using the mount -f mfs option. Additional options can be specified with -o (the options must be separated by commas):
An MFS device number must be associated with each MFS filesystem. To let the system choose a device number, use /dev/dsk/mfs as the device to be mounted. To specify a device number manually, use /dev/dsk/mfs[n], where n is the device number. For example, /dev/dsk/mfs3 is device number 3. While the filesystem is mounted, filesystem utilities such as dump and ff can access the raw filesystem through /dev/dsk/mfs[n] and /dev/rdsk/mfs[n].
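For instance, with device number 3 in use, a command such as the following (illustrative) lists the files in the MFS filesystem through its raw device:

    # ff /dev/rdsk/mfs3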
MFS filesystems can be included in the /etc/vfstab file or mounted manually. Following are some sample vfstab entries.
/dev/dsk/mfs   /dev/rdsk/mfs   /mfs3   mfs   -   yes   -s=60000,-p=6
/dev/dsk/mfs4  /dev/rdsk/mfs4  /mnt    mfs   -   yes   -s=60000,-p=6
To mount an MFS filesystem manually, use the mount command and specify -f mfs and the appropriate -o options. Following are some examples.
# mount -o -s=10000,-i=8192,-p=1 -f mfs /dev/dsk/mfs1 /mnt
# mount -o -s=40000,-p=8 -f mfs /dev/dsk/mfs /mnt2
For more information, see mount_mfs(1M).
Version 1.09 of the Rock Ridge Interchange Protocol (RRIP) has been implemented in V4.3. This protocol is an extension to the ISO 9660 format for CD-ROM. It provides CD-ROM support for POSIX filesystem semantics including UNIX-style filenames, file types, file permissions, and directory hierarchies. Sparse files are not supported.
The protocol defines a set of System Use Fields for recording the following:
uid, gid, permissions
file mode bits, file types, setuid, setgid, sticky bit
file links
device nodes
POSIX filenames
reconstruction of deep directories
time stamps
The cdsuf command can be used to access the System Use Fields. See cdsuf(1M) for more information.
In the previous release, the /etc/constab file was used to store a list of constructed devices. The constab entries are now maintained in shared memory and are no longer written to /etc/constab. The devbuild command (with no options) and the library function read_constab can be used to list the constab entries. See read_constab(3SEQ).
The Install Software Package option now invokes the ptx/INSTALL installation procedure. For details about this procedure, see the DYNIX/ptx and Layered Products Software Installation Release Notes. The Preview Software Package and Preload Software Package options have been removed.
The "Modify the Kernel and Dumper boot strings" form contains a new "per-quad" memory dump option that creates a separate dumplist for each quad. It also contains new options for specifying the permanent bootstring. For details, see the DYNIX/ptx System Administration Guide and the DYNIX/ptx V4.4 System Recovery and Troubleshooting Guide.
The Printer Management menu contains a new option, Display Printer Definition, that lists the configuration of the printers you specify. It uses the lpstat -p and -l options to obtain the information.
The following stand-alone commands have been added to support NUMA-Q systems:
The following commands have been added in the V4.3 release:
The following commands have been modified in V4.3. For details about these commands, refer to the appropriate man page.
The following pragmas have been added in V4.3 (a usage sketch follows the list):

sequent_alignment and sequent_alignment_end
Prevent unnecessary cache misses in multi-stream programs by forcing regions of data into separate cache lines.
sequent_init
Invoke the specified functions before the main program is executed.
sequent_fini
Invoke the specified functions after the main program has returned or when exit() is called.
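As a sketch of how these pragmas might appear in source, with the caveat that the #pragma spellings and argument forms shown are assumptions (see cc(1) for the authoritative syntax):

    /* Assumed forms throughout; consult cc(1) before relying on them. */
    void setup(void);
    void teardown(void);

    #pragma sequent_init(setup)      /* assumed: run setup() before main() */
    #pragma sequent_fini(teardown)   /* assumed: run teardown() after main() returns */

    #pragma sequent_alignment        /* assumed: begin a separately cached region */
    int stream1_counter;             /* data touched only by one stream */
    #pragma sequent_alignment_end

    void setup(void)    { stream1_counter = 0; }
    void teardown(void) { /* release resources here */ }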
debug can now read and execute the debugger commands specified in a startup file whenever a new process is created, grabbed, or executed. The debugger does not echo the commands before executing them. To use the startup file, set the %procrc variable to the name of the file. You can specify either a relative or full pathname. The following example sets the variable to the file newproc.cmd in the current directory.
set %procrc "./newproc.cmd"
The output from -o now includes segment numbers when the program header in a core file is dumped. These segment numbers can be used with the new -S option.
The following system calls have been added in the V4.3 release:
The following system calls have been modified in V4.3. For details, refer to the appropriate man page.
The libxio library is obsolete and has been removed from the V4.3 release.
The following library routines have been added in the V4.3 release:
The following library routines have been modified in V4.3. For details, refer to the appropriate man page.
The following manuals are available on the HTML documentation CD distributed with DYNIX/ptx V4.4.10 and can be obtained in hardcopy:
DYNIX/ptx System Administration Guide
ptx/ADMIN Quick Reference Card
DYNIX/ptx System Configuration and Performance Guide
DYNIX/ptx Printer Management Guide
ptx/INSTALL Software Installation Guide
DYNIX/ptx User's Guide
The following manual is available at http://webdocs.numaq.ibm.com/. Note that an older version of this document, which should be disregarded, is distributed on the HTML documentation CD. IBM no longer provides printed copies of this document; to obtain hardcopy, print the desired information directly from your web browser.
DYNIX/ptx V4.4 System Recovery and Troubleshooting Guide
The following manuals are only available on the HTML documentation CD distributed with DYNIX/ptx V4.4.10:
DYNIX/ptx Error Messages
ptx/C User's Manual
edb User's Guide
debug User's Guide
DYNIX/ptx V2.x to V4.x Porting Guide
DYNIX/ptx Programming Tools Guide
Assembly Language User's Manual
Link Editor (ld) Technical Reference
DYNIX/ptx STREAMS Programming Guide
DYNIX/ptx Network Programming Guide
DYNIX/ptx RPC Programming Guide
Extended Terminal Interface (ETI) Programming Guide
DYNIX/ptx Implementation Differences
POSIX Conformance Specification
BCS Conformance Specification
X/Open Conformance Specification
ptx/ADMIN Development Guide
DYNIX/ptx FACE User's Guide
DYNIX/ptx FMLI Programming Guide
The DYNIX/ptx Native Language Support Programming Guide has been discontinued. For information about programming for internationalization and locales, refer to the POSIX and X/Open documentation.