Release Notes for

IBM Informix Dynamic Server 2000

Date: 11/14/2001

Version: 9.21


Table of Contents

I. Legal Notice

II. New Features in Version 9.21 of IBM Informix Dynamic Server 2000
A. Features from Version 7.31
B. Database Server Features
C. Extensibility Features
D. Java Features
E. New Network Protocols: ontliimc and onsocimc
F. Documentation Notes and Release Notes in HTML Format


III. Major Features in Version 9.20 of IBM Informix Dynamic Server 2000
A. Extensibility Enhancements
B. Performance Improvements


IV. Special Features in Version 9.20 and Version 9.21
A. Version 9.20 Features from IBM Informix Dynamic Server 7.30
B. Version 9.20 Features from IBM Informix Dynamic Server 9.14


V. Caveats
A. Feature Restrictions
B. Installation and Migration
C. SQL
D. SPL
E. User-Defined Routines
F. Smart Large Objects
G. Database Server Administration
H. Informix Storage Manager
I. Backup and Restore
J. Enterprise Replication
K. Indexes
L. Client Applications
M. Other


VI. IBM Informix Database Server Products
A. Installation Change
B. Compatibility with DataBlade Products
C. CREATE SYNONYMS from 7.24/7.3 to 9.21 Problem
D. New SQL Reserved Words
E. Year-2000 Compliance
F. Migration to IBM Informix Dynamic Server 2000, Version 9.21
G. Limits in IBM Informix Dynamic Server 2000


VII. J/Foundation
A. J/Foundation for UNIX Systems
B. J/Foundation for Windows NT Systems


VIII. Security Alert

IX. Future Discontinuation of Feature Support

X. Caveats for 9.21 Early Release (Beta) Testing
A. Java UDRs and Transaction Processing
B. General Notes and Information about Java
C. Other Notes
D. Bug Reports


I. Legal Notice

**************************************************************************

(c) 2001, IBM Corporation.

PROPRIETARY DATA

THIS DOCUMENT CONTAINS TRADE SECRET DATA WHICH IS THE PROPERTY OF IBM CORPORATION. THIS DOCUMENT IS SUBMITTED TO RECIPIENT IN CONFIDENCE. INFORMATION CONTAINED HEREIN MAY NOT BE USED, COPIED OR DISCLOSED IN WHOLE OR IN PART EXCEPT AS PERMITTED BY WRITTEN AGREEMENT SIGNED BY AN OFFICER OF IBM SOFTWARE, INC. THIS MATERIAL IS ALSO COPYRIGHTED AS AN UNPUBLISHED WORK UNDER SECTIONS 104 AND 408 OF TITLE 17 OF THE UNITED STATES CODE. UNAUTHORIZED USE, COPYING OR OTHER REPRODUCTION IS PROHIBITED BY LAW.

THIS PRODUCT INCLUDES CRYPTOGRAPHIC SOFTWARE WRITTEN BY ERIC YOUNG (eay@mincom.oz.au). IT IS PROVIDED BY ERIC YOUNG "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Description: release notes file (without fixed and known bugs) for database server product.

Owner group: Technical Publications

*****************************************************************************
IMPORTANT: The name of the database server has been changed from "Informix Dynamic Server 2000" to "IBM Informix Dynamic Server 2000."


II. New Features in Version 9.21 of IBM Informix Dynamic Server 2000

IBM Informix Dynamic Server 2000, Version 9.21, contains new features in the following areas:

A. Features from Version 7.31

All Version 7.31 features are supported in Version 9.21.

B. Database Server Features

The following section provides an overview of new database server features.

1. onpladm Utility

Version 9.21 of the High-Performance Loader (HPL) includes the command-line utility onpladm. You can use the onpladm utility to create, modify, describe, list, run, configure, and delete jobs from the command line. For detailed information about the onpladm utility, refer to the online documentation available in HTML format in $INFORMIXDIR/release/en_us/0333/onpladm/index.HTML.

2. SQL Statement Cache

The database server uses the SQL statement cache to store SQL statements that a user executes. When other users execute a statement stored in the SQL statement cache, the database server does not parse and optimize the statement again, which improves performance. The SQL statement cache has been enhanced to support the following items:

For more information on the new features in the SQL statement cache, see the documentation notes for the Performance Guide, Administrator's Guide, and the Administrator's Reference.

3. 9.x DB-Access to 7.x Synonyms

Previously, you could use DB-Access to access synonym names only if the remote database server was Version 9.x. You can now access synonym names on remote Version 7.x database servers.

C. Extensibility Features

The following section provides an overview of new extensibility features. For more information, see the documentation notes for the DataBlade API Programmer's Manual.

1. C++ Support

You can now write user-defined routines in C++ with fewer restrictions. You can use the virtual inheritance feature of C++ without restriction.

You must still follow coding guidelines for C, including those for the use of system calls and memory allocation.

2. Controlling the Virtual Processor Environment

The DataBlade API now provides functions that can perform the following tasks:

3. The mi_fp_funcname() Function

The new mi_fp_funcname() function allows a C user-defined routine to get the SQL name of a function.

4. ON-Bar Suite Feature

The onbar command line has been cleaned up. The following example shows the new syntax for a log backup:

The old syntax (onbar -l) still works.

D. Java Features

The following section provides an overview of new J/Foundation features. For more information, see the documentation notes for Creating UDRs in Java.

1. JVM 1.2 Support

IBM Informix Internet Foundation 2000, Version 9.21, supports Java 2 and includes a tested version of the Java Runtime Environment (JRE). The database server supports the 1.2 classic version of the Java Virtual Machine (JVM).

2. Default Values of Java Configuration Parameters

The default values of the JDKVERSION, JVPJAVAHOME, JVPJAVALIB, and JVPJAVAVM parameters in the ONCONFIG file have changed for IBM Informix Internet Foundation 2000, Version 9.21.

3. GLS Support for J/Foundation

J/Foundation supports the following GLS features:

4. The update_jars.sql Script

IBM Informix provides the update_jars.sql script to update the three-part names of installed Java archive (jar) files when you rename the database to which the jar files belong. You need to run the script only if you rename a database that has one or more installed jar files; run it in that database after the rename.

5. Runtime Environment Variables

J/Foundation supports the JVM_MAX_HEAP_SIZE, JAR_TEMP_PATH, JAVA_COMPILER, and AFDEBUG environment variables. For more information, see the documentation notes for Creating UDRs in Java and the Informix Guide to SQL: Reference.

6. Dynamically Drop JVPs

The database server dynamically drops JVP virtual-processor classes.

7. Variable-Length UDT Send/Receive

Version 9.21 provides full J/Foundation support for:

You can now write user-defined routines and DataBlade modules in Java. For more information, see the documentation notes for Creating UDRs in Java and Extending Informix Dynamic Server 2000.

E. New Network Protocols: ontliimc and onsocimc

Version 9.21 supports MaxConnect with two new network protocols: ontliimc and onsocimc. This benefits the Informix MaxConnect product, which is separately orderable. Informix MaxConnect enables IBM Informix Dynamic Server 2000 to support greatly increased numbers of client connections. MaxConnect is a new software tier, introduced between the database server and clients, that transparently funnels multiple client connections onto a smaller number of server connections. The database server is freed from managing thousands of client connections, which results in improved response time and decreased CPU cost for the database server.

F. Documentation Notes and Release Notes in HTML Format

The documentation notes and release notes for this database server release are provided in HTML format for improved readability. A table of contents includes links to all the documentation notes and release notes included in this release. The table of contents file is $INFORMIXDIR/release/en_us/0333/DOCSUNIXTOC.HTML. The Release Notes Addendum, which lists the PTS defects that have been fixed in Version 9.2x, remains in ASCII format.


III. Major Features in Version 9.20 of IBM Informix Dynamic Server 2000

The previous release of IBM Informix Dynamic Server 2000, Version 9.20, contained new features since the 9.14 release in the following areas:

IBM Informix Dynamic Server 2000, Version 9.20, also contains features from Version 7.30 of IBM Informix Dynamic Server.

A. Extensibility Enhancements

Version 9.20 of IBM Informix Dynamic Server 2000 has the following extensibility enhancements:

1. Enhancements to the database server: dynamic lock allocation

2. General enhancements to SQL:

3. Enhancements to smart large objects:

4. Enhancements to collections:

5. Enhancements to row types:

6. Enhancements to user-defined routines (UDRs):

7. Extensions to the DataBlade API:

8. Extensions to the ON-Bar Suite:

9. The oncheck and onlog utilities for R-tree indexes

10. Enhancements to R-tree indexes:

B. Performance Improvements

Version 9.20 of IBM Informix Dynamic Server 2000 provides the following performance improvements:


IV. Special Features in Version 9.20 and Version 9.21

Version 9.20 and Version 9.21 of IBM Informix Dynamic Server 2000 provide the following special features:

A. Version 9.20 Features from IBM Informix Dynamic Server 7.30

Version 9.20 of IBM Informix Dynamic Server 2000 also has features first released in Version 7.30. These features fall into the following areas:

B. Version 9.20 Features from IBM Informix Dynamic Server 9.14

The feature set of Version 9.20 of IBM Informix Dynamic Server 2000 is a logical superset of the Version 9.14 features and includes features that were first released in that version, and later extended to Version 9.20. These Version 9.14 features fall into the following categories:


V. Caveats

The following sections describe issues and restrictions that can affect various features of Version 9.21.

A. Feature Restrictions

Version 9.21 of IBM Informix Dynamic Server 2000 does not support the following features, among others:

A future release of the database server will not support the following features:

The following restrictions apply to the High-Performance Loader:

Additional feature restrictions are as follows:

B. Installation and Migration

1. Client-server finderr Compatibility

Because the finderr filename is the same for both the 9.21 database server and the 2.2 client products, install the database server (with a later release date) after you install the client.

2. System Catalog and sysmaster Changes

The system catalog tables and the sysmaster database for IBM Informix Dynamic Server 2000, Version 9.21, are different from those for IBM Informix database servers earlier than Version 9.20. Some column widths, data types, and treatment of null values have changed. Also, columns have been added to some tables, and some tables have been added or deleted. For details, see the documentation notes for the Informix Migration Guide.

3. Difference in sysindexes for 7.x and 9.21

In 7.x releases, sysindexes is a table. In 9.20 and 9.21, sysindexes is a view.

You can verify this by checking the table type recorded for sysindexes in the systables system catalog table.

In 7.x, the query returns 'T' for a table. In 9.21, the same query returns 'V' for a view. The difference follows from expected and documented behavior: because sysindexes is now a view, it is subject to the documented restrictions on views. For example, an ALTER TABLE statement on sysindexes fails, because ALTER TABLE is not allowed on views.
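The catalog check described above can be sketched as the following query; this is an illustrative sketch, not reproduced from the original notes:

```sql
-- Look up the table type that systables records for sysindexes:
SELECT tabtype FROM systables WHERE tabname = 'sysindexes';
-- In 7.x this returns 'T' (table); in 9.20/9.21 it returns 'V' (view).
```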

4. Detached Indexes

By default, all new indexes that the CREATE INDEX statement creates in IBM Informix Dynamic Server 2000 are detached. IBM Informix Dynamic Server 2000, Version 9.21, supports attached indexes that were created in any 7.x release.

To get the 7.x attached-index behavior, you can set the DEFAULT_ATTACH environment variable in the application environment. You can attach only nonfragmented B-tree indexes on nonfragmented tables (the 7.x behavior). All other indexes, including extensibility-related indexes such as R-tree and UDT indexes, must be detached. A future release of the database server might not support the DEFAULT_ATTACH environment variable.

5. Migration for SAP Customers

Because of the increased space needed in the root chunk to support long identifiers in Version 9.21 of IBM Informix Dynamic Server 2000, it is possible to fill the root chunk in the rootdbs. One reason for this is that certain tables in the sysmaster database will grow extremely large when a large number of tables is created in the system.

If the root chunk becomes full, it will no longer be possible to add additional chunks to the system, even though disk space is available for the chunk itself. This is because metadata about the chunk must be stored in the root chunk of the rootdbs.

If the chunk creation fails for this reason, the following message will be reported:



If you see this message when creating a new chunk, and there is sufficient space on disk for the requested chunk, then you can create additional space by dropping the sysmaster database (see steps below). This works because, although the sysmaster database must reside in the rootdbs, it does NOT need to reside in the root chunk. Therefore, dropping sysmaster frees room in the root chunk, which can then be used for other things. When sysmaster is re-created, it is (partially) moved to other chunks in the rootdbs.

Here are steps to work around this root chunk space limitation:

  1. Drop the sysmaster database.
  2. Add the required chunks.
  3. Re-create the sysmaster database.

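A plausible sketch of these steps follows. The original commands are not shown in these notes, so the onspaces arguments and the sysmaster.sql script path are assumptions based on standard Informix administration practice:

```sql
-- Step 1: free root-chunk space by dropping sysmaster:
DROP DATABASE sysmaster;

-- Step 2: add the required chunks (run from the shell, not SQL), for example:
--   onspaces -a <dbspace> -p <raw_device_or_file> -o <offset_kb> -s <size_kb>

-- Step 3: re-create sysmaster by running the supplied script as user informix:
--   dbaccess - $INFORMIXDIR/etc/sysmaster.sql
```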
6. Case-Sensitive Name Space

Case-insensitive schemas might need to be revised because IBM Informix Dynamic Server 2000, Version 9.21 has a case-sensitive name space. This can affect the resolution of blobspaces and SPL names.

C. SQL

1. Precision for FLOAT and SMALLFLOAT Conversions to DECIMAL

The DECIMAL precision has been increased for conversions from FLOAT and SMALLFLOAT to the DECIMAL data type, as follows.

From        To        Old Precision    New Precision
SMALLFLOAT  DECIMAL   8                9
FLOAT       DECIMAL   16               17

The change in the DECIMAL precision might also be visible when floating-point data are converted to ASCII text, because if a floating-point value fits within the range of a DECIMAL data type, the database server first internally converts the floating-point value to a DECIMAL value.

DB-Access now displays an extra digit for SMALLFLOAT types because, by default, DB-Access displays 14 characters of floating-point data. To reduce the number of digits that DB-Access displays, you can use the DBFLTMASK environment variable.

In ESQL/C applications built with Client SDK 2.30 or later, calls to the deccvflt( ) or deccvdbl( ) function now result in an extra decimal digit in the return value. For example, if the C constant 123.4 is assigned to a C float variable, its binary representation is equivalent to 123.400001525.... Before the change, the conversion generated the decimal number 123.4 because, in the binary representation, the 9th digit is 1, which has no rounding effect on the 8th digit.

After the change, the same conversion generates the decimal number 123.400002, because the 5 in the 10th digit rounds the 9th digit up from 1 to 2.

As another example, assume the C constant 8788888.88 is assigned to a C double variable. The binary representation of 8788888.88 is equivalent to 8788888.880000000819...

Before the change, the conversion generated the decimal number 8788888.880000001 because, in the binary representation, the 17th digit is 8, which rounds the 16th digit up from 0 to 1.

After the change, the same conversion generates the decimal number 8788888.8800000008, because the 18th digit is 1, which has no rounding effect on the 17th digit.

The rationale for the preceding change is as follows: When a binary IEEE 4-byte floating-point value is converted to the closest eight-digit decimal number, or an 8-byte floating-point value is converted to the closest sixteen-digit decimal number, it is not always possible to uniquely recover the binary representation of the number from the decimal representation. If nine decimal digits, however, are used for the 4-byte floating-point value (and seventeen decimal digits are used for the 8-byte floating-point value), then converting the decimal value to the closest binary number will recover the original binary representation of the floating-point number.

2. Interpretation of Two-Digit Years Within Objects

This section does not apply if this is a first-time installation of IBM Informix, or if two-digit years are not used in the expressions of the following objects:

NOTE: Some of these features might not be supported in this version of the product.

This release changes when date literals with two-digit years within object expressions are evaluated with respect to the settings of the relevant environment variables, such as (but not limited to) DBCENTURY. Before this release, IBM Informix interpreted two-digit-year dates in the expressions of these objects according to the environment variable settings that prevailed at run time of the object. Starting with Version 9.20, however, the date literal is always interpreted using the environment variable settings that prevailed at the creation time (or the time of last modification) of the object with which the date literal is associated. The environment variable settings at run time are not used by the object. This applies only to date strings that have two-digit years in the expressions of the objects mentioned above; it does not apply if four-digit years are used.

The following two steps are required to take advantage of this change that was introduced in the Version 9.21 release:

  1. Upgrade the IBM Informix server to this release.
  2. Redefine all objects that use two-digit year expressions.

For fragmentation expressions, redefining means detaching and reattaching the fragment. For all other objects, drop the object from the database and re-create it. Only after the objects are redefined with the new server are the date literals in their expressions interpreted according to the environment variable settings at the time the object was created or last modified.

The reference date used for this interpretation is the creation date or the last modification date of the object, and not the current date when a query is run.

If the objects are not redefined using the new server, their behavior remains the same as before the upgrade. However, because any new objects created after the upgrade behave differently from those created before it, administration of the database can become difficult: the database will contain a mix of objects with old and new behavior (with respect to when two-digit years within object expressions are evaluated). Therefore, IBM Informix recommends that you follow the two preceding upgrade steps.

Lastly, to avoid any possibility of misinterpreting two-digit years within objects, IBM Informix recommends that you take this opportunity to change two-digit years to four-digit years wherever possible.
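As an illustration of an object whose expression contains a two-digit year, consider a fragmentation expression; the table and dbspace names here are assumptions:

```sql
-- The date literal '01/01/00' is interpreted according to the DBCENTURY
-- (and related) settings in effect when this table is created,
-- not the settings in effect when queries run against it:
CREATE TABLE hist (d DATE, payload CHAR(20))
    FRAGMENT BY EXPRESSION
        d < '01/01/00' IN dbs1,
        REMAINDER IN dbs2;
```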

3. Use of DBDATE to Interpret Dates Within Objects

This section does not apply to first-time installations of IBM Informix database products or if date literals are not present in the following objects within the database:

NOTE: Some of these features might not be supported in this version of the product.

In the rare case that the setting of DBDATE prevailing at creation time or time of last modification of the object differs from the one that is in effect at the run time of the object, you might either get a runtime error from the database server or get erroneous results due to incorrect interpretation of the date literal.

In order to maintain consistency, starting with objects created or modified using this release, the date literals within expressions of objects will be evaluated according to the setting of DBDATE prevailing at creation time or at the time of last modification of the object. The settings of environment variables at runtime of the object will not be used to evaluate the date literal within the objects. However, the prevailing setting at runtime of the query will still be in effect for date-related data processed within the query.

If your operating environment is such that the objects were created under one set of assumptions about the DBDATE setting and the runtime environment uses a different setting, you might encounter problems. Modify your usage of the database so that the DBDATE settings at creation, modification, and run time are consistent throughout.

IBM Informix recommends that you use four-digit years.

4. Changes to Floating-Point Operations and Aggregate Functions

Arithmetic operations (addition, subtraction, multiplication, and division) on two floating-point numbers have changed when one or both operands are of type FLOAT or SMALLFLOAT. Previously, the database server converted the numbers to DECIMAL as necessary, used decimal floating-point arithmetic, and returned a result of DECIMAL data type. In the 9.21 release, the database server converts the numbers to FLOAT as necessary, uses binary floating-point arithmetic, and returns a result of FLOAT data type. This change allows arithmetic operations to be performed on larger floating-point numbers, because the FLOAT type can store larger values than the DECIMAL type.

As a consequence of the above change, the database server now returns FLOAT for AVG, STDEV, and VARIANCE of a FLOAT or SMALLFLOAT type. The value these functions return might be slightly different than in previous releases because of the difference in precision for the FLOAT and DECIMAL data types.

5. The stdev() Function in SELECT Statement with GROUP BY Clause

The stdev( ) function returns a variance of zero for a group with a count of 1. You can exclude this special case through appropriate query construction (for example, a HAVING count(*) > 1 clause); otherwise, a data set that contains even a few single-row groups can distort the rest of the query result.
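A sketch of the suggested query construction, using table and column names from the stores demonstration database as assumptions:

```sql
-- Exclude single-row groups so stdev() is not computed for a count of 1:
SELECT customer_num, stdev(ship_weight)
    FROM orders
    GROUP BY customer_num
    HAVING count(*) > 1;
```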

6. Casting Null Values for Row Types

Before Version 9.14, the database server allowed untyped null values in row constructors, even though it ignored these null values. In Version 9.14 and later releases, you must explicitly cast any null value that a row-type value contains.

For example, suppose you have the row_t named row type and the tab1 and tab2 tables, defined as follows:





The following examples show the correct way to insert a null value into the named and unnamed row types:
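The original definitions and INSERT statements are not reproduced in these notes; the following sketch uses assumed field and column names consistent with the text:

```sql
-- Assumed definitions:
CREATE ROW TYPE row_t (a INT, b INT);
CREATE TABLE tab1 (rc row_t);                  -- named row type column
CREATE TABLE tab2 (rc ROW(a INT, b INT));      -- unnamed row type column

-- A null value inside a row constructor must be cast explicitly:
INSERT INTO tab1 VALUES (ROW(1, NULL::INT)::row_t);
INSERT INTO tab2 VALUES (ROW(1, NULL::INT));
```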



7. Row Literal Values

To support row literal values for columns that are defined as named row types or unnamed row types, you must use the ROW( ) constructor. However, you do not need to specify the row value as a quoted string. For more information, see the Informix Guide to SQL: Tutorial.
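A minimal sketch of a row literal (the table and field names are assumptions):

```sql
CREATE TABLE points (p ROW(x INT, y INT));
-- Use the ROW() constructor directly; the row value need not be a quoted string:
INSERT INTO points VALUES (ROW(3, 4));
```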

8. MOD( ) Function Returns INT8

The SQL function MOD( ) can return INT8 values in addition to INTEGER values. The client application must ensure that any variable that holds the result of the MOD( ) function has a compatible data type.
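For example (the identifiers here are assumptions), when an argument is INT8 the result is INT8, so the receiving variable must have a compatible data type:

```sql
-- MOD() returns INT8 when an argument is INT8; the host variable that
-- receives the result must be able to hold an INT8 value:
SELECT MOD(int8_col, 1000000000000) FROM big_tab;
```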

9. Changed Behavior of DISTINCT Data Types

When you create a DISTINCT data type, the database server automatically creates two explicit casts: one cast from the DISTINCT type to its source type, and another cast from the source type to the DISTINCT type. A DISTINCT type of a built-in type, however, no longer inherits the system-defined casts that are provided for the built-in type.

A DISTINCT type does inherit any user-defined casts that are defined on the source type.

Suppose you create the following DISTINCT types and table:





One way to insert values into the currency_tab table is to use two explicit casts to convert the values to the DISTINCT type. The following example shows how to perform the explicit casts.

When the value that you specify in an INSERT or UPDATE statement includes a decimal point, the database server assumes that the value is a DECIMAL type. The database server has a system-defined cast between DECIMAL and MONEY. However, the yen and rouble DISTINCT types do not inherit this cast. Therefore, for each value in the following example, the first cast converts the DECIMAL value to the source type (MONEY); the second cast converts the value from MONEY to the DISTINCT type (yen or rouble).



An alternate way to insert values into the currency_tab table is to specify each DISTINCT-type value as a quoted string. The database server handles a quoted string as LVARCHAR type, and provides implicit casts to handle conversions from an LVARCHAR value to any DISTINCT type (provided that the DISTINCT type is defined on a built-in type). The following INSERT statement is also valid:



Because no casts are defined to handle conversions between DECIMAL and yen values, or between DECIMAL and rouble values, the following INSERT statement fails and returns an error message:
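The statements that this section describes are not reproduced in these notes; the following sketch (the column names are assumptions) summarizes the behavior:

```sql
-- Assumed definitions:
CREATE DISTINCT TYPE yen AS MONEY;
CREATE DISTINCT TYPE rouble AS MONEY;
CREATE TABLE currency_tab (y yen, r rouble);

-- Two explicit casts per value: DECIMAL -> MONEY (system-defined),
-- then MONEY -> DISTINCT type (created with the DISTINCT type):
INSERT INTO currency_tab VALUES (123.45::MONEY::yen, 19.99::MONEY::rouble);

-- Quoted strings also work, through the implicit LVARCHAR casts:
INSERT INTO currency_tab VALUES ('123.45', '19.99');

-- Fails: no cast exists directly between DECIMAL and the DISTINCT types:
INSERT INTO currency_tab VALUES (123.45, 19.99);
```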




10. Truncated Values

When a user assigns a value to a CHAR(n) column or variable and the length of that value exceeds n characters, the database server truncates the value to n characters without raising an error.

Suppose that you define the following table:

The database server truncates the data values in the following INSERT statements to "jo" and "sa" respectively but does not return a warning:



Thus the semantic integrity of data for a CHAR(n) column or variable is not enforced when the value inserted or updated exceeds length n.
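A sketch consistent with the truncated values mentioned above (the table definition and inserted strings are assumptions):

```sql
CREATE TABLE people (initials CHAR(2));
INSERT INTO people VALUES ('john');   -- stored as 'jo', no error or warning
INSERT INTO people VALUES ('sally');  -- stored as 'sa', no error or warning
```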

11. No Redefinition of NULL Keyword

IBM Informix Dynamic Server 2000 prohibits the redefinition of NULL because allowing such definition would restrict the global scope of the NULL keyword.

IBM Informix Dynamic Server 2000 SQL syntax permits cast expressions in the SELECT list. This means that users can write expressions of the form NULL::datatype, in which datatype is any data type known to the database.

The keyword NULL is a global symbol in the syntactic context of expressions; its scope of reference is global. Within SQL, the keyword NULL is the only syntactic mechanism for accessing a NULL value. Any mechanism that restricts or redefines the global scope of the keyword NULL therefore syntactically disables every cast expression that involves a NULL value. Consequently, the occurrence of the keyword NULL must receive its global scope in all expression contexts.

For example, consider the following SQL code:









The CREATE TABLE statement is valid because the column identifiers have a scope of reference that is restricted to the table definition; they can be accessed only within the scope of the table.

However, the SELECT statement in the example poses some syntactic ambiguities. Does the identifier null appearing in the SELECT list refer to the global keyword NULL, or does it refer to the column name null that was defined in the CREATE TABLE statement?

NOTE: A SELECT statement of the following form is valid because the null column of newtable is qualified with the table name:
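A sketch of such a qualified reference (the table definition is an assumption):

```sql
CREATE TABLE newtable (null INT);    -- legal: the column name's scope is
                                     -- restricted to the table definition
SELECT newtable.null FROM newtable;  -- refers to the column, not keyword NULL
```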

More involved syntactic ambiguities arise in the context of an SPL routine that has a variable named null. An example follows:



















When the preceding function executes in DB-Access, in the expressions of the LET statement, the identifier null is treated as the keyword NULL. The function returns a NULL value instead of 7.

Treating null as a declared variable of an SPL routine would restrict the use of a NULL value in the body of the routine. Therefore, the preceding SPL code is not allowed and causes IBM Informix Dynamic Server 2000 to return the following error:



D. SPL

1. Collection-Derived Tables for SPL Routines

In IBM Informix Dynamic Server 2000 9.21, collection-derived tables (CDT) are enhanced for SPL routines in a manner that is different from the original CDT. For the new CDT, the following statements return the fields of the SPL collection variable instead of the underlying data type:









Under certain circumstances, IBM Informix supports the older CDT format in this release:





For the older CDT, these statements return the underlying data type of the collection, as follows:

However, this works only if the SELECT statement meets the original requirements of the old CDT format: no WHERE clause and only '*' in the select list.

Any other SELECT format produces results like an SQL table, in the new CDT format. For example, you could issue the query as follows:



For the new CDT, this query returns the following result:

The following query, however, is not supported:



The original CDT formats are a deprecated feature for which support will be discontinued in some future release of IBM Informix Dynamic Server 2000. Instead, use the new CDT format, which is described in the Informix Guide to Database Design and Implementation.

2. SYSTEM Statements in SPL Routines

SYSTEM statements in an SPL routine are executed only if the current user executing the SPL routine has logged on with a password.

When a SYSTEM statement in an SPL routine executes, the database server waits for the outcome of the execution of the command that the SYSTEM statement specifies. The client application can hang if this command never completes or never returns.

3. DEFINE variable LIKE serial-type Column

DEFINE variable LIKE serial-type-col is allowed in an SPL routine. For this DEFINE syntax, the database server maps the variable to an INTEGER data type. However, DEFINE variable SERIAL and DEFINE variable SERIAL8 continue to be invalid syntax.
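A minimal SPL sketch; the table and column are taken from the stores demonstration schema and are assumptions here:

```sql
CREATE PROCEDURE serial_demo()
    -- orders.order_num is a SERIAL column; the variable maps to INTEGER:
    DEFINE v LIKE orders.order_num;
    LET v = 0;
END PROCEDURE;
```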

4. Change in When Some SPL Session Threads Are Released

In Version 7.24.UC2 and earlier, if an SPL routine included a query with a GROUP BY clause, and the PDQPRIORITY environment variable was set to any value before (or during) execution of the routine, the session threads for that SPL routine were released as soon as the routine completed its execution.

As a result of the fix for PTS defect 72444, Version 7.24.UC3 and later (including this version) retain the threads allocated for the SPL routine until the session terminates. (This change was not introduced in Version 9.21, but it is documented for the first time in these release notes.)

5. Unusual Locking of SYSPROCPLAN

A known defect (PTS# 128406) can result in unusual locking of sysprocplan and other system catalog tables when executing a stored procedure that uses a collection containing an unnamed ROW type.

This problem manifests itself only when a stored procedure that uses a collection containing an unnamed ROW type is executed for the first time, and it is visible to the customer only if that first execution takes place in the context of a transaction that holds locks on a number of tables for a prolonged period of time.

Once that transaction commits (or rolls back), the performance slowdown will not occur again, with one possible exception: If a change occurs to a database object on which that SPL routine depends, then the SPL routine will be forced to recompile.

The workaround is to execute the SPL routine once so that it is optimized.

E. User-Defined Routines

1. Overloaded Functions

When a function is overloaded (has the same name as one or more other functions), the database server chooses the closest matching function, based on the arguments that the function call provides. An overloaded function can be a new function that a user defines and owns, or it can be built into the database server and therefore owned by user informix. If a built-in function and a user-defined function have identical signatures, the database server chooses the user-defined function.

However, confusion can arise when function names are the same, but the argument types are not exactly the same. The following examples can help clarify what the database server does in such situations.

During the creation of an ANSI-compliant database, user informix creates two round() functions:





Later, a user (other than informix) creates an overloaded round() function with a FLOAT parameter:

The following SELECT statements execute round() functions:











In SELECT statement 1, the data type of 1.2 is DECIMAL and the data type of 10 is INTEGER, so the database server chooses round() function II.

In SELECT statement 2, the invocation is explicitly qualified with informix, so the database server chooses one of the built-in functions that user informix owns, round() function I.

In SELECT statement 3, the function call matches two functions exactly, round() functions I and III. Because user-defined routines take precedence, the database server chooses round() function III.

SELECT statement 4 is the same case as statement 3.

In SELECT statement 5, the function is explicitly qualified with username, so the database server chooses the function that username owns, round() function III, even though the parameter types match another round() function more closely.
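The definitions and calls discussed above can be sketched as follows. The exact signatures are reconstructed assumptions based only on the discussion (I and II are built-ins owned by user informix; III is the user-defined overload), and tab and c1 are illustrative names:

```
-- I   (informix.round, built-in):  assumed FLOAT parameter
-- II  (informix.round, built-in):  assumed DECIMAL and INTEGER parameters
-- III (username.round, user-defined overload of I's signature):
--     CREATE FUNCTION round(f FLOAT) RETURNING FLOAT; ... END FUNCTION;

SELECT round(1.2, 10)     FROM tab;   -- statement 1: exact match on II
SELECT informix.round(c1) FROM tab;   -- statement 2: informix only -> I
SELECT round(c1)          FROM tab;   -- statement 3: I and III match; III wins
SELECT username.round(c1) FROM tab;   -- statement 5: username only -> III
```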

2. DataBlade API Notes

The following notes apply to the DataBlade API:

The Informix GLS library is an application programming interface (API) that lets developers of user-defined routines, DataBlade modules, and client LIBMI applications create internationalized applications. The macros and functions of Informix GLS provide access within an application to GLS locales, which contain culture-specific information. For more information on the Informix GLS library, see the Informix Programmer's Manual.

F. Smart Large Objects

1. Logging for Smart Large Objects

If you are planning to log smart large objects, refer to the description of the onspaces utility in the Administrator's Reference for instructions on how to turn on logging. To check whether your smart large objects are currently logged, use the following command:

In the preceding syntax, sbspace represents the name of the sbspace that contains your smart large objects. If the output of the command shows the LO_NOLOG flag, none of the smart large objects in sbspace are logged. If it shows the LO_LOG flag, all smart large objects in the sbspace are logged.

2. Smart-Large-Object Restrictions

The following restrictions apply to the use of smart large objects (BLOB and CLOB data types):

3. Lightweight I/O

The LO_NOBUFFER flag forces a log flush and a synchronous write in many situations. IBM Informix recommends that you avoid using lightweight I/O with smart large objects smaller than 8080 bytes. (With small objects, do not turn on the LO_NOBUFFER flag.)

G. Database Server Administration

1. Informix Server Administrator Documentation

The ONWeb utility is replaced in this release by Informix Server Administrator (ISA), which is documented in the online help. This release of Informix Server Administrator does not have a hard-copy manual.

2. ROOTOFFSET Parameter on UNIX Platforms

On some UNIX platforms, it is an error to set ROOTOFFSET to 0. When this parameter is set incorrectly, you must reinitialize disk space and reload data to resume proper operation of the database server. Always check your machine notes file for information about correct settings before you configure the database server.

3. Default Checkpoint Type for 9.21

The default checkpoint type for 9.21 is fuzzy. In general, it is best to use fuzzy checkpoints.

You can perform a hard checkpoint by issuing an onmode -c command. For large loads of tables through the buffer cache, such as through INSERT or PUT statements, it is better to force a hard checkpoint by using onmode -c to clean the buffer cache than to let the server perform fuzzy checkpoints on its own.

With fuzzy checkpoints, raising the values of the LRU_MAX_DIRTY and LRU_MIN_DIRTY configuration parameters might help increase transactional throughput because less aggressive cleaning is needed than with hard checkpoints. Do not change the gap between LRU_MAX_DIRTY and LRU_MIN_DIRTY.
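For example, raising both parameters while preserving their gap might look like this ONCONFIG fragment (the values shown are illustrative only, not recommendations):

```
# Illustrative values only; the gap of 10 between the two is unchanged.
LRU_MAX_DIRTY  70
LRU_MIN_DIRTY  60
```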

Fuzzy checkpoints might result in slightly longer roll-forward fast recovery times than before. The server might occasionally perform a hard checkpoint to avoid loss of logical log records due to log wraparound.

4. Password Encryption Option

IBM Informix Dynamic Server 2000, Version 9.21, provides a communications support module, called the Simple-Password Communications Support Module (SPWDCSM), that provides password encryption. This encryption protects a password when it must be sent between the client and the database server for authentication. SPWDCSM is available on all platforms.

For details about the configuration for password encryption, see the Administrator's Guide.

H. Informix Storage Manager

1. ISM Setup

IBM Informix Dynamic Server 2000, Version 9.21, uses Informix Storage Manager (ISM) 2.2. When setting up ISM, you might need setup information about the following features:

IMPORTANT: If you are using NetWare IPX/SPX, it should be installed on the same computer as the ISM server.

This section provides information about the first three items.

For information about ISM installation and certification, see "Migration to IBM Informix Dynamic Server 2000, Version 9.21" under "Informix Database Server Products" elsewhere in these release notes. For more information about ISM setup and ISM 2.2 features, see the Informix Storage Manager Administrator's Guide.

ISMData or ISMLogs Name Change

If you change the name of either ISMData or ISMLogs, you also must perform the following steps:

  1. Update ISM_DATA_POOL and ISM_LOG_POOL in the ONCONFIG file with the new names.
  2. Change the create-bootstrap command in the onbar script ($INFORMIXDIR/bin/onbar or onbar.bat).
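For step 1, the updated ONCONFIG entries would read as follows (the new pool names are illustrative):

```
ISM_DATA_POOL  ISMData2    # renamed data volume pool
ISM_LOG_POOL   ISMLogs2    # renamed log volume pool
```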
NSRADMIN Utility

Do not use the NSRADMIN character-based user interface unless IBM Informix Technical Support instructs you to do so. Incorrect use of this tool could result in problems with your ISM system. The tool is undocumented.

Year-2000 Compliant Status

ISM 2.2 is Year-2000 compliant. For details, see Chapter 1 of the Informix Storage Manager Administrator's Guide.

ISM supports dates in the year 2000 and beyond. Dates are stored in an internal format that can represent dates from January 1, 1970 through December 31, 2037. ISM correctly interprets the year 2000 as a leap year. When a year is entered as a two-digit value, ISM 2.2 interprets it as follows:
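The representable range (1970 through 2037) implies a windowing rule of the following form. This sketch is an assumption drawn from that range, not ISM's documented rule:

```python
def expand_two_digit_year(yy):
    """Assumed windowing rule: because ISM can represent dates only from
    1970 through 2037, two-digit years 70-99 must map to 19xx and
    00-37 to 20xx."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 1900 + yy if yy >= 70 else 2000 + yy
```

Under this rule, 99 expands to 1999 and 00 expands to 2000.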

Upgrading from ISM 1.0 to ISM 2.2

Replace the instructions in the section "Migrating ISM 1.0 to ISM 2.2" on pages 1-18 through 1-21 with the following instructions:

You can either upgrade ISM 1.0 to ISM 2.2 alone or upgrade ISM along with the database server version. Migration is the reinstallation of ISM binaries while maintaining the ISM data (the catalogs and tape volumes that contain the save sets).

The following section explains how to migrate ISM 1.0 to ISM 2.2.

ISM 2.2 includes changes to the format of data in the ISM catalogs and volumes. Begin the following procedure with ISM 1.0 running on your earlier database server version.

IMPORTANT: Do not use ISM 1.0 storage media for future backups after you have migrated from ISM 1.0 to ISM 2.2.
  1. Complete a full backup of your system with one of the following commands:

    
    

  2. Create a bootstrap of your ISM 1.0 server with the following command:

     The bootstrap is a copy of the files and directories in $INFORMIXDIR/ism/mm, index, and res (UNIX) or %ISMDIR%\mm, index, and res (Windows NT). These directories are backed up into a single save set, called the bootstrap.

  3. Shut down the ISM 1.0 server.

     On UNIX:

     On Windows NT:

  4. Remove the ISM 1.0 catalogs.

     IMPORTANT: Keep the res (resources) part of the catalogs.

     On UNIX:



     On Windows NT:



     If you have file-type devices configured in ISM, you cannot move, copy, or rename the directories that contain those devices.

  5. Uninstall ISM 1.0. (This step is optional.) Follow the instructions under "Uninstalling ISM on UNIX" on page 1-16 or "Uninstalling ISM on Windows NT" on page 1-17. Use regedt32 to check the registry keys.

     IMPORTANT: Do not remove the res directory.

  6. On Windows NT, rename or remove the ISM 1.0 bin directory, because the ISM 2.2 installer installs the ISM files in a different directory. Then move the ISM 2.2 bin directory into the ISM 1.0 bin directory location.

  7. Install the new ISM 2.2 files, either separately or with the new database server version.

     WARNING: The new ISM 2.2 files must be installed in the same directory as the ISM 1.0 files.

  8. If you are upgrading ISM on Windows NT, follow these steps to ensure that ISM is properly configured:
       1. The installer might have created a Windows NT command-window script for working in the IBM Informix environment. The filename of this script is servername.CMD.
       2. Edit this file to be sure that ISMDIR and PATH are correct for the location of the new ISM 2.2 directory.
       3. Change your %INFORMIXDIR%\BIN\ONBAR.BAT file for any user-customized references to the ISM directory.
       4. If necessary, edit the %INFORMIXDIR%\BIN\SETISM.BAT file to ensure that it refers to the ISM 2.2 directory.
       5. Check your database server configuration file (usually %INFORMIXDIR%\ETC\ONCONFIG.servername). Be sure that the BAR_BSALIB_PATH parameter points to the libbsa.dll in the bin subdirectory of the new ISM 2.2 directory.
       6. Check Windows NT system environment variable settings that affect the PATH or that set the ISMDIR variable.
       7. Copy the sm_versions.std file to create a new sm_versions file.
       8. If you changed the configuration files, you might need to reboot your Windows NT system.
       9. WARNING: If you encounter an error message that an entry point cannot be found in libnsr.dll, part of the Windows NT configuration still references the old ISM installation.

  9. Start your ISM server with the following command. Do not initialize the server.
  10. Place the tape that contains the bootstrap in a device and mount it, if it is not already mounted.
  11. Create an index for your host with the following command:
  12. Locate the bootstrap on the tape, and note the save set ID:
  13. Recover your bootstrap with the following command:

      IMPORTANT: Do not replace the res directory with the res.R directory. Wait for the message from the preceding command stating that the index has been fully recovered.

  14. Unmount all the defined devices with the following command. You must unmount each device individually.
  15. Segregate all ISM 1.0 volumes. Make file-level backups of file-type devices.

      IMPORTANT: For future use, you must store the tape that contains the bootstrap that you created in step 2. Without the bootstrap, you cannot revert to ISM 1.0 (if you should need to).

  16. Label new volumes. ISM 2.2 must not write to any ISM 1.0 volumes, because they would become unreadable by ISM 1.0 if you choose to revert.
  17. Mount the new volumes with the following command for each device:
  18. Create a new bootstrap to back up the converted indexes:
  19. Upgrade the database server to the new version (if necessary) and then start the database server.
  20. Immediately perform a level-0 backup.

Revisions to the Procedure for Reverting from ISM 2.2 to ISM 1.0

Replace the instructions on reverting from ISM 2.2 to ISM 1.0 on page 1-22 of the ISM Administrator's Guide with the following information:

When you revert the database server, IBM Informix does not recommend reverting to ISM 1.0. All versions of the database server, up through 9.2, support ISM 2.2. Also, ISM 2.2 is Year-2000 compliant; ISM 1.0 is not. Versions of the database server that did not include ON-Bar are not compatible with ISM.

  1. Because the database server installers install ISM, be sure to preserve the ISM 2.2 directory by renaming it.
  2. Follow the instructions in the Installation Guide to install the earlier database server version.
  3. Follow the instructions in the Informix Migration Guide to revert to the earlier database server version.
  4. Restore the ISM directory. (Copy the new ISM files to the directory that you renamed earlier and rename the directory to its original name).
  5. If you need to revert the database server and perform a point-in-time restore of the earlier database server version, ISM 2.2 might have the original backups in its catalog (if you followed the procedure in this manual for upgrading ISM). If the backups are no longer in the ISM catalog, recover the catalog tables from the backup media after you revert the database server.
ISM Installation and Uninstallation Problems on Solaris

The following restrictions apply to ISM on Solaris systems:

2. Setup Without ISM

If you choose not to use ISM, remove the create-bootstrap command from the onbar script or onbar.bat.

3. Configuring Informix Storage Manager

You can start and stop the ISM daemons automatically from your operating-system startup and shutdown scripts.

The ISM daemons must be running before you can use ISM. IBM Informix recommends that you add the following command to one of the startup scripts for your operating system:

To shut down the ISM daemons at system shutdown time, IBM Informix recommends that you add the following command to one of the shutdown scripts for your operating system:

For systems that support the /etc/init.d directory, you can write your own startup/stop script. On Solaris, for example, you could add the following script as /etc/init.d/ism:





















Link this script to the startup sequence as follows:

Link this script to the shutdown sequence as follows:

TIP: The sequence numbers are illustrative; adjust them as necessary for your system. ISM depends on TCP/IP and RPC services, so start ISM after those services are started and shut it down before they are shut down.
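On a Solaris-style system, the linking commands would follow this pattern. The sequence numbers and rc directories are illustrative assumptions; run the commands as root:

```
ln -s /etc/init.d/ism /etc/rc2.d/S92ism   # start after TCP/IP and RPC
ln -s /etc/init.d/ism /etc/rc0.d/K08ism   # stop before TCP/IP and RPC
```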

Once the ISM daemon is started, you can add one or more devices as ISM storage devices, and label storage media in those devices as ISM volumes. Refer to the Informix Storage Manager Administrator's Guide for further information about ISM. See also the section "Problems Fixed in ISM 2.2" in the Addendum to these release notes.

The following table summarizes the rules for specifying the location of the XBSA shared-library path for ON-Bar and Informix Storage Manager communications on various platforms.

 
Location                              AIX 3.x             AIX 4.x  HP    Solaris  Other UNIX   Windows
/usr/lib/ibsad001.ext                 .o                  .o       .sl   .so      .sl or .so   .dll
Library pathname to BAR_BSALIB_PATH   No (use $LIBPATH)   Yes      Yes   Yes      Yes          Yes
Symbolic link                         Yes                 Yes      Yes   Yes      Yes          No
LIBPATH in onbar script               No                  Yes      Yes   Yes      Depends      No

For 64-bit Solaris computers, the default path for BAR_BSALIB_PATH is /usr/lib/sparcv9/ibsad001.so.

For 32-bit Solaris computers, the default path for BAR_BSALIB_PATH is /usr/lib/ibsad001.so (as described above).
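As ONCONFIG entries, the Solaris defaults above would appear as follows; set only the line that matches your platform:

```
BAR_BSALIB_PATH /usr/lib/ibsad001.so
# On 64-bit Solaris:
# BAR_BSALIB_PATH /usr/lib/sparcv9/ibsad001.so
```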

4. Using HDR with ISM and ON-Bar

You can set up ISM (imported restore) to run with ON-Bar. The following shell script provides the minimum setup required for ISM operations with ON-Bar. Edit the locations for SM_DISKDEV1 and SM_DISKDEV2 as needed. Two of the operations must be done as root (or informix), as shown in the script comments.





I. Backup and Restore

1. External Restore

If you want the external restore to also restore logical logs, you must make sure that at least the log that contains the external backup (the onmode -c block and unblock operations), and possibly later logs, are backed up. Do this with the following command after the backup:

Or use the following command before the restore:

Alternatively, you can perform a point-in-time (PIT), point-in-log (PIL), or nonlog external ON-Bar restore to get around this. However, to do a nonlog external restore, you must have backed up the entire system at once.

2. ON-Bar Backup Verification Option

ON-Bar has a backup verification option, onbar -v, that you can use to check the status of a backup. For information about this option, see Chapters 2 and 4 of the Backup and Restore Guide.

3. Physical Restore When rootdb or Regular dbspace Lost

A physical restore is the only form of system recovery available when the rootdb or regular dbspace is lost and ISM or Legato is the storage manager in use.

4. The sm_versions.std File Included in the Windows NT Release

The sm_versions.std file is now included in the Windows NT release, as it is with the UNIX release. The sm_versions.std file is a template for setting up the sm_versions file with storage manager information. For information on the sm_versions file, see the Backup and Restore Guide.

J. Enterprise Replication

1. Restrictions on Enterprise Replication

Enterprise Replication (ER) supports replication of data types supported by Version 7.30 of IBM Informix Dynamic Server. In addition, ER supports some very limited forms of replication of user-defined types. User-defined type (UDT) replication is only supported in an environment of homogeneous computer systems. (For example, replicating a row containing a UDT from a Solaris platform to a Hewlett-Packard platform is not supported.)

ER does not support replication of smart large objects, collection types, list types, multirepresentational types, types that may contain out-of-row data, and certain other data types and features. Most DataBlade modules use features that ER does not support; thus, in most cases, replicating data managed by a DataBlade module is unsupported. Significant restrictions also apply to other new features, such as user-defined routines (UDRs) and inherited tables. IBM Informix recommends that you do not use the current version of ER to replicate extensible data types.

If replication is defined on tables containing unsupported data types, in some cases replication will appear to function properly. However, data might silently fail to replicate properly at a later time.

2. Alteration of Replicated Tables

The schema of a replicated table must not be changed by ALTER TABLE statements. In most cases, ER prevents such an ALTER TABLE from executing. In certain cases, after a cdr stop command has been issued, an ALTER TABLE statement may succeed. Upon restarting ER, errors may result. Do not issue ALTER TABLE statements for replicated tables when ER is in the stopped state.

3. Replication Networks

In a replication network, it is possible to replicate data between servers of different versions. This allows you to upgrade servers in a replication network incrementally, rather than all at once. However, IBM Informix recommends mixed-version replication as a transitional strategy only, not for long-term operation.

With mixed versions, certain caveats are associated with new features that are not available on older versions. In general, it is best to avoid using the new features until the entire ER network is upgraded. Before converting or reverting servers between versions, the documented conversion process should be followed.

The following additional restrictions apply to replication networks involving different server versions:

4. Saving the Update of Chunk Status on the Secondary Database Server of an HDR Pair

For a data-replication pair, if the status of a chunk (Down, On-line) is changed on the secondary database server, and that secondary server is restarted before a checkpoint is completed, then the updated chunk status will not be saved.

To ensure that the new chunk status is flushed to the reserved pages on the secondary database server, force a checkpoint on the primary database server and verify that a checkpoint also completes on the secondary database server. Now the new chunk status will be retained, even if the secondary database server is restarted.

For example, if the primary chunk on the secondary database server is down and is to be recovered from the mirror, the following steps should be taken:

  1. Run onspaces -s or use the onmonitor utility on the secondary database server to recover the primary chunk.
  2. Run onmode -c on the primary database server to force a checkpoint.
  3. Run onstat -m on the primary database server to verify that a checkpoint was actually performed.
  4. Run onstat -m on the secondary database server to verify that a checkpoint was also completed there.

Once these steps are completed, a restart of the secondary database server will not cause the new (on-line) status of the primary chunk to be lost.

K. Indexes

1. R-Tree Access Method

This release of IBM Informix Dynamic Server 2000 contains support for the R-tree secondary access method, a subsystem for creating indexes on multidimensional objects. The R-tree secondary access method provides the following features:

If you plan to use the R-tree secondary access method, the Informix R-Tree Secondary Access Method DataBlade module must be registered in your database. This registration normally occurs when you register a dependent DataBlade module; that is, one that can only be registered if the Informix R-Tree Secondary Access Method DataBlade module has been previously registered.

For detailed information on using the R-tree secondary access method and the Informix R-Tree Secondary Access Method DataBlade module, refer to the following documentation:

2. R-Tree and Database Server Compatibility

For IBM Informix Dynamic Server 2000, Version 9.21, the Informix R-Tree Secondary Access Method DataBlade module, Version 2.0 is automatically installed with the database server. You register the DataBlade module in new databases as needed.

For database servers that are upgrading from Version 9.1 of Informix Universal Server to Version 9.21 of IBM Informix Dynamic Server 2000, Version 2.0 of the Informix R-Tree Secondary Access Method DataBlade module is automatically installed as part of the standard upgrade procedure, which also includes the conversion of the databases in the database server. When the upgrade process converts a database in which the R-Tree Secondary Access DataBlade module is registered, the conversion automatically upgrades the R-Tree Secondary Access DataBlade module from Version 1.0 to Version 2.0. If you subsequently revert the database back to the old database server version, you do not need to downgrade the R-Tree Secondary Access DataBlade module, because Version 2.0 of the R-Tree Secondary Access DataBlade module is compatible with Version 9.1 of Informix Universal Server.

The following table summarizes which versions of the R-Tree Secondary Access DataBlade module are compatible with each relevant version of the database server:

Database Server Version Compatible R-Tree Version
9.1x 1.0, 2.0
9.21 2.0

3. R-Tree Secondary Access DataBlade Module Directory Name Change

The name of the directory that contains the Informix R-Tree Secondary Access DataBlade module is changed from $INFORMIXDIR/extend/ifxrltree.1.00 to $INFORMIXDIR/extend/ifxrltree.2.00 in this release.

4. Generic B-Tree and Functional Indexes

Currently, the built-in compare() routine, which a generic B-tree uses, has the following restrictions:

Although these are existing restrictions, the database server allowed index creation using such routines in previous releases. In release 9.12, additional checks were added to make sure that the compare() routine used for a generic B-tree index is not written in SPL and is not variant.

An additional check was also added to make sure that built-in functions (such as ABS, MOD, LENGTH, UPPER, LOWER, and so on) are not used as keys in functional indexes. Those functions are not supported for functional indexes.

L. Client Applications

1. OPTOFC Environment Variable

When a user's database is ANSI compliant and the OPTOFC environment variable is set, the user might get error -214 when closing the database. Error -214 states:

The database content is not damaged.

2. Client SDK Bundle Libraries

Client libraries, such as lib/esql/*.so, are no longer included with the database server release; you need the Client SDK bundle to install these libraries. This change in bundling might affect DataBlade modules and applications that rely on these client libraries.

M. Other

1. High-Performance Loader (HPL)

The following notes apply to the High-Performance Loader:

The 9.21 release adds a new command line utility for the HPL, called onpladm, which allows users to load or unload tables or an entire database. The onpladm documentation is provided in HTML format only, as multiple HTML files within the /onpladm subdirectory in the same file system as these Release Notes.

2. Reorganization of IBM Informix Documentation Since Version 9.1

In general, the CD-ROM and hard-copy documentation for the 9.21 release is identical to the 9.20 documentation set. New features that were not part of the IBM Informix Dynamic Server 2000 Version 9.20 release are documented in the Documentation Notes for the 9.21 release. Read the Documentation Notes (Version 9.21) for descriptions of new features that this release introduces.

This section summarizes major organizational changes to manuals since the 7.30 and 8.20 releases. It is intended to help you locate information in the 9.21 IBM Informix Dynamic Server 2000 documentation set.

Informix Guide to SQL: Tutorial

The Informix Guide to SQL: Tutorial has been reorganized as follows:

Informix Guide to SQL: Reference

The Informix Guide to SQL: Reference has a new appendix, Appendix B, which provides the dimensional model and table schema information for the sales_demo and superstores_demo databases. This appendix complements the information on the stores_demo database in the existing Appendix A.

Informix Guide to SQL: Syntax

The Informix Guide to SQL: Syntax has been reorganized since Version 9.10 as follows:

Informix Guide to Database Design and Implementation

The Informix Guide to Database Design and Implementation has been reorganized (taking and updating material from the Informix Guide to SQL: Tutorial) and contains primary documentation for the following functionality and features:

Extending Informix Dynamic Server (new for 9.20)

The following 9.10 manuals were combined into a single book for Version 9.20:

Specific information relating to the development of C-language and Java-language user-defined routines has been moved to the following language-specific manuals:

DataBlade API Programmer's Manual

The DataBlade API Programmer's Manual has been substantially enhanced to provide more detailed information on how to use the DataBlade API to create C user-defined routines (UDRs) and client LIBMI applications. This manual has been reorganized into the following sections:

Creating UDRs in Java (new for 9.20)

The manual Creating UDRs in Java was new for the 9.20 release. It describes how to write user-defined routines (UDRs) in Java. It provides information on how to:

For information on the full set of classes and methods supported by the Informix JDBC driver, see the Informix JDBC Driver Programmer's Guide.

Administrator's Guide and Administrator's Reference

The 9.1 Administrator's Guide has been divided into the following two manuals:

In addition, the Administrator's Reference is now generic; it covers Versions 9.2 and 8.30.

Most of the monitoring and tuning information in the administrator books was moved to the Performance Guide, including these topics:

Performance Guide for Informix Dynamic Server

The Performance Guide has been reorganized as follows:

Informix Migration Guide

The Informix Migration Guide has been completely reorganized. It now contains the following sections:

Appendix A still lists the database server environment variables.

Backup and Restore Guide

The Backup and Restore Guide contains the following changes:

Guide to Informix Enterprise Replication

The Enterprise Replication manual has been reorganized in the following ways:

Informix R-Tree Index User's Guide

With Version 9.20, this manual includes a section to describe how to check R-tree indexes with the oncheck utility.

Online Documentation of onpladm Utility of HPL

The 9.21 release adds a new command line utility for the HPL, called onpladm, which allows users to load or unload tables or an entire database. The onpladm documentation is provided in HTML format only. It is an online reference document composed of multiple HTML files within the /onpladm subdirectory in the same file system as these release notes and the documentation notes.

3. Release 9.21 Known Problems

Defect 118535: For Versions 9.21.UC1 and ESQL/C 9.30.UC1, inserting data that is not null-terminated into a VARCHAR host variable may cause a client segmentation fault and a core dump.

Defect 119527: In a particular test situation, an ESQL/C program that processes a data set for a decimal field processes it incorrectly when a character value is read. The first run apparently does not free the memory pointer, and the second run results in a floating-point exception.

Defect 124117: If an ESQL/C program is compiled without the flag that specifies packed structures, the program runs fine. On the AIX operating system, if the program is compiled with the flag that specifies packed structures, the program dumps core at run time on the SQL statement.

Defect 125041: On Version 7.30.UC10, the server can become blocked in a deadlock situation at a checkpoint request.

Defect 125043: On Version 7.30.UC10, ONCHECK -CI at the database level gives the ISAM error: FILE IS LOCKED.

Defect 127153: Using enhanced security on a DEC 4.0D machine does not allow user "informix" to connect to the database unless the server is started as user root.

Defect 127729: The translation of a float value transported between a DEC UNIX client and a non-DEC UNIX server does not work properly.

Defect 122810: Inserting data that is not null-terminated into a VARCHAR host variable may cause a client segmentation fault and a core dump.

Defect 128221: A problem exists when using MATCHES on an NCHAR column.

Defect 128248: In a threaded XA application, "EXECUTE...USING DESCRIPTOR..." returns 0 even if the UPDATE fails.

Defect 128545: Two threads updating the same table get error -143 instead of waiting when isolation level is set to RR and lock mode is set to wait.

Defect 128588: Test case error while removing UDRs registered with install_jar() routine.

4. Release 9.20 Known Problems and Defect Workarounds

Defect 71698: Error -1213 in DBIMPORT; DBEXPORT GENERATE added "$" to the value of a MONEY column in the .SQL file when CLIENT_LOCALE is JA_JP.UJIS and DB_LOCALE is JA_JP.SJIS.

This is a test case. It is not a serious problem.

The schema file contains a dollar sign ('$') in the CREATE VIEW statement because DBMONEY is set to '$.' when dbaccessdemo executes the CREATE VIEW statement. Even though the statement is executed under a Japanese locale, the DBMONEY environment variable overrides the currency symbol of the Japanese locale.

The dbaccessdemo script is used against different locales, so DBMONEY is set to '$.' in the script to ensure portability. To use the dbimport or dbexport utility, set the DBMONEY environment variable. Alternatively, modify the dbaccessdemo script to reset the DBMONEY environment variable under the Japanese locale.

Defect 78029: C language module corresponding to a UDT is not unloaded after dropping the database in which UDT is defined.

The behavior of unloading of a module has changed. You can force an unload of a DataBlade module, but this is not generally necessary, and IBM Informix does not recommend it unless you need to recover memory associated with the loaded shared library.

To force unloading of a DataBlade module, execute the function ifx_unload_module(), which takes two parameters: the module name and the language.

For example, to unload string.so, you call the following function:
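A call of the following general form unloads the module. The path shown is an assumption; use the location where string.so is actually installed on your system.

```sql
-- Sketch: unload the C shared library string.so.
-- The path '/usr/informix/extend/string/string.so' is illustrative only.
EXECUTE FUNCTION ifx_unload_module('/usr/informix/extend/string/string.so', 'c');
```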

Defect 79661: ONAUDIT will not track the ONAU event for user "Informix" (see defect 42642). Only the -C option is tracked; the -O, -A, -D, and -M options are not tracked for ONAU.

The utility onaudit is supposed to issue an audit record for the ONAU event but is not doing so. This record should contain onaudit command-line arguments and should be located in /tmp with the filename dbservername.0.

The -O, -A, -D, and -M options are audited by enabling the corresponding individual events; that is, by setting a mask for LSAM, CRAM, DRAM, and UPAM. The ONAU event is limited to auditing configuration changes.

Defect 81366: GROUP BY clause in stored procedure returns incorrect rows, same SQL run in DBACCESS returns correct results.

The same GROUP BY clause in an SPL routine returns the same row many times. That same SQL code can be run in DB-Access, and the correct results are returned. If the GROUP BY is changed to 1,2,3,4, then the SQL routine returns the last row of the correct results.

To work around the problem, choose variable names that differ from the column names, or use table_name.column_name in the SELECT statement.
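A sketch of this workaround follows; the procedure, table, and column names are hypothetical. The SPL variables are named so that they do not collide with the columns, and the columns are qualified with the table name.

```sql
CREATE PROCEDURE sum_orders()
    RETURNING INT, DECIMAL(10,2);

    DEFINE v_cust  INT;            -- deliberately not named customer_num
    DEFINE v_total DECIMAL(10,2);

    FOREACH
        SELECT orders.customer_num, SUM(orders.total_price)
            INTO v_cust, v_total
            FROM orders
            GROUP BY orders.customer_num
        RETURN v_cust, v_total WITH RESUME;
    END FOREACH;
END PROCEDURE;
```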

Defect 92877: Type DECIMAL(X) retrieved with X+5 significant digits after decimal point in SELECT statement.

A column defined as DECIMAL(5) has a precision of 5 (maximum number of digits) and a scale of 255, which means that the decimal point floats. A DECIMAL(5) can have values such as 12345, .12345, 1.2345e120, -1.2345e-120, and so on. When genlib calculates the display width, it needs to take all of these options into consideration and calculate the maximum length needed. The largest display format for a DECIMAL is scientific notation, so that is the one that genlib assumes:

For DECIMAL(5), assuming a default locale, the following numbers of bytes would apply:

1 byte for the sign
1 byte for the first digit
1 byte for the decimal
4 bytes for the rest of the digits (precision of 5 - 1)
1 byte for the 'e'
1 byte for the sign of the exponent
3 bytes for the exponent
TOTAL: 12 bytes

This is why '12345' inserted into DECIMAL(5) is displayed as 12345.000000. This allows the value -1.2345e-120 to be displayed without truncation.

This only affects floating-point decimals. If the user specifies the scale, then they get the old display. Because this only affects floating-point decimals, this defect fix will not affect ANSI-compliant databases, which allow only fixed-point DECIMAL values.

A DECIMAL column defined as (5) in a database that is not ANSI-compliant means that there are at maximum 5 digits, and the decimal point can be anywhere. When you enter 0.123449, that is 6 digits, so rounding occurs. This is the correct behavior.
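A sketch of the distinction between floating-point and fixed-point DECIMAL columns; the table name is hypothetical.

```sql
CREATE TABLE dectab (
    f DECIMAL(5),      -- floating point: the decimal point can be anywhere
    x DECIMAL(5,0)     -- fixed point: specifying the scale restores the old display
);
INSERT INTO dectab VALUES (12345, 12345);
-- Column f uses the 12-byte display format described above, wide enough
-- for scientific notation; column x is displayed simply as 12345.
```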

Defect 94241: Inserting into table fragmented by DATE with DBDATE set should return -271, but returns 0

This is a test case. It is not a serious problem.

If the DATE() function is specified in the fragmentation expression of a table, the dates are converted to internal date format at table creation time. The dates are not open to interpretation at query time, so the DBDATE format at query time has no bearing on them.

Defect 95200: Assignments to UDT variables in SPL return 9635 if "LET" has more than one variable.

This is a UDT feature that was never implemented. The missing feature is limited to IMPLICIT casts of multiple assignments. Both EXPLICIT casts and no-cast-required work fine under these conditions.

A direct workaround, for all but one case, follows:







Unfortunately, the following case does not have an easy solution (where splfunc() returns more than one value):

In this case, the only solution is to use the right destination data types; that is, do not rely on implicit casts. This will not work under all conditions, but it is generally better to use the correct data types.
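A sketch of the cast distinction follows; the UDT mytype and the variables are hypothetical, and the fragment is assumed to appear inside an SPL routine.

```sql
-- Assumes: DEFINE v1, v2 mytype;  and an implicit cast from INT to mytype.
LET v1, v2 = (10::mytype), (20::mytype);  -- explicit casts: works
-- LET v1, v2 = 10, 20;                   -- implicit casts: fails with error -9635
```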

Defect 95599: Received incorrect error message.

Defect 95599 has been fixed. It is reported here for your information.

Defect 101747: DATE literal interpretation in TRIGGER body is done incorrectly.

Defect 101747 has been fixed. It is reported here for your information.

Defect 101848: DATE literal interpretation in PROCEDURE body is done incorrectly.

Defect 108092: Y2K PROBLEM: Changing interpretation of DATE and DATETIME values could result in unexpected and/or wrong results

The behavior of the database has changed in situations when a DATE or DATETIME literal is used in any of the following database objects:

Unintended behavior can result if one or more environment variables are subsequently reset, if the new settings change the way in which DATE or DATETIME literals are evaluated by the database object.

In earlier releases, the server interpreted the DATE or DATETIME literal using the date environment settings prevailing at execution time. Here date environment settings include the following:

Now, however, the date is always interpreted using the date environment settings prevailing at the creation time of the object with which the DATE or DATETIME literal is associated.

IMPORTANT: The following example (of the legacy behavior) assumes that DBDATE is set to "MDY2-" and that the time of execution is within the interval between 31 December, 1899, and 30 June, 1994.

Example:















Old behavior: The row goes into dbs3 because the fragmentation expression is interpreted at insert time and treated as





New behavior: The row goes into dbs1 because the fragmentation expression is interpreted based on the DBCENTURY value at the time of creation and is treated as follows:





Another date-related environment variable is DBDATE.

Example:







Old behavior: Error 1218 (string to date conversion). This is because the date in the check constraint (12-31-1995) is being interpreted at the time of insert and since DBDATE is y4md, it gives an error.

New behavior: The INSERT succeeds. The date 12-31-1995 was interpreted at the time of object creation and was already converted into a date, so there is no error.
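A reconstruction of this example, with assumed table and column names:

```sql
-- Created while DBDATE=mdy4-; under the new behavior, the literal in the
-- constraint is converted to an internal DATE at creation time.
CREATE TABLE datetab (
    d DATE,
    CHECK (d < '12-31-1995')
);

-- Later, with DBDATE=y4md in the environment:
INSERT INTO datetab VALUES ('1995-06-30');
-- Old behavior: error -1218, because '12-31-1995' in the constraint was
-- re-interpreted under y4md at insert time.
-- New behavior: the insert succeeds; the constraint literal is already a DATE.
```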

TIP: In this example the DATE value in the INSERT statement is evaluated according to the date environment at runtime, using only the current settings of environment variables. Only database objects, not SQL statements, make any distinction between the date environment at time-of-object-creation and the runtime environment in interpreting literal DATE values.

TIP: This new behavior only takes effect for database objects that are created with a version of the database server that contains this fix. Previously created objects will not change in behavior. Therefore, if the new behavior is more desirable, users must drop and recreate their existing objects. For example, if constraints use a DATE or DATETIME literal, they can be dropped and recreated by using ALTER TABLE or ALTER FRAGMENT.

Defect 96415: Server requests large shared memory segment during logical recovery and runs out of shared memory and the master daemon dies.

The reproduction causes a server failure before these logical logs can be backed up. During the restore, the server expects these logical logs from tape, and the restore will not complete without them.

Because onbar -b was used to back up the storage spaces, all the logical logs that were current when the archives of the storage spaces started (in parallel) need to be backed up to the logical log tape.

Defect 97000: OS Auditing does not work with nonroot VPs

With the nonroot changes, OS auditing does not work, because audit operations, such as opening and writing the system audit file, must run as the superuser. CPU VPs that are switched to the nonroot user (Informix) after initialization fail with EPERM when they try to execute the audit-subsystem calls. Worse still, any error while executing audit-subsystem calls results in an assertion failure, and Informix Dynamic Server is brought down.

With Informix Dynamic Server configured with C2 auditing on, any registered event (such as CREATE, SELECT, CLOSE, or DROP DATABASE) that is exercised through an audit mask created by onaudit fails with an error, and Informix Dynamic Server is brought down.

OS-managed C2 auditing (ADTMODE of 2, 4, 6, or 8) is supported only when the server is started by a user who is a security administrator. On UNIX platforms, the superuser is always a security administrator. On most platforms, however, it is possible to have users who do not have root privileges but do have security administrator privileges. Consult your OS documentation for details.

IBM Informix managed C2 auditing (ADTMODE of 1,3,5, or 7) is always available.

There are some options for making OS Auditing work with nonroot changes:

  1. Run the server as root. (This is dangerous if there are UDRs running.)
  2. Run the server as a security administrator. (It is platform-specific whether this works.)
  3. Disable nonroot VPs by setting the undocumented environment variable NONROOT_OFF.

Defect 99824: Insert into subtable from parent works in 9.14 but should get error 360 as in 9.21.

This is a test case. It is not a serious problem.

In Version 9.14, one could insert into a subtable from a parent, but now the user gets error -360.

Defect 101457: Should prevent partition table data scans of the consumed table when ALTER FRAGMENT...ATTACH FRAGMENT does not need to move data.

Defect 101457 has been fixed. It is reported here for your information.

During an ALTER FRAGMENT ... ATTACH operation, the database server will no longer scan the new or consumed fragments if both of the following conditions are true:

The preceding two conditions ensure that there is no movement of rows between fragments during the ATTACH operation.

Defect 102224: Adding two DATETIME columns returns error -1260

The following are invalid operations on DATETIME or DATE columns:



After all invalid operations, the user should receive error message -1263.

The only valid binary operations on DATETIME or DATE columns are the following:















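The valid operations amount to interval arithmetic. A sketch with a hypothetical table t, DATETIME columns d1 and d2, and an INTERVAL column i:

```sql
-- Assumes: t(d1 DATETIME YEAR TO SECOND, d2 DATETIME YEAR TO SECOND,
--            i INTERVAL DAY TO SECOND)
SELECT d1 - d2 FROM t;     -- DATETIME - DATETIME yields an INTERVAL
SELECT d1 + i  FROM t;     -- DATETIME + INTERVAL yields a DATETIME
SELECT d1 - i  FROM t;     -- DATETIME - INTERVAL yields a DATETIME
-- SELECT d1 + d2 FROM t;  -- invalid: adding two DATETIME values is an error
```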
Defect 102347: The self-referential INSERT INTO mytab SELECT ... FROM mytab statement is needed

Consider the following INSERT statement:



The earlier implementation did not allow the source table to be the same as the target table: the target table could not appear anywhere in the SELECT clause of the INSERT statement, and the server returned error -360 if it detected such a case.

This feature relaxes the above restriction by allowing the use of target tables in the SELECT clause of the INSERT statement.

Semantics:

If one of the tables in the SELECT clause is the target table, then rows newly inserted into the target table by the INSERT statement are not used in evaluating the SELECT or any nested subquery of the INSERT statement.

The effect of the preceding statement is the same as the effect of the following statements executed in a transaction.




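Assuming a hypothetical table mytab with an integer column c1, the equivalence can be sketched as follows: the INSERT ... SELECT behaves as if the source rows were materialized before any insert occurs.

```sql
-- Sketch only: mytab(c1 INT) is an assumption.
BEGIN WORK;
SELECT c1 + 100 AS c1 FROM mytab INTO TEMP tmp_src;
INSERT INTO mytab (c1) SELECT c1 FROM tmp_src;
DROP TABLE tmp_src;
COMMIT WORK;
```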

Restrictions:

If procedure someproc scans or updates target-table, then the database server returns error -360.

The behavior of the UPDATE and DELETE statements has not changed when the target table is used in their select subqueries. In this case, the database server returns error -360.

Defect 102727: BLOBs not properly replicated under some UPDATE conditions.

Under some conditions, if a simple large object is updated on a source server, it will be set to zero length on the target server(s). One of these conditions is as follows:

  1. There are two replicates on the table containing the simple large object, with each replicate only covering part of the primary key; for example, one replicate defined for PK <5, and another defined for PK >=5.
  2. In the same transaction, an UPDATE is done that does not cause the primary key to move across the boundary created by the replicates, and then an UPDATE is done that does cause the primary key to move across the boundary. Continuing with the previous example, first an UPDATE is done that changes a primary key of 2 to 3, and then one is done that changes it from 3 to 7, and then a COMMIT is done.

Defect 103044: FETCH fails with error -243 COULD NOT POSITION WITHIN TABLE and error -12804: ERROR INDICATED BY AN ACCESS_METHOD ROUTINE.

When the user selects data from a table that has a collection, the server needs to find out whether the collection has a BLOB or CLOB object embedded in it, and for this it must walk the sysattrtypes catalog; this is what causes the problem. The workaround is to set the variable DISPCLOB to 0. When this variable is set, the contents of CLOB objects are not displayed.

Defect 103179: Dropping of logical log is allowed in certain scenarios, even if ER might have needed them.

There are a few cases where the DBA is allowed to drop logical files that Enterprise Replication (ER) might need later on. These cases occur only when ER is currently not running on the server (either because the server is in quiescent mode or because the DBA stopped ER).

Make sure that logical log files are not dropped in the following scenarios.

  1. The server comes to quiescent mode directly from off-line mode (using oninit -s).
  2. The server is brought up, but ER was stopped (using cdr stop) before the server went down.

Suggested solutions:

  1. In both cases, try to delay dropping logs until after ER has been started back up and the following message has been printed in the online log:
  2. Look in the replay table to see what logs ER needs:
  3. 
    
    

    Then delete only log files less than the value reported.

Defect 103299: SYSMASTER queries on ER-related SMI tables while ER is shutting down can cause access violation.

Queries on ER-specific System Monitoring Interface (SMI) tables should not be performed while ER is shutting down.

Defects 103708 and 107052: Common library naming convention of CSM libraries.

Currently, because the CSM libraries follow a common library-naming convention and both get installed at the same location within INFORMIXDIR, the following problems can occur:

Defect 103851: ONMODE -K and -C behavior for a stalled checkpoint is identical to that for a completed checkpoint.

One can work around this problem by taking these steps to shut down the server with a checkpoint:

  1. Execute the onmode -sy command to put the database server in quiescent mode.
  2. Wait for all users to exit.
  3. Find the end of the system message log file. It can be helpful to run 'tail -f' on the log file in a second window while running the rest of these steps.
  4. Execute the onmode -l command to move to the next logical log. This will add an entry of the following form to the end of the system message log file:
  5. Execute the onmode -c command to force a checkpoint. You will see an entry of the following form in the system message log file, at which point you know that the checkpoint has completed:
  6. Execute the onmode -yuk command to shut down the system.

Defects 104514 and 118081: After conversion to Version 9.21, some DBSPACES missing. Got a "CHECKPOINT PAGE WRITE ERROR" during conversion but server was online.

To calculate the amount of space needed for reserved page extension during conversion to 9.21, do the following:

Run ckconvsp.sh (on UNIX systems) or ckconvsp.bat (on Windows NT systems) to determine if you have enough space in your root chunk to perform an upgrade to 9.21.

If you do not have enough space, this message will appear:



The script will also inform you how much more space is needed in the root chunk. You can use oncheck -pe rootdbsname to see your current allocation in the root chunk extents.

In some cases, even if the server conversion is successful, internal conversion of some databases may fail because of insufficient space for catalogs.

In this case, the online log will report that conversion was successful:

However, prior to this message you will also see a message specifying that a specified database failed internal conversion:



The following steps should be taken to correct the problem.

  1. Execute: dbaccess database_name
  2. If a 131 ISAM error is returned (ISAM error: no free disk space) then additional chunk space is required to complete the internal conversion. Add an additional chunk to the dbspace containing the database in question.
  3. If you are not sure which dbspace contains the database, you can get the necessary information from sysmaster:

    
    
    
    
    
    
    
    
    

    Once the chunk has been added, execution of dbaccess database_name should connect to the database. In addition, a message will be written to the online log indicating that the database was successfully converted:

    
    
    

  4. If a different error is returned (other than no free disk space), then the conversion of the database failed for some reason other than a lack of available chunk space. Use finderr to determine possible reasons for the failure.
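A query of the following general form maps each database to the dbspace that holds it. This is a sketch; verify the table and column names against the sysmaster schema on your system.

```sql
-- Run against the sysmaster database. partdbsnum() extracts the dbspace
-- number from a partition number.
DATABASE sysmaster;

SELECT d.name AS dbname, s.name AS dbspace
    FROM sysdatabases d, sysdbspaces s
    WHERE partdbsnum(d.partnum) = s.dbsnum;
```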

Defect 109647: Event viewer not able to get correct description for events logged by the server.

This only affects Windows NT systems. This problem is related to onaudmsg.dll not being copied to the \winnt40\system32 directory.

The workaround is to manually copy onaudmsg.dll from %INFORMIXDIR%\bin to \winnt40\system32 directory.

Defect 109826: Internal error -9986: CORRUPTED COLLECTION ERROR returns when trying to update the collection row.

This defect only occurs when updating, or possibly inserting, a collection (set, list, multiset) containing smart large objects (BLOB, CLOB) when the collection is large enough that it will no longer fit inline in the row.

Defect 113508: When creating an accessor with Informix OLEDB utilizing 500 parameters, Informix Dynamic Server Version 9.21 on a UNIX platform crashes. Works fine on Version 9.21 for NT.

If your buffer size is 32 kilobytes, no more than 241 identifiers are supported, rather than the documented upper limit of 500.

The workaround is to increase the buffer size to 64 kilobytes.

Defect 113877: DBEXPORT and DBSCHEMA do not output OWNER.OBJECT references for all database objects and thus fail when creating ANSI databases with multiple users.

Neither dbexport nor dbschema uses the owner.object model for all database object references in the extensible data types, so the scripts fail when creating ANSI-compliant databases. Exporting a database as one user and running dbschema as another user may not work in the case of extensible data types.

The workaround is to specify any columns declared as user-defined data types in owner.object format.
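A sketch of this workaround; the owner usr1 and the UDT mytype are hypothetical.

```sql
-- The owner.object form keeps the generated schema loadable in an
-- ANSI-compliant database created or loaded by a different user.
CREATE TABLE tab1 (
    c1 usr1.mytype
);
```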

Defect 114130: Warm restore with -O option failed because server got the wrong copy of logical log.

If the Informix Storage Manager instance has been running longer than the current server instance, and has backups from previous server incarnations, and is backing up the logs during a warm restore, ON-Bar sometimes sends a log from a previous incarnation when in fact a more recent log is required.

We do not currently know of a way to help in this situation. Here is an example of this problem:

  1. User has Storage Manager and the database server running.
  2. User brings down server but leaves Storage Manager running; the last log used was 20.
  3. User reinitializes the server (oninit -i), which clears the server's records of the logs.
  4. User starts continuous log backups via the onbar -l -C command.
  5. log7 fills, and log backup of log7 starts (with log8 now current).
  6. Server is up but user wants to restore, so onbar -r -O command is started.
  7. log7 backup completes.
  8. All noncritical spaces are brought down, and physical restore completes.
  9. Log restore begins. The server says that it needs log7 and log8, but ON-Bar does not have log8 in its record, and so it leaves the data blank in the list.
  10. log8 backup starts.
  11. log7 is restored.
  12. ON-Bar queries Storage Manager for log8.
  13. Storage Manager still has old log8 backup (from Step 1) marked as active and passes data for that backup to ON-Bar.
  14. log8 backup completes and Storage Manager marks this log backup as active.
  15. ON-Bar passes old log8 data to server and server gives an error.

In the original data for this PTS entry, it was a matter of missing log8 by 4 seconds.

What a DBA can do to avoid this occurrence:

  1. Minimize use of oninit -i (disk initialization) at server startup.
  2. Reinitialize storage manager before using oninit -iy.
  3. For ISM only: Run ism_op_label -ALLVOLS to wipe out all volume labels.
  4. Expunge old Storage Manager records from the time just prior to the oninit -iy.
  5. Use a new server name in $ONCONFIG and the DBA's $INFORMIXSERVER environment variable (and hide this name from users via the sqlhosts file data).

What a user can do to recover after Step 13 happens:

  1. Rerun the restore from the beginning. (The -O option is not required since all spaces will now be down.)

The necessary log is now backed up and it should succeed without extra steps.

This happened in a system where the server was reinitialized but the ISM was not, and the command was an override of restoring the entire system while it is online (but not the critical dbspaces). This is not a scenario that will occur frequently.

Defect 118116: ON-Archive menu mode permits entry of one-, two-, or three-digit year values into the "wait" and "expiry" qualifiers, which may cause incorrect date interpretation.

The customer should use four-digit year values in ON-Archive menu mode.

Defect 127557: If a whole-system backup verification is performed first, ON-Bar fails to build the cold object list correctly during the whole-system restore.

If you performed a whole-system backup of the rootdbs and other dbspaces and then verified the rootdbs only, ON-Bar restores only the rootdbs.









To avoid this problem, do one of the following:


VI. IBM Informix Database Server Products

THIS PRODUCT INCLUDES CRYPTOGRAPHIC SOFTWARE WRITTEN BY ERIC YOUNG (eay@mincom.oz.au). IT IS PROVIDED BY ERIC YOUNG "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

A. Installation Change

For versions of the database server earlier than 9.20, the product installed as user root. For IBM Informix Dynamic Server 2000, Version 9.21, the product installs as user Informix and then runs a script as user root.

For Version 9.21, Linux users receive product files in RPM format. Linux users must therefore follow these steps:

  1. Follow the procedures that are described in "Installing Dynamic Server on Linux using RPM" in the Installation Guide.
  2. Use the following command line to load the product files:

B. Compatibility with DataBlade Products

IBM Informix Dynamic Server 2000 supports the following DataBlade products:

C. CREATE SYNONYMS from 7.24/7.3 to 9.21 Problem

CREATE SYNONYM statements from 7.24/7.30 to 9.21 hang indefinitely (Defect 96374). IBM Informix Dynamic Server interim release Versions 7.31.UC3, 7.24.UC9, and 7.30.UC9 have this defect fixed. Upgrade to these interim versions if your 7.x server creates synonyms against a 9.21 server.

D. New SQL Reserved Words

IBM Informix Dynamic Server 2000, Version 9.21, recognizes new SQL reserved words that might affect migration of your applications. For a complete list of SQL reserved words, see Appendix A of the Informix Guide to SQL: Syntax, Version 9.21.

Although you can use almost any word as an SQL identifier, syntactic ambiguities can occur. An ambiguous statement might not produce the desired results. For information about workarounds for such ambiguities, see the Informix Guide to SQL: Syntax.

New reserved words in 9.21 (beyond 9.14, but already in 7.30, 7.31, and 9.20):

  ALL_ROWS
  CASE
  CRCOLS
  DECODE
  FIRST_ROWS
  MEMORY_RESIDENT
  NON_RESIDENT
  NVL
  REPLICATION
  SUBSTR
  SUBSTRING

New reserved words in 9.21 (but already in 7.31):

  INNER
  JOIN
  LEFT
  LOCKS
  RETAIN

Other new 9.21 reserved words (but already in 9.20):

  AGGREGATE
  CACHE
  COSTFUNC
  ITEM
  SELCONST

The following reserved words are implemented for the first time in 9.21:

  RAW
  STANDARD

E. Year-2000 Compliance

IBM Informix Dynamic Server 2000 has been engineered and tested to be Year-2000 compliant. This means that the use or occurrence of dates on or after January 1, 2000, will not adversely affect the following:

From time to time, through continued testing efforts, IBM Informix may find that certain of its software products contain date-related defects. We provide a history table on our Year-2000 Web site that is updated as these defects are discovered. For more details, customers with access to the TechInfo Center should go to the Year-2000 Tech Alerts section. Call your local technical support office for assistance, or for information on how to enroll in the program.

F. Migration to IBM Informix Dynamic Server 2000, Version 9.21

To migrate to IBM Informix Dynamic Server 2000, Version 9.21 from any 9.x database server prior to 9.14, you must first upgrade to a 9.14 database server and open each database.

No conversion or reversion is required between IBM Informix Dynamic Server, Version 9.20, and Version 9.21.

Conversion from Version 7.31 is supported.

To migrate from any Version 7.x database server prior to 7.2x, you must first upgrade to a 7.3x or 7.22.UC1 (or later) database server and open each database.

When migrating from any Informix Dynamic Server 9.14, 7.3x, or 7.2x database server to IBM Informix Dynamic Server 2000, Version 9.21, you must first calculate the amount of space that you need for the conversion.

1. Amount of Space Required for Conversion

Run ckconvsp.sh (on UNIX systems) or ckconvsp.bat (on Windows NT systems) to determine if you have enough space in your root chunk to perform an upgrade to 9.21.

If you do not have enough space, the following message will appear:



The script will also inform you how much more space is needed in the root chunk. You can use oncheck -pe rootdbsname to see your current allocation in the root chunk extents.

In some cases, even if the server conversion is successful, internal conversion of some databases may fail because of insufficient space for system catalog tables. Refer to defect 104514 in the defect Workaround Section above for more information.

2. Storage Manager Installation and Certification During Migration

When you convert or revert an IBM Informix database server, the storage manager that you used on the old version might not be certified for the version that you are migrating to. Verify that IBM Informix has certified the storage manager for the target database server version and platform. If not, you need to install a certified storage manager before performing backups with ON-Bar.

Before you upgrade to a later version of the database server, save a copy of your current sm_versions file, which should be in the $INFORMIXDIR/etc directory on UNIX or the %INFORMIXDIR%\etc directory on Windows NT. If you are using a different directory as INFORMIXDIR for the new database server, copy sm_versions to the new $INFORMIXDIR/etc or %INFORMIXDIR%\etc directory, or copy sm_versions.std to sm_versions in the new directory, and then edit the sm_versions file with appropriate values before starting the upgrade.

When you upgrade to the new database server version, install the storage manager before you bring up the database server. That way if you have automatic log backup set up on the database server, ON-Bar can start backing up the logs when the database server comes on-line.

3. Upgrade to Version 9.21 from a 9.14, 7.3x, or 7.2x Database Server

This section provides some guidelines for upgrading to IBM Informix Dynamic Server 2000, Version 9.21 from Universal Server, Version 9.14, Dynamic Server, Version 7.3x, or OnLine Dynamic Server, Version 7.22.UC1 (or later). For details on the upgrade procedure, see the Informix Migration Guide.

Add any additional free space to the system prior to conversion; if the dbspaces are very full, add the space before you start the conversion procedure.

Prior to migrating the old system to the new one, make sure that there are no open transactions. Fast recovery would fail when rolling back open transactions during conversion. You can use oninit -s on the source side (old system) as a check against any open transactions. For more information on how to close transactions properly before migration, see the Informix Migration Guide.

A shutdown procedure does not guarantee that all open transactions are rolled back. To guarantee that the old system (9.14, 7.3x, or 7.22.UC1 or later) has no open transactions prior to conversion, the old database server must be taken down to quiescent mode. It is not enough to run onmode -yuk: execute the onmode -s command first, and then onmode -yuk. Wait until onmode -s has completed, because some users might still be active.

Only after proper shutdown can you bring the new server (9.21) up through the conversion path. Any open transaction encountered during conversion will cause fast recovery to fail.

After a successful conversion, you need to run UPDATE STATISTICS on some of the system catalog tables in your databases. For conversion from a 7.3x or 7.22.UC1 or later database server to IBM Informix Dynamic Server 2000, Version 9.21, run UPDATE STATISTICS on the following system tables in IBM Informix Dynamic Server 2000, Version 9.21:

  SYSBLOBS
  SYSCOLAUTH
  SYSCOLUMNS
  SYSCONSTRAINTS
  SYSDEFAULTS
  SYSDISTRIB
  SYSFRAGAUTH
  SYSFRAGMENTS
  SYSINDICES
  SYSOBJSTATE
  SYSOPCLSTR
  SYSPROCAUTH
  SYSPROCEDURES
  SYSROLEAUTH
  SYSSYNONYMS
  SYSSYNTABLE
  SYSTABAUTH
  SYSTABLES
  SYSTRIGGERS
  SYSUSERS

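For each catalog table in the list, run an UPDATE STATISTICS statement of the following form (shown here for two of the tables):

```sql
-- Refresh optimizer statistics for the named system catalog tables.
UPDATE STATISTICS FOR TABLE systables;
UPDATE STATISTICS FOR TABLE syscolumns;
```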
For conversion from an Informix Dynamic Server 9.14 database server to IBM Informix Dynamic Server 2000 9.21, run UPDATE STATISTICS on the following system tables in 9.21:

  SYSBLOBS
  SYSCOLAUTH
  SYSCOLUMNS
  SYSCONSTRAINTS
  SYSDEFAULTS
  SYSDISTRIB
  SYSFRAGAUTH
  SYSFRAGMENTS
  SYSINDICES
  SYSOBJSTATE
  SYSOPCLSTR
  SYSPROCAUTH
  SYSPROCEDURES
  SYSROLEAUTH
  SYSSYNONYMS
  SYSSYNTABLE
  SYSTABAUTH
  SYSTABLES
  SYSTRIGGERS
  SYSUSERS
  SYSXTDTYPES
  SYSATTRTYPES
  SYSCOLATTRIBS
  SYSCASTS
  SYSXTDTYPEAUTH
  SYSROUTINELANGS
  SYSLANGAUTH
  SYSAMS
  SYSTABAMDATA
  SYSOPCLASSES
  SYSTRACEMSGS
  SYSAGGREGATES

For more information about upgrading to IBM Informix Dynamic Server 2000, Version 9.21, from Universal Server 9.14, Dynamic Server 7.3x, or OnLine Dynamic Server 7.22.UC1 or later, see the Informix Migration Guide.

4. Reversion from Version 9.21 to a 9.14, 7.3x, or 7.2x Database Server

This section provides some guidelines for reverting from IBM Informix Dynamic Server 2000, Version 9.21 to Universal Server, Version 9.14, Dynamic Server, Version 7.3x, or OnLine Dynamic Server, Version 7.22.UC1 or later. For details on the reversion procedure, see the Informix Migration Guide.

You can revert from IBM Informix Dynamic Server 2000, Version 9.21, to Universal Server 9.14, Dynamic Server 7.3x, or OnLine Dynamic Server 7.2 if you have not added any extensions to Version 9.21 of the database server.

When you run BladeManager against a database, you automatically create extensions, because BladeManager registers its utility DataBlade module in that database. If you need to downgrade from Version 9.21 and you have run BladeManager, you must first run BladeManager and specify the following command to remove the BladeManager extensions:

The following restrictions apply to reversion from 9.21 to 9.14, 7.3x, or 7.2x:

  1. You cannot revert a database that was created on the 9.21 database server (reversion will fail). Drop the database before attempting reversion.
  2. You cannot revert to Version 9.14, 7.3x, or 7.2x from a 9.21 database server that has had extensions added.
  3. You cannot revert if you created new data types or routines either explicitly or by registering a different version of a DataBlade module.

    To be able to revert, you need to downgrade any DataBlade module back to the version that was registered prior to conversion and explicitly drop any data types and routines that were created outside of any DataBlade registration. For information on how to use DataBlade modules, see the DataBlade Developers Kit User's Guide and the BladeManager User's Guide.

  4. No new routines should have been created in the converted databases (either implicitly or explicitly).
  5. No new triggers should be defined in the converted databases.
  6. Select triggers should not be in use.
  7. User-defined statistics should not be in use.
  8. No long identifiers or long usernames should be in use.
  9. Before reversion, make sure that the R-tree indexes do not use long identifiers as indexed column names, opclass names, or opclass function names.

    Also, make sure that the following disk structures do not use long identifiers: databases (owner and database name length), tblspaces (owner and tblspace name length), dbspaces (owner and dbspace name length), and chunks (path length).

  10. No storage space should have a name more than 18 bytes long.
  11. No in-place ALTER TABLE statement should be pending against any table.

If a user table has an incomplete in-place ALTER operation, then you need to ensure that the in-place ALTER operation is complete by running a dummy update statement against the table. If the reversion process does not complete successfully because of in-place ALTER operations, it lists all of the tables that need dummy updates. You need to perform a dummy update on each of the tables in the list before you can revert to the older database server.
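
A dummy update only needs to set one column to itself so that every row is rewritten in its final format. The sketch below builds such a statement; the table and column names are illustrative placeholders, to be replaced with the tables that the reversion process lists.

```python
# Sketch: build the "dummy update" statement that completes a pending
# in-place ALTER by rewriting every row of the table. Setting a column
# to itself changes no data. The table and column names below are
# illustrative placeholders only.

def dummy_update(table, column):
    """Return an UPDATE that sets a column to itself on every row."""
    return f"UPDATE {table} SET {column} = {column};"

stmt = dummy_update("customer", "customer_num")
print(stmt)
```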

If an in-place ALTER operation is incomplete against a system table, run one of the following scripts while connected to the database.

9.21 to 9.14 reversion:

9.21 to 7.3x or 7.2x reversion:

  12. No fragment expressions or constraints created on the 9.21 database server should exist in the databases.
  13. Fragment strategies that existed before conversion to the 9.21 database server cannot be changed by using ALTER TABLE or ALTER INDEX statements.

The following restrictions also apply to reversion from 9.21 to 9.14:

  1. No new routine languages should be defined in the converted databases.
  2. No new language authorizations should have been granted in the converted databases.
  3. No new operator classes, casts, or extended types should be defined on the 9.21 database server.

The following restrictions also apply to reversion from 9.21 to 7.3x or 7.2x:

  1. To revert a database from 9.21 to 7.3x or 7.2x, no semi-detached indexes should be in the database.
  2. The databases cannot have tables whose primary access method is a user-defined access method.
  3. Databases cannot have typed tables.
  4. Tables cannot have any user-defined type columns.
  5. Tables cannot have named row types.
  6. All indexes must be B-tree indexes with a total key length less than or equal to 255.
  7. Tables cannot have any functional or VII indexes.
  8. Databases cannot use any extensibility features, including user-defined access methods, user-defined types, user-defined aggregates, routine languages, language authorizations, trace messages, trace message classes, operator classes, errors, type authorizations, and casts.

After a successful reversion, you need to run UPDATE STATISTICS on some of the system catalog tables in your databases. For reversion from IBM Informix Dynamic Server 2000 Version 9.21 to a 7.2x or 7.3x database server, run UPDATE STATISTICS on the following system tables in 7.2x or 7.3x:

  
  SYSBLOBS         SYSFRAGMENTS     SYSSYNONYMS
  SYSCOLAUTH       SYSINDEXES       SYSSYNTABLE
  SYSCOLUMNS       SYSOBJSTATE      SYSTABAUTH
  SYSCONSTRAINTS   SYSOPCLSTR       SYSTABLES
  SYSDEFAULTS      SYSPROCAUTH      SYSTRIGGERS
  SYSDISTRIB       SYSPROCEDURES    SYSUSERS
  SYSFRAGAUTH      SYSROLEAUTH

For reversion from IBM Informix Dynamic Server 2000 Version 9.21 to a 9.14 database server, run UPDATE STATISTICS on the following system tables in 9.14:

  
  SYSBLOBS         SYSPROCAUTH      SYSCOLATTRIBS
  SYSCOLAUTH       SYSPROCEDURES    SYSCASTS
  SYSCOLUMNS       SYSROLEAUTH      SYSXTDTYPEAUTH
  SYSCONSTRAINTS   SYSSYNONYMS      SYSROUTINELANGS
  SYSDEFAULTS      SYSSYNTABLE      SYSLANGAUTH
  SYSDISTRIB       SYSTABAUTH       SYSAMS
  SYSFRAGAUTH      SYSTABLES        SYSTABAMDATA
  SYSFRAGMENTS     SYSTRIGGERS      SYSOPCLASSES
  SYSINDICES       SYSUSERS         SYSTRACEMSGS
  SYSOBJSTATE      SYSXTDTYPES      SYSAGGREGATES
  SYSOPCLSTR       SYSATTRTYPES

When reverting back to a previous version of the server, do not reinitialize the database server by using the -i command-line parameter. If you convert from an older version of the server to a newer version, and if you then decide to revert back to the older version, you will see a message similar to the following:

In the second-to-last line of that message, reinitializing refers to restarting the database server (sometimes called reinitializing shared memory), not reinitializing the existing root dbspace. Using the -i parameter would reinitialize the root dbspace, which would destroy your databases. Do not use the -i parameter.

For more information about reverting from IBM Informix Dynamic Server 2000 Version 9.21 to an older database server, see the Informix Migration Guide.


G. Limits in IBM Informix Dynamic Server 2000

The following table lists selected capacity limits and system defaults for this release of IBM Informix Dynamic Server 2000.

 
System-Level Parameters                                     Maximum Capacity per Computer System
  IBM Informix Dynamic Server 2000 systems per computer     255
  (dependent on available system resources)
  Maximum number of accessible remote sites                 Machine specific

Table-Level Parameters (based on 2-kilobyte page size)      Capacity per Table
  Data rows per fragment                                    4,277,659,295
  Data pages per fragment                                   16,775,134
  Data bytes per fragment (excludes smart large objects     33,818,671,136
  (BLOB, CLOB) and simple large objects (BYTE, TEXT)
  created in blobspaces)
  Binary large object bytes                                 2**31
  Row length                                                32,767
  Number of columns                                         32,000
  Columns per index                                         16
  Bytes per index                                           390

Access Capabilities                                         Maximum Capacity per IBM Informix
                                                            Dynamic Server 2000 System
  Maximum number of databases                               21 million
  Maximum number of tables                                  477,102,080
  Maximum active users (minus the minimum number of         32,000 user threads
  system threads)
  Maximum active users per database and table (also         32,000 user threads
  limited by the number of available locks, a tunable
  parameter)
  Maximum number of open tables                             32,000
  Maximum number of open tables per user and join           32,000
  Maximum locks per system and database                     8 million
  Maximum number of page cleaners                           128
  Maximum number of recursive synonym mappings              16
  Maximum number of tables locked by user                   32
  Maximum number of cursors per user                        Machine specific
  Maximum chunk size                                        2 gigabytes
  Maximum number of 2-kilobyte pages per chunk              1 million
  Maximum number of open BLOBs (applies only to simple      20
  large objects: TEXT and BYTE data types)
  Maximum number of B-tree levels                           20
  Maximum amount of decision support memory                 Machine specific

IBM Informix Dynamic Server 2000 System Defaults
  Table lock mode                                           Page
  Initial extent size                                       8 pages
  Next extent size                                          8 pages
  Read-only isolation level (with database transactions)    Committed Read
  Read-only isolation level (ANSI-compliant database)       Repeatable Read

ON-Monitor Statistics
  Number of displayed user threads                          1000
  Number of displayed chunks                                1000
  Number of displayed dbspaces                              1000
  Number of displayed databases                             1000
  Number of displayed logical logs                          1000

VII. J/Foundation

This section describes the J/Foundation feature of IBM Informix Dynamic Server 2000.

A. J/Foundation for UNIX Systems

IMPORTANT: This feature is supported on both UNIX and Windows NT platforms, but the subsections that follow describe UNIX and Linux systems only. (See "J/Foundation for Windows NT Systems" later in this section for details on using J/Foundation on Windows NT systems.)

1. Introduction

If you have purchased IBM Informix Internet Foundation.2000, this release supports the following features:

2. Installation and Server Configuration

Follow these steps to install and configure your J/Foundation software:

  1. Obtain and install the Foundation.2000 release, then cd to $INFORMIXDIR/extend/krakatoa. This directory will be referred to as <jvphome>.
  2. Edit the JVPPROPFILE, .jvpprops, to change the trace level or other properties if necessary. See the file <jvphome>/.jvpprops.template for an example. You must copy the <jvphome>/.jvpprops.template file to .jvpprops regardless of whether you change any properties, because a .jvpprops file is required. Likewise, you must copy Informix.policy.std to Informix.policy.
  3. Include <jvphome>/krakatoa_g.jar and <jvphome>/jdbc_g.jar in CLASSPATH so that you can compile UDR source files that use the related packages.
  4. Configure the database server's ONCONFIG parameters. Set the J/Foundation parameters in the onconfig file; see the Creating UDRs in Java manual for more information on these parameters. An example setting follows:


    For this release, jdbc(_g).jar is the server-side JDBC driver.
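
A representative set of these J/Foundation entries might look like the following sketch. All values shown here (paths, jar locations, number of JVPs) are illustrative assumptions for a typical UNIX installation, not required settings; use the values appropriate to your own environment.

```
# Illustrative J/Foundation onconfig entries (example values only)
VPCLASS      jvp,num=1                                # Java virtual-processor class
JVPHOME      /usr/informix/extend/krakatoa            # J/Foundation directory (<jvphome>)
JVPPROPFILE  /usr/informix/extend/krakatoa/.jvpprops  # JVP properties file
JVPLOGFILE   /usr/informix/jvp.log                    # JVP log file
JVPCLASSPATH /usr/informix/extend/krakatoa/krakatoa_g.jar:/usr/informix/extend/krakatoa/jdbc_g.jar
```

If debugging is not required, point JVPCLASSPATH at the nondebuggable krakatoa.jar and jdbc.jar instead of the _g versions.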

  5. Ensure that the default smart-large-object configuration parameter, SBSPACENAME, is defined in the onconfig file, and that the sbspace that it specifies exists. The database server uses this sbspace as temporary storage during some Java-related operations.
  6. By default, JVPs bring up the JVM with a heap size of 16 megabytes. You can change this size with the JVM_MAX_HEAP_SIZE environment variable; set it to the maximum heap size that the JVM needs, based on the estimated requirements of the application.
  7. Set the environment variable JAR_TEMP_PATH to point to a directory in the server's local file system where J/Foundation can temporarily store copies of .jar files during execution of the jar management procedures (install_jar, replace_jar, and so on). This directory should be readable and writable by the user who brings up the 9.2 server instance; the remaining permissions can be adjusted to the level of security desired for jar files. If this environment variable is not set, temporary copies of jar files are created in the /tmp directory of the server's local file system.
  8. Thread pooling is available as a performance enhancement from the 9.21.UC4/TC4 release onwards. It avoids the overhead of creating threads on the fly: when a thread is needed to perform a task, it is simply allocated from a pool of threads that have already been created. Thread pooling properties can be set in the JVPPROPFILE (for example, $INFORMIXDIR/extend/krakatoa/.jvpprops). The two properties that affect thread pooling are the pool size and the patrol interval.

    The pool size is the initial number of threads created in the pool. The patrol interval controls a patrol thread that runs every n minutes and destroys threads that have not been used within a specified interval of time. These two properties are set as follows:

    
    
    
    
    
    
    

    The thread pool or the patrol thread can be disabled by explicitly setting the properties to 0. If not specified in the JVPPROPFILE, the default pool size is 20, and the default patrol interval is 5 minutes.
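
The pool-plus-patrol design described above is not specific to Java. The following Python sketch, an illustration of the concept rather than the J/Foundation implementation, shows the idea: a fixed pool of worker threads serves tasks from a queue, while a patrol thread periodically retires a worker when the pool has been idle past a cutoff.

```python
import queue
import threading
import time

class PatrolledThreadPool:
    """Minimal thread pool with a patrol thread that retires idle workers.

    Conceptual sketch of the pool-size / patrol-interval idea only;
    not the actual J/Foundation implementation.
    """

    _STOP = object()  # sentinel telling a worker thread to exit

    def __init__(self, pool_size=20, patrol_interval=5.0, idle_cutoff=60.0):
        self.tasks = queue.Queue()
        self.idle_cutoff = idle_cutoff
        self.last_used = time.monotonic()
        self.workers = []
        self.lock = threading.Lock()
        for _ in range(pool_size):          # pre-create the pool
            t = threading.Thread(target=self._worker, daemon=True)
            self.workers.append(t)
            t.start()
        if patrol_interval > 0:             # patrol disabled when set to 0
            threading.Thread(target=self._patrol,
                             args=(patrol_interval,), daemon=True).start()

    def _worker(self):
        while True:
            task = self.tasks.get()
            if task is self._STOP:
                return
            func, args, done = task
            done.put(func(*args))           # hand the result back
            self.last_used = time.monotonic()

    def _patrol(self, interval):
        # Every `interval` seconds, retire one worker if the pool has
        # been idle longer than the cutoff (some worker picks up _STOP).
        while True:
            time.sleep(interval)
            with self.lock:
                if (time.monotonic() - self.last_used > self.idle_cutoff
                        and len(self.workers) > 1):
                    self.tasks.put(self._STOP)
                    self.workers.pop()

    def submit(self, func, *args):
        """Run func(*args) on a pooled thread and block for the result."""
        done = queue.Queue()
        self.tasks.put((func, args, done))
        return done.get()

pool = PatrolledThreadPool(pool_size=4, patrol_interval=0.5, idle_cutoff=2.0)
results = [pool.submit(lambda x: x * x, n) for n in range(5)]
print(results)
```

As in J/Foundation, setting the patrol interval to 0 disables the patrol thread entirely.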

3. Documentation

Information pertaining to J/Foundation can be found in these documents:

There are also some code examples in the examples subdirectory. See the README file in that directory for details.

Visit the JavaSoft web site (http://java.sun.com) for information about JDBC 1.0 and JDBC 2.0.

4. Known Problems

  1. The J/Foundation JDBC driver does not support CREATE DATABASE or DROP DATABASE, and it does not support switching databases (via the DATABASE command).
  2. Performance enhancements for the execution of Java user-defined routines are currently in progress.
  3. For better performance, minimize the level of tracing. The trace levels are described in the example properties file in the distribution directory (.jvpprops.template). Another way to improve performance is to use the nondebuggable versions of the JDK libraries, the J/Foundation jar file, and the JDBC driver jar file, and to set the corresponding parameters in the server configuration file.

    NOTE: In case of failures, high levels of tracing and debuggable versions of libraries will provide maximum information in identifying the source of the failure.

  4. This release requires users to explicitly call the close() method on the instances of the following classes to avoid memory leaks:
  5. When using the JDBC batch update feature, limit the number of queries in a batch to about 300. This limitation is imposed by the server layer that the JDBC driver uses.
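
Because of that ceiling, a large batch should be split before being handed to the driver's addBatch()/executeBatch() calls. The helper below is a generic, illustrative sketch of that splitting; the statement text is a placeholder.

```python
# Sketch: split a large list of SQL statements into batches no larger
# than the roughly 300-statement limit that the server layer imposes
# on JDBC batch updates. Illustrative helper only.

MAX_BATCH = 300

def split_into_batches(statements, limit=MAX_BATCH):
    """Yield successive slices of `statements`, each at most `limit` long."""
    for start in range(0, len(statements), limit):
        yield statements[start:start + limit]

stmts = [f"INSERT INTO t VALUES ({i})" for i in range(1000)]
batches = list(split_into_batches(stmts))
print([len(b) for b in batches])
```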

B. J/Foundation for Windows NT Systems

IMPORTANT: This feature is supported on both UNIX and Windows NT platforms, but the paragraphs that follow describe Windows NT systems only. (The preceding section, "J/Foundation for UNIX Systems," describes the use of J/Foundation on UNIX and Linux platforms.)

1. Introduction

If you have purchased IBM Informix Internet Foundation.2000, this release supports the following features:

2. Installation and Server Configuration

Follow these steps to install and configure your J/Foundation software:

  1. Obtain and install the Foundation.2000 release, then cd to %INFORMIXDIR%\extend\krakatoa. This directory will be referred to as <jvphome>.
  2. Edit the JVPPROPFILE, which the Windows NT installer sets to .jvpprops_ol_yourservername, to change the trace level or other properties if necessary. See the file <jvphome>\.jvpprops.template for an example. You must copy the <jvphome>\.jvpprops.template file to your JVPPROPFILE regardless of whether you change any properties, because a properties file is required. Likewise, you must copy Informix.policy.std to Informix.policy.
  3. Include <jvphome>\krakatoa_g.jar and <jvphome>\jdbc_g.jar in CLASSPATH so that you can compile UDR source files that use the related packages.
  4. Configure your IBM Informix database server instance. In addition to the J/Foundation configuration parameters that the installer sets in the onconfig file at install time, you must set the following configuration parameters, which are specific to your environment. An example setting is shown below, where the J/Foundation release is in <jvphome>:


    For this release, jdbc(_g).jar is the interim JDBC driver.

    If debugging is not required, change JVPCLASSPATH to use the nondebuggable version of the .jar files:

  5. Ensure that the default smart-large-object configuration parameter, SBSPACENAME, is defined in the onconfig file, and that the sbspace that it specifies exists.
  6. By default, JVPs bring up the JVM with a heap size of 16 megabytes. You can change this size with the JVM_MAX_HEAP_SIZE environment variable; set it to the maximum heap size that the JVM needs, based on the estimated requirements of the application. On Windows NT, this environment variable must be set in the registry. To set it, start the registry editor, regedit, and navigate through the following sequence:

    
    
    

    To add a new environment variable, choose Edit->New->String Value. Set the value name to JVM_MAX_HEAP_SIZE and the value data to the estimated requirement of the application.

  7. Set the environment variable JAR_TEMP_PATH to point to a directory in the server's local file system where the server can temporarily store copies of jar files during execution of the jar management procedures (install_jar, replace_jar, and so on). On Windows NT, this environment variable must be set in the registry. To set it, start the registry editor, regedit, and navigate through the following sequence:

    To add a new environment variable, choose Edit->New->String Value. Set the value name to JAR_TEMP_PATH and the value data to a directory in your server's local file system.

    This directory should be readable and writable by the user who brings up the 9.2 server instance; the remaining permissions can be adjusted to the level of security desired for jar files. If this environment variable is not set, temporary copies of jar files are created in the \tmp directory of the server's local file system.

  8. Thread pooling is available as a performance enhancement from the 9.21.UC4/TC4 release onwards. It avoids the overhead of creating threads on the fly: when a thread is needed to perform a task, it is simply allocated from a pool of threads that have already been created. Thread pooling properties can be set in the JVPPROPFILE (for example, %INFORMIXDIR%\extend\krakatoa\.jvpprops_ol_yourservername). The two properties that affect thread pooling are the pool size and the patrol interval.

    The pool size is the initial number of threads created in the pool. The patrol interval controls a patrol thread that runs every n minutes and destroys threads that have not been used within a specified interval of time. These two properties are set as follows:

    
    
    
    
    
    
    

    The thread pool or the patrol thread can be disabled by explicitly setting the properties to 0. If not specified in the JVPPROPFILE, the default pool size is 20, and the default patrol interval is 5 minutes.

3. Documentation

Information pertaining to J/Foundation can be found in these documents:

There are also some code examples in the examples subdirectory. See the README file in that directory for details.

Visit the JavaSoft web site (http://java.sun.com) for information about JDBC 1.0 and JDBC 2.0.

4. Known Problems

  1. The J/Foundation JDBC driver does not support CREATE DATABASE or DROP DATABASE, and it does not support switching databases (via the DATABASE command).
  2. Performance enhancements for the execution of Java user-defined routines are currently in progress.
  3. For better performance, minimize the level of tracing. The trace levels are described in the example properties file in the distribution directory (.jvpprops.template). Another way to improve performance is to use the nondebuggable versions of the JDK libraries, the J/Foundation jar file, and the JDBC driver jar file, and to set the corresponding parameters in the server configuration file. (The current Windows NT release does not support debuggable versions of the JDK libraries.)

    NOTE: In case of failures, high levels of tracing and debuggable versions of libraries will provide maximum information in identifying the source of the failure.

  4. When using the JDBC batch update feature, limit the number of queries in a batch to about 300. This limitation is imposed by the server layer that the JDBC driver uses.
  5. During internal stress testing, a JVM_MAX_HEAP_SIZE of 128 megabytes was found to be the optimal value for running tests with a configuration of 2 JVPs and 16 sessions.
  6. During internal stress testing, an onconfig STACKSIZE of 64 was found to be necessary to run tests under a 2-JVP configuration.
  7. This release requires users to explicitly call the close() method on the instances of the following classes to avoid memory leaks:


VIII. Security Alert

A technique for obtaining root access using IBM Informix Software was published on the internet 4 September 2001. A Tech Alert describes the exploit and provides directions for obtaining a script that fixes the permissions on the executables that are used to obtain root access. Please refer to the Tech Alert Index page:

http://www.Informix.com/Informix/services/ilink/alerts/alerts.htm

The specific Tech Alert about the root exploit is on the following web page:

http://www.Informix.com/Informix/services/ilink/alerts/091301_152768n152769n152770n152789.htm

This release of IBM Informix Dynamic Server includes the most important of the changes recommended by the Tech Alert, but you can further improve the security of your system by reading the Tech Alert to understand the issue and by using the script ibmifmx_security.sh, located in the directory $INFORMIXDIR/bin. The header of the script explains how to use it.

You should also take care to ensure that the following security precautions are implemented:

For more information, you can inquire about PTS defects #152768, #152769, #152770, and #152789.

Also, remember to follow these basic security rules for IBM Informix software:

Similarly, all device files (raw disks) and any cooked files that are used for chunks must implement all of the following security features:


IX. Future Discontinuation of Feature Support

Starting with the release of Informix Dynamic Server Version 9.21.UC7, the utility DB/Cockpit will no longer be supported. The functionality provided through this utility can be obtained using ISA.


X. Caveats for 9.21 Early Release (Beta) Testing

A. Java UDRs and Transaction Processing

Transaction processing should be handled outside of Java UDRs. Attempting to handle transaction processing within a Java UDR is not supported and might lead to inconsistent behavior. Thus, a Java UDR can be embedded within a transaction, but it should not contain a transaction or part of a transaction.

B. General Notes and Information about Java

1. JDK 1.2 Support

Version 9.21 is the first release of J/Foundation providing support for JDK, Version 1.2. IBM Informix supports the reference JDK 1.2 from Sun, which is distributed along with the rest of J/Foundation. However, the distribution has only the Java runtime environment (JRE 1.2). To compile user-defined Java routines, use any standard JDK 1.2 production version. Also use the standard production JDK 1.2 for any client-side applications.

NOTE: JRE 1.2 is distributed with J/Foundation solely for the purpose of running database server-side Java user-defined routines. With the JDK 1.2 support in J/Foundation, users can call any standard, documented JDK 1.2 package APIs, such as java.net, java.io, and so on, within their Java user-defined routines. Check the manual for the one or two packages, such as the Swing components, that cannot be used because they do not make semantic sense within Java UDRs. JDBC 2.0 features under the java.sql package in JDK 1.2 can also be used. For the list of JDBC 2.0 features available with Version 9.21, check the manual.

C. Other Notes

The ON-Archive utility included in this release is no longer being enhanced and is scheduled to be removed from a future version of the database server. Customers are advised to begin transitioning to either the ontape or ON-Bar backup utility in its place.

The DB/Cockpit utility provided in this release is no longer being enhanced and is scheduled to be removed from a future version of the database server.

D. Defect Reports

Defect 122682: Select DE_DE database using QA_QA locale setting returns -23101 where -23115 is expected.

An incorrect error message is returned. This message will not be fixed in Version 9.21.

Defect 123282: Unable to perform DELETE (error 12097) on an NLT (raw table) when a rollback of an INSERT operation is performed.

This condition, which occurs after a rollback of an insert operation on a raw table, will not be fixed in 9.21.

Defect 125987: Batch update of 3000 STMTS within a UDR triggers a huge memory consumption, >1.5 gigabytes, and thousands of scan threads.

Huge memory consumption and table-lock overflow during JDBC operations in UDRs are possible in extreme cases. This situation will not be fixed in Version 9.21. Be aware that the batch update implementation has this restriction.



Copyright (c) 2001, IBM Software, Inc. All rights reserved.