Date: 11/14/2001
Version: 9.21
(c) 2001, IBM Corporation.
PROPRIETARY DATA
THIS DOCUMENT CONTAINS TRADE SECRET DATA WHICH IS THE PROPERTY OF IBM CORPORATION. THIS DOCUMENT IS SUBMITTED TO RECIPIENT IN CONFIDENCE. INFORMATION CONTAINED HEREIN MAY NOT BE USED, COPIED OR DISCLOSED IN WHOLE OR IN PART EXCEPT AS PERMITTED BY WRITTEN AGREEMENT SIGNED BY AN OFFICER OF IBM SOFTWARE, INC. THIS MATERIAL IS ALSO COPYRIGHTED AS AN UNPUBLISHED WORK UNDER SECTIONS 104 AND 408 OF TITLE 17 OF THE UNITED STATES CODE. UNAUTHORIZED USE, COPYING OR OTHER REPRODUCTION IS PROHIBITED BY LAW.
THIS PRODUCT INCLUDES CRYPTOGRAPHIC SOFTWARE WRITTEN BY ERIC YOUNG (eay@mincom.oz.au). IT IS PROVIDED BY ERIC YOUNG "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Description: release notes file (without fixed and known bugs) for database server product.
Owner group: Technical Publications
IMPORTANT: The name of the database server has been changed from "Informix Dynamic Server 2000" to "IBM Informix Dynamic Server 2000."
IBM Informix Dynamic Server 2000, Version 9.21, contains new features in the following areas:
All Version 7.31 features are supported in Version 9.21.
The following section provides an overview of new database server features.
Version 9.21 of the High-Performance Loader (HPL) includes the command-line utility onpladm. You can use the onpladm utility to create, modify, describe, list, run, configure, and delete jobs from the command line. For detailed information about the onpladm utility, refer to the online documentation available in HTML format in $INFORMIXDIR/release/en_us/0333/onpladm/index.HTML.
The database server uses the SQL statement cache to store SQL statements that a user executes. When other users execute a statement stored in the SQL statement cache, the database server does not parse and optimize the statement again, which improves performance. The SQL statement cache has been enhanced to support the following items:
For more information on the new features in the SQL statement cache, see the documentation notes for the Performance Guide, Administrator's Guide, and the Administrator's Reference.
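The caching idea described above can be sketched as a memo table keyed by the statement text. The following Python sketch is a hypothetical illustration of the concept, not the server's implementation:

```python
# Hypothetical sketch of an SQL statement cache: parse and optimize once
# per distinct statement text, then reuse the prepared plan across users.
class StatementCache:
    def __init__(self):
        self._plans = {}          # statement text -> "prepared plan"
        self.parse_count = 0      # how many times we actually parsed

    def _parse_and_optimize(self, sql):
        self.parse_count += 1
        return ("plan", sql)      # stand-in for a real query plan

    def get_plan(self, sql):
        # On a cache hit, the expensive parse/optimize step is skipped.
        if sql not in self._plans:
            self._plans[sql] = self._parse_and_optimize(sql)
        return self._plans[sql]

cache = StatementCache()
cache.get_plan("SELECT * FROM customer")   # first user: statement is parsed
cache.get_plan("SELECT * FROM customer")   # second user: cache hit, no parse
```

Repeated executions of the same statement text reuse the cached plan, so the parse and optimize cost is paid once per distinct statement.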
Previously, you could use DB-Access to access synonym names only if the remote database server was Version 9.x. You can now access synonym names on remote Version 7.x database servers.
The following section provides an overview of new extensibility features. For more information, see the documentation notes for the DataBlade API Programmer's Manual.
You can now write user-defined routines in C++ with fewer restrictions. You can use the virtual inheritance feature of C++ without restriction.
You must still follow coding guidelines for C, including those for the use of system calls and memory allocation.
The DataBlade API now provides functions that can perform the following tasks:
The new mi_fp_funcname() function allows a C user-defined routine to get the SQL name of a function.
The onbar command line has been cleaned up. The following example shows the new syntax for a log backup:
The old syntax (onbar -l) still works.
The following section provides an overview of new J/Foundation features. For more information, see the documentation notes for Creating UDRs in Java.
IBM Informix Internet Foundation 2000, Version 9.21, supports Java 2 and includes a tested version of the Java Runtime Environment (JRE). The database server supports the 1.2 classic version of the Java Virtual Machine (JVM).
The default values of the JDKVERSION, JVPJAVAHOME, JVPJAVALIB, and JVPJAVAVM parameters in the ONCONFIG file have changed for IBM Informix Internet Foundation 2000, Version 9.21.
J/Foundation supports the following GLS features:
IBM Informix provides the script update_jars.sql to update the three-part names of installed Java archive files (jar files) when you rename the database to which the jar files belong. You must execute the update_jars.sql script in the database after you have renamed it. You need to execute the update_jars.sql script only if you have renamed a database that has one or more installed jar files.
J/Foundation supports the JVM_MAX_HEAP_SIZE, JAR_TEMP_PATH, JAVA_COMPILER, and AFDEBUG environment variables. For more information, see the documentation notes for Creating UDRs in Java and the Informix Guide to SQL: Reference.
The database server dynamically drops JVP virtual-processor classes.
Version 9.21 provides full J/Foundation support for:
You can now write user-defined routines and DataBlade modules in Java. For more information, see the documentation notes for Creating UDRs in Java and Extending Informix Dynamic Server 2000.
Version 9.21 supports Informix MaxConnect, a separately orderable product, through two new network protocols: ontliimc and onsocimc. Informix MaxConnect enables IBM Informix Dynamic Server 2000 to support greatly increased numbers of client connections. MaxConnect is a new software tier, introduced between the database server and its clients, that transparently funnels multiple client connections onto a smaller number of server connections. The database server is freed from managing thousands of client connections, which improves response time and decreases CPU cost for the database server.
The documentation notes and release notes for this database server release are provided in HTML format for improved readability. A table of contents includes links to all the documentation notes and release notes included in this release. The table of contents file is $INFORMIXDIR/release/en_us/0333/DOCSUNIXTOC.HTML. The Release Notes Addendum, which lists the PTS defects that have been fixed in Version 9.2x, remains in ASCII format.
The previous release of IBM Informix Dynamic Server 2000, Version 9.20, contained new features since the 9.14 release in the following areas:
IBM Informix Dynamic Server 2000, Version 9.20, also contains features from Version 7.30 of IBM Informix Dynamic Server.
Version 9.20 of IBM Informix Dynamic Server 2000 has the following extensibility enhancements:
Version 9.20 of IBM Informix Dynamic Server 2000 provides the following performance improvements:
Version 9.20 and Version 9.21 of IBM Informix Dynamic Server 2000 provide the following special features:
Informix Server Administrator (ISA), formerly known as "ONWeb," is a browser-based tool that provides Web-based system administration for the entire range of IBM Informix database servers. ISA is the first in a new generation of browser-based, cross-platform administrative tools. It provides access to every IBM Informix database server command-line function and presents the output in an easy-to-read format.
Examples of ISA functionality include:
ISA works with all IBM Informix 7.2, 7.3, 9.2, 8.21, and 8.3 database servers. The database server CD-ROM distributed with your product includes ISA. For information on how to install ISA, see the following file on the CD-ROM.
Operating System | File |
---|---|
UNIX | /SVR_ADM/README |
Windows NT | \SVR_ADM\readme.txt |
Version 9.20 of IBM Informix Dynamic Server 2000 also has features first released in Version 7.30. These features fall into the following areas:
The feature set of Version 9.20 of IBM Informix Dynamic Server 2000 is a logical superset of the Version 9.14 features and includes features that were first released in that version, and later extended to Version 9.20. These Version 9.14 features fall into the following categories:
Users can bypass the IBM Informix buffer pool (and the LRU queues) for database server I/O of smart large objects. Users can open a smart large object with lightweight I/O in either the ESQL/C or the DataBlade API interface.
This release of IBM Informix Dynamic Server 2000 supports dynamic growth of the lock table in case of an overflow. On every overflow, Dynamic Server allocates as many additional locks as currently exist, doubling the total, subject to a maximum of 100,000 (100 K) locks in each allocation.
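The growth pattern (double on each overflow, capped per allocation) can be sketched as follows; treating the cap as 100,000 locks per allocation is an assumption based on the figure in the text, and the code is an illustration, not the server's implementation:

```python
# Sketch of doubling growth with a per-allocation cap, as described for
# the lock table: each overflow adds min(current_size, CAP) new locks.
CAP_PER_ALLOCATION = 100_000   # assumed cap (100 K locks per allocation)

def grow_lock_table(current_size):
    """Return the new lock-table size after one overflow."""
    return current_size + min(current_size, CAP_PER_ALLOCATION)

size = 50_000
size = grow_lock_table(size)   # doubles to 100,000
size = grow_lock_table(size)   # doubles to 200,000
size = grow_lock_table(size)   # capped: adds only 100,000 -> 300,000
```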
All CPU and user-defined VP classes run as nonroot by default. This enables users to put their UDR object libraries in a location other than /usr/lib.
An EXECUTE FUNCTION statement can be used within SPL in a FOREACH EXECUTE statement.
An INSERT trigger can update columns for which the trigger event did not supply any value (not even NULL).
This change limits the scope of the SQL statement SET SESSION AUTHORIZATION to the current database only.
The INFORMIXCONTIME environment variable must be set to a large number for the DCE CSM to work (this is not necessary for non-CSM environments).
Optical support is now part of the database server.
The TP/XA libraries are available for client applications.
ON-Bar has been enabled for GLS.
The oncheck utility functions against tables and indexes with user-defined access methods.
The onstat utility displays information about user-defined access methods that have been used since IBM Informix Dynamic Server 2000 was initialized.
The 9.12 and later releases of the DataBlade API provide the following functions:
The 9.12 and later releases of IBM Informix Dynamic Server contain superstores_demo, an enhanced version of the stores_demo schema and database that illustrates some of the new features of IBM Informix Dynamic Server Version 9.1x and IBM Informix Dynamic Server 2000 Version 9.21.
The superstores_demo database does not replace the stores_demo database. Both databases are available. The superstores_demo database schema is not backward-compatible with stores_demo. The IBM Informix ESQL/C demonstration programs work only on stores_demo.
The ifxrltree.1.00 DataBlade module is bundled with Version 9.14 of the database server. (Version 9.21 of IBM Informix Dynamic Server 2000 bundles ifxrltree.2.00.)
The following sections describe issues and restrictions that can affect various features of Version 9.21.
Version 9.21 of IBM Informix Dynamic Server 2000 does not support the following features, among others:
A future release of the database server will not support the following features:
ON-Archive will become obsolete in the next release of this database server. The replacement for ON-Archive is a combination of ON-Bar and a storage manager (SM). ON-Bar and the SM provide more power than ON-Archive, with an easier-to-use interface. With ON-Bar, there is also the potential to upgrade to a more powerful storage management subsystem available in the market. The next release of this database server will continue to support the ontape utility.
In this release, the database server supports the older CDT format only under certain circumstances. In a future release, the database server will not support the older CDT format. For more information about the old and new CDT formats in this release of the database server, see "Collection-Derived Tables for SPL Routines" later in these release notes.
A future release of the database server might not support the DEFAULT_ATTACH environment variable, which changes the database server default for creating indexes from detached to attached. For more information, see "Detached Indexes" later in these release notes.
The following restrictions apply to the High-Performance Loader:
Additional feature restrictions are as follows:
Because the finderr filename is the same for both the 9.21 database server and the 2.2 client products, install the database server (with a later release date) after you install the client.
The system catalog tables and the sysmaster database for IBM Informix Dynamic Server 2000, Version 9.21, are different from those for IBM Informix database servers earlier than Version 9.20. Some column widths, data types, and treatment of null values have changed. Also, columns have been added to some tables, and some tables have been added or deleted. For details, see the documentation notes for the Informix Migration Guide.
In 7.x releases, sysindexes is a table. In 9.20 and 9.21, sysindexes is a view.
You can verify this by checking the table type recorded for sysindexes in systables.
In 7.x, this check returns 'T' (a table). In 9.21, the same check returns 'V' (a view). The difference follows from the expected and documented behavior for tables and views: because sysindexes is now a view, it is subject to the documented guidelines for views. For example, an ALTER TABLE statement on sysindexes fails, because ALTER TABLE is not allowed on views.
By default, all new indexes that the CREATE INDEX statement creates in IBM Informix Dynamic Server 2000 are detached. IBM Informix Dynamic Server 2000, Version 9.21 supports attached indexes that were created in any 7.x release.
To get the 7.x attached index behavior, you can set the environment variable DEFAULT_ATTACH in the application environment. You can attach only B-tree indexes that are nonfragmented and that are on non-fragmented tables (the 7.x behavior). All other indexes, including extensibility related indexes such as R-trees and UDT indexes, must be detached. A future release of the database server might not support the DEFAULT_ATTACH environment variable.
Because of the increased space needed in the root chunk to support long identifiers in Version 9.21 of IBM Informix Dynamic Server 2000, it is possible to fill the root chunk in the rootdbs. One reason for this is that certain tables in the sysmaster database will grow extremely large when a large number of tables is created in the system.
If the root chunk becomes full, it will no longer be possible to add additional chunks to the system, even though disk space is available for the chunk itself. This is because metadata about the chunk must be stored in the root chunk of the rootdbs.
If the chunk creation fails for this reason, the following message will be reported:
Unable to extend %d reserved pages for Chunk descriptor
page in ROOT chunk.
If you see this message when creating a new chunk, and there is sufficient space on disk for the requested chunk, then you can create additional space by dropping the sysmaster database (see steps below). This works because, although the sysmaster database must reside in the rootdbs, it does NOT need to reside in the root chunk. Therefore, dropping sysmaster frees room in the root chunk, which can then be used for other things. When sysmaster is re-created, it is (partially) moved to other chunks in the rootdbs.
Here are steps to work around this root chunk space limitation:
UNIX: $INFORMIXDIR/etc/conv/rebuildsmi.sh -droponly
Windows NT: %INFORMIXDIR%\etc\conv\rebuildsmi.bat -droponly
UNIX: $INFORMIXDIR/etc/conv/rebuildsmi.sh -createonly
Windows NT: %INFORMIXDIR%\etc\conv\rebuildsmi.bat -createonly
Case-insensitive schemas might need to be revised because IBM Informix Dynamic Server 2000, Version 9.21 has a case-sensitive name space. This can affect the resolution of blobspaces and SPL names.
The DECIMAL precision has been increased for conversions from FLOAT and SMALLFLOAT to the DECIMAL data type, as follows.
From | To | Old Precision | New Precision |
---|---|---|---|
SMALLFLOAT | DECIMAL | 8 | 9 |
FLOAT | DECIMAL | 16 | 17 |
The change in the DECIMAL precision might also be visible when floating-point data are converted to ASCII text, because if a floating-point value fits within the range of a DECIMAL data type, the database server first internally converts the floating-point value to a DECIMAL value.
DB-Access now displays an extra digit for SMALLFLOAT types because, by default, DB-Access displays 14 characters of floating-point data. To reduce the number of digits that DB-Access displays, you can use the DBFLTMASK environment variable.
In ESQL/C applications built with Client SDK 2.30 and later, calls to the deccvflt( ) or deccvdbl( ) function now result in an extra decimal digit in the return value. For example, if the C constant 123.4 is assigned to a C float variable, its binary representation is equivalent to 123.400001525.... Before the change, the following function call generated the decimal number 123.4 because, in the binary representation, the 9th digit is 1, which has no rounding effect on the 8th digit.
After the change, the same function call generates the decimal number 123.400002, because the 5 in the 10th digit rounds the 9th digit up from 1 to 2.
As another example, assume the C constant 8788888.88 is assigned to a C double variable. The binary representation of 8788888.88 is equivalent to 8788888.880000000819...
Before the change, the following function call generated the decimal number 8788888.880000001 because, in the binary representation, the 17th digit is 8, which rounds the 16th digit up from 0 to 1.
After the change, the same function call generates the decimal number 8788888.8800000008, because the 18th digit is 1 and has no rounding effect on the 17th digit.
The rationale for the preceding change is as follows: When a binary IEEE 4-byte floating-point value is converted to the closest eight-digit decimal number, or an 8-byte floating-point value is converted to the closest sixteen-digit decimal number, it is not always possible to uniquely recover the binary representation of the number from the decimal representation. If nine decimal digits, however, are used for the 4-byte floating-point value (and seventeen decimal digits are used for the 8-byte floating-point value), then converting the decimal value to the closest binary number will recover the original binary representation of the floating-point number.
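This round-trip property is easy to verify with IEEE arithmetic in any language; here is a Python check (using struct to emulate 4-byte floats — an illustration, not Informix code):

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) to the nearest IEEE 4-byte float."""
    return struct.unpack("f", struct.pack("f", x))[0]

# 8-byte double: 17 significant decimal digits always round-trip exactly.
d = 8788888.88
assert float("%.17g" % d) == d           # 17 digits recover the double
print("%.16g" % d)   # 8788888.880000001  (old 16-digit rendering)
print("%.17g" % d)   # 8788888.8800000008 (new 17-digit rendering)

# 4-byte float: 9 significant decimal digits always round-trip exactly.
f = to_float32(123.4)                    # stored as ~123.400001525...
assert to_float32(float("%.9g" % f)) == f
print("%.8g" % f)    # 123.4       (old 8-digit rendering)
print("%.9g" % f)    # 123.400002  (new 9-digit rendering)
```

The printed values match the examples in the text: the extra ninth (or seventeenth) digit is exactly what distinguishes two adjacent binary floating-point values.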
This section does not apply if this is a first-time installation of IBM Informix, or if two-digit years are not used in the expressions of the following objects:
NOTE: Some of these features might not be supported in this version of the product.
This release changes when date literals with two-digit years within expressions of objects are evaluated with respect to the settings of relevant environment variables, such as (but not limited to) DBCENTURY. Prior to this release, IBM Informix interpreted two-digit-year dates in the expressions of these objects according to the environment variable settings that prevailed at the run time of the object. Starting with Version 9.20, however, the date literal is always interpreted using the environment variable settings that prevailed at the creation time (or the time of last modification) of the object with which the date literal is associated. The settings of environment variables at run time are not used by the object. This applies only to date strings with two-digit years in the expressions of the objects mentioned above; it does not apply if four-digit years are used.
The following two steps are required to take advantage of this change that was introduced in the Version 9.21 release:
For fragmentation expressions, redefining means detaching and reattaching the expression. For all other objects, the object must be dropped from the database and re-created. Only after the objects are redefined using this new server will the date literals in the expressions within objects be interpreted according to the environment variable settings at the time when the object was created or last modified.
The reference date used for this interpretation is the creation date or the last modification date of the object, and not the current date when a query is run.
If the objects are not redefined using this new server, their behavior remains the same as before the upgrade. However, because any new objects created after the software upgrade behave differently from those created before it, administering the database may become difficult: the database will contain a mix of objects with old and new behavior (with respect to when two-digit years within expressions of objects are evaluated). Therefore, IBM Informix recommends that you follow the two previous upgrade steps.
Lastly, to avoid any possibility of misinterpreting two-digit years within the objects, IBM Informix recommends that you take this opportunity to change two-digit years to four-digit years where possible.
This section does not apply to first-time installations of IBM Informix database products or if date literals are not present in the following objects within the database:
NOTE: Some of these features might not be supported in this version of the product.
In the rare case that the setting of DBDATE prevailing at creation time or time of last modification of the object differs from the one that is in effect at the run time of the object, you might either get a runtime error from the database server or get erroneous results due to incorrect interpretation of the date literal.
In order to maintain consistency, starting with objects created or modified using this release, the date literals within expressions of objects will be evaluated according to the setting of DBDATE prevailing at creation time or at the time of last modification of the object. The settings of environment variables at runtime of the object will not be used to evaluate the date literal within the objects. However, the prevailing setting at runtime of the query will still be in effect for date-related data processed within the query.
If your operating environment is such that the objects were created under one set of assumptions about the DBDATE setting and the runtime environment uses a different setting, you may encounter problems. IBM Informix recommends that you modify usage of the database so that the settings of DBDATE at creation, modification, and run time are consistent throughout.
IBM Informix recommends that you use 4-digit years.
Arithmetical operations (add, subtract, multiply, or divide) on two floating-point numbers have been changed when one or both of the operands are of type FLOAT or SMALLFLOAT. Previously, the database server converted the numbers to DECIMAL as necessary, used decimal floating-point arithmetic, and returned a result of DECIMAL data type. In the 9.21 release, the database server converts the numbers to FLOAT as necessary, uses binary floating-point arithmetic, and the result is of FLOAT data type. This change allows arithmetical operations to be performed on larger floating-point numbers, as the FLOAT type can store larger values than the DECIMAL type.
As a consequence of the above change, the database server now returns FLOAT for AVG, STDEV, and VARIANCE of a FLOAT or SMALLFLOAT type. The value these functions return might be slightly different than in previous releases because of the difference in precision for the FLOAT and DECIMAL data types.
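The difference between decimal and binary floating-point arithmetic is visible in any environment that provides both; here is a Python illustration, with the decimal module standing in for the server's DECIMAL arithmetic:

```python
from decimal import Decimal

# Decimal floating-point arithmetic (the pre-9.21 behavior for FLOAT
# operands) represents 0.1 and 0.2 exactly, so the sum is exactly 0.3.
dec_sum = Decimal("0.1") + Decimal("0.2")
assert dec_sum == Decimal("0.3")

# Binary floating-point arithmetic (the 9.21 behavior) cannot represent
# 0.1 or 0.2 exactly, so the sum differs slightly from 0.3.
bin_sum = 0.1 + 0.2
assert bin_sum != 0.3
print(bin_sum)   # 0.30000000000000004
```

This is why aggregate results over FLOAT or SMALLFLOAT columns can differ slightly from previous releases: the same inputs pass through a different arithmetic.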
The stdev( ) function returns a zero variance for a count of 1. You can exclude this special case through appropriate query construction (for example, "having count(*) > 1"); otherwise, even a few single-row groups in a data set can distort the rest of the query result.
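The count-of-1 special case and the suggested guard can be illustrated with Python's statistics module (an analogy only, not the server's code):

```python
import statistics

# A group with a single row: population variance is 0 by definition,
# while sample variance needs at least two data points.
group = [42.0]
assert statistics.pvariance(group) == 0

try:
    statistics.variance(group)     # sample variance of one value
except statistics.StatisticsError:
    print("sample variance undefined for count == 1")

# The guard suggested in the text ("having count(*) > 1") is the same
# idea as filtering out one-row groups before computing the variance.
groups = {"a": [1.0, 2.0, 3.0], "b": [5.0]}
variances = {k: statistics.variance(v) for k, v in groups.items() if len(v) > 1}
```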
Before Version 9.14, the database server allowed untyped null values in row constructors, even though it ignored these null values. In Version 9.14 and later releases, you must explicitly cast any null value that a row-type value contains.
For example, suppose you have the row_t named row type and the tab1 and tab2 tables, defined as follows:
CREATE ROW TYPE row_t (a CHAR(5), b FLOAT);
CREATE TABLE tab1 (col1 row_t);
CREATE TABLE tab2 (col2 row(a INT, b CHAR(5)));
The following examples show the correct way to insert a null value into the named and unnamed row types:
INSERT INTO tab1 VALUES (row(NULL::char(5), NULL::float)::row_t)
INSERT INTO tab2 VALUES (row(NULL::int, NULL::char(5)))
To support row literal values for columns that are defined as named row types or unnamed row types, you must use the ROW( ) constructor. However, you do not need to specify the row value as a quoted string. For more information, see the Informix Guide to SQL: Tutorial.
The SQL function MOD( ) can return INT8 values in addition to INTEGER values. The client application must ensure that any variable that holds the result of the MOD( ) function has a compatible data type.
When you create a DISTINCT data type, the database server automatically creates two explicit casts: one cast from the DISTINCT type to its source type, and another cast from the source type to the DISTINCT type. A DISTINCT type of a built-in type, however, no longer inherits the system-defined casts that are provided for the built-in type.
A DISTINCT type does inherit any user-defined casts that are defined on the source type.
Suppose you create the following DISTINCT types and table:
CREATE DISTINCT TYPE yen AS MONEY;
CREATE DISTINCT TYPE rouble AS MONEY;
CREATE TABLE currency_tab (yen_col yen, rouble_col rouble);
One way to insert values into the currency_tab table is to use two explicit casts to convert the values to the DISTINCT type. The following example shows how to perform the explicit casts.
When the value that you specify in an INSERT or UPDATE statement includes a decimal point, the database server assumes that the value is a DECIMAL type. The database server has a system-defined cast between DECIMAL and MONEY. However, the yen and rouble DISTINCT types do not inherit this cast. Therefore, for each value in the following example, the first cast converts the DECIMAL value to the source type (MONEY); the second cast converts the value from MONEY to the DISTINCT type (yen or rouble).
An alternate way to insert values into the currency_tab table is to specify each DISTINCT-type value as a quoted string. The database server handles a quoted string as LVARCHAR type, and provides implicit casts to handle conversions from an LVARCHAR value to any DISTINCT type (provided that the DISTINCT type is defined on a built-in type). The following INSERT statement is also valid:
Because no casts have been defined to handle conversions between DECIMAL and yen values or between DECIMAL and rouble values, the following INSERT statement fails and returns an error message:
INSERT INTO currency_tab VALUES (4.44::yen, 6.55::rouble)
9634: No such cast.
When a user assigns a value to a CHAR(n) column or variable and the length of that value exceeds n characters, the database server silently truncates the trailing characters without raising an error.
Suppose that you define the following table:
The database server truncates the data values in the following INSERT statements to "jo" and "sa" respectively but does not return a warning:
Thus the semantic integrity of data for a CHAR(n) column or variable is not enforced when the value inserted or updated exceeds length n.
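The silent truncation amounts to keeping only the first n characters of the value. A short Python analogy (the input strings here are hypothetical; the text specifies only the truncated results "jo" and "sa"):

```python
def char_n_assign(value, n):
    """Mimic assigning a string to CHAR(n): silently keep the first n chars."""
    return value[:n]

# Values longer than the column width are cut without any warning,
# as in the "jo" and "sa" truncations described in the text.
assert char_n_assign("john", 2) == "jo"    # hypothetical input value
assert char_n_assign("sally", 2) == "sa"   # hypothetical input value
```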
IBM Informix Dynamic Server 2000 prohibits the redefinition of NULL because allowing such definition would restrict the global scope of the NULL keyword.
IBM Informix Dynamic Server 2000 SQL syntax permits cast expressions in the SELECT list. This means that users can write expressions of the form NULL::datatype, in which datatype is any data type known to the database.
The keyword NULL is a global symbol in the syntactic context of expressions; its scope of reference is global. Within SQL, the keyword NULL is the only syntactic mechanism for accessing a NULL value. Any mechanism that restricts or redefines the global scope of the keyword NULL therefore syntactically disables every cast expression that involves a NULL value, so you must ensure that the keyword NULL retains its global scope in all expression contexts.
For example, consider the following SQL code:
CREATE TABLE newtable
(
null int
);
SELECT null, null::int FROM newtable;
The CREATE TABLE statement is valid because the column identifiers have a scope of reference that is restricted to the table definition; they can be accessed only within the scope of the table.
However, the SELECT statement in the example poses some syntactic ambiguities. Does the identifier null appearing in the SELECT list refer to the global keyword NULL, or does it refer to the column name null that was defined in the CREATE TABLE statement?
NOTE: A SELECT statement of the following form is valid because the null column of newtable is qualified with the table name:
More involved syntactic ambiguities arise in the context of an SPL routine that has a variable named null. An example follows:
CREATE FUNCTION nulltest() RETURNING INT;
DEFINE a INT;
DEFINE null INT;
DEFINE b INT;
LET a = 5;
LET null = 7;
LET b = null;
RETURN b;
END FUNCTION;
EXECUTE FUNCTION nulltest();
When the preceding function executes in DB-Access, in the expressions of the LET statement, the identifier null is treated as the keyword NULL. The function returns a NULL value instead of 7.
Treating null as the declared variable of an SPL routine would restrict the use of a NULL value in the body of the routine. Therefore, the preceding SPL code is not allowed and causes IBM Informix Dynamic Server 2000 to return the following error:
-947 Declaration of an SPL variable named 'null' conflicts
with SQL NULL value.
In IBM Informix Dynamic Server 2000 9.21, collection-derived tables (CDT) are enhanced for SPL routines in a manner that is different from the original CDT. For the new CDT, the following statements return the fields of the SPL collection variable instead of the underlying data type:
DEFINE collection_var SET{row(a INT, b CHAR(5))};
DEFINE an_int INTEGER;
DEFINE a_char5 CHAR(5);
LET collection_var = SET{row(1, 'abc')};
SELECT * INTO :an_int, :a_char5 FROM table(:collection_var);
Under certain circumstances, IBM Informix supports the older CDT format in this release:
DEFINE row_var row(a INT, b CHAR(5));
-- Use the old collection derived table (CDT)
SELECT * INTO :row_var FROM table(:collection_var);
For the older CDT, these statements return the underlying data type of the collection, as follows:
However, this works only if the SELECT statement meets the original requirements of the old CDT format: no WHERE clause and only '*' in the select list.
Any other SELECT format produces results like an SQL table, in the new CDT format. For example, you could issue the query as follows:
SELECT * INTO :an_int, :a_char5
FROM table(:collection_var) t(x,y);
For the new CDT, this query returns the following result:
SELECT * INTO :row_var
FROM table(:collection_var) t(x,y);
The original CDT formats are a deprecated feature for which support will be discontinued in some future release of IBM Informix Dynamic Server 2000. Instead, use the new CDT format, which is described in the Informix Guide to Database Design and Implementation.
SYSTEM statements in an SPL routine are executed only if the current user executing the SPL routine has logged on with a password.
When a SYSTEM statement in an SPL routine executes, the database server waits for the outcome of the execution of the command that the SYSTEM statement specifies. The client application can hang if this command never completes or never returns.
DEFINE variable LIKE serial-type-col is allowed in an SPL routine. For this DEFINE syntax, the database server maps the variable to an INTEGER data type. However, DEFINE variable SERIAL and DEFINE variable SERIAL8 continue to be invalid syntax.
In Version 7.24.UC2 and earlier, if an SPL routine included a query with a GROUP BY clause, and the PDQPRIORITY environment variable was set to any value before (or during) execution of the routine, the session threads for that SPL routine were released as soon as the routine completed its execution.
As a result of the fix for PTS defect 72444, Version 7.24.UC3 and later (including this version) retain the threads allocated for the SPL routine until the session terminates. (This change was not introduced in Version 9.21, but it is documented for the first time in these release notes.)
A known defect (PTS# 128406) can result in unusual locking of sysprocplan and other system catalog tables when executing a stored procedure that uses a collection containing an unnamed ROW type.
This problem manifests itself only when a stored procedure that uses a collection containing an unnamed ROW type is executed for the first time, and it is visible to the customer only if that first execution takes place in the context of a transaction that holds locks on a number of tables for a prolonged period of time.
Once that transaction commits (or rolls back), the performance slowdown will not occur again, with one possible exception: If a change occurs to a database object on which that SPL routine depends, then the SPL routine will be forced to recompile.
The workaround is to execute the SPL routine once, in order to get it optimized.
When a function is overloaded (has the same name as one or more other functions), the database server chooses the most closely matching function, based on the parameters that the function call provides. An overloaded function can be a new function that a user defines and owns, or it can be built into the database server and therefore owned by user Informix. If a built-in function and a user-defined function have identical signatures, the database server chooses the user-defined function.
However, confusion can arise when function names are the same, but the argument types are not exactly the same. The following examples can help clarify what the database server does in such situations.
During the creation of an ANSI-compliant database, user Informix creates two round() functions:
CREATE DATABASE myansidb WITH LOG MODE ANSI;
I. Informix.round(FLOAT, INTEGER DEFAULT 0)
II. Informix.round(DECIMAL, INTEGER DEFAULT 0)
Later, a user (other than Informix) creates an overloaded round() function with a FLOAT parameter:
III. username.round(FLOAT, INTEGER DEFAULT 0)
The following SELECT statements execute round() functions:
1. SELECT round(1.2, 10) FROM ...
2. SELECT Informix.round(1.2::FLOAT, 10) FROM ...
3. SELECT round(1.2::FLOAT, 10) FROM ...
4. SELECT round(float_var, 10) FROM ...
(where float_var is typed FLOAT)
5. SELECT username.round(1.2, 10) FROM ...
In SELECT statement 1, the data type of 1.2 is DECIMAL and the data type of 10 is INTEGER, so the database server chooses round() function II.
In SELECT statement 2, the invocation is explicitly qualified with Informix, so the database server chooses one of the built-in functions that user Informix owns, round() function I.
In SELECT statement 3, the function call matches two functions exactly, round() functions I and III. Because user-defined routines take precedence, the database server chooses round() function III.
SELECT statement 4 is the same case as statement 3: float_var is typed FLOAT, so the database server chooses round() function III.
In SELECT statement 5, the function is explicitly qualified with username, so the database server chooses the function that username owns, round() function III, even though the parameter types (DECIMAL, INTEGER) most closely match round() function II.
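The selection rules walked through above can be sketched as follows. This is a simplified model for illustration only, not the server's actual resolution algorithm; the candidate tuples mirror round() functions I, II, and III from the example.

```python
# Sketch of the overload-resolution rules described above (simplified;
# not the actual server implementation). A candidate is (owner, param_types).

def resolve(call_args, qualifier, candidates):
    """Pick an overload: exact signature matches first; among exact
    matches, a user-defined routine beats the built-in ('informix')."""
    pool = [c for c in candidates if qualifier is None or c[0] == qualifier]
    exact = [c for c in pool if c[1] == call_args]
    if exact:
        # user-defined routines take precedence over built-ins
        exact.sort(key=lambda c: c[0] == "informix")
        return exact[0]
    return pool[0] if pool else None

CANDIDATES = [
    ("informix", ("float", "integer")),    # I
    ("informix", ("decimal", "integer")),  # II
    ("username", ("float", "integer")),    # III
]

# Statement 1: literal 1.2 is DECIMAL, 10 is INTEGER -> function II
print(resolve(("decimal", "integer"), None, CANDIDATES))
# Statement 3: explicit ::FLOAT cast -> I and III match exactly; III wins
print(resolve(("float", "integer"), None, CANDIDATES))
```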
The following notes apply to the DataBlade API:
If you specify an existing file as a trace file in the mi_tracefile_set( ) function, the database server appends trace messages to that file rather than replacing its contents. If the specified file does not exist, the database server creates a new file.
Message text returned by the mi_errmsg( ) function for database server exceptions is always the most-specific text; that is, the text associated with the Informix SQLCODE value. The message text returned is no longer the ANSI or X/Open message text, in cases where this exists.
A separate, additional callback follows the callback for the SQLCODE when an exception has an access-method error code. This error code and message text are available through the mi_errmsg( ) and mi_error_sqlcode( ) functions.
The Informix GLS library is an application programming interface (API) that lets developers of user-defined routines, DataBlade modules, and client LIBMI applications create internationalized applications. The macros and functions of Informix GLS provide access within an application to GLS locales, which contain culture-specific information. For more information on the Informix GLS library, see the Informix Programmer's Manual.
If you are planning to log smart large objects, refer to the description of the onspaces utility in the Administrator's Reference for instructions on how to turn on logging. To check whether your smart large objects are currently logged, use the following command:
In the preceding syntax, sbspace represents the name of the sbspace that contains your smart large objects. If the LO_NOLOG flag appears in the output, the smart large objects in that sbspace are not being logged. If the LO_LOG flag appears, all smart large objects in that sbspace are being logged.
The following restrictions apply to the use of smart large objects (BLOB and CLOB data types):
The -g option is not implemented for creating an sbspace; regardless of how you set this option, its value remains "1".
The -Df MAX_LO_LOCKS feature is not implemented.
The ESQL/C function ifx_lo_stat_mtime_usec( ) is not implemented.
The DataBlade API functions mi_lo_stat_mtime_usec( ) and mi_lo_stat_uid( ) are not implemented.
The moderate integrity feature for smart large objects is not currently implemented. The database server uses high integrity for the format of pages in an sbspace. Use of the MODERATE INTEG keywords in the PUT clause of the CREATE TABLE statement does not cause a syntax error. However, the database server ignores the keywords and uses high integrity instead.
Smart large objects cannot be replicated using Enterprise Replication.
The LO_NOBUFFER flag forces a log flush and a synchronous write in many situations. IBM Informix recommends that you avoid using lightweight I/O with smart large objects smaller than 8080 bytes. (With small objects, do not turn on the LO_NOBUFFER flag.)
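The recommendation above amounts to a simple size test. The 8080-byte threshold comes from the text; the function name and interface are illustrative only:

```python
# Illustrative only: decide whether to request lightweight I/O
# (LO_NOBUFFER) for a smart large object, per the size guidance above.

LIGHTWEIGHT_IO_MIN_BYTES = 8080  # threshold cited in these release notes

def use_lo_nobuffer(object_size_bytes):
    """Avoid LO_NOBUFFER (lightweight I/O) for small objects, because it
    forces a log flush and a synchronous write in many situations."""
    return object_size_bytes >= LIGHTWEIGHT_IO_MIN_BYTES

print(use_lo_nobuffer(4096))     # small object: keep buffered I/O
print(use_lo_nobuffer(1 << 20))  # large object: lightweight I/O is reasonable
```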
The ONWeb utility is replaced in this release by Informix Server Administrator (ISA), which is documented in the online help. This release of Informix Server Administrator does not have a hard-copy manual.
On some UNIX platforms, it is an error to set ROOTOFFSET to 0. When this parameter is set incorrectly, you must reinitialize disk space and reload data to resume proper operation of the database server. Always check your machine notes file for information about correct settings before you configure the database server.
The default checkpoint type for 9.21 is fuzzy. In general, it is best to use fuzzy checkpoints.
You can perform a hard checkpoint by issuing an onmode -c command. For large loads of tables through the buffer cache, such as through INSERT or PUT statements, it is better to force a hard checkpoint by using onmode -c to clean the buffer cache than to let the server perform fuzzy checkpoints on its own.
With fuzzy checkpoints, raising the values of the LRU_MAX_DIRTY and LRU_MIN_DIRTY configuration parameters might help increase transactional throughput because less aggressive cleaning is needed than with hard checkpoints. Do not change the gap between LRU_MAX_DIRTY and LRU_MIN_DIRTY.
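The advice to raise both thresholds while keeping their gap unchanged can be expressed as follows; the helper and the example values are illustrative, not recommended settings:

```python
# Illustrative: raise LRU_MAX_DIRTY and LRU_MIN_DIRTY together,
# preserving the gap between them, per the fuzzy-checkpoint advice above.

def raise_lru_thresholds(lru_max, lru_min, increase):
    gap = lru_max - lru_min          # this gap must stay unchanged
    new_max = lru_max + increase
    return new_max, new_max - gap

print(raise_lru_thresholds(60, 50, 10))  # (70, 60): gap of 10 preserved
```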
Fuzzy checkpoints might result in slightly longer roll-forward fast recovery times than before. The server might occasionally perform a hard checkpoint to avoid loss of logical log records due to log wraparound.
IBM Informix Dynamic Server 2000, Version 9.21, provides a communications support module, called the Simple-Password Communications Support Module (SPWDCSM), that provides password encryption. This encryption protects a password when it must be sent between the client and the database server for authentication. SPWDCSM is available on all platforms.
For details about the configuration for password encryption, see the Administrator's Guide.
IBM Informix Dynamic Server 2000, Version 9.21, uses Informix Storage Manager (ISM) 2.2. When setting up ISM, you might need setup information about the following features:
This section provides information about the first three items.
For information about ISM installation and certification, see "Migration to IBM Informix Dynamic Server 2000, Version 9.21" under "Informix Database Server Products" elsewhere in these release notes. For more information about ISM setup and ISM 2.2 features, see the Informix Storage Manager Administrator's Guide.
ISMData or ISMLogs Name Change
If you change the name of either ISMData or ISMLogs, you also must perform the following steps:
The end user should not use the NSRADMIN character-based user interface unless instructed to do so by IBM Informix Technical Support. Incorrect use of these tools could result in problems with your ISM system. These tools are undocumented.
Year-2000 Compliant Status
ISM 2.2 is Year-2000 compliant. For details, see Chapter 1 of the Informix Storage Manager Administrator's Guide.
ISM supports dates in the year 2000 and beyond. All dates are stored in an internal format that can represent dates from January 1, 1970 through December 31, 2037. ISM correctly interprets the year 2000 as a leap year. When a year is entered as a two-digit value, ISM 2.2 interprets it as follows:
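The interpretation table itself is not reproduced here. The sketch below assumes the conventional pivot implied by the stated representable range (January 1, 1970 through December 31, 2037); treat the exact cutoff as an assumption, not a documented value:

```python
# Assumed pivot: the stated representable range (1970-2037) implies that
# two-digit years 70-99 map to 19xx and 00-37 map to 20xx.

def expand_two_digit_year(yy):
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 1900 + yy if yy >= 70 else 2000 + yy

print(expand_two_digit_year(99))  # 1999
print(expand_two_digit_year(0))   # 2000, correctly treated as a leap year
```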
Replace the instructions in the section "Migrating ISM 1.0 to ISM 2.2" on pages 1-18 through 1-21 with the following instructions:
You can either upgrade ISM 1.0 to ISM 2.2 alone or upgrade ISM along with the database server version. Migration is the reinstallation of ISM binaries while maintaining the ISM data (the catalogs and tape volumes that contain the save sets).
The following section explains how to migrate ISM 1.0 to ISM 2.2.
ISM 2.2 includes changes to the format of data in the ISM catalogs and volumes. Begin the following procedure with ISM 1.0 running on your earlier database server version.
IMPORTANT: Do not use ISM 1.0 storage media for future backups after you have migrated from ISM 1.0 to ISM 2.2.
onbar -b -w
onbar -b -L 0
The bootstrap is a copy of the files and directories in $INFORMIXDIR/ism/mm, index, and res (UNIX) or %ISMDIR%\mm, index, and res (Windows NT). These directories are backed up into a single save set, called the bootstrap.
On UNIX:
On Windows NT:
On UNIX:
rm -rf $INFORMIXDIR/ism/index
rm -rf $INFORMIXDIR/ism/mm
On Windows NT:
del %ISMDIR%\index
del %ISMDIR%\mm
If you have file-type devices configured in ISM, you cannot move, copy, or rename the directories that contain those devices.
Follow the instructions on how to uninstall ISM 1.0 under "Uninstalling ISM on UNIX" on page 1-16 or "Uninstalling ISM on Windows NT" on page 1-17. Use regedt32 to check the registry keys.
IMPORTANT: Do not remove the res directory.
Edit this file to be sure that ISMDIR and PATH are correct for the location of the new ISM 2.2 directory.
ISM 2.2 must not write to any ISM 1.0 volumes because they would become unreadable by ISM 1.0 if you choose to revert.
Replace the instructions on reverting from ISM 2.2 to ISM 1.0 on page 1-22 of the ISM Administrator's Guide with the following information:
When you revert the database server, IBM Informix does not recommend reverting to ISM 1.0. All versions of the database server, up through 9.2, support ISM 2.2. Also, ISM 2.2 is Year-2000 compliant; ISM 1.0 is not. Versions of the database server that did not include ON-Bar are not compatible with ISM.
The following restrictions apply to ISM on Solaris systems:
*** The ISM setup script detected that another storage manager is installed
in /nsr, which conflicts with ISM. If you want to set up ISM, you need to
de-install the other storage manager and then run
'$INFORMIXDIR/bin/ism_startup -init'.
For ISM to be installed on a computer that has a NetWorker server or client, the NetWorker software must be completely uninstalled first. (On Solaris, for example, use pkgrm and then remove the entire /nsr structure.)
If you choose not to use ISM, remove the create-bootstrap command from the onbar script or onbar.bat.
You can add commands to start and stop the ISM daemons to the startup and shutdown scripts for your operating system.
The ISM daemons must be running before you can use ISM. IBM Informix recommends that you add the following command to one of the startup scripts for your operating system:
To shut down the ISM daemons at system shutdown time, IBM Informix recommends that you add the following command to one of the shutdown scripts for your operating system:
For systems that support the /etc/init.d directory, you can write your own startup/stop script. On Solaris, for example, you could add the following script as /etc/init.d/ism:
#!/bin/sh
#
# Startup for Informix Storage Manager
#
INFORMIXDIR=....    # Fill in the correct INFORMIXDIR
export INFORMIXDIR
case "$1" in
'start')
$INFORMIXDIR/bin/ism_startup
;;
'stop')
$INFORMIXDIR/bin/ism_shutdown
;;
*)
echo "Usage: /etc/init.d/ism { start | stop }"
;;
esac
exit 0
Link this script to the startup sequence as follows:
Link this script to the shutdown sequence as follows:
Once the ISM daemon is started, you can add one or more devices as ISM storage devices, and label storage media in those devices as ISM volumes. Refer to the Informix Storage Manager Administrator's Guide for further information about ISM. See also the section "Problems Fixed in ISM 2.2" in the Addendum to these release notes.
The following table summarizes the rules for specifying the location of the XBSA shared-library path for ON-Bar and Informix Storage Manager communications on various platforms.
For 64-bit Solaris computers, the default path for BAR_BSALIB_PATH is /usr/lib/sparcv9/ibsad001.so.
For 32-bit Solaris computers, the default path for BAR_BSALIB_PATH is /usr/lib/ibsad001.so (as described above).
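The two Solaris defaults just described can be read as a simple lookup; the table keys ("sparc64", "sparc32") are illustrative labels, and other platforms' defaults appear only in the full table above:

```python
# Default XBSA shared-library paths on Solaris, per the text above.
# The dictionary keys are illustrative; other platforms are not listed here.

SOLARIS_BSALIB_DEFAULTS = {
    "sparc64": "/usr/lib/sparcv9/ibsad001.so",
    "sparc32": "/usr/lib/ibsad001.so",
}

def default_bar_bsalib_path(arch):
    return SOLARIS_BSALIB_DEFAULTS[arch]

print(default_bar_bsalib_path("sparc64"))
```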
You can set up ISM (imported restore) to run with ON-Bar. The following shell script provides the minimum setup required for ISM operations with ON-Bar. Edit the locations for SM_DISKDEV1 and SM_DISKDEV2 as needed. Two of the operations must be done as root (or Informix) as shown in the following script comments.
# setup script for ISM
echo "ISM setup begins"
# Edit these two lines to select
# appropriate disk locations for
# storage manager use.
setenv SM_DISKDEV1 /some/place/on/disk/dir1
setenv SM_DISKDEV2 /some/place/on/disk/dir2
# update storage manager version in sysutils
echo "1|1.0.1|ism|1|" > $INFORMIXDIR/etc/sm_versions
# start ISM server
echo "initialize ISM server, must be done as root"
ism_startup -init
echo "give test user ISM admin permissions, must be done as root"
ism_add -admin $USER
# we need to create some devices/media etc
echo "create and mount ISM devices and pools"
ism_add -device $SM_DISKDEV1 -type file
ism_add -device $SM_DISKDEV2 -type file
ism_op -label $SM_DISKDEV1 -pool ISMData -volume ISMData
ism_op -label $SM_DISKDEV2 -pool ISMLogs -volume ISMLogs
ism_op -mount $SM_DISKDEV1
ism_op -mount $SM_DISKDEV2
echo "end of ISM setup"
# end of setup script for ISM
If you want the external restore to also restore logs, then you must make sure that at least the log containing the external backup (the onmode -c block and unblock) and possibly more are backed up. Do this with the following command after the backup:
Or use the following command before the restore:
Alternatively, you can do a PIT or PIL or nonlog external ON-Bar restore to get around this. However, you must have backed up the entire system at once to do a nonlog external restore.
ON-Bar has a backup verification option, onbar -v, that you can use to check the status of a backup. For information about this option, see Chapters 2 and 4 of the Backup and Restore Guide.
A physical restore is the only form of system recovery available when the rootdb or regular dbspace is lost and ISM or Legato is the storage manager in use.
The sm_versions.std file is now included in the Windows NT release as it is with the UNIX release. The sm_versions.std file is a template for setting up the sm_versions file with storage manager information. For information on the sm_versions file, see the Backup and Restore Guide.
Enterprise Replication (ER) supports replication of data types supported by Version 7.30 of IBM Informix Dynamic Server. In addition, ER supports some very limited forms of replication of user-defined types. User-defined type (UDT) replication is only supported in an environment of homogeneous computer systems. (For example, replicating a row containing a UDT from a Solaris platform to a Hewlett-Packard platform is not supported.)
ER does not support replication of smart large objects, collection types, list types, multirepresentational types, types which may contain out-of-row data, and other data types and features. Most DataBlade modules use features that are not supported by ER. Thus, in most cases, replicating data managed by a DataBlade module is unsupported. Significant restrictions also apply to other new features, such as user-defined routines (UDRs) and inherited tables. IBM Informix recommends that the current version of ER not be used to replicate extensible data types.
If replication is defined on tables containing unsupported data types, in some cases replication will appear to function properly. However, data might silently fail to replicate properly at a later time.
The schema of a replicated table must not be changed by ALTER TABLE statements. In most cases, ER prevents such an ALTER TABLE from executing. In certain cases, after a cdr stop command has been issued, an ALTER TABLE statement may succeed. Upon restarting ER, errors may result. Do not issue ALTER TABLE statements for replicated tables when ER is in the stopped state.
In a replication network, it is possible to replicate data between servers of different versions. This allows one to upgrade servers in a replication network incrementally, rather than all at once. However, IBM Informix recommends mixed version replication as a transitional strategy only, and not for long-term operation.
With mixed versions, certain caveats are associated with new features that are not available on older versions. In general, it is best to avoid using the new features until the entire ER network is upgraded. Before converting or reverting servers between versions, the documented conversion process should be followed.
The following additional restrictions apply to replication networks involving different server versions:
This can be accomplished by defining all of the servers in such a mixed version network as roots (no nonroots or leaves).
For a data-replication pair, if the status of a chunk (Down, On-line) is changed on the secondary database server, and that secondary server is restarted before a checkpoint is completed, then the updated chunk status will not be saved.
To ensure that the new chunk status is flushed to the reserved pages on the secondary database server, force a checkpoint on the primary database server and verify that a checkpoint also completes on the secondary database server. Now the new chunk status will be retained, even if the secondary database server is restarted.
For example, if the primary chunk on the secondary database server is down and is to be recovered from the mirror, the following steps should be taken:
Once these steps are completed, a restart of the secondary database server will not cause the new (on-line) status of the primary chunk to be lost.
This release of IBM Informix Dynamic Server 2000 contains support for the R-tree secondary access method, a subsystem for creating indexes on multidimensional objects. The R-tree secondary access method provides the following features:
If you plan to use the R-tree secondary access method, the Informix R-Tree Secondary Access Method DataBlade module must be registered in your database. This registration normally occurs when you register a dependent DataBlade module; that is, one that can only be registered if the Informix R-Tree Secondary Access Method DataBlade module has been previously registered.
For detailed information on using the R-tree secondary access method and the Informix R-Tree Secondary Access Method DataBlade module, refer to the following documentation:
For IBM Informix Dynamic Server 2000, Version 9.21, the Informix R-Tree Secondary Access Method DataBlade module, Version 2.0 is automatically installed with the database server. You register the DataBlade module in new databases as needed.
For database servers upgrading from Version 9.1 of Informix Universal Server to Version 9.21 of IBM Informix Dynamic Server 2000, Version 2.0 of the Informix R-Tree Secondary Access Method DataBlade module is automatically installed as part of the standard upgrade procedure, which also converts the databases in the database server. When the upgrade process converts a database in which the R-Tree Secondary Access DataBlade module is registered, the conversion automatically upgrades the module from Version 1.0 to Version 2.0. If you subsequently revert the database to the old database server version, you do not need to downgrade the module, because Version 2.0 of the R-Tree Secondary Access DataBlade module is compatible with Version 9.1 of Informix Universal Server.
The following table summarizes which versions of the R-Tree Secondary Access DataBlade module are compatible with each relevant version of the database server:
Database Server Version | Compatible R-Tree Version |
---|---|
9.1x | 1.0, 2.0 |
9.21 | 2.0 |
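The compatibility table can be expressed as a small lookup. This sketch mirrors the table only; the function name is illustrative:

```python
# Mirror of the compatibility table above: which R-Tree DataBlade module
# versions each database server version supports.

RTREE_COMPAT = {
    "9.1x": {"1.0", "2.0"},
    "9.21": {"2.0"},
}

def rtree_compatible(server_version, rtree_version):
    return rtree_version in RTREE_COMPAT.get(server_version, set())

# Reverting a converted database to 9.1x needs no module downgrade,
# because Version 2.0 is compatible with both server versions.
print(rtree_compatible("9.1x", "2.0"))
```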
The name of the directory that contains the Informix R-Tree Secondary Access DataBlade module is changed from $INFORMIXDIR/extend/ifxrltree.1.00 to $INFORMIXDIR/extend/ifxrltree.2.00 in this release.
Currently, the built-in compare() routine, which a generic B-tree uses, has the following restrictions:
Although these are existing restrictions, the database server allowed index creation using such routines in previous releases. In release 9.12, additional checks were added to make sure that the compare() routine used for a generic B-tree index is not written in SPL and is not variant.
An additional check was also added to make sure that built-in functions (such as ABS, MOD, LENGTH, UPPER, LOWER, and so on) are not used as keys in functional indexes. Those functions are not supported for functional indexes.
When a user's database is ANSI compliant and the environment variable OPTOFC is set, the user might get error -214 if the user closes the database. Error -214 states:
The database content is not damaged.
Client libraries, such as lib/esql/*.so, are no longer included with the database server release; you need the Client SDK bundle to install these libraries. This change in bundling might affect DataBlade modules and applications that rely on these client libraries.
The following notes apply to the High-Performance Loader:
If you are loading an opaque data type with the HPL, the import and importbin support functions must reside on the server computer.
HPL will not run if there is a bitness discrepancy between ipload and onpload on the database server. For example, on the HP-UX 11 platform, ipload is a 32-bit program and onpload is a 64-bit program, so HPL will not run there.
The 9.21 release adds a new command line utility for the HPL, called onpladm, which allows users to load or unload tables or an entire database. The onpladm documentation is provided in HTML format only, as multiple HTML files within the /onpladm subdirectory in the same file system as these Release Notes.
In general, the CD-ROM and hard-copy documentation of the 9.21 release is identical to that of the documentation set of the 9.20 release. New features that were not part of the IBM Informix Dynamic Server 2000 Version 9.20 release have been documented in the Documentation Notes of the 9.21 release. Read the Documentation Notes (Version 9.21) for descriptions of new features that this release introduces.
This section summarizes major organizational changes to manuals since the 7.30 and 8.20 releases. It is intended to help you locate information in the 9.21 IBM Informix Dynamic Server 2000 documentation set.
Informix Guide to SQL: Tutorial
The Informix Guide to SQL: Tutorial has been reorganized as follows:
The Informix Guide to SQL: Reference has a new appendix, Appendix B, which provides the dimensional model and table schema information for the sales_demo and superstores_demo databases. This appendix complements the information on the stores_demo database in the existing Appendix A.
Informix Guide to SQL: Syntax
The Informix Guide to SQL: Syntax has been reorganized since Version 9.10 as follows:
The Informix Guide to Database Design and Implementation has been reorganized (taking and updating material from the Informix Guide to SQL: Tutorial) and contains primary documentation for the following functionality and features:
The following 9.10 manuals were combined into a single book for Version 9.20:
Specific information relating to the development of C-language and Java-language user-defined routines has been moved to the following language-specific manuals:
The DataBlade API Programmer's Manual has been substantially enhanced to provide more detailed information on how to use the DataBlade API to create C user-defined routines (UDRs) and client LIBMI applications. This manual has been reorganized into the following sections:
(This reference section provides syntax and descriptions for all public DataBlade API functions.)
The manual Creating UDRs in Java was new for the 9.20 release. It describes how to implement the new 9.21 feature: writing user-defined routines (UDRs) in Java. It provides information on how to:
For information on the full set of classes and methods supported by the Informix JDBC driver, see the Informix JDBC Driver Programmer's Guide.
Administrator's Guide and Administrator's Reference
The 9.1 Administrator's Guide has been divided into the following two manuals:
This manual consists of Chapters 32 through 38 from Version 9.1 of the Administrator's Guide.
In addition, the Administrator's Reference is now generic. The Administrator's Reference covers Versions 9.2 and 8.30.
Most of the monitoring and tuning information in the administrator books was moved to the Performance Guide, including these topics:
The Performance Guide has been reorganized as follows:
This material cross-references the Extending Informix Dynamic Server manual for details.
The Informix Migration Guide has been completely reorganized. It now contains the following sections:
Appendix A still lists the database server environment variables.
Backup and Restore Guide
The Backup and Restore Guide contains the following changes:
The Enterprise Replication manual has been reorganized in the following ways:
With Version 9.20, this manual includes a section to describe how to check R-tree indexes with the oncheck utility.
Online Documentation of onpladm Utility of HPL
The 9.21 release adds a new command line utility for the HPL, called onpladm, which allows users to load or unload tables or an entire database. The onpladm documentation is provided in HTML format only. It is an online reference document composed of multiple HTML files within the /onpladm subdirectory in the same file system as these release notes and the documentation notes.
Defect 125041: On Version 7.30.UC10, the database server blocked in a deadlock situation at a checkpoint request.
Defect 125043: On Version 7.30.UC10, ONCHECK -CI at the database level gives an ISAM error: FILE IS LOCKED.
Defect 128221: Unknown problem exists when using Matches on NCHAR column.
Defect 128588: Test case error while removing UDRs registered with install_jar() routine.
This is a test case. It is not a serious problem.
The schema file contains a dollars sign ('$') in the CREATE VIEW statement because DBMONEY is set to '$.' when dbaccessdemo executes the CREATE VIEW statement. Even though it is executed under a Japanese locale, the DBMONEY environment variable overrides the currency symbol of the Japanese locale.
The dbaccessdemo script is used against different locales, so DBMONEY is set to '$.' in the script to ensure portability. Users need to set the DBMONEY environment variable in order to use the dbimport or dbexport utility. Another alternative would be to modify the dbaccessdemo script to reset the DBMONEY environment variable under the Japanese locale.
The behavior of unloading of a module has changed. You can force an unload of a DataBlade module, but this is not generally necessary, and IBM Informix does not recommend it unless you need to recover memory associated with the loaded shared library.
To force unloading of a DataBlade module, execute the function ifx_unload_module(), which takes two parameters: the module name and the language.
For example, to unload string.so, you call the following function:
The utility onaudit is supposed to issue an audit record for the ONAU event but is not doing so. This record should contain onaudit command-line arguments and should be located in /tmp with the filename dbservername.0.
Instead, the -O, -A, -D, and -M options are audited by enabling the individual events; that is, by setting a mask for the LSAM, CRAM, DRAM, and UPAM events. The ONAU event is limited to auditing configuration changes.
A GROUP BY clause in an SPL routine returns the same row many times. The same SQL code can be run in DB-Access, and the correct results are returned. If the GROUP BY list is changed to 1,2,3,4, then the SPL routine returns the last row of the correct results.
As a workaround, the user can choose variable names that differ from the column names, or can qualify columns as table_name.column_name in the SELECT statement.
A column defined as DECIMAL(5) has a precision of 5 (maximum number of digits) and a scale of 255, which means that the decimal point floats. A DECIMAL(5) can have values such as 12345, .12345, 1.2345e120, -1.2345e-120, and so on. When genlib calculates the display width, it needs to take all of these options into consideration and calculate the maximum length needed. The largest display format for a DECIMAL is scientific notation, so that is the one that genlib assumes:
For DECIMAL(5), assuming a default locale, the following numbers of bytes would apply:
This is why '12345' inserted into DECIMAL(5) is displayed as 12345.000000. This allows the value -1.2345e-120 to be displayed without truncation.
This only affects floating-point decimals. If the user specifies the scale, then they get the old display. Because this only affects floating-point decimals, this defect fix will not affect ANSI-compliant databases, which allow only fixed-point DECIMAL values.
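The width arithmetic described above can be reconstructed as follows. This is a sketch consistent with the example values in the text ('12345' padded to 12345.000000, and -1.2345e-120 as the worst case); the exact genlib formula is an assumption:

```python
# Assumed reconstruction of the display width for a floating-point
# DECIMAL(p): the worst case is scientific notation such as -1.2345e-120,
# i.e. sign + p digits + decimal point + 'e' + exponent sign + 3 exponent digits.

def decimal_display_width(precision):
    return 1 + precision + 1 + 1 + 1 + 3  # sign, digits, '.', 'e', exp sign, exp

print(decimal_display_width(5))   # 12
print(len("-1.2345e-120"))        # also 12: the widest DECIMAL(5) value
print(len("12345.000000"))        # the padded display from the text, also 12
```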
A DECIMAL column defined as (5) in a database that is not ANSI-compliant means that there are at maximum 5 digits, and the decimal point can be anywhere. When you enter 0.123449, that is 6 digits, so rounding occurs. This is the correct behavior.
This is a test case. It is not a serious problem.
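The rounding behavior described for DECIMAL(5) above can be sketched as rounding to five significant digits; the helper below is illustrative, not the server's arithmetic:

```python
from decimal import Decimal

# Illustrative: a floating-point DECIMAL(5) keeps at most 5 significant
# digits, so a 6-digit value such as 0.123449 is rounded to 0.12345.

def round_significant(value, digits=5):
    # Format to (digits - 1) mantissa places in scientific notation,
    # which rounds the value to `digits` significant figures.
    return float(f"{Decimal(value):.{digits - 1}e}")

print(round_significant("0.123449"))  # 0.12345
```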
If the DATE() function is specified in the fragmentation expression of a table, then the dates are converted to internal date formats at table creation time itself. The dates are not open to interpretation at query time, and therefore the DBDATE format at query time has no bearing on it.
Defect 95200: Assignments to UDT variables in SPL return 9635 if "LET" has more than one variable.
This is a UDT feature that was never implemented. The missing feature is limited to IMPLICIT casts of multiple assignments. Both EXPLICIT casts and no-cast-required work fine under these conditions.
A direct workaround, for all but one case, follows:
Instead of -- let a,b = '1','2'
use -- let a = '1'
let b = '2'
or -- let a,b = '1'::a_type, '2'::b_type
Unfortunately, the following case does not have an easy solution (where splfunc() returns more than one value):
In this case, the only solution is to use the right destination data types; that is, do not rely on implicit casts. This will not work under all conditions, but it is generally better to use the correct data types.
Defect 95599: Received incorrect error message.
Defect 95599 has been fixed. It is reported here for your information.
Defect 101747: DATE literal interpretation in TRIGGER body is done incorrectly.
Defect 101747 has been fixed. It is reported here for your information.
Defect 101848: DATE literal interpretation in PROCEDURE body is done incorrectly.
Defect 108092: Y2K PROBLEM: Changing interpretation of DATE and DATETIME values could result in unexpected or wrong results
The behavior of the database has changed in situations when a DATE or DATETIME literal is used in any of the following database objects:
Unintended behavior can result if one or more environment variables are subsequently reset and the new settings change the way in which DATE or DATETIME literals are evaluated by the database object.
In earlier releases, the server interpreted the DATE or DATETIME literal using the date environment settings prevailing at execution time. Here date environment settings include the following:
Now, however, the date is always interpreted using the date environment settings prevailing at the creation time of the object with which the DATE or DATETIME literal is associated.
IMPORTANT: The following example (of the legacy behavior) assumes that DBDATE is set to "MDY2-" and that the time of execution is within the interval between 31 December 1899 and 30 June 1994.
Example:
Unset DBCENTURY
create table t (d date)
fragment by expression
d < '07-01-94' in dbs1,
d >= '07-01-94' and d <= '12-30-94' in dbs2,
remainder in dbs3;
Set DBCENTURY = P
insert into t values ('04-15-1980');
Old behavior: The row goes into dbs3 because the fragmentation expression is interpreted at insert time and treated as
d < '07-01-1894' in dbs1,
d >= '07-01-1894' and d <= '12-30-1894' in dbs2,
remainder in dbs3
New behavior: The row goes into dbs1 because the fragmentation expression is interpreted based on the DBCENTURY value at the time of creation and is treated as follows:
d < '07-01-1994' in dbs1,
d >= '07-01-1994' and d <= '12-30-1994' in dbs2,
remainder in dbs3
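The century-expansion rule that drives this example can be sketched in Python. The helper below is illustrative only (expand_year_dbcentury_p is not an Informix API); it mimics DBCENTURY=P by choosing the century that yields the latest date not after "today":

```python
from datetime import date

def expand_year_dbcentury_p(yy, month, day, today):
    """Sketch of DBCENTURY=P: expand a two-digit year by picking the
    century that gives the most recent date not after `today`."""
    century = today.year - today.year % 100
    candidate = date(century + yy, month, day)
    while candidate > today:
        candidate = candidate.replace(year=candidate.year - 100)
    return candidate

# With "today" before 07-01-1994 (inside the window stated above),
# the literal year '94' expands to 1894:
print(expand_year_dbcentury_p(94, 7, 1, date(1980, 4, 15)))  # 1894-07-01
```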
Another date-related environment variable is DBDATE.
Example:
Unset DBDATE
create table t (d date check (d <= '12-31-1995'));
Set DBDATE = y4md
insert into t values ('1995-10-31');
Old behavior: Error 1218 (string-to-date conversion). The date in the check constraint (12-31-1995) is interpreted at the time of insert, and because DBDATE is y4md, it produces an error.
New behavior: INSERT succeeds. The literal 12-31-1995 was interpreted at the time of object creation and already converted into a date, so there is no error.
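The DBDATE dependency can be sketched with a simplified, hypothetical mapping of format settings to parse patterns (real DBDATE strings also encode the separator character; parse_literal is illustrative only):

```python
from datetime import datetime

# Hypothetical, simplified mapping of DBDATE settings to strptime formats:
DBDATE_FORMATS = {"mdy4": "%m-%d-%Y", "y4md": "%Y-%m-%d"}

def parse_literal(literal, dbdate):
    """Sketch: interpret a DATE literal under a given DBDATE setting."""
    return datetime.strptime(literal, DBDATE_FORMATS[dbdate]).date()

print(parse_literal("12-31-1995", "mdy4"))  # parses under the month-first format
try:
    parse_literal("12-31-1995", "y4md")
except ValueError:
    print("string-to-date conversion fails, as in the old behavior")
```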
TIP: This new behavior takes effect only for database objects that are created with a version of the database server that contains this fix. Previously created objects do not change in behavior. Therefore, if the new behavior is more desirable, users must drop and re-create their existing objects. For example, if constraints use a DATE or DATETIME literal, they can be dropped and re-created by using ALTER TABLE or ALTER FRAGMENT.
The reproduction causes a server failure before these logical logs can be backed up. During restore, the server expects these logical logs from tape, and the restore will not complete without them.
Because onbar -b was used to back up the storage spaces, all the logical logs that were current when the archives of the storage spaces started (in parallel) need to be backed up to the logical log tape.
Defect 97000: OS Auditing does not work with nonroot VPs
With the nonroot changes, OS auditing does not work, because audit operations such as opening and writing the system audit file must run as the superuser. CPU VPs that switch to the nonroot user (informix) after initialization nevertheless attempt the audit-subsystem calls, which fail with EPERM. Worse still, any error while executing audit-subsystem calls results in an assertion failure, and Informix Dynamic Server is brought down.
With Informix Dynamic Server configured with C2 auditing on, any registered event (such as CREATE, SELECT, CLOSE, or DROP DATABASE) covered by an audit mask created by onaudit fails with an error, and Informix Dynamic Server is brought down.
OS-managed C2 auditing (ADTMODE of 2, 4, 6, or 8) is supported only when the server is started by a user who is a security administrator. On UNIX platforms, the superuser is always a security administrator. On most platforms, however, it is possible to have users who do not have root privileges but do have security administrator privileges. Consult your OS documentation for details.
IBM Informix-managed C2 auditing (ADTMODE of 1, 3, 5, or 7) is always available.
There are some options for making OS Auditing work with nonroot changes:
Defect 99824: Insert into a subtable from a parent works in 9.14 but should return error -360, as in 9.21.
This is a test case. It is not a serious problem.
In Version 9.14, one could insert into a subtable from a parent table; now the user receives error -360.
Defect 101457 has been fixed. It is reported here for your information.
During an ALTER FRAGMENT ... ATTACH operation, the database server will no longer scan the new or consumed fragments if both of the following conditions are true:
The preceding two conditions ensure that there is no movement of rows between fragments during the ATTACH operation.
Defect 102224: Adding two DATETIME columns returns error -1260
The following are invalid operations on DATETIME or DATE columns:
DATETIME +/-/*/DIV DATETIME
(unary -) DATETIME
For any of these invalid operations, the user should receive error -1263.
The only valid binary operations on DATETIME or DATE columns are the following:
DATETIME - INTERVAL
DATETIME + INTERVAL
INTERVAL + DATETIME
INTERVAL + DATE
DATETIME - DATE
DATE - DATETIME
DATE - INTERVAL
DATE + INTERVAL
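The rule can be expressed as a simple lookup over the operand combinations listed above. This is an illustrative Python sketch (check_op is a hypothetical helper; the error code is the one named in this note):

```python
# The valid binary operand combinations listed above, as a lookup set:
VALID = {
    ("DATETIME", "-", "INTERVAL"), ("DATETIME", "+", "INTERVAL"),
    ("INTERVAL", "+", "DATETIME"), ("INTERVAL", "+", "DATE"),
    ("DATETIME", "-", "DATE"), ("DATE", "-", "DATETIME"),
    ("DATE", "-", "INTERVAL"), ("DATE", "+", "INTERVAL"),
}

def check_op(left, op, right):
    """Return None if the combination is valid, else the error code the
    server should raise (illustrative of the rule, not server code)."""
    return None if (left, op, right) in VALID else -1263

print(check_op("DATETIME", "+", "DATETIME"))  # -1263
print(check_op("DATE", "-", "INTERVAL"))      # None
```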
Defect 102347: The self-referential INSERT INTO MYTAB SELECT ... FROM MYTAB is needed
Consider the following INSERT statement:
INSERT INTO target-table
(SELECT * FROM source-table WHERE ...);
The earlier implementation did not allow the source table to be the same as the target table: the target table could not appear anywhere in the SELECT clause of the INSERT statement. The server returns error -360 if it detects such a case.
This feature relaxes the above restriction by allowing the use of target tables in the SELECT clause of the INSERT statement.
Semantics:
If one of the tables in the SELECT clause is the target table, then rows newly inserted into the target table by the INSERT statement are not used in evaluating the SELECT or any of its nested subqueries.
The effect of the preceding statement is the same as the effect of the following statements executed in a transaction.
SELECT * FROM source-table WHERE ... INTO TEMP temp-tab;
INSERT INTO target-table SELECT * FROM temp-tab;
DROP TABLE temp-tab;
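Those snapshot semantics can be sketched in a few lines of Python, with plain lists standing in for tables (illustrative only):

```python
# Sketch of the snapshot semantics described above: rows inserted by the
# statement are not seen by its own SELECT.
def self_insert(table, predicate):
    snapshot = [row for row in table if predicate(row)]  # like the temp table
    table.extend(snapshot)                               # then insert from it
    return table

t = [1, 2, 3]
self_insert(t, lambda row: row >= 2)
print(t)  # [1, 2, 3, 2, 3] -- the newly inserted rows were not re-selected
```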
Restrictions:
If procedure someproc scans or updates target-table, then the database server returns error -360.
The behavior of the UPDATE and DELETE statements has not changed when the target table is used in their select subqueries. In this case, the database server returns error -360.
Defect 102727: BLOBs not properly replicated under some UPDATE conditions.
Under some conditions, if a simple large object is updated on a source server, it will be set to zero length on the target server(s). One of these conditions is as follows:
When the user selects data from a table that has a collection, the server must determine whether the collection has a BLOB or CLOB object embedded in it. To do so, it walks the sysattrtypes catalog, which is what causes the problem. The workaround is to set the variable DISPCLOB to 0; when this is set, the contents of the CLOB object are not displayed.
There are a few cases in which the DBA is allowed to drop logical log files that Enterprise Replication (ER) might need later. These cases occur only when ER is not currently running on the server (either because the server is in quiescent mode or because the DBA stopped ER).
Make sure that logical log files are not dropped in the following scenarios.
Suggested solutions:
database sysmaster;
select * from replaytab;
Then delete only log files numbered less than the value reported.
Queries on ER-specific System Monitoring Interface (SMI) tables should not be performed while ER is shutting down.
Defects 103708 and 107052: Common library naming convention of CSM libraries.
Currently, because the CSM libraries share a common library-naming convention and are both installed at the same location within INFORMIXDIR, the following problems can occur:
One can work around this problem by taking these steps to shut down the server with a checkpoint:
To calculate the amount of space needed for reserved page extension during conversion to 9.21, do the following:
Run ckconvsp.sh (on UNIX systems) or ckconvsp.bat (on Windows NT systems) to determine if you have enough space in your root chunk to perform an upgrade to 9.21.
If you do not have enough space, this message will appear:
Conversion to 9.21 will not succeed, because the root
chunk does not have enough contiguous space on it.
The script will also inform you how much more space is needed in the root chunk. You can use oncheck -pe rootdbsname to see your current allocation in the root chunk extents.
In some cases, even if the server conversion is successful, internal conversion of some databases may fail because of insufficient space for catalogs.
In this case, the online log will report that conversion was successful:
However, prior to this message you will also see a message specifying that a specified database failed internal conversion:
15:39:09 Converting database database_name ...
15:39:13 The conversion of the database database_name has failed.
The following steps should be taken to correct the problem.
If you are not sure which dbspace contains the database, you can get the necessary information from sysmaster:
dbaccess sysmaster - <<EOF
select name from sysdbstab where dbsnum=
(select partdbsnum(partnum)
from sysdatabases where NAME=database_name);
EOF
Once the chunk has been added, execution of dbaccess database_name should connect to the database. In addition, a message will be written to the online log indicating that the database was successfully converted:
16:01:23 Converting database_name database ...
16:01:23 database_name conversion completed successfully.
Defect 109647: Event viewer not able to get correct description for events logged by the server.
This problem affects only Windows NT systems. It is caused by onaudmsg.dll not being copied to the \winnt40\system32 directory.
The workaround is to copy onaudmsg.dll manually from %INFORMIXDIR%\bin to the \winnt40\system32 directory.
This defect only occurs when updating, or possibly inserting, a collection (set, list, multiset) containing smart large objects (BLOB, CLOB) when the collection is large enough that it will no longer fit inline in the row.
If your buffer size is 32 kilobytes, no more than 241 identifiers are supported, rather than the documented upper limit of 500.
The workaround is to increase the buffer size to 64 kilobytes.
Neither dbexport nor dbschema uses the owner.object notation for all database-object references in extensible data types, so the generated scripts fail when creating ANSI-compliant databases. Exporting a database as one user and running dbschema as another user may not work for extensible data types.
The workaround is to specify any columns declared as user-defined data types in owner.object format.
Defect 114130: Warm restore with -O option failed because server got the wrong copy of logical log.
If the Informix Storage Manager instance has been running longer than the current server instance, and has backups from previous server incarnations, and is backing up the logs during a warm restore, ON-Bar sometimes sends a log from a previous incarnation when in fact a more recent log is required.
We do not currently know of a way to help in this situation. Here is an example of this problem:
In the original data for this PTS entry, it was a matter of missing log8 by 4 seconds.
What a DBA can do to avoid this occurrence:
What a user can do to recover after #13 happens:
The necessary log is now backed up and it should succeed without extra steps.
This happened on a system where the server was reinitialized but the ISM was not, and the command was an override restore of the entire system while it was online (but not the critical dbspaces). This is not a scenario that will occur frequently.
The customer should use four-digit year values in ON-Archive menu mode.
If you performed a whole-system backup of the rootdbs and other dbspaces and then verified the rootdbs only, ON-Bar restores only the rootdbs.
oninit -iy # rootdbs, dbs1, dbs2 in system
onbar -b -w
onbar -v rootdbs
onmode -yuk
onbar -r -w
To avoid this problem, do one of the following:
For versions of the database server earlier than 9.20, the product was installed as user root. For IBM Informix Dynamic Server 2000, Version 9.21, the product installs as user informix and then runs a script as user root.
For Version 9.21, Linux users receive product files in RPM format. Linux users must therefore follow these steps:
IBM Informix Dynamic Server 2000 supports the following DataBlade products:
CREATE SYNONYM from 7.24/7.30 to 9.21 hangs indefinitely (Defect 96374). IBM Informix Dynamic Server interim releases 7.31.UC3, 7.24.UC9, and 7.30.UC9 fix this defect. Upgrade to one of these interim versions if your 7.x server creates synonyms to objects on a 9.21 server.
IBM Informix Dynamic Server 2000, Version 9.21, recognizes new SQL reserved words that might affect migration of your applications. For a complete list of SQL reserved words, see Appendix A of the Informix Guide to SQL: Syntax, Version 9.21.
Although you can use almost any word as an SQL identifier, syntactic ambiguities can occur. An ambiguous statement might not produce the desired results. For information about workarounds for such ambiguities, see the Informix Guide to SQL: Syntax.
New reserved words in 9.21 (beyond 9.14, but already in 7.30, 7.31, and 9.20):
New reserved words in 9.21 (but already in 7.31):
Other new 9.21 reserved words (but already in 9.20):
The following reserved words are implemented for the first time in 9.21:
IBM Informix Dynamic Server 2000 has been engineered and tested to be Year-2000 compliant. This means that the use or occurrence of dates on or after January 1, 2000, will not adversely affect the following:
From time to time, through continued testing efforts, IBM Informix may find that certain of its software products contain date-related defects. A history table of these defects, as they are discovered, is provided on our Year-2000 Web site. For more details, customers with access to TechInfo Center should go to the Year-2000 Tech Alerts section. Call your local technical support office for assistance or for information on how to enroll in the program.
To migrate to IBM Informix Dynamic Server 2000, Version 9.21 from any 9.x database server prior to 9.14, you must first upgrade to a 9.14 database server and open each database.
No conversion or reversion is required between IBM Informix Dynamic Server, Version 9.20, and Version 9.21.
Conversion from Version 7.31 is supported.
To migrate from any Version 7.x database server prior to 7.2x, you must first upgrade to a 7.3x or 7.22.UC1 (or later) database server and open each database.
When migrating from any Informix Dynamic Server 9.14, 7.3x, or 7.2x database server to IBM Informix Dynamic Server 2000, Version 9.21, you must first calculate the amount of space that you need for the conversion.
Run ckconvsp.sh (on UNIX systems) or ckconvsp.bat (on Windows NT systems) to determine if you have enough space in your root chunk to perform an upgrade to 9.21.
If you do not have enough space, the following message will appear:
Conversion to 9.21 will not succeed, because the root
chunk does not have enough contiguous space on it.
The script will also inform you how much more space is needed in the root chunk. You can use oncheck -pe rootdbsname to see your current allocation in the root chunk extents.
In some cases, even if the server conversion is successful, internal conversion of some databases may fail because of insufficient space for system catalog tables. Refer to defect 104514 in the defect Workaround Section above for more information.
When you convert or revert an IBM Informix database server, the storage manager that you used on the old version might not be certified for the version that you are migrating to. Verify that IBM Informix has certified the storage manager for the target database server version and platform. If not, you need to install a certified storage manager before performing backups with ON-Bar.
Before you upgrade to a later version of the database server, save a copy of your current sm_versions file, which should be in the $INFORMIXDIR/etc directory on UNIX or the %INFORMIXDIR%\etc directory on Windows NT. If you are using a different directory as INFORMIXDIR for the new database server, copy sm_versions to the new etc directory, or copy sm_versions.std to sm_versions in the new directory, and then edit the sm_versions file with appropriate values before starting the upgrade.
When you upgrade to the new database server version, install the storage manager before you bring up the database server. That way, if you have automatic log backup set up on the database server, ON-Bar can start backing up the logs when the database server comes online.
This section provides some guidelines for upgrading to IBM Informix Dynamic Server 2000, Version 9.21, from Universal Server, Version 9.14, Dynamic Server, Version 7.3x, or OnLine Dynamic Server, Version 7.22.UC1 (or later). For details on the upgrade procedure, see the Informix Migration Guide.
You need to add any additional free space to the system prior to conversion. If the dbspaces are very full, you need to add space before you start the conversion procedure.
Prior to migrating the old system to the new one, make sure that there are no open transactions. Fast recovery would fail when rolling back open transactions during conversion. You can use oninit -s on the source side (old system) as a check against any open transactions. For more information on how to close transactions properly before migration, see the Informix Migration Guide.
A shutdown procedure is not guaranteed to roll back all open transactions. To guarantee that the old system (9.14, 7.3x, or 7.22.UC1 or later) has no open transactions prior to conversion, the old database server must be taken down to quiescent mode. It is not enough to run onmode -yuk; you must execute the onmode -s command first, followed by onmode -yuk. Wait until onmode -s completes, because some users might still be active.
Only after a proper shutdown can you bring up the new server (9.21) through the conversion path. Any open transaction encountered during conversion will cause fast recovery to fail.
After a successful conversion, you need to run UPDATE STATISTICS on some of the system catalog tables in your databases. For conversion from a 7.3x or 7.22.UC1 or later database server to IBM Informix Dynamic Server 2000, Version 9.21, run UPDATE STATISTICS on the following system tables in IBM Informix Dynamic Server 2000, Version 9.21:
For conversion from an Informix Dynamic Server 9.14 database server to IBM Informix Dynamic Server 2000 9.21, run UPDATE STATISTICS on the following system tables in 9.21:
For more information about upgrading to IBM Informix Dynamic Server 2000, Version 9.21, from Universal Server 9.14, Dynamic Server 7.3x, or OnLine Dynamic Server 7.22.UC1 or later, see the Informix Migration Guide.
This section provides some guidelines for reverting from IBM Informix Dynamic Server 2000, Version 9.21 to Universal Server, Version 9.14, Dynamic Server, Version 7.3x, or OnLine Dynamic Server, Version 7.22.UC1 or later. For details on the reversion procedure, see the Informix Migration Guide.
You can revert from IBM Informix Dynamic Server 2000, Version 9.21, to Universal Server 9.14, Dynamic Server 7.3x, or OnLine Dynamic Server 7.2 if you have not added any extensions to Version 9.21 of the database server.
When you run BladeManager against a database, you automatically create extensions because BladeManager registers its utility DataBlade module, which adds extensions to the database. If you need to downgrade from Version 9.21 and you have run BladeManager, you must first run BladeManager and specify the following command to remove the BladeManager extensions:
The following restrictions apply to reversion from 9.21 to 9.14, 7.3x, or 7.2x:
You cannot revert if you created new data types or routines either explicitly or by registering a different version of a DataBlade module.
To be able to revert, you need to downgrade any DataBlade module back to the version that was registered prior to conversion and explicitly drop any data types and routines that were created outside of any DataBlade registration. For information on how to use DataBlade modules, see the DataBlade Developers Kit User's Guide and the BladeManager User's Guide.
Before reversion, make sure that the R-tree indexes do not use long identifiers as indexed column names, opclass names, or opclass function names.
Also, make sure that the following disk structures do not use long identifiers: databases (owner and database name length), tblspaces (owner and tblspace name length), dbspaces (owner and dbspace name length), and chunks (path length).
If a user table has an incomplete in-place ALTER operation, then you need to ensure that the in-place ALTER operation is complete by running a dummy update statement against the table. If the reversion process does not complete successfully because of in-place ALTER operations, it lists all of the tables that need dummy updates. You need to perform a dummy update on each of the tables in the list before you can revert to the older database server.
If an in-place ALTER operation is incomplete against a system table, run one of the following scripts while connected to the database.
9.21 to 9.14 reversion:
9.21 to 7.3x or 7.2x reversion:
Fragment strategies that existed before conversion to the 9.21 database server cannot be changed by using ALTER TABLE or ALTER INDEX statements.
The following restrictions also apply to reversion from 9.21 to 9.14:
The following restrictions also apply to reversion from 9.21 to 7.3x or 7.2x:
After a successful reversion, you need to run UPDATE STATISTICS on some of the system catalog tables in your databases. For reversion from IBM Informix Dynamic Server 2000 Version 9.21 to a 7.2x or 7.3x database server, run UPDATE STATISTICS on the following system tables in 7.2x or 7.3x:
For reversion from IBM Informix Dynamic Server 2000 Version 9.21 to a 9.14 database server, run UPDATE STATISTICS on the following system tables in 9.14:
When reverting to a previous version of the server, do not reinitialize the database server by using the -i command-line parameter. If you convert from an older version of the server to a newer version, and you then decide to revert to the older version, you will see a message similar to the following:
This will make all necessary modifications to disk structures
so that the IBM Informix Dynamic Server 2000 space will be
compatible with Informix OnLine Version 7.3
Beginning process of reverting system to 7.3...
...
Reversion complete.
Install Informix OnLine Version 7.3 before reinitializing OnLine.
Do you wish to continue (y/n)?
In the second-to-last line, reinitializing refers to restarting the database server (sometimes called reinitializing shared memory), not reinitializing the existing root dbspace. Using the -i parameter would reinitialize the root dbspace, which would destroy your databases. Do not use the -i parameter.
For more information about reverting from IBM Informix Dynamic Server 2000 Version 9.21 to an older database server, see the Informix Migration Guide.
The following table lists selected capacity limits and system defaults for this release of IBM Informix Dynamic Server 2000.
This section describes the J/Foundation feature of IBM Informix Dynamic Server 2000.
If you have purchased IBM Informix Internet Foundation.2000, this release supports the following features:
Follow these steps to install and configure your J/Foundation software:
This directory will be referred to as <jvphome>.
An example setting follows:
JVPHOME <jvphome>
JVPLOGFILE <jvphome>/jvp.log
JVPPROPFILE <jvphome>/.jvpprops
JVPJAVAVM hpi:jvm:java:net:math:zip:jpeg
VPCLASS jvp,num=1
JVPJAVAHOME <jvphome>/jre/
JDKVERSION 1.2
JVMTHREAD native
JVPJAVALIB /lib/sparc -- for SPARC/Solaris
JVPCLASSPATH <jvphome>/krakatoa_g.jar:<jvphome>/jdbc_g.jar
For this release, jdbc(_g).jar is the server-side JDBC driver.
This directory should be readable and writeable by the user who brings up the 9.2 server instance. The remaining permissions can be adjusted to the level of security desired for jar files. If this environment variable is not set, temporary copies of jar files will be created in the /tmp directory of the server's local file system.
Thread pooling has been implemented to avoid the overhead of creating threads on the fly. When a thread is needed to perform a task, it is simply allocated to the task from a list of threads that have already been created. Thread-pooling properties can be set in the JVPPROPFILE (for example, $INFORMIXDIR/extend/krakatoa/.jvpprops). The two properties that affect thread pooling are pool size and patrol interval.
The pool size is the initial number of threads that are created in the pool. The patrol interval controls a patrol thread, which runs every n minutes and destroys threads that have not been used within a specific interval of time. These two properties are set as follows:
# This sets thread pool size to 30
JVP.dbathreadpool.poolsize:30
# This sets patrol interval to 5 minutes
JVP.dbathreadpool.patrolinterval:5
The thread pool or the patrol thread can be disabled by explicitly setting the properties to 0. If not specified in the JVPPROPFILE, the default pool size is 20, and the default patrol interval is 5 minutes.
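The pool-plus-patrol design can be sketched as follows. This is an illustrative Python model, not the server's implementation; the patrol thread is omitted, and the poolsize parameter mirrors the JVP.dbathreadpool.poolsize property:

```python
import queue
import threading

class ThreadPool:
    """Minimal sketch of pre-created worker threads, as described above.
    A patrol thread (not shown) would wake every patrol interval and
    retire workers that have been idle too long."""

    def __init__(self, poolsize=20):
        self._tasks = queue.Queue()
        for _ in range(poolsize):          # create the workers up front
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:                        # each worker pulls tasks forever
            func, args, out = self._tasks.get()
            out.append(func(*args))
            self._tasks.task_done()

    def submit(self, func, *args):
        out = []                           # result container for this task
        self._tasks.put((func, args, out))
        return out

    def join(self):
        self._tasks.join()                 # wait for all queued tasks

pool = ThreadPool(poolsize=4)
results = [pool.submit(lambda x: x * x, n) for n in range(5)]
pool.join()
print(sorted(r[0] for r in results))  # squares of 0..4
```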
Information pertaining to J/Foundation can be found in these documents:
There are also some code examples in the examples subdirectory. See the README file in that directory for details.
Visit the JavaSoft web site (http://java.sun.com) for information about JDBC 1.0 and JDBC 2.0.
For best performance, minimize the level of tracing. The trace levels are described in the example properties file in the distribution directory (.jvpprops.template). Another way to improve performance is to use the nondebuggable versions of the JDK libraries, the J/Foundation jar file, and the JDBC driver jar file, setting the corresponding parameters in the server configuration file.
NOTE: In case of failures, high levels of tracing and debuggable versions of libraries will provide maximum information in identifying the source of the failure.
If you have purchased IBM Informix Internet Foundation.2000, this release supports the following features:
Follow these steps to install and configure your J/Foundation software:
An example setting is shown below, where the J/Foundation release is in <jvphome>:
JVPHOME <jvphome>
JVPLOGFILE <jvphome>\ol_server_jvp.log
JVPPROPFILE <jvphome>\.jvpprops_ol_server
JVPJAVAVM hpi;jvm;java;net;math;zip;jpeg
VPCLASS jvp,num=1
JVPJAVAHOME <jvphome>\jre\
JDKVERSION 1.2
JVMTHREAD native
JVPJAVALIB \bin\
JVPCLASSPATH <jvphome>\krakatoa_g.jar;<jvphome>\jdbc_g.jar
For this release, jdbc(_g).jar is the interim JDBC driver.
If debugging is not required, change JVPCLASSPATH to use the nondebuggable version of the .jar files:
Navigate through the following sequence:
HKEY_LOCAL_MACHINE->Software->Informix->Online->ol_servername->Environment
To add a new environment variable, choose Edit->New->String Value. Set Value Name to JVM_MAX_HEAP_SIZE and set Value data to the estimated requirement of the application.
Navigate through the following sequence:
To add a new environment variable, choose Edit->New->String Value. Set Value Name to JAR_TEMP_PATH and set Value data to a directory in your server's local file system.
This directory should be readable and writeable by the user who brings up the 9.2 server instance. The remaining permissions can be adjusted to the level of security desired for jar files. If this environment variable is not set, temporary copies of jar files will be created in the \tmp directory of the server's local file system.
Thread pooling has been implemented to avoid the overhead of creating threads on the fly. When a thread is needed to perform a task, it is simply allocated to the task from a list of threads that have already been created. Thread-pooling properties can be set in the JVPPROPFILE (for example, %INFORMIXDIR%\extend\krakatoa\.jvpprops). The two properties that affect thread pooling are pool size and patrol interval.
The pool size is the initial number of threads that are created in the pool. The patrol interval controls a patrol thread, which runs every n minutes and destroys threads that have not been used within a specific interval of time. These two properties are set as follows:
# This sets thread pool size to 30
JVP.dbathreadpool.poolsize:30
# This sets patrol interval to 5 minutes
JVP.dbathreadpool.patrolinterval:5
The thread pool or the patrol thread can be disabled by explicitly setting the properties to 0. If not specified in the JVPPROPFILE, the default pool size is 20, and the default patrol interval is 5 minutes.
Information pertaining to J/Foundation can be found in these documents:
There are also some code examples in the examples subdirectory. See the README file in that directory for details.
Visit the JavaSoft web site (http://java.sun.com) for information about JDBC 1.0 and JDBC 2.0.
A recommendation for performance is to minimize the level of tracing. The trace levels are described by the example properties file in the distribution directory (.jvpprops.template). Another way to improve performance is to use the nondebuggable versions of JDK libraries, the Java in the Server jar file, and the JDBC driver jar file while setting the corresponding parameters in the server configuration file. (The current Windows NT release does not support debuggable versions of JDK libraries).
NOTE: In case of failures, high levels of tracing and debuggable versions of libraries will provide maximum information in identifying the source of the failure.
This release requires users to explicitly call the close() method on the instances of the following classes to avoid memory leaks:
A technique for obtaining root access by using IBM Informix software was published on the Internet on 4 September 2001. A Tech Alert describes the exploit and provides directions for obtaining a script that fixes the permissions on the executables that are used to obtain root access. Please refer to the Tech Alert index page:
http://www.Informix.com/Informix/services/ilink/alerts/alerts.htm
The specific Tech Alert about the root exploit is on the following web page:
http://www.Informix.com/Informix/services/ilink/alerts/091301_152768n152769n152770n152789.htm
This release of IBM Informix Dynamic Server includes the most important of the changes recommended by the Tech Alert, but you can further improve the security of your system by reading the Tech Alert to understand the issue and by running the script ibmifmx_security.sh, located in $INFORMIXDIR/bin. The header of the script explains how to use it.
You should also take care to ensure that the following security precautions are implemented:
For more information, inquire about PTS defects #152768, #152769, #152770, and #152789.
Also, remember to follow these basic security rules for IBM Informix software:
Similarly, all device files (raw disks) and any cooked files that are used for chunks must implement all of the following security features:
Starting with Informix Dynamic Server Version 9.21.UC7, the DB/Cockpit utility is no longer supported. The functionality that it provided is available through ISA.
Transaction processing should be handled outside of Java UDRs. Attempting to handle transaction processing within a Java UDR is not supported and might lead to inconsistent behavior. Thus, a Java UDR can be embedded within a transaction, but it should not contain a transaction or part of one.
Version 9.21 is the first release of J/Foundation to support JDK Version 1.2. IBM Informix supports the reference JDK 1.2 from Sun, which is distributed with the rest of J/Foundation. However, the distribution includes only the Java runtime environment (JRE 1.2). To compile user-defined Java routines, use any standard JDK 1.2 production version. Also use the standard production JDK 1.2 for any client-side applications.
NOTE: JRE 1.2 is distributed with J/Foundation solely for the purpose of running server-side Java user-defined routines. With JDK 1.2 support in J/Foundation, users can call any standard documented JDK 1.2 package API, such as java.net and java.io, within their Java user-defined routines. Check the manual for the few packages, such as the Swing components, that cannot be used because they do not make semantic sense within Java UDRs. JDBC 2.0 features in the java.sql package of JDK 1.2 can also be used. For the list of JDBC 2.0 features available in Version 9.21, check the manual.
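As a sketch, the body of a Java UDR can use standard JDK 1.2 packages such as java.io. The class and method names below are hypothetical, and registering the routine with the database server (CREATE FUNCTION ... EXTERNAL NAME) is described in the J/Foundation documentation; the main() method is included here only so the class runs standalone.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Hypothetical UDR body: counts the lines in a text value
// using only standard java.io classes available in JDK 1.2.
public class LineCount {
    public static int countLines(String text) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader(text));
        int n = 0;
        while (in.readLine() != null) {
            n++;
        }
        in.close();   // release the reader explicitly
        return n;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countLines("a\nb\nc"));
    }
}
```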
The ON-Archive utility included in this release is no longer being enhanced and is scheduled to be removed from a future version of the database server. Customers are advised to begin transitioning to either the ontape or ON-Bar backup utility in its place.
The DB/Cockpit utility provided in this release is no longer being enhanced and is scheduled to be removed from a future version of the database server.
Defect 122682: Select DE_DE database using QA_QA locale setting returns -23101 where -23115 is expected.
An incorrect error message is returned. This message will not be fixed in Version 9.21.
Defect 123282: Unable to perform DELETE (error 12097) on an NLT (raw table) when a rollback of an INSERT operation is performed.
This condition will not be fixed in Version 9.21. Instead of using DELETE, perform a rollback of the INSERT operation on the raw table.
Defect 125987: Batch update of 3000 statements within a UDR triggers huge memory consumption (more than 1.5 gigabytes) and thousands of scan threads.
Huge memory consumption and table lock overflow during JDBC operations in UDRs are possible in extreme cases. This situation will not be fixed in Version 9.21. Be aware that the batch update implementation has this restriction.
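One defensive approach is to flush batches in smaller chunks rather than accumulating thousands of statements before executing. The sketch below models only the bookkeeping: the list stands in for Statement.addBatch() and executeBatch(), and the 500-statement threshold is an arbitrary assumption, not a documented server limit.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunks {
    // Hypothetical flush threshold; tune for your workload.
    static final int BATCH_SIZE = 500;

    public static void main(String[] args) {
        int total = 3000;          // total statements to apply
        List batch = new ArrayList();  // stands in for the JDBC batch
        int flushes = 0;

        for (int i = 0; i < total; i++) {
            batch.add("UPDATE ...");   // stands in for addBatch()
            if (batch.size() == BATCH_SIZE) {
                batch.clear();         // stands in for executeBatch()
                flushes++;
            }
        }
        if (!batch.isEmpty()) {        // flush any remainder
            batch.clear();
            flushes++;
        }
        System.out.println("flushes=" + flushes);
    }
}
```

Flushing every 500 statements bounds the memory held by the batch at any one time instead of letting all 3000 statements accumulate.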