You might encounter problems while migrating from an older version of WebSphere Application Server.
You receive messages like the following, with a stack trace that contains the com.ibm.ws.sib.mfp.MessageDecodeFailedException exception:

CWSIC1002E: An internal error occurred. An object of class JsMessage cannot be created because of exception MessageDecodeFailedException
com.ibm.ws.sib.mfp.MessageDecodeFailedException: com.ibm.ws.sib.mfp.jmf.JMFSchemaViolationException: messageType not compatible with null...

CWSIC1003E: An internal error occurred. An object of class JsMessage cannot be created because of exception MessageDecodeFailedException
This problem can occur if you are trying to run the WASPostUpgrade tool or the WASPreUpgrade tool from a directory other than app_server_root\bin. Verify that the WASPostUpgrade or WASPreUpgrade scripts reside in the app_server_root\bin directory, and launch either file from that location.
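For example, a minimal sketch on a UNIX-type system; the installation and backup paths are hypothetical:

cd /opt/WebSphere/AppServer/bin                                        # app_server_root/bin of the Version 6.0.x installation
./WASPreUpgrade.sh /tmp/migration_backup /opt/WebSphere/AppServer5     # backup directory, then the existing installation root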
The administrative console no longer displays deprecated JDBC provider names. The new JDBC provider names used in the administrative console are more descriptive and less confusing. The new providers will differ only by name from the deprecated ones.
The deprecated names will continue to exist in the jdbc-resource-provider-templates.xml file for migration reasons (for example, for existing JACL scripts); however, you are encouraged to use the new JDBC provider names in your JACL scripts.
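For example, a hedged sketch of listing the JDBC provider templates, and therefore the new provider names, using wsadmin in local mode with JACL as the scripting language:

./wsadmin.sh -conntype NONE -lang jacl -c '$AdminConfig listTemplates JDBCProvider'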
The WASPreUpgrade tool saves selected files from WebSphere Application Server Version 4.x and Version 5.x bin directories. It also exports the existing application server configuration from the repository.
If you are migrating from WebSphere Application Server Version 4.0.x Advanced Edition, the WASPreUpgrade command calls the XMLConfig command to export the existing application server configuration from the repository. If errors occur during this part of the WASPreUpgrade command, you might have to apply fixes to the installation to successfully complete the export step. Contact IBM Support for the latest applicable fixes.
You receive the following message when you run the WASPostUpgrade tool:

MIGR0002I: java com.ibm.websphere.migration.postupgrade.WASPostUpgrade backup_directory_name -adminNodeName primary_node_name [-nameServiceHost host_name [-nameServicePort port_number]] [-substitute "key1=value1[;key2=value2;[...]]"] ("In input xml file, the key(s) should appear as $key$ for substitution.") [-import xml data file] [-traceString trace specification [-traceFile filename]]

The most likely cause of this error is that WebSphere Application Server Version 4.0.x or 5.x is installed and the WASPostUpgrade tool was not run from the bin directory of the WebSphere Application Server Version 6.0.x installation root.

To correct this problem, run the WASPostUpgrade command from the bin directory of the WebSphere Application Server Version 6.0.x installation root.
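For example, assuming a hypothetical Version 6.0.x installation root and the backup directory produced by WASPreUpgrade:

cd /opt/WebSphere/AppServer/bin          # bin directory of the Version 6.0.x installation root
./WASPostUpgrade.sh /tmp/migration_backup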
During the course of a deployment manager or a managed node migration, WASPostUpgrade disables the old environment. If, after running WASPostUpgrade, you want to run WASPreUpgrade again against the old installation, you must run the migrationDisablementReversal.jacl script located in the old app_server_root/bin directory. After you run this JACL script, your Version 5.x environment is in a valid state again, so that WASPreUpgrade can produce valid results.
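A minimal sketch of running the script, assuming a hypothetical Version 5.x installation root; the -conntype NONE option lets wsadmin run the script without connecting to a running server:

cd /opt/WebSphere/AppServer5/bin         # old app_server_root/bin
./wsadmin.sh -f migrationDisablementReversal.jacl -conntype NONE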
During the course of a federated migration, the migration process makes a call to the deployment manager to perform a portion of the migration. If, after seeing the message MIGR0388I, you see a SOAP/RMI connection timeout exception, rerun WASPostUpgrade with the -connectionTimeout parameter set to a value greater than the default of 10 minutes. A good rule of thumb is to double the value and run WASPostUpgrade again.
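For example, to double the default timeout when rerunning the migration (the backup directory is hypothetical, and any other parameters from your original invocation still apply):

./WASPostUpgrade.sh /tmp/migration_backup -connectionTimeout 20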
The logs and everything else involved in the migration for this node on the deployment manager node are located in a folder such as the following:

/websphere60/appserver/profiles/dm_profile/temp/nodeX_migration_temp

You will also need this folder if you contact IBM support about this scenario.
During a federated migration, if any of the Version 6.0.x applications fail to install, they will be lost during the synchronization of the configurations. This happens because one of the final steps of WASPostUpgrade is to run a syncNode command, which downloads the configuration from the deployment manager node and overwrites the configuration on the federated node. If the applications fail to install, they will not be in the configuration on the deployment manager node. To resolve this issue, manually install the applications after migration. If they are standard Version 6.0.x applications, they are located in the app_server_root/installableApps directory.
Manually install the applications using wsadmin after WASPostUpgrade has completed.
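A hypothetical sketch using wsadmin in JACL mode against the deployment manager; the host, port, EAR file, and target names are placeholders, and the -user and -password options are also needed if security is enabled:

./wsadmin.sh -conntype SOAP -host dmgr_host -port 8879 \
  -c '$AdminApp install /opt/WebSphere/AppServer/installableApps/DefaultApplication.ear {-node myNode -cell myCell -server server1}' \
  -c '$AdminConfig save'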
This indicates that a configuration error was detected before the migration process began. The error can be due to either incorrect data entered when you created the migration jobs or a configuration problem. Review the log output for the error detected, then correct the problem and rerun the job. The logs are located in temporary_directory_location/nnnnn, where temporary_directory_location is the value that you specified when you created the migration jobs (the default is /tmp/migrate) and nnnnn is a unique number that is generated and displayed during the creation of your migration jobs; it is also displayed in the JESOUT DDNAME of the WROUT and WRERR steps of your batch job stream.
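For example, assuming the default /tmp/migrate location and a hypothetical generated number of 55673:

ls -l /tmp/migrate/55673               # list the migration job logs
grep -i error /tmp/migrate/55673/*     # search the logs for the reported error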
In the event of failure in the migration job after the Verify step, you can rerun the migration job, but you must first delete the WebSphere Application Server for z/OS configuration home directory that was created in the CRHOME step. This directory corresponds to the home directory that you entered when you created the migration jobs, and it can also be found in the migration JCL environment variable V6_HomeDir. Because the migration procedure creates a new configuration HFS for each node being migrated, it is a simple process to delete the configuration and start from scratch.
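A sketch of clearing the configuration file system before rerunning the job, assuming a hypothetical V6_HomeDir value of /WebSphere/V6R0/AppServer; because the directory itself is typically a mount point, only its contents are removed:

rm -rf /WebSphere/V6R0/AppServer/*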
A federated node is the most complex node to migrate because it is essentially two migrations rolled into one. A federated node requires a migration of the node configuration information contained in the deployment manager's master repository, as well as the configuration information contained in the federated node. Federated node migration requires a JMX connection to the deployment manager as well as an active connection to the configuration repository on the deployment manager. If you have security enabled, it is essential that you follow the instructions that were generated when you created the migration jobs. The migration job must be submitted with a WebSphere Administrator's user ID that has been properly configured for obtaining secure connections. Failure to do so will result in the inability to initiate the migration process on the deployment manager. You can verify that the user ID can establish such a connection by running a wsadmin command against the deployment manager, supplying values appropriate to your environment:
./wsadmin.sh -conntype SOAP -username user_ID -password password -host dmgr_host -port soap_port
The federated migration process runs inside the deployment manager's servant and obtains a JMX connection to an active repository to perform its tasks. The size of your node's configuration determines the length of time required to perform the migration. If a timeout occurs, the migration task on the deployment manager continues to run and might run to completion; however, the migration will not have completed on the node being migrated. In this case, wait for the processing on the deployment manager to complete before proceeding. To increase the timeout value, modify the connectionTimeout=120 parameter in the migration JCL, which specifies the timeout interval in minutes. Then follow the instructions in "Job fails after the Verify step" above to proceed.
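For example, doubling the interval in the migration JCL (the value shown is illustrative):

connectionTimeout=240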
The migration procedure that runs on the deployment manager produces a separate set of log files that is not located in the same place as the federated node's log files and is not copied to the output logs of your batch migration job. Debugging failures on the deployment manager during a federated node migration requires inspecting the log files on the deployment manager's configuration file system. These log files are placed under the /temp directory of your deployment manager's default profile, in a folder prefixed with the name of the node that is being migrated, for example:
/WebSphere/V6R0/DeploymentManager/profiles/default/temp/SY1_migration_temp
On failure, a Migration Failed file is present in this directory. Examine the deployment manager servant's log for additional messages that might indicate the cause of the failure.
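For example, to check for the failure marker in the directory from the example above (substitute your own deployment manager profile path and node name):

ls -l /WebSphere/V6R0/DeploymentManager/profiles/default/temp/SY1_migration_temp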
The migration logs are located in temporary_directory_location/nnnnn, where temporary_directory_location is the value that you specified when you created the migration jobs (the default is /tmp/migrate) and nnnnn is a unique number that was generated and displayed during the creation of your migration jobs. Normally, the space requirements for the migration logs are small. If you enable tracing, however, the log files can be quite large. The best practice is to enable tracing only after problems have been found. If tracing is required, try to enable only the tracing related to the step in the process that is being debugged. This helps to reduce the space requirements. For example, the following settings enable tracing for the WASPostUpgrade step only:
TraceState=enabled profileTrace=disabled preUpGradeTrace=disabled postUpGradeTrace=enabled
During migration, a backup copy of your Version 5.x configuration is made. This backup becomes the source of the information being migrated. The default backup location is /tmp/migrate/nnnnn. This location can be changed when you create the migration jobs. It is specified by the V5_BackupDirectory variable. Depending on the size of the node being migrated, this backup can be quite large. If your temporary space is inadequate, then you will need to relocate this backup.
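For example, to compare the available temporary space with the size of an existing backup (the directory name is hypothetical):

df -k /tmp                        # available space in the default backup location
du -sk /tmp/migrate/55673         # size of the node's backup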
Each z/OS installation is different with respect to job classes and time limitations. Make sure you have specified appropriate job classes and timeout values on your job card.
Review the instructions that were generated when you created the migration jobs. Verify that the JCL procedures have been copied over correctly to your PROCLIB, the RACF definitions have been created, the Version 6.0.x libraries have been authorized, and, if required, your STEPLIB statements to the Version 6.0.x libraries have been specified. Make sure that the daemon process associated with your cell is at the appropriate level. The daemon process must be at the highest WebSphere Application Server for z/OS version level of all servers that it manages within the cell.
If you update Version 6.0 or 6.0.1 to Version 6.0.2 before migrating the Version 5.x deployment manager, you can use the Version 6.0.2 administrative console to add a Version 5.x member to a Version 5.x cluster.
If you did not find your problem listed, contact IBM support.