When you profile an application, the console view does not appear in the Profiling and Logging perspective by default.
To open the console view in the Profiling and Logging perspective, select Window->Show View->Console.
To get stdout to appear in the Console view, click Window->Preferences->Run/Debug->Console and select "Show when program writes to standard out".
When creating a new Probekit source file, the wizard lets you choose the XML encoding to use. The default selection is ASCII. If you want to use non-ASCII characters anywhere in the probe source file (for example, in the Label or Description fields, or in a fragment's Java code), you must choose UTF-8 encoding, not ASCII.
To change the encoding of an existing probe source file, right-click on the file and select Open With -> Text Editor. Change the encoding in the XML header to "UTF-8" and save and close the file. Right-click again and choose Open With -> Probe Editor to edit the contents.
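For example, after this change the XML declaration at the top of the probe source file should read as follows (assuming the usual XML version of 1.0):
<?xml version="1.0" encoding="UTF-8"?>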
The Leak Analysis feature is not available for user programs run on OS/400® iSeries(TM). The Hyades Optimized heap dumps generated on this platform are incomplete, and it is not possible to generate heap dumps in any other format.
The performance of the profiling tools is directly related to the amount of data being collected and the rate at which this data is transferred to the workbench. As the amount of data collected increases, you will experience decreased performance, both in the time it takes to do analysis and in the memory available for performing other tasks. There are several ways to improve profiling performance:
- A good place to start is to collect the minimum amount of data you think is sufficient to profile a given piece of functionality. You can do this by setting up a more efficient filter in the profiling launch configuration. In the Run->Profile dialog, select the Profiling tab, select a profiling set, click Edit, and then click Next > to reach the Filter dialog. Use a filter that includes only the areas of interest; you can always change the filter to include different or additional data on a later run.
- If you don't want to profile the startup code, try unchecking the "Automatically start monitoring on application launched" checkbox on the Profiling - Limits tab of the Run->Profile dialog. This reduces the time it takes to launch the program being analyzed and keeps start-up code out of the profile. Note that to start profiling, you must click the "start monitoring" toolbar button in the Profiling Monitor after the workspace comes up.
- You can try redirecting the profiling output to a file; this uses less memory in RAD. You can then import the file into RAD later, in a session devoted only to examining the profiling data, so that more memory is free for that task. To do this, open the Run->Profile dialog, select the Profiling tab, then the Destination subtab, and select the "send profiling data to a file" checkbox before profiling. Later, use the File->Import dialog and select the Profiling File type. Note that you cannot view the data while profiling if this option is selected; you must import the file first, then view it. You can reduce memory use further by importing only a portion of the profiling file in the Import dialog; importing different portions and examining them independently may help.
- Profiling can result in a lot of memory overhead, so it might help to increase the memory available to RAD. To start RAD with an initial Java heap of 512MB (and a maximum of 1GB), add the following line to your rationalsdp.ini file:
VMArgs=-Xms512m -Xmx1024m
- If the problem occurs during data collection on the target system, try increasing the size of the buffer the agent uses to send data to RAD. Add the following line to serviceconfig.xml and restart the agent; this increases the buffer size to 256MB. For very CPU-intensive applications, increasing the data channel size further also helps:
<Agent configuration="default" name="Java Profiling Agent" dataChannelSize="256M" type="profiler"/>
When collecting the binary Hyades Optimized heap dumps, if you send the data to a trcxml file by selecting "Send profiling data to a file", please be aware of the following:
You must have Agent Controller running on the deployment host to access the heap files that are saved there. The first time you run Import->Profiling file on the trcxml file, leak analysis and viewing Object Reference Graphs work as expected.
If you run Import->Profiling file a second time, the import works, but attempts to run Leak Analysis or view an Object Reference Graph may fail. This is because the heap files that are required may no longer be available on the deployment host.
If you encounter this problem, please access the heap files from the project where you first imported the trcxml file. The heap files are in a directory named "leakanalysisheapdir" under the project directory.
The IBM® OS/390® SVC heap dumps are very large. Expanding large heap dumps to view them in the Object Reference Graph view can take a long time; as a result, the operation may seem to hang. The workbench may still be actively expanding the heap dump even when the progress monitor appears stuck at 100%.
Performing the "Capture heap dump" action generates Hyades Optimized heap dumps on the host where the target application is deployed. The heap dump destination directory is controlled by the setting of LOCAL_AGENT_TEMP_DIR in Agent Controller's configuration file, serviceconfig.xml. For information on locating and modifying this file, see the Help topic "Administering the Agent Controller" under "Detecting and analyzing runtime problems."
If you get either of the following error messages, "Expand Heap Dump failed in step: ...Reading file" or "Leak Analysis failed in step: Creating heap object reference graph", please verify that Agent Controller is running on the deployment host and retry your command. Agent Controller assists in copying the files from the deployment host to the workbench project directory.
If you experience problems during leak analysis, you may find the Leak Analysis log file helpful.
During Leak Analysis, diagnostic information is written to the LeakAnalysis.log file. LeakAnalysis.log contains the output of the various steps performed during leak analysis and will indicate the success or failure of the leak analysis run.
LeakAnalysis.log is written to the profiling project associated with the profile data. For example, on Windows, <my_workspace>\ProfileProject\LeakAnalysis.log.
Additional information can be written to the log file by using the RADLEAKREGIONDUMP system property. Add this option to the rationalsdp.ini file:
VMArgs=-DRADLEAKREGIONDUMP=1
The rationalsdp.ini file is found in the Rational Software Architect installation directory.
If your leak analysis fails with the following message in the LeakAnalysis.log file, 'JVMDUMP006I Processing Dump Event "uncaught", detail "java/lang/OutOfMemoryError"', you must increase the heap size of the leak analysis process.
To do this, set the Rational Software Architect system property RADLEAKJVMSIZE. This property controls the JVM heap size available during leak analysis.
To set RADLEAKJVMSIZE, add this option to the rationalsdp.ini file:
VMArgs=-DRADLEAKJVMSIZE=value
Where value is the new heap size limit, such as 1024M. The default value is 512M. You must indicate whether the heap size is expressed in megabytes or gigabytes (M or G).
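For example, to allow the leak analysis process a 1GB heap, the line would read:
VMArgs=-DRADLEAKJVMSIZE=1024M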
The rationalsdp.ini file is found in the Rational Software Architect installation directory.
When using the IBM classic JVM with the Thread Analysis profiling feature, the Thread View of the Profiling and Logging perspective does not display 'Waiting for lock' states for all the threads involved in a deadlock. This is due to missing information in the collected data. Workaround: Use the IBM J9 JVM by adding -Xj9 in the VM arguments field of the Arguments tab of the Profile dialog.
Probekit source files with non-ASCII characters in their names will not be processed correctly. Use only ASCII characters in Probekit source file names.
Do not use the Probekit->Compile action that appears in the context menu for *.probe files. Instead, convert the project containing the *.probe file to a Probekit project, and use the standard build mechanism. (To convert a Java project to a Probekit project, use File->New->Other and from the Profiling and Logging section choose Convert Java projects to Probekit projects).
Do not use non-ASCII characters in the patterns for Probekit "Target" specifications. Probes which contain non-ASCII characters in Target patterns will not be processed correctly.
Do not use non-ASCII characters when adding method patterns for "Flush coverage data when..."
If you enter non-ASCII characters in the package, class, or method fields of the method pattern Add dialog, an invalid input error is displayed and you will not be able to dismiss the dialog.
Workaround: Use a wildcard (asterisk) character in place of the non-ASCII characters in your patterns.
An EXCLUDE filter beginning with a wildcard character (asterisk), such as "*foo", causes the Coverage Statistics, Coverage Navigator and Annotated Source views to display no data. Workaround: Do not use such an EXCLUDE filter.
Before you can collect profiling data, Agent Controller must be running on the machine from which you intend to collect the data. On Red Hat Linux machines, Agent Controller requires the libstdc++.so patch libstdc++-libc6.2-2.so.3.
The Leak Analysis feature is not available for user programs running the IBM J9 JVM.
The IBM J9 JVM creates heap files with names similar to heapdump.20041012.093936.2192.dmp when you set the environment variable IBM_HEAPDUMP and send "kill -3" signals to the running Java process. These .dmp files must then be post-processed by running j9extract and jdmpview to create IBM heap dumps.
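As a minimal sketch of this procedure on the deployment host (the IBM_HEAPDUMP value shown and the process ID placeholder are illustrative; consult your JVM documentation for the exact settings):
export IBM_HEAPDUMP=true
kill -3 <java_pid>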
The format of these heap dumps is not identical to the format of heap dumps generated by the classic IBM JVM.
If you import multiple sets of heap dumps with the same monitor name into an existing project, you may lose data if you later save the project or exit the workbench.
To prevent this, specify a unique Project/Monitor combination for each set of heap dumps that you import.
If you start a WAS server and attach to it, Probekit and Line Level Coverage profiling types will not collect data for any class that has already been loaded in the target JVM. Workaround: To collect data from these classes, restart the project containing these classes.
While profiling your WAS applications for leak analysis on Linux, the optheap files are placed in the following locations:
- For WAS 6.0, in runtimes/base_v6/profiles/default in the Rational Software Architect installation directory.
- For WAS 5.x, in the Rational Software Architect installation directory.
During profiling, all double-byte characters appear as question marks (????) in the Console view.
The locale setting on the workbench host, the remote deployment host, and the target application, must all be the same when collecting Hyades Optimized heap dumps.
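For example, on Linux you can compare the active locale on each host with the locale command, and set it before launching the agent or the application; the locale name below is purely illustrative:
locale
export LANG=en_US.UTF-8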
When profiling for Thread Analysis with IBM JVM 1.4.1 or earlier, the Threads View in the Profiling and Logging perspective does not show the thread owner of lock monitors as this data is not collected. Workaround: Upgrade to IBM JRE 1.4.2.
When profiling remotely on Solaris, a defect in the Sun 1.4.x JRE prevents profiling for some combinations of features, especially with memory profiling or thread analysis enabled. Sun's site describes this problem: http://developer.java.sun.com/developer/bugParade/bugs/4614956.html Workaround: Use Sun JRE 1.4.2_06 or later.
Return to the main readme file