The application server is a Java-based process and requires a Java virtual machine (JVM) environment to run and to support the Java applications that it hosts. You can configure the Java runtime environment to tune performance and system resource usage. This topic applies to the IBM Technology for Java Virtual Machine. Refer to the topic Tuning the Classic JVM if you are using the IBM Developer Kit for Java that is provided with the i5/OS product.
Issue the java -fullversion command from within your application server app_server_root/java/bin directory.
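For example, on a UNIX-based system:

cd app_server_root/java/bin
./java -fullversion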
In response to this command, the JVM writes information about itself, including the JVM provider information, to stderr.
On the z/OS platform there is a JVM in both the controller and servant. This information applies to the JVM in the servant. Usually the JVM in the controller does not need to be tuned.
A Java runtime environment provides the execution environment for Java-based applications and servers, such as WebSphere Application Server. Therefore, the Java configuration plays a significant role in determining the performance and system resource consumption for the product and for the applications that you are running.
IBM Java 5.0 and later versions include major improvements in virtual machine technology that provide significant performance and serviceability enhancements over IBM's earlier Java execution technology. Refer to the Web site http://www.ibm.com/software/webservers/appserv/was/performance.html for more information about this technology.
Even though JVM tuning depends on the JVM provider that you use, some general tuning concepts apply to all JVMs. These general concepts include compiler tuning, Java memory or heap tuning, and garbage collection tuning.
The following steps provide specific instructions on how to perform the following types of tuning for each JVM. The steps do not have to be performed in any specific order.
To determine the setting for the Enable the JIT property, in the administrative console, click Servers > Application servers > server_name. Then, in the Server Infrastructure section, click Java and process management > Process definition, select either Control, Servant, or Adjunct, and then click Java virtual machine.
In some environments, such as a development environment, it is more important to optimize the startup performance of your application server rather than the runtime performance. In other environments, it is more important to optimize the runtime performance. By default, IBM virtual machines for Java are optimized for runtime performance, while HotSpot based JVMs are optimized for startup performance.
The Java Just-In-Time (JIT) compiler has a big impact on whether startup or runtime performance is optimized. The initial optimization level that the compiler uses influences the length of time it takes to compile a class method, and the length of time it takes to start the server. For faster startups, reduce the initial optimization level that the compiler uses. However, if you reduce the initial optimization level, the runtime performance of your applications might be degraded because the class methods are now compiled at a lower optimization level.
The -Xquickstart setting causes the IBM virtual machine for Java to use a lower optimization level for class method compiles. A lower optimization level provides for faster server startup, but lowers runtime performance. If this parameter is not specified, the IBM virtual machine for Java defaults to a high initial optimization level for compiles, which results in faster runtime performance, but slower server startup.
Default: High initial compiler optimization level
Recommended: High initial compiler optimization level
Usage: -Xquickstart provides faster server startup.
For additional startup time improvements, you can combine -Xquickstart with the -Xverify:none option, which disables class verification. Specify -Xverify:none only if you trust the classes that the server loads:

-Xquickstart -Xverify:none
In certain error conditions, multiple application server threads might fail, and the JVM requests a TDUMP for each of those threads. This situation can cause a large number of TDUMPs to be taken concurrently, leading to other problems, such as a shortage of auxiliary storage. You can use the JAVA_DUMP_OPTS environment variable to indicate the number of dumps that you want the JVM to produce in certain situations. However, this variable does not affect the number of TDUMPs that are generated because of com.ibm.jvm.Dump.SystemDump() calls from applications that are running on the application server.
JAVA_DUMP_OPTS=ONANYSIGNAL(JAVADUMP[3],SYSDUMP[1]),ONINTERRUPT(NONE)
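In this example, the JVM produces at most three JAVADUMPs and one SYSDUMP (TDUMP) when any signal is raised, and produces no dumps when the process is interrupted.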
See the IBM Developer Kit Diagnostics Guide for more information on using the JAVA_DUMP_OPTS environment variable.
The Java heap parameters influence the behavior of garbage collection. Increasing the heap size supports more object creation. Because a large heap takes longer to fill, the application runs longer before a garbage collection occurs. However, a larger heap also takes longer to compact and causes garbage collection to take longer.
The IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, Version 5.0 Diagnostics Guide, which is available on the developerWorks Web site, provides additional information on tuning the heap size.
Java Heap information is contained in SMF records and can be viewed dynamically using the console command DISPLAY,JVMHEAP.
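For example, the following MVS MODIFY command displays the heap information for a server; server_name is a placeholder for the short name of your server:

F server_name,DISPLAY,JVMHEAP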
To use the administrative console to configure the heap size, navigate to the Java virtual machine panel, as described previously, and then specify new values in the Initial heap size field, the Maximum heap size field, or both fields if you need to adjust both settings.
The Initial heap size setting specifies, in megabytes, the amount of storage that is allocated for the JVM heap when the JVM starts. The Maximum heap size setting specifies, in megabytes, the maximum amount of storage that can be allocated to the JVM heap. Both of these settings have a significant effect on performance.
When tuning a production system where the working set size of the Java application is not understood, a good starting value for the initial heap size is 25% of the maximum heap size. The JVM then tries to adapt the size of the heap to the working set size of the application.
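For example, if the maximum heap size is 512 megabytes, an initial heap size of 128 megabytes (25% of 512 megabytes) is a reasonable starting point, which you can express with the heap parameters that are described later in this topic:

-Xms128m -Xmx512m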
The illustration represents three CPU profiles, each running a fixed workload with varying Java heap settings. In the middle profile, the initial and maximum heap sizes are set to 128MB. Four garbage collections occur. The total time in garbage collection is about 15% of the total run. When the heap parameters are doubled to 256MB, as in the top profile, the length of the work time increases between garbage collections. Only three garbage collections occur, but the length of each garbage collection is also increased. In the third profile, the heap size is reduced to 64MB and exhibits the opposite effect. With a smaller heap size, both the time between garbage collections and the time for each garbage collection are shorter. For all three configurations, the total time in garbage collection is approximately 15%. This example illustrates an important concept about the Java heap and its relationship to object utilization. There is always a cost for garbage collection in Java applications.
If the heap free space settles at 85% or more, consider decreasing the maximum heap size values because the application server and the application are under-utilizing the memory allocated for heap.
If you have servers configured to run in 64-bit mode, you can specify a JVM maximum heap size for those servers that is significantly larger than the default setting. For example, you can specify an initial maximum heap size of 1844m for the controller and the servant if the server is configured to run in 64-bit mode.
You can also use the following command line parameters to adjust these settings. These parameters apply to all supported JVMs and are used to adjust the minimum and maximum heap size for each application server or application server instance.
The -Xms setting controls the initial size of the Java heap. Properly tuning this parameter reduces the overhead of garbage collection, which improves server response time and throughput. For some applications, the default setting for this option might be too low, which causes a high number of minor garbage collections.
Default: 50MB. This default value applies for both 31-bit and 64-bit configurations.
Recommended: Workload specific, but higher than the default.
Usage: -Xms256m sets the initial heap size to 256 megabytes.
The -Xmx setting controls the maximum size of the Java heap. Increasing this parameter increases the memory available to the application server and reduces the frequency of garbage collection. Increasing this setting can improve server response time and throughput. However, increasing this setting also increases the duration of a garbage collection when it does occur. Never increase this setting above the system memory that is available for the application server instance. Increasing the setting above the available system memory can cause system paging and a significant decrease in performance.
Default: 256MB. This default value applies for both 31-bit and 64-bit configurations.
Recommended: Workload specific, but higher than the default, depending on the amount of available physical memory.
Usage: -Xmx512m sets the maximum heap size to 512 megabytes.
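For example, to set both values for an application server, you can add the following options, with values appropriate for your workload, to the Generic JVM arguments field on the Java virtual machine panel:

-Xms256m -Xmx512m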
You can check whether the application is overusing objects by observing the counters for the JVM runtime. To enable the Java virtual machine profiler interface (JVMPI) counters, set the -XrunpmiJvmpiProfiler command line option, and set the JVM module maximum level. The best result for the average time between garbage collections is at least 5-6 times the average duration of a single garbage collection. If you do not achieve this number, the application is spending more than 15% of its time in garbage collection.
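For example, if a single garbage collection averages 100 milliseconds, aim for at least 500 to 600 milliseconds of application run time between collections; at that ratio, garbage collection accounts for roughly 14-17% of the total execution time, which is near the 15% threshold.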
If the information indicates a garbage collection bottleneck, there are two ways to clear the bottleneck. The most cost-effective way to optimize the application is to implement object caches and pools. Use a Java profiler to determine which objects to target. If you cannot optimize the application, adding memory, processors, and clones might help. Additional memory allows each clone to maintain a reasonable heap size. Additional processors allow the clones to run in parallel.
Memory leaks in the Java language are a dangerous contributor to garbage collection bottlenecks. Memory leaks are more damaging than memory overuse, because a memory leak ultimately leads to system instability. Over time, garbage collection occurs more frequently until the heap is exhausted and the Java code fails with a fatal out-of-memory exception. Memory leaks occur when an unused object has references that are never freed. They most commonly occur in collection classes, such as Hashtable, because the table always holds a reference to the object, even after all real references to it are deleted.
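The following minimal Java sketch illustrates the pattern; the SessionRegistry class and its names are hypothetical:

import java.util.Hashtable;

public class SessionRegistry {
    // The static table keeps a reference to every value that is ever added.
    private static final Hashtable<String, Object> cache =
            new Hashtable<String, Object>();

    public static void handleRequest(String sessionId, Object sessionData) {
        cache.put(sessionId, sessionData);
        // ... work with sessionData ...
        // The entry is never removed, so even after the caller discards all
        // other references to sessionData, the Hashtable keeps the object
        // reachable and the garbage collector can never reclaim it. The fix
        // is to call cache.remove(sessionId) when the session ends.
    }
}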
High workload often causes applications to crash immediately after deployment in the production environment. This outcome is especially likely for leaking applications, where the high workload accelerates the leak until a memory allocation failure occurs.
Memory leak problems can manifest only after a period of time; therefore, memory leaks are found most easily during long-running tests, and short-running tests can lead to false alarms. It is sometimes difficult to know when a memory leak is occurring in the Java language, especially when memory usage has seemingly increased either abruptly or monotonically in a given period of time. These kinds of increases can be valid or might be the intention of the developer, which is what makes a leak hard to detect. You can learn how to differentiate the delayed use of objects from completely unused objects by running applications for a longer period of time. Long-running application testing gives you higher confidence about whether the delayed use of objects is actually occurring.
In many cases, memory leak problems occur through successive repetitions of the same test case. The goal of memory leak testing is to establish a big gap between unusable memory and used memory in terms of their relative sizes. By repeating the same scenario over and over again, the gap multiplies steadily. This approach helps when the number of leaks caused by a single execution of a test case is so small that it is hardly noticeable in one run.
You can use repetitive tests at the system level or the module level. The advantage of modular testing is better control. When a module is designed to keep its memory usage private, without creating external side effects, testing for memory leaks is easier. First, record the memory usage before running the module. Then, run a fixed set of test cases repeatedly. At the end of the test run, record the current memory usage and check it for significant changes. Remember, garbage collection must be suggested when you record the actual memory usage, either by inserting a System.gc() call in the module where you want garbage collection to occur, or by using a profiling tool to force the event to occur.
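The following minimal sketch shows the module-level approach; ModuleUnderTest and its runScenario() method are hypothetical placeholders for the module being tested:

public class LeakTest {
    // Suggests a garbage collection and then returns the heap usage in
    // bytes, so that the reading reflects live objects rather than garbage.
    static long usedMemory() {
        Runtime runtime = Runtime.getRuntime();
        runtime.gc();
        return runtime.totalMemory() - runtime.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedMemory();
        for (int i = 0; i < 10000; i++) {
            ModuleUnderTest.runScenario(); // the fixed set of test cases
        }
        long after = usedMemory();
        // Growth that scales with the repetition count suggests a leak.
        System.out.println("Memory growth: " + (after - before) + " bytes");
    }
}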
Some memory leak problems occur only when several threads are running in the application. Unfortunately, synchronization points are very susceptible to memory leaks because of the added complication in the program logic. Careless programming can lead to kept or unreleased references. The incidence of memory leaks is often facilitated or accelerated by increased concurrency in the system. The most common way to increase concurrency is to increase the number of clients in the test driver.
Also, look at the difference between the number of objects allocated and the number of objects freed. If the gap between the two increases over time, there is a memory leak.
Heap consumption that indicates a possible leak during a heavy workload (when the application server is consistently near 100% CPU utilization), but that appears to recover during a subsequent lighter or near-idle workload, is an indication of heap fragmentation. Heap fragmentation can occur when the JVM can free sufficient objects to satisfy memory allocation requests during garbage collection cycles, but does not have the time to compact small free memory areas in the heap into larger contiguous spaces.
Another form of heap fragmentation occurs when small objects (less than 512 bytes) are freed. The objects are freed, but the storage is not recovered, resulting in memory fragmentation until a heap compaction has been run.
Heap fragmentation can be reduced by forcing compactions to occur, but there is a performance penalty for doing this. Use the Java -X command to see the list of memory options.
The Java virtual machine (JVM) uses a parallel garbage collector to fully exploit a symmetric multiprocessing (SMP) system during most garbage collection cycles. HotSpot-based JVMs, in contrast, use a single-threaded garbage collector.
Examining Java garbage collection gives insight to how the application is utilizing memory. Garbage collection is a Java strength. By taking the burden of memory management away from the application writer, Java applications are more robust than applications written in languages that do not provide garbage collection. This robustness applies as long as the application is not abusing objects. Garbage collection normally consumes from 5% to 20% of total execution time of a properly functioning application. If not managed, garbage collection is one of the biggest bottlenecks for an application.
Monitoring garbage collection during the execution of a fixed workload enables you to gain insight into whether the application is over-utilizing objects. Garbage collection monitoring can even detect the presence of memory leaks.
You can use JVM settings to configure the type and behavior of garbage collection. When the JVM cannot allocate an object from the current heap because of lack of contiguous space, the garbage collector is invoked to reclaim memory from Java objects that are no longer being used. Each JVM vendor provides unique garbage collector policies and tuning parameters.
You can use the Verbose garbage collection setting in the administrative console to enable garbage collection monitoring. The output from this setting includes class garbage collection statistics. The format of the generated report is not standardized between different JVMs or release levels.
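You can also enable this output by adding the standard -verbose:gc option to the Generic JVM arguments field on the Java virtual machine panel:

-verbose:gc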
To adjust your JVM garbage collection settings, add the appropriate -X option, such as an -Xgcpolicy setting, to the Generic JVM arguments field on the Java virtual machine panel of the administrative console.
For more information about the -X options for the different JVM garbage collectors, refer to the following:
Use the Java -X option to view a list of memory options.
Default: optthruput
Recommended: optthruput
Usage: -Xgcpolicy:optthruput sets the garbage collection policy to optthruput.
Setting gcpolicy to optthruput disables concurrent mark. You should get the best throughput results when you use the optthruput policy, unless you are experiencing erratic application response times, which is an indication that you might have pause time problems.
Setting gcpolicy to optavgpause enables concurrent mark with its default values. This setting alleviates erratic application response times that normal garbage collection causes. However, this option might decrease overall throughput.
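For example, to switch to concurrent mark, specify the following option in the Generic JVM arguments field:

-Xgcpolicy:optavgpause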
By default, the JVM unloads a class from memory whenever there are no live instances of that class left. The overhead of loading and unloading the same class multiple times can decrease performance. You can use the -Xnoclassgc option to disable class garbage collection.
When this option is used, if you have to redeploy an application, you should always restart the application server to clear the classes and static data from the previous version of the application.
Default: Class garbage collection is enabled.
Recommended: Do not disable class garbage collection.
Usage: -Xnoclassgc disables class garbage collection.
The com.ibm.cacheLocalHost custom property instructs the JVM to cache the IP address that is returned for localhost, which can reduce server startup time on systems whose IP address does not change between restarts.
Default: com.ibm.cacheLocalHost = false
Recommended: com.ibm.cacheLocalHost = true (see description)
Usage: -Dcom.ibm.cacheLocalHost=true enables caching of the localhost address.
The share classes option of the IBM Java 2 Runtime Environment (J2RE) Version 1.5.0 lets you share classes in a cache. Sharing classes in a cache can improve startup time and reduce memory footprint. Processes, such as application servers, node agents, and deployment managers, can use the share classes option.
If you use this option, you should clear the cache when the process is not in use. To clear the cache, either call the app_server_root/bin/clearClassCache utility (clearClassCache.bat on Windows or clearClassCache.sh on UNIX-based systems), or stop the process and then restart it.
If you need to disable the share classes option for a process, specify the generic JVM argument -Xshareclasses:none for that process:
Default: The share classes in a cache option is enabled.
Recommended: Leave the share classes in a cache option enabled.
Usage: -Xshareclasses:none disables the share classes in a cache option.
If you are using the wsadmin command wsadmin -conntype none in local mode, you must set the config_consistency_check property to false before issuing this command.
If you use DB2, consider disabling SafepointPolling technology in the HP virtual machine for Java for HP-UX. Developed to ensure safepoints for Java threads, SafepointPolling technology generates a signal that can interfere with the signal between WebSphere Application Server and a DB2 database. When this interference occurs, database deadlocks often result. Prevent the interference by starting the JVM with the -XX:-SafepointPolling option, which disables SafepointPolling during runtime.