Out Of Memory (OOM): An Out of Memory error occurs when memory is exhausted, either in the java heap or in native memory. In the JVM, an OOM error is thrown when the JVM cannot allocate an object because the java heap is full and the garbage collector cannot make any more heap memory available.
Memory Leak: A memory leak occurs when an application uses memory and does not release it once it is finished with it. A memory leak can occur in either the java heap or native memory, and either will eventually cause an out of memory situation.


Problem Troubleshooting

Please note that not all of the following steps need to be performed; some issues can be solved by following only a few of them.

Troubleshooting Steps

Java heap, Native memory and Process size

Java heap: This is the memory that the JVM uses to allocate java objects. The maximum value of java heap memory is specified using the -Xmx flag in the java command line. If the maximum heap size is not specified, then the limit is decided by the JVM considering factors like the amount of physical memory in the machine and the amount of free memory available at that moment. It is always recommended to specify the max java heap value.
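The heap limits actually in effect can be checked from inside the application through the standard Runtime API; the following is a minimal sketch (the class name and output format are illustrative):

```java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeap   = rt.maxMemory();   // upper bound set by -Xmx (or the JVM default)
        long committed = rt.totalMemory(); // heap currently committed by the JVM
        long free      = rt.freeMemory();  // unused portion of the committed heap
        long used      = committed - free; // heap currently occupied by objects
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                maxHeap / (1024 * 1024), committed / (1024 * 1024), used / (1024 * 1024));
    }
}
```

Comparing maxMemory() against the value you intended to pass with -Xmx is a quick way to confirm that the flag actually reached the JVM.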
Native memory: This is the memory that the JVM uses for its own internal operations. The amount of native memory used by the JVM depends on the amount of code generated, the threads created, the memory used during GC to keep java object information, and the temporary space used during code generation, optimization, etc.
If there is a third party native module, it could also use the native memory. For example, native JDBC drivers allocate native memory.
The max amount of native memory is limited by the virtual process size limitation on any given OS and the amount of memory already committed for the java heap with the -Xmx flag. For example, if the application can allocate a total of 3 GB and the max java heap is 1 GB, then the max possible native memory is approximately 2 GB.
Process size: Process size will be the sum of the java heap, native memory and the memory occupied by the loaded executables and libraries. On 32-bit operating systems, the virtual address space of a process can go up to 4 GB. Out of this 4 GB, the OS kernel reserves some part for itself (typically 1 - 2 GB). The remaining is available for the application.
Windows: Different versions of Windows support different process sizes. See http://msdn.microsoft.com/en-us/library/windows/desktop/aa366914%28v=VS.85%29.aspx for more details.
RedHat Linux: Different kernels are available on RedHat Linux, and these different kernels support different process sizes. See https://blogs.oracle.com/gverma/entry/redhat_linux_kernels_and_proce_1 for more details.
For other operating systems, please refer to the OS documentation for your configuration.
For more information on configuring all of these for WebLogic Server, please see Tuning Java Virtual Machines (JVMs).



Difference between process address space and physical memory

Each process gets its own address space. In 32-bit operating systems, this address space ranges from 0 to 4 GB. This is independent of the available RAM or swap space in the machine. Since JVM GC performance requires that the bulk of the java heap lies in RAM, adding RAM helps more than increasing the swap size. For example, with a Java application that uses 8 GB of java heap, adding 4 GB of RAM plus 12 GB of swap will not help as much as adding the equivalent amount of pure RAM.
The memory address within a process is virtual. The kernel maps this virtual address to a physical address, which points to a location somewhere in physical memory. At any given time, the sum of all the virtual memory committed by the running processes on a machine cannot exceed the total of the physical memory and swap space available on that machine.

Why does the OOM problem occur and what does the JVM do in this situation?

Out of memory in java heap

The JVM throws a java out of memory (java OOM) error if it is not able to get more memory in the java heap to allocate more java objects. The JVM cannot allocate more java objects if the java heap is full of live objects and it is not able to expand the java heap any further.
In this situation, the JVM lets the application decide what to do after throwing the java.lang.OutOfMemoryError. For example, the application may handle this error and decide to shut itself down in a safe way, or decide to keep running and ignore the error. If the application doesn't handle the error, then the thread that threw it exits (you will not see this thread if you take a java thread dump).
In the case of WebLogic Server, this error is handled if it is thrown by an execute thread and the error is logged. If this error is being thrown continuously, then the core health monitor thread shuts down WebLogic Server.
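The application-level handling described above can be illustrated with a toy that deliberately exhausts the java heap and then recovers by dropping its references. This is only a sketch (run it with a small -Xmx, such as -Xmx64m, so it finishes quickly); whether recovery actually succeeds in a real application depends on how much memory the handler itself needs:

```java
import java.util.ArrayList;
import java.util.List;

public class OomHandling {
    /** Allocates until the heap is exhausted, then recovers; returns true if OOM was caught. */
    static boolean exhaustAndRecover() {
        List<byte[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[8 * 1024 * 1024]); // 8 MB chunks until the heap fills up
            }
        } catch (OutOfMemoryError e) {
            hog.clear(); // drop the references so the GC can reclaim the heap
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(exhaustAndRecover()
                ? "caught OutOfMemoryError, heap released"
                : "heap never filled");
    }
}
```

Catching OutOfMemoryError like this is only safe when, as here, the handler can free a known large block of memory; in general the heap state after an OOM is unpredictable.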

Out of memory in native heap

The JVM throws native out of memory (native OOM) if it is not able to get any more native memory. This usually happens when the process reaches the process size limitation on that OS or the machine runs out of RAM and swap space.
When this happens, the JVM handles the native OOM condition, logs a message saying that it ran out of native memory or was unable to acquire memory, and exits. If the JVM or any other loaded module (like libc or a third-party module) doesn't handle this native OOM situation, then the OS sends a SIGABRT signal to the JVM, which makes the JVM exit. Usually, a JVM will generate a core file when it receives a SIGABRT signal.

Steps to debug the problem

Initially, determine whether it is a Java OOM or Native OOM

  • If the stdout/stderr message says that this is a java.lang.OutOfMemoryError referring to the Java heap space, then this is a Java OOM.
  • If the stdout/stderr message says that it failed to acquire memory or says that it is a java.lang.OutOfMemoryError referring to a native method, then this is a Native OOM.
Please note that the above messages go to stdout or stderr and not to the application-specific log files like weblogic.log.

For Java OOM:

  1. Collect and analyze the verbose garbage collection (GC) output.
    1. Enable verbose GC logging. In order for GC activity to be efficiently logged, the following options should be included in the JVM upon start up:
      • For HotSpot: -verbose:gc, -XX:+PrintGCDetails, and -XX:+PrintGCTimeStamps. -Xloggc:<file> can also be specified to redirect the detailed GC statistics to an output file. The overhead of basic GC logging is negligible apart from some disk space consumed by the log file (see Java HotSpot VM Options for more details).
      • For JRockit: -Xverbose:gc,gcpause,memdbg (see JRockit Command-Line Options for more details).
    2. Make sure that the JVM does the following before throwing java OOM
      • Full GC run:
        The JVM should do a full GC in which all the unreachable, phantomly, weakly and softly reachable objects are removed and their space reclaimed. More details on the different levels of object reachability can be found at: http://java.sun.com/docs/books/performance/1st_edition/html/JPAppGC.fm.html , see 'A.4.1 Types of Reference Objects'.

        You can check whether full GC was done before the OOM message. A message like the following is printed when a full GC is done (format varies depending on the JVM: Check JVM help message to understand the format)
        [memory ] 7.160: GC 131072K->130052K (131072K) in 1057.359 ms
        The format of the above output follows (Note: the same format will be used throughout this Pattern):
        [memory ] <start>: GC <before>K-><after>K (<heap>K), <total> ms
        [memory ] <start> - start time of collection (seconds since jvm start)
        [memory ] <before> - memory used by objects before collection (KB)
        [memory ] <after> - memory used by objects after collection (KB)
        [memory ] <heap> - size of heap after collection (KB)
        [memory ] <total> - total time of collection (milliseconds)
        However, there is no way to conclude whether the soft/weak/phantomly reachable objects are removed using the verbose messages. If you suspect that these objects are still around when OOM was thrown, contact the JVM vendor.

        If the garbage collection algorithm is a generational algorithm (gencopy or gencon in case of Jrockit and the default algorithm in case of other JDKs), you will also see verbose output something like this:
        [memory ] 2.414: Nursery GC 31000K->20760K (75776K), 0.469 ms
        The above is the nursery GC (or young GC) cycle which will promote live objects from nursery (or young space) to old space. This cycle is not important for our analysis. More details on generational algorithms can be found in JVM documentation.

        If the GC cycle doesn't happen before java OOM, then it is a JVM bug.
      • Full compaction:
        Make sure that the JVM does proper compaction work and the memory is not fragmented which could prevent large objects being allocated and trigger a java OOM error.

        Java objects need the memory to be contiguous. If the available free memory is fragmented, then the JVM will not be able to allocate a large object, as it may not fit in any of the available free chunks. In this case, the JVM should do a full compaction so that more contiguous free memory can be formed to accommodate large objects.

        Compaction work involves moving of objects (data) from one place to another in the java heap memory and updating the references to those objects to point to the new location. JVMs may not compact all the objects unless there is a need. This is to reduce the pause time of GC cycle.

        We can check whether the java OOM is due to fragmentation by analyzing the verbose gc messages. If you see output similar to the following, where the OOM is being thrown even though there is free java heap available, then it is due to fragmentation.
        [memory ] 8.162: GC 73043K->72989K (131072K) in 12.938 ms
        [memory ] 8.172: GC 72989K->72905K (131072K) in 12.000 ms
        [memory ] 8.182: GC 72905K->72580K (131072K) in 13.509 ms
        java.lang.OutOfMemoryError
        In the above case you can see that the max heap specified was 128MB and the JVM threw OOM when the actual memory usage is only 72580K. The heap usage is only 55%. Therefore, the effect of fragmentation in this case is to throw an OOM even when there is 45% of free heap. This is a JVM bug or limitation. You should contact the JVM vendor.
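The fragmentation check above can be automated with a small parser for the verbose GC line format shown earlier; if heap usage after the last collection before the OOM is well below 100%, fragmentation is the likely cause. The regex below is a sketch and may need adjusting for your JVM's exact log format:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FragmentationCheck {
    // Matches e.g. "[memory ] 8.182: GC 72905K->72580K (131072K) in 13.509 ms"
    private static final Pattern GC_LINE =
            Pattern.compile("GC\\s+(\\d+)K->(\\d+)K\\s+\\((\\d+)K\\)");

    /** Returns heap usage after collection as a fraction of total heap, or -1 if unparsable. */
    static double usageAfterGc(String line) {
        Matcher m = GC_LINE.matcher(line);
        if (!m.find()) return -1;
        double after = Double.parseDouble(m.group(2));
        double heap  = Double.parseDouble(m.group(3));
        return after / heap;
    }

    public static void main(String[] args) {
        String last = "[memory ] 8.182: GC 72905K->72580K (131072K) in 13.509 ms";
        double usage = usageAfterGc(last);
        if (usage >= 0 && usage < 0.9) {
            System.out.printf("OOM with only %.0f%% of heap used: suspect fragmentation%n",
                    usage * 100);
        }
    }
}
```

For the sample line above, usageAfterGc returns roughly 0.55, matching the 55% figure in the example.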
  2. If the JVM does its work properly (all the things mentioned in the above step), then the java OOM could be an application issue. The application might be leaking java memory constantly, which can cause this problem, or the application may simply have more live objects and need more java heap memory. The following things can be checked in the application:
    • Caching in the application - If the application caches java objects in memory, then we should make sure that this cache is not growing constantly. There should be a limit for the number of objects in the cache. We can try reducing this limit to see if it reduces the java heap usage.

      Java soft references can also be used for data caching as softly reachable objects are guaranteed to be removed when the JVM runs out of java heap.
    • Long living objects - If there are long living objects in the application, then we can try reducing the life of the objects if possible. For example, tuning HTTP session timeout will help in reclaiming the idle session objects faster.
    • Memory leaks: One example of a memory leak is when using database connection pools in an application server. When using connection pools, the JDBC statement and result set objects must be explicitly closed in a finally block. This is because calling close() on a connection object from a pool simply returns the connection to the pool for re-use; it doesn't actually close the connection or the associated statement/result set objects.

      It is recommended to follow such coding practices to avoid memory leaks in your application.
    • Increase the java heap - We can also try increasing the java heap if possible to see whether that solves the problem.
    • Workaround - As a temporary workaround, the application may be gracefully re-started when the java heap usage goes above 90%. When following this workaround, the java max heap can be set as high as possible so that the application takes more time to fill the java heap. The java heap usage can be monitored by adding the '-verbosegc' flag (see above) to the java command line, which sends the GC/heap usage info to stdout or stderr.
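The soft-reference caching idea from the first bullet can be sketched as follows. Softly reachable objects are guaranteed to be cleared before the JVM throws a java OOM, so a cache built on SoftReference shrinks automatically under memory pressure. The SoftCache class and its API here are illustrative, not a standard library:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

/** Illustrative cache whose entries the GC may discard under memory pressure. */
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    /** Returns the cached value, or null if it was never cached or the GC cleared it. */
    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get();
    }

    public static void main(String[] args) {
        SoftCache<String, byte[]> cache = new SoftCache<>();
        cache.put("page1", new byte[1024]);
        // Under heap pressure the GC clears soft references, so callers must
        // always be prepared to recompute a value when get() returns null.
        byte[] page = cache.get("page1");
        System.out.println(page != null ? "hit" : "miss, recompute");
    }
}
```

A production version would also register a ReferenceQueue to purge map entries whose referents the GC has cleared; otherwise the map slowly fills with stale keys.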
  3. If none of the above suggestions is applicable to the application, then we need to use a JVMPI (JVM Profiler Interface) based profiler to find out which objects are occupying the java heap. Profilers also give details on the places in the java code from which these objects are created. This document doesn't cover the details of each profiler; refer to the profiler documentation to understand how to set up and start the application with these profilers. In general, JVMPI-based profilers have high overhead and drastically reduce the performance of the application, so it is not advisable to use them in production environments. A number of open source profilers can be explored from this site.

For Native OOM Problem

  1. Collect the following information:
    1. Enable verbose GC logging (see above) to monitor the java heap usage. This will help to understand the java memory requirement for this application.

      It should be noted that independent of the actual java heap usage by the application, the amount of max heap specified (using -Xmx flag in the java command line) is reserved at the JVM startup and this reserved memory is not available for any other purpose.

      In the case of JRockit, use -verbose instead of -verbosegc as this gives codegen information in addition to GC information.
    2. Record the process virtual memory size periodically from the time the application was started until the JVM runs out of native memory. This will help to understand whether the process really hits the size limitation on that OS.

      In case of Windows, use the following procedure to monitor the virtual process size:
      1. In the Start -> Run... dialog, enter "perfmon" and click OK.
      2. In the "Performance" window that pops up, click on the '+' button (above the graph).
      3. Select the following options in the resulting Add Counters dialog:
        • Performance object: Process (not the default Processor)
        • Select counter from list: Virtual Bytes
        • Select instances from list: Select the JVM (java) instance
      4. Click "Add", then "Close"

      In case of Unix or Linux, for a given PID, the virtual memory size can be found using this command: ps -p <PID> -o vsz.

      In the Linux case, each java thread within a single JVM instance is shown as a separate process. It is enough to take the PID of the root java process. The root java process can be found using the --forest option of the ps command. For example, ps -lU <user> --forest will print an ASCII tree of all the processes started by the specified user, from which you can find the root java process.
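In addition to perfmon or ps, on HotSpot-based JVMs the committed virtual size can be sampled from inside the process via the com.sun.management extension of OperatingSystemMXBean, which is convenient for periodic logging. This is a sketch; the bean is JVM-specific, and the method may return -1 where the value is unavailable:

```java
import java.lang.management.ManagementFactory;

public class VirtualSizeSampler {
    /** Committed virtual memory of this process in bytes, or -1 if unavailable. */
    static long committedVirtualBytes() {
        // The com.sun.management subinterface adds process-level counters
        // that the standard java.lang.management bean does not expose.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        return os.getCommittedVirtualMemorySize();
    }

    public static void main(String[] args) {
        System.out.printf("virtual size: %d MB%n",
                committedVirtualBytes() / (1024 * 1024));
    }
}
```

Logging this value periodically from a background thread gives the same growth curve as the perfmon "Virtual Bytes" counter or ps vsz described above.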
  2. Memory availability in the machine
    If the machine doesn't have enough RAM and swap space, then the OS will not be able to give more memory to this process, and that could also result in out of memory. Make sure that the sum of RAM and swap space in the disk is sufficient to cater to all the running processes in that machine.
  3. Tuning the java heap
    If the java heap usage is well within the max heap, then reducing the java max heap will give more native memory to the JVM. This is not a solution but a workaround that can be tried. Since the OS limits the process size, we need to strike a balance between the java heap and the native heap.
  4. Third party native modules or JNI code in the application
    Check whether you are using any third-party native module like database drivers. These native modules could also allocate native memory and the leak may be from these modules. In order to narrow down the problem, you should attempt to reproduce the problem without these third-party modules. For example, you can use pure java drivers instead of native database drivers.

    Check whether your application uses some JNI code. This could also be causing native memory leak and you can try to run the application without the JNI code if possible.
  5. If the source of the native memory leak cannot be found after the above steps, then you need to work with the JVM vendor to get a special build which can trace the native memory allocation calls and give more information about the leak.

HP JVM specific tools/tips

The following URL gives some tools and tips specific to OOM situations with HP JVM: HP JVM tools/tips

JRockit-specific features

JRockit supports JRA (Java Runtime Analyzer) recording. This is useful for gathering information about the application at JVM run time, for example the number of GCs running, the number of soft/weak/phantom references, hot methods, etc. It is useful to make a recording for a few minutes and analyze the data if the JVM has performance or hang problems. More details on this can be found in the JRockit docs: http://download.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/webdocs/index.html
To troubleshoot memory leak problems, we recommend taking a flight recorder recording and checking the heap usage after each old collection. Continuously rising memory usage can indicate a memory leak. For information about creating and interpreting a Flight Recorder recording, see the Oracle JRockit Flight Recorder Run Time Guide.

Popular JVM heap analyzing tools

Java VisualVM

Java VisualVM is a tool that provides a visual interface for viewing detailed information about Java applications while they are running on a Java Virtual Machine (JVM), and for troubleshooting and profiling these applications. Various optional tools, including Java VisualVM, are provided with Sun's distribution of the Java Development Kit (JDK) for retrieving different types of data about running JVM software instances. For example, most of the previously standalone tools JConsole, jstat, jinfo, jstack, and jmap are part of Java VisualVM. Java VisualVM federates these tools to obtain data from the JVM software, then re-organizes and presents the information graphically, to enable you to view different data about multiple Java applications uniformly, whether they are running locally or on remote machines. Furthermore, developers can extend Java VisualVM to add new functionality by creating and posting plug-ins to the tool's built-in update center.
Java VisualVM can be used by Java application developers to troubleshoot applications and to monitor and improve the applications' performance. Java VisualVM can allow developers to generate and analyse heap dumps, track down memory leaks, browse the platform's MBeans and perform operations on those MBeans, perform and monitor garbage collection, and perform lightweight memory and CPU profiling.
Java VisualVM was first bundled with the Java Platform, Standard Edition (Java SE) in JDK version 6, update 7.
See Java VisualVM for more information.
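A heap dump suitable for loading into VisualVM (or MAT, below) can also be triggered programmatically through HotSpot's diagnostic MBean. The following sketch writes a dump of live objects to the temp directory (the bean is HotSpot-specific, and the file naming is just an example):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    /** Writes an .hprof dump of live objects and returns its path. */
    static String dump() throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // dumpHeap refuses to overwrite an existing file, so use a unique name each run.
        String path = System.getProperty("java.io.tmpdir")
                + "/heap-" + System.nanoTime() + ".hprof";
        bean.dumpHeap(path, true); // true = live objects only (forces a full GC first)
        return path;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("heap dump written to " + dump());
    }
}
```

The same dump can of course be taken externally with jmap or from the VisualVM UI; the programmatic route is mainly useful for capturing the heap automatically when the application detects high usage.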

JRockit Memory Leak Detector

The JRockit Memory Leak Detector is a tool for discovering memory leaks in a Java application and finding their cause. The JRockit Memory Leak Detector's trend analyzer discovers slow leaks; it shows detailed heap statistics (including the types and instances referring to leaking objects) and allocation sites, and it provides a quick drill-down to the cause of the memory leak. The Memory Leak Detector uses advanced graphical presentation techniques to make it easier to navigate and understand the sometimes complex information. See Getting Started with Memory Leak Detection for more details.

Eclipse Memory Analysis Tool (MAT)

The stand-alone Memory Analysis Tool (MAT) is based on Eclipse RCP. It is useful if you do not want to install a full-fledged IDE on the system where you are running the heap analysis. More details can be found on the MAT home page at http://www.eclipse.org/mat/downloads.php.


