JDK-4857305 : (bf) Heavy allocation of direct buffers leads to unrecoverable OutOfMemoryError
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.nio
  • Affected Version: 1.4.2
  • Priority: P3
  • Status: Closed
  • Resolution: Duplicate
  • OS: linux
  • CPU: x86
  • Submitted: 2003-05-01
  • Updated: 2003-05-02
  • Resolved: 2003-05-02
Related Reports
Duplicate :  
Description

Name: nt126004			Date: 05/01/2003


FULL PRODUCT VERSION :
java version "1.4.1_02"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1_02-b06)
Java HotSpot(TM) Client VM (build 1.4.1_02-b06, mixed mode)

(the corresponding Server VM also exhibits this problem)

FULL OS VERSION :
Debian GNU/Linux 'unstable': Linux flood 2.2.19 #1 Mon Aug 13 17:25:01 NZST 2001 i686 unknown unknown GNU/Linux

Solaris 9: SunOS v8 5.9 Generic_112233-04 sun4u sparc SUNW,Sun-Fire-880


A DESCRIPTION OF THE PROBLEM :
Allocating direct NIO buffers via java.nio.ByteBuffer.allocateDirect() can eventually throw OutOfMemoryError while there are collectable (unreferenced) direct ByteBuffers on the heap. Subsequent calls to System.gc() cause the VM to exit with an OutOfMemoryError within GC itself (apparently not as a catchable Java exception). There is no way to avoid or deal with this subsequent OOME -- once allocateDirect() has failed, GC appears permanently wedged. Other non-GC failures have been seen as well (for example, a CompileThread failing to allocate memory).

If no System.gc() call is made, the VM is likely to die soon with an OOME anyway (comment out the System.gc() call in the attached testcase to see this). And, obviously, at this point you still can't allocate a new direct ByteBuffer.

This makes it unsafe to use direct buffers that are allocated on the fly: depending on the exact heap settings and available process VM size, calling allocateDirect() may throw OOME, and once it has, there's nothing you can do to recover from it.

As I understand it, this is somewhat like running out of file descriptors when collectable open file objects still exist (in that there is a heap object that is keeping resources outside the heap alive, so allocation of the external resource might fail if there is no feedback to GC on allocation). With files, you can (and should) close the files when you are done with them, but there is no equivalent way to force a direct ByteBuffer to reclaim the non-heap resources it is keeping alive when you are done with it. It's made worse by the fact that the VM needs non-heap memory available to keep running.
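For illustration, the natural application-level response to the OOME (catch it, prompt the collector, and retry) might look like the hypothetical helper sketched below; the class and method names are mine, not part of any API. As this report demonstrates, even that is not reliable: once non-heap memory is exhausted, System.gc() can fail unrecoverably and System.runFinalization() cannot create the thread it needs.

import java.nio.ByteBuffer;

public class GuardedDirectAlloc {
    // Hypothetical helper: retry allocateDirect() after prompting collection of
    // unreferenced direct buffers, whose native memory is only released once
    // the corresponding heap objects are reclaimed.
    static ByteBuffer allocateDirectWithRetry(int capacity) {
        try {
            return ByteBuffer.allocateDirect(capacity);
        } catch (OutOfMemoryError e) {
            // As described in this report, these calls may themselves fail
            // unrecoverably once non-heap memory is exhausted.
            System.gc();
            System.runFinalization();
            return ByteBuffer.allocateDirect(capacity);   // may throw again
        }
    }
}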


STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
1. Compile the attached test case.
2. Set an appropriate process VM size limit:
    $ ulimit -v 256000
3. Run the test case:
    $ java -verbose:gc -Xmx16m -Xms16m DirectBufferGCKiller2
4. Wait for the VM to bomb out with an OutOfMemoryError.

You may need to tweak the VM size vs. heap size for a particular system to reproduce this; it is most reproducible with a large heap size (and therefore young generation size), such that the non-heap space used by allocateDirect() is exhausted faster than the in-heap ByteBuffers are reclaimed.


EXPECTED VERSUS ACTUAL BEHAVIOR :
Expected result: the test case should run indefinitely allocating new buffers while old buffers are collected by GC.
Actual result: the VM rapidly dies with OutOfMemoryErrors while doing a full GC.
This is with 1.4.1:

$ java -verbose:gc -Xmx16m -Xms16m DirectBufferGCKiller2
Allocated 0
Allocated 10000
[GC 2047K->1940K(16320K), 0.0513334 secs]
java.lang.OutOfMemoryError
        at sun.misc.Unsafe.allocateMemory(Native Method)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:57)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:283)
        at DirectBufferGCKiller2.main(DirectBufferGCKiller2.java:11)
[Full GCException java.lang.OutOfMemoryError: requested 16000 bytes

(Note that there are two exceptions above: the first is the in-Java caught-and-printed exception from allocateDirect(); the second is caused by System.gc() and appears to be unrecoverable.)

I've seen a similar OOME from a CompileThread with a similar testcase (from memory, one that ignored OOMEs from allocateDirect() and just continued).


ERROR MESSAGES/STACK TRACES THAT OCCUR :
I can also reproduce it with the 1.4.2 beta on a Solaris 8 box (it produces a
slightly more useful error message than 1.4.1_02 does):

bash-2.03$ ulimit -v 256000
bash-2.03$ export JAVA_HOME=/export/home/oliver/j2re1.4.2     
bash-2.03$ $JAVA_HOME/bin/java -verbose:gc -Xmx16m -Xms16m DirectBufferGCKiller2
Allocated 0
Allocated 10000
java.lang.OutOfMemoryError
        at sun.misc.Unsafe.allocateMemory(Native Method)
        at java.nio.DirectByteBuffer.<init>(Unknown Source)
        at java.nio.ByteBuffer.allocateDirect(Unknown Source)
        at DirectBufferGCKiller2.main(DirectBufferGCKiller2.java:11)
[Full GC
Exception java.lang.OutOfMemoryError: requested 16000 bytes for GrET* in
/export1/jdk/jdk1.4.2/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space?
bash-2.03$ $JAVA_HOME/bin/java -version
java version "1.4.2-beta"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2-beta-b19)
Java HotSpot(TM) Client VM (build 1.4.2-beta-b19, mixed mode)
bash-2.03$ uname -a
SunOS n1 5.8 Generic_108528-14 sun4u sparc SUNW,Ultra-250
bash-2.03$ /usr/sbin/psrinfo -v
Status of processor 0 as of: 04/30/03 12:43:37
  Processor has been on-line since 04/28/03 12:59:03.
  The sparcv9 processor operates at 296 MHz,
        and has a sparcv9 floating point processor.
Status of processor 1 as of: 04/30/03 12:43:37
  Processor has been on-line since 04/28/03 12:59:05.
  The sparcv9 processor operates at 296 MHz,
        and has a sparcv9 floating point processor.

REPRODUCIBILITY :
This bug can always be reproduced.

---------- BEGIN SOURCE ----------
import java.nio.ByteBuffer;

public class DirectBufferGCKiller2 {
    public static void main(String[] args) {
        int count = 0;
        while (true) {
            if (count % 10000 == 0)
                System.err.println("Allocated " + count);
            
            try {
                ByteBuffer discarded = ByteBuffer.allocateDirect(1024);
            } catch (OutOfMemoryError e) {
                e.printStackTrace();
                System.gc();
            }

            ++count;
        }
    }
}

---------- END SOURCE ----------

CUSTOMER SUBMITTED WORKAROUND :
1. Don't use direct ByteBuffers; or
2. Preallocate a static pool of direct ByteBuffers at startup, and pray you never leak them or have to allocate more (a sketch of such a pool follows below).
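The second workaround might look roughly like the following sketch; the DirectBufferPool class and its methods are hypothetical, written 1.4-style without generics. All direct memory is claimed once at startup and buffers are recycled, so allocateDirect() is never called after initialization. The drawbacks are exactly those noted above: a buffer that is never released is lost to the pool, and the pool cannot grow.

import java.nio.ByteBuffer;
import java.util.LinkedList;

public class DirectBufferPool {
    private final LinkedList free = new LinkedList();

    // Claim all direct memory up front, while allocation can still succeed.
    public DirectBufferPool(int buffers, int capacity) {
        for (int i = 0; i < buffers; i++)
            free.add(ByteBuffer.allocateDirect(capacity));
    }

    public synchronized ByteBuffer acquire() {
        if (free.isEmpty())
            throw new IllegalStateException("pool exhausted");
        ByteBuffer b = (ByteBuffer) free.removeFirst();
        b.clear();   // reset position/limit before reuse
        return b;
    }

    public synchronized void release(ByteBuffer b) {
        free.addLast(b);   // buffers must be returned, or the pool "leaks"
    }
}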

Forcing finalization does not seem to work: System.runFinalization() needs to create a native thread, and at that point we are completely out of non-heap memory, so thread creation fails.
(Review ID: 183775) 
======================================================================