JDK-4469299 : (bf) Excessive native memory growth with NIO due to finalization delay
  • Type: Bug
  • Status: Open
  • Resolution: Unresolved
  • Component: core-libs
  • Sub-Component: java.nio
  • Priority: P3
  • Affected Version: 5.0,5.0u6,6,6u10
  • OS: generic,linux,solaris_8,solaris_10
  • CPU: generic,x86,sparc
  • Submit Date: 2001-06-13
  • Updated Date: 2017-08-11
The java.nio package added the DirectByteBuffer and MappedByteBuffer classes,
which allocate memory on the native C heap and via a platform-specific
memory-mapping function (mmap()). The current implementation provides no
API to allow Java applications to deallocate these memory regions, but
instead relies upon finalizers to perform the deallocation. In some cases,
particularly in applications with large heaps and light to moderate
loads where collections happen infrequently, the Java process can consume
memory to the point of process address-space exhaustion.

This problem is not unique to nio, as any native resources that rely upon
finalizers for cleanup can also exhibit similar issues. However, nio exposes
this issue in new ways.

This problem is observable by running an nio file-copy program that uses
a small (8KB) MappedByteBuffer to repeatedly copy a large (~7MB) file within the
same VM with a 50MB heap. 100 iterations cause the process size to grow
from 84MB to 564MB, with pmap reporting a large number (>60K) of 8KB
mapped pages. After the single minor GC, finalization kicks in and reduces
the number of mapped pages.

   java -server -Xms50m -Xmx50m CopyFile 7 100 <src_file> <dest_file> 8192
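The CopyFile source is not attached to this report, but the mapped-buffer variant it describes can be sketched roughly as follows (the class shape, argument layout, and window-per-iteration loop are assumptions; only the chunked mapping behavior is from the report). Each pass of the inner loop creates a fresh MappedByteBuffer whose native mapping is released only when the buffer object is eventually collected and finalized:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical reconstruction of the mapped-buffer CopyFile reproducer.
public class CopyFile {
    static void copy(Path src, Path dst, int windowSize) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                     StandardOpenOption.TRUNCATE_EXISTING,
                     StandardOpenOption.WRITE)) {
            long size = in.size();
            for (long pos = 0; pos < size; pos += windowSize) {
                long len = Math.min(windowSize, size - pos);
                // A new native mapping per iteration; nothing unmaps it
                // explicitly -- reclamation waits for GC + finalization.
                MappedByteBuffer window =
                        in.map(FileChannel.MapMode.READ_ONLY, pos, len);
                long written = 0;
                while (window.hasRemaining()) {
                    written += out.write(window, pos + written);
                }
            }
        }
    }
}
```

With a large heap and a small live set, the loop can accumulate tens of thousands of 8KB mappings before any collection occurs, matching the pmap observations above.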

This problem is also reproducible with the same copy program using
heap-allocated ByteBuffer objects. With the same heap, buffer, and file
sizes, the native C heap grows to nearly 2GB before a minor GC event
occurs. On Solaris, the C heap size does not retract. Note that the
collection takes 9.6s and reclaims 4.4KB of Java heap space. The large
native heap is the result of nio allocating a new DirectByteBuffer for
each read and write call that uses a heap-allocated ByteBuffer, creating
many short-lived DirectByteBuffer objects.

   java -server -Xms50m -Xmx50m CopyFile 3 100 <src_file> <dst_file> 8192
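The non-direct variant presumably looks something like the sketch below (again a reconstruction; the original source is not in the report). Because buf is a heap buffer, each read() and write() call forces the channel implementation to stage the data through a temporary native buffer, which is where the hidden DirectByteBuffer allocations come from:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical reconstruction of the heap-buffer CopyFile reproducer.
public class HeapBufferCopy {
    static void copy(Path src, Path dst, int bufSize) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                     StandardOpenOption.TRUNCATE_EXISTING,
                     StandardOpenOption.WRITE)) {
            // Heap buffer, not allocateDirect(): every read/write below
            // is copied through native memory by the channel internally.
            ByteBuffer buf = ByteBuffer.allocate(bufSize);
            while (in.read(buf) != -1) {
                buf.flip();
                while (buf.hasRemaining()) {
                    out.write(buf);
                }
                buf.clear();
            }
        }
    }
}
```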

In a more realistic scenario, the nio HttpServer also exhibits native
C heap growth when subjected to moderate load with a 96MB eden and 256MB
heap. While ramping up load to 100 requests/s, the process size grows to
437MB (248MB RSS, 142MB native C heap) before the first minor GC
(5 minutes after the start of load). Growth eventually resumes as load
ramps up further.

Although these examples may seem contrived, they do expose some real
problems. Some communication between the native allocation mechanisms and
the Java garbage collector is needed to force a GC of the Java heap when
native resource consumption exceeds some threshold, possibly set by a
heuristic or a tunable. Otherwise, java.nio will need to provide
deallocation APIs so native resources can be freed long before finalization.

EVALUATION The -XX:MaxDirectMemorySize=<size> option can be used to limit the amount of direct memory used. An attempt to allocate direct memory that would cause this limit to be exceeded causes a full GC so as to provoke reference processing and release of unreferenced buffers. There are currently no plans to introduce an explicit API to release buffers as that would not be safe to use in a multi-threaded environment.
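Direct-memory usage under such a limit can be observed with the platform BufferPoolMXBean (available since Java 7; the class and method names below are illustrative, not from this report). Run with, e.g., -XX:MaxDirectMemorySize=64m; allocations that would push usage past the limit first provoke a full GC, and fail with OutOfMemoryError only if reference processing frees too little:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

// Illustrative monitor for direct-buffer memory, useful when tuning
// -XX:MaxDirectMemorySize.
public class DirectLimitDemo {
    // Find the "direct" buffer pool among the platform buffer pools.
    static BufferPoolMXBean directPool() {
        for (BufferPoolMXBean b :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if (b.getName().equals("direct")) {
                return b;
            }
        }
        throw new IllegalStateException("no direct buffer pool found");
    }

    public static void main(String[] args) {
        // Counts against MaxDirectMemorySize as soon as it is allocated.
        ByteBuffer buf = ByteBuffer.allocateDirect(8 * 1024 * 1024);
        BufferPoolMXBean direct = directPool();
        System.out.println("direct buffers: " + direct.getCount()
                + ", bytes used: " + direct.getMemoryUsed());
        // Keep buf reachable so the pool still reports it above.
        if (buf.capacity() == 0) throw new AssertionError();
    }
}
```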

CONVERTED DATA BugTraq+ Release Management Values COMMIT TO FIX: mustang

PUBLIC COMMENTS java.nio relies on finalization to clean up DirectByteBuffer and MappedByteBuffer objects. This can lead to excessive native memory utilization in systems that allocate native memory resources at a rate higher than the Java garbage creation rate. This condition results in infrequent GC events and delayed finalization. This problem is not necessarily unique to java.nio.

WORK AROUND Known workarounds:
- Insert occasional explicit System.gc() invocations to ensure that direct buffers are reclaimed.
- Reduce the size of the young generation to force more frequent GCs.
- Explicitly pool direct buffers at the application level.
-- ###@###.### 2002/4/12
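The pooling workaround can be sketched as a trivial application-level pool that reuses fixed-size direct buffers instead of allocating a fresh one per operation, keeping native memory bounded regardless of GC timing (the class and method names are illustrative, not from this report):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;

// Minimal sketch of an application-level direct-buffer pool.
public class DirectBufferPool {
    private final ArrayDeque<ByteBuffer> free = new ArrayDeque<>();
    private final int bufferSize;

    public DirectBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Reuse a returned buffer if one is available; otherwise allocate.
    public synchronized ByteBuffer acquire() {
        ByteBuffer buf = free.poll();
        return (buf != null) ? buf : ByteBuffer.allocateDirect(bufferSize);
    }

    // Return a buffer for reuse instead of letting it await finalization.
    public synchronized void release(ByteBuffer buf) {
        buf.clear(); // reset position/limit for the next user
        free.push(buf);
    }
}
```

Callers must discipline themselves to release() every buffer they acquire(); a buffer that escapes the pool falls back to the finalization path this bug describes.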

EVALUATION A few comments on the three examples in the description:

(1) The file-copy program that maps many 8KB memory regions is extremely unrealistic. It does, however, point out that any program that makes heavy use of mapped files will suffer if full GCs are sufficiently infrequent. This is probably the most common type of problem that developers will see until this bug is fixed, though it's interesting to note that we have yet to receive an external report of this.

(2) The file-copy program that uses non-direct buffers and grows to 2GB in Merlin beta does not have this problem in FCS due to the fix for 4462815.

(3) The HTTP server uses direct buffers in a way that contradicts the advice we've been giving developers and which is included in the spec:

    A direct byte buffer may be created by invoking the {@link
    #allocateDirect(int) allocateDirect} factory method of this class.
    The buffers returned by this method typically have somewhat higher
    allocation and deallocation costs than non-direct buffers. The
    contents of direct buffers may reside outside of the normal
    garbage-collected heap, and so their impact upon the memory
    footprint of an application might not be obvious. It is therefore
    recommended that direct buffers be allocated primarily for large,
    long-lived buffers that are subject to the underlying system's
    native I/O operations. In general it is best to allocate direct
    buffers only when they yield a measurable gain in program
    performance.

The right fix for this bug involves making the GC code aware of NIO's memory usage for direct and mapped byte buffers. This work is currently being planned for Tiger (1.5). -- ###@###.### 2002/4/12

In Mantis (1.4.2) we've rewritten the buffer-reclamation code to use phantom references rather than rely upon finalization (4824417). That change may ameliorate the problem described here to some degree, but it will not eliminate it in general. -- ###@###.### 2003/3/7