JDK-6676016 : ParallelOldGC leaks memory

Details
Type:
Bug
Submit Date:
2008-03-17
Status:
Resolved
Updated Date:
2010-05-09
Project Name:
JDK
Resolved Date:
2009-01-07
Component:
hotspot
OS:
solaris_10
Sub-Component:
gc
CPU:
sparc
Priority:
P3
Resolution:
Fixed
Affected Versions:
5.0u12
Fixed Versions:
5.0u17-rev (b09)

Description
Using pmap, the customer observed that the C heap was growing slowly.

We had them run with libumem for memory-leak detection, which located the leak.
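For reference, the usual way to set up this kind of diagnosis on Solaris is to preload libumem when starting the process and later ask mdb for a leak report. The commands below are a generic sketch, not the customer's exact procedure; the target command and pid are placeholders.

```shell
# Generic libumem leak-hunting setup on Solaris (illustrative only).
# UMEM_DEBUG=default records the audit metadata that ::findleaks needs.
UMEM_DEBUG=default UMEM_LOGGING=transaction LD_PRELOAD=libumem.so.1 \
    java ...            # start the JVM (or any process) under libumem

# Attach mdb to the running process (or a core file) and print the
# leak report collected by libumem:
mdb -p <pid>
> ::findleaks
```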

Comments
EVALUATION

The first problem is ChunkTaskQueueWithOverflow (CTQWO) in taskqueue.cpp.  Its initialize() method allocates a growable array for the _overflow_stack from the C heap, and that array is never freed; CTQWO has no destructor or other cleanup method.

The CTQWO is allocated by the ParCompactionManager constructor (psCompactionManager.cpp). The second problem is that the ParCompactionManager destructor never deletes or otherwise cleans up the CTQWO allocated by the constructor.  ParCompactionManager was cloned from PSPromotionManager, which assumes all instances live for the lifetime of the JVM.

One instance of ParCompactionManager, the "serial_CM" used primarily for compaction of the perm gen, is allocated during each full gc.
2008-04-09
EVALUATION

The items described in the previous entry are problems, but they are not the source of the noticeable leak.  The real leak occurs because GCTaskThreads fail to release memory allocated in the ResourceArea.  The threads all have ResourceMarks in place, but the scope of the ResourceMarks within GCTaskThread::run() is never exited.  The parallel compaction code is missing a call to GCTaskManager::release_all_resources(), which tells the GCTaskThreads to release their resources; in parallel scavenge, this call is made during gc setup.
2008-05-06
SUGGESTED FIX

Use an existing (statically allocated) compaction manager for the perm gen and include a call to GCTaskManager::release_all_resources() toward the beginning of each full gc.

diff -r b5489bb705c9 src/share/vm/gc_implementation/parallelScavenge/psParallelCompact.cpp
--- a/src/share/vm/gc_implementation/parallelScavenge/psParallelCompact.cpp	Tue May 06 15:37:36 2008 -0700
+++ b/src/share/vm/gc_implementation/parallelScavenge/psParallelCompact.cpp	Wed May 07 16:03:51 2008 -0700
@@ -1004,6 +1004,9 @@
 
   DEBUG_ONLY(mark_bitmap()->verify_clear();)
   DEBUG_ONLY(summary_data().verify_clear();)
+
+  // Have worker threads release resources the next time they run a task.
+  gc_task_manager()->release_all_resources();
 }
 
 void PSParallelCompact::post_compact()
@@ -1949,12 +1952,6 @@
   TimeStamp compaction_start;
   TimeStamp collection_exit;
 
-  // "serial_CM" is needed until the parallel implementation
-  // of the move and update is done.
-  ParCompactionManager* serial_CM = new ParCompactionManager();
-  // Don't initialize more than once.
-  // serial_CM->initialize(&summary_data(), mark_bitmap());
-
   ParallelScavengeHeap* heap = gc_heap();
   GCCause::Cause gc_cause = heap->gc_cause();
   PSYoungGen* young_gen = heap->young_gen();
@@ -1968,6 +1965,10 @@
   // miscellaneous bookkeeping.
   PreGCValues pre_gc_values;
   pre_compact(&pre_gc_values);
+
+  // Get the compaction manager reserved for the VM thread.
+  ParCompactionManager* const vmthread_cm =
+    ParCompactionManager::manager_array(gc_task_manager()->workers());
 
   // Place after pre_compact() where the number of invocations is incremented.
   AdaptiveSizePolicyOutput(size_policy, heap->total_collections());
@@ -2008,7 +2009,7 @@
     bool marked_for_unloading = false;
 
     marking_start.update();
-    marking_phase(serial_CM, maximum_heap_compaction);
+    marking_phase(vmthread_cm, maximum_heap_compaction);
 
 #ifndef PRODUCT
     if (TraceParallelOldGCMarkingPhase) {
@@ -2039,7 +2040,7 @@
 #endif
 
     bool max_on_system_gc = UseMaximumCompactionOnSystemGC && is_system_gc;
-    summary_phase(serial_CM, maximum_heap_compaction || max_on_system_gc);
+    summary_phase(vmthread_cm, maximum_heap_compaction || max_on_system_gc);
 
 #ifdef ASSERT
     if (VerifyParallelOldWithMarkSweep &&
@@ -2067,13 +2068,13 @@
       // code can use the the forwarding pointers to
       // check the new pointer calculation.  The restore_marks()
       // has to be done before the real compact.
-      serial_CM->set_action(ParCompactionManager::VerifyUpdate);
-      compact_perm(serial_CM);
-      compact_serial(serial_CM);
-      serial_CM->set_action(ParCompactionManager::ResetObjects);
-      compact_perm(serial_CM);
-      compact_serial(serial_CM);
-      serial_CM->set_action(ParCompactionManager::UpdateAndCopy);
+      vmthread_cm->set_action(ParCompactionManager::VerifyUpdate);
+      compact_perm(vmthread_cm);
+      compact_serial(vmthread_cm);
+      vmthread_cm->set_action(ParCompactionManager::ResetObjects);
+      compact_perm(vmthread_cm);
+      compact_serial(vmthread_cm);
+      vmthread_cm->set_action(ParCompactionManager::UpdateAndCopy);
 
       // For debugging only
       PSMarkSweep::restore_marks();
@@ -2084,15 +2085,13 @@
     compaction_start.update();
     // Does the perm gen always have to be done serially because
     // klasses are used in the update of an object?
-    compact_perm(serial_CM);
+    compact_perm(vmthread_cm);
 
     if (UseParallelOldGCCompacting) {
       compact();
     } else {
-      compact_serial(serial_CM);
+      compact_serial(vmthread_cm);
     }
-
-    delete serial_CM;
 
     // Reset the mark bitmap, summary data, and do other bookkeeping.  Must be
     // done before resizing.
2008-05-07
EVALUATION

http://hg.openjdk.java.net/jdk7/hotspot-gc/hotspot/rev/05712c37c828
2008-06-21


