JDK-6819891 : ParNew: Fix work queue overflow code to deal correctly with +UseCompressedOops
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 6u14
  • Priority: P1
  • Status: Closed
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2009-03-19
  • Updated: 2011-03-07
  • Resolved: 2011-03-07
The Version table provides details of the release in which this issue/RFE will be or has been addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

JDK 6: 6u14 (Fixed)
JDK 7: 7 (Fixed)
Other: hs14 (Fixed)
The fix for 6774607 was insufficient. Fix it correctly this time by using compressed pointers for linking up the overflown objects. The old fix in 6774607 was still vulnerable in the case where the overflown object was an object array whose size is larger than ParGCArrayScanChunk.
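To see why large object arrays are the problem case, the chunking idea behind ParGCArrayScanChunk can be sketched as follows. This is an illustrative model, not HotSpot code: a large array is scanned one chunk at a time, and the not-yet-scanned prefix length is recorded back into the array's own length field between chunks. Any overflow-list scheme that also borrows a header word of that same object while it waits on the list can therefore corrupt this in-progress state.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch (not HotSpot code) of chunked object-array scanning.
// kScanChunk stands in for ParGCArrayScanChunk.
constexpr std::size_t kScanChunk = 50;

struct ObjArray {
  std::size_t length;       // header length field, mutated during chunking
  std::vector<int> elems;
};

// Scan one chunk from the tail of the unscanned prefix; returns true if the
// array still has unscanned elements and should be re-pushed on the queue.
bool scan_chunk(ObjArray& a, std::size_t& scanned) {
  std::size_t end = a.length;
  std::size_t start = end > kScanChunk ? end - kScanChunk : 0;
  for (std::size_t i = start; i < end; ++i) ++scanned;  // "process" element
  a.length = start;  // remaining work recorded in the length field itself
  return start > 0;
}
```

The key point is that the length field does double duty as scan-progress state, so it is not free for overflow-list linkage while the array is grey.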

EVALUATION http://hg.openjdk.java.net/jdk7/hotspot-gc/hotspot/rev/cea947c8a988

SUGGESTED FIX repo=/net/jano2/export2/hotspot/ws/14/baseline delta=
changeset: 529:640db98269d8
user: ysr
date: Tue Mar 24 18:35:17 2009 -0700
files:
  src/share/vm/gc_implementation/concurrentMarkSweep/concurrentMarkSweepGeneration.cpp
  src/share/vm/gc_implementation/parNew/parNewGeneration.cpp
  src/share/vm/gc_implementation/parNew/parNewGeneration.hpp
  src/share/vm/runtime/globals.hpp
description:
6819891: ParNew: Fix work queue overflow code to deal correctly with +UseCompressedOops
Summary: When using compressed oops, rather than chaining the overflowed grey objects' pre-images through their klass words, we use GC-worker thread-local overflow stacks.
Reviewed-by: jcoomes, jmasa

WORK AROUND In particular, the last workaround above, of increasing the default value of ParGCArrayScanChunk, is a bit of a Catch-22, in the sense that increasing the value can (if the user's program uses large arrays) actually increase the incidence of work queue overflow (and thus adversely affect performance). However, despite the overflow, the program will continue to work properly in that case. On the other hand, with a small value for the option, the chances of overflow are much reduced (even in the presence of very large object arrays in the heap); however, were such an overflow to occur involving an object array, then we would be vulnerable to corrupting the heap and/or crashing in GC.

SUGGESTED FIX preliminary webrev: http://analemma.sfbay.sun.com/net/neeraja/export/

SUGGESTED FIX The above does not work in the case where a work queue overflows while pushing an object array during promotion failure; see the comments section. To work around these problems, when using compressed oops we will use private, non-shared growable arrays per thread to store oops that overflow their work queue. In these cases, it might be beneficial to drain the overflow lists more eagerly than is currently done in some of the scanning/evacuation closures.
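The per-thread overflow-stack approach described above can be sketched roughly as follows. All names here are illustrative, not HotSpot's: each GC worker owns a bounded task queue plus a private, growable overflow stack, so an overflowed grey object never needs its header words borrowed for list linkage, and the overflow entries are drained eagerly before the regular queue.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Hedged sketch of the thread-local overflow-stack idea (illustrative names,
// not HotSpot's). Tasks are modeled as plain ints.
struct Worker {
  static constexpr std::size_t kQueueCapacity = 4;  // tiny, to force overflow
  std::deque<int> queue;            // stand-in for the fixed-size task queue
  std::vector<int> overflow_stack;  // private, growable; no shared list

  void push(int task) {
    if (queue.size() < kQueueCapacity) queue.push_back(task);
    else overflow_stack.push_back(task);  // spill instead of header-chaining
  }

  bool pop(int& task) {
    if (!overflow_stack.empty()) {        // drain overflow eagerly first
      task = overflow_stack.back();
      overflow_stack.pop_back();
      return true;
    }
    if (!queue.empty()) {
      task = queue.back();
      queue.pop_back();
      return true;
    }
    return false;
  }
};
```

Because the overflow stack is private to the worker, no header word of the spilled object is touched, which sidesteps the conflict with the array-chunking code entirely.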

EVALUATION See suggested fix section.

WORK AROUND
-XX:-UseParNewGC
OR -XX:-UseCompressedOops
OR -XX:ParGCArrayScanChunk=<LARGER THAN LARGEST OBJECT ARRAY EVER ALLOCATED BY PROGRAM>
Each of these three alternatives, however, can adversely (sometimes very adversely) affect performance.

SUGGESTED FIX The ParNew overflow list handling code should use compressed pointers to link the overflown objects so as not to mess with the length field used by the array chunking code.
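The compressed-pointer linking idea above can be illustrated with a small sketch (again not HotSpot code): each "object" carries a 32-bit next link encoded as a narrow offset into a simulated heap, so an overflow list can be threaded through objects without ever touching the wide length field that the array-chunking code owns. The encode/decode helpers and the slot+1 null convention are assumptions of this model.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch: overflow-list linkage via 32-bit compressed pointers.
// A link is a narrow offset into the heap; 0 encodes "end of list".
struct Obj {
  std::uint32_t next_compressed = 0;  // narrow link field
  std::uint32_t length = 0;           // left untouched by list linkage
};

// Encode slot i as i + 1 so that 0 can serve as null.
std::uint32_t encode(std::size_t slot) { return static_cast<std::uint32_t>(slot + 1); }
std::size_t   decode(std::uint32_t c)  { return static_cast<std::size_t>(c - 1); }

// Push the object at `slot` onto the overflow list headed by `head`.
void overflow_push(std::vector<Obj>& heap, std::uint32_t& head, std::size_t slot) {
  heap[slot].next_compressed = head;
  head = encode(slot);
}
```

The point of the sketch is only that linkage state lives in a dedicated narrow field, so the length field used by the chunking code stays intact while the object sits on the overflow list.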