JDK-6865703 : G1: parallelize cache clean up
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs16
  • Priority: P3
  • Status: Closed
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2009-07-28
  • Updated: 2013-09-18
  • Resolved: 2010-01-15
The Version table provides details of the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

JDK 6: 6u18 (Fixed)
JDK 7: 7 (Fixed)
Other: hs16 (Fixed)
Description
When running performance tests with large heaps (24G or so) I noticed that GC worker 0 was taking longer in the RS updating phase than the rest. Here's an example:

      [Update RS (ms):  4.6  2.3  2.4  2.6  2.3  2.4  2.5  2.7  2.2  2.6  2.6  2.4  2.3
       Avg:   2.6, Min:   2.2, Max:   4.6]

When the pause times were short, this resulted in long termination times for all but worker 0:

      [Termination (ms):  0.0  2.0  1.9  1.9  2.0  1.9  1.9  1.9  1.9  1.9  2.0  1.9  1.9
       Avg:   1.8, Min:   0.0, Max:   2.0]

which added unnecessary time to the collection pause.

Comments
EVALUATION http://hg.openjdk.java.net/jdk7/hotspot/hotspot/rev/15c5903cf9e1
06-08-2009

EVALUATION http://hg.openjdk.java.net/jdk7/hotspot-gc/hotspot/rev/15c5903cf9e1
04-08-2009

SUGGESTED FIX The fix is to parallelize the card cache cleanup code (by, say, chunking the cache and having each worker claim a chunk before processing it).
28-07-2009

EVALUATION The bottleneck is that worker 0 alone serially cleans up the card cache. If that processing takes a non-trivial amount of time, the rest of the workers may have to wait for it during termination.
28-07-2009