JDK-6819077 : G1: first GC thread coming late into the GC
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs15
  • Priority: P3
  • Status: Closed
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2009-03-18
  • Updated: 2013-09-18
  • Resolved: 2010-01-15
  • JDK 6: 6u18 (Fixed)
  • JDK 7: 7 (Fixed)
  • Other: hs16 (Fixed)
We recently tried G1 on a 32GB heap with a 4GB / 8GB young gen. We noticed that termination times are relatively long (around 30-40ms). The reason is that the GC worker with id 0 (it is always the worker with id 0) comes late into the GC code, which leaves all the other threads waiting for it during termination. This is clearly a performance bottleneck that we should resolve.

EVALUATION http://hg.openjdk.java.net/jdk7/hotspot/hotspot/rev/6cb8e9df7174

EVALUATION http://hg.openjdk.java.net/jdk7/hotspot-gc/hotspot/rev/6cb8e9df7174

EVALUATION Adding an extra timer to the code and running the 32GB heap application confirmed that the culprit is indeed the one mentioned in the previous Evaluation note. clear_and_record_card_counts() would take more than 60ms on relatively state-of-the-art hardware.

EVALUATION Well, with a quick glance at the code I think I've found the culprit:

  void HRInto_G1RemSet::oops_into_collection_set_do(OopsInHeapRegionClosure* oc,
                                                    int worker_i) {
    ...
    if (worker_i == 0) {
      _cg1r->clear_and_record_card_counts();
    }
    ...
  }

EVALUATION Ramki pointed out that it is not necessarily the same OS thread that gets id 0. Given that it is always worker 0 that is late, we suspect some pathology in the G1 code rather than in OS scheduling.