JDK-7005137 : G1: Decide whether we've gone over the reserve dynamically
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs20
  • Priority: P4
  • Status: Open
  • Resolution: Unresolved
  • OS: generic
  • CPU: generic
  • Submitted: 2010-12-07
  • Updated: 2017-11-17
G1 has the notion of a "reserve", set by the G1ReservePercent parameter. The idea behind the reserve is that G1 should always try to keep that much memory free, in case a collection copies more objects than predicted and therefore needs more to-space than expected. Currently, the reserve is only taken into account when calculating the target number of eden regions to be allocated between one evacuation pause and the next.
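To make the eden-target calculation concrete, here is a minimal sketch (not HotSpot code; all names are illustrative) of how a reserve expressed as a percentage of the heap caps the number of regions eden may grow into:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch only: compute an eden target that leaves the
// reserve untouched. Region counts stand in for actual heap sizing.
static size_t eden_target_regions(size_t heap_regions,
                                  size_t free_regions,
                                  unsigned reserve_percent) {
  // Regions held back as to-space reserve, rounded up.
  size_t reserve_regions = (heap_regions * reserve_percent + 99) / 100;
  // Eden may only grow into the free space above the reserve.
  return free_regions > reserve_regions ? free_regions - reserve_regions
                                        : 0;
}
```

With a 100-region heap, 50 free regions, and the default 10% reserve, the eden target would be 40 regions; with only 5 free regions it would be 0, since everything free is already part of the reserve.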

There are two situations which might cause G1 to eat into the reserve:

1) Eden expansion due to the GC locker (6994056: G1: when GC locker is active, extend the Eden instead of allocating into the old gen) - the regions the eden will be expanded by might eat into the reserve.
2) Humongous allocations might fill up the heap between two evacuation pauses, without the eden region target being adjusted accordingly.

We should look into taking the reserve into account more dynamically to minimize the chance of eating into it.
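One way to take the reserve into account dynamically would be a check at allocation time, applied to both cases above (GC-locker eden expansion and humongous allocations). This is a hypothetical sketch under assumed names, not a proposed HotSpot change:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch only: refuse (or flag) a region hand-out once
// satisfying it would dip into the to-space reserve.
static bool allocation_would_eat_reserve(size_t free_regions,
                                         size_t regions_needed,
                                         size_t reserve_regions) {
  return free_regions < regions_needed + reserve_regions;
}
```

For example, with a 10-region reserve, requesting 3 regions while 12 are free would eat into the reserve, whereas requesting 3 with 20 free would not. The policy on a positive result (deny the allocation, trigger a pause, or shrink the eden target) is a separate design decision.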

An additional improvement could be to revisit the default value of the reserve. Currently, it is set to a very generous / conservative value (10% of the heap). When the user does not set it explicitly, we could make it less conservative or adjust it according to the current heap parameters.

With larger heaps this becomes more and more of an issue and a big waste of space: 10% of a 100GB heap is 10GB, and it is typically impossible for G1 to have a 10GB young gen with such heaps due to pause-time constraints.
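One possible shape for an adaptive default is to keep the percentage-based reserve on small heaps but cap it at an absolute bound on large ones. The cap value below is an assumption for illustration, not a figure from this report:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative sketch only: a default reserve that is 10% of the heap,
// capped at an absolute bound so very large heaps do not hold back
// tens of gigabytes. The 2GB cap is an arbitrary example value.
static uint64_t default_reserve_bytes(uint64_t heap_bytes) {
  const uint64_t percent_based = heap_bytes / 10;             // 10% default
  const uint64_t absolute_cap  = 2ULL * 1024 * 1024 * 1024;   // example cap
  return percent_based < absolute_cap ? percent_based : absolute_cap;
}
```

Under this scheme a 10GB heap would reserve 1GB (the percentage dominates), while a 100GB heap would reserve only the 2GB cap instead of 10GB.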