JDK-6976350 : G1: deal with fragmentation while copying objects during GC
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs19
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2010-08-11
  • Updated: 2014-02-14
  • Resolved: 2013-06-05
Version table:
  • JDK 8: 8 (Fixed)
  • Other: hs25 (Fixed)
Description
Currently, when an allocation request fails on a region we retire that region and grab a new one. This works fine most of the time, when allocation requests are small. However, if the GC comes across the occasional large-ish object that does not fit in the remainder of the current GC alloc region, we'll retire that region, possibly wasting a fair amount of space in it. It'd be good if we minimized how much space we waste because of this fragmentation.
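
For illustration only, here is a minimal, self-contained sketch of the retire-on-failure behaviour described above and of where the wasted space comes from. Region, new_gc_alloc_region, and the sizes are hypothetical stand-ins, not HotSpot code:

#include <cstddef>

// Hypothetical sketch of the current retire-on-failure policy; not HotSpot code.
struct Region {
  static const size_t kCapacity = 1024 * 1024;  // pretend 1 MB regions
  char   data[kCapacity];
  size_t used = 0;

  // Bump-pointer allocation; returns nullptr when the request does not fit.
  void* allocate(size_t bytes) {
    if (used + bytes > kCapacity) return nullptr;
    void* p = data + used;
    used += bytes;
    return p;
  }
  size_t free_bytes() const { return kCapacity - used; }
};

Region* new_gc_alloc_region() { return new Region(); }  // stand-in for grabbing a fresh region

// Current behaviour: if an object does not fit, retire the region and grab a
// new one. Whatever was left in the old region (its free_bytes()) is wasted.
void* gc_copy_allocate(Region*& cur, size_t bytes) {
  void* p = cur->allocate(bytes);
  if (p != nullptr) return p;
  // A large-ish object arrived: cur->free_bytes() of space is lost to fragmentation.
  cur = new_gc_alloc_region();  // "retiring" is just dropping the old region in this sketch
  return cur->allocate(bytes);
}

For example, with 1 MB regions and the occasional 200 KB object, each such retire can waste up to just under 200 KB of the region it abandons.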

Comments
SUGGESTED FIX A couple of extra notes on the fix: - for large-ish allocations, we should make sure that we do not retire PLABs; instead, we should satisfy such allocations in the way described in note #1, while still being able to go back to the half-full PLAB we're holding on to and carry on allocating into it whenever possible.
11-08-2010
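
A minimal sketch of the note above, using hypothetical names and sizes (Plab, kPlabBytes, kLargeThreshold); it only illustrates routing the oversized copy elsewhere while keeping the half-full PLAB usable, and is not the actual PLAB code:

#include <cstddef>
#include <cstdlib>

const size_t kPlabBytes      = 64 * 1024;  // pretend 64 KB PLABs
const size_t kLargeThreshold = 8 * 1024;   // pretend "large-ish" cutoff

struct Plab {
  char   buf[kPlabBytes];
  size_t used = 0;
  void* allocate(size_t bytes) {            // bump-pointer allocation
    if (used + bytes > kPlabBytes) return nullptr;
    void* p = buf + used;
    used += bytes;
    return p;
  }
};

void* plab_copy_allocate(Plab& plab, size_t bytes) {
  if (void* p = plab.allocate(bytes)) return p;  // fits in the current PLAB
  if (bytes >= kLargeThreshold) {
    // Large-ish object: satisfy it outside the PLAB (e.g. via the region
    // scheme in note #1) and keep the half-full PLAB for later, smaller copies.
    return std::malloc(bytes);                   // stand-in for a direct allocation
  }
  // Small object and the PLAB really is full: this is where the PLAB would be
  // retired and refilled (omitted from this sketch).
  return nullptr;
}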

PUBLIC COMMENTS It might be worth implementing the scheme described in Note #1 in the Suggested Fix section in a modular way, so that it can also be re-used for the allocation regions we use to satisfy mutator allocation requests.
11-08-2010
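
A hypothetical sketch of what such a shared abstraction could look like; the class name and methods below are made up for illustration and are not an existing HotSpot interface:

#include <cstddef>

// One allocation-region wrapper that both the GC copying path and the mutator
// allocation path could instantiate; only the source of fresh regions and the
// hand-back of retired ones would differ between the two.
class AllocRegionWrapper {
public:
  virtual ~AllocRegionWrapper() {}
  // Bump-pointer allocation out of the current region; nullptr on failure.
  virtual void* allocate(size_t bytes) = 0;
  // Retire the current region and install a fresh one; returns false if no
  // new region is available.
  virtual bool refill() = 0;
};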

SUGGESTED FIX Currently we only keep track of one allocation region per GC purpose (i.e., survivor or tenured) and, when an allocation fails, we simply retire it and set up a new region for that allocation purpose. If a region does not have enough space to satisfy an allocation request, but we would like to re-use it in the future in case it can satisfy a smaller allocation request, we'll have to keep track of more than one region per allocation purpose.

Imagine that we keep track of N regions R[0..N-1]. We first set up R[0] and allocate out of it. If R[0] cannot satisfy an allocation, and we think there might be too much space wasted if we retire it, we then set up R[1], allocate into it, and go back to satisfying subsequent allocation requests out of R[0]. Later, it's possible that R[0] and R[1] are both half full and still cannot satisfy a subsequent request, in which case we'll set up R[2], etc. If we have set up N regions and still cannot satisfy an allocation request, we'll retire R[0] to allow us to create a new region. When we retire R[0], we'll need to shift R[1..N-1] to become R[0..N-2] and start using the new R[0], i.e., the old R[1], as the "default" allocation region.

I imagine that N will be small, i.e., 2 or 3, but I assume that the larger it is, the better we'll be able to deal with fragmentation, maybe at the expense of some overhead of managing R. I will also guess that, for a lot of applications that only use small objects, all allocations will go into R[0] and we'll never set up any other region in R.
11-08-2010
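
A minimal, self-contained sketch of the R[0..N-1] scheme described above, with hypothetical names and sizes (Region, MultiRegionAllocator, 1 MB regions, N = 3); it illustrates the bookkeeping only and is not the actual HotSpot change:

#include <cstddef>

const size_t REGION_BYTES = 1024 * 1024;  // pretend 1 MB regions
const int    N            = 3;            // small N, as suggested above

struct Region {
  char   data[REGION_BYTES];
  size_t used = 0;
  void* allocate(size_t bytes) {           // bump-pointer allocation
    if (used + bytes > REGION_BYTES) return nullptr;
    void* p = data + used;
    used += bytes;
    return p;
  }
};

struct MultiRegionAllocator {
  Region* r[N] = {};   // R[0..N-1]; r[0] is the "default" allocation region
  int     active = 0;  // how many of the N slots are currently set up

  void* allocate(size_t bytes) {
    // 1. Try the regions we already hold, R[0] first, so a half-full region
    //    keeps serving smaller requests.
    for (int i = 0; i < active; i++) {
      if (void* p = r[i]->allocate(bytes)) return p;
    }
    // 2. None of them fits: set up another region if a slot is free.
    if (active < N) {
      r[active] = new Region();
      return r[active++]->allocate(bytes);
    }
    // 3. All N slots are in use: retire R[0], shift R[1..N-1] down so the old
    //    R[1] becomes the default, and put a fresh region in the last slot.
    delete r[0];   // "retire" is just freeing the region in this sketch
    for (int i = 1; i < N; i++) r[i - 1] = r[i];
    r[N - 1] = new Region();
    return r[N - 1]->allocate(bytes);
  }
};

Because the loop always tries R[0] first, an application whose copied objects are all small never sets up R[1..N-1], matching the expectation in the last paragraph of the comment.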