JDK-8034052 : Investigate using different CodeCacheSegmentSizes in segmented code cache
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 9,10
  • Priority: P4
  • Status: Closed
  • Resolution: Duplicate
  • OS: generic
  • Submitted: 2014-02-10
  • Updated: 2017-01-31
  • Resolved: 2017-01-31
JDK 10
  • 10: Resolved
Related Reports
  • Duplicate: JDK-7072317
Description
JDK-8029779 showed that tiered compilation can result in a large memory overhead (up to 20%) for storing compiled code. Having multiple code heaps allows setting CodeCacheSegmentSize per code heap. As a result, the memory overhead of storing compiled code can be reduced by setting CodeCacheSegmentSize according to the sizes of the compiled code that goes into a particular heap.
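As a back-of-the-envelope illustration of where that overhead comes from (a hypothetical sketch, not HotSpot code): each blob is rounded up to a whole number of segments, and the segment map costs roughly one byte per segment, so a segment size mismatched to a heap's typical blob sizes wastes space on both counts. The blob sizes below are invented for the example.

```java
// Hypothetical model of code-cache footprint vs. segment size.
// Assumption: a blob occupies ceil(size/segment) whole segments, and the
// segment map ("segmap") adds about one byte of bookkeeping per segment.
public class SegmentOverhead {
    // Bytes consumed by one blob: payload rounded up to whole segments,
    // plus one segmap byte per segment.
    static long footprint(long blobSize, long segmentSize) {
        long segments = (blobSize + segmentSize - 1) / segmentSize;
        return segments * segmentSize + segments;
    }

    public static void main(String[] args) {
        long[] blobs = {200, 900, 5_000, 40_000}; // made-up blob sizes (bytes)
        for (long segSize : new long[]{64, 128, 1024}) {
            long total = 0;
            for (long b : blobs) total += footprint(b, segSize);
            System.out.println("segment=" + segSize + " total=" + total);
        }
    }
}
```

Small segments waste little padding but grow the segmap; large segments shrink the segmap but pad small blobs heavily, which is why a per-heap segment size tuned to each heap's blob-size profile can help.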
Comments
This effort to improve space efficiency in the code cache should be investigated together with JDK-7072317. Closing as duplicate.
31-01-2017

Suggestion from David Chase to optimize code cache allocation: I'm also late to this. There's plenty of work that's been done on memory allocation for C, trading off speed and fragmentation. The size-indexed freelist trick works pretty well; very good speed, and consumption not that bad. That's what we used back in the days of silly C benchmark wars. If you're willing to spend the time, it is my recollection that "Cartesian Trees" are very good at space, but not so good at speed. David
17-02-2014
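The size-indexed freelist trick David Chase mentions could be sketched roughly as follows (hypothetical names and granularity, not HotSpot's actual allocator): free blocks are binned by rounded-up size class, so an allocation is an indexed bucket lookup rather than a scan of one long list.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of a size-indexed freelist: one bin per size class, where a
// size class is the request size rounded up to a 64-byte granule.
public class SizeClassFreelist {
    static final int GRANULE = 64;  // assumed size-class granularity
    static final int CLASSES = 64;  // covers requests up to 4 KiB
    final List<Deque<Long>> bins = new ArrayList<>(); // bins hold block addresses

    SizeClassFreelist() {
        for (int i = 0; i < CLASSES; i++) bins.add(new ArrayDeque<>());
    }

    // Map a byte size to its bin index (round up to the next granule).
    static int binFor(int size) {
        return (size + GRANULE - 1) / GRANULE - 1;
    }

    void free(long addr, int size) {
        bins.get(binFor(size)).push(addr);
    }

    // Take a block from the smallest bin that fits; -1 when none is cached.
    long allocate(int size) {
        for (int i = binFor(size); i < CLASSES; i++)
            if (!bins.get(i).isEmpty()) return bins.get(i).pop();
        return -1;
    }
}
```

This is the speed/fragmentation trade-off from the comment: lookups are near O(1), at the cost of internal fragmentation from rounding every block up to its size class.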

Suggestion from Chris Plummer to optimize code cache allocation: Hi Albert, Sorry I'm a bit late getting back to this. Thanks for the data. From this it looks like the larger CodeCacheSegmentSize is wasting some memory in the CodeBlob, but that is made up for by not needing as large a segmap. Sounds good to me. Regarding first fit vs. best fit, my understanding is that the former is best when performance is critical and sizes vary greatly (a typical C app with many mallocs). I think best fit is probably better for code cache usage since performance is less critical and the sizes don't vary so much. One thing you might want to consider is having an array of freelists for the smaller sizes so you can quickly index to the size you need (or the next size up that is big enough) and then have a general free list for the larger sizes. This should work well for the code cache since there are not that many different sizes to deal with. If you had an array of just size 32, that would cover free blobs up to 2k in size, which I'm guessing is well over 90% of the allocations. For sizes under 2k, at most you check 32 buckets for a free one of the right size. For sizes over 2k, you end up searching a pretty short freelist. If the list does get long, you could limit how deep in the list you search and just use the best fit up to that point. cheers, Chris
17-02-2014
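Chris Plummer's hybrid scheme could be sketched as follows (again a hypothetical illustration with made-up names and constants, not HotSpot code): an array of 32 buckets at 64-byte granularity covers free blobs up to 2k, and anything larger goes on one general list that is searched best-fit only to a bounded depth.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the hybrid: indexed buckets for small free blobs, plus one
// general list for large blobs with a depth-limited best-fit search.
public class HybridFreeList {
    static final int GRANULE = 64, SMALL_BUCKETS = 32, MAX_SCAN = 16;

    static class Block {
        final long addr; final int size;
        Block(long addr, int size) { this.addr = addr; this.size = size; }
    }

    final List<List<Block>> small = new ArrayList<>(); // buckets for <= 2k
    final List<Block> large = new ArrayList<>();       // everything bigger

    HybridFreeList() {
        for (int i = 0; i < SMALL_BUCKETS; i++) small.add(new ArrayList<>());
    }

    static int bucketFor(int size) {
        return (size + GRANULE - 1) / GRANULE - 1;
    }

    void free(Block b) {
        int idx = bucketFor(b.size);
        if (idx < SMALL_BUCKETS) small.get(idx).add(b); else large.add(b);
    }

    Block allocate(int size) {
        // Small request: index straight to the first bucket big enough.
        for (int i = bucketFor(size); i < SMALL_BUCKETS; i++) {
            List<Block> bucket = small.get(i);
            if (!bucket.isEmpty()) return bucket.remove(bucket.size() - 1);
        }
        // Fall through to the general list: best fit, but only look at
        // the first MAX_SCAN entries so a long list can't hurt latency.
        int best = -1;
        for (int i = 0; i < large.size() && i < MAX_SCAN; i++) {
            Block b = large.get(i);
            if (b.size >= size && (best < 0 || b.size < large.get(best).size))
                best = i;
        }
        return best >= 0 ? large.remove(best) : null;
    }
}
```

The design bet is exactly the one in the comment: if well over 90% of blobs fall under 2k, almost every allocation is an array index, and the occasional large request pays only a bounded best-fit scan.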