JDK-8006952 : CodeCacheFlushing degenerates VM with excessive codecache freelist iteration
  • Type: Bug
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 8
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2013-01-25
  • Updated: 2016-06-28
  • Resolved: 2013-04-17
Fixed in:
JDK 7: 7u101
JDK 8: 8
Other: hs25
Description
Code cache flushing causes compilation to stop, contention on the codecache lock, and lots of wasted CPU cycles when the code cache gets full.

When uncommitted codecache memory is less than the total size of the memory on the freelist, we iterate the entire freelist to find the largest block. This is done from ~8 places in the compile broker. As long as the largest block is smaller than CodeCacheMinimumFreeSpace (1.5M), all compilation is halted and flushing is invoked. Since gathering a 1.5M contiguous freelist block takes some time, compilation is delayed, and repeated flushing makes the freelist longer and longer. After a while it is very long, but still far from contiguous. More and more time is spent iterating the freelist. Every profile counter that overflows ends up checking the freelist, and every compiler thread checks the freelist a few times before delaying compilation. In addition, the freelist is accessed while holding the codecache lock, making the excessive iteration fully serialized. After a day or so a CPU core may spend 100% of its cycles banging on the codecache lock and iterating the freelist. The application also slows down as more and more code is flushed from the cache.
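
As an illustration only, here is a simplified, standalone sketch of the pattern described above (hypothetical types; not HotSpot's real CodeCache implementation, only modeled after the snippets quoted in the comments below): an O(n) walk of the freelist under the codecache lock, whose result gates compilation.

    #include <cstddef>
    #include <mutex>

    struct FreeBlock {
      std::size_t size;   // size of this free chunk in bytes
      FreeBlock*  next;   // next chunk on the freelist
    };

    static std::mutex codecache_lock;          // stands in for the codecache lock
    static FreeBlock* freelist_head = nullptr; // grows as more code is flushed

    static const std::size_t CodeCacheFlushingMinimumFreeSpace = 1500 * 1024;

    // O(n) walk over the whole freelist, done while holding the lock. As
    // flushing fragments the cache, n keeps growing and every caller pays
    // this cost, fully serialized on the lock.
    static std::size_t largest_free_block() {
      std::lock_guard<std::mutex> guard(codecache_lock);
      std::size_t largest = 0;
      for (FreeBlock* b = freelist_head; b != nullptr; b = b->next) {
        if (b->size > largest) largest = b->size;
      }
      return largest;
    }

    // Mirrors the check quoted in the comments: compilation stays delayed as
    // long as no single contiguous block reaches the threshold, even if the
    // total free space is far larger.
    static bool needs_flushing() {
      return largest_free_block() < CodeCacheFlushingMinimumFreeSpace;
    }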

This problem is mostly manifested with tiered compilation, since we compile and reclaim a lot more code. A clear symptom is the VM having stopped compiling even though it reports enough free code cache. The problem is worse with big codecaches, since the freelist gets more fragmented and much longer before a contiguous block is found.

Workaround: Turn off code cache flushing (run with -XX:-UseCodeCacheFlushing).

The solution is probably to not require contiguous free memory. The reason for the threshold is to guarantee space for adapters, and they are usually small and will fit anyway.
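
A minimal sketch of that direction, assuming a running free-space counter maintained as blocks go on and off the freelist (hypothetical names; not the actual fix that was integrated):

    #include <cstddef>

    // O(1) counter instead of scanning the freelist for the largest
    // contiguous block; updated on every allocation and deallocation.
    static std::size_t unallocated_bytes = 0;
    static const std::size_t CodeCacheFlushingMinimumFreeSpace = 1500 * 1024;

    // Called when a freed method's space goes back on the freelist.
    static void note_freed(std::size_t bytes)     { unallocated_bytes += bytes; }
    // Called when space is handed out again.
    static void note_allocated(std::size_t bytes) { unallocated_bytes -= bytes; }

    // No freelist walk and no dependence on finding one contiguous 1.5M
    // block; small allocations such as adapters still fit in smaller chunks.
    static bool needs_flushing() {
      return unallocated_bytes < CodeCacheFlushingMinimumFreeSpace;
    }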

Comments
This was actually a regression going from JRockit to JDK 7, so not a regression in our usual sense.
14-06-2016

CodeCacheMinimumFreeSpace is 500k, not 1500k, and I don't believe it is the problem here. CodeCacheFlushingMinimumFreeSpace is 1500k and appears to be the problem. It is what is being compared against the largest free block:

    static bool needs_flushing() {
      return largest_free_block() < CodeCacheFlushingMinimumFreeSpace;
    }

And as long as needs_flushing() returns true, we don't compile:

    // If the compiler is shut off due to code cache flushing or otherwise,
    // fail out now so blocking compiles dont hang the java thread
    if (!should_compile_new_jobs() || (UseCodeCacheFlushing && CodeCache::needs_flushing())) {
      CompilationPolicy::policy()->delay_compilation(method());
      return NULL;
    }
19-03-2013