When the code cache gets full, code cache flushing causes compilation to stop, contention on the codecache lock, and many wasted CPU cycles.
When uncommitted codecache memory is less than the total size of the memory on the freelist, we iterate the entire freelist to find the largest block. This is done from ~8 places in the compile broker. As long as the largest block is smaller than CodeCacheMinimumFreeSpace (1.5M), all compilation is halted and flushing is invoked. Since accumulating a 1.5M contiguous freelist block takes some time, compilation is delayed, and repeated flushing makes the freelist longer and longer. After a while it is very long, but still far from contiguous, and more and more time is spent iterating it. Every profile counter that overflows ends up checking the freelist, and every compiler thread checks it a few times before delaying compilation. In addition, the freelist is accessed while holding the codecache lock, which makes all of this iterating fully serialized. After a day or so a CPU core may spend 100% of its cycles banging on the codecache lock and iterating the freelist. The application also slows down as more and more code is flushed from the cache.
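A minimal sketch of the pattern described above (simplified, not the actual HotSpot sources; FreeBlock and the helper names are stand-ins for the real CodeHeap/CompileBroker code paths, and the lock is only shown as a comment):

  #include <cstddef>

  struct FreeBlock {
    size_t     size;
    FreeBlock* next;
  };

  // O(n) walk over the entire freelist, done while holding the codecache lock.
  static size_t largest_free_block(const FreeBlock* head) {
    size_t largest = 0;
    for (const FreeBlock* b = head; b != NULL; b = b->next) {
      if (b->size > largest) largest = b->size;
    }
    return largest;
  }

  // Called from ~8 places in the compile broker (profile counter overflow,
  // each compiler thread, ...).  When the cache is nearly full this runs
  // over and over, fully serialized on the codecache lock.
  static bool enough_space_to_compile(const FreeBlock* freelist,
                                      size_t uncommitted,
                                      size_t freelist_total,
                                      size_t minimum_free_space) {
    size_t largest = uncommitted;
    if (uncommitted < freelist_total) {
      // MutexLocker mu(CodeCache_lock);       // serializes every caller
      largest = largest_free_block(freelist);  // long, fragmented list => slow
    }
    if (largest < minimum_free_space) {
      // Compilation is halted and flushing is invoked, which fragments the
      // freelist further and makes the next walk even longer.
      return false;
    }
    return true;
  }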
This problem mostly manifests with tiered compilation since we compile and reclaim a lot more code. A clear symptom is when the VM has stopped compiling even though it reports enough free code cache. The problem is worse with large code caches since the freelist becomes more fragmented and grows much longer before a contiguous block is found.
Workaround: Turn off code cache flushing (-XX:-UseCodeCacheFlushing).
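For example, on the command line (MyApp stands in for the application's main class):

  java -XX:-UseCodeCacheFlushing MyApp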
The solution is probably to not require contiguous free memory. The reason for the threshold is to guarantee space for adapters, and they are usually small and will fit anyway.
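A sketch of the suggested direction (not a patch; unallocated, freelist_total and adapter_reserve are illustrative names): gate compilation on total free space, which can be tracked incrementally, rather than on the largest contiguous block, keeping only a small reserve for adapters.

  #include <cstddef>

  // Cheap check: no freelist walk, no long hold of the codecache lock.
  static bool enough_space_to_compile(size_t unallocated,
                                      size_t freelist_total,
                                      size_t adapter_reserve /* small */) {
    size_t total_free = unallocated + freelist_total;
    return total_free >= adapter_reserve;
  }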