JDK-8166317 : InterpreterCodeSize should be computed
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 9,10
  • Priority: P4
  • Status: Resolved
  • Resolution: Fixed
  • Submitted: 2016-09-19
  • Updated: 2021-03-10
  • Resolved: 2017-10-24
JDK 10
10 b31 Fixed
Description
Too often while developing with JVMCI, I still see this when trying to run the VM with support for attaching a debugger (e.g., -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8000,suspend=y):

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (interpreter.hpp:105), pid=82573, tid=0x0000000000001703
#  guarantee(codelet_size > 0 && (size_t)codelet_size > 2*K) failed: not enough space for interpreter generation
#
# JRE version:  (8.0_92-b14) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.71-b01-internal-jvmci-0.21-dev mixed mode bsd-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/dsimon/graal/graal-core/hs_err_pid82573.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#

The underlying issue is that the code cache size required for the interpreter should not be "guessed", especially if the guess doesn't allow for sufficient wiggle room. Instead, the interpreter should be generated into an oversized buffer that is trimmed back to exactly the right size.

An intermediate fix would be to convert InterpreterCodeSize from a C++ constant to a VM option so that the user can at least recover when the statically chosen guess is wrong.
Comments
I opened a new issue JDK-8187091 "ReturnBlobToWrongHeapTest fails because of problems in CodeHeap::contains_blob()" for the problem with ReturnBlobToWrongHeapTest.java described in the previous comment.
01-09-2017

I've just posted a possible fix for review: http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-August/028162.html
31-08-2017

While working on this, I found another problem which is related to the fix of JDK-8183573 and leads to crashes when executing the JTreg test compiler/codecache/stress/ReturnBlobToWrongHeapTest.java. The problem is that JDK-8183573 replaced:

virtual bool contains_blob(const CodeBlob* blob) const { return low_boundary() <= (char*) blob && (char*) blob < high(); }

with:

bool contains_blob(const CodeBlob* blob) const { return contains(blob->code_begin()); }

But that may be wrong in the corner case where the size of the CodeBlob's payload is zero (i.e. the CodeBlob consists only of the 'header', that is, the C++ object itself), because in that case CodeBlob::code_begin() points right behind the CodeBlob's header, which is a memory location that no longer belongs to the CodeBlob.

This exact corner case is exercised by ReturnBlobToWrongHeapTest, which allocates CodeBlobs of size zero (i.e. zero 'payload') with the help of sun.hotspot.WhiteBox.allocateCodeBlob() until the CodeCache fills up. The test first fills the 'non-profiled nmethods' CodeHeap. Once the 'non-profiled nmethods' CodeHeap is full, the VM automatically tries to allocate from the 'profiled nmethods' CodeHeap until that fills up as well. But in the CodeCache, the 'profiled nmethods' CodeHeap is located right before the 'non-profiled nmethods' CodeHeap. So if the last CodeBlob allocated from the 'profiled nmethods' CodeHeap has a payload size of zero and uses all of the CodeHeap's remaining size, we end up with a CodeBlob whose code_begin() address points right behind the actual CodeHeap (i.e. right at the beginning of the adjacent 'non-profiled nmethods' CodeHeap).
This causes the following guarantee to fire when we try to free the last allocated CodeBlob (with sun.hotspot.WhiteBox.freeCodeBlob()):

# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (heap.cpp:248), pid=27586, tid=27587
# guarantee((char*) b >= _memory.low_boundary() && (char*) b < _memory.high()) failed: The block to be deallocated 0x00007fffe6666f80 is not within the heap starting with 0x00007fffe6667000 and ending with 0x00007fffe6ba000

The fix is trivial, so I'll include it in this bug: just revert the change mentioned at the beginning of this comment so that contains_blob() again uses the address of the CodeBlob instead of CodeBlob::code_begin().
29-08-2017

Ah, ok, it was the trimming at the end of the stub queue blobs for each bytecode. It seems a reasonable idea to trim the end of the blob and return it to the CodeCache. I had added a test a long time ago to catch when we had buffer overruns quickly. It turned on JVMTI and some other things. That was also when we changed PrintInterpreter to be available in product mode so we can see the numbers. Again, I don't know of a non-intrusive way to fix this perfectly.
24-08-2017

No, I don't think the memory which is allocated for the interpreter gets shrunk to its actual size. TemplateInterpreter contains a StubQueue for the generated interpreter codelets and stubs. The StubQueue directly allocates a BufferBlob of InterpreterCodeSize right in the CodeCache, and that BufferBlob cannot be shrunk any more. The generation of each stub/codelet always uses the complete remaining space of the StubQueue/BufferBlob, but after generation it is trimmed to its actual size within the StubQueue/BufferBlob. So the stubs/codelets are dense within the interpreter BufferBlob, but the free space at the end of that BufferBlob isn't returned to the CodeCache once interpreter generation has finished (because the CodeCache currently simply doesn't offer this functionality). The wasted space is actually reported by "-XX:+PrintInterpreter":

----------------------------------------------------------------------
Interpreter

 code size    = 139K bytes
 total space  = 267K bytes
 wasted space = 128K bytes

This is from the current jdk9-b181 without debugging turned on. I think it should theoretically be simple to return the wasted space to the CodeCache, because at the early stage where we generate the interpreter there shouldn't be other allocations from the CodeCache. But I still have to evaluate that.
24-08-2017

Are you sure the buffer is shrunk at the end of interpreter generation? If so, why not just make InterpreterCodeSize 2 or 3 times its current size?
23-08-2017

If someone comes up with a good way to fix this, they can reopen and do it. We've spent some time and don't have a good solution, so we are not planning to fix it even though it's inconvenient to have an initial fixed size. I believe if the initial size is too large, it's shrunk at the end of interpreter generation.
23-08-2017

Not a current priority, closing as WNF (Won't Fix).
15-05-2017

As this issue is related to the interpreter, I'd like to ask the Runtime Team to look into it.
08-02-2017