JDK-8038356 : JVM should manage multiple versions of compiled methods
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 9,10
  • Priority: P4
  • Status: Open
  • Resolution: Unresolved
  • Submitted: 2014-03-25
  • Updated: 2022-09-05
Description
Currently, a JVM bytecode method has a single nullable pointer, Method::_code, which refers to its unique compiled "nmethod".

If the JVM needs to manage multiple compiled versions of a method, this design does not scale.  For example, we store OSR methods (perhaps several versions per method distinguished by entry point BCI) in a per-class side table.

In the future, mixtures of profiled and optimized execution modes may require two or more versions of a method (at different profile levels) to be kept available at the same time.  Also in the future, heterogeneous processing platforms may require optimized nmethods for more than one ISA at the same time.

To make this situation manageable, the _code field should be the root of a linked list of nmethods reaching all versions of a given method.  A new field, nmethod::_next_code, can link each version to the next.  The first item in the list can be privileged to interoperate with stubs and with normal calls on the CPU.
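
A minimal sketch of that layout follows, assuming hypothetical class shapes; only the names Method::_code and nmethod::_next_code come from this RFE, and nothing below is an actual HotSpot declaration:

    // Illustrative sketch only; not real HotSpot code.
    class Method;  // forward declaration

    class nmethod {
      Method*  _method    = nullptr;  // owning Method
      nmethod* _next_code = nullptr;  // next alternate compiled version, or null

     public:
      nmethod* next_code() const          { return _next_code; }
      void     set_next_code(nmethod* nm) { _next_code = nm; }
    };

    class Method {
      // Head of a singly linked list of compiled versions; the head is the
      // privileged version wired into stubs and ordinary CPU calls.
      nmethod* _code = nullptr;

     public:
      nmethod* code() const { return _code; }

      // Install a new version at the head, demoting the previous head to an
      // alternate version reachable through _next_code.
      void install_code(nmethod* nm) {
        nm->set_next_code(_code);
        _code = nm;
      }
    };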

There are many use cases for a 1-N relation between one Method* meta-object and N code blobs:

A. differing optimization levels: keep a slower version handy if we need to fall back from a speculative optimization

B. differing profiling levels: the profiled version executes 1% of the time to gather more info, while the best optimized version runs 99% of the time

C. alternate entry points for on-stack replacement:  some hot methods must be entered at mid-execution from the interpreter; each BCI has a different entry point and different optimized code (a lookup sketch for this case follows the list)

D. coprocessor methods: code blobs for dispatch on GPUs or gate arrays

E. customized stream loops: optimizations of work-stealing scheduler methods for particular stream configurations

F. customized vector loops: vectorized optimizations of methods, to be used only in contexts where large data sets are input

G. native-mode methods:  callbacks from native code which are called outside the JVM's invariants

H. specialized methods:  hot generic methods which are specialized to particular layouts or data types, as with Arrays.sort on inline types

I. parametric VM specializations:  an explicitly specialized method will be specialized to its relevant "specialization anchor"; see https://github.com/openjdk/valhalla-docs/blob/main/site/design-notes/parametric-vm/parametric-vm.md

J. specializations to ad hoc preconditions, as in JDK-8255024

K. re-compilation of heuristically selected "template-like" methods when the JVM detects that it would be profitable to recompile a whole method against a refined (subclass) receiver type (e.g., because the "template-like" method depends on values or types which the subclass detectably specializes)
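
To make use case C concrete, the lookup sketched below walks the proposed _next_code chain to find the version whose entry point matches a requested BCI. It continues the hypothetical classes sketched earlier in this description and assumes an invented entry_bci() accessor on nmethod, with a sentinel value for the normal (non-OSR) entry; none of this is actual HotSpot API.

    // Hypothetical version lookup over the _next_code chain.
    // NORMAL_ENTRY_BCI and entry_bci() are invented for this sketch.
    const int NORMAL_ENTRY_BCI = -1;

    nmethod* find_version_for_bci(const Method* m, int requested_bci) {
      for (nmethod* nm = m->code(); nm != nullptr; nm = nm->next_code()) {
        if (nm->entry_bci() == requested_bci) {
          return nm;  // requested_bci == NORMAL_ENTRY_BCI selects the normal entry
        }
      }
      return nullptr;  // no compiled version for this entry point yet
    }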

More background:

http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2013-October/012078.html
https://mail.openjdk.java.net/pipermail/panama-dev/2020-June/009488.html
Comments
I just ran into Dean's comment. I think concentrating the magic on the _code field works better, precisely because it does not spread the magic into other places. If we expose optimization transforms visibly by creating alternate API points, suddenly the interpreter and the rest of the dynamic toolchain must take our optimizations into account. That would include reflection, accidental overrides of user-written methods, and many other user-visible interactions. It's a mess to spread the magic around; optimizations should be secret!
05-09-2022

What if, instead of putting all the magic behind the _code field, we create specialized Methods with special signatures that describe the desired constraints? So for method(bool mode, int val) where mode == true, instead of the non-specialized "(ZI)" the Method would use a specialized signature "(ZI=true)"? That might work nicely when the caller is compiled and the constraint is always true, allowing us to specialize at link time. But how do we allow the interpreter to take advantage of specialized methods? We might want to have generated code that tests the constraint and then dispatches to the specialized version. This might look like an "i2c" stub, or maybe a JIT-generated method that uses a tail call.
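
A toy model of that dispatch idea, with invented types (nothing below is real HotSpot code): the generated adapter tests the recorded constraint and then hands control to either the specialized or the non-specialized version, much as an i2c-style stub or a tail-calling JIT-generated method might.

    // Toy model of constraint-checked dispatch; all names are invented.
    struct SpecializedVersion {
      bool (*constraint)(bool mode, int val);  // e.g. returns mode == true
      void (*entry)(bool mode, int val);       // code compiled under "(ZI=true)"
    };

    struct GenericVersion {
      void (*entry)(bool mode, int val);       // non-specialized "(ZI)" version
    };

    // What a generated dispatcher for the interpreter's benefit might do:
    void dispatch(const SpecializedVersion& spec, const GenericVersion& generic,
                  bool mode, int val) {
      if (spec.constraint(mode, val)) {
        spec.entry(mode, val);     // constraint holds: use the specialized code
      } else {
        generic.entry(mode, val);  // otherwise fall back to the generic code
      }
    }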
05-03-2022

One possible starting point: Suppose you have a boolean mode argument that’s super-important inside the method. Compile versions of the method for constant true (resp. constant false), when (a) a caller observes a true value, (b) the caller elects not to inline, and (c) somebody notices that “super-important” quality of the boolean inside the (non-inlined) method body. Then link the caller to the specialized method, rather than the non-specialized one (or the v-table slot). Anything done with a simple boolean as above can be done with something more complicated, and in particular with a reference argument, if the reference has a known concrete type (inferred or speculated or profiled). All this pairs well with argument profiling, since that could cause the caller to speculate about the mode argument, and then request a specialized method.
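
A toy model of that policy, with invented types and fields (not HotSpot code): the caller-side profile supplies condition (a), the inlining decision supplies (b), and a callee-body profile supplies (c); only when all three hold is the call site linked to a specialized version.

    // Toy model of the specialization policy sketched above; illustrative only.
    struct CallSiteProfile {
      long calls_with_mode_true  = 0;
      long calls_with_mode_false = 0;
      bool inlined               = false;   // (b) did the caller inline the callee?
    };

    struct CalleeBodyProfile {
      bool mode_is_super_important = false; // (c) mode dominates the method body
    };

    enum class LinkTarget { GENERIC, SPECIALIZED_TRUE, SPECIALIZED_FALSE };

    LinkTarget choose_link(const CallSiteProfile& site,
                           const CalleeBodyProfile& body) {
      if (!site.inlined && body.mode_is_super_important) {
        // (a) the caller has only ever observed one constant value of 'mode'.
        if (site.calls_with_mode_true  > 0 && site.calls_with_mode_false == 0)
          return LinkTarget::SPECIALIZED_TRUE;
        if (site.calls_with_mode_false > 0 && site.calls_with_mode_true  == 0)
          return LinkTarget::SPECIALIZED_FALSE;
      }
      return LinkTarget::GENERIC;
    }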
04-03-2022

Having a specialized version of the compiled code might be useful for "System Java" / Project Metropolis purposes. This is assuming the System Java implementation multiplexes on the same Method objects as non-System code, rather than having a solution that allows a 1:1 mapping, such as running in its own container/library/logical JVM, etc., or multiplexing at a different level (system dictionary, Klass, etc.). Some things that specialized code might do differently:
- fully inline, so no unexpected stack overflows (JEP 270)
- disable EA (JDK-8227309)
- code in catch/finally guaranteed to complete ("guaranteed cleanup")
- no deoptimization allowed (if for example "System Java" is all AOT-compiled w/o interpreter support)
- offer few or zero safepoints
- no "remote"/asynchronous exceptions
- no JFR/JVMTI events
- no debugger breakpoints
- special handling of object monitors
- in summary, no surprises, predictable execution comparable to C++
26-01-2022

Possible short-term exploration: Try to customize methods based on user-side profiles. Useful not as a full solution, but rather as a way to work out the implications of managing multiple versions of method code.
24-03-2021

Albert's last comment: JDK-8038356 seems to be related to Value Types, so maybe Roland can look into this.
02-03-2015