JDK-6824466 : (reflect) java.lang.reflect.Method should use java.lang.invoke.MethodHandle
  • Type: Enhancement
  • Component: core-libs
  • Sub-Component: java.lang:reflect
  • Affected Version: 7,8
  • Priority: P3
  • Status: In Progress
  • Resolution: Unresolved
  • OS: generic
  • CPU: generic
  • Submitted: 2009-03-31
  • Updated: 2019-02-21
  • Fix Version: tbd (Other, Unresolved)
Description
Java 7 has method handles as part of JSR 292 (bug 6655638).  Reflective method calls should be reimplemented on top of method handles, to provide a more direct call path to the target method.

With method handles there is no need to create adapter classes for calling methods.  The JVM can link directly to any method using a direct method handle.  This involves no class loading.
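For illustration (this is not the JDK's internal implementation, just a minimal sketch of the idea), linking and invoking a direct method handle looks like this; the JVM links straight to the target method with no generated adapter class:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class DirectHandleDemo {
    public static int twice(int x) { return x * 2; }

    // Link a direct method handle to 'twice' and invoke it.
    // No adapter class is generated or loaded for this call path.
    public static int callViaHandle(int x) {
        try {
            MethodHandle mh = MethodHandles.lookup().findStatic(
                    DirectHandleDemo.class, "twice",
                    MethodType.methodType(int.class, int.class));
            return (int) mh.invokeExact(x);
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(callViaHandle(21)); // prints 42
    }
}
```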

Comments
First of all, thanks Peter for working on this! Regarding the problem with @CallerSensitive methods mentioned in 2013, did you ever get that one figured out?

Let's consider NativeMethodAccessorImpl and MethodAccessorGenerator separately as candidates for replacement. I was hoping your MHMethodAccessor could be a drop-in replacement for the accessors created by MethodAccessorGenerator, but your benchmarks show a slight slowdown on the *Var variants. Any idea why?

As a replacement for NativeMethodAccessorImpl, I was afraid that we wouldn't be able to get the same startup latency with MethodHandles, and your numbers seem to confirm that. I have some ideas for an alternative way to do it, but I don't think I have time to investigate it for 12. For the curious, here's a link to how we did it in a previous project: https://github.com/AllBinary/phoneme-components-cdc/blob/master/src/share/javavm/classes/java/lang/reflect/Method.java#L291 but I would do it a little differently for HotSpot. Basically, the pre-checks, exception handling/wrapping, and boxing of the return value would be done in the caller, while the invoke part would use something like MH.invokeWithArguments in native or generated code, either allowing a tail call that removes the native frame, or using a frameless adapter, which has its own challenges if it needs to push arguments on the stack.

For 12 and Loom, I suggest leaving NativeMethodAccessorImpl alone and replacing just MethodAccessorGenerator with MHMethodAccessor (assuming no performance regression), plus having a way to disable NativeMethodAccessorImpl for Loom, perhaps with something like -Dsun.reflect.noNative.
08-10-2018

As it turns out, it is not trivial to implement a LowLatencyMHMethodAccessor that complies with the Method.invoke() specification regarding thrown exceptions, short of re-creating in code the logic to pre-check all arguments passed to Method.invoke and throw any specified exceptions even before invoking the direct MH with MethodHandle.invokeWithArguments(). Only in that case can any exceptions thrown be attributed to the invoked method itself rather than to argument conversion logic, so they can be safely wrapped with InvocationTargetException.

Should such a LowLatencyMHMethodAccessor be created and placed into service instead of NativeMethodAccessor, what could be hoped for in terms of cold-start latency? For a start I didn't want to bother with the argument pre-check logic and simply benchmarked the cold-start latency of creating and invoking a direct method handle via invokeWithArguments(), comparing it to the cold-start latency of Method.invoke (i.e. NativeMethodAccessor). In this code, all methods are different and invoked for the 1st time in @BenchmarkMode(Mode.SingleShotTime):

    @Benchmark
    public void invokeMethods(Blackhole bh) throws ReflectiveOperationException {
        for (Method m : methods) {
            bh.consume(m.invoke(null, arg));
        }
    }

    @Benchmark
    public void invokeMHs(Blackhole bh) throws Throwable {
        for (Method m : methods) {
            MethodHandle mh = lookup.unreflect(m);
            bh.consume(mh.invokeWithArguments(arg));
        }
    }

The results are:

    Benchmark                                   Mode  Cnt     Score     Error  Units
    ReflectionColdstartBenchmark.invokeMHs        ss   10  4045.261 ± 379.912  us/op
    ReflectionColdstartBenchmark.invokeMethods    ss   10   263.001 ±  48.815  us/op

Which means that such a LowLatencyMHMethodAccessor would more accurately be called HighLatencyMHMethodAccessor compared to the MHMethodAccessor benchmarked in previous attempts above. Where does this leave us? Any ideas?
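The pre-check idea described above could be sketched like this (a hypothetical illustration: PreCheckInvoker is not real JDK code, and only a null-receiver and argument-count check are shown, while a compliant implementation would also pre-check argument types and access, which is exactly the hard part):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class PreCheckInvoker {
    // Validate receiver and arguments up front, so that anything thrown
    // by the subsequent MH invocation can be attributed to the target
    // method and safely wrapped in InvocationTargetException.
    public static Object invoke(Method m, MethodHandle mh,
                                Object target, Object... args)
            throws InvocationTargetException {
        boolean isStatic = Modifier.isStatic(m.getModifiers());
        if (!isStatic && target == null) {
            throw new NullPointerException();       // per Method.invoke spec
        }
        if (args.length != m.getParameterCount()) {
            throw new IllegalArgumentException("wrong number of arguments");
        }
        // NOTE: argument *type* mismatches are not pre-checked here, so a
        // ClassCastException from conversion would be mis-wrapped below.
        try {
            return isStatic
                    ? mh.invokeWithArguments(args)
                    : mh.bindTo(target).invokeWithArguments(args);
        } catch (Throwable t) {
            throw new InvocationTargetException(t);
        }
    }

    static int half(int x) { return x / 2; }

    // Small self-check: invoke the static 'half' method reflectively.
    public static int demo() {
        try {
            Method m = PreCheckInvoker.class.getDeclaredMethod("half", int.class);
            MethodHandle mh = MethodHandles.lookup().unreflect(m);
            return (Integer) invoke(m, mh, null, 8);
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }
}
```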
08-10-2018

For comparison, here's JDK 11, -Dsun.reflect.noInflation=true:

    Benchmark                                         Mode  Cnt     Score     Error  Units
    ReflectionSpeedBenchmark.instanceDirect           avgt   10     2.369 ±   0.005  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveConst  avgt   10    14.946 ±   0.091  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveVar    avgt   10    15.171 ±   0.163  ns/op
    ReflectionSpeedBenchmark.staticDirect             avgt   10     2.366 ±   0.009  ns/op
    ReflectionSpeedBenchmark.staticReflectiveConst    avgt   10    15.367 ±   0.069  ns/op
    ReflectionSpeedBenchmark.staticReflectiveVar      avgt   10    15.626 ±   0.154  ns/op
    ReflectionColdstartBenchmark.invokeMethods          ss   10  7896.686 ± 989.307  us/op

...which means that MHMethodAccessor is still 2.5x faster than the generated method accessor on startup, but unfortunately 11x slower than the native method accessor.
04-10-2018

Here are the results of benchmarks:

JDK 11:

    Benchmark                                         Mode  Cnt    Score    Error  Units
    ReflectionSpeedBenchmark.instanceDirect           avgt   10    2.496 ±  0.008  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveConst  avgt   10   15.547 ±  0.095  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveVar    avgt   10   15.908 ±  0.039  ns/op
    ReflectionSpeedBenchmark.staticDirect             avgt   10    2.396 ±  0.013  ns/op
    ReflectionSpeedBenchmark.staticReflectiveConst    avgt   10   15.989 ±  0.089  ns/op
    ReflectionSpeedBenchmark.staticReflectiveVar      avgt   10   15.933 ±  0.182  ns/op
    ReflectionColdstartBenchmark.invokeMethods          ss   10  252.256 ± 34.249  us/op

JDK 12 patched, -Djdk.useMethodHandlesForReflection=false:

    Benchmark                                         Mode  Cnt    Score    Error  Units
    ReflectionSpeedBenchmark.instanceDirect           avgt   10    2.392 ±  0.037  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveConst  avgt   10   15.079 ±  0.072  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveVar    avgt   10   15.940 ±  0.111  ns/op
    ReflectionSpeedBenchmark.staticDirect             avgt   10    2.378 ±  0.032  ns/op
    ReflectionSpeedBenchmark.staticReflectiveConst    avgt   10   15.177 ±  0.121  ns/op
    ReflectionSpeedBenchmark.staticReflectiveVar      avgt   10   15.982 ±  0.086  ns/op
    ReflectionColdstartBenchmark.invokeMethods          ss   10  253.022 ± 30.055  us/op

JDK 12 patched, -Djdk.useMethodHandlesForReflection=true:

    Benchmark                                         Mode  Cnt     Score     Error  Units
    ReflectionSpeedBenchmark.instanceDirect           avgt   10     2.389 ±   0.032  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveConst  avgt   10     8.994 ±   0.027  ns/op
    ReflectionSpeedBenchmark.instanceReflectiveVar    avgt   10    17.114 ±   0.134  ns/op
    ReflectionSpeedBenchmark.staticDirect             avgt   10     2.404 ±   0.016  ns/op
    ReflectionSpeedBenchmark.staticReflectiveConst    avgt   10     8.676 ±   0.088  ns/op
    ReflectionSpeedBenchmark.staticReflectiveVar      avgt   10    16.676 ±   0.180  ns/op
    ReflectionColdstartBenchmark.invokeMethods          ss   10  2885.284 ± 828.736  us/op

Benchmarks are here: http://cr.openjdk.java.net/~plevart/jdk-dev/6824466_MHReflectionAccessors/

It seems that I'll have to try constructing LowLatencyMH(Method|Constructor)Accessor as Dean Long suggested...
04-10-2018

Here's a preview: http://cr.openjdk.java.net/~plevart/jdk-dev/6824466_MHReflectionAccessors/webrev.00.2/ Benchmarks follow shortly... (Preliminary results show that the final invocation overhead is comparable, in some situations even almost half that of the generated accessors, thanks to @Stable tricks. I'm curious about the startup latency...)
04-10-2018

Ok, starting work on this.
01-10-2018

Yes, I believe NativeMethodAccessor is the only one that gives Loom trouble, because of the native frame on the stack. So for Loom, I imagine NativeMethodAccessor being replaced with MHMethodAccessor, and keeping the delegation to the bytecode-generated MethodAccessor as long as it's faster. It's not obvious to me why MHMethodAccessor can't be just as fast. That would be an interesting investigation, and might lead to making the bytecode-generated MethodAccessor obsolete.

I'm worried about the startup latency impact of MHMethodAccessor, however. Did you measure the cost of creating and transforming the MethodHandle in the worst case, where a new MH is needed for each invoke? That's one reason why I was asking about doing less work in the MethodHandle and more in static code, to allow us to create the MethodHandle faster. If there is a startup slowdown in JDK 12 with MHMethodAccessor, then I think it would need to be configurable so that JDK 12 can still use NativeMethodAccessor. Or have a tiered approach: start with LowLatencyMHMethodAccessor and then switch to HighPerformanceMHMethodAccessor after enough invokes, like we do now in the delegating MethodAccessor.
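The tiered approach described above could be sketched like this (all names are hypothetical; the real mechanism lives in the JDK-internal delegating accessor, and a real implementation would need thread-safe publication of the delegate):

```java
// A delegating accessor that starts with a cheap-to-create implementation
// and "inflates" to a faster one after enough invocations, analogous to
// the existing delegating MethodAccessor.
interface Accessor {
    Object invoke(Object target, Object[] args);
}

public class TieredAccessor implements Accessor {
    private static final int INFLATION_THRESHOLD = 15;

    private final java.util.function.Supplier<Accessor> fastFactory;
    private Accessor delegate;   // starts as the low-latency accessor
    private int count;

    public TieredAccessor(Accessor slow,
                          java.util.function.Supplier<Accessor> fastFactory) {
        this.delegate = slow;
        this.fastFactory = fastFactory;
    }

    @Override
    public Object invoke(Object target, Object[] args) {
        if (++count == INFLATION_THRESHOLD) {
            delegate = fastFactory.get();   // switch to the fast accessor
        }
        return delegate.invoke(target, args);
    }
}
```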
28-09-2018

Yes, I would be interested in bringing this to life. So in what form would this be brought to JDK 12? As an optional MethodAccessor selected with a system property? As the default MethodAccessor that replaces the existing accessors? I suspect only NativeMethodAccessor is problematic for Loom. The FieldAccessor(s) use Unsafe, which should be OK?
28-09-2018

[~plevart] Peter, it turns out this feature would help Project Loom. Would you be interested in finishing the work and pushing the result to JDK 12? If so, I can assign this to you. If not, I may give it a try.
27-09-2018

Good point about specialization and performance. Thanks Peter.
27-09-2018

Perhaps some of that NPE handling logic could be extracted from the MH transformations into the MHMethodAccessor.invoke method itself (the NPE could conditionally be wrapped with InvocationTargetException only when the target is null and the method is an instance method, or perhaps the null-target/instance-method case could use a pre-check and throw NPE before invoking the MH), but performance-wise it is better to specialize code with the MH transformations. The VM can do a lot of optimization (constant folding) in the following scenario:
- the Method object is assigned to a static final field (a common use case)
- the MethodAccessor is assigned to a @Stable field in the Method object
- the resulting transformed MethodHandle is assigned to a @Stable field in MHMethodAccessor
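The application-visible part of that constant-folding pattern might look like this (a minimal sketch: @Stable itself is JDK-internal and not available to application code, so only the static-final root is shown):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Method;

public class ConstantFoldDemo {
    // The Method (and here, the unreflected MethodHandle) is a
    // compile-time constant root, so the JIT can fold through it
    // to a direct call to the underlying method.
    static final Method PARSE_INT;
    static final MethodHandle PARSE_INT_MH;
    static {
        try {
            PARSE_INT = Integer.class.getMethod("parseInt", String.class);
            PARSE_INT_MH = MethodHandles.lookup().unreflect(PARSE_INT);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static int parse(String s) {
        try {
            return (int) PARSE_INT_MH.invokeExact(s);
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }
}
```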
27-09-2018

When creating this, I tried to comply with what the other MethodAccessor implementations do, so that MHMethodAccessor could be used as a drop-in replacement (perhaps enabled with a system property). That's one reason why everything is done in the MethodAccessor. Another reason for handling exceptions with MethodHandle transformations is the necessity of distinguishing exceptions thrown because of inappropriate use of the MethodHandle (wrong number or type of arguments, etc.), which must be transformed to the appropriate exception types defined by the Method.invoke contract, from exceptions thrown by the invoked target method itself, which must be wrapped with InvocationTargetException. If this MethodHandle-transformation exception handling were not done, and the following invocation in MHMethodAccessor:

    return mh.invokeExact(target, args);

...threw, for example, a NullPointerException, you could not decide whether it was thrown by the invoked target method itself (and should be wrapped with InvocationTargetException) or, for example, by the null "target" pointer (which must not be wrapped).
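The wrapping side of that transformation chain could be sketched with MethodHandles.catchException (a hypothetical illustration, not the actual prototype; a real accessor would first filter out the exceptions caused by misuse of the handle, as discussed above):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.InvocationTargetException;

public class WrappingHandle {
    static String fail() { throw new IllegalStateException("boom"); }

    // Handler: rethrow whatever the target threw, wrapped in ITE.
    static Object rethrowWrapped(Throwable t) throws InvocationTargetException {
        throw new InvocationTargetException(t);
    }

    // Transform 'target' so that exceptions it throws come out wrapped
    // in InvocationTargetException.
    public static MethodHandle wrap(MethodHandle target) throws Exception {
        MethodHandle handler = MethodHandles.lookup().findStatic(
                WrappingHandle.class, "rethrowWrapped",
                MethodType.methodType(Object.class, Throwable.class));
        return MethodHandles.catchException(
                target.asType(target.type().changeReturnType(Object.class)),
                Throwable.class,
                handler);
    }

    // Self-check: true if the target's exception comes out wrapped.
    public static boolean selfCheck() {
        try {
            MethodHandle failMh = MethodHandles.lookup().findStatic(
                    WrappingHandle.class, "fail",
                    MethodType.methodType(String.class));
            wrap(failMh).invoke();
            return false;   // should not be reached
        } catch (InvocationTargetException e) {
            return e.getCause() instanceof IllegalStateException;
        } catch (Throwable t) {
            return false;
        }
    }
}
```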
27-09-2018

Is there a reason why some of the exception throwing, catching, and wrapping can't be done in Method.invoke instead of in the MethodAccessor? It seems like that could simplify things a bit.
26-09-2018

Thanks Alan, I don't know how I missed that :-)
24-09-2018

Method.invoke does the access check (with a single-entry cache) before it invokes the method with the MethodAccessor, so it should be okay to cache the MH.
24-09-2018

This idea may be useful to Project Loom, where yielding with native frames on the stack is troublesome. Looking at the PoC, it appears to be doing the permissions check lookup only once and caching the resulting MH. My understanding is that we need to be prepared to do the permissions check on every invoke in the worst case. An example of this is jdk.dynalink.beans.CallerSensitiveDynamicMethod. For performance we may be able to cache both the caller and the MH and then avoid the permissions check if the caller matches the cached value, or some other clever trick.
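The caller-plus-MH caching trick mentioned above might be sketched like this (purely illustrative: CallerCachingAccessor and its resolver are hypothetical names, and a real implementation would need to be thread-safe):

```java
import java.lang.invoke.MethodHandle;

public class CallerCachingAccessor {
    private Class<?> cachedCaller;
    private MethodHandle cachedHandle;
    // The resolver stands in for the expensive per-caller work
    // (permissions check and MH creation).
    private final java.util.function.Function<Class<?>, MethodHandle> resolver;

    public CallerCachingAccessor(
            java.util.function.Function<Class<?>, MethodHandle> resolver) {
        this.resolver = resolver;
    }

    public MethodHandle handleFor(Class<?> caller) {
        if (caller != cachedCaller) {
            // Worst case: a caller-sensitive method needs the permissions
            // check (and a fresh MH) for each distinct caller.
            cachedHandle = resolver.apply(caller);
            cachedCaller = caller;
        }
        return cachedHandle;   // cache hit: skip the check entirely
    }
}
```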
24-09-2018

java.lang.reflect.Field could also be updated to use java.lang.invoke.MethodHandle. I have attached a simple jmh-based micro-benchmark comparing field set/get across various approaches, which when run on my Mac produces the following results:

    Benchmark                                    Mode  Thr  Cnt  Sec   Mean  Mean error  Units
    u.UnvolatileSetGetTest.invokeExact           avgt    1   20    0  0.003       0.000  us/op
    u.UnvolatileSetGetTest.invokeOnSubType       avgt    1   20    0  0.011       0.000  us/op
    u.UnvolatileSetGetTest.put_getfield_Private  avgt    1   20    0  0.003       0.000  us/op
    u.UnvolatileSetGetTest.put_getfield_Public   avgt    1   20    0  0.003       0.000  us/op
    u.UnvolatileSetGetTest.reflection            avgt    1   20    0  0.011       0.000  us/op
    u.UnvolatileSetGetTest.unsafe                avgt    1   20    0  0.003       0.000  us/op
    u.VolatileSetGetTest.invokeExact             avgt    1   20    0  0.008       0.000  us/op
    u.VolatileSetGetTest.invokeOnSubType         avgt    1   20    0  0.014       0.000  us/op
    u.VolatileSetGetTest.put_getfield_Private    avgt    1   20    0  0.008       0.000  us/op
    u.VolatileSetGetTest.put_getfield_Public     avgt    1   20    0  0.008       0.000  us/op
    u.VolatileSetGetTest.reflection              avgt    1   20    0  0.013       0.000  us/op
    u.VolatileSetGetTest.unsafe                  avgt    1   20    0  0.008       0.000  us/op

So, when the JIT does its thing and inlines MH.invokeExact, the performance of MH.invokeExact setting/getting a field is comparable to that of using sun.misc.Unsafe directly.
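A minimal sketch of the idea (not the benchmark code itself): obtaining getter/setter method handles for a field via Lookup.unreflectGetter/unreflectSetter, which the JIT can inline like a direct field access:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.reflect.Field;

public class FieldHandleDemo {
    int value;

    // Set a field through a MethodHandle, then read it back.
    public static int roundTrip(int v) {
        try {
            Field f = FieldHandleDemo.class.getDeclaredField("value");
            MethodHandle setter = MethodHandles.lookup().unreflectSetter(f);
            MethodHandle getter = MethodHandles.lookup().unreflectGetter(f);
            FieldHandleDemo d = new FieldHandleDemo();
            setter.invokeExact(d, v);            // like d.value = v
            return (int) getter.invokeExact(d);  // like d.value
        } catch (Throwable t) {
            throw new AssertionError(t);
        }
    }
}
```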
29-11-2013

That's a nice proof-of-concept, thank you! I would like to handle @CS methods by creating a pair of entry points for each @CS method, one as today, and a private one with the Class argument explicitly reified (as a trailing parameter). Then we can get rid of stack walking completely, by using appropriate linker tricks to resolve calls to the public method by calling the private entry point, with an appended argument supplied by the linker, for each distinct call site. This trick would give a good hook for method handles and old reflection, also. The extra entry point should be explicitly in the source and byte code, as a cost of doing business with @CS methods.
12-11-2013

I tried to tackle this. Here's a prototype sun.reflect.MHMethodAccessor that I came up with:
http://cr.openjdk.java.net/~plevart/jdk8-tl/MHMethodAccessor/sun/reflect/MHMethodAccessor.java

Here's a test that checks compliance with the Method.invoke() specification:
http://cr.openjdk.java.net/~plevart/jdk8-tl/MHMethodAccessor/MHMATest.java

Some performance testing I did shows that although it is a little slower than the bytecode-generated method accessor, it is comparable in performance (only about 1.5-2x slower). It is about 30x faster than NativeMethodAccessor though.

The unsolved problem with this method accessor is that it doesn't behave correctly when used for invoking @CallerSensitive methods. Reflection.getCallerClass() in such methods does not return the Class of the MHMethodAccessor invoker, but some JDK-internal class. I haven't yet studied the Reflection.getCallerClass() implementation to see how it "skips" reflection frames... It could be used as-is for methods declared in VM-anonymous classes, since they are usually not @CallerSensitive.
11-11-2013

This change would address a possible performance regression introduced by the fix for JDK-7194897. See: http://mail.openjdk.java.net/pipermail/core-libs-dev/2013-November/022859.html
04-11-2013