======== Results (absolute); warmups: 5; measurements: 10; iterations/run: 1000000; micro iterations: 5
### TRACE 1: Direct call: 69.2 ns (stddev: 2.4 = 3%) // 20.6 times FASTER than Reflection API Method.invoke()
### TRACE 1: Reflection API Method.invoke(): 1423.3 ns (stddev: 422.7 = 29%)
### TRACE 1: MH.invokeExact(): 96.2 ns (stddev: 7.5 = 7%) // 14.8 times FASTER than Reflection API Method.invoke()
### TRACE 1: MH.invoke(): 94.2 ns (stddev: 6.8 = 7%) // 15.1 times FASTER than Reflection API Method.invoke()
### TRACE 1: invokedynamic instruction: 68.6 ns (stddev: 2.7 = 3%) // 20.7 times FASTER than Reflection API Method.invoke()
### TRACE 1:
======== Conclusions
### TRACE 1: Comparing invocation time orders
#>
#> WARNING: switching log to verbose mode,
#> because error is complained
#>
# ERROR: Test marked failed at vm.mlvm.mixed.stress.regression.b6969574.INDIFY_Test.verifyTimeOrder(INDIFY_Test.java:297):
# ERROR: Reflection API Method.invoke() invocation time order (1423.3 ns) is greater than of MH.invokeExact()(96.2 ns)!
The following stacktrace is for failure analysis.
nsk.share.TestFailure: Test marked failed at vm.mlvm.mixed.stress.regression.b6969574.INDIFY_Test.verifyTimeOrder(INDIFY_Test.java:297): Reflection API Method.invoke() invocation time order (1423.3 ns) is greater than of MH.invokeExact()(96.2 ns)!
The failures seem to show up when running with -Xcomp.
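
For context, the four mechanisms being timed above can be illustrated with a minimal standalone sketch (this is not the test's own harness; the class, target method, and values below are hypothetical, chosen only to show the call-site shapes of Method.invoke(), MH.invokeExact(), and MH.invoke()):

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

public class InvokeComparison {
    // Hypothetical callee standing in for the test's target method.
    public static int target(int x) {
        return x + 1;
    }

    public static void main(String[] args) throws Throwable {
        // Reflection API: Method.invoke() boxes the argument and the return value.
        Method m = InvokeComparison.class.getMethod("target", int.class);
        int viaReflection = (Integer) m.invoke(null, 41);

        // MethodHandle: invokeExact() requires the call-site type to match exactly.
        MethodHandle mh = MethodHandles.lookup().findStatic(
                InvokeComparison.class, "target",
                MethodType.methodType(int.class, int.class));
        int viaInvokeExact = (int) mh.invokeExact(41);

        // MethodHandle: invoke() adapts via asType(), so boxing conversions are allowed.
        Integer viaInvoke = (Integer) mh.invoke(41);

        System.out.println(viaReflection + " " + viaInvokeExact + " " + viaInvoke);
    }
}

A direct call and an invokedynamic instruction bound to the same target would be the remaining two columns of the comparison; the test asserts that the slower reflective path never undercuts the MethodHandle paths by more than the allowed order, which is the check that fails here under -Xcomp.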