JDK-8192992 : Test8007294.java failed: attempted to spill a non-spillable item
  • Type: Bug
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 9,10
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: linux
  • CPU: x86_64
  • Submitted: 2017-12-04
  • Updated: 2019-09-13
  • Resolved: 2018-05-23
JDK 11
11 b15 (Fixed)
Sub Tasks
JDK-8193840
Description
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (/workspace/open/src/hotspot/share/opto/coalesce.cpp:298), pid=17394, tid=17419
#  assert(false) failed: attempted to spill a non-spillable item: 594: testN_mem_reg0, ireg = 13, spill_type: PhiInputSpillCopy
#
# JRE version: Java(TM) SE Runtime Environment (10.0) (fastdebug build 10-internal+0-jdk10-hs.401)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (fastdebug 10-internal+0-jdk10-hs.401, mixed mode, compressed oops, g1 gc, linux-amd64)
#

---------------  S U M M A R Y ------------

Command Line: -Dtest.class.path.prefix=/testoutput/jtreg/JTwork/classes/0/compiler/c2/Test8007294.d:/scratch/opt/mach5/mesos/work_dir/jib-master/install/jdk10-hs.401/src.full/open/test/hotspot/jtreg/compiler/c2 -Dtest.src=/scratch/opt/mach5/mesos/work_dir/jib-master/install/jdk10-hs.401/src.full/open/test/hotspot/jtreg/compiler/c2 -Dtest.src.path=/scratch/opt/mach5/mesos/work_dir/jib-master/install/jdk10-hs.401/src.full/open/test/hotspot/jtreg/compiler/c2 -Dtest.classes=/testoutput/jtreg/JTwork/classes/0/compiler/c2/Test8007294.d -Dtest.class.path=/testoutput/jtreg/JTwork/classes/0/compiler/c2/Test8007294.d -Dtest.vm.opts=-XX:MaxRAMPercentage=8 -Dtest.tool.vm.opts=-J-XX:MaxRAMPercentage=8 -Dtest.compiler.opts= -Dtest.java.opts=-XX:+CreateCoredumpOnCrash -ea -esa -XX:CompileThreshold=100 -XX:+UnlockExperimentalVMOptions -server -XX:-TieredCompilation -Dtest.jdk=/scratch/opt/mach5/mesos/work_dir/jib-master/install/jdk10-hs.401/linux-x64-debug.jdk/jdk-10/fastdebug -Dcompile.jdk=/scratch/opt/mach5/mesos/work_dir/jib-master/install/jdk10-hs.401/linux-x64-debug.jdk/jdk-10/fastdebug -Dtest.timeout.factor=10.0 -Dtest.nativepath=/scratch/opt/mach5/mesos/work_dir/jib-master/install/jdk10-hs.401/linux-x64-debug.test/hotspot/jtreg/native -XX:MaxRAMPercentage=8 -XX:+CreateCoredumpOnCrash -ea -esa -XX:CompileThreshold=100 -XX:+UnlockExperimentalVMOptions -XX:-TieredCompilation -Djava.library.path=/scratch/opt/mach5/mesos/work_dir/jib-master/install/jdk10-hs.401/linux-x64-debug.test/hotspot/jtreg/native -XX:+IgnoreUnrecognizedVMOptions -XX:+AlwaysIncrementalInline -XX:-UseOnStackReplacement -XX:-BackgroundCompilation com.sun.javatest.regtest.agent.MainWrapper /testoutput/jtreg/JTwork/compiler/c2/Test8007294.d/main.0.jta

---------------  T H R E A D  ---------------

Current thread (0x00007f40783a0800):  JavaThread "C2 CompilerThread1" daemon [_thread_in_native, id=17419, stack(0x00007f4050b4f000,0x00007f4050c50000)]


Current CompileTask:
C2:   3988  538    b        jdk.internal.loader.URLClassPath::getResource (74 bytes)

Stack: [0x00007f4050b4f000,0x00007f4050c50000],  sp=0x00007f4050c4ae00,  free space=1007k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x182f8b2]  VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x162
V  [libjvm.so+0x183067f]  VMError::report_and_die(Thread*, char const*, int, char const*, char const*, __va_list_tag*)+0x2f
V  [libjvm.so+0xb3868d]  report_vm_error(char const*, int, char const*, char const*, ...)+0xdd
V  [libjvm.so+0xa31375]  PhaseAggressiveCoalesce::insert_copies(Matcher&)+0x2805
V  [libjvm.so+0x909dc4]  PhaseChaitin::Register_Allocate()+0x2c4
V  [libjvm.so+0xa9c829]  Compile::Code_Gen()+0x3a9
V  [libjvm.so+0xaa00ba]  Compile::Compile(ciEnv*, C2Compiler*, ciMethod*, int, bool, bool, bool, DirectiveSet*)+0x130a
V  [libjvm.so+0x8b87c2]  C2Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*)+0x2e2
V  [libjvm.so+0xaaa9fe]  CompileBroker::invoke_compiler_on_method(CompileTask*)+0x38e
V  [libjvm.so+0xaab5f1]  CompileBroker::compiler_thread_loop()+0x241
V  [libjvm.so+0x178629a]  JavaThread::thread_main_inner()+0x21a
V  [libjvm.so+0x17864da]  JavaThread::run()+0x17a
V  [libjvm.so+0x14bad4a]  thread_native_entry(Thread*)+0xfa
C  [libpthread.so.0+0x7dc5]  start_thread+0xc5

Comments
After taking a very close look I found that the anti-dependency checking that hoists the testN_mem_reg away from the jmpCon is broken, and that the hoisting is unnecessary. So this is not a case where we need anti-dependency checking for loads before matching. Generally insert_anti_dependences looks good, except the store->is_Phi() clause, which is full of holes (overly conservative). I don't think I fully understand how the graph looks when the clause is needed, but it tries to find stores upwards that are otherwise unreachable from the downward memory flow search. I found these three flaws:
1) A Phi in a block that is preceded by a store will force the testN up, even though the store is dominated by the load's LCA! We don't check where the stores are located.
2) A Phi that consumes the same memory as the load may force it up, even though no stores are involved.
3) A Phi that consumes a MergeMem, which has itself already been processed and dismissed as irrelevant, may force the testN up.
One could add that any predecessor of the Phi would have to be a store/call to affect the load placement.
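The refinement suggested in the last sentence can be sketched as follows. This is a hypothetical, heavily simplified model, not the actual insert_anti_dependences() code from gcm.cpp; the types and function names (NodeKind, phi_raises_load_conservative, phi_raises_load_refined) are invented for illustration.

```cpp
#include <cassert>
#include <vector>

// Toy memory-graph node: only the node kind and its memory inputs matter here.
enum NodeKind { kProj, kStore, kCall, kPhi };

struct Node {
  NodeKind kind;
  std::vector<const Node*> mem_inputs;  // memory inputs only
};

// Conservative clause (as described above): every memory Phi on the load's
// slice raises the load's LCA, without looking at what feeds the Phi.
bool phi_raises_load_conservative(const Node& n) {
  return n.kind == kPhi;
}

// Suggested refinement: only raise the load if some predecessor of the Phi
// is a store or call that could actually overwrite the loaded memory.
bool phi_raises_load_refined(const Node& n) {
  if (n.kind != kPhi) return false;
  for (const Node* in : n.mem_inputs) {
    if (in->kind == kStore || in->kind == kCall) return true;
  }
  return false;
}
```

Under this model, a Phi that merely merges the same memory state on both paths (flaw 2 above) no longer forces the load up, while a Phi actually fed by a store still does.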
14-03-2018

http://cr.openjdk.java.net/~neliasso/8192992/webrev.02/
13-03-2018

Attached replay file that works with recent builds:
java -XX:+ReplayCompiles -XX:ReplayDataFile=replay_pid1829.log -XX:+ReplayIgnoreInitErrors -XX:-TieredCompilation -XX:+AlwaysIncrementalInline
06-03-2018

I find the bug very hard to reproduce. Using the replay files from the latest failures works well on recent builds (the first replay files don't work in new builds).
05-03-2018

http://cr.openjdk.java.net/~neliasso/8192992/webrev.01/
05-03-2018

Okay, let's defer this to JDK 11 then. It would be good if we could create a regression test.
11-01-2018

Thanks Roland, I implemented your suggestion but had trouble reproducing the issue. The code in the JDK that causes this was removed Dec 5 by [~redestad] (this bug was opened Dec 4th):
changeset: 48194:6c4bdbf90897
user: redestad
date: Tue Dec 05 22:26:17 2017 +0100
summary: 8193064: JarFile::getEntry0 method reference use cause for startup regression
So this isn't even a P3 anymore. I will fix this but suggest targeting 11.
10-01-2018

What about bailing out with C2Compiler::retry_no_subsuming_loads() and re-attempting compilation? See clone_node() in reg_split.cpp for instance.
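The retry pattern being suggested could look roughly like the following. This is a hedged sketch only: the real mechanism is C2's compilation-failure reason plus the broker's re-attempt logic, and everything here (CompileAttempt, compile_once, compile_with_retry) is an invented stand-in, not the C2Compiler/CompileBroker API.

```cpp
#include <cassert>
#include <string>

// A first compilation attempt with subsuming loads enabled may match a load
// directly into a test (producing testN_mem_reg0); if register allocation
// then hits the non-spillable flags value, the compiler records a failure
// reason and the whole method is recompiled with subsuming loads disabled.
struct CompileAttempt {
  bool subsume_loads = true;
  bool failed = false;
  std::string failure_reason;
};

// Simulated single compilation: pretend the non-spillable situation only
// arises when loads are subsumed into the compare.
void compile_once(CompileAttempt& c) {
  if (c.subsume_loads) {
    c.failed = true;
    c.failure_reason = "retry without subsuming loads";
  } else {
    c.failed = false;
    c.failure_reason.clear();
  }
}

// Driver: one retry with subsume_loads off if the bailout asks for it.
bool compile_with_retry(CompileAttempt& c) {
  compile_once(c);
  if (c.failed && c.failure_reason == "retry without subsuming loads") {
    c.subsume_loads = false;  // second attempt keeps load and test separate
    compile_once(c);
  }
  return !c.failed;
}
```

The appeal of this approach is that it turns a fastdebug assert into a recoverable bailout without needing to fix the anti-dependence analysis itself.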
09-01-2018

Sounds like JDK-8173709 but shouldn't verify_compare only be executed if VerifyLoopOptimizations is set?
08-01-2018

I can add that adding -XX:+TraceLoopOpts makes the compilation crash in verify_compare.
04-01-2018

[~dholmes], thanks for putting the test on the problem list! The main reason why this bug reproduces with that test is that it uses -XX:+AlwaysIncrementalInline. We see the same failures with some of our MVT tests that make heavy use of -XX:+AlwaysIncrementalInline (see JDK-8193882). [~neliasso], yes, making anti-dependency detection smarter will help but we should be careful not to introduce other problems. I agree that we can lower the priority because the bug does not crash the VM in product builds:
Updated ILW = Crash during compilation in register allocator (bailout in product builds), reproducible and happens when compiling a common method but seems to be specific to -XX:+AlwaysIncrementalInline, disable compilation or use -XX:-SplitIfWithBlocks = MMM = P3
03-01-2018

I have been reading through your thorough explanation and have reproduced it. I lean towards the alternative of making the anti-dependency detection smarter. I think we can lower the priority of this one to P3:
1) It is really hard to reproduce without the replay file. Running the test I can make it compile, but I have never gotten it to fail, even when using all the relevant flags.
2) We also know that we handle the problem gracefully in product builds.
3) There are known workarounds.
22-12-2017

This test should be problem-listed if it can't be fixed promptly - and we're already beyond promptly. We really want to see no failures in tier2. Thanks.
19-12-2017

To reproduce the crash, the compile statement of the replay file can be replaced by the following command to limit inlining and thus simplify the graph:
compile jdk/internal/loader/URLClassPath getResource (Ljava/lang/String;Z)Ljdk/internal/loader/Resource; -1 4 inline 15 1 53 jdk/internal/loader/URLClassPath$JarLoader getResource (Ljava/lang/String;Z)Ljdk/internal/loader/Resource; 2 22 java/util/jar/JarFile getJarEntry (Ljava/lang/String;)Ljava/util/jar/JarEntry; 3 2 java/util/jar/JarFile getEntry (Ljava/lang/String;)Ljava/util/zip/ZipEntry; 4 7 java/util/jar/JarFile isMultiRelease ()Z
The crash happens because a testN_mem_reg0 (CmpN(LoadN(mem), NULL)) is scheduled in a different block than its jmpCon user and the register allocator tries to spill the flag register. The problem is that PhaseCFG::schedule_late() detects an anti-dependency for the testN_mem_reg0 on a bottom memory Phi and therefore raises the LCA to the early block (see PhaseCFG::insert_anti_dependences()) which is "far away" from its jmpCon user. I've traced the LoadN back through the C2 optimization phases.
After parsing, the relevant graph looks like this:
624 Proj === 621 [[ 362 569 379 672 661 669 592 ]] #2 Memory: @BotPTR *+bot, idx=Bot; !orig=[649] !jvms: JarFile::isMultiRelease @ bci:16 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
669 StoreB === 659 624 667 25 [[ 672 ]] @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; Memory: @java/util/jar/JarFile:NotNull+42 * (speculative=java/util/jar/JarFile:NotNull:exact+42 * (inline_depth=2)), name=isMultiRelease, idx=8; !jvms: JarFile::isMultiRelease @ bci:25 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
672 Phi === 647 669 624 [[ 362 675 592 ]] #memory Memory: @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; !orig=[671] !jvms: JarFile::isMultiRelease @ bci:28 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
592 MergeMem === _ 1 624 1 1 1 1 1 672 [[ 579 318 244 349 329 363 ]] { - - - - - N672:java/util/jar/JarFile+42 * } Memory: @BotPTR *+bot, idx=Bot; !jvms: JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
349 LoadN === _ 592 348 [[ 350 ]] @jdk/internal/loader/URLClassPath$JarLoader+32 * [narrow], name=index, idx=6; #narrowoop: jdk/internal/util/jar/JarIndex * !jvms: URLClassPath$JarLoader::getResource @ bci:39 URLClassPath::getResource @ bci:53
The LoadN is then optimized by step_through_mergemem() and its memory input is directly connected to the ProjNode because all the memory instructions in between are on different memory slices:
624 Proj === 621 [[ 362 569 379 672 661 669 592 349 ]] #2 Memory: @BotPTR *+bot, idx=Bot; !orig=[649] !jvms: JarFile::isMultiRelease @ bci:16 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
669 StoreB === 659 624 667 25 [[ 672 ]] @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; Memory: @java/util/jar/JarFile:NotNull+42 * (speculative=java/util/jar/JarFile:NotNull:exact+42 * (inline_depth=2)), name=isMultiRelease, idx=8; !jvms: JarFile::isMultiRelease @ bci:25 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
672 Phi === 647 669 624 [[ 362 675 592 ]] #memory Memory: @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; !orig=[671] !jvms: JarFile::isMultiRelease @ bci:28 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
592 MergeMem === _ 1 624 1 1 1 1 1 672 [[ 579 318 244 363 329 ]] { - - - - - N672:java/util/jar/JarFile+42 * } Memory: @BotPTR *+bot, idx=Bot; !jvms: JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
349 LoadN === _ 624 348 [[ 350 ]] @jdk/internal/loader/URLClassPath$JarLoader+32 * [narrow], name=index, idx=6; #narrowoop: jdk/internal/util/jar/JarIndex * !jvms: URLClassPath$JarLoader::getResource @ bci:39 URLClassPath::getResource @ bci:53
Now the split-if optimization kicks in and splits the MergeMem through the Phi:
624 Proj === 621 [[ 362 569 379 672 349 669 710 701 709 710 ]] #2 Memory: @BotPTR *+bot, idx=Bot; !orig=[649] !jvms: JarFile::isMultiRelease @ bci:16 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
669 StoreB === 682 624 598 25 [[ 672 709 ]] @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; Memory: @java/util/jar/JarFile:NotNull+42 *, name=isMultiRelease, idx=8; !jvms: JarFile::isMultiRelease @ bci:25 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
672 Phi === 647 669 624 [[ 362 ]] #memory Memory: @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; !orig=[671] !jvms: JarFile::isMultiRelease @ bci:28 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
710 MergeMem === _ 1 624 1 1 1 1 1 624 [[ 708 ]] { - - - - - - } Memory: @BotPTR *+bot, idx=Bot; !orig=592 !jvms: JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
709 MergeMem === _ 1 624 1 1 1 1 1 669 [[ 708 ]] { - - - - - N669:java/util/jar/JarFile+42 * } Memory: @BotPTR *+bot, idx=Bot; !orig=592 !jvms: JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
708 Phi === 647 709 710 [[ 329 363 244 318 579 ]] #memory Memory: @BotPTR *+bot, idx=Bot; !orig=592 !jvms: JarFile::getEntry @ bci:-1 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
349 LoadN === _ 624 348 [[ 350 ]] @jdk/internal/loader/URLClassPath$JarLoader+32 * [narrow], name=index, idx=6; #narrowoop: jdk/internal/util/jar/JarIndex * !jvms: URLClassPath$JarLoader::getResource @ bci:39 URLClassPath::getResource @ bci:53
The 710 MergeMem is redundant and will be removed. The 708 Phi is then directly connected to the Proj, and during register allocation the graph looks like this (node indices changed):
109 MachProj === 86 [[ 108 111 112 110 355 150 164 166 203 232 354 ]] #2/unmatched Memory: @BotPTR *+bot, idx=Bot; !jvms: JarFile::isMultiRelease @ bci:16 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
112 storeImmB0 === 293 109 93 [[ 111 166 ]] memory Memory: @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; !jvms: JarFile::isMultiRelease @ bci:25 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
166 Phi === 78 112 109 [[ 164 ]] #memory Memory: @java/util/jar/JarFile+42 *, name=isMultiRelease, idx=8; !jvms: JarFile::isMultiRelease @ bci:28 JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
111 MergeMem === _ 0 109 0 0 0 0 0 112 [[ 110 ]] { - - - - - N112:java/util/jar/JarFile+42 * } Memory: @BotPTR *+bot, idx=Bot; !jvms: JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
232 Phi === 218 90 109 [[ 231 233 ]] #memory Memory: @BotPTR *+bot, idx=Bot; !jvms: JarFile::getEntry @ bci:7 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
354 testN_mem_reg0 === _ 109 95 [[ 356 ]] #32/0x0000000000000020narrowoop: NULL !orig=[117] !jvms: JarFile::getEntry @ bci:-1 JarFile::getJarEntry @ bci:2 URLClassPath$JarLoader::getResource @ bci:22 URLClassPath::getResource @ bci:53
The 232 Phi that was introduced by split-if is wrongly treated as an anti-dependency for the testN_mem_reg0. As expected, the crash disappears with -XX:-SplitIfWithBlocks. Also, this only reproduces with -XX:+AlwaysIncrementalInline.
Without incremental inlining, the split-if optimization does not kick in, no anti-dependency is found and the testN_mem_reg0 can be scheduled late (i.e. in the same block as the consuming JmpCon). I've tried hard to write a simple regression test for this by looking at the code of jdk.internal.loader.URLClassPath::getResource but could only ever reproduce this with replay compilation. I'm not sure yet how to fix this. We could either make anti-dep detection smarter or avoid the split-if but maybe there are other ways. [~neliasso], as discussed with Dave, I'm assigning this to you because I'll be on vacation.
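The scheduling effect described above can be illustrated with a toy model. This is not PhaseCFG's actual algorithm: blocks here form a single dominator chain (block i dominates block j iff i <= j), the block numbers are invented, and schedule_late is a made-up stand-in. It only shows how one spurious anti-dependence pulls a node away from its user's block.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy late scheduler on a dominator chain: a node wants to be placed as late
// as possible (in its user's block), but every anti-dependence forces it
// strictly above the block of the interfering store/Phi, and it can never
// move above its early block.
int schedule_late(int early_block, int use_block,
                  const std::vector<int>& antidep_blocks) {
  int lca = use_block;  // latest legal placement: next to its user
  for (int b : antidep_blocks) {
    lca = std::min(lca, b - 1);  // must be scheduled above the anti-dep
  }
  return std::max(lca, early_block);  // never above the early block
}
```

With no anti-dependence the test lands in its jmpCon's block and the flags value never has to live across a block boundary; a single spurious anti-dependence (like the 232 Phi above) raises it, and the flags value then needs a spill copy that cannot exist.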
15-12-2017

[~dlong], sure, I have a look.
11-12-2017

We have a testN_mem_reg0 being used by a jmpCon, but they are in different blocks. It looks similar to the situation in JDK-8172850, but happens even without G1.
09-12-2017

[~thartmann] Tobias, would you mind taking a look at this, since it appears to be similar to JDK-8172850?
09-12-2017

Yes, this is *not* an integration_blocker. As I've mentioned above, this existing problem in C2's register allocator was most likely triggered by JDK-8189611 which was fixed in jdk/jdk and down-synced to jdk/hs recently. I've verified this by trying to reproduce the problem with jdk/jdk Mach5 build jdk10-master.366 and jdk10-master.367. It only reproduces with 367 which is the build where JDK-8189611 was fixed. I'm removing the integration_blocker label. Edit: The good news is also that this does not affect product because with a release build, we just bail out and mark the method as non-compilable.
07-12-2017

According to JBS, JDK-8189611 is fixed in jdk10+b34 by http://hg.openjdk.java.net/jdk/jdk/rev/85ea7e83af30
07-12-2017

[~jwilhelm], JDK-8189611 was pushed to jdk/jdk. Below is an excerpt from the push notification when you did the sync down from jdk/jdk to jdk/hs on Dec 2:
Changeset: 85ea7e83af30
Author: sherman
Date: 2017-11-29 15:01 -0800
URL: http://hg.openjdk.java.net/jdk/hs/rev/85ea7e83af30
8189611: JarFile versioned stream and real name support
Reviewed-by: psandoz, alanb, mchung, martin
! src/java.base/share/classes/java/util/jar/JarEntry.java
! src/java.base/share/classes/java/util/jar/JarFile.java
! src/java.base/share/classes/java/util/jar/JarVerifier.java
! src/java.base/share/classes/java/util/zip/ZipCoder.java
! src/java.base/share/classes/java/util/zip/ZipFile.java
! src/java.base/share/classes/jdk/internal/loader/URLClassPath.java
! src/java.base/share/classes/jdk/internal/misc/JavaUtilZipFileAccess.java
! src/java.base/share/classes/jdk/internal/module/ModulePath.java
! src/java.base/share/classes/jdk/internal/module/ModuleReferences.java
! src/java.base/share/classes/module-info.java
! src/jdk.jlink/share/classes/jdk/tools/jlink/internal/JarArchive.java
+ test/jdk/java/util/jar/JarFile/mrjar/TestVersionedStream.java
- test/jdk/jdk/internal/util/jar/TestVersionedStream.java
07-12-2017

[~thartmann] says above that the problem was likely triggered by JDK-8189611 which has not been pushed to jdk/jdk yet.
07-12-2017

[~jwilhelm], as I read [~iklam]'s comment, it seems this was introduced by the merge with jdk/jdk, so the defect is already in jdk/jdk and hence it can't be a blocker for the jdk/hs -> jdk/jdk integration.
07-12-2017

This looks like an integration blocker to me. Is there any reason it shouldn't be?
07-12-2017

We fail in PhaseAggressiveCoalesce::insert_copies() when trying to spill a flag register. This is similar to JDK-8141137.
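The failing check can be modeled roughly like this. A hypothetical sketch only, not the code in coalesce.cpp: the register-class enum and function names are invented, and the real assert text is quoted from the crash report above.

```cpp
#include <cassert>

// Flags (condition-code) registers have no load/store form, so the register
// allocator cannot insert a spill copy for them. In fastdebug builds this
// asserts; in product builds C2 bails out of the compilation instead, which
// matches the observation that release builds just mark the method
// non-compilable.
enum RegClass { kGpr, kXmm, kFlags };
enum SpillResult { kSpilled, kBailout };

bool is_spillable(RegClass rc) {
  return rc != kFlags;  // no way to materialize flags on the stack
}

SpillResult insert_spill_copy(RegClass rc, bool fastdebug) {
  if (!is_spillable(rc)) {
    // fastdebug: assert(false, "attempted to spill a non-spillable item")
    assert(!fastdebug && "attempted to spill a non-spillable item");
    return kBailout;  // product: give up on this compilation
  }
  return kSpilled;
}
```

This is why the failure mode differs between the fastdebug crash seen in CI and the silent bailout in product builds.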
05-12-2017

The assert always triggers when compiling jdk.internal.loader.URLClassPath::getResource(). It's very likely that this problem was triggered by the fix for JDK-8189611, which changed the inlined methods java/util/jar/JarFile and jdk/internal/loader/URLClassPath$JarLoader. I was able to reproduce this on Linux x86_64 with the attached replay compilation file:
java -XX:+ReplayCompiles -XX:ReplayDataFile=replay_pid22272.log -XX:+ReplayIgnoreInitErrors -XX:-TieredCompilation -XX:+AlwaysIncrementalInline
ILW = Crash during compilation in register allocator, reproducible and happens when compiling a common method but seems to be specific to -XX:+AlwaysIncrementalInline, disable compilation = HMM = P2
05-12-2017

According to the HotSpot CI Pipeline, the failures happened reliably (on linux/x64-fastdebug only) after the following push, which synced the latest jdk repo into the hs repo: http://mail.openjdk.java.net/pipermail/jdk-hs-changes/2017-December/000176.html where the tip is:
Changeset: 48ff95f16a16
Author: jwilhelm
Date: 2017-12-02 06:51 +0100
URL: http://hg.openjdk.java.net/jdk/hs/rev/48ff95f16a16
Merge
04-12-2017