JDK-8185273 : Test8004741.java crashes with SIGSEGV in JDK10-hs nightly
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 10
  • Priority: P1
  • Status: Closed
  • Resolution: Fixed
  • OS: linux
  • CPU: generic
  • Submitted: 2017-07-25
  • Updated: 2020-09-01
  • Resolved: 2017-07-31
  • Fix Version: JDK 10 b21 (Fixed)
Description
This test failure was spotted in the 2017-07-25 JDK10-hs nightly:

compiler/c2/Test8004741.java
    Test failed due to SIGSEGV on Linux AArch64 64-bit Server VM.
    Here is a snippet of the stack trace:

    ---------------  T H R E A D  ---------------

    Current thread (0x0000007fa81a2800):  VMThread "VM Thread" [stack: 0x0000007f74359000,0x0000007f74459000] [id=1824]

    Stack: [0x0000007f74359000,0x0000007f74459000],  sp=0x0000007f74456d30,  free space=1015k
    Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V  [libjvm.so+0xf8e8dc]  JavaThread::send_thread_stop(oop)+0xac;;  JavaThread::send_thread_stop(oop)+0xac
    V  [libjvm.so+0x1057d1c]  VM_ThreadStop::doit()+0xac;;  VM_ThreadStop::doit()+0xac
    V  [libjvm.so+0x1058bcc]  VM_Operation::evaluate()+0x138;;  VM_Operation::evaluate()+0x138
    V  [libjvm.so+0x1055bd8]  VMThread::evaluate_operation(VM_Operation*)+0x134;;  VMThread::evaluate_operation(VM_Operation*)+0x134
    V  [libjvm.so+0x1056700]  VMThread::loop()+0x5e4;;  VMThread::loop()+0x5e4
    V  [libjvm.so+0x1056914]  VMThread::run()+0xd4;;  VMThread::run()+0xd4
    V  [libjvm.so+0xd59d5c]  thread_native_entry(Thread*)+0x118;;  thread_native_entry(Thread*)+0x118
    C  [libpthread.so.0+0x7e2c]  start_thread+0xb0

Normally I would start this bug out in hotspot/compiler for initial
triage, but the crashing stack trace is very much runtime code, so
it goes to hotspot/runtime for initial triage.
Comments
I was just thinking this morning about updating that assert...
31-07-2017

OMG, how have I missed that?! Probably because Threads::assert_all_threads_claimed() doesn't check the VMThread?
31-07-2017

Tracked the bug down to the VMThread's _oops_do_parity field getting out of sync due to a bug in Threads::parallel_java_threads_do(). That function correctly follows the "claims" protocol for JavaThreads, but it does not follow the "claims" protocol for the VMThread. The ParallelSPCleanupThreadClosure is appropriate for JavaThreads and Threads::parallel_java_threads_do() is correct in not applying that closure to the VMThread. However, the VMThread still needs to be "claimed" in order to prevent a subsequent parallel GC thread pass from missing the VMThread due to the VMThread's _oops_do_parity field being the wrong value.

This bug is intermittent because it is only exposed by a particular sequence of events:

1) The VM_ThreadStop vm-op has to be executed as a coalesced vm-op piggybacking on a G1IncCollectionPause vm-op as the primary vm-op. This allows the two vm-ops to overlap, which can permit the missed VMThread GC pass to cause problems.

2) Even when the G1IncCollectionPause vm-op is primary and coalesced with the VM_ThreadStop vm-op, a GC that visits the VMThread can sometimes happen in parallel with the execution of the VM_ThreadStop vm-op, and that GC updates the oops in the VM_ThreadStop vm-op before they are used by the vm-op.

I'm testing the fix now on my Solaris X64 server.
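For illustration only, here is a minimal sketch of the kind of change described above. It is not the actual changeset and it assumes the JDK 10-era names Threads::parallel_java_threads_do(), Threads::thread_claim_parity(), Thread::claim_oops_do() and the ALL_JAVA_THREADS macro:

    // Sketch (runtime/thread.cpp): follow the claims protocol for the VMThread
    // as well, without applying the cleanup closure to it.
    void Threads::parallel_java_threads_do(ThreadClosure* tc) {
      int cp = Threads::thread_claim_parity();
      ALL_JAVA_THREADS(p) {
        if (p->claim_oops_do(true, cp)) {
          // Only JavaThreads get the ParallelSPCleanupThreadClosure applied.
          tc->do_thread(p);
        }
      }
      // Also claim the VMThread so its _oops_do_parity stays in sync and a
      // later parallel oops_do pass in the same safepoint does not skip it.
      VMThread* vmt = VMThread::vm_thread();
      vmt->claim_oops_do(true, cp);
    }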
30-07-2017

Ok. I'm about to go dark and go on vacation in a day or so.
28-07-2017

> have you tried to back out just the safepoint cleanup patch, and
> see if the crash reliably goes away?

If by "safepoint cleanup patch" you mean the entire patch for JDK-8180932, then yes, I already did that when I tested the JPRT job before yours. Without the fix for JDK-8180932, this bug does not reproduce. I've already made that clear a couple of times.

> Can you add an assert that checks that the current thread is a JavaThread
> in Thread::send_async_exception()?

We can already see from the crashing stack trace that the current thread is a JavaThread.

> the eval mode seems to allow concurrent evaluation of the op

As already stated, the problem occurs when the VM_ThreadStop vm op is coalesced after a G1IncCollectionPause vm op. This:

> Mode evaluation_mode() const { return _async_safepoint; }

is part of what allows the VM_ThreadStop vm op to be asynchronous, which is what allows it to be coalesced.

Today, I'm going to focus on finding the part of the patch for JDK-8180932 that allows the oops on the JavaThread calling the VM_ThreadStop vm op to be GC'ed earlier than done previously. At this point, we know that coalescing of VM ops hasn't changed, so now I have to come at it differently and figure out the change in GC behavior.
28-07-2017

I wonder if this here in VM_ThreadStop could give us problems:

    bool allow_nested_vm_operations() const { return true; }
    Mode evaluation_mode() const { return _async_safepoint; }

The eval mode seems to allow concurrent evaluation of the op, if I read the code correctly?
28-07-2017

It appears to me that this problem can only occur when the VM_ThreadStop is sent from a thread that does not participate in safepointing. Can you add an assert that checks that the current thread is a JavaThread in Thread::send_async_exception()? Because if it's not, then it's not safe to pass naked oops around.
28-07-2017

All of this makes sense, but I still fail to see what the safepoint cleanup patch could possibly have to do with it :-) If you can somewhat reliably reproduce it, have you tried to back out just the safepoint cleanup patch, and see if the crash reliably goes away?
28-07-2017

I've added the same tracing to the repo from the previous JPRT job (8181917) and I see:

    [2.495s][debug][vmthread] Evaluating safepoint VM operation: G1IncCollectionPause
    [2.539s][debug][vmthread] Evaluating coalesced safepoint VM operation: ThreadStop

without crashes. So a VM_ThreadStop coalescing after a G1IncCollectionPause is not new, but something else in the fix for JDK-8180932 has changed the system behavior such that the oops in the VM_ThreadStop are not kept alive.

Also, I've tried an experiment with adding the locking described in JDK-8185345 and, while that makes the crash happen less frequently, it still happens. Thinking about it, all the locking did was prevent the VM_ThreadStop from being added to the VMOperationsQueue until after the GC was done. However, the bad oops in the allocated VM_ThreadStop are still there. There's nothing in the code that created the VM_ThreadStop object to do an oops_do() on those oops until it gets on the VMOperationsQueue. So GC can clean up the oops on the calling thread's stack, but it can't clean up that in-flight VM_ThreadStop until it gets to the VMOperationsQueue.
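For context, here is a simplified sketch (not the exact HotSpot source) of the calling path being described, assuming the JDK 10-era shape of Thread::send_async_exception(). The point is the window between constructing the op and it becoming visible to VMOperationQueue::oops_do():

    // Sketch: the caller copies naked oops into a heap-allocated VM_ThreadStop.
    void Thread::send_async_exception(oop java_thread, oop java_throwable) {
      // The naked oops are stored in the operation here ...
      VM_ThreadStop* vm_stop = new VM_ThreadStop(java_thread, java_throwable);
      // ... but nothing calls vm_stop->oops_do() until the op is on the
      // VMOperationsQueue, so a GC at a safepoint that runs in this window
      // (e.g. the G1IncCollectionPause this op later coalesces with) can move
      // the referenced objects and leave the op's oops stale.
      VMThread::execute(vm_stop);
    }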
28-07-2017

> A GC op followed by VM_ThreadStop should not cause a crash per se.
> According to JDK-8185345 it should only trigger a crash when concurrently
> modifying the VM_Ops queue.

You're missing the point of my new logging. A G1IncCollectionPause VM op followed by a coalesced VM_ThreadStop will cause the crash. The reason is that the VM_ThreadStop was concurrently added to the VMOperationsQueue while the G1IncCollectionPause VM op was executing, and that GC didn't see the VM_ThreadStop in order to fix its oops. If you have a G1IncCollectionPause VM op followed by a VM_ThreadStop VM op with no coalescing, then we don't have the bug because they are independent of each other, i.e., no overlap.
27-07-2017

Dan: Can you also add logging that shows when a new VM_Op is added to the queue? A GC op followed by VM_ThreadStop should not cause a crash per se. According to JDK-8185345 it should only trigger a crash when concurrently modifying the VM_Ops queue.
27-07-2017

The test uses -XX:+SafepointALot and -XX:GuaranteedSafepointInterval=100 besides some other stuff, which does at least sound like it may be sensitive to amplified conflation of VM_Ops when the timing of cleanup changes. IOW, maybe the parallel safepoint cleanup makes JDK-8185273 more likely rather than directly causing it? Another observation: my unsuccessful attempts to reproduce were done with a release build. Then I noticed you guys are posting crash logs from fastdebug builds, so I tried that too, and bingo! got a crash. Does that perhaps support the timing theory?
27-07-2017

I've added some logging to the vmThread to track when safepoint VM ops and coalesced safepoint VM ops are executed:

    $ hg -R 8180932_exp/hotspot diff
    diff -r a3b8c747b6bf src/share/vm/logging/logTag.hpp
    --- a/src/share/vm/logging/logTag.hpp   Fri Jul 07 12:49:11 2017 +0200
    +++ b/src/share/vm/logging/logTag.hpp   Thu Jul 27 16:43:04 2017 -0600
    @@ -142,6 +142,7 @@
       LOG_TAG(verification) \
       LOG_TAG(verify) \
       LOG_TAG(vmoperation) \
    +  LOG_TAG(vmthread) \
       LOG_TAG(vtables) \
       LOG_TAG(workgang) \
       LOG_TAG_LIST_EXT
    diff -r a3b8c747b6bf src/share/vm/runtime/vmThread.cpp
    --- a/src/share/vm/runtime/vmThread.cpp Fri Jul 07 12:49:11 2017 +0200
    +++ b/src/share/vm/runtime/vmThread.cpp Thu Jul 27 16:43:04 2017 -0600
    @@ -25,6 +25,8 @@
     #include "precompiled.hpp"
     #include "compiler/compileBroker.hpp"
     #include "gc/shared/collectedHeap.hpp"
    +#include "logging/log.hpp"
    +#include "logging/logConfiguration.hpp"
     #include "memory/resourceArea.hpp"
     #include "oops/method.hpp"
     #include "oops/oop.inline.hpp"
    @@ -484,6 +486,7 @@
         // If we are at a safepoint we will evaluate all the operations that
         // follow that also require a safepoint
         if (_cur_vm_operation->evaluate_at_safepoint()) {
    +log_debug(vmthread)("Evaluating safepoint VM operation: %s", _cur_vm_operation->name());
           _vm_queue->set_drain_list(safepoint_ops); // ensure ops can be scanned
    @@ -499,6 +502,7 @@
             // to grab the next op now
             VM_Operation* next = _cur_vm_operation->next();
             _vm_queue->set_drain_list(next);
    +log_debug(vmthread)("Evaluating coalesced safepoint VM operation: %s", _cur_vm_operation->name());
             evaluate_operation(_cur_vm_operation);
             _cur_vm_operation = next;
             if (PrintSafepointStatistics) {
    @@ -532,6 +536,7 @@
           SafepointSynchronize::end();
         } else {  // not a safepoint operation
    +log_debug(vmthread)("Evaluating non-safepoint VM operation: %s", _cur_vm_operation->name());
           if (TraceLongCompiles) {
             elapsedTimer t;
             t.start();
    diff -r a3b8c747b6bf src/share/vm/runtime/vm_operations.cpp
    --- a/src/share/vm/runtime/vm_operations.cpp    Fri Jul 07 12:49:11 2017 +0200
    +++ b/src/share/vm/runtime/vm_operations.cpp    Thu Jul 27 16:43:04 2017 -0600
    @@ -95,6 +95,8 @@
     }

     void VM_ThreadStop::doit() {
    +tty->print_cr("XXX - in VM_ThreadStop::doit");
    +tty->flush();
       assert(SafepointSynchronize::is_at_safepoint(), "must be at a safepoint");
       JavaThread* target = java_lang_Thread::thread(target_thread());
       // Note that this now allows multiple ThreadDeath exceptions to be

When I see this kind of coalescing:

    [8.734s][debug][vmthread] Evaluating safepoint VM operation: G1IncCollectionPause
    [8.779s][debug][vmthread] Evaluating coalesced safepoint VM operation: ThreadStop
    XXX - in VM_ThreadStop::doit
    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGSEGV (0xb) at pc=0xffff80fd60a3c7af, pid=2770, tid=57
    #
    # JRE version: Java(TM) SE Runtime Environment (10.0) (slowdebug build 10-internal+0-adhoc.dcubed.8180932exp)
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (slowdebug 10-internal+0-adhoc.dcubed.8180932exp, mixed mode, compressed oops, g1 gc, solaris-amd64)
    # Problematic frame:
    # V  [libjvm.so+0x243c7af]  void JavaThread::send_thread_stop(oopDesc*)+0x16f

we crash. That confirms what we already know. The G1IncCollectionPause destroys the oops in the VM_ThreadStop and we blow up.

I also see this kind of coalescing:

    [2.947s][debug][vmthread] Evaluating safepoint VM operation: ThreadStop
    XXX - in VM_ThreadStop::doit
    [2.967s][debug][vmthread] Evaluating coalesced safepoint VM operation: G1IncCollectionPause

where the test got ThreadDeath and we don't blow up.

In some of the runs with no crashes at all, I also see this kind of coalescing:

    [4.856s][debug][vmthread] Evaluating safepoint VM operation: RevokeBias
    [4.856s][debug][vmthread] Evaluating coalesced safepoint VM operation: BulkRevokeBias

I'm checking out the previous JPRT job to see if there's something obviously different about the VM operations...
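With the patch above applied, the new tag can be enabled via unified logging by adding, for example:

    -Xlog:vmthread=debug

to the test's VM options (the vmthread tag only exists with this experimental patch).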
27-07-2017

Just hit it in JPRT on macOS-x64.
27-07-2017

FWIW, I did a few hundred runs of that test on both x86 and aarch64 with the latest jdk10/hs release Server VM build, and could not get a crash.
27-07-2017

> All GC in JDK return NULL for now.

I had forgotten that. OK. Now I know I'm looking in the wrong place. I can ignore the parallel parts of the changeset and focus on the very few other parts...
27-07-2017

No surprise. All GC in JDK return NULL for now. There could be a slight change in timing by conflation of monitor deflation and nmethod marking. Don't see how this could cause more conflated VM_Ops though.
27-07-2017

Temporarily added an option to disable the Parallel Safepoint Cleanup mechanism:

    $ hg -R 8180932_exp/hotspot diff
    diff -r a3b8c747b6bf src/share/vm/runtime/globals.hpp
    --- a/src/share/vm/runtime/globals.hpp  Fri Jul 07 12:49:11 2017 +0200
    +++ b/src/share/vm/runtime/globals.hpp  Thu Jul 27 10:48:18 2017 -0600
    @@ -704,6 +704,10 @@
               "Print out every time compilation is longer than "            \
               "a given threshold")                                          \
                                                                             \
    +  product(bool, EnableParallelSafepointCleanup, true,                   \
    +          "Enable the Parallel Safepoint Cleanup mechanism if the GC "  \
    +          "in use supports worker threads.")                            \
    +                                                                        \
       develop(bool, SafepointALot, false,                                   \
               "Generate a lot of safepoints. This works with "              \
               "GuaranteedSafepointInterval")                                \
    diff -r a3b8c747b6bf src/share/vm/runtime/safepoint.cpp
    --- a/src/share/vm/runtime/safepoint.cpp        Fri Jul 07 12:49:11 2017 +0200
    +++ b/src/share/vm/runtime/safepoint.cpp        Thu Jul 27 10:48:18 2017 -0600
    @@ -651,14 +651,19 @@
       CollectedHeap* heap = Universe::heap();
       assert(heap != NULL, "heap not initialized yet?");
    -  WorkGang* cleanup_workers = heap->get_safepoint_workers();
    +  WorkGang* cleanup_workers = NULL;
    +  if (EnableParallelSafepointCleanup) {
    +    cleanup_workers = heap->get_safepoint_workers();
    +  }
       if (cleanup_workers != NULL) {
         // Parallel cleanup using GC provided thread pool.
         uint num_cleanup_workers = cleanup_workers->active_workers();
    +tty->print_cr("Doing parallel cleanup using %u workers", num_cleanup_workers);
         ParallelSPCleanupTask cleanup(num_cleanup_workers, &deflate_counters);
         StrongRootsScope srs(num_cleanup_workers);
         cleanup_workers->run_task(&cleanup);
       } else {
    +tty->print_cr("Doing serial cleanup using VMThread");
         // Serial cleanup using VMThread.
         ParallelSPCleanupTask cleanup(1, &deflate_counters);
         StrongRootsScope srs(1);

Original version of the above did not have the tty->print_cr() calls and I still got crashes with both the default and -XX:-EnableParallelSafepointCleanup.
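For anyone repeating this experiment: with the temporary patch above applied, the parallel cleanup path can be turned off by adding, for example:

    -XX:-EnableParallelSafepointCleanup

to the VM options (the flag only exists in this experimental patch; stock builds do not have it).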
27-07-2017

> Why are we passing naked oops to VM_ThreadStop anyway?

Ancient code that depends on the VM op oops_do() to fix things...
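For readers unfamiliar with the op, this is approximately what VM_ThreadStop looked like at the time (a sketch, not verbatim source): the class stores the two naked oops and relies on its oops_do() being called while it sits on the VMOperationsQueue to keep them current.

    // Sketch of VM_ThreadStop (runtime/vm_operations.hpp of that era).
    class VM_ThreadStop : public VM_Operation {
     private:
      oop _thread;     // the java.lang.Thread being stopped
      oop _throwable;  // the Throwable to deliver
     public:
      VM_ThreadStop(oop thread, oop throwable)
        : _thread(thread), _throwable(throwable) {}
      VMOp_Type type() const                  { return VMOp_ThreadStop; }
      oop target_thread() const               { return _thread; }
      oop throwable() const                   { return _throwable; }
      void doit();
      bool allow_nested_vm_operations() const { return true; }
      Mode evaluation_mode() const            { return _async_safepoint; }
      // GC support: only effective once the op is reachable from the queue.
      void oops_do(OopClosure* f) { f->do_oop(&_thread); f->do_oop(&_throwable); }
    };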
27-07-2017

> In particular, if this gets conflated with a GC VM_Op,
> it seems like it would cause exactly what we see.

That's pretty much Erik O's theory with:

    JDK-8185345 VMOperationQueue::oops_do() is not thread-safe

The question is how could the fix for JDK-8180932 possibly cause this conflation of VM ops to become so frequent? I don't see the cause yet and that's what concerns me. If I don't understand how JDK-8180932 could make this bug pop up, then I don't know what else might be happening.
27-07-2017

Well, the cause of the crash is that the oops being passed to VM_ThreadStop::doit() didn't get kept alive when the change for JDK-8180932 is in place. Erik O has a theory:

    JDK-8185345 VMOperationQueue::oops_do() is not thread-safe

I'm having trouble seeing how your fix would allow JDK-8185345 to happen more frequently.

> Maybe we need to acquire Threads_lock before iterating all threads?

Which code are you talking about here? The safepoint cleanup code is running at a safepoint, so the VMThread has the Threads_lock...
27-07-2017

In particular, if this gets conflated with a GC VM_Op, it seems like it would cause exactly what we see. Correct me if I'm wrong.
27-07-2017

Why are we passing naked oops to VM_ThreadStop anyway? This sounds dangerous for something that asks for a safepoint.
27-07-2017

Will keep updating the bug... I'm hoping to nail this down today, but the clock is ticking because we see the failure mode in JPRT so it's affecting everyday work. I may have to backout JDK-8180932 temporarily until I can get to the bottom of this craziness.
27-07-2017

Ok. I can't see how this relates to parallel safepoint cleanup yet, but at least it's a theory ;-) With Threads_lock I was poking in the dark. I'm almost in vacation mode already, and will disappear starting tomorrow evening... Let me know what else you find.
27-07-2017

I suspect:
- either the change to nmethod::_stack_traversal_mark and/or the related removal of the fence()
- the conflated processing of monitor-deflation and nmethod-marking into one pass

No quick idea how this could be related to the crash, though. The crash seems to involve Thread::stop(). Maybe we need to acquire Threads_lock before iterating all threads? OTOH, the previous code also iterated all threads without acquiring Threads_lock, and also, everything seems to run in the VMThread too. /me shrugs
27-07-2017

> There is no parallel cleanup processing involved, cleanup is always
> executed in the VMThread.

And that's exactly why I'm confused. I don't see how your change could be causing this new failure.

> What makes you think the crash is caused by the parallel cleanup stuff?

If you look at the comments above, you'll see where I showed that the bug doesn't happen in the JPRT push before yours and starts happening with yours.
27-07-2017

There is no parallel cleanup processing involved, cleanup is always executed in the VMThread. What makes you think the crash is caused by the parallel cleanup stuff? The date of commit?
27-07-2017

Added Roman to this bug since it looks like this bug was caused by: JDK-8180932 Parallelize safepoint cleanup
27-07-2017

I've been able to reproduce the failure on my Solaris X64 server with locally built bits. I tried a quick workaround that disables the Parallel Safepoint Cleanup mechanism (and lets the VMThread do the work), but that didn't stop the crash. I'll take a closer look at the changeset in the morning.
27-07-2017

The compiler/c2/Test8004741.java test did not fail in the 2017-07-25 JDK10-hs nightly.

I downloaded the 2017-07-24-155826.ddaugher.8185102_for_jdk10_hs JPRT bits from the 2017-07-24 JDK10-hs nightly to my Solaris X64 server and was able to reproduce the crash 7 times in 211 runs.

Update: reproduces 7 runs out of 101 with JPRT bits from the 2017-07-21 JDK10-hs nightly: 2017-07-21-171233.rehn.vanilla-hs
Update: reproduces 0 runs out of 101 with JPRT bits from the 2017-07-20 JDK10-hs nightly: 2017-07-20-192117.iklam.jdk10
Update: reproduces 0 runs out of 1614 with JPRT bits from 2017-07-21-111949.edvbld.8181917, which is the job right before 2017-07-21-171233.rehn.vanilla-hs.

It looks like compiler/c2/Test8004741.java is failing due to changes in the fix for:

    8180932: Parallelize safepoint cleanup
    Summary: Provide infrastructure to do safepoint cleanup tasks using parallel worker threads
    Reviewed-by: dholmes, rehn, dcubed, thartmann

I'm taking a close look at JDK-8180932 and trying to see what we missed during the code review cycle.
26-07-2017

Another sighting in JPRT: http://jdk-services.us.oracle.com/jprt/archive/2017/07/2017-07-26-193431.iklam.iter/logs/solaris_x64_5.11-fastdebug-c2-hotspot_tier1_compiler_1.log.FAILED.log
26-07-2017

Changed to P1. I am getting JPRT failures now.

2017-07-25-163714.iklam.jdk10/logs/linux_i586_3.8-fastdebug-c2-hotspot_tier1_compiler_1.log.FAILED.log:

    command: main -Xmx64m -Xbatch -XX:+IgnoreUnrecognizedVMOptions -XX:+UnlockDiagnosticVMOptions -XX:-TieredCompilation -XX:+StressCompiledExceptionHandlers -XX:+SafepointALot -XX:GuaranteedSafepointInterval=100 compiler.c2.Test8004741
    reason: User specified action: run main/othervm -Xmx64m -Xbatch -XX:+IgnoreUnrecognizedVMOptions -XX:+UnlockDiagnosticVMOptions -XX:-TieredCompilation -XX:+StressCompiledExceptionHandlers -XX:+SafepointALot -XX:GuaranteedSafepointInterval=100 compiler.c2.Test8004741
    [.......]
    # Problematic frame:
    # V  [libjvm.so+0x140e4d1]  JavaThread::send_thread_stop(oop)+0xd1

    siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0xbaadbabe

    Register to memory mapping:
    EAX=0xbaadbabe is an unknown value
    EBX=0xf7476000: <offset 0x019ca000> in /scratch/opt/jprt/T/P1/163714.iklam/testproduct/linux_i586_3.8-fastdebug/lib/server/libjvm.so at 0xf5aac000
    ECX=0xecbd06b0 is an unknown value
    EDX=0xbaadbabe is an unknown value
    ESP=0xed1fec64 is an unknown value
    EBP=0xed1fef78 is an unknown value
    ESI=0xed289000 is an unknown value
    EDI=0xed1fefac is an unknown value

    Stack: [0xed180000,0xed200000],  sp=0xed1fec64,  free space=507k
    Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V  [libjvm.so+0x140e4d1]  JavaThread::send_thread_stop(oop)+0xd1
    V  [libjvm.so+0x14ee5b5]  VM_ThreadStop::doit()+0x95
    V  [libjvm.so+0x14ef6d1]  VM_Operation::evaluate()+0x1f1
    V  [libjvm.so+0x14eb7b9]  VMThread::evaluate_operation(VM_Operation*)+0x169
    V  [libjvm.so+0x14ec398]  VMThread::loop()+0x2a8
    V  [libjvm.so+0x14ecb1d]  VMThread::run()+0xcd
    V  [libjvm.so+0x11824c4]  thread_native_entry(Thread*)+0x114
    C  [libpthread.so.0+0x6b2c]  start_thread+0xcc
25-07-2017

Analysis of my linux_x86 crash:

The crash is at:

    # V  [libjvm.so+0x140e4d1]  JavaThread::send_thread_stop(oop)+0xd1
    EAX=0xbaadbabe is an unknown value

In a normal execution:

    (gdb) x/4i 0xf77494c8
       0xf77494c8 <JavaThread::send_thread_stop(oop)+200>:  mov 0x8(%ebp),%eax
    => 0xf77494cb <JavaThread::send_thread_stop(oop)+203>:  sub $0xc,%esp
       0xf77494ce <JavaThread::send_thread_stop(oop)+206>:  mov 0x8(%ebp),%edx
       0xf77494d1 <JavaThread::send_thread_stop(oop)+209>:  mov (%eax),%eax   <<< CRASH ADDRESS
    (gdb) p/x $eax
    $2 = 0xe8d09c00
    (gdb) p this
    $3 = (JavaThread * const) 0xe8d09c00
    (gdb) p/x $pc
    $4 = 0xf77494cb

So EAX contains the "this" pointer at send_thread_stop(oop)+0xd1. When the crash happened, "this" was 0xbaadbabe. The caller is:

    void VM_ThreadStop::doit() {
      assert(SafepointSynchronize::is_at_safepoint(), "must be at a safepoint");
      JavaThread* target = java_lang_Thread::thread(target_thread());
      // Note that this now allows multiple ThreadDeath exceptions to be
      // thrown at a thread.
      if (target != NULL) {
        // the thread has run and is not already in the process of exiting
        target->send_thread_stop(throwable());    <<<<<<<< CALLER
      }
    }

    JavaThread* java_lang_Thread::thread(oop java_thread) {
      return (JavaThread*)java_thread->address_field(_eetop_offset);
    }

So it looks like target_thread() points to an unallocated heap area, and thus loading from that address yields 0xbaadbabe.
25-07-2017

The following failures were spotted in the 2017-07-24 JDK10-hs Tier3 nightly:

compiler/c2/Test8004741.java
    Test failed due to EXCEPTION_ACCESS_VIOLATION on Win Server 2012 64-bit Server VM.
    Here is a snippet of the stack trace:

    ---------------  T H R E A D  ---------------

    Current thread (0x000000e797aa9800):  VMThread "VM Thread" [stack: 0x000000e797b00000,0x000000e797c00000] [id=17200]

    Stack: [0x000000e797b00000,0x000000e797c00000],  sp=0x000000e797bfe4b0,  free space=1017k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V  [jvm.dll+0xbb1847]  JavaThread::send_thread_stop+0x167;;  ?send_thread_stop@JavaThread@@QEAAXPEAVoopDesc@@@Z+0x167
    V  [jvm.dll+0xc08d96]  VM_Operation::evaluate+0xe6;;  ?evaluate@VM_Operation@@QEAAXXZ+0xe6
    V  [jvm.dll+0xc06737]  VMThread::evaluate_operation+0xc7;;  ?evaluate_operation@VMThread@@AEAAXPEAVVM_Operation@@@Z+0xc7
    V  [jvm.dll+0xc071fe]  VMThread::loop+0x52e;;  ?loop@VMThread@@QEAAXXZ+0x52e
    V  [jvm.dll+0xc07a5a]  VMThread::run+0xfa;;  ?run@VMThread@@UEAAXXZ+0xfa
    V  [jvm.dll+0xa43a5c]  thread_native_entry+0x11c;;  ?thread_native_entry@@YAIPEAVThread@@@Z+0x11c

compiler/c2/Test8004741.java
    Test failed due to EXCEPTION_ACCESS_VIOLATION on Solaris X64 64-bit Server VM.
    Here is a snippet of the stack trace:

    ---------------  T H R E A D  ---------------

    Current thread (0x0000000000760000):  VMThread "VM Thread" [stack: 0xffff80ffbd40f000,0xffff80ffbd50f000] [id=14]

    Stack: [0xffff80ffbd40f000,0xffff80ffbd50f000],  sp=0xffff80ffbd50d620,  free space=1017k
    Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V  [libjvm.so+0x24d442c]  void JavaThread::send_thread_stop(oop)+0x15c;;  __1cKJavaThreadQsend_thread_stop6MnDoop__v_+0x15c
    V  [libjvm.so+0x25c85e1]  void VM_ThreadStop::doit()+0xd1;;  __1cNVM_ThreadStopEdoit6M_v_+0xd1
    V  [libjvm.so+0x25c8125]  void VM_Operation::evaluate()+0x1c5;;  __1cMVM_OperationIevaluate6M_v_+0x1c5
    V  [libjvm.so+0x25c67cf]  void VMThread::loop()+0x67f;;  __1cIVMThreadEloop6M_v_+0x67f
    V  [libjvm.so+0x25c5b82]  void VMThread::run()+0xd2;;  __1cIVMThreadDrun6M_v_+0xd2
    V  [libjvm.so+0x2238888]  thread_native_entry+0x168;;  thread_native_entry+0x168
25-07-2017