JDK-8212933 : Thread-SMR: requesting a VM operation whilst holding a ThreadsListHandle can cause deadlocks
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 11,12
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2018-10-24
  • Updated: 2020-05-18
  • Resolved: 2018-10-31
Fix Versions: JDK 11: 11.0.8-oracle (Fixed); JDK 12: b18 (Fixed)
Description
Leonid Mesnik extended the Kitchensink stress testing with modules providing test coverage for JVMTI and found a deadlock.
Please find a file with thread dumps in the attachments.

This is some initial analysis from Robbin Ehn:

In short:
# VM Thread
The VM thread is in a loop: it takes the Threads_lock, takes a new snapshot of the threads list, scans the list and processes handshakes on behalf of safe threads.
It then releases the snapshot and the Threads_lock and checks whether all handshakes have completed.

# An exiting thread
An exiting thread removes itself from _THE_ threads list, but must stick around as long as it is on any snapshot of live threads. When it is no longer on any list it will cancel the handshake.

Since the VM thread takes a new snapshot on every iteration of the handshake, an exiting thread can normally proceed, since it will not be on the new snapshot. It can thus cancel the handshake, and the VM thread can exit the loop (if this was the last outstanding handshake).

Constraint:
If any thread grabs a snapshot of the threads list and later tries to take a lock that is 'used' by the VM thread or inside the handshake, we can deadlock.

Considering that, look at e.g. JvmtiEnv::SuspendThreadList, which calls VMThread::execute(&tsj); with a ThreadsListHandle alive. This could deadlock AFAICT, since the thread will rest on VMOperationRequest_lock while holding a threads list snapshot, but the VM thread cannot finish the handshake until that snapshot is released.
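For illustration, here is a condensed sketch of that pattern (not the actual HotSpot source; only the shape of the calls matters):

// Shape of e.g. JvmtiEnv::SuspendThreadList as described above -- illustrative only.
{
  ThreadsListHandle tlh(JavaThread::current());  // caller pins a ThreadsList snapshot
  // ... resolve the jthread handles against tlh.list() ...
  // tsj is the suspend VM operation referred to above; its construction is elided.
  VMThread::execute(&tsj);   // blocks on VMOperationRequest_lock until the VM op runs,
                             // while the snapshot is still alive
}                            // snapshot released only here -- too late if the VM thread
                             // is already looping in a handshake that cannot finish
                             // until this snapshot goes away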

I suggest the first step is to add something like the patch below and fix the obvious ones first.

Note, I have not verified that this is the problem you are seeing; I'm saying that this seems to be a real issue. And considering how the stack traces look, it may be this.

If you want me to go through this, just assign me the bug if there is one.

/Robbin

diff -r 622fd3608374 src/hotspot/share/runtime/thread.hpp
--- a/src/hotspot/share/runtime/thread.hpp    Tue Oct 23 13:27:41 2018 +0200
+++ b/src/hotspot/share/runtime/thread.hpp    Wed Oct 24 09:13:17 2018 +0200
@@ -167,2 +167,6 @@
   }
+ public:
+  bool have_threads_list();
+ private:
+
   // This field is enabled via -XX:+EnableThreadSMRStatistics:
diff -r 622fd3608374 src/hotspot/share/runtime/thread.inline.hpp
--- a/src/hotspot/share/runtime/thread.inline.hpp    Tue Oct 23 13:27:41 2018 +0200
+++ b/src/hotspot/share/runtime/thread.inline.hpp    Wed Oct 24 09:13:17 2018 +0200
@@ -111,2 +111,6 @@

+inline bool Thread::have_threads_list() {
+  return OrderAccess::load_acquire(&_threads_hazard_ptr) != NULL;
+}
+
 inline void Thread::set_threads_hazard_ptr(ThreadsList* new_list) {
diff -r 622fd3608374 src/hotspot/share/runtime/vmThread.cpp
--- a/src/hotspot/share/runtime/vmThread.cpp    Tue Oct 23 13:27:41 2018 +0200
+++ b/src/hotspot/share/runtime/vmThread.cpp    Wed Oct 24 09:13:17 2018 +0200
@@ -608,2 +608,3 @@
   if (!t->is_VM_thread()) {
+    assert(!t->have_threads_list(), "Deadlock if we have exiting threads and if vm thread is running an VM op."); // fatal/guarantee
     SkipGCALot sgcalot(t);    // avoid re-entrant attempts to gc-a-lot

David Holmes:

Thanks Robbin! So you're not allowed to request a VM operation if you hold a ThreadsListHandle? I suppose that is no different to not being able to request a VM operation whilst holding the Threads_lock.

I suspect before ThreadSMR this may have been a case where we weren't ensuring a target thread could not terminate, and now with SMR we're ensuring that but potentially introducing a deadlock. I say potentially because obviously we don't deadlock every time we suspend threads.

One more description from Robbin:

1: The VM thread takes a threads list snapshot, Z
2: Arms all threads on that list
3: Saves a count of how many were armed
4: Releases the threads list
-> 5: Takes the Threads_lock
6: Takes a new threads list snapshot, N+1
7: Iterates over the threads, testing whether the thread itself has done the
handshake operation, or whether the VM thread may process it for the thread.
8: Releases the threads list and the Threads_lock
9: Checks if the count is zero, else goes to step 5.
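A rough C++-style sketch of those steps (illustrative only, not the actual handshake.cpp code, which Dan quotes in the comments below):

// Steps 1-4: arm all threads using one ThreadsList snapshot (Z) and count them.
int issued = 0;
{
  JavaThreadIteratorWithHandle jtiwh;                  // snapshot Z
  for (JavaThread* thr = jtiwh.next(); thr != NULL; thr = jtiwh.next()) {
    set_handshake(thr);                                // arm the thread
    issued++;
  }
}                                                      // snapshot Z released

// Steps 5-9: loop, taking a *new* snapshot (N+1, N+2, ...) each iteration.
int completed = 0;
do {
  {
    MutexLockerEx ml(Threads_lock, Mutex::_no_safepoint_check_flag);   // step 5
    JavaThreadIteratorWithHandle jtiwh;                                // step 6: new snapshot
    for (JavaThread* thr = jtiwh.next(); thr != NULL; thr = jtiwh.next()) {
      thr->handshake_process_by_vmthread();                            // step 7
    }
  }                                                                    // step 8
  while (poll_for_completed_thread()) {
    completed++;                       // completed or cancelled handshakes
  }
} while (issued > completed);                                          // step 9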

Thread X has already transitioned into the VM and started the exit path,
but is still present on the threads list when snapshot Z is taken.

Thread W takes a threads list snapshot, which is the same snapshot as Z, meaning thread X is present in it.
Thread W calls ::execute and waits on VMOperationRequest_lock for its ticket to come up.

Thread X grabs the Threads_lock and removes itself from _the_ threads list.
But since it is present on thread W's snapshot Z, it must wait until that snapshot is released.

The VM thread cannot see thread X, since it takes a new list every iteration; N+1 will not contain X.

Thread X cannot cancel its handshake until it is off all threads list snapshots,
because it may be present on the list the VM thread holds.
Thread X cannot process its handshake because it is off the threads list, e.g.
the GC cannot see it.

While thread W holds a snapshot (Z) of the threads list with X on it, the handshake cannot be completed, and while the VM thread is running the handshake it
cannot process W's VM op.
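
Putting the pieces together, the wait-for cycle is:

Thread W   - holds snapshot Z (which contains X); waits on VMOperationRequest_lock for the VM thread to run its VM op
VM thread  - runs the handshake loop; waits for thread X to complete or cancel its handshake
Thread X   - already off the live threads list; waits on ThreadsSMRSupport::delete_lock() until snapshot Z is released by thread W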


Comments
jdk11 backport request: I would like to have the patch in OpenJDK 11 as well (for better parity with 11.0.8_oracle). The patch applies cleanly.
15-05-2020

Here's the changeset that added handshakes to the compiler:

$ hg -R open annot open/src/hotspot/share/runtime/sweeper.cpp | grep 'Handshake::execute'
51865: Handshake::execute(&tcl);
$ hg -R open log -r 51865
changeset:   51865:eb954a4b6083
user:        rkennke
date:        Mon Sep 24 18:44:39 2018 +0200
summary:     8132849: Increased stop time in cleanup phase because of single-threaded walk of thread stacks in NMethodSweeper::mark_active_nmethods()

This changeset was integrated in jdk-12+13.
26-10-2018

Leonid attached sampler_only.stack to this bug (JDK-8212933). Let's see if I can find the right thread stacks for this deadlock: Here's the VMThread: Thread 164 (Thread 0x2ae40b28a700 (LWP 27948)): #0 0x00002ae3937ac6aa in thread_at (i=<optimized out>, this=<optimized out>) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/threadSMR.hpp:188 #1 next (this=0x2ae40b289a70) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/threadSMR.hpp:365 #2 VM_HandshakeAllThreads::doit (this=0x2ae40b990c60) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/handshake.cpp:214 #3 0x00002ae393dea578 in VM_Operation::evaluate (this=this@entry=0x2ae40b990c60) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vm_operations.cpp:67 #4 0x00002ae393de7e0f in VMThread::evaluate_operation (op=0x2ae40b990c60, this=0x2ae40b990c60) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vmThread.cpp:370 #5 0x00002ae393de853a in VMThread::loop (this=this@entry=0x2ae3984b5000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vmThread.cpp:552 #6 0x00002ae393de871b in VMThread::run (this=0x2ae3984b5000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vmThread.cpp:267 #7 0x00002ae393ba0070 in thread_native_entry (thread=0x2ae3984b5000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/linux/os_linux.cpp:698 #8 0x00002ae3927b1e25 in start_thread () from /lib64/libpthread.so.0 #9 0x00002ae392cc234d in clone () from /lib64/libc.so.6 so the VMThread is in VM_HandshakeAllThreads::doit() and appears to be looping in Thread-SMR code. That matches what Robbin said. Here's the agent_sampler thread: Thread 136 (Thread 0x2ae494100700 (LWP 28023)): #0 0x00002ae3927b5945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00002ae393ba8d63 in os::PlatformEvent::park (this=this@entry=0x2ae454005800) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/posix/os_posix.cpp:1897 #2 0x00002ae393b50cf8 in ParkCommon (timo=0, ev=0x2ae454005800) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:399 #3 Monitor::IWait (this=this@entry=0x2ae398023c10, Self=Self@entry=0x2ae454004800, timo=timo@entry=0) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:768 #4 0x00002ae393b51f2e in Monitor::wait (this=this@entry=0x2ae398023c10, no_safepoint_check=<optimized out>, timeout=timeout@entry=0, as_suspend_equivalent=as_suspend_equivalent@entry=false) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:1106 #5 0x00002ae393de7867 in VMThread::execute (op=op@entry=0x2ae4940ffb10) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vmThread.cpp:657 #6 0x00002ae393d6a3bd in JavaThread::java_suspend (this=this@entry=0x2ae3985f2000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:2321 #7 0x00002ae3939ad7e1 in JvmtiSuspendControl::suspend (java_thread=java_thread@entry=0x2ae3985f2000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiImpl.cpp:847 #8 0x00002ae3939887ae in JvmtiEnv::SuspendThread (this=this@entry=0x2ae39801b270, java_thread=0x2ae3985f2000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiEnv.cpp:955 #9 0x00002ae39393a8c6 in jvmti_SuspendThread (env=0x2ae39801b270, thread=0x2ae49929fdf8) at /scratch/lmesnik/ws/hs-bigapps/build/linux-x64/hotspot/variant-server/gensrc/jvmtifiles/jvmtiEnter.cpp:527 #10 0x00002ae394d973ee in agent_sampler (jvmti=0x2ae39801b270, env=<optimized out>, 
p=<optimized out>) at /scratch/lmesnik/ws/hs-bigapps/closed/test/hotspot/jtreg/applications/kitchensink/process/stress/modules/libJvmtiStressModule.c:274 #11 0x00002ae3939ab24d in call_start_function (this=0x2ae454004800) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiImpl.cpp:85 #12 JvmtiAgentThread::start_function_wrapper (thread=0x2ae454004800, __the_thread__=<optimized out>) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/prims/jvmtiImpl.cpp:79 #13 0x00002ae393d7338a in JavaThread::thread_main_inner (this=this@entry=0x2ae454004800) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1795 #14 0x00002ae393d736c6 in JavaThread::run (this=0x2ae454004800) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1775 #15 0x00002ae393ba0070 in thread_native_entry (thread=0x2ae454004800) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/linux/os_linux.cpp:698 #16 0x00002ae3927b1e25 in start_thread () from /lib64/libpthread.so.0 #17 0x00002ae392cc234d in clone () from /lib64/libc.so.6 so the agent_sampler thread is doing a jvmti_SuspendThread() call which means it holds a ThreadsListHandle: build/macosx-x86_64-normal-server-fastdebug/hotspot/variant-server/gensrc/jvmti files/jvmtiEnter.cpp static jvmtiError JNICALL jvmti_SuspendThread(jvmtiEnv* env, jthread thread) { <snip> ThreadsListHandle tlh(this_thread); if (thread == NULL) { java_thread = current_thread; } else { err = JvmtiExport::cv_external_thread_to_JavaThread(tlh.list(), thread, &java_thread, NULL); if (err != JVMTI_ERROR_NONE) { return err; } } err = jvmti_env->SuspendThread(java_thread); return err; So I went looking for a thread that's "stuck" in the Thread-SMR delete thread protocol by searching for ThreadsSMRSupport::smr_delete. 
Imagine my surprise to find 50 threads waiting in a stack that looks like this one: Thread 52 (Thread 0x2ae64db18700 (LWP 29654)): #0 0x00002ae3927b5945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00002ae393ba8d63 in os::PlatformEvent::park (this=this@entry=0x2ae4600c1c00) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/posix/os_posix.cpp:1897 #2 0x00002ae393b50cf8 in ParkCommon (timo=0, ev=0x2ae4600c1c00) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:399 #3 Monitor::IWait (this=this@entry=0x87d000 <InterpreterMacroAssembler::profile_typecheck(RegisterImpl*, RegisterImpl*, RegisterImpl*)+640>, Self=Self@entry=0x2ae4600eb000, timo=timo@entry=0) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:768 #4 0x00002ae393b51ef1 in Monitor::wait (this=0x87d000 <InterpreterMacroAssembler::profile_typecheck(RegisterImpl*, RegisterImpl*, RegisterImpl*)+640>, no_safepoint_check=no_safepoint_check@entry=true, timeout=timeout@entry=0, as_suspend_equivalent=as_suspend_equivalent@entry=false) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/mutex.cpp:1091 #5 0x00002ae393d79213 in ThreadsSMRSupport::smr_delete (thread=thread@entry=0x2ae4600eb000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/threadSMR.cpp:981 #6 0x00002ae393d732c8 in smr_delete (this=0x2ae4600eb000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:209 #7 JavaThread::thread_main_inner (this=this@entry=0x2ae4600eb000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1801 #8 0x00002ae393d736c6 in JavaThread::run (this=0x2ae4600eb000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/thread.cpp:1775 #9 0x00002ae393ba0070 in thread_native_entry (thread=0x2ae4600eb000) at /scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/os/linux/os_linux.cpp:698 #10 0x00002ae3927b1e25 in start_thread () from /lib64/libpthread.so.0 #11 0x00002ae392cc234d in clone () from /lib64/libc.so.6 So this thread is in the ThreadsSMRSupport::smr_delete() loop that says it can't safely leave the building while there's a ThreadsList that mentions it. If I understand Robbin's analysis correctly, we have the agent_sampler thread that holds a ThreadsListHandle that mentions some target thread 'X'. That means that 'X' (and anyone else on that ThreadsList) cannot leave the building until the agent_sampler thread suspend operation is done. That part makes perfect sense to me. Now, we have the VMThread in VM_HandshakeAllThreads::doit() and it is trying process the handshakes the threads. It sets a counter here: open/src/hotspot/share/runtime/handshake.cpp int number_of_threads_issued = 0; for (JavaThreadIteratorWithHandle jtiwh; JavaThread *thr = jtiwh.next(); ) { set_handshake(thr); number_of_threads_issued++; } If target thread ('X') is on this first ThreadsList, then it will be counted. Next we have to wait for the handshakes to complete: do { // Check if handshake operation has timed out if (handshake_has_timed_out(start_time)) { handle_timeout(); } // Have VM thread perform the handshake operation for blocked threads. // Observing a blocked state may of course be transient but the processing is guarded // by semaphores and we optimistically begin by working on the blocked threads { // We need to re-think this with SMR ThreadsList. // There is an assumption in the code that the Threads_lock should // be locked during certain phases. 
MutexLockerEx ml(Threads_lock, Mutex::_no_safepoint_check_flag); for (JavaThreadIteratorWithHandle jtiwh; JavaThread *thr = jtiwh.next(); ) { // A new thread on the ThreadsList will not have an operation, // hence it is skipped in handshake_process_by_vmthread. thr->handshake_process_by_vmthread(); } } while (poll_for_completed_thread()) { // Includes canceled operations by exiting threads. number_of_threads_completed++; } } while (number_of_threads_issued > number_of_threads_completed); The original loop set number_of_threads_issued based on one JavaThreadIteratorWithHandle (that included target thread 'X'). A different JavaThreadIteratorWithHandle (that does not include target thread 'X') is used to call thr->handshake_process_by_vmthread() on each thread, but only does work if the thread is blocked. Since the target thread 'X' has exited, it is not blocked so handshake_process_by_vmthread() would not have done any work anyway. The termination condition of the loop is incremented here: while (poll_for_completed_thread()) { // Includes canceled operations by exiting threads. number_of_threads_completed++; } If poll_for_completed_thread() somehow misses a thread, then number_of_threads_completed will remain < number_of_threads_issued and we'll never get out of the loop. Of course, I only have the sampler_only.stack file and not a live process so I don't really know if we're stuck in this do { ... } while loop... :-) poll_for_completed_thread() should return true when a thread has completed its handshake itself, or when a thread has cancelled its handshake itself, or when handshake_process_by_vmthread() has completed the handshake on behalf of a blocked thread. If Robbin could confirm that this analysis matches what he is thinking that would be great. Update: Additional note from Robbin about thread 'X': So for the terminating thread 'X', the increment is supposed to happen went it increases the _done semaphore on line 994. The increase of the semaphore will result in a inc on number_of_threads_completed. But thread 'X' is stuck on line 981. 978 // Wait for a release_stable_list() call before we check again. No 979 // safepoint check, no timeout, and not as suspend equivalent flag 980 // because this JavaThread is not on the Threads list. 981 ThreadsSMRSupport::delete_lock()->wait(Mutex::_no_safepoint_check_flag, 0, 982 !Mutex::_as_suspend_equivalent_flag); 983 if (EnableThreadSMRStatistics) { 984 _delete_lock_wait_cnt--; 985 } 986 987 ThreadsSMRSupport::clear_delete_notify(); 988 ThreadsSMRSupport::delete_lock()->unlock(); 989 // Retry the whole scenario. 990 } 991 992 if (ThreadLocalHandshakes) { 993 // The thread is about to be deleted so cancel any handshake. 994 thread->cancel_handshake(); 995 }
26-10-2018

Dan's analysis is correct. Note that in 11 only ZGC uses handshakes, so this deadlock can only happen if you use ZGC + Thread.stop/suspend and similar JVMTI functions. In 12 the compiler also utilizes handshakes.
26-10-2018

With proposed patch issue reproduced with hs_err (file in attachement): # # A fatal error has been detected by the Java Runtime Environment: # # Internal Error (/scratch/lmesnik/ws/hs-bigapps/open/src/hotspot/share/runtime/vmThread.cpp:607), pid=26188, tid=26325 # assert(!t->have_threads_list()) failed: Deadlock if we have exiting threads and if vm thread is running an VM op. # # JRE version: Java(TM) SE Runtime Environment (12.0) (slowdebug build 12-internal+0-2018-10-24-2022348.lmesnik.hs-bigapps) # Java VM: Java HotSpot(TM) 64-Bit Server VM (slowdebug 12-internal+0-2018-10-24-2022348.lmesnik.hs-bigapps, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64) # Core dump will be written. Default location: Core dumps may be processed with "/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e %P %I %h" (or dumping to /scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSampler_java/scratch/0/core.26188) # # If you would like to submit a bug report, please visit: # http://bugreport.java.com/bugreport/crash.jsp # --------------- S U M M A R Y ------------ Command Line: -XX:MaxRAMPercentage=2 -XX:MaxRAMPercentage=50 -XX:+CrashOnOutOfMemoryError -Djava.net.preferIPv6Addresses=false -XX:-PrintVMOptions -XX:+DisplayVMOutputToStderr -XX:+UsePerfData -Xlog:gc*,gc+heap=debug:gc.log:uptime,timemillis,level,tags -XX:+DisableExplicitGC -XX:+PrintFlagsFinal -XX:+StartAttachListener -XX:NativeMemoryTracking=detail -XX:+FlightRecorder --add-exports=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-exports=java.xml/com.sun.org.apache.xerces.internal.parsers=ALL-UNNAMED --add-exports=java.xml/com.sun.org.apache.xerces.internal.util=ALL-UNNAMED -Djava.io.tmpdir=/scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSampler_java/scratch/0/java.io.tmpdir -Duser.home=/scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSampler_java/scratch/0/user.home -agentpath:/scratch/lmesnik/ws/hs-bigapps/build/linux-x64/images/test/hotspot/jtreg/native/libJvmtiStressModule.so applications.kitchensink.process.stress.Main /scratch/lmesnik/ws/hs-bigapps/build/linux-x64/test-support/jtreg_closed_test_hotspot_jtreg_applications_kitchensink_KitchensinkSampler_java/scratch/0/kitchensink.final.properties Host: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz, 32 cores, 235G, Oracle Linux Server release 7.3 Time: Wed Oct 24 13:28:30 2018 PDT elapsed time: 3 seconds (0d 0h 0m 3s) --------------- T H R E A D --------------- Current thread (0x00002b9f68006000): JavaThread "Jvmti-AgentSampler" daemon [_thread_in_vm, id=26325, stack(0x00002b9f88808000,0x00002b9f88909000)] _threads_hazard_ptr=0x00002b9f68008e30, _nested_threads_hazard_ptr_cnt=0 Stack: [0x00002b9f88808000,0x00002b9f88909000], sp=0x00002b9f88907440, free space=1021k Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x12a04bb] VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x6a5 V [libjvm.so+0x129fdb3] VMError::report_and_die(Thread*, void*, char const*, int, char const*, char const*, __va_list_tag*)+0x57 V [libjvm.so+0x8ca5ab] report_vm_error(char const*, int, char const*, char const*, ...)+0x152 V [libjvm.so+0x12e485b] VMThread::execute(VM_Operation*)+0x99 V 
[libjvm.so+0xdbec54] JvmtiEnv::GetStackTrace(JavaThread*, int, int, _jvmtiFrameInfo*, int*)+0xc0 V [libjvm.so+0xd677cf] jvmti_GetStackTrace+0x2c2 C [libJvmtiStressModule.so+0x302d] trace_stack+0xa9 C [libJvmtiStressModule.so+0x3daf] agent_sampler+0x21f V [libjvm.so+0xddf595] JvmtiAgentThread::call_start_function()+0x67 V [libjvm.so+0xddf52a] JvmtiAgentThread::start_function_wrapper(JavaThread*, Thread*)+0xf2 V [libjvm.so+0x1218945] JavaThread::thread_main_inner()+0x17f V [libjvm.so+0x12187ad] JavaThread::run()+0x273 V [libjvm.so+0x100e4ee] thread_native_entry(Thread*)+0x192
24-10-2018