JDK-6700114 : Assertion (_thread->get_interp_only_mode() == 1,"leaving interp only when mode not one")
  • Type: Bug
  • Component: hotspot
  • Sub-Component: jvmti
  • Affected Version: hs12,hs14
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2008-05-10
  • Updated: 2010-04-03
  • Resolved: 2009-03-18
The Version table provides details related to the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

JDK 6: 6u14 (Fixed)
JDK 7: 7 (Fixed)
Other: hs14 (Fixed)
Related Reports
Problem Description    : hs12-b03 fastdebug build fails with assertion
 (_thread->get_interp_only_mode() == 1,"leaving interp only when mode not one")

VM Release              : hs12
VM Builds               : b03
Build type              : fastdebug
VM flavors              : client | server
VM Modes                : -Xmixed | -Xcomp | -Xint
Java flags              :
Platform(s)             : solaris-i586, solaris-amd64 (didn't try other platforms)
This failure has been observed in nightly testing.


EVALUATION There is more than one code path for this failure mode; this fix covers one of them. See the following bug for discussion of the other code path: 6814943 3/4 getcpool001 catches more than one JvmtiThreadState problem

EVALUATION http://hg.openjdk.java.net/jdk7/hotspot-rt/hotspot/rev/0386097d43d8

SUGGESTED FIX See the attached 6700114-webrev-cr0.tgz for the proposed fix.

SUGGESTED FIX Here is a very simple fix in JvmtiThreadState::state_for_while_locked() for this specific bug:

--- a/src/share/vm/prims/jvmtiThreadState.hpp  Sat Dec 20 09:59:01 2008 -0800
+++ b/src/share/vm/prims/jvmtiThreadState.hpp  Mon Feb 02 15:08:10 2009 -0700
@@ -319,6 +319,11 @@ class JvmtiThreadState : public CHeapObj
     JvmtiThreadState *state = thread->jvmti_thread_state();
     if (state == NULL) {
+      if (thread->is_exiting()) {
+        // don't add a JvmtiThreadState to a thread that is exiting
+        return NULL;
+      }
+
       state = new JvmtiThreadState(thread);
     }
     return state;

With the above fix in place along with the delay code and debug guarantee() calls from comment #9, I let the test run in a loop over the weekend. No failures in 25,000 iterations!

EVALUATION Scenario #2 is:

  2) how can we create a JvmtiThreadState with a bad thread reference?

and I have identified a race that fits this scenario. When a JavaThread is exiting, it marks itself as "is exiting" and eventually removes itself from the "threads list". Thread list walkers need to check the "is exiting" flag before using a JavaThread returned from the threads list.

JvmtiEventControllerPrivate::recompute_enabled() is used to recompute event flags and is called via several code paths. In threads.log.9522, it is reached via JvmtiExport::post_vm_death(); this is the failure that happens near the end of the test. In threads.log.9875, it is reached via JVM/TI SetEventNotificationMode(). Here is the errant code block:

    // We need to create any missing jvmti_thread_state if there are globally set thread
    // filtered events and there weren't last time
    if (    (any_env_thread_enabled & THREAD_FILTERED_EVENT_BITS) != 0 &&
        (was_any_env_thread_enabled & THREAD_FILTERED_EVENT_BITS) == 0) {
      assert(JvmtiEnv::is_vm_live() || (JvmtiEnv::get_phase()==JVMTI_PHASE_START),
        "thread filtered events should not be enabled when VM not in start or live phase");
      {
        MutexLocker mu(Threads_lock);   // hold the Threads_lock for the iteration
        for (JavaThread *tp = Threads::first(); tp != NULL; tp = tp->next()) {
          JvmtiThreadState::state_for_while_locked(tp);  // create the thread state if missing
        }
      } // release Threads_lock
    }

The above loop does not check tp->is_exiting() so it can create a JvmtiThreadState for a thread that is exiting. However, the window is very tight in JavaThread::exit():

    if (jvmti_thread_state() != NULL) {
      JvmtiExport::cleanup_thread(this);
    }

    #ifndef SERIALGC
    <snip>
    #endif

    // Remove from list of active threads list, and notify VM thread if we are the last non-daemon thread
    Threads::remove(this);

The new JvmtiThreadState has to be created by state_for_while_locked() *after* cleanup_thread() is called and before Threads::remove() can grab the Threads_lock. Pretty darn tight.
See comment #9 for some debug code that makes the bug much more reproducible.

The result of the race is the creation of a JvmtiThreadState with a dangling JavaThread pointer. In the best case, the old JavaThread's memory is never reused and nothing bad happens. In the original crash, the JavaThread's memory was recycled, so the memory occupied by interp_only_mode was no longer a 0 or a 1; I think that indicates the memory was reused by something other than another JavaThread. In the product bits, this can result in an unexpected decrement by one of that memory location. In bits with assertions enabled, only a decrement from one to zero will make it past the assert().

If the memory is reused by another JavaThread, this can result in an unexpected decrement of that JavaThread's interp_only_mode field. In a really strange twist, the reusing JavaThread will also have its own JvmtiThreadState, so there will be two JvmtiThreadState objects that refer to the same single JavaThread. I suspect that could result in double operations on that JavaThread when the operations happen via the JvmtiThreadState list links. Fortunately, this whole scenario can only happen when JVM/TI is enabled.

EVALUATION Coleen made the observation in comment #3 that:

    The jvmtiThreadState points to a _thread that seems to have been
    deallocated and _interp_only_mode is 0xabababab

This makes me wonder if the jvmtiThreadState itself is also deallocated. jvmtiThreadState deallocation is protected by the JvmtiThreadState_lock.

Here's a snippet of the crashing thread stack:

    [7] report_assertion_failure(0xd1a15b33, 0xcb, 0xd1a15ade, 0xd100040a), at 0xd08d9919
    [8] JvmtiThreadState::leave_interp_only_mode(0x8211168, 0x0, 0x0, 0xd0dc59c2), at 0xd1000445
    [9] JvmtiEventControllerPrivate::leave_interp_only_mode(0x8211168, 0x8211168, 0xd023e738, 0xd0dc6782), at 0xd0dc5b48
    [10] JvmtiEventControllerPrivate::recompute_thread_enabled(0x8211168, 0xd1cb8460, 0xd023e788, 0xd0dc69b9), at 0xd0dc68ed
    [11] JvmtiEventControllerPrivate::recompute_enabled(0x807be00), at 0xd0dc6d37

JvmtiEventControllerPrivate::recompute_enabled() walks the list of JvmtiThreadState objects.

    [12] JvmtiEventControllerPrivate::vm_death(0xc9d32398, 0xd1cf8d40, 0xc9c03998), at 0xd0dc9b80
    [13] JvmtiEventController::vm_death(0xc9d32398, 0xd1d15cec, 0x0, 0x1, 0xd1cf8d40, 0xd1cd6638), at 0xd0dca22a

JvmtiEventController::vm_death() grabs the JvmtiThreadState_lock.
    [14] JvmtiExport::post_vm_death(0xd023ea0c, 0xd1fd9ec4, 0x1, 0xd1ffc28c, 0xd1cbdf30, 0x807b038), at 0xd0dcdb84
    [15] before_exit(0x807c800), at 0xd0a8801b
    [16] JVM_Halt(0x5f, 0xc9d323c4, 0xc9d323c8, 0xd017ed18), at 0xd0b95458

Our JvmtiThreadState is passed as a parameter to JvmtiEventControllerPrivate::recompute_thread_enabled():

    (dbx) x 0x8211168/24X
    0x08211168: 0xd1cfc9f8 0x08210000 0x00000000 0xf1f1f100
    0x08211178: 0x00000000 0x00000000 0x00000064 0xf1f1f1f1
    0x08211188: 0x00000000 0x00000000 0x0000ead0 0x082111f8
    0x08211198: 0x00000000 0x08211268 0x00000000 0x00000000
    0x082111a8: 0x00000000 0x00000000 0xf1f1f101 0x00000000
    0x082111b8: 0x0000000a 0x00000000 0x00000000 0x00000000
    (dbx) x 0xd1cfc9f8/X
    0xd1cfc9f8: __1cQJvmtiThreadStateG__vtbl_ : 0xd1a14ba8

The object's vtable link confirms that we have a JvmtiThreadState. The offset to the _thread field is 4, the _next field is 48 and the _prev field is 52.

The _thread field is easy: 0x08210000. Here are the _next and _prev fields:

    (dbx) x 0x8211168+48/X
    0x08211198: 0x00000000
    (dbx) x 0x8211168+52/X
    0x0821119c: 0x08211268

Here's a dump of the _thread field:

    (dbx) x 0x08210000/20X
    0x08210000: 0x00000041 0x00000000 0xabababab 0xabababab
    0x08210010: 0xabababab 0xabababab 0xf1f1f1f1 0x00000014
    0x08210020: 0xd1cddd10 0x00000c60 0x00000000 0xc9ca8a30
    0x08210030: 0xf1f1001f 0xabababab 0xabababab 0xabababab
    0x08210040: 0xabababab 0x00000000 0x00000041 0x00000000

Ouch! That doesn't look like a JavaThread.
Here is the previous JvmtiThreadState:

    (dbx) x 0x08211268/24X
    0x08211268: 0xd1cfc9f8 0x0814bc00 0x00000000 0xf1f1f100
    0x08211278: 0x00000000 0x00000000 0x00000064 0xf1f1f1f1
    0x08211288: 0x00000000 0x00000000 0x0000ead0 0x082112f8
    0x08211298: 0x08211168 0x08211368 0x00000000 0x00000000
    0x082112a8: 0x00000000 0x00000000 0xf1f1f101 0x00000000
    0x082112b8: 0x0000000a 0x00000000 0x00000000 0x00000000

The _next field in the previous JvmtiThreadState refers to our original JvmtiThreadState:

    (dbx) x 0x08211268+48/X
    0x08211298: 0x08211168

And here is the _thread for the previous JvmtiThreadState:

    (dbx) x 0x0814bc00/20X
    0x0814bc00: 0xd1cfe3c8 0x00000000 0x00000000 0x00000000
    0x0814bc10: 0x0814ba48 0x0814ca98 0x00000000 0x00000000
    0x0814bc20: 0x0814dcb8 0x00000000 0xcdc9db60 0x00000000
    0x0814bc30: 0x00000000 0x00000000 0x00000000 0x00000000
    0x0814bc40: 0xd1d16018 0x00000000 0x00000000 0x00000000
    (dbx) x 0xd1cfe3c8/X
    0xd1cfe3c8: __1cXLowMemoryDetectorThreadG__vtbl_ : 0xd1a31804

It looks like we have a valid JvmtiThreadState in hand, but the JavaThread we point at is toast. That matches Coleen's observations in comment #3.

So we need to look at a couple of possible scenarios:

  1) how can we destroy a JavaThread without also destroying the associated JvmtiThreadState?
  2) how can we create a JvmtiThreadState with a bad thread reference?

Here is the "normal" JvmtiThreadState destruction path:

    runtime/thread.cpp:             JavaThread::exit()
    prims/jvmtiExport.cpp:          JvmtiExport::cleanup_thread()
    prims/jvmtiEventController.cpp: JvmtiEventController::thread_ended()
    prims/jvmtiEventController.cpp: JvmtiEventControllerPrivate::thread_ended()
                                      grabs JvmtiThreadState_lock
                                      deletes JvmtiThreadState

Since JvmtiThreadState objects are destroyed on the JavaThread destruction path, scenario #1 isn't going to fit the bill. However, I think I see a race that might fit scenario #2. More later.