JDK-8074292 : nsk/jdb/kill/kill001: generateOopMap.cpp assert(bb->is_reachable()) failed
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 9,14,15,16,22
  • Priority: P3
  • Status: Open
  • Resolution: Unresolved
  • OS: linux
  • CPU: x86_64
  • Submitted: 2015-03-03
  • Updated: 2024-10-25
Description
Has failed intermittently on linux for at least 5 years.

RULE nsk/jdb/kill/kill001 Crash Internal Error ...generateOopMap.cpp...assert(bb->is_reachable()) failed: getting result from unreachable basicblock

hs.err:
#  Internal Error (/opt/jprt/T/P1/154159.amurillo/s/hotspot/src/share/vm/oops/generateOopMap.cpp:2204), pid=21518, tid=26
#  assert(bb->is_reachable()) failed: getting result from unreachable basicblock
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-fastdebug-20150227154159.amurillo.jdk9-hs-2015-02--b00 compiled mode solaris-amd64 compressed oops)


---------------  T H R E A D  ---------------

Current thread (0x00000000006c5000):  VMThread [stack: 0xfffffd7fa23cc000,0xfffffd7fa24cc000] [id=26]

Stack: [0xfffffd7fa23cc000,0xfffffd7fa24cc000],  sp=0xfffffd7fa24c9eb0,  free space=1015k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x1c2ffb8]  void VMError::report(outputStream*)+0xa18
V  [libjvm.so+0x1c3141c]  void VMError::report_and_die()+0x5ac
V  [libjvm.so+0xfbe54e]  void report_vm_error(const char*,int,const char*,const char*)+0x7e
V  [libjvm.so+0x118e9f3]  void GenerateOopMap::result_for_basicblock(int)+0xc3
V  [libjvm.so+0x18cc4e5]  void OopMapForCacheEntry::compute_map(Thread*)+0xf5
V  [libjvm.so+0x18cec70]  void OopMapCacheEntry::fill(methodHandle,int)+0x230
V  [libjvm.so+0x18d0a91]  void OopMapCache::lookup(methodHandle,int,InterpreterOopMap*)const+0x701
V  [libjvm.so+0x1228ed9]  void InstanceKlass::mask_for(methodHandle,int,InterpreterOopMap*)+0x99
V  [libjvm.so+0x17ed5a5]  void Method::mask_for(int,InterpreterOopMap*)+0x135
V  [libjvm.so+0x16011c7]  void VM_GetOrSetLocal::doit()+0x57
V  [libjvm.so+0x1c698b2]  void VM_Operation::evaluate()+0x122
V  [libjvm.so+0x1c65b9b]  void VMThread::evaluate_operation(VM_Operation*)+0x20b

Comments
It would be really good to know which bci was unreachable. Was it the exception handler bci (can't remember)? [~never] Your patch in mask_exc_handler.patch is helpful.

Yes, we can safepoint at any bci when we're running with single stepping turned on. Otherwise, it's only backwards branches and returns. When I looked at this years ago I thought it was getting the async exception after we exited the monitor in monitorexit, so we would re-execute the monitorexit and get infinite IMSE exceptions. The 2019 hs_err_pid file shows this too, but the stack trace in the bug report shows the call coming from GetOrSetLocal, which is something I didn't look at.

Both monitorexits are JRT_LEAF so shouldn't get an async exception, and monitorenter is a JRT_ENTRY_NO_ASYNC so also shouldn't, but the aload 4 can. That doesn't explain the IMSE, but it would explain the assert if it's at the exception handler bci. generateOopMap doesn't know about async exceptions. I guess it should. Maybe all exception handlers should be reachable if you're single stepping.
25-10-2024

We've started seeing this same failure mode in JDK-8339293 so I decided to investigate a bit. I made the following change to compute the mask for the handler bci in InterpreterRuntime::exception_handler_for_exception:

diff --git a/src/hotspot/share/interpreter/interpreterRuntime.cpp b/src/hotspot/share/interpreter/interpreterRuntime.cpp
index 525258b1ebd..9247b74c462 100644
--- a/src/hotspot/share/interpreter/interpreterRuntime.cpp
+++ b/src/hotspot/share/interpreter/interpreterRuntime.cpp
@@ -37,6 +37,7 @@
 #include "interpreter/interpreter.hpp"
 #include "interpreter/interpreterRuntime.hpp"
 #include "interpreter/linkResolver.hpp"
+#include "interpreter/oopMapCache.hpp"
 #include "interpreter/templateTable.hpp"
 #include "jvm_io.h"
 #include "logging/log.hpp"
@@ -546,6 +547,11 @@ JRT_ENTRY(address, InterpreterRuntime::exception_handler_for_exception(JavaThrea
   }
 #endif
 
+  if (handler_bci > 0) {
+    InterpreterOopMap mask;
+    OopMapCache::compute_one_oop_map(h_method, handler_bci, &mask);
+  }
+
   // notify JVMTI of an exception throw; JVMTI will detect if this is a first
   // time throw or a stack unwinding throw and accordingly notify the debugger
   if (JvmtiExport::can_post_on_exceptions()) {

The intent was to see the actual dispatch point which led to the unreachable bci. I got a single crash; it seems the extra work may have perturbed the timing.
V  [libjvm.so+0xdc7ef4]  GenerateOopMap::result_for_basicblock(int)+0x74  (generateOopMap.cpp:2226)
V  [libjvm.so+0x14dca44]  OopMapForCacheEntry::compute_map(Thread*)+0xf4  (oopMapCache.cpp:118)
V  [libjvm.so+0x14dde50]  OopMapCacheEntry::fill(methodHandle const&, int)+0xd0  (oopMapCache.cpp:336)
V  [libjvm.so+0x14de827]  OopMapCache::compute_one_oop_map(methodHandle const&, int, InterpreterOopMap*)+0x67  (oopMapCache.cpp:616)
V  [libjvm.so+0xea2653]  InterpreterRuntime::exception_handler_for_exception(JavaThread*, oopDesc*)+0x503  (interpreterRuntime.cpp:552)
j  nsk.jdb.kill.kill001.MyThread.run()V+86
J 2984 jvmci java.lang.Thread.run()V java.base@24-internal (23 bytes) @ 0x00007f6fbcab8da0 [0x00007f6fbcab8d20+0x0000000000000080]

# fatal error: getting result from unreachable basicblock at bci 92 in method run

From nsk.jdb.kill.kill001.MyThread.run:

  85: monitorenter
  86: aload 4
  88: monitorexit
  89: goto 100

Exception table:
  from    to  target type
    59    68    71   any
    71    76    71   any
    86    89    92   any

So presumably we deoptimized on return from the monitorenter, resumed in the interpreter at the next bytecode at 86, and then were asked to deliver some async exception. That seems to agree with Dean's last comment. Even in the absence of deopt, can't we safepoint at any bytecode in the interpreter and be asked to deliver an asynchronous exception? I had some vague memory that the interpreter safepoint table only set up certain bytecodes for safepoint checks, but looking at the code that's definitely not true.

So it seems that handler really is reachable if we permit asynchronous delivery at bci 86. It sure seems like an -Xint test that repeatedly exited and entered monitors while another thread threw async exceptions at it could produce the same problem. You just need to get lucky and safepoint on the aload, or even the monitorenter, since interpreter safepoint checks happen before the bytecode.
If the interpreter can reach a bytecode, then it sure seems like it must be treated as reachable by generateOopMap. What interpreter oop map would be produced for these unreachable bcis? The failure mode in JDK-8339293 is that JVMTI is trying to set a local, so it computes the oopmap for the bci to make sure it's not violating oop vs. primitive when setting locals. We get an assert in debug, but in product it will compute some sort of result and it's unclear what it might contain.
25-10-2024

It's unreachable according to the logic in GenerateOopMap::do_exception_edge() for monitorexit. If the monitors are matching then it assumes no exception can be thrown. It does not take into account asynchronous exceptions. If no exceptions can reach the handler, the exception handler is marked not-reachable.
22-07-2024

[~dlong] that sounds slightly different. The original issue was the recursive monitorexit that kept rethrowing IMSE until we ran out of memory. What you describe seems much more direct, and I'm not at all clear why something is being deemed "unreachable" in that case.
16-10-2023

I decided to take another look at this. I still don't know how to get an IllegalMonitorStateException, but getting the assert is still possible. Just run a test in jdb that contains a synchronized block. Single-step to the monitorexit. Then throw an exception with the "kill" jdb command and single-step again. Now we are in the "unreachable" exception handler. Next try to print the value of a local variable and the assert will be triggered.
09-10-2023
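The repro steps above only need a target with a synchronized block and a local variable to print. A minimal sketch of such a target (the class name SyncTarget is made up for illustration; the real target is the nsk kill001 test, not this):

```java
// Minimal jdb target: stop in main, single-step to the monitorexit at the
// end of the synchronized block, "kill" the thread with an exception, step
// again into the handler, then print "local" to force an oop map lookup.
public class SyncTarget {
    static final Object lock = new Object();

    public static void main(String[] args) {
        int local = 42;            // local variable to print from jdb
        synchronized (lock) {
            local++;               // step from here to the monitorexit
        }
        System.out.println(local); // prints 43 when run without jdb
    }
}
```

Run it as `jdb SyncTarget`, `stop in SyncTarget.main`, then follow the step/kill/step/print sequence described above.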

Maybe there's a scenario with monitorenter that could cause this. A missed monitorenter is just as bad as a double monitorexit. Let's say we are compiled, doing the monitorenter, which takes the slow path because of contention. When the thread blocks, it probably allows a safepoint, allowing jdb to request interpreter-only mode and making us deoptimize the method. Maybe for good measure we also allow the asynchronous Thread.kill to happen, all of this before the monitorenter locks the monitor. If there is a path that allows us to return without locking the monitor, and after deoptimizing we don't re-execute the monitorenter, then the monitorexit should throw IllegalMonitorStateException.
05-03-2022

Unfortunately the fix I went with for JDK-8271055 only helps when deoptimizing with +VerifyStack. I don't think it's possible to deoptimize on a monitorexit and then reexecute in the interpreter if an exception is pending. If we could reexecute the monitorexit after deoptimizing w/o an exception, then maybe that could explain the bug, but I was assuming we are running interpreted only because of jdb setting breakpoints. But maybe there is a window because of -Xcomp. I'm OK with closing this as CNR for now.
05-03-2022

My test case was a hack to show the infinite loop for IMSE exceptions. I essentially added a monitorexit bytecode in the exception handler range using jasm, or something like that. I'm not sure the verifier would allow that in real life. Nobody's been able to reproduce this problem for however many years this bug has been open. We've closed it several times as CNR. The reason I thought JDK-8271055 might fix it is that if you have an exception handling range:

150  65  aload_3
150  66  monitorexit   <= deopt after monitorexit
150  67  goto 77

and deoptimized after executing the monitorexit but didn't advance the BCP past the monitorexit, then you could execute the monitorexit again in the interpreter and see the GC mask_for() call on the stack. But I think it's impossible to deopt in the monitorexit bytecode because it's now a JRT_LEAF. I think we should close this again as CNR, so either 1. it spitefully (!) shows up just after we do this, or 2. we never see it again because JDK-8271055 fixed it.
04-03-2022

I tried reproducing this in jdb with a jdk-16+10 build, but I still haven't been able to trigger the extra monitorexit and IllegalMonitorStateException. [~coleenp], you said before you could reproduce a crash in a small test case:

> I can get my small test case to crash in the same way now.

Could you share your test and the steps to reproduce?
02-03-2022

[~dholmes] You're right, I forgot about the rethrow. How about this?

synchronized (lock) {          // lock count = 1
    try {
        synchronized (lock) {  // lock count = 2
        }                      // A: monitorexit 1
    } catch (Throwable t) {
        // eat async exception?
    }
    // B: critical section
}                              // monitorexit 2

Do bad things happen if the monitorexit at A succeeds but then throws an async exception?
04-02-2022

[~dlong] Yes that exposes B to execution without the lock actually being held. Assuming an async exception can be thrown whilst we still appear to be executing the monitorexit.
04-02-2022

Async exceptions are only processed in the transition from native back to Java, and in the transition from the VM back to Java, except when using the JRT_ENTRY_NO_ASYNC/JRT_BLOCK_NO_ASYNC wrappers or when at_poll_safepoint (not poll return). The transition code has been simplified, but functionally the processing of async exceptions has not changed, so I doubt the crash was fixed by any of those changes. Having said that, I also can't see how we could even get an exception installed at monitorexit. Also, after 8253540, monitorexit is a JRT_LEAF called through call_VM_leaf(), so we won't even check exceptions upon returning. That means bcp will be advanced, and if anything we will process the exception at the next bytecode. Before 8253540, monitorexit was a JRT_ENTRY_NO_ASYNC function, so after returning we could have jumped to the exception processing logic before advancing bcp. But again, how did we get the exception in monitorexit in the first place?
03-02-2022

> If it advanced the bci beyond where the monitorexit disabled async exceptions Where in the code does that happen? It's possible this could be related to JDK-8271055, though all the backtraces I've seen show Method::mask_for() as the trigger and not the deoptimization path. I'm curious enough to try to write a reproducer and step through it in gdb.
03-02-2022

I wonder if this isn't the same thing as JDK-8271055. From my memory of debugging this, I couldn't find any paths that could throw an async exception improperly from monitorenter or monitorexit, but there were several recent deoptimizations. If deoptimization advanced the bci beyond where the monitorexit disabled async exceptions, we might get the results we see in the hs_err_pid file. That would also explain the infrequency and longstanding nature of this bug. It would be great if this is so.
03-02-2022

[~dlong] yes your analysis is correct, but note we won't actually execute the "B" critical section as the exception is propagating - however if B were in a finally block then yes it would be bad. There have been further changes in how async exceptions get processed since this was last discussed and there are further changes in the pipeline as well. So I don't know if this particular issue is already addressed. Unfortunately it remains very difficult to know exactly where the VM may materialize an async exception - but [~pchilanomate] may be able to clarify the current state.
03-02-2022

I think the correct fix for this is to not allow monitorexit to throw an async exception. Worse than getting into an infinite IMSE loop, I think the async exception problem can allow critical sections to become unlocked because of too many monitorexits, for example if the caller already locked the object, or if there are nested synchronized blocks:

synchronized (lock) {        // lock count = 1
    synchronized (lock) {    // lock count = 2
    }                        // A: monitorexit 1
    // B: critical section
}                            // monitorexit 2

Unless I am missing something, if the monitorexit at A succeeds but then throws an async exception, the monitorexit in the javac-generated exception handler will change the lock count from 1 to 0, leaving the critical section B unprotected and allowing the shared state to be corrupted by other threads. Based on the exception table generated by javac, it's OK for monitorexit to check for async exceptions before performing the unlock, but not after. And monitorenter cannot check for async exceptions at all. Alternatively, to check for an async exception after executing the bytecode, the bci must be advanced to the next instruction first. I guess it's debatable which async exception problem is worse: inconsistent state due to an extra unlock, or inconsistent state due to early exit of the critical section. To me the former seems worse.
03-02-2022
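The hold-count arithmetic in that scenario can be modelled in plain Java with java.util.concurrent.locks.ReentrantLock. This is only an analogue of the interpreter's monitor count, not the actual monitorexit bytecode, and the class name ExtraUnlockDemo is made up for illustration:

```java
import java.util.concurrent.locks.ReentrantLock;

// Analogue of the nested-synchronized scenario: one extra unlock (the
// handler's second monitorexit) drops the hold count to 0, so "B" runs
// without the lock, and any further unlock throws
// IllegalMonitorStateException.
public class ExtraUnlockDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();     // count = 1 (outer synchronized)
        lock.lock();     // count = 2 (inner synchronized)
        lock.unlock();   // A: count = 1, as expected
        lock.unlock();   // extra unlock by the handler: count = 0
        // "B" would now execute unprotected:
        System.out.println(lock.isHeldByCurrentThread()); // prints false
        try {
            lock.unlock();   // a further monitorexit now fails
        } catch (IllegalMonitorStateException e) {
            System.out.println("IllegalMonitorStateException");
        }
    }
}
```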

I ran tier 1-7 testing with a NoAsyncExceptionCheckVerifier in InterpreterRuntime::monitorexit and it did not trigger. There is no obvious path by which an async exception can be installed whilst executing monitorexit.
26-09-2020

David pointed out to me that the exception range is exclusive, so that here, bytecode 67 is not in the exception range:

150  65  aload_3
150  66  monitorexit
150  67  goto 77
15-04-2020

Stopping at a bytecode in the range of the monitorexit exception block can install an async exception. This can happen because, if a safepoint is requested during single stepping, InterpreterRuntime::at_safepoint() is called for each bytecode. Given how rarely this bug reproduces, that is consistent with a small race at the bytecode after the monitorexit, i.e. the goto 77 bytecode. For a feature that is deprecated, I don't know how important it is to install the async exception at all bytecodes, and even so it's probably not important when it's installed as long as it is eventually installed. I have a patch that only installs the async exception at safepoints that can be reached by backwards branches and returns, which relies on the safepoint polling code. I don't know if this is worth doing, versus letting async exceptions only be installed for calls into the runtime where we know it is safe. Sending out RFR.
13-04-2020

Ideally we would have a way to clearly mark a region where async exceptions should not be installed. Barring that, it would be good if monitorexit could be specially handled. Indeed, both monitorenter and monitorexit do get special handling; it just seems to me that somewhere along the way we (re)introduced an async-exception-checking codepath that we really don't want in this case. (Of course that codepath may not be able to determine what the "case" is.) There are a number of potential ways to try to address this, but all of them seem a high cost for a facility that is deprecated and should be barely used.
10-04-2020

From Tom's suggestion in the bug JDK-4651437:

> We could also do something more radical and only have pending async exception checks at backward branch points and a few other InterpreterRuntime entry points.

Other entry points would still have the checks though, and I don't think that would violate the spec. The other suggestion was to tell generateOopMap that all exception regions are reachable if an async exception has been thrown. This would avoid the assertion, but the case that's unlucky enough to install the async exception in the monitorexit block would still hit the infinite loop: the exception handler throws IMSE and is handled by a monitorexit exception handler that throws IMSE again.
10-04-2020

For a bit of deja-vu see JDK-4651437.
10-04-2020

So this bug is caused by something installing an async exception after the monitorexit in this basic block:

class nsk.jdb.kill.kill001.MyThread @0x0000000800bc1910
Disassembly for compiled method [public void run() @0x00007fb70c9e40b0 ] @0x00007fb718dd3e10

150  65  aload_3
150  66  monitorexit
150  67  goto 77

which causes the exception to be handled in the exception handler for this range:

65  67  70  Any
70  74  70  Any  <===

150  70  astore #5
150  72  aload_3
150  73  monitorexit
150  74  aload #5

Because the monitor has already been exited, the monitorexit at 73 throws IllegalMonitorStateException (IMSE), which is handled in the same range, again throwing IMSE. Eventually, filling in the stack trace runs out of memory and triggers a GC, which runs an abstract interpreter over the bytecodes. Since the monitors are matching, the abstract interpreter throws out 70-74 as unreachable, and this leads to the assertion. I can reproduce this crash by changing JRT_ENTRY_NO_ASYNC to plain JRT_ENTRY for this test. The only path I can find that can install an async exception in this bytecode range is if the template table is replaced by the safepoint table for bci 67. Then it goes to InterpreterRuntime::at_safepoint, which can install the async exception. There are no other paths that I can find (even with calling deoptimization; I tried).
09-04-2020

I'm going to close this again as CNR. I'm looking at latest code so maybe there was a transient change that caused the async exception to be thrown on monitorexit. At any rate, we have RFEs to simplify this code and the suspend/resume architecture for JDK 15, so this would probably get fixed as a side effect.
08-11-2019

This bug has been around forever so I think it's unaffected by any recent changes. In the second block above, coming from ThreadInVMfromJavaNoAsyncException (JRT_ENTRY_NO_ASYNC), the thread state == _thread_in_vm_trans, so it won't call handle_special_runtime_exit_condition and install the async exception. The second block in SafepointSynchronize::block() appears to be needed for polling page exceptions. Transitioning from native code seems to also check async exceptions and suspend, and also calls block(), but I think the parameter to handle_special_runtime_exit_condition() is false in this case.

  if (state != _thread_blocked_trans && state != _thread_in_vm_trans &&  // <== coming from ThreadInVMfromJavaNoAsync skips this here
      thread->has_special_runtime_exit_condition()) {
    thread->handle_special_runtime_exit_condition(
      !thread->is_at_poll_safepoint() && (state != _thread_in_native_trans));  // <== coming from native doesn't check async exceptions
  }

So I don't see how monitorexit with jdb throws an exception.
29-10-2019

ILW = HLM = P3
29-10-2019

[~hseigel] Yes exactly - that is why the monitorexit exists within the "finally" block that ensures a monitor is always exited. There is no way to ask "did you actually manage to exit the monitor or not". javac maintains correctness at the expense of liveness (which is the correct tradeoff). Processing async exceptions on monitorexit is always a problem. But I wonder, per Coleen's analysis, if we've somehow changed the exact circumstances under which we will do the async exception check these days? (I think Robbin, or Patricio, have a RFE to simplify the checks across safepoints/handshakes/thread-state-transitions.)
29-10-2019

You're right, it might be as simple as that. I reassigned it to myself. Maybe we need to make that exception handler reachable in generateOopMap if some jdb is active. It'll go into an infinite loop though. Maybe we should not throw the async exception on monitorexit instead (?)

We don't check for async exceptions on monitorexit, at least in the transition:

JRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorexit(JavaThread* thread, BasicObjectLock* elem))

This transition has this with commentary:

  trans(_thread_in_vm, _thread_in_Java);
  // NOTE: We do not check for pending. async. exceptions.
  // If we did and moved the pending async exception over into the
  // pending exception field, we would need to deopt (currently C2
  // only). However, to do so would require that we transition back
  // to the _thread_in_vm state. Instead we postpone the handling of
  // the async exception.

  // Check for pending. suspends only.
  if (_thread->has_special_runtime_exit_condition())
    _thread->handle_special_runtime_exit_condition(false);

But if the trans => transition(_thread_in_vm to _thread_in_Java) blocks, it will call SafepointSynchronize::block(), which has the same code but different, and this would handle the async exception (if I'm following correctly):

  // Check for pending. async. exceptions or suspends - except if the
  // thread was blocked inside the VM. has_special_runtime_exit_condition()
  // is called last since it grabs a lock and we only want to do that when
  // we must.
  //
  // Note: we never deliver an async exception at a polling point as the
  // compiler may not have an exception handler for it. The polling
  // code will notice the async and deoptimize and the exception will
  // be delivered. (Polling at a return point is ok though). Sure is
  // a lot of bother for a deprecated feature...
  //
  // We don't deliver an async exception if the thread state is
  // _thread_in_native_trans so JNI functions won't be called with
  // a surprising pending exception. If the thread state is going back to java,
  // async exception is checked in check_special_condition_for_native_trans().
  if (state != _thread_blocked_trans && state != _thread_in_vm_trans &&
      thread->has_special_runtime_exit_condition()) {
    thread->handle_special_runtime_exit_condition(
      !thread->is_at_poll_safepoint() && (state != _thread_in_native_trans));
  }

Why is this code also in SafepointSynchronize::block()?
28-10-2019

If an exception occurs when exiting the monitor, has the monitor been exited? I guess the code assumes that it hasn't, so it decides that it needs to try again to exit the monitor.
28-10-2019

I haven't been able to dig that far into it. Why would async exception processing during monitorexit re-exit the monitor?
28-10-2019

"killing the threads" isn't done via Thread.stop is it? An async exception processed during monitorexit could lead to this behaviour.
27-10-2019

FYI: > since this exception handler covers this exception handler as well, which I found surprising It is a requirement of the Java language that once you have entered a synchronized code region you cannot leave it without releasing the monitor. If the monitorexit itself throws an exception (which shouldn't be possible for correctly compiled Java code) then what should the behaviour be? As there is no real answer it was decided way back at the beginning that the monitorexit would be included in the exception handler for the monitorexit - hence the loop.
27-10-2019
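That handler-covers-its-own-monitorexit shape falls out of how javac compiles any synchronized block, and can be inspected with javap. A sketch (the class name SyncShape is made up; the commented bytecode is the typical shape per JVMS section 3.14, with illustrative offsets):

```java
// Compiling this and running javap -c shows the synchronized block's
// exception handler doing its own monitorexit, with an exception-table
// entry whose range covers that monitorexit -- hence the potential loop.
public class SyncShape {
    static final Object lock = new Object();

    static int body() {
        synchronized (lock) {
            return 1;
        }
    }
    // Typical javap -c shape (offsets illustrative):
    //   ...
    //   monitorexit        // normal exit of the synchronized block
    //   ireturn
    //   astore_2           // handler: save the in-flight exception
    //   aload_1
    //   monitorexit        // release the monitor before rethrowing
    //   aload_2
    //   athrow
    // Exception table: one entry covering the block body, and one whose
    // range includes the handler's own monitorexit.

    public static void main(String[] args) {
        System.out.println(body()); // prints 1
    }
}
```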

It's unlikely to be a P3 because I think we've only seen it twice in forever.
25-10-2019

I can get my small test case to crash in the same way now. If the monitorexit bytecode is executed twice, due to something that this test does when it tries to kill the threads, we get this assertion. The real problem here is that the jdb kill seems to rerun a monitorexit bytecode in MyThread.run. Reassigning it to serviceability team.
25-10-2019

The BasicBlock at bci 73 of nsk.jdb.kill.kill001.MyThread.run()V looks like:

70: astore 5
72: aload_3
73: monitorexit
74: aload 5
76: athrow

which is covered by the exception table:

Exception table:
  from    to  target type
    41    49    52   any
    52    56    52   any
    65    67    70   any
    70    74    70   any
    77    88    91   Class java/lang/InterruptedException

So this bci should have been reachable, because we create basic blocks for exception handlers. In fact, in my narrow test case, I don't get this assertion. It seems that the jdb breakpoints or kill commands caused the monitorexit at bytecode 66 to enter this exception handler and then this bci to throw IllegalMonitorStateException in a loop (since this exception handler covers this exception handler as well, which I found surprising).
25-10-2019

ILW = HLM = P3
27-08-2019

I've attached the hs_err_pid file as: test-support_jtreg_open_test_hotspot_jtreg_vmTestbase_nsk_jdb_vmTestbase_nsk_jdb_kill_kill001_kill001_hs_err_pid20937.log

Here's the crashing thread's stack:

---------------  T H R E A D  ---------------

Current thread (0x00007f2b08011800):  GCTaskThread "GC Thread#1" [stack: 0x00007f2b25104000,0x00007f2b25204000] [id=21074]

Stack: [0x00007f2b25104000,0x00007f2b25204000],  sp=0x00007f2b252012e0,  free space=1012k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xcf8f44]  GenerateOopMap::result_for_basicblock(int)+0x94
V  [libjvm.so+0x13cb442]  OopMapForCacheEntry::compute_map(Thread*)+0x192
V  [libjvm.so+0x13cd2ef]  OopMapCacheEntry::fill(methodHandle const&, int)+0xdf
V  [libjvm.so+0x13ceb26]  OopMapCache::lookup(methodHandle const&, int, InterpreterOopMap*)+0x316
V  [libjvm.so+0x12fad1a]  Method::mask_for(int, InterpreterOopMap*)+0x9a
V  [libjvm.so+0xbcb107]  frame::oops_interpreted_do(OopClosure*, RegisterMap const*, bool)+0x2f7
V  [libjvm.so+0x16beb9f]  JavaThread::oops_do(OopClosure*, CodeBlobClosure*)+0x19f
V  [libjvm.so+0x16ca658]  Threads::possibly_parallel_oops_do(bool, OopClosure*, CodeBlobClosure*)+0x188
V  [libjvm.so+0xcb8f85]  G1RootProcessor::process_java_roots(G1RootClosures*, G1GCPhaseTimes*, unsigned int)+0xc5
V  [libjvm.so+0xcb945b]  G1RootProcessor::evacuate_roots(G1ParScanThreadState*, unsigned int)+0x6b
V  [libjvm.so+0xc0af6f]  G1EvacuateRegionsTask::scan_roots(G1ParScanThreadState*, unsigned int)+0x1f
V  [libjvm.so+0xc0bc40]  G1EvacuateRegionsBaseTask::work(unsigned int)+0x100
V  [libjvm.so+0x17f52e4]  GangWorker::run_task(WorkData)+0x84
V  [libjvm.so+0x17f5428]  GangWorker::loop()+0x48
V  [libjvm.so+0x16cc206]  Thread::call_run()+0xf6
V  [libjvm.so+0x13ec3ce]  thread_native_entry(Thread*)+0x10e

JavaThread 0x00007f2b444ad000 (nid = 21030) was being processed
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
J 1135 java.lang.Throwable.fillInStackTrace(I)Ljava/lang/Throwable; java.base@14-ea (0 bytes) @ 0x00007f2b3457bc2f [0x00007f2b3457bb60+0x00000000000000cf]
J 3056 c2 java.lang.Throwable.fillInStackTrace()Ljava/lang/Throwable; java.base@14-ea (29 bytes) @ 0x00007f2b345d6700 [0x00007f2b345d6660+0x00000000000000a0]
J 3086 c2 java.lang.Exception.<init>()V java.base@14-ea (5 bytes) @ 0x00007f2b345728e4 [0x00007f2b345726c0+0x0000000000000224]
J 3063 c2 java.lang.RuntimeException.<init>()V java.base@14-ea (5 bytes) @ 0x00007f2b3455b8e4 [0x00007f2b3455b8a0+0x0000000000000044]
J 3003 c2 java.lang.IllegalMonitorStateException.<init>()V java.base@14-ea (5 bytes) @ 0x00007f2b346593e4 [0x00007f2b346593a0+0x0000000000000044]
v  ~StubRoutines::call_stub
j  nsk.jdb.kill.kill001.MyThread.run()V+73
v  ~StubRoutines::call_stub
23-08-2019

It's back..... :-) This time on a Linux-X64 machine. Here's a snippet from the log file:

reply[4]: Thread-0[1]
Sending command: cont
reply[0]: > Exception in thread "Thread-1" Exception in thread "Thread-3" Exception in thread "Thread-4" java.lang.NullPointerException: kill001a's Exception
reply[1]:     at nsk.jdb.kill.kill001.kill001a.<clinit>(kill001a.java:53)
reply[2]: nsk.jdb.kill.kill001.kill001a$MyException: kill001a's Exception
reply[3]:     at nsk.jdb.kill.kill001.kill001a.<clinit>(kill001a.java:53)
reply[4]: com.sun.jdi.IncompatibleThreadStateException: kill001a's Exception
reply[5]:     at nsk.jdb.kill.kill001.kill001a.<clinit>(kill001a.java:53)
reply[6]: # To suppress the following error report, specify this argument
reply[7]: # after -XX: or in .hotspotrc:  SuppressErrorAt=/generateOopMap.cpp:2222
reply[8]: #
reply[9]: # A fatal error has been detected by the Java Runtime Environment:
reply[10]: #
reply[11]: #  Internal Error (open/src/hotspot/share/oops/generateOopMap.cpp:2222), pid=20937, tid=21074
reply[12]: #  assert(bb->is_reachable()) failed: getting result from unreachable basicblock
reply[13]: #
reply[14]: # JRE version: Java(TM) SE Runtime Environment (14.0+12) (fastdebug build 14-ea+12-378)
reply[15]: # Java VM: Java HotSpot(TM) 64-Bit Server VM (fastdebug 14-ea+12-378, compiled mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
reply[16]: # Problematic frame:
reply[17]: # V  [libjvm.so+0xcf8f44]  GenerateOopMap::result_for_basicblock(int)+0x94
reply[18]: #
reply[19]: # Core dump will be written. Default location: /scratch/opt/mach5/mesos/work_dir/slaves/00f4d7f9-7805-4b6a-aef8-9bb130db2435-S1403/frameworks/1735e8a2-a1db-478c-8104-60c8b0af87dd-0196/executors/3bffd179-3ec9-40d8-8a8f-2cef6f87a2eb/runs/7cd18b12-e25d-4cda-aefd-30ce0c53ae1c/testoutput/test-support/jtreg_open_test_hotspot_jtreg_vmTestbase_nsk_jdb/scratch/1/core.20937
reply[20]: #
reply[21]: # An error report file with more information is saved as:
reply[22]: # /scratch/opt/mach5/mesos/work_dir/slaves/00f4d7f9-7805-4b6a-aef8-9bb130db2435-S1403/frameworks/1735e8a2-a1db-478c-8104-60c8b0af87dd-0196/executors/3bffd179-3ec9-40d8-8a8f-2cef6f87a2eb/runs/7cd18b12-e25d-4cda-aefd-30ce0c53ae1c/testoutput/test-support/jtreg_open_test_hotspot_jtreg_vmTestbase_nsk_jdb/scratch/1/hs_err_pid20937.log
reply[23]: #
reply[24]: # If you would like to submit a bug report, please visit:
reply[25]: # http://bugreport.java.com/bugreport/crash.jsp
reply[26]: #
reply[27]: Current thread is 21074
reply[28]: Dumping core ...
reply[29]:
reply[30]: The application has been disconnected
Sending command: cont
Expected breakpoint has not been hit yet
Breakpoint has been hit
Sending command: eval nsk.jdb.kill.kill001.kill001a.notKilled
# ERROR: Value for nsk.jdb.kill.kill001.kill001a.notKilled is not found.
The following stacktrace is for failure analysis.
nsk.share.TestFailure: Value for nsk.jdb.kill.kill001.kill001a.notKilled is not found.
    at nsk.share.Log.logExceptionForFailureAnalysis(Log.java:428)
    at nsk.share.Log.complain(Log.java:399)
    at nsk.jdb.kill.kill001.kill001.runCases(kill001.java:141)
    at nsk.share.jdb.JdbTest.runTest(JdbTest.java:149)
    at nsk.jdb.kill.kill001.kill001.run(kill001.java:80)
    at nsk.jdb.kill.kill001.kill001.main(kill001.java:74)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at PropertyResolvingWrapper.main(PropertyResolvingWrapper.java:104)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
    at java.base/java.lang.Thread.run(Thread.java:830)
# ERROR: Caught unexpected exception while executing the test: nsk.share.Failure: Attempt to send command :threads to terminated jdb.
The following stacktrace is for failure analysis.
nsk.share.TestFailure: Caught unexpected exception while executing the test: nsk.share.Failure: Attempt to send command :threads to terminated jdb.
    at nsk.share.Log.logExceptionForFailureAnalysis(Log.java:428)
    at nsk.share.Log.complain(Log.java:399)
    at nsk.share.jdb.JdbTest.failure(JdbTest.java:74)
    at nsk.share.jdb.JdbTest.runTest(JdbTest.java:158)
    at nsk.jdb.kill.kill001.kill001.run(kill001.java:80)
    at nsk.jdb.kill.kill001.kill001.main(kill001.java:74)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at PropertyResolvingWrapper.main(PropertyResolvingWrapper.java:104)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
    at java.base/java.lang.Thread.run(Thread.java:830)
nsk.share.Failure: Attempt to send command :threads to terminated jdb.
at nsk.share.jdb.Jdb.sendCommand(Jdb.java:284) at nsk.share.jdb.Jdb.receiveReplyFor(Jdb.java:350) at nsk.share.jdb.Jdb.receiveReplyFor(Jdb.java:333) at nsk.share.jdb.Jdb.receiveReplyFor(Jdb.java:322) at nsk.jdb.kill.kill001.kill001.runCases(kill001.java:145) at nsk.share.jdb.JdbTest.runTest(JdbTest.java:149) at nsk.jdb.kill.kill001.kill001.run(kill001.java:80) at nsk.jdb.kill.kill001.kill001.main(kill001.java:74) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at PropertyResolvingWrapper.main(PropertyResolvingWrapper.java:104) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) at java.base/java.lang.Thread.run(Thread.java:830) Waiting for jdb exits jdb normally exited # ERROR: TEST FAILED The following stacktrace is for failure analysis. 
nsk.share.TestFailure: TEST FAILED at nsk.share.Log.logExceptionForFailureAnalysis(Log.java:428) at nsk.share.Log.complain(Log.java:399) at nsk.share.jdb.JdbTest.runTest(JdbTest.java:225) at nsk.jdb.kill.kill001.kill001.run(kill001.java:80) at nsk.jdb.kill.kill001.kill001.main(kill001.java:74) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at PropertyResolvingWrapper.main(PropertyResolvingWrapper.java:104) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) at java.base/java.lang.Thread.run(Thread.java:830) #> #> SUMMARY: Following errors occured #> during test execution: #> # ERROR: Value for nsk.jdb.kill.kill001.kill001a.notKilled is not found. # ERROR: Caught unexpected exception while executing the test: nsk.share.Failure: Attempt to send command :threads to terminated jdb. # ERROR: TEST FAILED ----------System.err:(0/0)---------- ----------rerun:(40/7914)*----------
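For readers unfamiliar with the failing assert: GenerateOopMap::result_for_basicblock() refuses to hand back an oop map for a basic block that its reachability pass never marked as live. Below is a toy C++ sketch of that invariant — this is NOT HotSpot source, all names and the CFG model are illustrative only:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Toy CFG: each basic block just lists the indices of its successors.
struct BasicBlock {
    std::vector<int> succs;
    bool reachable = false;
};

// Mark every block reachable from the entry block (index 0) by BFS.
// This mirrors the idea of the reachability pass that runs before
// oop-map results may be queried.
void mark_reachable(std::vector<BasicBlock>& blocks) {
    if (blocks.empty()) return;
    std::queue<int> work;
    blocks[0].reachable = true;
    work.push(0);
    while (!work.empty()) {
        int i = work.front();
        work.pop();
        for (int s : blocks[i].succs) {
            if (!blocks[s].reachable) {
                blocks[s].reachable = true;
                work.push(s);
            }
        }
    }
}

// Analogue of the failing check: asking for a result on a block the
// reachability pass never visited trips the assert, just like
// "assert(bb->is_reachable()) failed" in the crash above.
bool result_for_basicblock(const std::vector<BasicBlock>& blocks, int i) {
    assert(blocks[i].reachable && "getting result from unreachable basicblock");
    return blocks[i].reachable;
}
```

In this model the crash corresponds to some path (here, presumably racing with the asynchronously injected kill-exceptions) querying a block that the BFS never reached.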
23-08-2019

This hasn't been seen for almost 5 months; closing as CNR (cannot reproduce). Please re-open if it happens again.
27-07-2015