JDK-8047212 : runtime/ParallelClassLoading/bootstrap/random/inner-complex assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 9
  • Priority: P2
  • Status: Closed
  • Resolution: Fixed
  • OS: linux_oracle_6.0
  • CPU: x86_64
  • Submitted: 2014-06-18
  • Updated: 2024-11-04
  • Resolved: 2015-10-25
Fix versions:
  • JDK 8: 8-pool (Unresolved)
  • JDK 9: 9 b93 (Fixed)
  • Other: openjdk8u252 (Fixed)
Description
;; Using jvm: "/scratch/local/aurora/sandbox/sca/vmsqe/jdk/nightly/fastdebug/rt_baseline/linux-amd64/jre/lib/amd64/server/libjvm.so"
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (/opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/runtime/synchronizer.cpp:1190), pid=22842, tid=140647806699264
#  assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
#
# JRE version: Java(TM) SE Runtime Environment (9.0-b18) (build 1.9.0-ea-fastdebug-b18)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-201406180031.ddehaven.hotspot-fastdebug mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /scratch/local/aurora/sandbox/results/ResultDir/inner-complex_copy_1/core or core.22842
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x00007feca06ee800):  JavaThread "Loading Thread #31" [_thread_in_vm, id=23139, stack(0x00007feb1e7e8000,0x00007feb1e8e9000)]

Stack: [0x00007feb1e7e8000,0x00007feb1e8e9000],  sp=0x00007feb1e8e7150,  free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x10405ac]  VMError::report_and_die()+0x15c
V  [libjvm.so+0x72ff8b]  report_vm_error(char const*, int, char const*, char const*)+0x7b
V  [libjvm.so+0xf7252c]  ObjectSynchronizer::inflate(Thread*, oop)+0x67c
V  [libjvm.so+0xf759bc]  ObjectSynchronizer::fast_enter(Handle, BasicLock*, bool, Thread*)+0x1dc
V  [libjvm.so+0x9bd368]  InterpreterRuntime::monitorenter(JavaThread*, BasicObjectLock*)+0x1c8
j  java.io.PrintStream.flush()V+4
j  runtime.ParallelClassLoading.shared.ClassLoadingThread.println(Ljava/lang/String;)V+43
j  runtime.ParallelClassLoading.shared.ClassLoadingThread.run()V+120
v  ~StubRoutines::call_stub
V  [libjvm.so+0x9d7bd7]  JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0x18a7
V  [libjvm.so+0x9d8d62]  JavaCalls::call_virtual(JavaValue*, KlassHandle, Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x622
V  [libjvm.so+0x9d90d7]  JavaCalls::call_virtual(JavaValue*, Handle, KlassHandle, Symbol*, Symbol*, Thread*)+0xb7
V  [libjvm.so+0xaac744]  thread_entry(JavaThread*, Thread*)+0xc4
V  [libjvm.so+0xfc55d4]  JavaThread::thread_main_inner()+0x1d4
V  [libjvm.so+0xfc5845]  JavaThread::run()+0x1e5
V  [libjvm.so+0xda6842]  java_start(Thread*)+0xf2
C  [libpthread.so.0+0x7851]
C  [libpthread.so.0+0x7851]

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  java.io.PrintStream.flush()V+4
j  runtime.ParallelClassLoading.shared.ClassLoadingThread.println(Ljava/lang/String;)V+43
j  runtime.ParallelClassLoading.shared.ClassLoadingThread.run()V+120
v  ~StubRoutines::call_stub

Comments
Still under review.
19-02-2020

Fix Request (8u)
What: The original patch needs some tweaks due to the use of 'PaddedEnd' in jdk9 and higher versions (Review thread: https://mail.openjdk.java.net/pipermail/jdk8u-dev/2019-August/010065.html)
Why: As discussed in the thread, this bug can always be reproduced by running the jcstress test on our 64/128-core aarch64 server platform with an 8u aarch64 fastdebug build. We should fix this to be more stable on weakly ordered CPU platforms such as ARM.
Testing: jtreg tested with an 8u x86_64 fastdebug build. Passed 2 rounds of jcstress testing with an 8u aarch64 fastdebug build.
Risk: With the patch, we changed to use ordered access for the global `gBlockList`, which is more conservative; risk is low.
22-08-2019

URL: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/a0c7a69277da User: lana Date: 2015-11-18 23:53:22 +0000
18-11-2015

Stress test results for the latest version that went out for code review. This latest version was baselined on a JDK9-hs-rt repo as of this fix:

  Changeset: 9b74c5f1b10e
  Author:    brutisso
  Date:      2015-10-20 14:00 +0200
  URL:       http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/9b74c5f1b10e
  8139868: CMSScavengeBeforeRemark broken after JDK-8134953
  Reviewed-by: sjohanss, jwilhelm

Here's all the repo top info:

  $ cat 2015-10-20-121401.brutisso.8139868/SourceTips.txt
  .:cd061b69a817 jdk:d68de0bab8ee jaxp:91795d86744f corba:1ee087da34d5
  jaxws:51729143f8fe closed:bb694776dce5 hotspot:9b74c5f1b10e
  nashorn:d8936a4a0186 langtools:e6fcc24b6d14 jdk/src/closed:1056f5b75a9e
  jdk/make/closed:8498209d9810 jdk/test/closed:e93dd07f3dee
  hotspot/src/closed:e1b24390d910 hotspot/make/closed:7131ce6f91de
  hotspot/test/closed:4913ed0cb0cd

No failures of any kind by mid Saturday afternoon in 600+K runs...

  $ elapsed_times mark.start_test_run doit_loop.fast_0.log
  mark.start_test_run   0 seconds
  doit_loop.fast_0.log  4 days 2 hours 26 minutes 4 seconds

Merged with the latest JDK9-hs-rt and pushed by early Saturday evening. Left the stress tests on my Linux DevOps machine running:

  $ elapsed_times mark.start_test_run doit_loop.fast_0.log
  mark.start_test_run   0 seconds
  doit_loop.fast_0.log  5 days 18 hours 17 minutes 19 seconds
  $ grep -v PASS doit_loop.fast_?.log
  doit_loop.fast_0.log:Copy fast_0: loop #860849...
  doit_loop.fast_1.log:Copy fast_1: loop #861175...
  doit_loop.fast_2.log:Copy fast_2: loop #861168...
  doit_loop.fast_3.log:Copy fast_3: loop #861148...

I'll probably let it run until we hit a million iterations or so. Here's the current results for my Solaris X64 server:

  $ elapsed_times mark.start_test_run doit_loop.fast_0.log
  mark.start_test_run   0 seconds
  doit_loop.fast_0.log  5 days 20 hours 15 seconds
  $ grep -v PASS doit_loop.fast_?.log
  doit_loop.fast_0.log:Copy fast_0: loop #604450...
  doit_loop.fast_1.log:Copy fast_1: loop #604330...
  doit_loop.fast_2.log:Copy fast_2: loop #604400...
  doit_loop.fast_3.log:Copy fast_3: loop #604408...

This stress test has never failed on Solaris X64 so these results just show that no new failure mode has been introduced.

Update: Here are the final stress test results.

Linux DevOps machine:

  $ elapsed_times mark.start_test_run doit_loop.fast_0.log
  mark.start_test_run   0 seconds
  doit_loop.fast_0.log  6 days 20 hours 54 seconds
  $ grep -v PASS doit_loop.fast_?.log
  doit_loop.fast_0.log:Copy fast_0: loop #1019206...
  doit_loop.fast_1.log:Copy fast_1: loop #1019583...
  doit_loop.fast_2.log:Copy fast_2: loop #1019596...
  doit_loop.fast_3.log:Copy fast_3: loop #1019529...

Solaris X64 machine:

  $ elapsed_times mark.start_test_run doit_loop.fast_0.log
  mark.start_test_run   0 seconds
  doit_loop.fast_0.log  6 days 21 hours 38 minutes 46 seconds
  $ grep -v PASS doit_loop.fast_?.log
  doit_loop.fast_0.log:Copy fast_0: loop #715125...
  doit_loop.fast_1.log:Copy fast_1: loop #714984...
  doit_loop.fast_2.log:Copy fast_2: loop #715051...
  doit_loop.fast_3.log:Copy fast_3: loop #715111...
27-10-2015

URL: http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/a0c7a69277da User: dcubed Date: 2015-10-25 00:34:48 +0000
25-10-2015

Good find Dan! Of course the missing volatile won't fix that, we need a storeStore() barrier.
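In portable C++ terms the same point can be sketched as follows (a hedged illustration only: HotSpot itself uses its own OrderAccess primitives rather than std::atomic, and the names below are invented). A compiler-level volatile does not order the two stores with respect to other processors, while a release fence between them acts as the storeStore barrier:

```cpp
#include <atomic>
#include <cstddef>

struct Node { Node* next; };

Node* gHead = nullptr;  // stand-in for gBlockList (illustrative name only)

void publish(Node* n) {
    n->next = gHead;                                     // 1: initialize linkage first
    std::atomic_thread_fence(std::memory_order_release); // storeStore-style barrier:
                                                         // keeps store 1 ordered before store 2
    gHead = n;                                           // 2: publish the new head
}
```

Without the fence (or an equivalent release store), the hardware or compiler may make store 2 visible before store 1, which is exactly the window the lock-free reader falls into.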
19-10-2015

Here is the code that manages ObjectMonitor block allocation:

src/share/vm/runtime/synchronizer.cpp:

  ObjectMonitor * NOINLINE ObjectSynchronizer::omAlloc(Thread * Self) {
  <snip>
    // 3: allocate a block of new ObjectMonitors
    // Both the local and global free lists are empty -- resort to malloc().
    // In the current implementation objectMonitors are TSM - immortal.
    // Ideally, we'd write "new ObjectMonitor[_BLOCKSIZE]", but we want
    // each ObjectMonitor to start at the beginning of a cache line,
    // so we use align_size_up().
    // A better solution would be to use C++ placement-new.
    // BEWARE: As it stands currently, we don't run the ctors!
    assert(_BLOCKSIZE > 1, "invariant");
    size_t neededsize = sizeof(PaddedEnd<ObjectMonitor>) * _BLOCKSIZE;
    PaddedEnd<ObjectMonitor> * temp;
    size_t aligned_size = neededsize + (DEFAULT_CACHE_LINE_SIZE - 1);
    void* real_malloc_addr = (void *)NEW_C_HEAP_ARRAY(char, aligned_size,
                                                      mtInternal);
    temp = (PaddedEnd<ObjectMonitor> *)
             align_size_up((intptr_t)real_malloc_addr, DEFAULT_CACHE_LINE_SIZE);

    // NOTE: (almost) no way to recover if allocation failed.
    // We might be able to induce a STW safepoint and scavenge enough
    // objectMonitors to permit progress.
    if (temp == NULL) {
      vm_exit_out_of_memory(neededsize, OOM_MALLOC_ERROR,
                            "Allocate ObjectMonitors");
    }
    (void)memset((void *) temp, 0, neededsize);

    // Format the block.
    // initialize the linked list, each monitor points to its next
    // forming the single linked free list, the very first monitor
    // will points to next block, which forms the block list.
    // The trick of using the 1st element in the block as gBlockList
    // linkage should be reconsidered.  A better implementation would
    // look like: class Block { Block * next; int N; ObjectMonitor Body [N] ; }

    for (int i = 1; i < _BLOCKSIZE; i++) {
      temp[i].FreeNext = (ObjectMonitor *)&temp[i+1];
    }

    // terminate the last monitor as the end of list
    temp[_BLOCKSIZE - 1].FreeNext = NULL;

    // Element [0] is reserved for global list linkage
    temp[0].set_object(CHAINMARKER);

    // Consider carving out this thread's current request from the
    // block in hand.  This avoids some lock traffic and redundant
    // list activity.

    // Acquire the gListLock to manipulate gBlockList and gFreeList.
    // An Oyama-Taura-Yonezawa scheme might be more efficient.
    Thread::muxAcquire(&gListLock, "omAlloc [2]");
    gMonitorPopulation += _BLOCKSIZE-1;
    gMonitorFreeCount += _BLOCKSIZE-1;

    // Add the new block to the list of extant blocks (gBlockList).
    // The very first objectMonitor in a block is reserved and dedicated.
    // It serves as blocklist "next" linkage.
    temp[0].FreeNext = gBlockList;
    gBlockList = temp;

    // Add the new string of objectMonitors to the global free list
    temp[_BLOCKSIZE - 1].FreeNext = gFreeList;
    gFreeList = temp + 1;
    Thread::muxRelease(&gListLock);
    TEVENT(Allocate block of monitors);
  }

The basic algorithm is:

- allocate a block of ObjectMonitors
- initialize the new block with all the right pointers
- grab the gListLock
- insert the new block at the head of the global block list
- insert the new string of free ObjectMonitors at the head of the free list
- drop the gListLock

This code is an example of lock-free use of gBlockList:

  // Check if monitor belongs to the monitor cache
  // The list is grow-only so it's *relatively* safe to traverse
  // the list of extant blocks without taking a lock.
  int ObjectSynchronizer::verify_objmon_isinpool(ObjectMonitor *monitor) {
    PaddedEnd<ObjectMonitor> * block = (PaddedEnd<ObjectMonitor> *)gBlockList;
    while (block) {
      assert(block->object() == CHAINMARKER, "must be a block header");
      if (monitor > (ObjectMonitor *)&block[0] &&
          monitor < (ObjectMonitor *)&block[_BLOCKSIZE]) {
        address mon = (address) monitor;
        address blk = (address) block;
        size_t diff = mon - blk;
        assert((diff % sizeof(PaddedEnd<ObjectMonitor>)) == 0, "check");
        return 1;
      }
      block = (PaddedEnd<ObjectMonitor> *) block->FreeNext;
    }
    return 0;
  }

This comment stands out:

  // The list is grow-only so it's *relatively* safe to traverse

OK, so let's take a look at the code that connects the newly allocated block of ObjectMonitors into the global list:

  // Add the new block to the list of extant blocks (gBlockList).
  // The very first objectMonitor in a block is reserved and dedicated.
  // It serves as blocklist "next" linkage.
  temp[0].FreeNext = gBlockList;
  gBlockList = temp;

This code is executed while holding the gListLock so it should be safe... except that we have lock-free uses of gBlockList...

Here's the declaration of gBlockList (and friends):

src/share/vm/runtime/synchronizer.hpp:

  // gBlockList is really PaddedEnd<ObjectMonitor> *, but we don't
  // want to expose the PaddedEnd template more than necessary.
  static ObjectMonitor * gBlockList;
  // global monitor free list
  static ObjectMonitor * volatile gFreeList;
  // global monitor in-use list, for moribund threads,
  // monitors they inflated need to be scanned for deflation
  static ObjectMonitor * volatile gOmInUseList;
  // count of entries in gOmInUseList
  static int gOmInUseCount;

Hmmm... gFreeList is volatile, but gBlockList is not! When you have a lock-free algorithm, volatile is pretty much a requirement...

So let's revisit that linkage code one more time:

  temp[0].FreeNext = gBlockList;
  gBlockList = temp;

When locks are used, the lock exit operation will flush previous memory operations, so when the new gBlockList value is published, so is the temp[0].FreeNext update. This means that the chain of ObjectMonitor blocks is consistent... for locked callers...

However, when a lock-free access of gBlockList occurs, the caller isn't waiting to acquire the lock, so the caller can potentially see inconsistent memory state. In particular, the lock-free thread can see the results of this memory operation:

  gBlockList = temp;

before seeing the results of this memory operation:

  temp[0].FreeNext = gBlockList;

In the thread that allocated the new block and is inserting it in the chain of ObjectMonitor blocks, memory is consistent... In the other thread that happens to be executing verify_objmon_isinpool(), that thread won't see the target monitor on the gBlockList because the list is temporarily inconsistent... Of course, when the other thread checks again (like in my debug code), memory has become consistent and everything looks just fine...
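The full publish/traverse pairing described in this comment can be sketched with portable C++11 atomics. This is a hedged, self-contained illustration, not the actual HotSpot patch (the real fix used the VM's own ordered-access primitives on gBlockList); the Block type, field names, and payload size here are all invented:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for an ObjectMonitor block: the first field
// plays the role of temp[0].FreeNext (block-list linkage) and the
// payload plays the role of monitors [1 .. _BLOCKSIZE-1].
struct Block {
    Block* next;
    int payload[16];
};

// The list head must be published with release semantics so that a
// lock-free reader that observes the new head also observes the
// already-initialized next/payload fields.
std::atomic<Block*> gBlockList{nullptr};

void publish_block(Block* b) {
    // Pre-publication initialization (analogue of temp[0].FreeNext = gBlockList)
    b->next = gBlockList.load(std::memory_order_relaxed);
    // Release store (analogue of gBlockList = temp, plus the missing barrier):
    // all prior writes to *b become visible to any thread whose acquire
    // load sees this new head.
    gBlockList.store(b, std::memory_order_release);
}

bool is_in_pool(const int* p) {
    // Acquire load pairs with the release store in publish_block().
    for (Block* b = gBlockList.load(std::memory_order_acquire); b != nullptr;
         b = b->next) {
        if (p >= &b->payload[0] && p < b->payload + 16) return true;
    }
    return false;
}
```

With a plain (non-atomic, non-volatile) head pointer, as in the original gBlockList declaration, the reader can observe the new head before the new block's linkage, which is exactly the transient "monitor is invalid" window this comment describes.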
16-10-2015

Repeating the debug info from doit.fast_0.log:

  Loading Thread #45: Starting...ERROR: verify_objmon_isinpool() returned not found.
  INFO: guarantee_objmon_isinpool() output:
  INFO: obj_oop=0x0000000097f22fd0
  INFO: mark_oop= 0x00007f9b88008b82
  INFO: mon_ptr= 0x00007f9b88008b80
  INFO: block[0] start=0x00007f9bb8000e00, end=0x00007f9bb8008e00
  INFO: block[1] start=0x00007f9b88000c80, end=0x00007f9b88008c80
  ERROR: verify_objmon_isinpool() returned not found.
  ERROR: mon_ptr= 0x00007f9b88008b80 was found in the pool in this block.

ObjectSynchronizer::guarantee_objmon_isinpool() is a wrapper that calls a slightly modified version of ObjectSynchronizer::verify_objmon_isinpool(); the slight modifications are returning failure instead of assert()'ing in a couple of cases. This allows the caller, guarantee_objmon_isinpool(), to detect the failure and then report much more detailed info about the state of the ObjectMonitor allocation pool.

The two key lines in the above debug output are these:

  ERROR: verify_objmon_isinpool() returned not found.
  ERROR: mon_ptr= 0x00007f9b88008b80 was found in the pool in this block.

What these two lines mean is that verify_objmon_isinpool() returned failure (no surprise) and that the more detailed look at the data structures done by guarantee_objmon_isinpool() did _not_ find the same failure. These results strongly indicate that we have some sort of race in the ObjectMonitor alloc subsystem instead of a memory corruption or a lost block situation.
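The diagnostic pattern used here, a lock-free check that, on failure, is immediately re-checked under the list lock, can be sketched as follows. This is a hypothetical illustration of the technique, not the actual HotSpot debug patch; the Pool type and all names are invented:

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Hypothetical stand-in for the ObjectMonitor pool: a grow-only list
// of block base addresses, each block holding 16 slots.
struct Pool {
    std::mutex lock;                 // plays the role of gListLock
    std::vector<const int*> blocks;  // block base addresses

    // Lock-free style membership check (may transiently miss a
    // just-published block, like verify_objmon_isinpool()).
    bool verify_isinpool(const int* p) const {
        for (const int* base : blocks)
            if (p >= base && p < base + 16) return true;
        return false;
    }

    // Debug wrapper (like guarantee_objmon_isinpool()): on a miss,
    // look again while holding the lock. Finding the entry on the
    // second, locked look indicates a memory-visibility race in the
    // lock-free path rather than corruption or a lost block.
    bool guarantee_isinpool(const int* p) {
        if (verify_isinpool(p)) return true;
        std::lock_guard<std::mutex> g(lock);
        return verify_isinpool(p);  // authoritative re-check
    }
};
```

The value of the wrapper is exactly the distinction drawn in this comment: a miss that disappears under the lock points at a race, while a miss that persists would point at corruption.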
15-10-2015

I did a run with the debug code in place and enabled with -XX:+UseNewCode. I started the four fastdebug instances in parallel on 2015.10.08 and all four had failed by almost 72 hours later on 2015.10.11:

  $ elapsed_times mark.start_test_run doit.fast_?.log
  mark.start_test_run   0 seconds
  doit.fast_0.log       1 days 2 hours 26 minutes 34 seconds
  doit.fast_3.log       1 days 13 hours 29 minutes 54 seconds
  doit.fast_2.log       39 minutes 32 seconds
  doit.fast_1.log       7 hours 15 minutes 13 seconds
  $ elapsed_times mark.start_test_run doit.fast_1.log
  mark.start_test_run   0 seconds
  doit.fast_1.log       2 days 23 hours 51 minutes 13 seconds

The first three failures are for this bug (8047212). Here are snippets from the first failure (hs_err_pid32355.log):

  # Internal Error (/work/shared/bug_hunt/8047212_for_jdk9_hs_rt/hotspot/src/share/vm/runtime/synchronizer.cpp:1915), pid=32355, tid=32605
  # guarantee(error_cnt == 0) failed: guarantee_objmon_isinpool() failed.
  #
  # JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00)
  # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)

  Current thread (0x00007f9bd4370000): JavaThread "Loading Thread #45" [_thread_in_vm, id=32605, stack(0x00007f9b7c755000,0x00007f9b7c856000)]

  Stack: [0x00007f9b7c755000,0x00007f9b7c856000], sp=0x00007f9b7c853f60, free space=1019k
  Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
  V [libjvm.so+0x113eed5] VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x155
  V [libjvm.so+0x113fc0a] VMError::report_and_die(Thread*, char const*, int, char const*, char const*, __va_list_tag*)+0x4a
  V [libjvm.so+0x82bfb4] report_vm_error(char const*, int, char const*, char const*, ...)+0xd4
  V [libjvm.so+0x106f3b5] ObjectSynchronizer::guarantee_objmon_isinpool(oop, markOopDesc*, ObjectMonitor*) [clone .part.118]+0x145
  V [libjvm.so+0x1070472] ObjectSynchronizer::inflate(Thread*, oop)+0x6d2
  V [libjvm.so+0x1073147] ObjectSynchronizer::slow_exit(oop, BasicLock*, Thread*)+0x77
  V [libjvm.so+0xa9cf3d] InterpreterRuntime::monitorexit(JavaThread*, BasicObjectLock*)+0x1fd
  j java.io.PrintStream.write([BII)V+35
  j sun.nio.cs.StreamEncoder.writeBytes()V+120
  j sun.nio.cs.StreamEncoder.implFlushBuffer()V+11
  j sun.nio.cs.StreamEncoder.flushBuffer()V+15
  j java.io.OutputStreamWriter.flushBuffer()V+4
  j java.io.PrintStream.write(Ljava/lang/String;)V+27
  j java.io.PrintStream.print(Ljava/lang/String;)V+9
  j java.io.PrintStream.println(Ljava/lang/String;)V+6
  j runtime.ParallelClassLoading.shared.ClassLoadingThread.println(Ljava/lang/String;)V+37
  j runtime.ParallelClassLoading.shared.ClassLoadingThread.run()V+52
  v ~StubRoutines::call_stub

And here's the debug info from doit.fast_0.log:

  Loading Thread #45: Starting...ERROR: verify_objmon_isinpool() returned not found.
  INFO: guarantee_objmon_isinpool() output:
  INFO: obj_oop=0x0000000097f22fd0
  INFO: mark_oop= 0x00007f9b88008b82
  INFO: mon_ptr= 0x00007f9b88008b80
  INFO: block[0] start=0x00007f9bb8000e00, end=0x00007f9bb8008e00
  INFO: block[1] start=0x00007f9b88000c80, end=0x00007f9b88008c80
  ERROR: verify_objmon_isinpool() returned not found.
  ERROR: mon_ptr= 0x00007f9b88008b80 was found in the pool in this block.

Here are snippets from the second failure (hs_err_pid20529.log):

  # Internal Error (/work/shared/bug_hunt/8047212_for_jdk9_hs_rt/hotspot/src/share/vm/runtime/synchronizer.cpp:1915), pid=20529, tid=20637
  # guarantee(error_cnt == 0) failed: guarantee_objmon_isinpool() failed.
  #
  # JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00)
  # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)

  Current thread (0x00007f33e4359000): JavaThread "Loading Thread #29" [_thread_in_vm, id=20637, stack(0x00007f3343dfe000,0x00007f3343eff000)]

  Stack: [0x00007f3343dfe000,0x00007f3343eff000], sp=0x00007f3343efd260, free space=1020k
  Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
  V [libjvm.so+0x113eed5] VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x155
  V [libjvm.so+0x113fc0a] VMError::report_and_die(Thread*, char const*, int, char const*, char const*, __va_list_tag*)+0x4a
  V [libjvm.so+0x82bfb4] report_vm_error(char const*, int, char const*, char const*, ...)+0xd4
  V [libjvm.so+0x106f3b5] ObjectSynchronizer::guarantee_objmon_isinpool(oop, markOopDesc*, ObjectMonitor*) [clone .part.118]+0x145
  V [libjvm.so+0x1070472] ObjectSynchronizer::inflate(Thread*, oop)+0x6d2
  V [libjvm.so+0x1073147] ObjectSynchronizer::slow_exit(oop, BasicLock*, Thread*)+0x77
  V [libjvm.so+0xa9cf3d] InterpreterRuntime::monitorexit(JavaThread*, BasicObjectLock*)+0x1fd
  j java.io.PrintStream.println(Ljava/lang/String;)V+14
  j runtime.ParallelClassLoading.shared.ClassLoadingThread.println(Ljava/lang/String;)V+37
  j runtime.ParallelClassLoading.shared.ClassLoadingThread.run()V+52
  v ~StubRoutines::call_stub

And here's the debug info from doit.fast_3.log:

  Loading Thread #29: Starting...
  ERROR: verify_objmon_isinpool() returned not found.
  INFO: guarantee_objmon_isinpool() output:
  INFO: obj_oop=0x0000000097f22fd0
  INFO: mark_oop= 0x00007f3390008b82
  INFO: mon_ptr= 0x00007f3390008b80
  INFO: block[0] start=0x00007f33c0000f80, end=0x00007f33c0008f80
  INFO: block[1] start=0x00007f33e438d580, end=0x00007f33e4395580
  INFO: block[2] start=0x00007f3390000c80, end=0x00007f3390008c80
  ERROR: verify_objmon_isinpool() returned not found.
  ERROR: mon_ptr= 0x00007f3390008b80 was found in the pool in this block.

Here are snippets from the third failure (hs_err_pid5635.log):

  # Internal Error (/work/shared/bug_hunt/8047212_for_jdk9_hs_rt/hotspot/src/share/vm/runtime/synchronizer.cpp:1915), pid=5635, tid=5739
  # guarantee(error_cnt == 0) failed: guarantee_objmon_isinpool() failed.
  #
  # JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00)
  # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)

  Current thread (0x00007fb07c35b000): JavaThread "Loading Thread #33" [_thread_in_vm, id=5739, stack(0x00007faffc946000,0x00007faffca47000)]

  Stack: [0x00007faffc946000,0x00007faffca47000], sp=0x00007faffca45000, free space=1020k
  Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
  V [libjvm.so+0x113eed5] VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x155
  V [libjvm.so+0x113fc0a] VMError::report_and_die(Thread*, char const*, int, char const*, char const*, __va_list_tag*)+0x4a
  V [libjvm.so+0x82bfb4] report_vm_error(char const*, int, char const*, char const*, ...)+0xd4
  V [libjvm.so+0x106f3b5] ObjectSynchronizer::guarantee_objmon_isinpool(oop, markOopDesc*, ObjectMonitor*) [clone .part.118]+0x145
  V [libjvm.so+0x1070472] ObjectSynchronizer::inflate(Thread*, oop)+0x6d2
  V [libjvm.so+0x1073147] ObjectSynchronizer::slow_exit(oop, BasicLock*, Thread*)+0x77
  V [libjvm.so+0xa9cf3d] InterpreterRuntime::monitorexit(JavaThread*, BasicObjectLock*)+0x1fd
  j java.lang.ClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+113
  j sun.misc.Launcher$AppClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+36
  j java.lang.ClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+38
  j java.lang.ClassLoader.loadClass(Ljava/lang/String;)Ljava/lang/Class;+3
  j runtime.ParallelClassLoading.shared.ProvokeType.provoke(Ljava/lang/ClassLoader;Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;)V+30
  j runtime.ParallelClassLoading.shared.CustomProvokeType.provoke(Ljava/lang/ClassLoader;Ljava/lang/String;)V+10
  j runtime.ParallelClassLoading.shared.ClassLoadingThread.run()V+83
  v ~StubRoutines::call_stub

And here's the debug info from doit.fast_2.log:

  Loading Thread #32: Starting...
  ERROR: verify_objmon_isinpool() returned not found.
  INFO: guarantee_objmon_isinpool() output:
  INFO: obj_oop=0x00000000976e1e40
  INFO: mark_oop= 0x00007fb07c38e182
  INFO: mon_ptr= 0x00007fb07c38e180
  INFO: block[0] start=0x00007fb044000e00, end=0x00007fb044008e00
  INFO: block[1] start=0x00007fb07c386280, end=0x00007fb07c38e280
  ERROR: verify_objmon_isinpool() returned not found.
  ERROR: mon_ptr= 0x00007fb07c38e180 was found in the pool in this block.

Here are snippets from the fourth failure (hs_err_pid26782.log):

  # SIGSEGV (0xb) at pc=0x00007f1eb57b2f20, pid=26782, tid=26802
  #
  # JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00)
  # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
  # Problematic frame:
  # V [libjvm.so+0x101bf20] StubCodeDesc::desc_for(unsigned char*)+0x10

  Current thread (0x00007f1eb02af800): JavaThread "C1 CompilerThread2" daemon [_thread_in_native, id=26802, stack(0x00007f1e63afb000,0x00007f1e63bfc000)]

  Current CompileTask:
  C1: 181 2 3 java.lang.String::hashCode (57 bytes)

  Stack: [0x00007f1e63afb000,0x00007f1e63bfc000], sp=0x00007f1e63bf9a20, free space=1018k
  Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
  V [libjvm.so+0x101bf20] StubCodeDesc::desc_for(unsigned char*)+0x10
  V [libjvm.so+0xf95820] Relocation::runtime_address_to_index(unsigned char*)+0x50
  V [libjvm.so+0xf958e4] external_word_Relocation::pack_data_to(CodeSection*)+0x24
  V [libjvm.so+0xf93434] relocInfo::initialize(CodeSection*, Relocation*)+0x54
  V [libjvm.so+0x761552] CodeSection::relocate(unsigned char*, RelocationHolder const&, int)+0x272
  V [libjvm.so+0x4f7f13] Assembler::mov_literal64(RegisterImpl*, long, RelocationHolder const&)+0xe3
  V [libjvm.so+0xd445cb] MacroAssembler::stop(char const*)+0xbb
  V [libjvm.so+0xd5f0ef] MacroAssembler::verify_heapbase(char const*)+0x12f
  V [libjvm.so+0xd783bf] MacroAssembler::decode_heap_oop(RegisterImpl*)+0x1f
  V [libjvm.so+0x5b8d39] LIR_Assembler::mem2reg(LIR_OprDesc*, LIR_OprDesc*, BasicType, LIR_PatchCode, CodeEmitInfo*, bool, bool)+0xf49
  V [libjvm.so+0x5acc5f] LIR_Assembler::emit_op1(LIR_Op1*)+0x8f
  V [libjvm.so+0x5ad384] LIR_Assembler::emit_lir_list(LIR_List*)+0x54
  V [libjvm.so+0x5adf18] LIR_Assembler::emit_code(BlockList*)+0xb8
  V [libjvm.so+0x5603b6] Compilation::emit_code_body()+0x1e6
  V [libjvm.so+0x561aa7] Compilation::compile_java_method()+0x6c7
  V [libjvm.so+0x562008] Compilation::compile_method()+0x228
  V [libjvm.so+0x56274e] Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*)+0x39e
  V [libjvm.so+0x563d40] Compiler::compile_method(ciEnv*, ciMethod*, int)+0x150
  V [libjvm.so+0x7b8f71] CompileBroker::invoke_compiler_on_method(CompileTask*)+0x8b1
  V [libjvm.so+0x7b9a28] CompileBroker::compiler_thread_loop()+0x408
  V [libjvm.so+0x10c14c6] JavaThread::thread_main_inner()+0x1d6
  V [libjvm.so+0x10c17f4] JavaThread::run()+0x2a4
  V [libjvm.so+0xe8ea0a] java_start(Thread*)+0xca
  C [libpthread.so.0+0x7851]

This last failure is an instance of:

  JDK-8138922 runtime/ParallelClassLoading/bootstrap/random/inner-complex hits SIGSEGV in StubCodeDesc::desc_for()
15-10-2015

I've created a ZIP archive with a standalone reproducer. 8047212_repro.zip has been used to reproduce the assert failure on my DevOps Linux box. I'm also running 8047212_repro.zip on my Solaris X64 box and on my MacOS 10.7.X MacMini.

  $ cat READ_ME.repro
  Repro kit for the following bug:

  JDK-8047212 runtime/ParallelClassLoading/bootstrap/random/inner-complex
      assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
  https://bugs.openjdk.java.net/browse/JDK-8047212

  READ_ME.repro
      This file.

  doit.bash
      This script is used to run the test with a minimal amount of
      infrastructure overhead using a specified JDK. Arguments to the
      script (after the specified JDK) are passed to the "java" cmd
      that is used to run the test, e.g.:

          $ bash doit.bash $JAVA_HOME -server

      The doit.ksh script was created from the inner-complex.tlog file.
      Quite a few changes were made to make it standalone.

  doit_loop.bash
      This script is used for running a copy of doit.bash in a loop, e.g.:

          $ bash doit_loop.bash $JAVA_HOME copy1 -server \
              > doit_loop.copy1.log 2>&1

      launches a copy of the test repeatedly until a failure occurs and
      the output from doit.bash goes into doit.copy1.log.

  doit_parallel.bash
      This script is used for running N parallel copies of doit.bash in
      a loop, e.g.:

          $ bash doit_parallel.bash $JAVA_HOME copy 2 -server

      launches two copies of the test repeatedly until a failure occurs.
      The doit_loop.bash output for each parallel run is put in
      doit_loop.copy_[12].log. The doit.bash output for each parallel
      run is put in doit.copy_[12].log.

  doit_4_fast_parallel.bash
      Runs 4 parallel fastdebug copies; this script will need a path
      update to work in your environment.

  doit_6_mixed_parallel.bash
      Runs 6 parallel copies (2 release, 2 fastdebug and 2 slowdebug);
      this script will need a path update to work in your environment.

  classes_from_testbase
      These are the necessary .class files from the VM TESTBASE for
      executing this test. The names of the classes were derived by
      running the doit.bash script with the '-verbose:class' option and
      filtering that output into a list of .class files to copy from
      the VM TESTBASE.

The remaining artifacts are from the original failing test run:

  hs_err_pid22842.log
  inner-complex.cfg
  inner-complex.eout
  inner-complex.err
  inner-complex.log
  inner-complex.out
  inner-complex.README
  inner-complex.tlog
  inner-complex.tlog.rtmp
  replay_pid26367.log
  rerun.sh
  rerun.sh.east
  rerun.sh.ireland
  rerun.sh.local
  rerun.sh.sca
  rerun.sh.spb
  rerun.sh.spb.dev
  resultTL.tl
08-10-2015

I refreshed/resynced the debug code that I have for this bug. The original asserts are in place by default and -XX:+UseNewCode enables the new debug code that will hopefully shed light on this problem. I did a run with the debug code in place, but disabled, just to make sure that the original asserts will still fire.

$ grep -v PASS doit_loop.fast_*.log
doit_loop.fast_0.log:Copy fast_0: loop #163072...
doit_loop.fast_1.log:Copy fast_1: loop #163107...
doit_loop.fast_2.log:Copy fast_2: loop #163098...
doit_loop.fast_3.log:Copy fast_3: loop #136695...FAILED.
doit_loop.fast_3.log:status=6

and I got one failure:

#  Internal Error (/work/shared/bug_hunt/8047212_for_jdk9_hs_rt/hotspot/src/share/vm/runtime/synchronizer.cpp:1313), pid=12640, tid=12887
#  assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
#
# JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-fastdebug-ddaugher_2015_10_07_07_16-b00, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)

with this stack snippet:

---------------  T H R E A D  ---------------

Current thread (0x00007f736c355000):  JavaThread "Loading Thread #41" [_thread_in_vm, id=12887, stack(0x00007f7314452000,0x00007f7314553000)]

Stack: [0x00007f7314452000,0x00007f7314553000], sp=0x00007f7314550520, free space=1017k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x113eed5]  VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x155
V  [libjvm.so+0x113fc0a]  VMError::report_and_die(Thread*, char const*, int, char const*, char const*, __va_list_tag*)+0x4a
V  [libjvm.so+0x82bfb4]  report_vm_error(char const*, int, char const*, char const*, ...)+0xd4
V  [libjvm.so+0x10703bc]  ObjectSynchronizer::inflate(Thread*, oop)+0x61c
V  [libjvm.so+0x1070cb8]  ObjectSynchronizer::slow_enter(Handle, BasicLock*, Thread*)+0x138
V  [libjvm.so+0x1070f20]  ObjectLocker::ObjectLocker(Handle, Thread*, bool)+0xd0
V  [libjvm.so+0xa77c72]  InstanceKlass::set_initialization_state_and_notify_impl(instanceKlassHandle, InstanceKlass::ClassState, Thread*)+0xc2
V  [libjvm.so+0xa77df2]  InstanceKlass::set_initialization_state_and_notify(InstanceKlass::ClassState, Thread*)+0x82
V  [libjvm.so+0xa795f8]  InstanceKlass::initialize_impl(instanceKlassHandle, Thread*)+0x8b8
V  [libjvm.so+0xa79753]  InstanceKlass::initialize(Thread*)+0xd3
V  [libjvm.so+0xa7938f]  InstanceKlass::initialize_impl(instanceKlassHandle, Thread*)+0x64f
V  [libjvm.so+0xa79753]  InstanceKlass::initialize(Thread*)+0xd3
V  [libjvm.so+0xa9fcf8]  InterpreterRuntime::_new(JavaThread*, ConstantPool*, int)+0x298
j  custom.C.<init>()V+4
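For readers unfamiliar with the failing assert: it checks that the inflated ObjectMonitor the thread is about to use actually lives inside the VM's global monitor pool. As a rough illustration only (the type names, `pool_contains`, and the block size here are hypothetical stand-ins, not HotSpot's actual gBlockList code), a membership test over chained allocation blocks might look like:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for an ObjectMonitor.
struct Monitor { int state; };

// Monitors are carved out of fixed-size blocks chained through `next`,
// loosely mirroring how HotSpot chains block allocations of monitors.
struct MonitorBlock {
    static const size_t kSize = 128;
    Monitor slots[kSize];
    MonitorBlock* next;
};

// Returns true if `m` points into one of the chained blocks -- the kind
// of membership test the failing assert performs on an inflated monitor.
// A monitor pointer outside every block indicates corrupted state.
bool pool_contains(const MonitorBlock* head, const Monitor* m) {
    for (const MonitorBlock* b = head; b != nullptr; b = b->next) {
        if (m >= b->slots && m < b->slots + MonitorBlock::kSize)
            return true;
    }
    return false;
}
```

The real check has to be safe to run concurrently with monitor allocation, which is part of why failures like this one point at memory-ordering problems rather than simple logic bugs.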
08-10-2015

In my 10.02 -> 10.04 experiment, all four fastdebug bits runs crashed:

$ elapsed_times mark.start_test_run hs_err_pid*
mark.start_test_run  0 seconds
hs_err_pid26813.log  1 days 18 minutes 22 seconds
hs_err_pid28187.log  17 hours 13 minutes 20 seconds
hs_err_pid8500.log   1 hours 9 minutes 11 seconds
hs_err_pid8630.log   11 hours 49 minutes 23 seconds

The first crash (hs_err_pid26813.log) is due to this assertion failure so here's a snippet of the stack trace:

Stack: [0x00007fe67813e000,0x00007fe67823f000], sp=0x00007fe67823c510, free space=1017k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x113e9e5]  VMError::report_and_die(int, char const*, char const*, __va_list_tag*, Thread*, unsigned char*, void*, void*, char const*, int, unsigned long)+0x155
V  [libjvm.so+0x113f71a]  VMError::report_and_die(Thread*, char const*, int, char const*, char const*, __va_list_tag*)+0x4a
V  [libjvm.so+0x82bfb4]  report_vm_error(char const*, int, char const*, char const*, ...)+0xd4
V  [libjvm.so+0x1070079]  ObjectSynchronizer::inflate(Thread*, oop)+0x869
V  [libjvm.so+0x1070768]  ObjectSynchronizer::slow_enter(Handle, BasicLock*, Thread*)+0x138
V  [libjvm.so+0x10709d0]  ObjectLocker::ObjectLocker(Handle, Thread*, bool)+0xd0
V  [libjvm.so+0xa77c72]  InstanceKlass::set_initialization_state_and_notify_impl(instanceKlassHandle, InstanceKlass::ClassState, Thread*)+0xc2
V  [libjvm.so+0xa77df2]  InstanceKlass::set_initialization_state_and_notify(InstanceKlass::ClassState, Thread*)+0x82
V  [libjvm.so+0xa795f8]  InstanceKlass::initialize_impl(instanceKlassHandle, Thread*)+0x8b8
V  [libjvm.so+0xa79753]  InstanceKlass::initialize(Thread*)+0xd3
V  [libjvm.so+0xa7938f]  InstanceKlass::initialize_impl(instanceKlassHandle, Thread*)+0x64f
V  [libjvm.so+0xa79753]  InstanceKlass::initialize(Thread*)+0xd3
V  [libjvm.so+0xa9fcf8]  InterpreterRuntime::_new(JavaThread*, ConstantPool*, int)+0x298
j  custom.C.<init>()V+4
j  custom.D10.<init>()V+1
v  ~StubRoutines::call_stub

See the attached hs_err_pid26813.log for more details. The other three crashes were due to a SIGSEGV in StubCodeDesc::desc_for(). I can't find an existing bug that appears to match so I'll file a new one.

Update: See JDK-8138922 runtime/ParallelClassLoading/bootstrap/random/inner-complex hits SIGSEGV in StubCodeDesc::desc_for()
05-10-2015

Checking out reproducibility of this bug on my DevOps Linux machine. My baseline is a clone of RT_Baseline at the following fix:

Changeset: 703df4322ebb
Author:    dsamersoff
Date:      2015-10-01 10:33 +0300
URL:       http://hg.openjdk.java.net/jdk9/hs-rt/jdk/rev/703df4322ebb

8133063: Remove BasicLauncherTest from the problem list
Summary: Remove BasicLauncherTest from the problem list
Reviewed-by: jbachorik

Here's the changeset info for the entire repo:

$ cat SourceTips.txt
.:34280222936a jdk:703df4322ebb jaxp:497bc2654e11 pubs:618464525123 corba:ca8a17195884 jaxws:bdb954839363 closed:57176e80ab18 deploy:53398009c566 hotspot:983c56341c80 install:a2caf79947c6 nashorn:678db05f13ba sponsors:9e31857dd56d langtools:8e76163b3f3a jdk/src/closed:59bd18af2265 jdk/make/closed:54d0705354f2 jdk/test/closed:de2be51ab426 hotspot/src/closed:002bf5205dcd hotspot/make/closed:d70cd66cf2f4 hotspot/test/closed:5524c847f372

My testing config is running 4 parallel test runs using the following locally built bits:

$ ../build/linux-x86_64-normal-server-fastdebug/images/jdk/bin/java -version
java version "1.9.0-internal-fastdebug"
Java(TM) SE Runtime Environment (build 1.9.0-internal-fastdebug-ddaugher_2015_10_02_11_39-b00)
Java HotSpot(TM) 64-Bit Server VM (build 1.9.0-internal-fastdebug-ddaugher_2015_10_02_11_39-b00, mixed mode)

$ ../build/linux-x86_64-normal-server-fastdebug/images/jdk/bin/java -Xinternalversion
Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-fastdebug-ddaugher_2015_10_02_11_39-b00) for linux-amd64 JRE (1.9.0-internal-ddaugher_2015_10_02_11_39-b00), built on Oct 2 2015 12:13:07 by "ddaugher" with gcc 4.8.2

I'll let this run over the weekend and we can see what shakes out.
02-10-2015

Searched Aurora for failure matches associated with this bug. Here's the breakdown of the assert() failure matches:

1 gc/ArrayJuggle/Juggle27
1 gc/lock/jni/jnilock002
1 gc/lock/jniref/jniglobalreflock04
1 gc/lock/jniref/jnilocalreflock04
2 gc/lock/jniref/jnireflock04
1 gc/lock/jniref/jniweakglobalreflock04
1 gc/lock/jvmti/alloc/jvmtialloclock02
1 gc/lock/jvmti/alloc/jvmtialloclock03
2 gc/lock/malloc/malloclock03
1 mb/api/java/util/concurrent/GeneratedMaps/testConcurrentHashMap
1 nsk/monitoring/stress/lowmem/lowmem033
1 nsk/stress/jck122/jck122001
1 nsk/stress/network/network004
1 nsk/stress/network/network005
1 runtime/ParallelClassLoading/bootstrap/random/inner-complex
1 runtime/ParallelClassLoading/ClassCircularityError/simple/forName
01-10-2015

The latest failure has a different stack - the issue is with monitor-exit:

Stack: [0x00007f5a3f3f4000,0x00007f5a3f4f5000], sp=0x00007f5a3f4f3190, free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x119f4e1]  VMError::report_and_die()+0x151
V  [libjvm.so+0x82255b]  report_vm_error(char const*, int, char const*, char const*)+0x7b
V  [libjvm.so+0x10c5faf]  ObjectSynchronizer::inflate(Thread*, oop)+0x6bf
V  [libjvm.so+0x10c8d67]  ObjectSynchronizer::slow_exit(oop, BasicLock*, Thread*)+0x77
V  [libjvm.so+0xab3af5]  InterpreterRuntime::monitorexit(JavaThread*, BasicObjectLock*)+0x1d5
j  nsk.share.gc.lock.CriticalSectionLocker.lock()V+37
j  gc.lock.LockerTest$Worker.run()V+4
19-07-2015

Finally getting back to checking on this after being on vacation for three weeks...

$ elapsed_times mark.start_test_run doit.?.log
mark.start_test_run  0 seconds
doit.2.log  19 hours 58 minutes 52 seconds
doit.3.log  2 hours 56 minutes 8 seconds
doit.0.log  2 hours 17 minutes 11 seconds
doit.1.log  1 hours 55 minutes 26 seconds
doit.5.log  7 hours 29 minutes 33 seconds
doit.4.log  14 hours 3 minutes 10 seconds

The first failure appeared at the ~20 hour mark and the rest of the runs failed after varying amounts of time.

$ elapsed_times mark.start_test_run doit.4.log
mark.start_test_run  0 seconds
doit.4.log  2 days 40 minutes 20 seconds

In all it took just over 2 days for all six runs to fail:

$ grep -v PASS doit_loop.?.log
doit_loop.0.log:Copy 0: loop #100530...FAILED.
doit_loop.0.log:status=6
doit_loop.1.log:Copy 1: loop #113304...FAILED.
doit_loop.1.log:status=6
doit_loop.2.log:Copy 2: loop #74796...FAILED.
doit_loop.2.log:status=6
doit_loop.3.log:Copy 3: loop #87995...FAILED.
doit_loop.3.log:status=6
doit_loop.4.log:Copy 4: loop #299767...FAILED.
doit_loop.4.log:status=6
doit_loop.5.log:Copy 5: loop #173318...FAILED.
doit_loop.5.log:status=6

and all of the runs failed in the assertion for this bug:

$ grep assert hs_err_pid*
hs_err_pid23514.log:# assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
hs_err_pid24304.log:# assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
hs_err_pid24507.log:# assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
hs_err_pid25547.log:# assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
hs_err_pid4279.log:# assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
hs_err_pid9719.log:# assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid

If anything, Mikael's fix for 8061964 has made this bug more reproducible. Thanks!
20-01-2015

Mikael Gerdin's fix for the following bug:

8061964 Insufficient compiler barriers for GCC in OrderAccess functions
JDK-8061964

has made it into a promoted build: JDK9-B42. I've stopped the two remaining runs testing these bits: 2014-11-06-102018.mgerdin.hs-gc-8061964-compilerbarrier

$ elapsed_times ../save.01 doit.3.log
../save.01  0 seconds
doit.3.log  26 days 2 hours 10 minutes 46 seconds

Here are the final run/loop counts:

$ grep -v PASS doit_loop.?.log
doit_loop.0.log:Copy 0: loop #125617...FAILED.
doit_loop.0.log:status=6
doit_loop.1.log:Copy 1: loop #9744987...
doit_loop.2.log:Copy 2: loop #8052...FAILED.
doit_loop.2.log:status=6
doit_loop.3.log:Copy 3: loop #9744957...
doit_loop.4.log:Copy 4: loop #156043...FAILED.
doit_loop.4.log:status=6
doit_loop.5.log:Copy 5: loop #109999...FAILED.
doit_loop.5.log:status=6

I've started another experiment with 6 fastdebug runs in parallel with these bits:

$ ../jdk/linux-amd64/bin/java -version
java version "1.9.0-ea-fastdebug"
Java(TM) SE Runtime Environment (build 1.9.0-ea-fastdebug-b42)
Java HotSpot(TM) 64-Bit Server VM (build 1.9.0-ea-fastdebug-b42, mixed mode)
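For context on why a compiler-barrier fix changes the behavior of this bug: without such barriers, GCC is free to reorder or coalesce plain loads and stores around lock-state updates, which can make another thread observe an "impossible" monitor state. A minimal sketch of a compiler-only fence in the general style of that fix (this is the standard GCC asm-clobber idiom, not HotSpot's actual OrderAccess code, and `publish`/`ready`/`payload` are illustrative names):

```cpp
#include <cassert>

// Compiler-only fence: emits no CPU instruction, but the compiler must
// not move memory accesses across it or keep values cached in registers.
inline void compiler_barrier() {
    asm volatile("" ::: "memory");
}

int payload = 0;
int ready   = 0;

// Publisher side of a handoff: the barrier keeps the compiler from
// sinking the payload store below the `ready` flag store. (On x86 the
// hardware already keeps plain stores in order; the bug class here was
// purely compiler reordering.)
void publish(int value) {
    payload = value;
    compiler_barrier();
    ready = 1;
}
```

Without the barrier, a consumer thread spinning on `ready` could, under an aggressive compiler, see the flag before the payload, which is the same shape of failure as a monitor looking "inflated" before its fields are consistent.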
12-12-2014

Copy #4 failed in ~13.25 hours with a different failure mode than we've seen in this hunt before.

$ elapsed_times save.01 hs_err_pid26504.log
save.01  0 seconds
hs_err_pid26504.log  13 hours 22 minutes 38 seconds

doit_loop.4.log:Copy 4: loop #156043...FAILED.
doit_loop.4.log:status=6

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (synchronizer.cpp:951), pid=26504, tid=140331244242688
#  guarantee(!take->is_busy()) failed: invariant
#
# JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-201411061020.mgerdin.hs-gc-8061964-compi-b00)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-201411061020.mgerdin.hs-gc-8061964-compi-b00 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /work/shared/bugs/8047212/inner-complex/core or core.26504
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x00007fa1c4211000):  JavaThread "Loading Thread #13" [_thread_in_vm, id=26628, stack(0x00007fa169e6c000,0x00007fa169f6d000)]

Stack: [0x00007fa169e6c000,0x00007fa169f6d000], sp=0x00007fa169f6ab40, free space=1018k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xa12cd1]  VMError::report_and_die()+0x151
V  [libjvm.so+0x4b86ee]  report_vm_error(char const*, int, char const*, char const*)+0x6e
V  [libjvm.so+0x987c2a]  ObjectSynchronizer::omAlloc(Thread*)+0x1ca
V  [libjvm.so+0x988022]  ObjectSynchronizer::inflate(Thread*, oopDesc*)+0x42
V  [libjvm.so+0x9891ef]  ObjectSynchronizer::fast_enter(Handle, BasicLock*, bool, Thread*)+0x6f
V  [libjvm.so+0x61fa91]  InterpreterRuntime::monitorenter(JavaThread*, BasicObjectLock*)+0x91
j  java.lang.ClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+8
j  sun.misc.Launcher$AppClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+36
j  java.lang.ClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class;+38
j  java.lang.ClassLoader.loadClass(Ljava/lang/String;)Ljava/lang/Class;+3
v  ~StubRoutines::call_stub
V  [libjvm.so+0x62cd5a]  JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xeea
V  [libjvm.so+0x62a434]  JavaCalls::call_virtual(JavaValue*, KlassHandle, Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x2a4
V  [libjvm.so+0x62aa86]  JavaCalls::call_virtual(JavaValue*, Handle, KlassHandle, Symbol*, Symbol*, Handle, Thread*)+0x56
V  [libjvm.so+0x991b58]  SystemDictionary::load_instance_class(Symbol*, Handle, Thread*)+0x3b8
V  [libjvm.so+0x990811]  SystemDictionary::resolve_instance_class_or_null(Symbol*, Handle, Handle, Thread*)+0x821
V  [libjvm.so+0x990c6e]  SystemDictionary::resolve_or_fail(Symbol*, Handle, Handle, bool, Thread*)+0x1e
V  [libjvm.so+0x6a0cc0]  find_class_from_class_loader(JNIEnv_*, Symbol*, unsigned char, Handle, Handle, unsigned char, Thread*)+0x30
V  [libjvm.so+0x6a105d]  JVM_FindClassFromCaller+0x14d
C  [libjava.so+0xd4d0]  Java_java_lang_Class_forName0+0x110
j  java.lang.Class.forName0(Ljava/lang/String;ZLjava/lang/ClassLoader;Ljava/lang/Class;)Ljava/lang/Class;+0
j  java.lang.Class.forName(Ljava/lang/String;ZLjava/lang/ClassLoader;)Ljava/lang/Class;+49
j  runtime.ParallelClassLoading.shared.ProvokeType.provoke(Ljava/lang/ClassLoader;Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;)V+50
j  runtime.ParallelClassLoading.shared.CustomProvokeType.provoke(Ljava/lang/ClassLoader;Ljava/lang/String;)V+10
j  runtime.ParallelClassLoading.shared.ClassLoadingThread.run()V+83
v  ~StubRoutines::call_stub
V  [libjvm.so+0x62cd5a]  JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xeea
V  [libjvm.so+0x62a434]  JavaCalls::call_virtual(JavaValue*, KlassHandle, Symbol*, Symbol*, JavaCallArguments*, Thread*)+0x2a4
V  [libjvm.so+0x62aa2a]  JavaCalls::call_virtual(JavaValue*, Handle, KlassHandle, Symbol*, Symbol*, Thread*)+0x4a
V  [libjvm.so+0x69790e]  thread_entry(JavaThread*, Thread*)+0x8e
V  [libjvm.so+0x9c2c90]  JavaThread::thread_main_inner()+0xf0
V  [libjvm.so+0x9c2e38]  JavaThread::run()+0x158
V  [libjvm.so+0x87b222]  java_start(Thread*)+0xf2
C  [libpthread.so.0+0x7851]
17-11-2014

Copy #0 failed in ~11.25 hours with a different failure mode than we've seen in this hunt before.

$ elapsed_times save.01 hs_err_pid3622.log
save.01  0 seconds
hs_err_pid3622.log  11 hours 18 minutes 30 seconds

doit_loop.0.log:Copy 0: loop #125617...FAILED.
doit_loop.0.log:status=6

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (synchronizer.cpp:1516), pid=3622, tid=139647647762176
#  guarantee(!mid->is_busy()) failed: invariant
#
# JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-201411061020.mgerdin.hs-gc-8061964-compi-b00)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-201411061020.mgerdin.hs-gc-8061964-compi-b00 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /work/shared/bugs/8047212/inner-complex/core or core.3622
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x00007f0250192800):  VMThread [stack: 0x00007f0240604000,0x00007f0240705000] [id=3671]

Stack: [0x00007f0240604000,0x00007f0240705000], sp=0x00007f02407038d0, free space=1022k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xa12cd1]  VMError::report_and_die()+0x151
V  [libjvm.so+0x4b86ee]  report_vm_error(char const*, int, char const*, char const*)+0x6e
V  [libjvm.so+0x987704]  ObjectSynchronizer::deflate_idle_monitors()+0x434
V  [libjvm.so+0x915cef]  SafepointSynchronize::do_cleanup_tasks()+0x2f
V  [libjvm.so+0x91668d]  SafepointSynchronize::begin()+0x6ad
V  [libjvm.so+0xa17d92]  VMThread::loop()+0x1e2
V  [libjvm.so+0xa181e6]  VMThread::run()+0x86
V  [libjvm.so+0x87b222]  java_start(Thread*)+0xf2

VM_Operation (0x00007f0256bea680): Exit, mode: safepoint, requested by thread 0x00007f0250009000

Update: A search of the bug database turns up another bug with a similar stack trace:

8019160 ObjectSynchronizer::deflate_monitor traps on guarantee (7u40)
JDK-8019160

The guarantee in the copy #0 failure:

guarantee(!mid->is_busy()) failed: invariant

is not mentioned as being seen in JDK-8019160, but the guarantee is on the same code path as the guarantee that JDK-8019160 is about.
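Both guarantee(!take->is_busy()) and guarantee(!mid->is_busy()) enforce the same invariant: a monitor being scavenged back to the free list must not be in use. A much-simplified, hypothetical model of that deflation pass (the real deflate_idle_monitors walks per-thread and global in-use lists at a safepoint; `Mon`, `owner_count`, and the vector-based lists here are illustrative only):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical monitor: owner_count > 0 means some thread owns it or
// is contending on it, so it must not be deflated.
struct Mon {
    int owner_count;
    bool is_busy() const { return owner_count > 0; }
};

// Walk the in-use list and move idle monitors to the free list,
// returning how many were deflated. The invariant the crashing
// guarantee checks is exactly the !is_busy() condition here: finding
// a busy monitor on the "idle" path means the lists are corrupted.
size_t deflate_idle(std::vector<Mon*>& in_use, std::vector<Mon*>& free_list) {
    size_t deflated = 0;
    for (size_t i = 0; i < in_use.size(); ) {
        Mon* mid = in_use[i];
        if (!mid->is_busy()) {
            free_list.push_back(mid);           // scavenge the idle monitor
            in_use.erase(in_use.begin() + i);   // keep index on next element
            ++deflated;
        } else {
            ++i;                                // busy: leave it in place
        }
    }
    return deflated;
}
```

Since deflation runs at a safepoint, a busy monitor on this path implies the busy state was published or observed out of order, which fits the compiler-barrier theory being chased in this hunt.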
17-11-2014

Copy #5 failed in just over 10 hours with a failure mode we've seen before:

$ elapsed_times save.01 hs_err_pid5554.log
save.01  0 seconds
hs_err_pid5554.log  10 hours 6 minutes 35 seconds

doit_loop.5.log:Copy 5: loop #109999...FAILED.
doit_loop.5.log:status=6

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007fcf3ec098e0, pid=5554, tid=140526931134208
#
# JRE version: Java(TM) SE Runtime Environment (9.0) (build 1.9.0-internal-201411061020.mgerdin.hs-gc-8061964-compi-b00)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-201411061020.mgerdin.hs-gc-8061964-compi-b00 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x8ca8e0]  NodeHash::hash_find_insert(Node*)+0x80
#
# Core dump written. Default location: /work/shared/bugs/8047212/inner-complex/core or core.5554
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

<snip>

Stack: [0x00007fcef9bfc000,0x00007fcef9cfd000], sp=0x00007fcef9cfa1f0, free space=1016k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x8ca8e0]  NodeHash::hash_find_insert(Node*)+0x80
V  [libjvm.so+0x8d063c]  PhaseGVN::transform_no_reclaim(Node*)+0xfc
V  [libjvm.so+0x46de21]  Compile::Compile(ciEnv*, C2Compiler*, ciMethod*, int, bool, bool, bool)+0xa41
V  [libjvm.so+0x3c9346]  C2Compiler::compile_method(ciEnv*, ciMethod*, int)+0xd6
V  [libjvm.so+0x476856]  CompileBroker::invoke_compiler_on_method(CompileTask*)+0x836
V  [libjvm.so+0x477468]  CompileBroker::compiler_thread_loop()+0x4a8
V  [libjvm.so+0x9c2c90]  JavaThread::thread_main_inner()+0xf0
V  [libjvm.so+0x9c2e38]  JavaThread::run()+0x158
V  [libjvm.so+0x87b222]  java_start(Thread*)+0xf2
C  [libpthread.so.0+0x7851]

The similar failure was seen back in 2014.07 and was tracked back to:

JDK-8025500 ATG bigapp crashed in NodeHash::hash_find_insert

That bug is currently closed as "incomplete".
17-11-2014

Copy #2 failed in less than one hour, but with a different failure mode:

$ elapsed_times save.01 hs_err_pid20892.log
save.01  0 seconds
hs_err_pid20892.log  54 minutes 1 seconds

doit_loop.2.log:Copy 2: loop #8052...FAILED.
doit_loop.2.log:status=6

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (g1PageBasedVirtualSpace.cpp:125), pid=20892, tid=140078850774784
#  guarantee(is_area_uncommitted(start, size_in_pages)) failed: Specified area is not uncommitted
#
# JRE version:  (9.0) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-201411061020.mgerdin.hs-gc-8061964-compi-b00 mixed mode, sharing linux-amd64 compressed oops)
# Core dump written. Default location: /work/shared/bugs/8047212/inner-complex/core or core.20892
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x00007f66a0009000):  JavaThread "Unknown thread" [_thread_in_vm, id=20908, stack(0x00007f66a6141000,0x00007f66a6242000)]

Stack: [0x00007f66a6141000,0x00007f66a6242000], sp=0x00007f66a62403c0, free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xa12cd1]  VMError::report_and_die()+0x151
V  [libjvm.so+0x4b86ee]  report_vm_error(char const*, int, char const*, char const*)+0x6e
V  [libjvm.so+0x573364]  G1PageBasedVirtualSpace::commit(unsigned long, unsigned long)+0x184
V  [libjvm.so+0x575675]  G1RegionsLargerThanCommitSizeMapper::commit_regions(unsigned long, unsigned long)+0x35
V  [libjvm.so+0x5ce285]  HeapRegionManager::commit_regions(unsigned int, unsigned long)+0x75
V  [libjvm.so+0x5ce34e]  HeapRegionManager::make_regions_available(unsigned int, unsigned int)+0x2e
V  [libjvm.so+0x5ce8ea]  HeapRegionManager::expand_by(unsigned int)+0x5a
V  [libjvm.so+0x5519ca]  G1CollectedHeap::expand(unsigned long)+0x17a
V  [libjvm.so+0x55c273]  G1CollectedHeap::initialize()+0x823
V  [libjvm.so+0x9de419]  Universe::initialize_heap()+0x109
V  [libjvm.so+0x9de5e3]  universe_init()+0x33
V  [libjvm.so+0x5e1fca]  init_globals()+0x5a
V  [libjvm.so+0x9c15e5]  Threads::create_vm(JavaVMInitArgs*, bool*)+0x2b5
V  [libjvm.so+0x67aa72]  JNI_CreateJavaVM+0x52
C  [libjli.so+0x747c]  JavaMain+0x8c
C  [libpthread.so.0+0x7851]

Update: A search for this guarantee:

guarantee(is_area_uncommitted(start, size_in_pages)) failed: Specified area is not uncommitted

turns up this resolved bug ID:

8055525 Bigapp weblogic+medrec fails to startup after JDK-8038423
JDK-8055525

Looks like 8055525 was fixed with this changeset:

HG Updates added a comment - 2014-08-20 08:31
URL: http://hg.openjdk.java.net/jdk9/hs-gc/hotspot/rev/73561302492c
User: tschatzl
Date: 2014-08-20 14:28:59 +0000

Mikael G's fix for JDK-8061964 is much more recent and to the same group repo so I can't believe that Mikael's JPRT job did not include the fix for JDK-8055525:

HG Updates added a comment - 2014-11-06 05:20
URL: http://hg.openjdk.java.net/jdk9/hs-gc/hotspot/rev/d4f303d3104c
User: mgerdin
Date: 2014-11-06 12:15:57 +0000

Don't know what this means yet.
17-11-2014

Mikael Gerdin's fix for the following bug:

8061964 Insufficient compiler barriers for GCC in OrderAccess functions
JDK-8061964

was pushed to JDK9-hs-gc on 2014.11.06. I've grabbed a copy of the JPRT job's Linux-X64 bits (2014-11-06-102018.mgerdin.hs-gc-8061964-compilerbarrier) and I'm running 6 fastdebug runs in parallel.
15-11-2014

Failed in GC nightlies: http://aurora.ru.oracle.com/functional/faces/RunDetails.xhtml?names=615072.JAVASE.NIGHTLY.VM.GC_Baseline-Serial.2014-10-22-6

RULE gc/lock/jniref/jniglobalreflock04 Crash Internal Error ...synchronizer.cpp...assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
23-10-2014

Seen in GC nightly 09Sept2014:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (/opt/jprt/T/P1/143932.mgerdin/s/src/share/vm/runtime/synchronizer.cpp:1177), pid=10655, tid=139819552806656
#  assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
#
# JRE version: Java(TM) SE Runtime Environment (9.0-b29) (build 1.9.0-ea-fastdebug-b29)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-fastdebug-internal-201409091439.mgerdin.hs-gc-8057722 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /scratch/local/aurora/sandbox/results/ResultDir/jck122001/core or core.10655
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x00007f2aa834e000):  JavaThread "Thread-8" [_thread_in_Java, id=10715, stack(0x00007f2a46b71000,0x00007f2a46c72000)]

Stack: [0x00007f2a46b71000,0x00007f2a46c72000], sp=0x00007f2a46c6b4e0, free space=1001k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x1075581]  VMError::report_and_die()+0x151
V  [libjvm.so+0x7342db]  report_vm_error(char const*, int, char const*, char const*)+0x7b
V  [libjvm.so+0xf9dd7a]  ObjectSynchronizer::inflate(Thread*, oop)+0x70a
V  [libjvm.so+0xfa0f17]  ObjectSynchronizer::slow_exit(oop, BasicLock*, Thread*)+0x77
V  [libjvm.so+0x566b52]  Runtime1::monitorexit(JavaThread*, BasicObjectLock*)+0x1e2
v  ~RuntimeStub::monitorexit_nofpu Runtime1 stub
J 423 C1 java.util.zip.ZipFile.getEntry(Ljava/lang/String;)Ljava/util/zip/ZipEntry; (86 bytes) @ 0x00007f2a995e3df4 [0x00007f2a995e3820+0x5d4]
j  java.util.jar.JarFile.getEntry(Ljava/lang/String;)Ljava/util/zip/ZipEntry;+2
j  java.util.jar.JarFile.getJarEntry(Ljava/lang/String;)Ljava/util/jar/JarEntry;+2
j  sun.misc.URLClassPath$JarLoader.getResource(Ljava/lang/String;Z)Lsun/misc/Resource;+42
J 289 C1 sun.misc.URLClassPath.getResource(Ljava/lang/String;Z)Lsun/misc/Resource; (74 bytes) @ 0x00007f2a9955767c [0x00007f2a99557540+0x13c]
J 318 C1 java.net.URLClassLoader$1.run()Ljava/lang/Class; (73 bytes) @ 0x00007f2a9956aba4 [0x00007f2a9956a800+0x3a4]
J 317 C1 java.net.URLClassLoader$1.run()Ljava/lang/Object; (5 bytes) @ 0x00007f2a9956830c [0x00007f2a99568280+0x8c]
v  ~StubRoutines::call_stub
V  [libjvm.so+0x9e6c9a]  JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0x1dda
V  [libjvm.so+0xad113f]  JVM_DoPrivileged+0x8bf
J 316 java.security.AccessController.doPrivileged(Ljava/security/PrivilegedExceptionAction;Ljava/security/AccessControlContext;)Ljava/lang/Object; (0 bytes) @ 0x00007f2a9956237c [0x00007f2a99562220+0x15c]
J 314 C1 java.net.URLClassLoader.findClass(Ljava/lang/String;)Ljava/lang/Class; (29 bytes) @ 0x00007f2a9956656c [0x00007f2a99566240+0x32c]
J 279 C1 java.lang.ClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class; (122 bytes) @ 0x00007f2a99553d14 [0x00007f2a995532a0+0xa74]
J 379 C1 sun.misc.Launcher$AppClassLoader.loadClass(Ljava/lang/String;Z)Ljava/lang/Class; (40 bytes) @ 0x00007f2a9959a5dc [0x00007f2a9959a0a0+0x53c]
J 378 C1 java.lang.ClassLoader.loadClass(Ljava/lang/String;)Ljava/lang/Class; (7 bytes) @ 0x00007f2a99598604 [0x00007f2a99598500+0x104]
[...]
10-09-2014

Copied from JDK-8055912:

Rickard Backman added a comment - 2014-08-25 07:47 - Restricted to Confidential

Options: -server -Xcomp -XX:MaxRAMFraction=8 -XX:+CreateMinidumpOnCrash -ea -esa -XX:+TieredCompilation -XX:CompileThreshold=100 -XX:+UnlockExperimentalVMOptions -XX:+IgnoreUnrecognizedVMOptions -XX:+AggressiveOpts -XX:-UseBiasedLocking
Host: sc11136767, Intel Xeon 3058 MHz, 4 cores, 15G, Linux / Oracle Linux 6.4, x86_64
Failure: http://aurora.ru.oracle.com/functional/faces/RunDetails.xhtml?names=568872.JAVASE.NIGHTLY.VM.Comp_Baseline-Tiered.2014-08-21-19#testlist.api%20%5B!exclude%5D%20(tonga)_mb/api/java/util/concurrent/GeneratedMaps/testConcurrentHashMap

RULE mb/api/java/util/concurrent/GeneratedMaps/testConcurrentHashMap Crash Internal Error ...synchronizer.cpp...assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
25-08-2014

Ran into a class loading failure on 2014.08.15 while chasing this bug. I found a similar failure in the bug database:

JDK-8001335 SIGSEGV in Dependencies::DepStream::argument(int)+0x6a

Here are the current results:

$ grep -v PASS doit_loop.*.log | egrep -v 'doit_loop.8815|doit_loop.26367.log|doit_loop.28252.log'; uptime; pwd
doit_loop.20007.log:Copy fast_10: loop #3258277...FAILED.
doit_loop.20007.log:status=6
doit_loop.fast_11.log:Copy fast_11: loop #3516814...
doit_loop.fast_12.log:Copy fast_12: loop #3517007...
doit_loop.fast_13.log:Copy fast_13: loop #3516552...
doit_loop.fast_1.log:Copy fast_1: loop #4449823...
doit_loop.jvmg_1.log:Copy jvmg_1: loop #2202998...
doit_loop.jvmg_2.log:Copy jvmg_2: loop #2202874...
11:10:55 up 204 days, 22:40, 1 user, load average: 11.12, 10.66, 10.38
/work/shared/bugs/8047212/inner-complex
18-08-2014

RULE gc/lock/jniref/jnilocalreflock04 Crash Internal Error ...synchronizer.cpp...assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
RULE gc/ArrayJuggle/Juggle27 Crash Internal Error ...synchronizer.cpp...assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
06-08-2014

Ran into a class loading failure on 2014.07.27 while chasing this bug. I found a similar failure in the bug database:

JDK-8049632 JDK 1.8.0 b132 :Linux x64 : Crash in ClassFileParser::copy_localvariable_table(..)

Here are the current results:

$ grep -v PASS doit_loop.*.log | egrep -v 'doit_loop.8815|doit_loop.26367.log|doit_loop.28252.log'; uptime; pwd
doit_loop.fast_10.log:Copy fast_10: loop #1780901...
doit_loop.fast_11.log:Copy fast_11: loop #1781113...
doit_loop.fast_12.log:Copy fast_12: loop #1781140...
doit_loop.fast_13.log:Copy fast_13: loop #1780786...
doit_loop.fast_1.log:Copy fast_1: loop #2714155...
doit_loop.jvmg_1.log:Copy jvmg_1: loop #1336005...
doit_loop.jvmg_2.log:Copy jvmg_2: loop #1335942...
07:39:22 up 183 days, 19:08, 1 user, load average: 11.76, 11.69, 11.77
/work/shared/bugs/8047212/inner-complex
28-07-2014

Ran into a C2 failure on 2014.07.11 while chasing this bug. I found a similar failure in the bug database:

JDK-8025500 ATG bigapp crashed in NodeHash::hash_find_insert

Here are the current results:

$ grep -v PASS doit_loop.*.log | egrep -v 'doit_loop.8815|doit_loop.26367.log'; uptime; pwd
doit_loop.fast_10.log:Copy fast_10: loop #1347787...
doit_loop.fast_11.log:Copy fast_11: loop #1348028...
doit_loop.fast_12.log:Copy fast_12: loop #1348031...
doit_loop.fast_13.log:Copy fast_13: loop #1347807...
doit_loop.fast_1.log:Copy fast_1: loop #2280972...
doit_loop.jvmg_1.log:Copy jvmg_1: loop #1121149...
doit_loop.jvmg_2.log:Copy jvmg_2: loop #1121062...
doit_loop.prod_1.log:Copy prod_1: loop #3431805...
10:34:15 up 177 days, 22:03, 3 users, load average: 12.98, 13.01, 13.16
/work/shared/bugs/8047212/inner-complex

Coming up on a month of continuous runs with the debug bits and no sightings so far... Probably going to have to drop back to code analysis...
22-07-2014

In the original failure, we have two different threads in VMError::report_and_die:

Thread 60 (Thread 0x7feb1eeee700 (LWP 23132)):
#0  0x0000003598cab91d in nanosleep () from /lib64/libc.so.6
#1  0x0000003598cab790 in sleep () from /lib64/libc.so.6
#2  0x00007feca787a9e2 in os::infinite_sleep() () at /opt/jprt/T/P1/003131.ddehaven/s/src/os/linux/vm/os_linux.cpp:3783
#3  0x00007feca7b0e53f in VMError::report_and_die() () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/utilities/vmError.cpp:943
#4  0x00007feca71fdf8b in report_vm_error(char const*, int, char const*, char const*) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/utilities/debug.cpp:226
#5  0x00007feca7a4052c in ObjectSynchronizer::inflate(Thread*, oop) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/runtime/synchronizer.cpp:1190
#6  0x00007feca7a43ce7 in ObjectSynchronizer::slow_exit(oop, BasicLock*, Thread*) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/runtime/synchronizer.cpp:191
#7  0x00007feca748ba6b in InterpreterRuntime::monitorexit(JavaThread*, BasicObjectLock*) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/interpreter/interpreterRuntime.cpp:625
#8  0x00007fec910485e6 in ?? ()
#9  0x00007feca821cb40 in TemplateInterpreter::_active_table () from /scratch/local/aurora/sandbox/sca/vmsqe/jdk/nightly/fastdebug/rt_baseline/linux-amd64/jre/lib/amd64/server/libjvm.so
#10 0x00007fec910484a5 in ?? ()
#11 0x000000047bc011c0 in ?? ()
#12 0x000000047bc01310 in ?? ()
#13 0x0000000000000003 in ?? ()
#14 0x000000047bc011c0 in ?? ()
#15 0x00007feb1eeed3f0 in ?? ()
#16 0x00007feb85b312a1 in ?? ()
#17 0x00007feb1eeed498 in ?? ()
#18 0x00007feb85bc8e98 in ?? ()
#19 0x0000000000000000 in ?? ()

and

Thread 1 (Thread 0x7feb1e8e8700 (LWP 23139)):
#0  0x0000003598c328a5 in raise () from /lib64/libc.so.6
#1  0x0000003598c34085 in abort () from /lib64/libc.so.6
#2  0x00007feca7875c51 in os::abort(bool) () at /opt/jprt/T/P1/003131.ddehaven/s/src/os/linux/vm/os_linux.cpp:1542
#3  0x00007feca7b0eb94 in VMError::report_and_die() () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/utilities/vmError.cpp:1091
#4  0x00007feca71fdf8b in report_vm_error(char const*, int, char const*, char const*) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/utilities/debug.cpp:226
#5  0x00007feca7a4052c in ObjectSynchronizer::inflate(Thread*, oop) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/runtime/synchronizer.cpp:1190
#6  0x00007feca7a439bc in ObjectSynchronizer::fast_enter(Handle, BasicLock*, bool, Thread*) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/runtime/synchronizer.cpp:233
#7  0x00007feca748b368 in InterpreterRuntime::monitorenter(JavaThread*, BasicObjectLock*) () at /opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/interpreter/interpreterRuntime.cpp:602
#8  0x00007fec91047fc0 in ?? ()
#9  0x00007fec91047e71 in ?? ()
#10 0x0000000000000003 in ?? ()
#11 0x000000047fc1d3d8 in ?? ()
#12 0x00007feb1e8e7648 in ?? ()
#13 0x00007feb85c6e384 in ?? ()
#14 0x00007feb1e8e76c0 in ?? ()
#15 0x00007feb85c71010 in ?? ()
#16 0x0000000000000000 in ?? ()

The same is true in Jon's sighting (but the other threads aren't similar):

Thread 66 (Thread 0x7f220af1b700 (LWP 24545)):
#0  0x00000037c98ab91d in nanosleep () from /lib64/libc.so.6
#1  0x00000037c98ab790 in sleep () from /lib64/libc.so.6
#2  0x00007f22b34015f2 in os::infinite_sleep() () at /opt/jprt/T/P1/002959.jmasamit/s/src/os/linux/vm/os_linux.cpp:3784
#3  0x00007f22b368d72f in VMError::report_and_die() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/utilities/vmError.cpp:944
#4  0x00007f22b368e418 in crash_handler(int, siginfo*, void*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/os/linux/vm/vmError_linux.cpp:106
#5  <signal handler called>
#6  0x00007f22b2c379a7 in ciEnv::get_field_by_index(ciInstanceKlass*, int) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/runtime/os.hpp:395
#7  0x00007f22b2c8274a in ciBytecodeStream::get_field(bool&) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/ci/ciStreams.cpp:280
#8  0x00007f22b2b19c67 in GraphBuilder::access_field(Bytecodes::Code) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_GraphBuilder.cpp:1523
#9  0x00007f22b2b1eeb1 in GraphBuilder::iterate_bytecodes_for_block(int) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_GraphBuilder.cpp:2773
#10 0x00007f22b2b1f40d in GraphBuilder::iterate_all_blocks(bool) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_GraphBuilder.cpp:2861
#11 0x00007f22b2b20322 in GraphBuilder::GraphBuilder(Compilation*, IRScope*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_GraphBuilder.cpp:3218
#12 0x00007f22b2b27da9 in IRScope::IRScope(Compilation*, IRScope*, int, ciMethod*, int, bool) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_IR.cpp:126
#13 0x00007f22b2b2865c in IR::IR(Compilation*, ciMethod*, int) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_IR.cpp:269
#14 0x00007f22b2b00d21 in Compilation::build_hir() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_Compilation.cpp:147
#15 0x00007f22b2b01f33 in Compilation::compile_java_method() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_Compilation.cpp:378
#16 0x00007f22b2b02708 in Compilation::compile_method() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_Compilation.cpp:448
#17 0x00007f22b2b02dd8 in Compilation::Compilation(AbstractCompiler*, ciEnv*, ciMethod*, int, BufferBlob*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_Compilation.cpp:559
#18 0x00007f22b2b0462a in Compiler::compile_method(ciEnv*, ciMethod*, int) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/c1/c1_Compiler.cpp:107
#19 0x00007f22b2d21685 in CompileBroker::invoke_compiler_on_method(CompileTask*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/compiler/compileBroker.cpp:1962
#20 0x00007f22b2d22998 in CompileBroker::compiler_thread_loop() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/compiler/compileBroker.cpp:1785
#21 0x00007f22b3615474 in JavaThread::thread_main_inner() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/runtime/thread.cpp:1691
#22 0x00007f22b36156e5 in JavaThread::run() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/runtime/thread.cpp:1671
#23 0x00007f22b33fb4a2 in java_start(Thread*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/os/linux/vm/os_linux.cpp:829
#24 0x00000037c9c07851 in start_thread () from /lib64/libpthread.so.0
#25 0x00000037c98e767d in clone () from /lib64/libc.so.6

and

Thread 1 (Thread 0x7f22090e9700 (LWP 24573)):
#0  0x00000037c98328a5 in raise () from /lib64/libc.so.6
#1  0x00000037c9834085 in abort () from /lib64/libc.so.6
#2  0x00007f22b33fc8b1 in os::abort(bool) () at /opt/jprt/T/P1/002959.jmasamit/s/src/os/linux/vm/os_linux.cpp:1543
#3  0x00007f22b368dd84 in VMError::report_and_die() () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/utilities/vmError.cpp:1092
#4  0x00007f22b2d9678b in report_vm_error(char const*, int, char const*, char const*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/utilities/debug.cpp:227
#5  0x00007f22b35c2b6c in
ObjectSynchronizer::inflate(Thread*, oop) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/runtime/synchronizer.cpp:1191 #6 0x00007f22b35c5ffc in ObjectSynchronizer::fast_enter(Handle, BasicLock*, bool, Thread*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/runtime/synchronizer.cpp:234 #7 0x00007f22b301e2c8 in InterpreterRuntime::monitorenter(JavaThread*, BasicObjectLock*) () at /opt/jprt/T/P1/002959.jmasamit/s/src/share/vm/interpreter/interpreterRuntime.cpp:603 #8 0x00007f229d047ebc in ?? () #9 0x00007f229d047d6d in ?? () #10 0x0000000000000003 in ?? () #11 0x00000003cfb38340 in ?? () #12 0x00007f22090e7738 in ?? () #13 0x00007f22281c13c0 in ?? () #14 0x00007f22090e77e0 in ?? () #15 0x00007f2228258d78 in ?? () #16 0x0000000000000000 in ?? ()
08-07-2014

Failed with a similar stack trace in the 7/4 GC nightly testing:

RULE nsk/stress/network/network005 Crash Internal Error ...synchronizer.cpp...assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
08-07-2014

Ran into a failure early this morning:

$ grep -v PASS doit_loop.*.log; uptime; pwd
doit_loop.fast_1.log:Copy fast_1: loop #930808...
doit_loop.fast_2.log:Copy fast_2: loop #880544...FAILED.
doit_loop.fast_2.log:status=11
doit_loop.jvmg_1.log:Copy jvmg_1: loop #455100...
doit_loop.jvmg_2.log:Copy jvmg_2: loop #455076...
doit_loop.prod_1.log:Copy prod_1: loop #1450529...
doit_loop.prod_2.log:Copy prod_2: loop #1450320...
16:14:56 up 159 days, 3:44, 3 users, load average: 8.28, 8.65, 8.82

Unfortunately, this was a SIGSEGV failure while the test was shutting down. Filed the following bug to track that failure:

JDK-8049304 race between VM_Exit and _sync_FutileWakeups->inc()
03-07-2014

After just over a week, still no failures:

$ grep -v PASS doit_loop.*.log; uptime
doit_loop.fast_1.log:Copy fast_1: loop #788955...
doit_loop.fast_2.log:Copy fast_2: loop #789006...
doit_loop.jvmg_1.log:Copy jvmg_1: loop #385911...
doit_loop.jvmg_2.log:Copy jvmg_2: loop #385844...
doit_loop.prod_1.log:Copy prod_1: loop #1227572...
doit_loop.prod_2.log:Copy prod_2: loop #1227254...
08:42:46 up 157 days, 20:11, 3 users, load average: 12.17, 11.39, 10.92
02-07-2014

Forgot to update the bug report on Tuesday (06.24). I've created a sanity-check version of verify_objmon_isinpool() that reports information about the incoming object and the monitor cache when a failure occurs. I've had 6 parallel runs going on my DevOps machine since Tuesday afternoon. Here's the current info:

$ grep -v PASS doit_loop.*.log; uptime
doit_loop.fast_1.log:Copy fast_1: loop #179265...
doit_loop.fast_2.log:Copy fast_2: loop #179259...
doit_loop.jvmg_1.log:Copy jvmg_1: loop #87700...
doit_loop.jvmg_2.log:Copy jvmg_2: loop #87718...
doit_loop.prod_1.log:Copy prod_1: loop #278783...
doit_loop.prod_2.log:Copy prod_2: loop #278560...
09:51:25 up 151 days, 21:20, 3 users, load average: 9.99, 10.45, 10.50
26-06-2014

This is the failing assertion:

1190     assert(ObjectSynchronizer::verify_objmon_isinpool(inf), "monitor is invalid");

Here's the function:

src/share/vm/runtime/synchronizer.cpp:

1634 // Check if monitor belongs to the monitor cache
1635 // The list is grow-only so it's *relatively* safe to traverse
1636 // the list of extant blocks without taking a lock.
1637
1638 int ObjectSynchronizer::verify_objmon_isinpool(ObjectMonitor *monitor) {
1639   ObjectMonitor* block = gBlockList;
1640
1641   while (block) {
1642     assert(block->object() == CHAINMARKER, "must be a block header");
1643     if (monitor > &block[0] && monitor < &block[_BLOCKSIZE]) {
1644       address mon = (address) monitor;
1645       address blk = (address) block;
1646       size_t diff = mon - blk;
1647       assert((diff % sizeof(ObjectMonitor)) == 0, "check");
1648       return 1;
1649     }
1650     block = (ObjectMonitor*) block->FreeNext;
1651   }
1652   return 0;
1653 }

So verify_objmon_isinpool() is a piece of lock-free code that looks for the specified ObjectMonitor in the monitor cache. If the monitor can't be found, the function returns 0, which is why the assert() fired.
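For readers modeling this outside HotSpot: the check boils down to walking a grow-only chain of monitor blocks and testing whether a pointer falls inside one of them at a slot boundary. A minimal standalone sketch of that idea follows; the type, block size, and names here are hypothetical stand-ins, not HotSpot's (in particular there is no CHAINMARKER check, and real ObjectMonitors carry many more fields):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy stand-in for ObjectMonitor: only the field the membership check needs.
struct Monitor {
    Monitor* free_next = nullptr;  // in slot 0, doubles as the block-chain link
};

constexpr int kBlockSize = 4;           // stand-in for _BLOCKSIZE
static Monitor* g_block_list = nullptr; // grow-only: blocks are prepended, never removed

// Allocate a block of monitors and prepend it to the global chain.
// Slot 0 acts as the block header; its free_next links to the previous block.
Monitor* allocate_block() {
    Monitor* block = new Monitor[kBlockSize];
    block[0].free_next = g_block_list;
    g_block_list = block;
    return block;
}

// Mirrors the shape of verify_objmon_isinpool(): walk the block chain and
// report whether m points inside some block, at a proper slot boundary.
// Note the strict '>' on the lower bound: the header slot itself is excluded.
bool is_in_pool(Monitor* m) {
    for (Monitor* block = g_block_list; block != nullptr;
         block = block[0].free_next) {
        if (m > &block[0] && m < &block[kBlockSize]) {
            size_t diff = (uintptr_t)m - (uintptr_t)block;
            assert(diff % sizeof(Monitor) == 0 && "misaligned monitor pointer");
            return true;
        }
    }
    return false;
}
```

Because the chain only ever grows at the head, a racing reader can miss a freshly prepended block but never follows a dangling link, which is why the real code calls the lock-free traversal only "*relatively* safe".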
24-06-2014

Sigh... This bug has a sense of humor... :-) Checking results this AM shows:

$ grep -v PASSED doit_loop.*.log; uptime
doit_loop.0.log:Copy 0: loop #667112...FAILED.
doit_loop.0.log:status=6
doit_loop.1.log:Copy 1: loop #702976...
doit_loop.2.log:Copy 2: loop #702846...
doit_loop.3.log:Copy 3: loop #702950...
doit_loop.4.log:Copy 4: loop #702936...
doit_loop.5.log:Copy 5: loop #702779...
07:07:15 up 149 days, 18:36, 1 user, load average: 8.37, 9.09, 9.46

Here are snippets from the hs_err_pid file:

# Internal Error (/opt/jprt/T/P1/003131.ddehaven/s/src/share/vm/runtime/synchronizer.cpp:1190), pid=31779, tid=140053360367360
# assert(ObjectSynchronizer::verify_objmon_isinpool(inf)) failed: monitor is invalid
#
# JRE version: Java(TM) SE Runtime Environment (9.0-b18) (build 1.9.0-ea-fastdebug-b18)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-internal-201406180031.ddehaven.hotspot-fastdebug mixed mode linux-amd64 compressed oops)

---------------  T H R E A D  ---------------

Current thread (0x00007f60b000b800):  JavaThread "Main Thread" [_thread_in_vm, id=31780, stack(0x00007f60b6bb5000,0x00007f60b6cb6000)]

Stack: [0x00007f60b6bb5000,0x00007f60b6cb6000], sp=0x00007f60b6cb4060, free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x10405ac] VMError::report_and_die()+0x15c
V [libjvm.so+0x72ff8b] report_vm_error(char const*, int, char const*, char const*)+0x7b
V [libjvm.so+0xf7252c] ObjectSynchronizer::inflate(Thread*, oop)+0x67c
V [libjvm.so+0xf759bc] ObjectSynchronizer::fast_enter(Handle, BasicLock*, bool, Thread*)+0x1dc
V [libjvm.so+0x9bd368] InterpreterRuntime::monitorenter(JavaThread*, BasicObjectLock*)+0x1c8
j java.io.PrintStream.println(Ljava/lang/String;)V+4
j runtime.ParallelClassLoading.shared.ClassLoadingController.println(Ljava/lang/String;)V+37
j runtime.ParallelClassLoading.shared.ClassLoadingController.startLoadingIterator()Z+326
j runtime.ParallelClassLoading.shared.ClassLoadingController.runIt([Ljava/lang/String;Ljava/io/PrintStream;)I+9
j runtime.ParallelClassLoading.shared.bootstrap.BootstrapRandomizedLoadingController.run([Ljava/lang/String;Ljava/io/PrintStream;)I+9
j runtime.ParallelClassLoading.shared.bootstrap.BootstrapRandomizedLoadingController.main([Ljava/lang/String;)V+4
v ~StubRoutines::call_stub
V [libjvm.so+0x9d7bd7] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0x18a7
V [libjvm.so+0xa6150b] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.199] [clone .constprop.214]+0x44b
V [libjvm.so+0xa7729d] jni_CallStaticVoidMethod+0x1cd
C [libjli.so+0x6e70] JavaMain+0x700
C [libpthread.so.0+0x7851]

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j java.io.PrintStream.println(Ljava/lang/String;)V+4
j runtime.ParallelClassLoading.shared.ClassLoadingController.println(Ljava/lang/String;)V+37
j runtime.ParallelClassLoading.shared.ClassLoadingController.startLoadingIterator()Z+326
j runtime.ParallelClassLoading.shared.ClassLoadingController.runIt([Ljava/lang/String;Ljava/io/PrintStream;)I+9
j runtime.ParallelClassLoading.shared.bootstrap.BootstrapRandomizedLoadingController.run([Ljava/lang/String;Ljava/io/PrintStream;)I+9
j runtime.ParallelClassLoading.shared.bootstrap.BootstrapRandomizedLoadingController.main([Ljava/lang/String;)V+4
v ~StubRoutines::call_stub

Update: Looks like DevOps machines have core dumps disabled by default, so this failure didn't give me a core file for examining. I've updated the doit.ksh script that is still being used by the remaining 5 parallel runs to enable core dumps.
24-06-2014

It's been a week since this single crash and we haven't seen another occurrence. If the test h/w is okay, and the core file is inexplicable, there's no point leaving this open.
24-06-2014

The six parallel copies have hit > 600K iterations with no failures:

$ grep -v PASSED doit_loop.*.log; uptime
doit_loop.0.log:Copy 0: loop #602243...
doit_loop.1.log:Copy 1: loop #602276...
doit_loop.2.log:Copy 2: loop #602126...
doit_loop.3.log:Copy 3: loop #602243...
doit_loop.4.log:Copy 4: loop #602187...
doit_loop.5.log:Copy 5: loop #602077...
12:15:40 up 148 days, 23:44, 1 user, load average: 10.69, 10.91, 10.86

I'm thinking of closing this as not reproducible.
23-06-2014

The six parallel copies have hit > 100K iterations each with no failures:

$ grep -v PASSED doit_loop.*.log; uptime
doit_loop.0.log:Copy 0: loop #100504...
doit_loop.1.log:Copy 1: loop #100447...
doit_loop.2.log:Copy 2: loop #100449...
doit_loop.3.log:Copy 3: loop #100431...
doit_loop.4.log:Copy 4: loop #100443...
doit_loop.5.log:Copy 5: loop #100385...
09:39:35 up 144 days, 21:08, 1 user, load average: 10.15, 10.36, 10.54
19-06-2014

ILW HL? = P2
I: High - Crashes
L: Low - Seems to be very hard to reproduce; has happened once
W: Unknown (High) - No known workaround at this time
18-06-2014

I have six copies of runtime/ParallelClassLoading/bootstrap/random/inner-complex running in parallel on my DevOps machine. That's 1.5X my processor count so it should be a decent load.
18-06-2014

The tonga.output/Tonga.log.log file shows this tidbit:

[2014-06-18T03:48:36.30] TEST_CONCURRENCY="16"
[2014-06-18T03:48:36.30] # Actual: TEST_CONCURRENCY=16

so this test run is executing up to 16 tests in parallel.
18-06-2014

I have runtime/ParallelClassLoading/bootstrap/random/inner-complex running in a loop on my DevOps Linux machine using this JPRT job: 2014-06-18-003131.ddehaven.hotspot

I grabbed a copy of the failure's execution directory, so I'm running with exactly the same options, etc...

Update: The singleton run over lunch didn't repro:

Loop #15297...PASSED.

so I've stopped it for now.
18-06-2014

Here's the crashing code:

src/share/vm/runtime/synchronizer.cpp:

1168 ObjectMonitor * ATTR ObjectSynchronizer::inflate (Thread * Self, oop object) {
1169   // Inflate mutates the heap ...
1170   // Relaxing assertion for bug 6320749.
1171   assert(Universe::verify_in_progress() ||
1172          !SafepointSynchronize::is_at_safepoint(), "invariant");
1173
1174   for (;;) {
1175     const markOop mark = object->mark();
1176     assert(!mark->has_bias_pattern(), "invariant");
1177
1178     // The mark can be in one of the following states:
1179     // *  Inflated     - just return
1180     // *  Stack-locked - coerce it to inflated
1181     // *  INFLATING    - busy wait for conversion to complete
1182     // *  Neutral      - aggressively inflate the object.
1183     // *  BIASED       - Illegal.  We should never see this
1184
1185     // CASE: inflated
1186     if (mark->has_monitor()) {
1187       ObjectMonitor * inf = mark->monitor();
1188       assert(inf->header()->is_neutral(), "invariant");
1189       assert(inf->object() == object, "invariant");
1190       assert(ObjectSynchronizer::verify_objmon_isinpool(inf), "monitor is invalid");
1191       return inf;
1192     }

As luck would have it, I just changed this file, this specific function, and the specific line that is calling ObjectSynchronizer::verify_objmon_isinpool(inf), via the following bug:

JDK-8046758 cleanup non-indent white space issues prior to Contended Locking cleanup bucket

My changes were only white space corrections, but what are the chances...
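The mark states listed in that comment are distinguished by the low bits of the object's mark word. As an illustrative model only (the bit values below follow the commonly documented markOop layout of this era; treat the exact encoding as an assumption, not something quoted from this bug):

```cpp
#include <cstdint>

// States inflate() dispatches on, per the comment in the excerpt above.
enum class MarkState { Neutral, StackLocked, Inflated, Inflating, Marked };

constexpr uintptr_t kLockBits  = 0x3; // low two bits of the mark word
constexpr uintptr_t kLocked    = 0x0; // "00": stack-locked (mark is a stack pointer)
constexpr uintptr_t kUnlocked  = 0x1; // "01": neutral / unlocked header
constexpr uintptr_t kMonitor   = 0x2; // "10": inflated (mark points at an ObjectMonitor)
constexpr uintptr_t kMarkedGC  = 0x3; // "11": marked by GC
constexpr uintptr_t kInflating = 0x0; // whole word zero while inflation is in flight

// Classify a raw mark word. The INFLATING check must come first: its low
// bits look like "stack-locked", but the entire word is zero.
MarkState classify(uintptr_t mark) {
    if (mark == kInflating) return MarkState::Inflating; // busy-wait case
    switch (mark & kLockBits) {
        case kLocked:   return MarkState::StackLocked;
        case kMonitor:  return MarkState::Inflated;
        case kMarkedGC: return MarkState::Marked;
        default:        return MarkState::Neutral;       // kUnlocked ("01")
    }
}
```

In the Inflated case the rest of the mark word is the ObjectMonitor pointer, which is exactly the value line 1190 then vets against the monitor cache; the failing assert means that pointer decoded cleanly as "has_monitor" yet wasn't anywhere in the pool.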
18-06-2014

The following bug caught my eye while I was searching:

JDK-8044021 gc/gctests/RememberedSet crashed with SIGSEGV in ServiceUtil::visible_oop

The crash in JDK-8044021 has a similar stack trace section:

V [libjvm.so+0xee4554] bool ServiceUtil::visible_oop(oopDesc*)+0x28
V [libjvm.so+0x1141521] void ObjectMonitor::enter(Thread*)+0x12d
V [libjvm.so+0x130dc93] void ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0xe3
V [libjvm.so+0x126937e] void SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x1d2
v ~RuntimeStub::_complete_monitor_locking_Java

and here is this bug's interesting section:

V [libjvm.so+0x72ff8b] report_vm_error(char const*, int, char const*, char const*)+0x7b
V [libjvm.so+0xf7252c] ObjectSynchronizer::inflate(Thread*, oop)+0x67c
V [libjvm.so+0xf759bc] ObjectSynchronizer::fast_enter(Handle, BasicLock*, bool, Thread*)+0x1dc
V [libjvm.so+0x9bd368] InterpreterRuntime::monitorenter(JavaThread*, BasicObjectLock*)+0x1c8

Both are going through ObjectSynchronizer::fast_enter(). JDK-8044021 crashes in ServiceUtil::visible_oop() because the associated object is bad. This bug fails an assertion in ObjectSynchronizer::inflate() because the associated object monitor is bad. Not really an exact match, but definitely interesting.
18-06-2014

Didn't find any unresolved matches for "ObjectSynchronizer::verify_objmon_isinpool" in the bug database. None of the bugs that match the testname are unresolved and none appear to be similar. Aurora history for the test didn't show any similar failures.
18-06-2014