JDK-8153292 : AllocateInstancePrefetchLines>AllocatePrefetchLines can trigger out-of-heap prefetching
  • Type: Bug
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 8u112,9
  • Priority: P2
  • Status: Closed
  • Resolution: Fixed
  • Submitted: 2016-04-01
  • Updated: 2017-07-26
  • Resolved: 2016-04-21
JDK 9
9 b120 (Fixed)
Description
klass.inline.hpp:63), pid=229, tid=6
#  assert(!is_null(v)) failed: narrow klass value can never be zero
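For context, the failing check is the narrow-klass decode assert shown in the crashing stack trace in the comments below (Klass::decode_klass_not_null, klass.inline.hpp). The following is a simplified, self-contained C++ sketch of that check; everything except the assert message and the function name is an assumption of the sketch:

  #include <cassert>
  #include <cstdint>

  typedef uint32_t narrowKlass; // compressed class pointer, as in HotSpot

  static inline bool is_null(narrowKlass v) { return v == 0; }

  // During GC, each scanned object's header is decoded back to a Klass*.
  // A narrow klass of zero means the word that was read was not a valid
  // object header.
  static inline void* decode_klass_not_null(narrowKlass v) {
    assert(!is_null(v) && "narrow klass value can never be zero");
    // The real decode applies the compressed-class base and shift;
    // a plain shift stands in for it here.
    return reinterpret_cast<void*>(static_cast<uintptr_t>(v) << 3);
  }

  int main() {
    void* k = decode_klass_not_null(42); // fine
    // decode_klass_not_null(0);         // would trip the assert above
    return (k != nullptr) ? 0 : 1;
  }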

Comments
Verified by nightly testing.
26-07-2017

Correction of the above: this problem was not introduced by any change in b42, but starting with b42 it can be easily triggered. Here is the description of the problem and solution from the RFR.

Problem: To avoid out-of-heap accesses by instructions prefetching data, TLABs have a reserved area at the end. The size of that area is supposed to be large enough to accommodate any possible prefetching. The amount of prefetched data is controlled separately for instance and array allocations (by the AllocateInstancePrefetchLines and AllocatePrefetchLines flags, respectively). The size of the reserved area in the TLAB, however, is determined based on AllocatePrefetchLines alone. As a result, setting AllocateInstancePrefetchLines > AllocatePrefetchLines can trigger out-of-heap memory accesses.

Solution: Set the size of the reserved TLAB area based on the MAX of both flags (sketched below).

Some clarification (e.g., for future reference): The TLAB is sized so that there is always space available for the reserved area; see for example http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/4ba240d68b39/src/share/vm/gc/shared/threadLocalAllocBuffer.inline.hpp#l56. When the TLAB is refilled, that reserved area is filled with a VM-internal array containing NULL, for example at http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/file/4ba240d68b39/src/cpu/x86/vm/macroAssembler_x86.cpp#l5490. The reserved area does waste some heap space, but the benefits of prefetching most likely outweigh that waste.
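Below is a minimal, self-contained C++ sketch of the sizing logic described above. It is an illustration, not the actual HotSpot patch: the four flag names are real, but the function names, the sample values, and the exact byte arithmetic are assumptions of the sketch.

  #include <algorithm>
  #include <cstdio>

  // Stand-ins for the HotSpot flags discussed in this comment.
  static int AllocatePrefetchLines         = 3;   // lines prefetched for array allocations
  static int AllocateInstancePrefetchLines = 64;  // lines prefetched for instance allocations
  static int AllocatePrefetchStepSize      = 64;  // bytes covered by each prefetched line
  static int AllocatePrefetchDistance      = 192; // bytes prefetched ahead of the allocation point

  // Before the fix: the reserve only accounted for AllocatePrefetchLines,
  // so instance allocations could prefetch past the end of the TLAB.
  static int reserve_bytes_before_fix() {
    return AllocatePrefetchDistance +
           AllocatePrefetchStepSize * AllocatePrefetchLines;
  }

  // After the fix: take the MAX of both flags so the reserve covers
  // whichever allocation type prefetches further.
  static int reserve_bytes_after_fix() {
    int lines = std::max(AllocatePrefetchLines, AllocateInstancePrefetchLines);
    return AllocatePrefetchDistance + AllocatePrefetchStepSize * lines;
  }

  int main() {
    std::printf("reserve before fix: %d bytes\n", reserve_bytes_before_fix());
    std::printf("reserve after fix:  %d bytes\n", reserve_bytes_after_fix());
    return 0;
  }

With the sample values above the reserve grows from 384 to 4288 bytes; the heap-space waste mentioned in this comment is that difference, paid once per TLAB.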
21-04-2016

The problem started appearing with b42. 256 changes went into that build, so it is unfortunately hard to see which one is the likely cause of this failure. I'm continuing the investigation.
19-04-2016

Looks like an existing problem in the code that is triggered by the test only in rare cases. I was able to reproduce it on a jdk9+112 build by running javac on sparcv9:

javac -J-XX:+UseParallelGC -J-XX:AllocateInstancePrefetchLines=64
01-04-2016

We are scanning nmethods, so this is most likely a compiler issue:

V  [libjvm.so+0x1499720]  void nmethod::oops_do(OopClosure*,bool)+0x570
V  [libjvm.so+0xee6704]  void CodeBlobToOopClosure::do_nmethod(nmethod*)+0x14
V  [libjvm.so+0xb01434]  void CodeCache::scavenge_root_nmethods_do(CodeBlobToOopClosure*)+0x164
01-04-2016

Here's the crashing thread's stack trace:

---------------  T H R E A D  ---------------
Current thread (0x00000001001c5000): GCTaskThread [stack: 0xffffffff7ad00000,0xffffffff7ae00000] [id=6]

Stack: [0xffffffff7ad00000,0xffffffff7ae00000], sp=0xffffffff7adfe950, free space=1018k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x189bf88]  void VMError::report_and_die(int,const char*,const char*,void*,Thread*,unsigned char*,void*,void*,const char*,int,unsigned long)+0xaa8
V  [libjvm.so+0x189b46c]  void VMError::report_and_die(Thread*,const char*,int,const char*,const char*,void*)+0x3c
V  [libjvm.so+0xbec958]  void report_vm_error(const char*,int,const char*,const char*,...)+0x78
V  [libjvm.so+0x7b8890]  Klass*Klass::decode_klass_not_null(unsigned)+0x40
V  [libjvm.so+0x15e3860]  oop PSPromotionManager::copy_to_survivor_space<true>(oop)+0x190
V  [libjvm.so+0x15e2f14]  void PSPromotionManager::copy_and_push_safe_barrier<oop,true>(__type_0*)+0xe4
V  [libjvm.so+0x1499720]  void nmethod::oops_do(OopClosure*,bool)+0x570
V  [libjvm.so+0xee6704]  void CodeBlobToOopClosure::do_nmethod(nmethod*)+0x14
V  [libjvm.so+0xb01434]  void CodeCache::scavenge_root_nmethods_do(CodeBlobToOopClosure*)+0x164
V  [libjvm.so+0x15e16ec]  void ScavengeRootsTask::do_it(GCTaskManager*,unsigned)+0x28c
V  [libjvm.so+0xda13ec]  void GCTaskThread::run()+0x2cc
V  [libjvm.so+0x14ed350]  java_start+0x3f0

Not sure why the thread list from the hs_err_pid file shows the crashing thread as "(exited)":

---------------  P R O C E S S  ---------------
Java Threads: ( => current thread )
  0x0000000100a56000 JavaThread "Service Thread" daemon [_thread_blocked, id=20, stack(0xffffffff51800000,0xffffffff51900000)]
  0x000000010044e000 JavaThread "Sweeper thread" daemon [_thread_blocked, id=19, stack(0xffffffff51c00000,0xffffffff51d00000)]
  0x000000010044b800 JavaThread "C1 CompilerThread3" daemon [_thread_blocked, id=18, stack(0xffffffff51e00000,0xffffffff51f00000)]
  0x0000000100449000 JavaThread "C2 CompilerThread2" daemon [_thread_blocked, id=17, stack(0xffffffff52000000,0xffffffff52100000)]
  0x0000000100442000 JavaThread "C2 CompilerThread1" daemon [_thread_blocked, id=16, stack(0xffffffff52200000,0xffffffff52300000)]
  0x0000000100440000 JavaThread "C2 CompilerThread0" daemon [_thread_blocked, id=15, stack(0xffffffff52500000,0xffffffff52600000)]
  0x000000010043d800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=14, stack(0xffffffff52700000,0xffffffff52800000)]
  0x000000010041f000 JavaThread "Finalizer" daemon [_thread_blocked, id=13, stack(0xffffffff52900000,0xffffffff52a00000)]
  0x0000000100409000 JavaThread "Reference Handler" daemon [_thread_blocked, id=12, stack(0xffffffff52b00000,0xffffffff52c00000)]
  0x00000001001a4000 JavaThread "main" [_thread_blocked, id=2, stack(0xffffffff7ce00000,0xffffffff7cf00000)]

Other Threads:
  0x0000000100402000 VMThread [stack: 0xffffffff52d00000,0xffffffff52e00000] [id=11]
  0x0000000100a71800 WatcherThread [stack: 0xffffffff51600000,0xffffffff51700000] [id=21]
=>0x00000001001c5000 (exited) GCTaskThread [stack: 0xffffffff7ad00000,0xffffffff7ae00000] [id=6]

In any case, the crashing stack is in GC code, so I'm moving this bug from hotspot/runtime -> hotspot/gc for initial triage.
01-04-2016