JDK-8062063 : Usage of UseHugeTLBFS, UseLargePagesInMetaspace and huge SurvivorAlignmentInBytes cause crashes in CMBitMapClosure::do_bit
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 8u40,9
  • Priority: P1
  • Status: Resolved
  • Resolution: Fixed
  • OS: linux_2.6
  • CPU: x86
  • Submitted: 2014-10-24
  • Updated: 2017-07-26
  • Resolved: 2015-01-12
Fixed in:
  JDK 8: 8u40 (Fixed)
  JDK 9: 9 b48 (Fixed)
Description
Usage of UseHugeTLBFS, UseLargePagesInMetaspace and a huge SurvivorAlignmentInBytes (4k) causes crashes in CMBitMapClosure::do_bit:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007facc84f9ee5, pid=9980, tid=140377864763136
#
# JRE version: Java(TM) SE Runtime Environment (8.0_40-b09) (build 1.8.0_40-ea-fastdebug-b09)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.40-b15-internal-201410152225.cccheung.jdk8u-hs-only-fastdebug mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x3fbee5]  oopDesc::size_given_klass(Klass*)+0x25
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#
...
Stack: [0x00007fac44b37000,0x00007fac44c38000],  sp=0x00007fac44c36930,  free space=1022k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x3fbee5]  oopDesc::size_given_klass(Klass*)+0x25
V  [libjvm.so+0x6c45b6]  CMTask::scan_object(oop)+0xe6
V  [libjvm.so+0x6cff7b]  CMBitMapClosure::do_bit(unsigned long)+0x13b
V  [libjvm.so+0x6c7593]  CMTask::do_marking_step(double, bool, bool)+0x1073
V  [libjvm.so+0x6d0d4b]  CMConcurrentMarkingTask::work(unsigned int)+0x19b
V  [libjvm.so+0x1027cf6]  GangWorker::loop()+0x2b6
V  [libjvm.so+0xd67988]  java_start(Thread*)+0x108

The issue could be reproduced on Linux (both 32- and 64-bit, both Client and Server VM, in mixed, compiled and interpreted modes) with 8u40 product and fastdebug bits starting from b11.
The issue could not be reproduced with jdk9-b35 or jdk8u40-b10.
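
A command line along the following lines should trigger the crash (only the flag combination is taken from the report; the workload class and heap size are illustrative assumptions, the stack trace confirms G1, and SurvivorAlignmentInBytes is an experimental flag, so it needs -XX:+UnlockExperimentalVMOptions):

  java -XX:+UseG1GC -XX:+UseLargePages -XX:+UseHugeTLBFS \
       -XX:+UseLargePagesInMetaspace \
       -XX:+UnlockExperimentalVMOptions -XX:SurvivorAlignmentInBytes=4096 \
       SomeAllocatingWorkload   # hypothetical allocation-heavy test class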
Comments
Bug found by nightly testing. Verified by a passed nightly run.
26-07-2017

The proposed fix looks good to me. I agree that it would have been nice to avoid doing the clearing until commit, but as you say the code is cleaner this way. I've also been running the patched build for several hours without being able to trigger the crash. I didn't find a good way to look at the Aurora results, but given that they are good, I think we should try to get this into 9 and then 8u40.
29-12-2014

Potential fix attached. What it basically does is clear the memory on "uncommit", so that the next commit will receive zero-filled memory just as it would if the memory were committed from the OS. The reason this has not been done in the commit method is that it would require workarounds for detecting the first time the memory is committed and skipping the zeroing then; otherwise we would implicitly pre-touch the memory, causing startup regressions.
23-12-2014
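
A minimal sketch of the clearing-on-uncommit approach described in the comment above, assuming a G1PageBasedVirtualSpace-like class with a _special flag for pinned large pages (field names, helpers and bookkeeping are illustrative, not the actual patch):

  // Sketch only: "special" (pinned large-page) reservations cannot really be
  // uncommitted, so clear them here; the next commit then observes the same
  // zero-filled contents it would get from freshly committed OS memory.
  void PageBasedVirtualSpaceSketch::uncommit(size_t start_page, size_t num_pages) {
    char* const start      = _low_boundary + start_page * _page_size; // assumed layout
    size_t const byte_size = num_pages * _page_size;
    if (_special) {
      memset(start, 0, byte_size);           // emulate OS zeroing for the next commit
    } else {
      os::uncommit_memory(start, byte_size); // real HotSpot call for normal pages
    }
    _committed.clear_range(start_page, start_page + num_pages); // assumed commit bitmap
  }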

The fix is good and should work. However, this seems to be a problem further down the stack (i.e. in G1PageBasedVirtualSpace::commit()), which does not provide the necessary guarantee that after calling commit() on some memory page the returned memory is actually zero-filled. Also, ideally we should not be able to access G1PageBasedVirtualSpace::_special (actually I think "fixing" the visibility of _special has been omitted here for brevity :P). If it were kept this way, every user of G1PageBasedVirtualSpace would need to take care of how the underlying virtual space works. The problem is rather the use of large pages.
23-12-2014

I've been digging into this and have now found what seems to be the root cause. When running with "special" large pages we never uncommit the underlying memory used by, for example, the marking bitmaps, and when later committing the memory again we expect it to be cleared, but it is not, since we never actually uncommitted it. This leads to the bitmaps having bits set that do not point to marked objects, or even to objects at all. A simple fix for the issue can look like this:

diff --git a/src/share/vm/gc_implementation/g1/g1RegionToSpaceMapper.cpp b/src/share/vm/gc_implementation/g1/g1RegionToSpaceMapper.cpp
--- a/src/share/vm/gc_implementation/g1/g1RegionToSpaceMapper.cpp
+++ b/src/share/vm/gc_implementation/g1/g1RegionToSpaceMapper.cpp
@@ -67,9 +67,10 @@
   }
 
   virtual void commit_regions(uintptr_t start_idx, size_t num_regions) {
+    bool zero_filled = !_storage.special();
     _storage.commit(start_idx * _pages_per_region, num_regions * _pages_per_region);
     _commit_map.set_range(start_idx, start_idx + num_regions);
-    fire_on_commit(start_idx, num_regions, true);
+    fire_on_commit(start_idx, num_regions, zero_filled);
   }
 
   virtual void uncommit_regions(uintptr_t start_idx, size_t num_regions) {
@@ -118,7 +119,7 @@
       bool zero_filled = false;
       if (old_refcount == 0) {
         _storage.commit(idx, 1);
-        zero_filled = true;
+        zero_filled = !_storage.special();
       }
       _refcounts.set_by_index(idx, old_refcount + 1);
       _commit_map.set_bit(i);

What this basically means is that if the space is special (can't be uncommitted) we need to pass on the information that the underlying space is not zero-filled. I'm not sure this is the cleanest solution, but it fixes the problem. The code causing this is new in 8u40, so this is somewhat of a regression and I think we should try to get it fixed in that release. With the new information we have, I don't think the only reason for making it a P1 is that it fails in nightlies. My ILW would be:
Impact: High, crash.
Likelihood: Medium, I can reproduce it easily.
Workaround: High; not using SurvivorAlignmentInBytes isn't a valid workaround, since it will just decrease the likelihood; the problem still exists.
19-12-2014

One possibility is to restrict the size of SurvivorAlignmentInBytes to the length of 2 cache lines. Before doing that, it would be good to understand how the code that does the alignment handles a 4k alignment. Does 4k really work but just waste lots of space? Does the 4k alignment result in bad allocations? If this is urgent, the 2-cache-line limit can be imposed (with a limited number of lines of code changed) and the investigation of larger alignments done later, if there is good reason to allow larger alignments.
10-12-2014
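
A hedged sketch of what such a limit could look like in HotSpot's argument checking (the constant, message and placement are assumptions for illustration; this is not the change that was eventually made):

  // Sketch only: cap SurvivorAlignmentInBytes at two cache lines during
  // argument processing, per the suggestion in the comment above.
  if (SurvivorAlignmentInBytes > 2 * DEFAULT_CACHE_LINE_SIZE) {
    jio_fprintf(defaultStream::error_stream(),
                "SurvivorAlignmentInBytes (" INTX_FORMAT ") must not be larger "
                "than 2 * cache line size (%d)\n",
                SurvivorAlignmentInBytes, 2 * DEFAULT_CACHE_LINE_SIZE);
    return JNI_EINVAL;
  }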

ILW says P4 but since this happened in our nightlies I raise the priority to P1. We need to clean out the problems in our nightlies asap.
09-12-2014

One fix for this may be to limit SurvivorAlignmentInBytes to a small multiple of the cache line size. At least 2 * cache line size has been suggested, to avoid the problem of look-ahead prefetching of the next cache line causing false sharing on that next cache line.
04-11-2014

This was downgraded to a P4, so stopping work on it. Changing the fix version to 9.
04-11-2014

The Klass in the size_given_klass(Klass*) points to unallocated space. It actually looks like it is pointing into the heap when it should be pointing into Metaspace.

RBX=0x00000000ee3000b0 is pointing into object: 0x00000000ee300018
[I
 - klass: {type array int}
 - length: 1014
 - 0: 0xdeafbabe -558908738
 - 1: 0xdeafbabe -558908738
 - 2: 0xdeafbabe -558908738
 - 3: 0xdeafbabe -558908738
 - 4: 0xdeafbabe -558908738
 - 5: 0xdeafbabe -558908738
PTAMS 0x00000000ee300000 NTAMS 0x00000000ee400000 space 1024K, 100% used [0x00000000ee300000, 0x00000000ee400000)
0x00000000ee300018

Heap:
 garbage-first heap  total 819200K, used 807991K [0x00000000ce000000, 0x00000000ce101900, 0x0000000100000000)
  region size 1024K, 11 young (11264K), 5 survivors (5120K)
 Metaspace    used 10613K, capacity 10696K, committed 12288K, reserved 1058816K
  class space used 1097K, capacity 1152K, committed 2048K, reserved 1048576K
31-10-2014

Allowing SurvivorAlignmentInBytes larger than 2 * or 4 * the cache line size seems unwise. Alignment to the cache line size seems most reasonable, but prefetch of the next cache line puts 2 * the cache line size within the realm of reason. Using 4 * the cache line size would be defensive (for good reasons we cannot foresee at this time).
31-10-2014

ILW = HLL -> P4. The workaround is to use a "reasonable" SurvivorAlignmentInBytes of 32 or 64, or to keep UseLargePagesInMetaspace off (the default).
29-10-2014