JDK-8010722 : assert: failed: heap size is too big for compressed oops
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs25
  • Priority: P2
  • Status: Closed
  • Resolution: Fixed
  • Submitted: 2013-03-25
  • Updated: 2017-05-24
  • Resolved: 2013-09-11
  • JDK 8: Fixed in 8
  • Other: Fixed in hs25
  • Other: Fixed in hs25, openjdk7u
Description
;; Using jvm: "/bpool/local/common/jdk/baseline/solaris-amd64/jre/lib/amd64/server/libjvm.so"
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (/opt/jprt/T/P1/170135.amurillo/s/src/share/vm/memory/universe.cpp:889), pid=11448, tid=2
#  assert(!UseCompressedOops || (total_reserved <= (OopEncodingHeapMax - os::vm_page_size()))) failed: heap size is too big for compressed oops
#
# JRE version:  (8.0-b82) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.0-b24-internal-201303231701.amurillo.hs25-b24-snapshot-fastdebug mixed mode solaris-amd64 compressed oops)
# Core dump written. Default location: /bpool/local/aurora/sandbox/results/workDir/compiler/6865031/Test/core or core.11448
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

Comments
Fixes part of the problem by removing the need to size the class metaspace before the heap.
13-08-2013

When using the "conservative approach", where you try to find the maximum possible alignment, you need to query the alignment required by the card table. That calculation may use the large page size, which has not yet been determined at this point. Further, whether large pages are used is determined very late and depends on a few factors, one of which is UseCompressedOops.
17-04-2013

What makes this more complicated is that the alignment not only depends on the GC that will be used (which can already be determined at this stage), but the alignment is also to some degree dependent on the heap size itself. So max_heap_for_compressed_oops() does not do any alignment, because it does not know the applicable alignment yet. I.e., the min/max heap size influences the alignment in the parallel scavenge GC (it tries to calculate the alignment as a multiple of the largest possible page size that is smaller than the maximum heap size). In the other GCs, the alignment is in effect a constant (or a global variable) that is available after initial argument processing. Some options:
- Disable UseCompressedOops at that point again if it has been set by ergonomics.
- Align down, as this operation does not violate the assumption that the heap does not increase later. In Universe::reserve_heap() there is a comment indicating that ClassMetaspaceSize should only be aligned up, though.
- Use the maximum possible alignment (maximum page size) for the selected collector (or across collectors) in max_heap_for_compressed_oops() to return a conservative estimate of that value.
Edit: another cyclic dependency for determining the maximum alignment: G1 is even more problematic than parallel scavenge, because its alignment corresponds to the heap region size, which in turn depends on the heap size.
02-04-2013

There is an alignment requirement for ClassMetaspaceSize in Universe::reserve_heap() that is not enforced earlier in max_heap_for_compressed_oops(), which influences the decision whether to use compressed oops. I.e., the alignment requirement is enforced only after the value has (multiple times) already been used; the use in max_heap_for_compressed_oops() (via set_use_compressed_oops()) is only the first.
02-04-2013

The reason for the mismatching sizes is the following: in the method max_heap_for_compressed_oops() in arguments.cpp we do the same calculation as described above for Universe::reserve_heap(), but in this version we don't align up. Since the value from max_heap_for_compressed_oops() is unaligned, we get a value that is too large when we align it up later in Universe::reserve_heap(). One way to fix this would be to do the alignment in max_heap_for_compressed_oops() too, but the problem is that the alignment depends on which GC will be used. It is not really clear what the best way to fix this is.
27-03-2013

Both tests are used to verify the correctness of code in Universe::preferred_heap_base(). They use the -XX:HeapBaseMinAddress=32g flag for that. The assert was added by the 8001049 changes. Passing to GC group.
26-03-2013

stack
----------------- lwp# 1 / thread# 1 --------------------
 fffffd7fff26bb2a __lwp_wait () + a
 fffffd7fff25e980 _thrp_join () + 60
 fffffd7fff25eace thr_join () + e
 fffffd7ff98240bc ContinueInNewThread0 () + 44
 fffffd7ff9821ae7 ContinueInNewThread () + ab
 fffffd7ff982411d JVMInit () + 49
 fffffd7ff9819861 JLI_Launch () + fcd
 000000000040094b main () + 6f
 000000000040077c ???????? () + fffffffffffffea0
----------------- lwp# 2 / thread# 2 --------------------
 fffffd7fff26bafa _lwp_kill () + a
 fffffd7fff2111b5 raise () + 19
 fffffd7fff1e7bd2 abort () + ca
 fffffd7ff892c09d __1cCosFabort6Fb_v_ () + 119
 fffffd7ff8f0c69e __1cHVMErrorOreport_and_die6M_v_ () + b56
 fffffd7ff78164d3 __1cPreport_vm_error6Fpkci11_v_ () + 55f
 fffffd7ff8e2ce23 __1cIUniverseMreserve_heap6FLL_nNReservedSpace__ () + 6a7
 fffffd7ff8998392 __1cUParallelScavengeHeapKinitialize6M_i_ () + 35a
 fffffd7ff8e2c35e __1cIUniversePinitialize_heap6F_i_ () + 50e
 fffffd7ff8e2af56 __1cNuniverse_init6F_i_ () + f2
 fffffd7ff7be8e6b __1cMinit_globals6F_i_ () + a7
 fffffd7ff8dad1ac __1cHThreadsJcreate_vm6FpnOJavaVMInitArgs_pb_i_ () + 1c8
 fffffd7ff8045a80 JNI_CreateJavaVM () + 78
 fffffd7ff9819e8c JavaMain () + 144
 fffffd7fff26257d _thrp_setup () + a5
 fffffd7fff262820 _lwp_start ()
25-03-2013