JDK-8145091 : JMAP fails to dump huge arrays
  • Type: Bug
  • Component: hotspot
  • Sub-Component: svc
  • Affected Version: 9
  • Priority: P3
  • Status: Resolved
  • Resolution: Duplicate
  • Submitted: 2015-12-10
  • Updated: 2016-03-15
  • Resolved: 2016-03-04
JDK 9 : Resolved
Related Reports
Duplicate : JDK-8129419
Duplicate : JDK-8144732
Relates :  
Relates :  
Description
When trying to dump large arrays, jmap fails in various ways:

E.g.

public class Test {
  public static void main(String[] args) throws Exception {
    // 2 * 1024 * 1024 * 1024 overflows int to Integer.MIN_VALUE; subtracting 30
    // wraps again, yielding 2^31 - 30 = 2147483618 elements (~2 GiB of bytes).
    byte[] something = new byte[2 * 1024 * 1024 * 1024 - 30];
    System.out.println("waiting");
    System.in.read(); // keep the process (and the array) alive so jmap can attach
  }
}
fails in the jmap process with a "file size limit" message when run with

java -Xmx12g Test

and

jmap -J-d64 -J-XX:SegmentedHeapDumpThreshold=2M -dump:format=b,file=heap.bin <pid>

on linux/x64.
(Note the forced segmentation: there does not seem to be any reason why writing a ~2 GB object should fail on a 64-bit system, and it also fails without SegmentedHeapDumpThreshold set.)
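
For reference, the same dumper code path in heapDumper.cpp can also be exercised without a separate jmap process by having the target VM dump itself through the HotSpotDiagnosticMXBean. A minimal sketch, assuming the same -Xmx12g heap (the class name SelfDump and the file name heap.bin are just examples):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class SelfDump {
  public static void main(String[] args) throws Exception {
    // Same wrapped array size as the reproducer above.
    byte[] something = new byte[2 * 1024 * 1024 * 1024 - 30];
    HotSpotDiagnosticMXBean diag =
        ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
    // live = false also writes unreachable objects, as jmap -dump does by default.
    diag.dumpHeap("heap.bin", /* live = */ false);
    System.out.println(something.length); // keep the array strongly reachable
  }
}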

This code:

public class Test {
  public static void main(String[] args) throws Exception {
    // Same wrap-around as above: 2^31 - 30 = 2147483618 elements, but as ints
    // the payload is ~8 GiB.
    int[] something = new int[2 * 1024 * 1024 * 1024 - 30];
    System.out.println("waiting");
    System.in.read();
  }
}

crashes the target VM (the one being dumped) with

# To suppress the following error report, specify this argument
# after -XX: or in .hotspotrc:  SuppressErrorAt=/heapDumper.cpp:1043
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  INVALID (0xe0000000) at pc=0x0000000000000000, pid=2168, tid=2236
#  assert(length_in_bytes > 0) failed: nothing to copy
#
# JRE version: Java(TM) SE Runtime Environment (9.0) (build 9-internal+0-2015-12-10-095457.tschatzl.hs-gc)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (9-internal+0-2015-12-09-143703.tschatzl.hs-gc, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
# Core dump will be written. Default location: Core dumps may be processed with "/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e %P %I" (or dumping to ...)
#
# An error report file with more information is saved as:
# .../hs_err_pid2168.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp

and the jmap process errors out with
Exception in thread "main" java.io.IOException: Premature EOF
	at sun.tools.attach.HotSpotVirtualMachine.readInt(HotSpotVirtualMachine.java:294)
	at sun.tools.attach.VirtualMachineImpl.execute(VirtualMachineImpl.java:199)
	at sun.tools.attach.HotSpotVirtualMachine.executeCommand(HotSpotVirtualMachine.java:263)
	at sun.tools.attach.HotSpotVirtualMachine.dumpHeap(HotSpotVirtualMachine.java:226)
	at sun.tools.jmap.JMap.dump(JMap.java:247)
	at sun.tools.jmap.JMap.main(JMap.java:142)

(same command line)
Comments
Both of these cases are now fixed by JDK-8129419 and JDK-8144732. Closing as duplicate.
04-03-2016

The HPROF format requires an entire array to be written as a single record. The maximum array size allowed in Java is max_jint *elements*, but a dump segment's size is limited to a u4, i.e. max_juint *bytes*. So an array with more than (max_juint / element_size) elements overflows the HPROF dump segment (see the arithmetic sketch below).
14-12-2015
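
To make the limit concrete, here is a minimal arithmetic sketch; the constants mirror the reproducers above, and maxJuint stands in for HotSpot's max_juint (0xFFFFFFFF):

public class SegmentLimit {
  public static void main(String[] args) {
    long maxJuint = 0xFFFFFFFFL;                  // u4 limit on an HPROF record, in bytes
    long elements = 2L * 1024 * 1024 * 1024 - 30; // 2^31 - 30 elements, as above

    System.out.println("byte[] payload: " + elements * 1 + " bytes, overflows u4: "
        + (elements * 1 > maxJuint)); // 2147483618 bytes -> false
    System.out.println("int[]  payload: " + elements * 4 + " bytes, overflows u4: "
        + (elements * 4 > maxJuint)); // 8589934472 bytes -> true
  }
}

This matches the two failure modes in the description: the byte[] payload still fits in a u4 record (that run instead hits the "file size limit" failure), while the int[] payload overflows the record length and trips the assert in heapDumper.cpp.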