jhat fails reading very large (>2GB) heap dumps:
jhat -debug 1 ...
Read record type 2, length 16 at position 0x337c13
Read record type 2, length 16 at position 0x337c2c
Read record type 12, length -1452344221 at position 0x337c45
java.io.IOException: Bad record length of -1452344221 at byte 0x337c4a of file.
at com.sun.tools.hat.internal.parser.HprofReader.read(HprofReader.java:192)
at com.sun.tools.hat.internal.parser.Reader.readFile(Reader.java:79)
at com.sun.tools.hat.Main.main(Main.java:143)
This is an integer overflow: the length of the heap dump record in the dump file is about 2.7GB, which does not fit in a signed 32-bit int, so it is read back as a negative value.
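For illustration, a minimal standalone sketch (not the actual HprofReader code; class and variable names are made up) of how the 4-byte HPROF record length overflows when read as a signed int, and how masking it to an unsigned value recovers the real length:

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Minimal sketch, not the actual HprofReader source: an HPROF record header
// stores its length as an unsigned 4-byte value, which a ~2.7GB heap dump
// segment does not fit into when interpreted as a Java signed int.
public class RecordLengthSketch {
    public static void main(String[] args) throws IOException {
        // 0xA96EFC63 is the byte pattern whose signed-int value is -1452344221,
        // i.e. 2,842,623,075 (~2.7GB) when interpreted as unsigned.
        byte[] lengthField = { (byte) 0xA9, (byte) 0x6E, (byte) 0xFC, (byte) 0x63 };

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(lengthField));
        int signedLength = in.readInt();                   // -1452344221: triggers "Bad record length"
        long unsignedLength = signedLength & 0xFFFFFFFFL;  // 2842623075: the real record length

        System.out.println("signed   = " + signedLength);
        System.out.println("unsigned = " + unsignedLength);
    }
}

A fix would presumably need to carry the length as a long throughout the reader, not just at the point where it is read.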
Note that jhat's own heap requirements for reading such a dump will be very large; a 64-bit JVM will usually be needed, e.g. jhat -J-d64 -J-Xmx6g etc...
It turns out that this bug has been seen in nightly testing:
New vm.heapdump failures (from 2008.03.29)
heapdump/JMapHeap
heapdump/JMapPerm
heapdump/OnOOMToFile
heapdump/OnOOMToPath
These tests failed due to "IOException: Bad record length of -174853360 at byte 0x45c9b06c of file." on Solaris AMD64 Server VM (machine intelsdv01).
Update: JMapHeap and JMapPerm were executed in the 2008.03.30 nightly on machine vm-v20z-5 and did not reproduce this failure mode.
Update: JMapHeap and OnOOMToPath failed in the 2008.07.03 nightly with this failure mode. In the same run, JMapPerm and OnOOMToFile failed due to 6650690.
Last failure on 2008.07.03 with Solaris AMD64 Server VM (machine intelsdv01)
Previous failure on 2008.03.29 with Solaris AMD64 Server VM (machine intelsdv01)
http://sqeweb.sfbay.sun.com/nfs/results/vm/gtee/JDK7/NIGHTLY/VM/2008-03-29/Serv_Baseline/vm/solaris-amd64/server/comp/vm-solaris-amd64_server_comp_vm.heapdump.testlist2008-03-29-22-27-49/analysis.html
http://sqeweb.sfbay.sun.com/nfs/results/vm/gtee/JDK7/NIGHTLY/VM/2008-07-03/Serv_Baseline/vm/solaris-amd64/server/comp/vm-solaris-amd64_server_comp_vm.heapdump.testlist2008-07-03-19-44-52/analysis.html