jhat fails reading very large (>2GB) heap dumps:
jhat -debug 1 ...
Read record type 2, length 16 at position 0x337c13
Read record type 2, length 16 at position 0x337c2c
Read record type 12, length -1452344221 at position 0x337c45
java.io.IOException: Bad record length of -1452344221 at byte 0x337c4a of file.
This is an overflow: the heap dump record length stored in the dump file is about 2.7GB, which exceeds Integer.MAX_VALUE and comes out negative when read as a signed 32-bit int.
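A minimal sketch of the overflow (not jhat's actual reader code): the HPROF record length is an unsigned 32-bit field, so a value above 2GB turns negative when held in a Java int, and masking it into a long recovers the intended unsigned value. The 2842623075 constant is chosen because it maps to the -1452344221 seen in the log.

```java
public class UnsignedLengthDemo {
    public static void main(String[] args) {
        // A ~2.7GB record length, as it would appear after being read
        // into a signed 32-bit int (e.g. via DataInputStream.readInt()):
        int raw = (int) 2842623075L;
        System.out.println(raw);               // -1452344221, the "bad" length from the log

        // Widening with a mask recovers the intended unsigned value:
        long length = raw & 0xFFFFFFFFL;
        System.out.println(length);            // 2842623075
    }
}
```

The fix along these lines is to hold the record length in a long (masked to unsigned) rather than in an int.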
Note that jhat's own heap requirements will be very large when reading such a dump; a 64-bit JVM with a large heap will usually be needed, e.g. jhat -J-d64 -J-Xmx6g etc...
It turns out that this bug has been seen in nightly testing:
New vm.heapdump failures (from 2008.03.29)
These tests failed due to "IOException: Bad record length of -174853360 at byte 0x45c9b06c of file." on Solaris AMD64 Server VM (machine intelsdv01).
Update: JMapHeap and JMapPerm were executed in the 2008.03.30 nightly on machine vm-v20z-5 and did not reproduce this failure.
Update: JMapHeap and OnOOMToPath failed in the 2008.07.03
nightly with this failure mode. In the same run, JMapPerm
and OnOOMToFile failed due to 6650690.
Last failure on 2008.07.03 with Solaris AMD64 Server VM (machine intelsdv01)
Previous failure on 2008.03.29 with Solaris AMD64 Server VM (machine intelsdv01)