FULL PRODUCT VERSION :

A DESCRIPTION OF THE PROBLEM :
HPROF dumps of a heap containing huge primitive arrays are corrupt due to an integer overflow in DumpWriter::write_raw(void* s, int len). Imagine there is a byte[] of length Integer.MAX_VALUE - 2.

See the dumper source code from JDK 9 (the same problem exists in earlier versions too):

void DumpWriter::write_raw(void* s, int len) {
  if (is_open()) {
    // flush buffer to make room
    if ((position() + len) >= buffer_size()) {   // **** overflow here: position() + len < 0, so the branch is not executed
      flush();                                    // *** not reached
    }

    // buffer not available or too big to buffer it
    if ((buffer() == NULL) || (len >= buffer_size())) {
      write_internal(s, len);                     // *** the array bytes are written immediately, but the buffer was never flushed!
    } else {
      // ...
    }
  }
}

THE PROBLEM WAS REPRODUCIBLE WITH -Xint FLAG: Did not try

THE PROBLEM WAS REPRODUCIBLE WITH -server FLAG: Yes

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
1. Run the simple example below:
   java -Xmx4G -cp . BigArray
2. Dump the heap with jconsole or jmap.
3. Open the HPROF file in any tool, e.g. MAT.
4. Observe the error message.

REPRODUCIBILITY :
This bug can be reproduced always.

---------- BEGIN SOURCE ----------
public class BigArray {
    public static void main(String[] args) throws Exception {
        final byte[] array = new byte[Integer.MAX_VALUE - 2];
        System.out.println("Dump heap now...");
        System.in.read();
        System.out.println(array);
    }
}
---------- END SOURCE ----------
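The arithmetic behind the skipped flush can be demonstrated outside HotSpot with a small Java sketch. The buffer size, field names and class name below are illustrative assumptions, not the actual values from heapDumper.cpp:

public class WriteRawOverflow {
    // Hypothetical buffer state; the real values live inside DumpWriter.
    static final int BUFFER_SIZE = 1024 * 1024;
    static int position = 100;                  // bytes already sitting in the buffer

    public static void main(String[] args) {
        int len = Integer.MAX_VALUE - 2;        // size of the huge byte[] being dumped

        // The original check: position + len wraps around to a negative int,
        // so the "flush buffer to make room" branch is skipped and the array
        // is written before the still-buffered bytes, corrupting the dump.
        System.out.println("position + len   = " + (position + len));                         // negative
        System.out.println("flush taken?       " + ((position + len) >= BUFFER_SIZE));        // false

        // An overflow-safe formulation: widen to long (or rearrange the test as
        // len >= BUFFER_SIZE - position) so huge writes still trigger a flush.
        System.out.println("safe flush taken?  " + (((long) position + len) >= BUFFER_SIZE)); // true
    }
}

Rearranging the comparison instead of widening would also work, since position is always smaller than the buffer size, so BUFFER_SIZE - position cannot underflow.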