JDK-6606430 : OOME caused by ChunkedInputStream implementation change in 1.4.2
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.net
  • Affected Version: 1.4.2_12
  • Priority: P3
  • Status: Closed
  • Resolution: Duplicate
  • OS: solaris_10
  • CPU: x86
  • Submitted: 2007-09-19
  • Updated: 2010-08-05
  • Resolved: 2007-11-19
Resolved: 1.4.2_12 (Other)
Related Reports
Duplicate: JDK-6446990
Description
The OOME occurrences became frequent when one of our licensees
migrated their application from JDK 1.3.1_06 to 1.4.2_12.

OS : Solaris 8 -> Solaris 10
JDK: 1.3.1_06  -> 1.4.2_12

They sent a test case in which a JSP file runs on Tomcat, with Apache
as the front-end web server; URLConnectTest.java runs on the client side.

Attached is a -Xprof log, collected by the licensee, showing that an
OOME was thrown.

Below is their investigation.

A payload of 107,161,907 bytes was transmitted with HTTP chunked
encoding. In JDK 1.3.1_06, if the previously read data is sufficient,
the following snippet does not read any further from the underlying stream.

----->
private int read1(byte[] b, int off, int len) throws IOException {
    int avail = count - pos;                // bytes already buffered
    if (avail <= 0) {
        fill();                             // refill only when the buffer is empty
        avail = count - pos;
        if (avail <= 0) return -1;          // end of stream
    }
    int cnt = (avail < len) ? avail : len;  // copy at most what is buffered
    System.arraycopy(buf, pos, b, off, cnt);
    pos += cnt;
    return cnt;
}
<-----
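
To illustrate the behavior the licensee relied on, here is a minimal,
hypothetical sketch (not from the report) of a client drain loop: because
each read copies at most one buffer's worth of already-available data,
memory use on the client stays constant no matter how large the transfer is.

----->
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BoundedReadDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for a large response body (1 MB here for brevity).
        InputStream in = new ByteArrayInputStream(new byte[1 << 20]);
        byte[] buf = new byte[8192];   // fixed-size working buffer
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;   // "process" n bytes; heap use stays at ~8 KB
        }
        System.out.println("read " + total + " bytes");
    }
}
<-----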

In JDK 1.4.2, the implementation below appears to hoard the read data
in chunkData. Repeated expansion of chunkData causes frequent GC, and
because the application's read processing cannot keep up, the
ever-growing buffer eventually results in an OOME. Why was the
implementation changed in this way?

----->
    /*
     * Expand or compact chunkData if needed.
     */
    if (chunkData.length < chunkCount + copyLen) {
         int cnt = chunkCount - chunkPos;
         if (chunkData.length < cnt + copyLen) {
            byte tmp[] = new byte[cnt + copyLen];
            System.arraycopy(chunkData, chunkPos, tmp, 0, cnt);
            chunkData = tmp;
         } else {
            System.arraycopy(chunkData, chunkPos, chunkData, 0, cnt);
         }
         chunkPos = 0;
         chunkCount = cnt;
    }
    /*
     * Copy the chunk data into chunkData so that it's available
     * to the read methods.
     */
    System.arraycopy(rawData, rawPos, chunkData, chunkCount, copyLen);
    rawPos += copyLen;
    chunkCount += copyLen;
    chunkRead += copyLen;
<-----
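
A toy model (hypothetical class and method names, not the JDK source)
makes the failure mode concrete: each append reallocates chunkData to the
exact new size, so a 107 MB transfer that is never consumed performs over
a thousand growing allocations and copies, and ends with the entire
payload resident in a single array.

----->
public class ChunkBufferGrowth {
    static byte[] chunkData = new byte[8192];
    static int chunkPos = 0;
    static int chunkCount = 0;

    // Append copyLen freshly received bytes, expanding chunkData exactly
    // as in the 1.4.2 snippet quoted above.
    static void append(byte[] rawData, int rawPos, int copyLen) {
        if (chunkData.length < chunkCount + copyLen) {
            int cnt = chunkCount - chunkPos;
            if (chunkData.length < cnt + copyLen) {
                byte[] tmp = new byte[cnt + copyLen];   // fresh allocation
                System.arraycopy(chunkData, chunkPos, tmp, 0, cnt);
                chunkData = tmp;
            } else {
                System.arraycopy(chunkData, chunkPos, chunkData, 0, cnt);
            }
            chunkPos = 0;
            chunkCount = cnt;
        }
        System.arraycopy(rawData, rawPos, chunkData, chunkCount, copyLen);
        chunkCount += copyLen;
    }

    public static void main(String[] args) {
        byte[] chunk = new byte[64 * 1024];   // one simulated network read
        // The application never consumes (chunkPos stays 0), so the buffer
        // grows until it holds the whole 107,161,907-byte payload. The
        // repeated full-buffer reallocations and copies are the GC and CPU
        // pressure the licensee observed; run with -Xmx128m to reproduce
        // the OutOfMemoryError itself.
        for (long total = 0; total < 107161907L; total += chunk.length) {
            append(chunk, 0, chunk.length);
        }
        System.out.println("chunkData grew to " + chunkData.length + " bytes");
    }
}
<-----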

The licensee also provided a suggested fix, shown below, which limits
the amount of data read ahead:

----->
***************
*** 558,563 ****
--- 558,564 ----
       * <code>chunkData<code> or we need to determine how many bytes
       * are available on the input stream.
       */
+   static private int MAX_READAHEAD_SIZE = 8192;
      private int readAhead(boolean allowBlocking) throws IOException {

        /*
***************
*** 568,573 ****
--- 569,581 ----
        }

        /*
+         If more than MAX_READAHEAD_SIZE bytes are available now,
+         we don't need any more for the time being.
+        */
+       if (chunkCount - chunkPos > MAX_READAHEAD_SIZE)
+         return chunkCount - chunkPos;
+
+       /*
         * Reset position/count if data in chunkData is exhausted.
         */
        if (chunkPos >= chunkCount) {
<-----
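
In terms of the toy model above, the patch amounts to an early return
before any socket read; the following is a sketch of that guard
(hypothetical method, not the actual JDK change):

----->
static final int MAX_READAHEAD_SIZE = 8192;

// Sketch of the capped readAhead(): if the application already has more
// than MAX_READAHEAD_SIZE unconsumed bytes buffered, report what is
// available and skip the socket read that would grow chunkData further.
static int readAhead(byte[] rawData, int copyLen) {
    if (chunkCount - chunkPos > MAX_READAHEAD_SIZE) {
        return chunkCount - chunkPos;   // plenty buffered; do not grow
    }
    append(rawData, 0, copyLen);        // read from the "socket" and buffer it
    return chunkCount - chunkPos;
}
<-----

With this guard, the toy model's buffer stays near MAX_READAHEAD_SIZE
plus one chunk instead of growing to the full 107 MB payload.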

Comments
EVALUATION This does look like a duplicate of CR 6446990. The read methods of ChunkedInputStream return whatever data is in the internal buffer before trying to process any more from the socket. The only way I can see this problem occurring is if available() is being called multiple times on the stream, and that is exactly the problem fixed by CR 6446990.
19-11-2007
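
For context, here is the kind of client loop that would trigger this (a
hypothetical reconstruction; URLConnectTest.java itself is not reproduced
in this report): on the affected ChunkedInputStream, each available() call
could pull more data from the socket into the internal buffer, so polling
it repeatedly inflates the buffer faster than read() drains it.

----->
import java.io.InputStream;
import java.net.URL;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint serving a large chunked response.
        InputStream in = new URL("http://localhost:8080/big.jsp").openStream();
        byte[] buf = new byte[8192];
        long total = 0;
        while (true) {
            // Anti-pattern on the affected JDK: consulting available()
            // before every read. On the 1.4.2 ChunkedInputStream each call
            // could trigger another read-ahead into the internal buffer,
            // even when data is already buffered.
            int avail = in.available();
            int n = in.read(buf, 0, buf.length);
            if (n == -1) break;
            total += n;   // slow per-chunk processing happens here
        }
        in.close();
        System.out.println("read " + total + " bytes");
    }
}
<-----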