The current implementation of javax.sql.rowset.serial.SerialClob.position(Clob searchStr, long start) has the following code (lines 311-322 of SerialClob.java):

    public long position(Clob searchStr, long start)
        throws SerialException, SQLException {

        char cPattern[] = null;
        try {
            java.io.Reader r = searchStr.getCharacterStream();
            cPattern = new char[(int)searchStr.length()];
            r.read(cPattern);
        } catch (IOException e) {
            throw new SerialException("Error streaming Clob search data");
        }
        return position(new String(cPattern), start);
    }

However, Reader.read(char[] cbuf) does not guarantee that exactly cbuf.length characters will be read; a single call may fill only part of the buffer. So this code has a potential problem whenever fewer than cbuf.length characters are returned. While this is probably hard to reproduce in real applications, it can happen in theory, so it seems better to update the code to protect against it. For example:

    int totalRead = 0;
    while (totalRead < cPattern.length) {
        int read = r.read(cPattern, totalRead, cPattern.length - totalRead);
        if (read == -1) {
            break; // stream ended before the expected length
        }
        totalRead += read;
    }

By the way, why not just use searchStr.getSubString(1, (int) searchStr.length()) instead of reading from the stream? (Note that Clob positions are 1-based, so the substring starts at position 1.) From a performance point of view getSubString looks better.
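For completeness, here is a minimal sketch of what the hardened method could look like inside SerialClob. This is an illustration, not the actual JDK fix: the read loop, the try-with-resources form, and the use of new String(cPattern, 0, totalRead) to cope with a short stream are my additions.

    public long position(Clob searchStr, long start)
            throws SerialException, SQLException {

        char[] cPattern = new char[(int) searchStr.length()];
        int totalRead = 0;
        try (java.io.Reader r = searchStr.getCharacterStream()) {
            // A single read() may return fewer characters than requested,
            // so loop until the buffer is full or the stream ends.
            while (totalRead < cPattern.length) {
                int read = r.read(cPattern, totalRead, cPattern.length - totalRead);
                if (read == -1) {
                    break; // stream ended early
                }
                totalRead += read;
            }
        } catch (IOException e) {
            throw new SerialException("Error streaming Clob search data");
        }
        // Search using only the characters actually read, so a short
        // stream does not leave trailing '\u0000' characters in the pattern.
        return position(new String(cPattern, 0, totalRead), start);
    }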
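And a sketch of the getSubString-based alternative mentioned above, again as an illustration rather than the actual fix. Clob.getSubString takes a 1-based starting position and an int length, per the JDBC javadoc:

    public long position(Clob searchStr, long start)
            throws SerialException, SQLException {
        // getSubString avoids the streaming path entirely; the first
        // character of a Clob is at position 1 per the JDBC spec.
        String pattern = searchStr.getSubString(1, (int) searchStr.length());
        return position(pattern, start);
    }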