JDK-8014928 : (fs) Files.readAllBytes() copies content to new array when content completely read
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.nio
  • Affected Version: 8
  • Priority: P4
  • Status: Closed
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2013-05-20
  • Updated: 2013-09-09
  • Resolved: 2013-05-29
Fixed in: 8 b93
Calling this code:

Path p = FileSystems.getDefault().getPath(dir, file);
byte[] b = Files.readAllBytes(p);

is quite inefficient. Files.readAllBytes() calls Files.read(), which allocates a byte array sized to the file and attempts to read the entire file in one shot (so far, so good). When that read succeeds -- because the buffer is now full -- the read() method resizes the buffer (an expensive array copy) and attempts to read additional data, just in case the file grew. When that extra read hits EOF, it returns -- but first it makes yet another copy of the buffer, trimmed to the correct size, and copies all the data back.

I understand the need to handle the case where the file has grown (well, sort of -- there's an inherent race condition, no locking, and no guarantee it will work in any case). But the common case should be optimized here, not penalized.
The original intention was to avoid copying to a new array when the buffer is already sized correctly; this needs to be re-checked.
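A minimal sketch of the intended fast path (this is an illustration, not the actual JDK patch; the class name `ReadAllSketch` and the helper `readAll` are hypothetical): probe for one extra byte when the buffer fills, and if that probe hits EOF, return the buffer directly instead of trimming it with another copy.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ReadAllSketch {
    // Hypothetical sketch: grow the buffer only while reads keep filling it,
    // and return it without a trailing Arrays.copyOf when the byte count
    // exactly matches the buffer length (the common case when the initial
    // size came from the file's reported length).
    static byte[] readAll(InputStream in, int initialSize) throws IOException {
        byte[] buf = new byte[Math.max(initialSize, 1)];
        int nread = 0;
        int n;
        while ((n = in.read(buf, nread, buf.length - nread)) > 0) {
            nread += n;
            if (nread == buf.length) {
                // Probe a single byte before growing; EOF here means the
                // buffer is exactly full and can be returned as-is.
                int b = in.read();
                if (b < 0) {
                    return buf; // common case: no final trimming copy
                }
                buf = Arrays.copyOf(buf, buf.length * 2); // file grew: resize
                buf[nread++] = (byte) b;
            }
        }
        // Uncommon case: the stream was shorter than the buffer.
        return (nread == buf.length) ? buf : Arrays.copyOf(buf, nread);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes();
        // Initial size matches the content length, so no copy is made.
        byte[] out = readAll(new ByteArrayInputStream(data), data.length);
        System.out.println(Arrays.equals(out, data));
    }
}
```

The key design point is ordering: check buffer fullness and probe for EOF *before* resizing, so the expensive copy only happens when the file really did grow past the initial size estimate.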