The changes for 7082769 can lead to file descriptor exhaustion in applications. Before that fix, closing one I/O stream that referenced a native file descriptor also closed any other streams sharing the same descriptor, which mirrors the behaviour of the underlying OS.
One such report comes from the Hadoop HDFS project. It has code that creates RandomAccessFile instances but never closes them. On JREs without the 7082769 fix this is not a problem, since the first call to close the input/output stream associated with the file descriptor closes the underlying descriptor as well. With the 7082769 fix, the maximum file descriptor count can be reached, because each RandomAccessFile keeps a reference to its underlying descriptor and holds it open.
Some code from the Hadoop project:

@Override // FSDatasetInterface
public synchronized InputStream getBlockInputStream(ExtendedBlock b,
    long seekOffset) throws IOException {
  File blockFile = getBlockFile(b);
  RandomAccessFile blockInFile = new RandomAccessFile(blockFile, "r");
  if (seekOffset > 0) {
    blockInFile.seek(seekOffset);
  }
  // blockInFile is never closed; after the 7082769 change, closing the
  // returned stream no longer releases the descriptor it holds
  return new FileInputStream(blockInFile.getFD());
}
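For comparison (this is not a proposed Hadoop patch), code following this pattern can avoid the leak by tying the RandomAccessFile's lifetime to the returned stream. The sketch below uses a hypothetical RafBackedInputStream wrapper; the class name and demo are illustrative, not from the Hadoop source:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

// Hypothetical wrapper: a FileInputStream over a RandomAccessFile's
// descriptor that also closes the RandomAccessFile on close(), so the
// native descriptor is released even on JREs with the 7082769 change.
class RafBackedInputStream extends FileInputStream {
    private final RandomAccessFile raf;

    RafBackedInputStream(RandomAccessFile raf) throws IOException {
        super(raf.getFD());
        this.raf = raf;
    }

    @Override
    public void close() throws IOException {
        try {
            super.close();
        } finally {
            raf.close(); // release the descriptor the RandomAccessFile holds
        }
    }
}

public class FdLeakDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("fdleak", ".tmp");
        f.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write("hello".getBytes());
        }

        RandomAccessFile raf = new RandomAccessFile(f, "r");
        FileChannel ch = raf.getChannel();
        raf.seek(1); // the seek position is shared with the derived stream
        InputStream in = new RafBackedInputStream(raf);
        System.out.println((char) in.read()); // reads from offset 1: 'e'
        in.close();
        System.out.println(ch.isOpen()); // false: the RAF was closed too
    }
}
```

Because the descriptor (and its file position) is shared, the seek performed on the RandomAccessFile is still honoured by the derived stream, matching the intent of the Hadoop code above.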
Due to this behavioural change, the fix should be backed out, and we should investigate whether the underlying issue can be addressed in a different way that avoids breaking applications that have worked in the past.