JDK-4413680 : Runtime exec hangs if ulimit nofiles is unlimited.
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.lang
  • Affected Version: 1.4a,1.3.0,1.4.0
  • Priority: P4
  • Status: Closed
  • Resolution: Fixed
  • OS: solaris_7,solaris_9
  • CPU: sparc
  • Submitted: 2001-02-09
  • Updated: 2009-06-25
  • Resolved: 2002-02-08
The Version table provides details related to the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

Other             Other             Other
1.3.1_14 Fixed    1.4.0_03 Fixed    1.4.1 Fixed
Related Reports
Duplicate :  
Duplicate :  
Relates :  
Description
Name: boT120536			Date: 02/08/2001


java version "1.3.0"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.0)
Java HotSpot(TM) Client VM (build 1.3.0, mixed mode)


  To reproduce this problem, as root, do:

# ulimit -n unlimited
# java foo

Where foo.class is from:
import java.io.IOException;

public class foo {
    public static void main(String[] argv) throws IOException, InterruptedException {
        String command = "/bin/touch /tmp/junkexec";

        Process process = Runtime.getRuntime().exec(command);
        System.out.println("The process returned " + process.waitFor());
    }
}
 

This is actually a variation of bug id 4043528, which I know has been
closed.  However, if the system nofiles limit is unlimited, Runtime
exec will loop effectively forever.  I think it is a sign problem in
the routine that closes descriptors up to the maximum limit, as it
never gets out of trying to close file descriptors.
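
To make the suspicion concrete, here is a minimal C sketch of the kind of close loop being described; it is not the actual JDK source, and the function name is made up. With the limit set to unlimited, getrlimit() reports RLIM_INFINITY, so the loop bound is astronomically large (or wraps if narrowed to a signed int) and the child appears to hang:

#include <sys/resource.h>
#include <unistd.h>

/* Sketch only: close every descriptor up to the soft RLIMIT_NOFILE. */
static void close_all_fds_naive(void)
{
    struct rlimit rl;
    rlim_t max = 1024;                      /* fallback if getrlimit fails */

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        max = rl.rlim_cur;                  /* RLIM_INFINITY under "ulimit -n unlimited" */

    /* With an unlimited soft limit this loop runs for an absurd number of
       iterations, nearly all of which fail with EBADF. */
    for (rlim_t fd = 3; fd < max; fd++)
        (void) close((int) fd);
}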

Now, perhaps a way to fix this problem on Solaris is to open
/proc/XXXX/fd and close only the descriptors that are actually open.
That way it is independent of the max_filedescriptor value.
(Review ID: 114796) 
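
A hedged C sketch of the suggested approach: scan /proc/self/fd in the child and close only the descriptors that are really open. The function name and the use of dirfd() are illustrative assumptions, not the JDK's actual code.

#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch only: close every open descriptor >= lowfd by scanning /proc. */
static void close_open_fds_from(int lowfd)
{
    DIR *dp = opendir("/proc/self/fd");
    if (dp == NULL)
        return;                             /* caller would need a fallback strategy */

    int self = dirfd(dp);                   /* don't close the directory stream itself */
    struct dirent *de;
    while ((de = readdir(dp)) != NULL) {
        if (de->d_name[0] < '0' || de->d_name[0] > '9')
            continue;                       /* skip "." and ".." */
        int fd = atoi(de->d_name);
        if (fd >= lowfd && fd != self)
            (void) close(fd);
    }
    closedir(dp);
}

The number of close() calls is then proportional to the number of open descriptors rather than to the nofiles limit.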
======================================================================
###@###.### 10/5/04 22:28 GMT

Comments
CONVERTED DATA
BugTraq+ Release Management Values
COMMIT TO FIX: 1.4.0_03 hopper
FIXED IN: 1.4.0_03 hopper
INTEGRATED IN: 1.4.0_03 hopper
VERIFIED IN: hopper-beta
14-06-2004

EVALUATION
To be fixed. ###@###.### 2001-11-15

[dp@eng 2001-12-12] For future reference for those who may encounter bugs like this: libc (starting in Solaris 9) implements closefrom(3C), which efficiently closes all FDs greater than a given number. The fix implemented for the JDK (as far as I understand it) is roughly a clone of this routine: it looks in /proc/<pid>/fd/ to see which FDs are open. This is safe to do only after a fork1(), when you know you will not race with other threads performing open() calls. This had become a serious performance problem on Solaris 9, where the default FD limit was raised from 1024 to 64K.
11-06-2004
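
The evaluation above points at closefrom(3C). As a rough illustration (the helper below is hypothetical, not the JDK's exec implementation), a Solaris 9+ exec path could call it in the single-threaded child created by fork1():

#include <sys/types.h>
#include <stdlib.h>       /* closefrom(3C) on Solaris */
#include <unistd.h>       /* fork1(), execv(), _exit() */

/* Hypothetical helper: run a command with all inherited descriptors above
   stderr closed.  Error handling is omitted for brevity. */
static pid_t spawn_closed(const char *path, char *const argv[])
{
    pid_t pid = fork1();                    /* Solaris: fork only the calling thread */
    if (pid == 0) {
        closefrom(3);                       /* close everything above stdin/stdout/stderr */
        execv(path, argv);
        _exit(127);                         /* exec failed */
    }
    return pid;                             /* parent: waitpid(pid, ...) as needed */
}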

WORK AROUND
Name: boT120536			Date: 02/08/2001
Use a smaller limit value for nofiles; this still has performance impacts (as 4043528 says).
======================================================================
11-06-2004

PUBLIC COMMENTS
We now close only open file descriptors. ###@###.### 2001-11-15
15-11-2001