JDK-4765019 : unable to create more than 2300 threads
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 1.3.1_05, 1.4.2
  • Priority: P2
  • Status: Closed
  • Resolution: Fixed
  • OS: windows_2000
  • CPU: generic, x86
  • Submitted: 2002-10-18
  • Updated: 2021-11-09
  • Resolved: 2002-12-05
The Version table provides details about the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

  • Other: 1.3.1_07 (Fixed)
  • Other: 1.4.0_04 (Fixed)
  • Other: 1.4.1_03 (Fixed)
  • Other: 1.4.2 (Fixed)
Related Reports
Duplicate :  
Description
We are unable to create more than 2300 threads with the thread stack size set to 256k with JDK 1.3.1_04 on Windows 2000. The process virtual address space grows to only 1.2GB, and the machine has 2GB of physical RAM with 4GB of swap space configured.

The application is a call management server which accepts requests (HTTP, VoiceXML) from a number of callers (simulated in the test environment), creates a thread to handle each request, and then kills that thread. The server worked fine under low load, when the number of requests was not too high and we never created many threads. With the new load configurations, more than 2500 threads are required.

The stack trace at crash is:

5463: Oct 09 14:17:47.315 PDT %MIVR-ENG-7-UNK:java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start(Native Method)
        at com.cisco.jtapi.ObserverThread.?(com/cisco/jtapi/ObserverThread)
        at com.cisco.jtapi.ObserverProxy.<init>(com/cisco/jtapi/ObserverProxy)
        at com.cisco.jtapi.ObserverManager.?(com/cisco/jtapi/ObserverManager)
        at com.cisco.jtapi.ProviderImpl.?(com/cisco/jtapi/ProviderImpl)
        at com.cisco.jtapi.CallImpl.addObserver(com/cisco/jtapi/CallImpl)
        at com.cisco.wf.subsystems.jtapi.TAPIPortGroup$Port$InCallObserverImpl.transfer(TAPIPortGroup.java:6082)
        at com.cisco.wf.subsystems.jtapi.TAPIPortGroup$Port.connect(TAPIPortGroup.java:3892)
        at com.cisco.wf.steps.iaq.SelectResourceStep.execute(SelectResourceStep.java:468)
        at com.cisco.wf.steps.iaq.SelectResourceStep.execute(SelectResourceStep.java:164)
        at com.cisco.wfframework.obj.WFBeanStep.executeImpl(WFBeanStep.java:99)
        at com.cisco.wfframework.obj.WFStep.execute(WFStep.java:120)

The error returned from the Win32 API is:

Not enough storage is available to process this command.
Not enough storage is available to process this command.
Not enough storage is available to process this command.
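
For illustration only, here is a minimal Java sketch of the failure mode described above; the class name OomRepro and the handler body are invented for this report and are not the customer's code. Keeping a few thousand threads alive at once on a 32-bit Windows JVM, e.g. with "java -Xss256k OomRepro 3000", should end in the same OutOfMemoryError.

// OomRepro.java - illustrative sketch, not the customer's application.
// Each thread stands in for a long-lived request handler and simply parks,
// so that thousands of threads are alive at the same time.
public class OomRepro {
    public static void main(String[] args) {
        int target = args.length > 0 ? Integer.parseInt(args[0]) : 3000;
        final Object lock = new Object();
        for (int i = 0; i < target; i++) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    synchronized (lock) {
                        try { lock.wait(); } catch (InterruptedException ie) { }
                    }
                }
            }, "handler-" + i);
            t.setDaemon(true);
            // Thread.start() is where "unable to create new native thread"
            // is thrown once the process address space is exhausted.
            t.start();
            if (i % 500 == 0) System.out.println("started " + i + " threads");
        }
        System.out.println("started all " + target + " threads");
    }
}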

Comments
CONVERTED DATA
BugTraq+ Release Management Values
COMMIT TO FIX: 1.3.1_07 1.4.0_04 1.4.1_03 mantis-beta tiger
FIXED IN: 1.3.1_07 1.4.0_04 1.4.1_03 mantis-beta tiger
INTEGRATED IN: 1.3.1_07 1.4.0_04 1.4.1_03 mantis-b19 mantis-beta tiger-b05
14-06-2004

SUGGESTED FIX http://jpsesvr.sfbay.sun.com:8080/ctetools/CodeStore/434/webrev/webrev.html
11-06-2004

EVALUATION Can you reproduce this with the latest JDK 1.4.1_01 release? Please re-test and let us know. Are you going to need this fixed in an older release? If so, an escalation will need to be filed. Alerting escalation team.

I was able to reproduce this on the latest 1.4.2 release. Running in 64-bit mode gives far better results; it all depends on how much memory you have. I have been running for almost 40 minutes and have 48000 threads.
###@###.### 2002-10-25
###@###.### 2002-10-25

The default stack size is 1 MB if not specified with -Xss, and this results in _os_thread_limit being calculated as around 1800 (= (2GB - 200 MB) / 1MB). After hitting that limit, the JVM code first tries to reserve 20 MB to see whether there is enough virtual memory left; if not, an "OutOfMemoryError" is thrown. The reservation is only for this test and is released if it succeeds. The JVM creates native Windows threads using the default stack size of 0. The problem happens on an MP machine where, with multiple concurrent thread creations, the 20 MB reservation and the stack-space commits interleave and fragment memory, so that finally no single free block is bigger than 20 MB even though the total available virtual memory is far more than 20 MB. With -Xss256K, _os_thread_limit is about 7200. The question comes down to the stack size used when calling _beginthreadex, since while under _os_thread_limit the memory reservation is not invoked. Calling _beginthreadex is even worse when a stack commit size is specified; although there is a lot of advice that it is better not to specify a stack size when creating a thread, a question still remains, as in the attached example (C++): in that example the stack never grows beyond 256K, but we still fail to create more threads, failing between 1300 and 1700 when 256K is specified as the stack size. I think it is a Microsoft bug. The current solution is to supply a -XX VM flag which, when turned on, passes 0 to _beginthreadex as the commit stack size, and meanwhile to use -Xss256K to avoid triggering the 20 MB reservation; the thread count can then go up to 7000.
###@###.### 2002-11-19

Please see the comment section for the latest update. Looks like this is still an issue.
###@###.### 2003-06-02

The customer came back with the same result but with a different root cause: not from Sun Java, but from Microsoft Windows. Starting with Windows 2000 an application can be given a 3GB address space, accomplished with the following steps: 1) in boot.ini, add /3GB so that an application can get more than the default 2GB of address space; 2) boot with this configuration, then, to use the large address space, run the editbin.exe utility to make the application aware of the large address space: editbin /LARGEADDRESSAWARE executable. After the above, an application is supposed to be able to access more than 2GB of address space. The customer did this, hoping the extra 1GB would help their application create even more threads with our supplied flag settings. But something is wrong with Microsoft's driver for memory access when configured like this. In the customer's case, if they go back to the default configuration of 2GB per process, the application runs well. I identified and informed them of this problem; after they consulted Microsoft, Microsoft acknowledged this fault.
The current solution is setting a proper system page number in the Windows registry (using regedit): HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management, Value: SystemPages, Data: 30000. A value of 3000 is recommended by Microsoft, but I found it was not working; for the customer's case, 0x1e000 (122880) made the case move forward without failing. It indeed created more than 8000 Java threads. The final solution will depend on Microsoft correcting the fault in large-address access. (The attached C++ application, with the reserve/release commented out, fails to create more native threads when run with this setting.)
###@###.### 2003-06-02
02-06-2003
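
To make the _os_thread_limit arithmetic from the 2002-11-19 evaluation comment concrete, here is a back-of-envelope sketch; the 200 MB overhead figure is the approximation quoted in that comment, not a measured value, and the printed counts are only estimates of how many stacks fit in a 2 GB address space.

// StackLimitEstimate.java - back-of-envelope version of the calculation
// quoted above: (2 GB user address space - ~200 MB VM overhead) / stack size.
public class StackLimitEstimate {
    public static void main(String[] args) {
        long addressSpace = 2L * 1024 * 1024 * 1024;   // 2 GB user space on 32-bit Windows
        long overhead     = 200L * 1024 * 1024;        // ~200 MB assumed for the VM itself
        long defaultStack = 1024 * 1024;               // 1 MB default stack (no -Xss)
        long smallStack   = 256 * 1024;                // -Xss256k

        // Prints 1848, matching the "around 1800" estimate above.
        System.out.println((addressSpace - overhead) / defaultStack + " threads with a 1 MB stack");
        // Prints 7392, in the same range as the "about 7200" estimate above.
        System.out.println((addressSpace - overhead) / smallStack + " threads with a 256K stack");
    }
}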