JDK-6711183 : OutOfMemoryError in vm\utilities\growableArray.cpp
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 5.0u15
  • Priority: P2
  • Status: Closed
  • Resolution: Not an Issue
  • OS: solaris_10
  • CPU: sparc
  • Submitted: 2008-06-05
  • Updated: 2010-07-29
  • Resolved: 2009-04-08
Version table
  • Other : 5.0-pool (Resolved)
Related Reports
Relates :  
Description
The production application exits after a few hours with the following error:

Exception java.lang.OutOfMemoryError: requested 10485760 bytes for GrET* in /BUILD_AREA/jdk1.5.0_15/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space?

The error occurs while a Full GC is starting.

No third-party libraries used by the application.
No potential memory leak found at the native level.
No core dump generated.

Options used:
-XX:+DisableExplicitGC 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:InteriorEntryAlignment=4 
-XX:+ManagementServer 
-XX:MaxHeapSize=3221225472 
-XX:MaxPermSize=402653184 
-XX:NewSize=1073741824  
-XX:OptoLoopAlignment=4 
-XX:ParallelGCThreads=8 
-XX:+PrintCommandLineFlags 
-XX:+PrintGC 
-XX:+PrintGCApplicationConcurrentTime 
-XX:+PrintGCApplicationStoppedTime 
-XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps 
-XX:+PrintHeapAtGC 
-XX:+PrintTenuringDistribution 
-XX:+TraceClassUnloading 
-XX:-UseInlineCaches 
-XX:+UseParallelGC

The same application with the same workload does not show the issue on RedHat 3.
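
For reference, a rough back-of-the-envelope view of the 32-bit address-space budget implied by the settings above (an editorial sketch, not part of the original report; the 4GB ceiling approximates the usable address space of a 32-bit process, as discussed in the evaluation below):

    // Hypothetical helper: prints the approximate address-space budget
    // implied by -XX:MaxHeapSize and -XX:MaxPermSize on a 32-bit JVM.
    public class AddressSpaceBudget {
        public static void main(String[] args) {
            final long MB = 1024L * 1024;
            long addressSpace = 4096 * MB;     // ~4GB ceiling of a 32-bit process (approximation)
            long maxHeap      = 3221225472L;   // -XX:MaxHeapSize (3072 MB)
            long maxPerm      = 402653184L;    // -XX:MaxPermSize (384 MB)
            long remaining    = addressSpace - maxHeap - maxPerm;
            // Whatever is left must also hold thread stacks, C heap (malloc),
            // code, shared libraries, and the ~10MB GrowableArray allocation
            // requested at the start of the full collection.
            System.out.println("Java heap : " + (maxHeap / MB) + " MB");
            System.out.println("PermGen   : " + (maxPerm / MB) + " MB");
            System.out.println("Left over : " + (remaining / MB) + " MB");
        }
    }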

Comments
EVALUATION This bug has been in the incomplete state for a while. I'm closing it as "not a bug", which is the preliminary conclusion from the evaluation. If there is more information forthcoming, please reopen it.
08-04-2009

EVALUATION It looks like you are out of address space in your 32-bit JVM. You have -XX:MaxHeapSize=3221225472, so 3GB is tied up in the Java object heap. Then you have -XX:MaxPermSize=402653184, which ties up another 384MB of address space for the permanent generation. (Though in fact I see that only 28MB of that has actually been used by the end of the verbosegc.log.) That doesn't leave much room in your 4GB address space for other data structures (thread stacks, C malloc data, code, etc.). The VM gets to the first full collection at 15748.628 seconds, needs 10MB for side data structures during the collection, can't get it, and falls over.

You said you tried decreasing the size of the Java object heap. How much did you decrease it? You only need something like 10MB to get past this first collection, so you shouldn't have to shrink it much. You also say "the system becomes unresponsive". What does that mean? Are you spending a lot of time collecting garbage? Are you paging? (How much physical memory does the box have?) What did the GC logs from that run look like? They would show the live data set for your application and give us a better idea of how to shape the heap.

You could try shrinking your permanent generation some, say to -XX:MaxPermSize=64m (though that's the default maximum permanent generation size, so you could just as well leave the option off your command line). That should free up several hundred MB of address space, which the VM will be able to use for its data structures. On the other hand, that might just delay the inevitable. Since you haven't survived even the first full collection, I can't tell how much of the data that has piled up in the old generation over the previous 4 hours is garbage. If a lot of it is garbage, then we should collect it and let the application get on with its business. If it's live, then you may run for a while and then run out of Java object heap: a real OutOfMemoryError.

You can force full collections with the jconsole tool. That might give us an idea of how much of the old generation is dead objects without waiting 4 hours for the JVM to fall over. You could cause full collections every hour or so that way and post the GC logs here.

Running with a 64-bit JVM would also allow the JVM to allocate the data structures it needs for the collection, and would give us the live data size for the application. On the other hand, a 64-bit JVM with a 3GB heap will hold somewhat fewer objects (because the size of every reference increases from 32 bits to 64 bits), so you might want to increase your heap size some. Also, the overhead of a 64-bit JVM when you are only using 3GB of heap might not be a good tradeoff. But let's get past the first collection before we worry too much about performance.

You say that running on RedHat 3 doesn't show the problem. What are the command-line settings for that run? I doubt you'll get a 3GB heap on a 32-bit RedHat system. If you ran with a lower -XX:MaxHeapSize= setting on RedHat, you should easily be able to run with that same setting on Solaris.

Does this application have throughput or latency requirements? That is, does it not care about GC pauses and just want the highest throughput possible? Or does it require short GC pauses, possibly at the expense of throughput? You are currently using the throughput collector. If you have latency concerns, you probably want to try the low-latency collector (-XX:+UseConcMarkSweepGC).

This seems like a tuning exercise, not a bug.
05-06-2008
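
The evaluation above suggests forcing full collections with the jconsole tool. As an editorial sketch (not part of the original report), the same request can be made over JMX. The host and port below are placeholders, the target JVM must expose a remote JMX port (for example via -Dcom.sun.management.jmxremote.port=9999), and because the call behaves like System.gc() it is ignored while the application runs with -XX:+DisableExplicitGC, which would have to be dropped for the forced collections to take effect.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Hypothetical utility: connects to a running JVM over JMX and requests a
    // full collection via the platform Memory MXBean (java.lang:type=Memory),
    // the MBean behind jconsole's "Perform GC" button.
    public class ForceFullGC {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint for the monitored JVM.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection server = connector.getMBeanServerConnection();
                MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                        server, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
                memory.gc(); // equivalent to System.gc(); a no-op under -XX:+DisableExplicitGC
            } finally {
                connector.close();
            }
        }
    }

Run once an hour or so (for example from cron) and post the resulting GC logs, as the evaluation requests.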