JDK-6676462 : JVM sometimes would suddenly consume significant amount of memory
  • Type: Bug
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 5.0u3,5.0u14
  • Priority: P2
  • Status: Resolved
  • Resolution: Fixed
  • OS: linux_2.6,solaris_10
  • CPU: x86,sparc
  • Submitted: 2008-03-18
  • Updated: 2012-10-09
  • Resolved: 2008-07-28
The Version table provides details related to the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

Other                   Other          JDK 6        JDK 7     Other
5.0u16-rev,hs11 Fixed   5.0u17 Fixed   6u14 Fixed   7 Fixed   hs11 Resolved
Related Reports
Duplicate :  
Relates :  
Description
Synopsis : JVM sometimes would suddenly consume significant amount of memory

Description :

  The customer has recently experienced a severe Java memory problem on the Linux platform.
The customer has noticed that the JVM sometimes suddenly consumes a significant amount of memory.
When this happens, the operating system becomes noticeably slower and eventually starts to
terminate other low-priority processes.  The customer has experienced this issue several times
and considers it critical.

The application is a Java program that monitors network devices.
The customer noticed that the program would sometimes suddenly consume a large amount of memory.

Normally, 'top' output shows the JVM consuming ~3G virtual memory and ~500M resident memory, as below:


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

27400 root      25   0 2969m 452m  38m S  0.0 45.2   5:03.36 java.sgm

 

However, when this problem occurs, the JVM memory utilization suddenly increases significantly.
From the 'top' output, the customer observed the JVM using 7.2G virtual memory and 3.5G resident memory.

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

13925 root      19   0 7206m 3.5g 5808 S    2 90.4   6:13.79 java.sgm

 
Furthermore, at the time this problem occurred, the Java APIs "java.lang.Runtime.totalMemory()"
and "java.lang.Runtime.freeMemory()" reported that used heap memory was less than 500M.

We have worked with the customer to take a core file snapshot of the JVM when this problem occurred.
The following steps were used to collect the core file:
   The customer keeps observing the memory usage of the JVM.  When they notice a significant jump in
memory usage, they issue the command "kill -11 <pid>", where <pid> is the Java process exhibiting
the issue.  This produces a 'core' file and an 'hs_err' file for the affected process.


Following is some basic information about this issue:


* Operating System: Red Hat Enterprise Linux WS release 4 (linux-amd64)

   uname:Linux 2.6.9-42.ELsmp #1 SMP Wed Jul 12 23:32:02 EDT 2006 x86_64



* Java version: 1.5.0_13-b05

 

* JVM Parameters:

./java -DPROC_NAME=sgmProcessManager -Xmx2048M -XX:MaxPermSize=512m -server -Djava.endorsed.dirs=/opt/CSCOsgm/server/lib/endorsed -Djboss.server.name=MWTM -Djboss.server.home.dir=/opt/CSCOsgm/server -Djboss.server.home.url=file:/opt/CSCOsgm/server -Djboss.server.config.dir=/opt/CSCOsgm/server/conf -Dprogram.name=sgmServer.sh -Djava.security.egd= -Djava.awt.headless=true -Djava.protocol.handler.pkgs=org.apache.naming.resources -Ddrools.compiler=JANINO -Djava.util.prefs.userRoot=/opt/CSCOsgm/prefs/ -cp /opt/CSCOsgm/properties:/opt/CSCOsgm/etc:/opt/CSCOsgm/server/lib/run.jar:/opt/CSCOsgm/images org.jboss.Main
 


Location of Core dumps :
=========================

 Path : /net/cores.central/cores/dir33/831645

1) core-cisco.zip       ---->> Core File
2) hs_err_pid13925.log  ---->> Error Logs


Need to troubleshoot this problem.

Comments
EVALUATION http://hg.openjdk.java.net/jdk7/hotspot-comp/hotspot/rev/2b73d212b1fd
05-09-2008

EVALUATION This looks like a problem with missing dead loop checks that result in continuously generating AddINodes. Here's what Vladimir had to say:

Yes, it is the dead loop case. It is difficult to determine which node transformation caused it at this phase. From the node ids it seems we do the last transformation in AddNode::Ideal(). We could prevent it if we add the dead code check (this != add2->in(1)). We may need to add the same to the previous optimization.

  // Convert "(x+1)+y" into "(x+y)+1".  Push constants down the expression tree.
  if( add1_op == this_op && !con_right ) {
    Node *a12 = add1->in(2);
    const Type *t12 = phase->type( a12 );
    if( t12->singleton() && t12 != Type::TOP && (add1 != add1->in(1)) ) {
+     if (add1->in(1) == this)
+       return phase->C->top();  // Dead loop
      add2 = add1->clone();
      add2->set_req(2, in(2));
      add2 = phase->transform(add2);
      set_req(1, add2);
      set_req(2, a12);
      progress = this;
      add2 = a12;
    }
  }

  // Convert "x+(y+1)" into "(x+y)+1".  Push constants down the expression tree.
  int add2_op = add2->Opcode();
  if( add2_op == this_op && !con_left ) {
    Node *a22 = add2->in(2);
    const Type *t22 = phase->type( a22 );
    if( t22->singleton() && t22 != Type::TOP && (add2 != add2->in(1)) ) {
+     if (add2->in(1) == this)
+       return phase->C->top();  // Dead loop
      Node *addx = add2->clone();
      addx->set_req(1, in(1));
      addx->set_req(2, add2->in(1));
      addx = phase->transform(addx);
      set_req(1, addx);
      set_req(2, a22);
      progress = this;
    }
  }

Vladimir
02-05-2008
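
For illustration only (this is not a reproducer for the bug, and the class and method names are hypothetical), the plain-Java identity behind the rewrite discussed in the evaluation: C2 turns "(x+1)+y" into "(x+y)+1" to push the constant down the expression tree, and the reported problem was a missing self-reference check while repeatedly applying such rewrites inside the ideal graph, not anything visible at the Java source level.

  // Illustration of the algebraic identity behind the AddNode::Ideal rewrite above.
  // The bug lived entirely inside the C2 ideal-graph transformation; this snippet
  // only shows the "before" and "after" shapes of the expression being rewritten.
  public class AddRewriteIllustration {
      static int before(int x, int y) { return (x + 1) + y; }  // shape before the rewrite
      static int after(int x, int y)  { return (x + y) + 1; }  // shape after the rewrite

      public static void main(String[] args) {
          for (int x = -3; x <= 3; x++) {
              for (int y = -3; y <= 3; y++) {
                  if (before(x, y) != after(x, y)) {
                      throw new AssertionError("identity does not hold");
                  }
              }
          }
          System.out.println("(x+1)+y == (x+y)+1 for all sampled values");
      }
  }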