JDK-6494472 : jmap -permstat fails with "Out of swap" error because it uses too much memory
  • Type: Bug
  • Component: core-svc
  • Sub-Component: tools
  • Affected Version: 1.4.2_13, 5.0u8
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: solaris_8
  • CPU: sparc
  • Submitted: 2006-11-16
  • Updated: 2010-12-03
  • Resolved: 2007-04-05
The Version table provides details related to the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

  • Other: 1.4.2_15 b01 (Fixed)
  • JDK 6: 6u2 (Fixed)
  • JDK 7: 7 (Fixed)
Description
It seems to be impossible to use jmap -permstat <java> <core> (1.4.2_13) on a gcore of approximately 2 GB, because it frequently fails with:

Exception in thread "main" java.lang.OutOfMemoryError: requested 8192 bytes for jbyte in /export1/jdk142-update/ws/fcs/hotspot/src/share/vm/prims/jni.cpp. Out of swap space?

Across more than 20 gcore files collected from a running WebLogic server (Solaris 8, Sun-Fire-480R), it was possible to extract permstat info only occasionally (4 times, on gcore files collected when GC usage/activity was low). When it succeeded, the process size of jmap was around 2.6 GB or 3 GB. When it failed, the jmap process grew until it hit the address-space limit.

LD_PRELOAD=SA/interpose/SunOS_sun4u/interpose.so `dirname $0`/../j2sdk1.4.2_13/bin/jmap -J-Xmx1g -J-XX:MaxPermSize=256m -permstat executable corefile

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID     
 24227 claudiom 3782M 3505M cpu7     0   10   0:20:24  10% jmap/1
 24227 claudiom 3782M 3505M sleep   29   10   0:01:12 1.2% jmap/2
 24227 claudiom 3782M 3505M sleep   59    0   0:00:02 0.0% jmap/8
 24227 claudiom 3782M 3505M sleep   59   10   0:00:00 0.0% jmap/7
 24227 claudiom 3782M 3505M sleep   59    0   0:00:00 0.0% jmap/6
 24227 claudiom 3782M 3505M sleep   59    0   0:00:00 0.0% jmap/5
 24227 claudiom 3782M 3505M sleep   51    2   0:00:00 0.0% jmap/4
 24227 claudiom 3782M 3505M sleep   59    0   0:00:00 0.0% jmap/3
Total: 1 processes, 8 lwps, load averages: 5.47, 6.09, 5.52
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID     
Total: 0 processes, 0 lwps, load averages: 5.43, 6.07, 5.52

It is not possible to use the 64-bit JVM as a workaround because of the following error:

LD_PRELOAD_64=SA/interpose/SunOS_sun4u/interpose64.so `dirname $0`/../j2sdk1.4.2_13/bin/sparcv9/jmap -J-d64 -J-Xmx1g -J-XX:MaxPermSize=256m -permstat executable corefile

Attaching to core corefile from executable executable, please wait...
Error attaching to core file: debuggee is 32 bit, use 32 bit java for debugger

Running with +PrintGCDetails and +PrintGCTimeStamps in an additional .hotspotrc shows that jmap does not use all of its Java heap.
It works fine until the "computing liveness..........." phase.
On a Sun-Fire-880 with 32 GB it takes half an hour to run, probably because it grows its allocated memory one 4 MB segment at a time (observed with prstat -L -p <pid>).
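
For reference, the .hotspotrc mentioned above would contain just the two flags (assuming the usual HotSpot flags-file syntax of one option per line, without the -XX: prefix); the same flags can also be passed on the jmap command line as -J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps:

+PrintGCDetails
+PrintGCTimeStamps

The GC log that follows comes from this configuration; note that the Java heap stays well under the -Xmx1g limit: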


1408.771: [GC 1408.771: [DefNew: 21436K->639K(21888K), 0.0298402 secs] 175371K->155920K(201664K), 0.0299709 secs]
1410.698: [GC 1410.698: [DefNew: 21887K->614K(21888K), 0.0129789 secs] 177168K->156534K(201664K), 0.0130856 secs]
...
2066.234: [GC 2066.234: [DefNew: 67972K->439K(69760K), 0.0098242 secs] 469207K->401674K(644104K), 0.0099315 secs]
2073.456: [GC 2073.456: [DefNew: 68087K->939K(69760K), 0.0108945 secs] 469322K->402174K(644104K), 0.0109931 secs]
2078.671: [GC 2078.671: [DefNew: 68587K->1325K(69760K), 0.0105477 secs] 469822K->402559K(644104K), 0.0106818 secs]
...
2595.050: [GC 2595.051: [DefNew: 79159K->1785K(80000K), 0.0203313 secs] 647945K->571185K(738536K), 0.0204670 secs]
2600.518: [GC 2600.518: [DefNew: 79417K->1872K(80000K), 0.0361514 secs] 648817K->572032K(738536K), 0.0362653 secs]
2605.197: [GC 2605.197: [DefNew: 79504K->1457K(80000K), 0.0157982 secs] 649664K->572361K(738536K), 0.0160160 secs]

Exception in thread "main" java.lang.OutOfMemoryError: requested 8192 bytes for jbyte in /export1/jdk142-update/ws/fcs/hotspot/src/share/vm/prims/jni.cpp. Out of swap space?


Total: 1 processes, 8 lwps, load averages: 9.31, 10.41, 10.56
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID     
 21079 claudiom 3487M 3208M run      0   10   0:25:45 8.9% jmap/1
 21079 claudiom 3487M 3208M sleep   29   10   0:01:05 0.0% jmap/2
 21079 claudiom 3487M 3208M sleep   59    0   0:00:03 0.0% jmap/8
 21079 claudiom 3487M 3208M sleep   59   10   0:00:00 0.0% jmap/7
 21079 claudiom 3487M 3208M sleep   59    0   0:00:00 0.0% jmap/6
 21079 claudiom 3487M 3208M sleep   59    0   0:00:00 0.0% jmap/5
 21079 claudiom 3487M 3208M sleep   51    2   0:00:00 0.0% jmap/4
 21079 claudiom 3487M 3208M sleep   59    0   0:00:00 0.0% jmap/3
Total: 1 processes, 8 lwps, load averages: 9.23, 10.38, 10.55
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID     
 21079 claudiom 3487M 3192M run      0   10   0:25:49 8.8% jmap/1
 21079 claudiom 3487M 3192M sleep   29   10   0:01:05 0.0% jmap/2
 21079 claudiom 3487M 3192M sleep   59    0   0:00:03 0.0% jmap/8
 21079 claudiom 3487M 3192M sleep   59   10   0:00:00 0.0% jmap/7
 21079 claudiom 3487M 3192M sleep   59    0   0:00:00 0.0% jmap/6
 21079 claudiom 3487M 3192M sleep   59    0   0:00:00 0.0% jmap/5
 21079 claudiom 3487M 3192M sleep   51    2   0:00:00 0.0% jmap/4
 21079 claudiom 3487M 3192M sleep   59    0   0:00:00 0.0% jmap/3
Total: 1 processes, 8 lwps, load averages: 9.16, 10.34, 10.54
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID     
Total: 0 processes, 0 lwps, load averages: 9.10, 10.31, 10.53
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID     
Total: 0 processes, 0 lwps, load averages: 9.09, 10.29, 10.52

It seems that jmap does its work in a single thread and eats all of the available address space.
Depending on the object distribution in the heap, it often requests too much memory and fails at a process size of approximately 3.5 GB.


LivenessAnalysis: WARNING: sun.jvm.hotspot.utilities.AssertionFailure: FIXME: add derived pointer table during traversal
LivenessAnalysis: WARNING: sun.jvm.hotspot.utilities.AssertionFailure: FIXME: add derived pointer table during traversal
........%


(pmap output collected a few seconds before the failure)

21079:  .../j2sdk1.4.2_13/bin/jmap -J-Xmx1g -J-XX:
 Address  Kbytes     RSS    Anon  Locked Mode   Mapped File
00010000      56      40       -       - r-x--  jmap
0002C000      16      16       8       - rwx--  jmap
00030000    3904    3824    2232       - rwx--    [ heap ]
00400000 2600960 2562176 2502656       - rwx--    [ heap ]
...
B0800000    4096    4096    4096       - rwx--    [ anon ]
B0C00000    4096    4096    4096       - rwx--    [ anon ]
B1000000    4096    4096    4096       - rwx--    [ anon ]
B1400000   12288   12288   12288       - rwx--    [ anon ]
B2000000    4096    4096    4096       - rwx--    [ anon ]
...
FB538000       8       8       8       - rwx--    [ anon ]
FB53A000       8       8       8       - rwx--    [ anon ]
FB53C000       8       8       8       - rwx--    [ anon ]
FB53E000       8       8       8       - rwx--    [ anon ]
FB540000       8       8       8       - rwx--    [ anon ]
FB542000      24      24      24       - rwx--    [ anon ]
....
FFBF0000      64      64      56       - rw---    [ stack ]
-------- ------- ------- ------- -------
total Kb 3427520 3255144 3183232       -
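
Taken together, the GC log and the pmap output point at native C-heap growth rather than Java-heap growth: the Java heap stays under -Xmx1g while the native [ heap ] segment reaches roughly 2.5 GB. A minimal sketch of one JNI pattern that would produce exactly this picture (the CoreReader class and readBytesFromCore function are hypothetical, illustrative names, not the actual libsaproc entry points):

#include <jni.h>

// Hypothetical native read routine for a core-file debugger.
// GetByteArrayElements returns a C-heap copy of the array.
// Releasing it with JNI_COMMIT copies the data back into the
// Java array but does NOT free the copy, so about 8 KB of C heap
// leaks on every page-sized read. That is consistent with the
// "requested 8192 bytes for jbyte in .../jni.cpp" failure once
// the 32-bit address space fills up.
extern "C" JNIEXPORT jbyteArray JNICALL
Java_CoreReader_readBytesFromCore(JNIEnv* env, jobject self,
                                  jlong addr, jlong numBytes) {
  jbyteArray arr = env->NewByteArray((jsize) numBytes);
  if (arr == NULL) return NULL;

  jbyte* buf = env->GetByteArrayElements(arr, NULL);  // C-heap copy
  if (buf == NULL) return NULL;

  // ... read numBytes from the core file at addr into buf ...

  env->ReleaseByteArrayElements(arr, buf, JNI_COMMIT);  // buf leaks
  return arr;
}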


It is not safe to ask customers to run jmap -permstat <pid> on production servers, due to other known bugs (6306997) that can destroy the debuggee process.

Comments
EVALUATION As noted in my last comment: free the buffer used for the array in native code.
19-03-2007
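
Read against the sketch above, "free the buffer" would mean releasing the C-heap copy with mode 0, which both copies the data back and frees the buffer (again a hypothetical sketch under the same assumed names, not the actual libsaproc code):

#include <jni.h>

// Same hypothetical read routine with the leak closed: mode 0
// copies the data back into the Java array AND frees the C-heap
// copy obtained from GetByteArrayElements, so each read returns
// its ~8 KB staging buffer to the VM.
extern "C" JNIEXPORT jbyteArray JNICALL
Java_CoreReader_readBytesFromCore(JNIEnv* env, jobject self,
                                  jlong addr, jlong numBytes) {
  jbyteArray arr = env->NewByteArray((jsize) numBytes);
  if (arr == NULL) return NULL;

  jbyte* buf = env->GetByteArrayElements(arr, NULL);
  if (buf == NULL) return NULL;

  // ... read numBytes from the core file at addr into buf ...

  env->ReleaseByteArrayElements(arr, buf, 0);  // copy back and free
  return arr;
}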