JDK-6876535 : G1: potential C-heap leak
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs16,hs17
  • Priority: P3
  • Status: Closed
  • Resolution: Duplicate
  • OS: generic,solaris_10
  • CPU: generic,sparc
  • Submitted: 2009-08-27
  • Updated: 2013-09-18
  • Resolved: 2010-01-11
Version Table
Other: hs17 (Resolved)
Related Reports
Duplicate:
Description
Some beta customers who have tried G1 report that it might have a C-heap leak: they observe the JVM's footprint steadily climbing while the application runs.

Comments
EVALUATION I should also add that Andrey's fix for 6870843 does not fix this. So, it's clearly a different issue.
21-10-2009

EVALUATION The problem is indeed caused by SATB buffers. It looks as if the marking threads are not keeping up with the application threads and the SATB buffer queue gets overly large. This has a couple of side-effects. First, since we do drain the SATB buffers at the beginning of GC pauses, there's way too much overhead on said pauses due to this draining. Second, since we currently do not de-allocate SATB buffers but add them to a list for future re-use, that list gets way too large (and its contents are not used between marking cycles), which is what causes the leak.
21-10-2009
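
The growth pattern described above can be illustrated with a small, self-contained C++ sketch. It is purely hypothetical and is not HotSpot's actual SATB queue code: drained buffers are parked on a re-use list that is never trimmed, so a marking cycle that falls behind leaves the process's C-heap high-water mark permanently inflated, while the capped branch in release() shows one obvious direction for a fix.

// Hypothetical sketch of the leak pattern described above -- not HotSpot's
// actual SATB queue implementation.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Buffer {
  static const size_t kWords = 1024;
  void* slots[kWords];                  // entries recorded by the write barrier
};

class BufferFreeList {
  std::vector<Buffer*> _parked;         // buffers kept around for future re-use
  size_t _cap;                          // 0 means "no cap" (the leaky behaviour)
public:
  explicit BufferFreeList(size_t cap = 0) : _cap(cap) {}

  Buffer* allocate() {
    if (!_parked.empty()) {             // re-use a parked buffer if possible
      Buffer* b = _parked.back();
      _parked.pop_back();
      return b;
    }
    return new Buffer();                // otherwise take fresh C-heap memory
  }

  // Called once a completed buffer has been drained (e.g. during a pause).
  void release(Buffer* b) {
    if (_cap == 0 || _parked.size() < _cap) {
      _parked.push_back(b);             // parked forever: footprint never shrinks
    } else {
      delete b;                         // capped variant: return memory to the C heap
    }
  }

  size_t parked() const { return _parked.size(); }
};

int main() {
  BufferFreeList uncapped;              // mimics the reported behaviour
  std::vector<Buffer*> in_use;
  for (int i = 0; i < 10000; i++) in_use.push_back(uncapped.allocate());
  for (size_t i = 0; i < in_use.size(); i++) uncapped.release(in_use[i]);
  // All 10,000 buffers (about 80 MB of slot storage on a 64-bit VM) stay
  // parked until the process exits, even though marking is long finished.
  std::printf("parked buffers: %zu\n", uncapped.parked());
  return 0;
}

With no cap, the burst in main() leaves all 10,000 buffers (roughly 80 MB of slot storage on a 64-bit VM) parked until the process exits, even though marking has long finished; that is the same shape as the footprint jump observed when the first marking cycle starts.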

EVALUATION I wonder whether it's the SATB / update buffers that we leak. I can't think of anything else that we will be heavily allocating during a marking phase.
28-08-2009

EVALUATION It looks as if I can reproduce what appears to be a leak. I did a long fixed-warehouse-number jbb run on a Linux amd64 box. First, I tried with MTT=4 (MaxTenuringThreshold) to do only evacuation pauses. This is what I got by looking at top every now and then:

  PID  USER     PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+      COMMAND
25527  ap31282  15   0  3434m  2.5g  3544  S    52  32.6    0:01.56  java
25527  ap31282  15   0  3776m  3.2g  9012  S   794  41.4    4:25.00  java
25527  ap31282  15   0  3779m  3.2g  9012  S   795  41.5   10:10.12  java
25527  ap31282  15   0  3779m  3.2g  9012  S   795  41.5   21:34.32  java
25527  ap31282  15   0  3811m  3.3g  9012  S   795  42.0   63:18.75  java
25527  ap31282  15   0  3811m  3.3g  9012  S   795  42.0  144:38.22  java
25527  ap31282  15   0  3811m  3.3g  9012  S   794  42.0  191:15.06  java
25527  ap31282  15   0  3811m  3.3g  9012  S   794  42.0  215:33.35  java
25527  ap31282  15   0  3811m  3.3g  9012  S   796  42.0  250:58.99  java
25527  ap31282  15   0  3811m  3.3g  9012  S   796  42.0  332:33.28  java
25527  ap31282  15   0  3811m  3.3g  9012  S   797  42.0  366:38.57  java

The above looks stable to me. After initialization, the overall footprint only goes up by a little over time, which can be explained by the old generation growing very slowly and, as a result, some new RSet entries being created. I then set MTT=0 to force old-generation activity. Here's the data from top:

  PID  USER     PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+      COMMAND
24569  ap31282  16   0  3572m  3.2g  8920  S   130  40.6    0:04.86  java
24569  ap31282  16   0  5543m  3.6g  9032  S   541  45.7    4:43.95  java
24569  ap31282  16   0  7387m  4.0g  9036  S   127  51.0   18:42.94  java
24569  ap31282  16   0  7319m  3.9g  9036  S   796  50.4   52:08.51  java
24569  ap31282  16   0  7324m  3.9g  9036  S   541  50.5   98:28.70  java
24569  ap31282  16   0  7330m  3.9g  9036  S   508  50.6  151:46.77  java
24569  ap31282  16   0  7332m  4.0g  9036  S   485  50.7  205:11.30  java

This run had many marking cycles (and, in fact, a lot of them bailed out and caused a Full GC), and there was a clear jump in memory footprint when the first marking cycle started. Also, note that the virtual size is close to twice the resident size. I suppose this can be explained by some memory being allocated but not used, so it gets swapped out?
28-08-2009
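
The manual top sampling above is easy to automate. The sketch below is an illustration under assumptions (Linux only; the tool name and command line are made up): it reads the VmSize and VmRSS lines from /proc/<pid>/status, which are the sources of top's VIRT and RES columns, and logs them at a fixed interval so that a steady climb like the one in the MTT=0 run shows up in a log file rather than in occasional glances at top.

// Linux-only sketch (hypothetical tool, not part of the JDK): sample a
// process's footprint the same way top does, by reading VmSize/VmRSS
// from /proc/<pid>/status.
// Usage: ./vmsample <pid> [interval-seconds]
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <string>
#include <thread>

int main(int argc, char** argv) {
  if (argc < 2) {
    std::fprintf(stderr, "usage: %s <pid> [interval-seconds]\n", argv[0]);
    return 1;
  }
  const std::string path = std::string("/proc/") + argv[1] + "/status";
  const int interval = (argc > 2) ? std::atoi(argv[2]) : 60;

  for (;;) {
    std::ifstream status(path);
    if (!status) {                        // target process has gone away
      std::fprintf(stderr, "cannot read %s\n", path.c_str());
      return 1;
    }
    std::string line;
    while (std::getline(status, line)) {
      // VmSize corresponds to top's VIRT column, VmRSS to its RES column.
      if (line.rfind("VmSize:", 0) == 0 || line.rfind("VmRSS:", 0) == 0) {
        std::printf("%s  ", line.c_str());
      }
    }
    std::printf("\n");
    std::fflush(stdout);
    std::this_thread::sleep_for(std::chrono::seconds(interval));
  }
}

Pointed at a fixed-warehouse jbb run and left to sample once a minute, this would produce the same VIRT/RES trend as the two excerpts above without anyone having to watch the process interactively.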