JDK-8164293 : HotSpot leaking memory in long-running requests
  • Type: Bug
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 8u101
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: linux_ubuntu
  • CPU: x86_64
  • Submitted: 2016-08-11
  • Updated: 2018-02-08
  • Resolved: 2017-01-11
  • Fix Version: JDK 8u131 (Fixed)
Description
FULL PRODUCT VERSION :
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

FULL OS VERSION :
Ubuntu 14.04
Linux andrew-VirtualBox 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

EXTRA RELEVANT SYSTEM CONFIGURATION :
Ubuntu 14.04 Guest running in VirtualBox on OSX 10.11.5 host.
No other changes to standard Ubuntu configuration.

A DESCRIPTION OF THE PROBLEM :
When a web application runs for a long period of time and makes a large number of long-running requests (> 1s), it begins to leak memory.

Looking at the application with JConsole, all of the following pools are bounded, and none of them is growing (a sketch for polling the same pools programmatically follows the list):

* Heap Space
* Meta Space
* Code Cache
* Compressed Class Space
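
For reference, these are the same figures exposed through the standard java.lang.management API; a minimal sketch for polling them from inside the process (illustrative only; the exact pool names vary with the GC configuration, and max is reported as -1 when a pool has no defined limit):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class PoolWatcher {
        public static void main(String[] args) throws InterruptedException {
            // Print used/committed/max for every JVM memory pool once a minute,
            // to confirm that heap, metaspace, code cache and compressed class
            // space all stay bounded while the process footprint keeps growing.
            while (true) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    MemoryUsage u = pool.getUsage();
                    System.out.printf("%-30s used=%dKB committed=%dKB max=%s%n",
                            pool.getName(),
                            u.getUsed() / 1024,
                            u.getCommitted() / 1024,
                            u.getMax() < 0 ? "unbounded" : (u.getMax() / 1024) + "KB");
                }
                System.out.println("----");
                Thread.sleep(60_000);
            }
        }
    }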

As such, I'm not entirely sure where all the extra memory is going. Looking at the application with jcmd and NMT (Native Memory Tracking), I see that the Class/malloc area grows over time. That is, given the following partial output from jcmd <pid> VM.native_memory summary:

-                     Class (reserved=23075KB, committed=15651KB)
                            (classes #2182)
                            (malloc=547KB #6210) 
                            (mmap: reserved=22528KB, committed=15104KB)

The "classes" number remains static, the "mmap" reserved & committed numbers remain static, but the "malloc" number grows over time, and does not decrease.

I have also observed that, after setting MALLOC_ARENA_MAX to 2 on a system with glibc >= 2.16, the malloc arenas seem to fragment over time, and that, after a couple of days, the amount of native heap space reported by pmap grows. For the sample application provided below, I have seen the *native* (*not* Java!) heap space grow to over 200MB used. This seems incorrect.

THE PROBLEM WAS REPRODUCIBLE WITH -Xint FLAG: No

THE PROBLEM WAS REPRODUCIBLE WITH -server FLAG: Yes

REGRESSION.  Last worked in version 7u80

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Check out the sample project from GitHub onto a machine with JRE 1.8.0_101.
cd into narcoleptic
Run "mvn clean package"
Run java -jar target/narcoleptic.jar
cd into ../devourer
Run "mvn clean package"
Run ./run.sh

Download Apache JMeter 3.0 from http://jmeter.apache.org/download_jmeter.cgi
Extract the archive and start with the "bin/jmeter.sh" script.

Open the file "load_test.jmx" with JMeter.

Start the JMeter tests with CTRL+R (CMD+R on OSX)

Leave for *at least* 30 minutes, then come back and observe the amount of memory used by the process. It will have increased beyond what would be expected. Leave for several hours, maybe a couple of days. Memory usage will continue to increase.

EXPECTED VERSUS ACTUAL BEHAVIOR :
Expected behaviour: Memory usage of the application remains stable.

Actual behaviour: Memory usage of the application increases over time, without stopping. It may slow, and *appear* to stop, but given a large enough time span, it will *always* continue to increase.
ERROR MESSAGES/STACK TRACES THAT OCCUR :
N/A - application does not crash unless placed inside a cgroup (for example) with a hard memory limit.

REPRODUCIBILITY :
This bug can be reproduced always.

---------- BEGIN SOURCE ----------
Please find the source code at: https://github.com/ipsi/memory-testing

This is the smallest test case I have been able to find, and I've been looking quite hard.
---------- END SOURCE ----------

CUSTOMER SUBMITTED WORKAROUND :
* Revert to Java 7.
* Disable JIT compilation completely with -Xint.
* Set the MALLOC_ARENA_MAX environment variable to 2, which slows the memory growth but does not halt it.


Comments
I strongly assume there is no leak in mtClass, only steady growth of the OopMapCache. This should stabilize after a while. Waiting for some time to confirm this.
22-11-2016

This is a JDK 8-only issue, as the sweeper implementation has undergone considerable changes in JDK 9. Bug id: https://bugs.openjdk.java.net/browse/JDK-8046809
22-11-2016

After the sweeper memory leak fix, I observe one Class module leak; yet to figure out what is causing it.
31-10-2016

It seems I have found the leak: there is a leak in the sweeper.
31-10-2016

There is a continuous compilation, deopt, make-not-entrant, zombie cycle going on, and a steady increase in thread arena usage. NMT and core-file analysis of the ChunkPool confirm this. When compilation is completely turned off during the leak, the leak stops. Next step: run with a fastdebug build, so that the program hits some assert. P.S.: unlike reported in the bug, it is the Thread arena that leaks in my runs, not the Class module.
30-10-2016
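
As an aside, one way to confirm the continuous-compilation churn described in the comment above from inside the application is the standard CompilationMXBean. An illustrative sketch only (not the tooling used for this analysis); it only sees the JVM it runs in, so it would need to be started on a background thread inside the application under test:

    import java.lang.management.CompilationMXBean;
    import java.lang.management.ManagementFactory;

    public class CompileActivity {
        public static void main(String[] args) throws InterruptedException {
            CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
            if (jit == null || !jit.isCompilationTimeMonitoringSupported()) {
                System.out.println("Compilation time monitoring not supported");
                return;
            }
            long last = jit.getTotalCompilationTime();
            while (true) {
                Thread.sleep(60_000);
                long now = jit.getTotalCompilationTime();
                // If this delta never settles near zero, the JIT keeps recompiling
                // (compile -> deopt -> make-not-entrant -> zombie churn).
                System.out.println("JIT compilation time in the last minute: " + (now - last) + " ms");
                last = now;
            }
        }
    }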

One leak here: https://bugs.openjdk.java.net/browse/JDK-8058563, fixed in 8u101.
21-10-2016

ILW = Memory leak, mostly affecting long-running applications, disable compilation = MMM = P3
21-10-2016

[~zmajo] The test is created in such a way that it cannot be executed on 9.
07-10-2016

Does the problem reproduce with JDK 9?
07-10-2016

Submitter indicates the problem does not occur with -Xint, which suggests it is a compiler issue.
18-08-2016

This issue is reproducible in 8u101. Find the attachment containing the VM native memory summary captured over 6 hours. The test is created in such a way that it cannot be executed on 9. Here is the observation: the "classes" number remains static, the "mmap" reserved & committed numbers remain static, but the "malloc" number grows over time and does not decrease.

Wed Aug 17 12:20:21 IST 2016 - Class (reserved=1084388KB, committed=40292KB) (classes #6334) (malloc=996KB #8259) (mmap: reserved=1083392KB, committed=39296KB)
Wed Aug 17 12:28:47 IST 2016 - Class (reserved=1084529KB, committed=41457KB) (classes #6337) (malloc=1137KB #9962) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 12:38:17 IST 2016 - Class (reserved=1084644KB, committed=41572KB) (classes #6338) (malloc=1252KB #10820) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 12:52:38 IST 2016 - Class (reserved=1084646KB, committed=41574KB) (classes #6316) (malloc=1254KB #9859) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 13:11:58 IST 2016 - Class (reserved=1084740KB, committed=41668KB) (classes #6317) (malloc=1348KB #11157) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 14:10:44 IST 2016 - Class (reserved=1084995KB, committed=41923KB) (classes #6319) (malloc=1603KB #10183) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 14:53:15 IST 2016 - Class (reserved=1085195KB, committed=42123KB) (classes #6319) (malloc=1803KB #12772) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 16:07:03 IST 2016 - Class (reserved=1085240KB, committed=42168KB) (classes #6319) (malloc=1848KB #10280) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 17:54:45 IST 2016 - Class (reserved=1085745KB, committed=42673KB) (classes #6319) (malloc=2353KB #16744) (mmap: reserved=1083392KB, committed=40320KB)
Wed Aug 17 18:13:45 IST 2016 - Class (reserved=1085930KB, committed=42858KB) (classes #6319) (malloc=2538KB #17884) (mmap: reserved=1083392KB, committed=40320KB)
18-08-2016