JDK-8172319 : Increased memory usage with tiered compilation
  • Type: Bug
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 8
  • Priority: P3
  • Status: Closed
  • Resolution: Incomplete
  • OS: linux
  • CPU: x86_64
  • Submitted: 2017-01-05
  • Updated: 2017-02-08
  • Resolved: 2017-02-08
Description
FULL PRODUCT VERSION :
Tested on several versions of Java:

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)

Tested also on jdk8u92



FULL OS VERSION :
Amazon Linux: 4.4.30-32.54.amzn1.x86_64 #1 SMP Thu Nov 10 15:52:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux


EXTRA RELEVANT SYSTEM CONFIGURATION :
Running on AWS.
We have seen the issue on various Amazon instance types; it is much easier to see on T2 instance types: t2.small, t2.medium.



A DESCRIPTION OF THE PROBLEM :
We are witnessing increased memory consumption by the JVM process when TieredCompilation is enabled (the default in Java 8).

We first witnessed this on The Application three months ago, when we attempted to move to a smaller instance type (less RAM) and started seeing instance terminations due to exhausted memory. Since then we have slowly been disabling features and ruling out the application code base.

We ran The Application with jemalloc (https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/) to determine whether there was a native memory leak anywhere in the code base. This did not point to anything application-related, but it did point at JIT-related compilation code (I can provide the PostScript outputs if required).

Each instance runs two Java processes: The Application and Logstash.
Running only The Application with -Xint still results in instance termination due to memory exhaustion (Logstash is consuming the memory). Enabling -Xint on both The Application and Logstash stops the memory growth.

To investigate further and narrow things down (as disabling the JIT on a server app is not ideal), we tried our luck with disabling TieredCompilation on both Java processes. Luck was on our side: with TieredCompilation disabled, the growing memory consumption does not occur.

The reason I am calling this continual memory consumption rather than a memory leak at this point is that I do not know whether there is a hidden HotSpot JVM flag that controls the memory arena used by the client compiler for compiled methods that collect profiling information, or the various arenas other than the usual -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize.
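One way to narrow down which arena is growing, before concluding the growth is in memory the JVM does not account for, is to print the JVM's own non-heap pool usage from inside the process. This is a minimal sketch using the standard java.lang.management API (not a flag from the bug report; if RSS grows while these pools stay flat, the growth is in native allocations outside them):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class PoolWatch {
    // Prints the usage of every non-heap memory pool the JVM exposes
    // (e.g. "Code Cache", "Metaspace", "Compressed Class Space").
    // Pool names vary between JVM versions, so we filter by type only.
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                System.out.printf("%-30s used=%d committed=%d max=%d%n",
                        pool.getName(),
                        pool.getUsage().getUsed(),
                        pool.getUsage().getCommitted(),
                        pool.getUsage().getMax());
            }
        }
    }
}
```

Sampling this periodically alongside top/pmap output makes it possible to tell JVM-accounted growth apart from unaccounted native growth.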

----

I've created a simple "REST app" (included in this report) and run it on an instance:

----
java -server -Xms2m -Xmx2m -Xss256k -XX:CompressedClassSpaceSize=1m -XX:MaxMetaspaceSize=8m -XX:MetaspaceSize=8m -XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=4m -classpath http-sample.jar Test
----

I've then hit the application millions of times over the course of a day with ApacheBench:

ab -n 10000000 -c 1 localhost:8000/test


With TieredCompilation enabled, this is the Java process over the course of a day.

jdk8u111:
----
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
20940 tootelld  20   0  684m  51m  15m S 97.1  5.2   9:25.52 java
20940 tootelld  20   0  684m  95m  15m S 99.9  9.6 358:55.98 java
20940 tootelld  20   0  812m 228m  15m S 94.0 23.0 990:56.71 java
20940 tootelld  20   0  876m 237m  15m S 94.9 23.9   1033:05 java
----

jdk8u102:
----
PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3491 tootelld  20   0  684m  51m  15m S 91.5  5.2  15:24.25 java                                                                                                               
 3491 tootelld  20   0  876m 264m  15m S 92.9 26.6   1148:11 java    
 3491 tootelld  20   0  876m 293m  15m S 89.3 29.5   1291:45 java  
 3491 tootelld  20   0  940m 307m  15m S 96.8 30.9   1365:58 java   
----


With tiered compilation disabled (-XX:-TieredCompilation):

jdk8u102:
----
26131 tootelld  20   0  684m  57m  15m S 94.9  5.7  78:28.65 java
26131 tootelld  20   0  684m  57m  15m S 91.7  5.7 206:17.62 java
26131 tootelld  20   0  684m  57m  15m S 93.5  5.7 549:39.20 java
26131 tootelld  20   0  684m  57m  15m S 90.9  5.7 658:55.15 java
26131 tootelld  20   0  684m  57m  15m S 92.4  5.7   1089:51 java 
----


Please note: in production, on The Application where we first noticed the continually growing memory consumption, we have tried setting the Metaspace and code cache flags to various values as well as leaving them at their defaults:

-XX:CompressedClassSpaceSize 
-XX:MetaspaceSize=8m  -XX:MaxMetaspaceSize=8m 
-XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=4m 
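When experimenting with flag combinations like these, it helps to confirm the effective values from inside the process rather than trusting the command line. A minimal sketch using HotSpot's diagnostic MXBean (this class is not part of the bug report; it assumes a HotSpot JVM, since com.sun.management is HotSpot-specific):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class FlagCheck {
    // Prints the effective values of the sizing flags under test,
    // so each run's configuration can be confirmed in-process.
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean = ManagementFactory
                .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        for (String flag : new String[] {
                "CompressedClassSpaceSize", "MetaspaceSize",
                "MaxMetaspaceSize", "InitialCodeCacheSize",
                "ReservedCodeCacheSize", "TieredCompilation" }) {
            System.out.println(flag + " = " + bean.getVMOption(flag).getValue());
        }
    }
}
```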




THE PROBLEM WAS REPRODUCIBLE WITH -Xint FLAG: No

THE PROBLEM WAS REPRODUCIBLE WITH -server FLAG: Yes

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Compile the given source code and package it into a jar:

javac Test.java
jar -cvf http-sample.jar .

Then run the application as follows:

java -server -Xms2m -Xmx2m -Xss256k -XX:CompressedClassSpaceSize=1m -XX:MaxMetaspaceSize=8m -XX:MetaspaceSize=8m -XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=4m -classpath http-sample.jar Test

To compare, repeat the run with -XX:-TieredCompilation added.

----

Then hit the application with HTTP requests (ApacheBench) and wait. You need to hit it with millions of requests, for example:

ab -n 10000000 -c 1 localhost:8000/test



EXPECTED VERSUS ACTUAL BEHAVIOR :
Expected: the Java process should not continually grow in resident and virtual memory. Actual: it grows continually, which ends in application instability and box instability.
REPRODUCIBILITY :
This bug can be reproduced always.

---------- BEGIN SOURCE ----------
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicInteger;

public class Test {

    public static AtomicInteger count = new AtomicInteger(0);
    public static Content now = new HelloNew();
    public static Content old = new HelloOld();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        server.createContext("/test", new MyHandler());
        server.setExecutor(null); // creates a default executor
        server.start();
    }

    static class MyHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange t) throws IOException {
            byte[] res;
            // Alternate between the two Content implementations so the
            // call site stays polymorphic.
            if (count.incrementAndGet() % 2 == 0) {
                res = now.get();
            } else {
                res = old.get();
            }
            t.sendResponseHeaders(200, res.length);
            OutputStream os = t.getResponseBody();
            os.write(res);
            os.close();
        }
    }

    public interface Content {
        byte[] get();
    }

    public static class HelloOld implements Content {
        public byte[] get() {
            return "2016".getBytes();
        }
    }

    public static class HelloNew implements Content {
        public byte[] get() {
            return "2017".getBytes();
        }
    }
}
---------- END SOURCE ----------

CUSTOMER SUBMITTED WORKAROUND :
-XX:-TieredCompilation


Comments
The bug could not be reproduced. It can be reopened if a reproducible test case or clear reproduction steps are provided.
08-02-2017

After the discussion with Poonam, it appears this could be a duplicate of JDK-8164293, but unfortunately we cannot verify it. As suggested by David, requesting the submitter to try building OpenJDK. == We suspect this could be one of the issues fixed recently, https://bugs.openjdk.java.net/browse/JDK-8164293, but this fix is not public yet. If you could build OpenJDK and try it, you may find this issue fixed. Here are the instructions for building: http://openjdk.java.net/projects/build-infra/guide.html. Do let me know if you need further assistance.
19-01-2017

Received a response from the submitter containing pmap output and log files, attached. == I ran the tests for ~200 mins each. The tiered-compilation log is much larger. Please find attached the pmaps, and links to the log files you can download. Let me know if you have any issues getting the logs, or if you would like more information. ==
16-01-2017

Could be the same issue as https://bugs.openjdk.java.net/browse/JDK-8164293
06-01-2017

Requested the submitter to verify without setting the code cache values, i.e. removing "-XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=4m".
06-01-2017

The issue has been very clearly described and it has a good test case as well. The command line includes "-XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=4m". When you use TieredCompilation while restricting the code cache this tightly, things may not go well: tiered compilation requires more code cache. The default maximum code cache is 48 MB in Java 7 and 240 MB in Java 8 with tiered compilation. If you do not override the maximum code cache setting, you will be fine on Java 8; if you do override it, you may need to either raise it or disable TieredCompilation. The JVM JIT generates compiled code and stores it in a memory area called the CodeCache. If an application compiles a large number of methods, producing a large amount of compiled code, the CodeCache may become full; when it does, the compiler is disabled to stop any further method compilation, which may lead to performance issues. Please go through Poonam's blog: https://blogs.oracle.com/poonam/entry/why_do_i_get_message. Verify without "-XX:InitialCodeCacheSize=4m -XX:ReservedCodeCacheSize=4m" and share the result with us.
06-01-2017
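The code-cache-exhaustion hypothesis above can be checked from inside the running process. A minimal sketch using the standard java.lang.management API (this class is illustrative, not part of the report; the pool is named "Code Cache" on JDK 8, while later JDKs with a segmented code cache expose several "CodeHeap ..." pools, so we match both):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheCheck {
    // Reports how full each JIT code cache pool is. With
    // -XX:ReservedCodeCacheSize=4m, seeing a pool near 100% here
    // would confirm that compiled code is hitting the cap.
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.startsWith("Code")) {
                long used = pool.getUsage().getUsed();
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    System.out.printf("%s: %d%% full (%d of %d bytes)%n",
                            name, used * 100 / max, used, max);
                }
            }
        }
    }
}
```

Running this (or polling the same pools from the test app) during the ab load would distinguish "code cache full, compiler churning" from growth elsewhere.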