JDK-8185939 : Random OOME while Stream.parallel().collect() when using G1 GC
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 8,9
  • Priority: P3
  • Status: Closed
  • Resolution: Won't Fix
  • OS: generic
  • CPU: x86_64
  • Submitted: 2017-08-07
  • Updated: 2017-08-10
  • Resolved: 2017-08-10
Description
FULL PRODUCT VERSION :
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

FULL OS VERSION :
Microsoft Windows [Version 6.1.7601]

EXTRA RELEVANT SYSTEM CONFIGURATION :
x64, Intel Xeon CPU E5-1620 v3 @ 3.5 GHz

A DESCRIPTION OF THE PROBLEM :
While running the code listed in the field "Source code for an executable test case" with the G1 garbage collector (enabled by the command-line options "-Xmx192m -XX:+UseG1GC"), an OutOfMemoryError is thrown at a random point.

The loop in the test case performs the same operations in every iteration; if one run completes without an OOME, all runs should complete without an OOME.

If the flag "-XX:+UseG1GC" is removed, the code runs without an OOME.

THE PROBLEM WAS REPRODUCIBLE WITH -Xint FLAG: Did not try

THE PROBLEM WAS REPRODUCIBLE WITH -server FLAG: Did not try

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
1. Run the "Source code for an executable test case" with the VM options "-Xmx192m -XX:+UseG1GC"

EXPECTED VERSUS ACTUAL BEHAVIOR :
Expected: normal execution of the sample code
Actual: an OutOfMemoryError is thrown
REPRODUCIBILITY :
This bug can be reproduced often.

---------- BEGIN SOURCE ----------
import java.util.Collection;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class Test {

    public static void main(String[] args) {

        for (int i = 0; i < 100; i++) {
            String name = "round #" + i;

            System.out.println(name);

            Collection<String> tmp = IntStream
                    .range(0, 10000000)
                    .parallel().mapToObj((c) -> name)
                    .collect(Collectors.toList());
        }

        System.out.println("PASSED");

    }
}
---------- END SOURCE ----------
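As a back-of-the-envelope sketch of why this test case stresses G1, the following arithmetic is illustrative only: it assumes a 1 MB G1 region size at -Xmx192m (G1's minimum region size) and 4-byte compressed-oops references, neither of which is stated in the report itself. Under those assumptions, the list's final backing array alone needs dozens of contiguous regions:

```java
public class HumongousEstimate {
    public static void main(String[] args) {
        // Assumed: at -Xmx192m, G1 uses its minimum region size of 1 MB.
        // Any single allocation of at least half a region is "humongous"
        // and must be placed in contiguous free regions.
        long regionBytes = 1L * 1024 * 1024;
        long humongousThreshold = regionBytes / 2;   // 524288 bytes

        // The test collects 10,000,000 references into an ArrayList; with
        // compressed oops (assumed) each reference is 4 bytes, so the final
        // backing array alone is ~40 MB.
        long elements = 10_000_000L;
        long refBytes = 4;
        long backingArrayBytes = elements * refBytes; // 40,000,000 bytes

        // Contiguous regions required for that one array (ceiling division).
        long regionsNeeded =
                (backingArrayBytes + regionBytes - 1) / regionBytes; // 39

        System.out.println("humongous threshold: " + humongousThreshold);
        System.out.println("final backing array: " + backingArrayBytes);
        System.out.println("contiguous regions needed: " + regionsNeeded);
    }
}
```

Because the parallel collect also builds and merges intermediate sublists, several such large arrays can be live at once, so a fragmented 192 MB heap can fail to find enough contiguous regions even when total free space is sufficient.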

CUSTOMER SUBMITTED WORKAROUND :
Do not use G1; just remove -XX:+UseG1GC from the VM options.


Comments
As [~tschatzl] already explained, this issue results from heap fragmentation. G1 manages large (humongous) objects differently, and this leads to the fragmentation. If we enable the gc+heap*=trace log, we can see the memory fragmentation that prevents the requested allocation. Closing this issue as 'Won't Fix'. There are some related enhancements that would improve this situation.
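For reference, the gc+heap*=trace log mentioned above uses JDK 9+ unified logging; a hypothetical invocation against the test case (assuming the class is named Test) would look like:

```shell
# Reproduce with G1 and region-level trace logging; the trace output
# makes the humongous-allocation fragmentation visible.
java -Xmx192m -XX:+UseG1GC -Xlog:gc+heap*=trace Test
```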
10-08-2017

Since this looks like an issue with fragmentation of the Java heap due to large object allocations, another (simple) workaround would be to increase the heap sufficiently.
09-08-2017

This is an issue caused by humongous object allocation; please check the attached "Xlog:gc" output from the sample execution on 9 ea b181.
08-08-2017

The issue is more frequent with G1, but it can be reproduced with ParallelGC as well; it is not reproducible with CMS. Below are the results on 8 and 9:

UseParallelGC - Pass
UseConcMarkSweepGC - Pass
UseParallelOldGC - Fail (OOM after 60 iterations)
UseG1GC - Fail (OOM after 5 iterations)

G1GC fails almost immediately with the OOM below:

Exception in thread "main" java.lang.OutOfMemoryError
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
	at java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
	at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
	at java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at Test.main(Test.java:17)
Caused by: java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:3210)
	at java.util.Arrays.copyOf(Arrays.java:3181)
	at java.util.ArrayList.grow(ArrayList.java:261)
	at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:235)
	at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:227)
	at java.util.ArrayList.addAll(ArrayList.java:579)
	at java.util.stream.Collectors.lambda$toList$3(Collectors.java:231)
	at java.util.stream.Collectors$$Lambda$4/295530567.apply(Unknown Source)
	at java.util.stream.ReduceOps$3ReducingSink.combine(ReduceOps.java:174)
	at java.util.stream.ReduceOps$3ReducingSink.combine(ReduceOps.java:160)
	at java.util.stream.ReduceOps$ReduceTask.onCompletion(ReduceOps.java:754)
	at java.util.concurrent.CountedCompleter.tryComplete(CountedCompleter.java:577)
	at java.util.stream.AbstractTask.compute(AbstractTask.java:317)
	at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.execLocalTasks(ForkJoinPool.java:1040)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1058)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
08-08-2017