JDK-8080939 : ForkJoinPool and Phaser deadlock
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.util.concurrent
  • Affected Version: 8u40, 8u45, 9
  • Priority: P3
  • Status: Closed
  • Resolution: Fixed
  • Submitted: 2015-05-22
  • Updated: 2015-11-09
  • Resolved: 2015-09-21
Fixed in: JDK 9 b88
Related Reports
Duplicate :  
Description
(synopsis needs to be updated once we discover what is wrong exactly)

Originally found here:
  http://stackoverflow.com/questions/30392753/forkjoinpool-phaser-and-managed-blocking-to-what-extent-do-they-works-against

The sample code:

import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

public class TestForkJoinPool {

    final static ExecutorService pool = Executors.newWorkStealingPool(8); // ForkJoinPool with parallelism 8
    private static volatile long consumedCPU = System.nanoTime();
    private static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        final int numParties = 100;
        final Phaser p = new Phaser(1); // one party pre-registered up front; it never arrives, so the phase never advances
        final Runnable r = () -> {
            int idx = counter.incrementAndGet();

            System.out.println(idx + " arrived at register");
            p.register();

            System.out.println(idx + " arrived at awaitAdvance");
            p.arriveAndAwaitAdvance();

            System.out.println(idx + " arrived at deregister");
            p.arriveAndDeregister();
        };

        for (int i = 0; i < numParties; ++i) {
            consumeCPU(1000000);
            pool.submit(r);
        }

        while (p.getArrivedParties() != numParties) {} // spin until all 100 tasks have arrived at the barrier
    }

    static void consumeCPU(long tokens) {
        // Taken from JMH blackhole
        long t = consumedCPU;
        for (long i = tokens; i > 0; i--) {
            t += (t * 0x5DEECE66DL + 0xBL + i) & (0xFFFFFFFFFFFFL);
        }
        if (t == 42) {
            consumedCPU += t;
        }
    }
}

8u20 finishes fine.
8u40 gets stuck.
8u40 + jsr166 jar (2015-05-22) bootclasspathed gets stuck.
8u20 + jsr166 jar (2015-05-22) bootclasspathed gets stuck.

This points to an issue in java.util.concurrent somewhere between 8u20 and 8u40.
Note that current jsr166 already contains the fix for JDK-8078490, so it does look like a different issue.

It would seem that pre-8u40 FJP produces more threads to handle external submissions while workers are stuck on the Phaser.
Post-8u40 it seems to produce only ten threads on my test rig, and then the test gets stuck.
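
For illustration (not part of the original report), the mechanism in question is ForkJoinPool's managed blocking: a task that is about to block can announce this via ForkJoinPool.managedBlock, allowing the pool to add a spare ("compensation") thread so overall parallelism is preserved, and Phaser performs an equivalent managed-blocking step internally when its waiting thread is a ForkJoinPool worker. The sketch below, with an arbitrary parallelism of 2 and four blocking tasks, only demonstrates the API shape; how aggressively such spare threads are created and retained is what appears to have changed between 8u20 and 8u40.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class ManagedBlockDemo {
    public static void main(String[] args) {
        // Parallelism 2, four tasks that block: the pool may add spare
        // threads while the blockers wait, keeping work runnable.
        ForkJoinPool pool = new ForkJoinPool(2);
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                try {
                    ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
                        @Override public boolean isReleasable() {
                            return false;              // we genuinely need to block
                        }
                        @Override public boolean block() throws InterruptedException {
                            TimeUnit.SECONDS.sleep(1); // stand-in for a barrier wait
                            return true;               // no further blocking needed
                        }
                    });
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // Bounded wait for the demo tasks to finish before exiting.
        pool.awaitQuiescence(5, TimeUnit.SECONDS);
    }
}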
Comments
The javadoc clarification is being integrated into jdk9, so closing as dup of that.
21-09-2015

As hinted at by the StackOverflow poster, progress is not guaranteed here: the program requires up to 100 live parties at a barrier (arriveAndAwaitAdvance) but sets parallelism to only 8. The program previously happened to complete anyway because each spare thread, generated in case it was needed to continue the current task, was kept alive long enough to run a newly submitted task. However, if the consumeCPU time exceeded the keep-alive time, it would also stall on previous versions. So in this sense, it is a race issue in the test program. I don't see a case for changing this in jdk8 anyway because of the incompatibility, but the Phaser spec should be clarified, at least in jdk9 (as follows). We might also consider adjusting internal policies so that this construction will reliably work (without raciness).

The proposed javadoc change to Phaser (old text, then new text):

  * state of the phaser. If necessary, you can perform any
  * associated recovery within handlers of those exceptions,
  * often after invoking {@code forceTermination}. Phasers may
! * also be used by tasks executing in a {@link ForkJoinPool},
! * which will ensure sufficient parallelism to execute tasks
! * when others are blocked waiting for a phase to advance.
  *
  * </ul>
  *
--- 66,75 ----
  * state of the phaser. If necessary, you can perform any
  * associated recovery within handlers of those exceptions,
  * often after invoking {@code forceTermination}. Phasers may
! * also be used by tasks executing in a {@link ForkJoinPool}.
! * Progress is ensured if the pool's parallelismLevel can
! * accommodate the maximum number of simultaneously blocked
! * parties.
27-05-2015
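
For reference, a minimal sketch (not from the report) of the construction the clarified javadoc describes as guaranteed to make progress: the pool's parallelism level accommodates the maximum number of parties that can be blocked inside the pool at once, so completion does not depend on spare-thread heuristics. The class name and the choice to pre-register all parties up front are illustrative.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Phaser;
import java.util.concurrent.TimeUnit;

public class PhaserSufficientParallelism {
    public static void main(String[] args) throws InterruptedException {
        final int numParties = 100;
        // Parallelism covers every party that may block inside the pool.
        final ExecutorService pool = Executors.newWorkStealingPool(numParties);
        // Pre-register the 100 pool tasks plus the main thread.
        final Phaser p = new Phaser(numParties + 1);
        final Runnable r = () -> {
            p.arriveAndAwaitAdvance();  // blocks until all 101 parties arrive
            p.arriveAndDeregister();
        };
        for (int i = 0; i < numParties; ++i) {
            pool.submit(r);
        }
        p.arriveAndAwaitAdvance();      // main thread arrives; the phase advances
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}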