see discussion on concurrency-interest mailing list:
http://cs.oswego.edu/pipermail/concurrency-interest/2011-April/007835.html
>> I'm pretty new to the Fork-Join framework, so this is probably an
>> obvious case of user-error, but I haven't been able to figure it out,
>> and thus far, none of the comments on that post have resolved the
>> issue either.
>> I'm profiling a parallel algorithm over a range of thread-counts. My
>> tasks seem to work flawlessly if I create the ForkJoinPool with
>> parallelism > 1 (I've normally been running with 2-24 threads). But if
>> I create the ForkJoinPool with parallelism = 1, I see deadlocks after
>> an unpredictable number of iterations. And yes - setting parallelism =
>> 1 is a strange practice, but I want to accurately ascertain the
>> overhead of the parallel implementation, which means comparing the
>> serial version and the parallel version run with a single thread.
>> Below is a simple example that illustrates the issue I'm seeing. The
>> 'task' is a dummy iteration over a fixed array, divided recursively
>> into 16 subtasks. I chose this odd iteration simply to produce a
>> memory-bound workload - it's possible the task itself interacts oddly
>> with F-J or with JIT optimizations, but if so, I haven't been able to
>> tease out those interactions.
>> If run with THREADS = 2 (or more), it runs reliably to completion, but
>> if run with THREADS = 1, it invariably deadlocks. After an
>> unpredictable number of iterations, the main loop hangs in
>> ForkJoinPool.invoke(), waiting in task.join(), and the worker thread
>> exits. (I've been running between 10000 and 50000 ITERATIONS,
>> depending on the host hardware)
This is the second bug that is a byproduct of a bad last-minute refactoring of ForkJoinPool.awaitWork, which offloaded some of its code into idleAwaitWork.
Along with the changes that fix the issue described above, this CR will be used to pull in two more minor changes to ForkJoinPool:
1) Tolerate timing slop
2) Fix mask growth comparison