JDK-8024773 : ScheduledThreadPoolExecutor retains reference to a cancelled Runnable
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.util.concurrent
  • Affected Version: 7
  • Priority: P3
  • Status: Resolved
  • Resolution: Duplicate
  • Submitted: 2013-04-04
  • Updated: 2014-05-02
  • Resolved: 2014-05-02
Fix Version: Other / tbd_minor (Resolved)
Description
FULL PRODUCT VERSION :
java version "1.7.0_09"
Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)

A DESCRIPTION OF THE PROBLEM :
If a Runnable is scheduled for repeated execution via ScheduledThreadPoolExecutor.scheduleWithFixedDelay() and then cancelled through the returned Future, the executor may still retain a strong reference to that Runnable. The cancelled task is not removed from the executor's internal delay queue until its next scheduled execution time, so the Runnable, and everything it references, remains reachable and cannot be garbage collected.
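A self-contained way to observe the retention without a heap-dump tool (this check is illustrative and not part of the original report) is to hold the task only through a WeakReference and see whether it is collected after cancellation:

import java.lang.ref.WeakReference;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleLeakCheck {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        Runnable task = new Runnable() {
            public void run() { /* do nothing */ }
        };
        WeakReference<Runnable> ref = new WeakReference<Runnable>(task);
        ScheduledFuture<?> future = pool.scheduleWithFixedDelay(task, 1L, 1000000000L, TimeUnit.MILLISECONDS);
        Thread.sleep(100L);
        future.cancel(true);
        // Drop our own strong references so the executor is the only
        // possible path to the task.
        task = null;
        future = null;
        System.gc(); // only a hint, but sufficient for this demonstration
        Thread.sleep(1000L);
        // On affected JDKs this prints "retained": the executor's delay
        // queue still holds the cancelled task.
        System.out.println(ref.get() == null ? "collected" : "retained");
        pool.shutdown();
    }
}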


STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Run the attached source code.
Take a heap dump using JVisualVM (a command-line alternative is shown after these steps).
Check the heap dump for instances of the BigAndUseless class and run "Show Nearest GC Root" on one of them, again using JVisualVM.
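For reference, a heap dump can also be captured with the JDK's jmap tool (the process id and file name below are placeholders):

jmap -dump:live,format=b,file=scheduleleak.hprof <pid>

The live option forces a full GC before the dump, so any instance that appears is strongly reachable, which is exactly the condition this report describes.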

EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
No instance of the BigAndUseless class should be reachable in the heap dump.
ACTUAL -
An instance of the BigAndUseless class is traced back to GC roots on the call stack.


ERROR MESSAGES/STACK TRACES THAT OCCUR :
Thread dump of the thread retaining the reference:

 " pool-1-thread-1 "  prio=5 tid=0x00007fa23505c000 nid=0x4f03 waiting on condition [0x00000001164f0000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for <0x000000079dda17b0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)


REPRODUCIBILITY :
This bug can always be reproduced.

---------- BEGIN SOURCE ----------
package scheduleleak;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleLeak {
   
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        // Huge delay between runs: after the first execution the task
        // effectively never runs again.
        ScheduledFuture<?> scheduleFuture = pool.scheduleWithFixedDelay(new BigAndUseless(), 1L, 1000000000L, TimeUnit.MILLISECONDS);
        Thread.sleep(100L);
        // Cancelling should make the task (and its large array) collectable,
        // but the executor's queue keeps a reference to it.
        scheduleFuture.cancel(true);
        System.out.println("Time for a heapdump!");
        Thread.sleep(60000L);
        System.out.println("If you haven't done the heapdump by now you're too slow!");
    }
    
    private static class BigAndUseless implements Runnable {
        // About 8 MB of data whose only purpose is to be easy to spot in a heap dump.
        private final long[] wasteOfSpace = new long[1000*1000];

        @Override
        public void run() {
            // Do nothing!
        }
    }
    
}
---------- END SOURCE ----------
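Assuming the source is saved as scheduleleak/ScheduleLeak.java, the reproducer can be compiled and run with the standard JDK tools:

javac scheduleleak/ScheduleLeak.java
java scheduleleak.ScheduleLeak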

CUSTOMER SUBMITTED WORKAROUND :
Sever (null out) the references held by the Runnable at the same time the future is cancelled; a sketch follows.
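A minimal sketch of this workaround, using a hypothetical wrapper class (and the pool and task from the reproducer above); it is not part of the original report:

// Wrapper that can drop its reference to the real task. After sever() the
// executor's queue retains only this small object, not the large delegate.
class SeverableTask implements Runnable {
    private volatile Runnable delegate;

    SeverableTask(Runnable delegate) {
        this.delegate = delegate;
    }

    @Override
    public void run() {
        Runnable r = delegate;
        if (r != null) {
            r.run();
        }
    }

    void sever() {
        delegate = null;
    }
}

// Usage: schedule the wrapper, then sever right after cancelling.
SeverableTask task = new SeverableTask(new BigAndUseless());
ScheduledFuture<?> future = pool.scheduleWithFixedDelay(task, 1L, 1000000000L, TimeUnit.MILLISECONDS);
// ...
future.cancel(true);
task.sever();

Alternatively, on JDK 7 and later, ScheduledThreadPoolExecutor.setRemoveOnCancelPolicy(true) makes the executor remove cancelled tasks from its queue immediately (this needs a concrete ScheduledThreadPoolExecutor, e.g. new ScheduledThreadPoolExecutor(1), rather than the ScheduledExecutorService interface), and ThreadPoolExecutor.purge() removes already-cancelled tasks on demand.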
Comments
Not an exact duplicate of JDK-7132378, but the issue was fixed as part of that change.
02-05-2014