JDK-6911759 : java.util.concurrent.locks.ReentrantLock generates a lot of garbage
  • Type: Enhancement
  • Component: core-libs
  • Sub-Component: java.util.concurrent
  • Affected Version: 7
  • Priority: P5
  • Status: Closed
  • Resolution: Not an Issue
  • OS: solaris_8
  • CPU: x86
  • Submitted: 2009-12-18
  • Updated: 2011-05-10
  • Resolved: 2011-05-10
Description
A DESCRIPTION OF THE REQUEST :
java.util.concurrent.locks.ReentrantLock.lock() and java.util.concurrent.locks.ReentrantLock.unlock() generate a lot of garbage in the form of java.util.concurrent.locks.AbstractQueuedSynchronizer$Node objects under heavy lock contention.
Minimizing GC invocations is critical in our financial application. We use both ReentrantLock and ArrayBlockingQueue, and this is the major source of garbage generation.

ArrayBlockingQueue suffers from the same problem because it uses ReentrantLock for its internal synchronization.
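The following sketch (not part of the original report; the queue capacity and values are illustrative) shows where a bounded queue blocks: put() on a full queue and take() on an empty one both wait on a Condition belonging to the queue's internal ReentrantLock, and each waiting thread is parked on an AQS node.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueBlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        // Capacity of 2 so the queue fills up immediately.
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        queue.put(1);
        queue.put(2);

        // The queue is now full: a plain put() would block on the internal
        // "notFull" Condition, enqueuing an AQS node for the waiting thread.
        // offer() with a timeout gives up instead of waiting indefinitely.
        boolean accepted = queue.offer(3, 10, TimeUnit.MILLISECONDS);
        System.out.println("offer on full queue accepted: " + accepted); // false

        System.out.println(queue.take()); // 1
        System.out.println(queue.take()); // 2
    }
}
```

This is why a producer/consumer pair that repeatedly drives the queue to its full or empty boundary can generate nodes even without direct lock()/unlock() contention.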


JUSTIFICATION :
ReentrantLock should be optimized to avoid creating new java.util.concurrent.locks.AbstractQueuedSynchronizer$Node objects by caching and reusing them.

EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
Did not expect that much garbage to be generated by java.util.concurrent.locks.ReentrantLock.lock() and java.util.concurrent.locks.ReentrantLock.unlock().
ACTUAL -
A huge number of java.util.concurrent.locks.AbstractQueuedSynchronizer$Node objects is generated when calling java.util.concurrent.locks.ReentrantLock.lock() and java.util.concurrent.locks.ReentrantLock.unlock().

---------- BEGIN SOURCE ----------
import java.util.concurrent.locks.ReentrantLock;

/**
 * @author Ilya Goberman
 */
public class Test {
    public static void main(String[] args) {
        final ReentrantLock lock = new ReentrantLock();
        
        startThread(lock);
        startThread(lock);
    }

    private static void startThread(final ReentrantLock lock) {
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    lock.lock();
                    try {
                        // Critical section intentionally empty: the test only
                        // measures garbage produced by lock()/unlock() under
                        // contention between the two threads.
                    } finally {
                        lock.unlock();
                    }
                }
            }
        }).start();
    }
}

---------- END SOURCE ----------

Comments
PUBLIC COMMENTS Doug Lea responds: AQS nodes are highly GC-friendly as it is. Caching is pretty much out because you then hit ABA problems and we cannot use pointer-marking to disambiguate.

---- To further expand on this ...

The node cache would need synchronization to maintain it correctly. Using heavy-weight synchronization inside a synchronization library would defeat the performance goals of the library. Using lock-free synchronization to manage the cache is non-trivial, as simple lock-free lists encounter the ABA problem unless some form of generational scheme is used. More complex schemes incur more overhead, and again the performance goals of the library are no longer achieved.

If ReentrantLock (really AQS) nodes are causing excessive garbage, it may mean that there is excessive contention within the application. Reducing contention in the application will have performance benefits beyond those related to GC of Nodes. The Nodes, if short-lived, should not be an issue for a generational collector, as such collectors do not visit dead objects. So if these are causing GC issues then either: a) you're not using a generational collector; or b) the Nodes are remaining live long enough to reach the old generation before dying, and so require full GCs to reclaim them. Some additional performance/memory analysis of your application, to ascertain why these Nodes are a problem, would be beneficial.

Note also that ArrayBlockingQueue is a simple synchronized queue and will not scale well under contention (a single lock protects all accesses). It may also be that the Nodes are being created not because of direct contention on the Lock, but because of blocking on the queue under full/empty conditions (i.e. using Condition.await()).

People often expect that a queue with apparently balanced producers and consumers will on average be half full; in practice they often find (depending on the nature of the producers/consumers) that the queue is constantly ping-ponging between being empty and full. So again, some analysis of the application to determine the exact cause of the excessive Node generation would be useful. David Holmes
27-01-2010
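
One application-side mitigation consistent with the advice above to reduce contention is to spin briefly on tryLock(), which acquires via a CAS and never enqueues a waiter node, before falling back to the blocking lock(). This is a hedged sketch, not part of the report: the SpinThenLock class, the spin count of 64, and the counter workload are all illustrative.

```java
import java.util.concurrent.locks.ReentrantLock;

public class SpinThenLock {
    private final ReentrantLock lock = new ReentrantLock();
    private long counter;

    // Spin briefly with tryLock() before falling back to lock().
    // tryLock() attempts a CAS on the lock state and does not enqueue an
    // AQS node, so a successful spin avoids node allocation entirely.
    void increment() {
        boolean acquired = false;
        for (int i = 0; i < 64 && !acquired; i++) {
            acquired = lock.tryLock();
        }
        if (!acquired) {
            lock.lock(); // contended slow path: may enqueue a waiter node
        }
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    long get() {
        lock.lock();
        try {
            return counter;
        } finally {
            lock.unlock();
        }
    }
}
```

Note the trade-off: tryLock() barges past queued waiters, so this sketch trades the fairness of the queue for fewer allocations, and it only helps when critical sections are short enough for spinning to succeed.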

PUBLIC COMMENTS This issue has been raised on Doug Lea's concurrency-interest mailing list and I think Doug has done some additional work in this area for Java 7. In general it is tricky to provide a single policy that suits all situations - and providing multiple policies must not come at the performance expense of applications that are quite happy with the current policy.
25-01-2010