JDK-4994705 : deadlock in LogManager
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.util.logging
  • Affected Version: 5.0
  • Priority: P3
  • Status: Closed
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2004-02-13
  • Updated: 2006-10-13
  • Resolved: 2006-04-29
Fixed in releases: 1.4.2_15 (Other), 6 b83 (JDK 6)
Related Reports
Relates :  
Deadlock happens with nsk/stress/jck12a/jck12a014. The following two threads
deadlock each other (the lock info in the thread dump is not very accurate).
"Thread-591" prio=10 tid=0x0053c898 nid=0x259 in Object.wait() [0xd157f000..0xd1
        at java.util.logging.LogManager$Cleaner.run(LogManager.java:179)
        - waiting to lock <0xf177beb0> (a java.util.logging.LogManager)

"Thread-453" prio=10 tid=0x004c2770 nid=0x1d0 waiting for monitor entry [0xd3f7c
        at java.util.logging.LogManager.addLogger(LogManager.java:292)
        - waiting to lock <0xf177beb0> (a java.util.logging.LogManager)
        at java.util.logging.LogManager$1.run(LogManager.java:160)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.util.logging.LogManager.<clinit>(LogManager.java:141)
        at java.util.logging.Logger.getLogger(Logger.java:228)
        - locked <0xf4862c60> (a java.lang.Class)
        at java.util.logging.Logger.<clinit>(Logger.java:181)
        at java.awt.Component.<clinit>(Component.java:159)
        at javasoft.sqe.tests.api.java.util.EventObject.serial.ConstructorTests.
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
        at java.lang.reflect.Method.invoke(Method.java:494)
        at javasoft.sqe.javatest.lib.MultiTest.run(MultiTest.java:177)
        at javasoft.sqe.stresstest.StressTest$TestThread.run(StressTest.java:851

The full thread dump is attached.

Basically "Thread-591" locks LogManager.manager and waits for the class initializer
of LogManager to finish, while "Thread-453" is inside the class initializer
of LogManager and wants to lock LogManager.manager.

Usually instantiation can only happen after a class is fully initialized. But in this
case, LogManager.manager is instantiated inside the class initializer of LogManager,
so "Thread-591" gets a chance to lock the object before class LogManager
is fully initialized.

Grabbing locks inside a class initializer can be dangerous, since there is a hidden
lock on the class object that other threads may be waiting on. This lock is required
by the spec; see JVMS Section 2.17.5, step 2.
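The hidden initialization lock can be observed directly. A minimal sketch (the class and field names here are hypothetical, not from the JDK sources): while one thread runs a class's static initializer, any other thread touching that class blocks until initialization completes.

```java
// InitLockDemo.java -- sketch of the implicit class-initialization lock
// (JVMS 2.17.5, step 2); class and field names are hypothetical.
public class InitLockDemo {
    static class Slow {
        static final long READY;
        static {
            // Simulate a long-running <clinit>, like LogManager's.
            try { Thread.sleep(1500); } catch (InterruptedException e) {}
            READY = 1;
        }
    }

    public static void main(String[] args) throws Exception {
        // Thread t triggers Slow's class initialization and holds the
        // implicit initialization lock while <clinit> runs.
        Thread t = new Thread(() -> { long r = Slow.READY; });
        t.start();
        Thread.sleep(200);           // let t enter the static initializer
        long before = System.nanoTime();
        long r = Slow.READY;         // blocks until t finishes <clinit>
        long waitedMs = (System.nanoTime() - before) / 1_000_000;
        System.out.println("blocked on init lock: " + (waitedMs > 500));
    }
}
```

The main thread does no explicit synchronization, yet it stalls for over a second on the plain field read, which is exactly the waiting that Section 2.17.5 mandates.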

It is also an open question whether there are places in the JDK other than LogManager
that are subject to this vulnerability.

The following simplified testcase shows the possibility of such a deadlock.
// B.java
// Deadlock happens when Thread 0 is initializing class A and wants C.c and
// Thread 1 holds C.c and waits for Class A to be fully initialized.

class A {
	static {
		try { Thread.sleep(5000); } catch (Exception e) {}
		// By now Thread 1 holds the C.c monitor and is blocked waiting
		// for A's initialization to finish, so this blocks forever.
		synchronized (C.c) {
		}
	}
}

public class B {
	public static void main(String[] args) {
		// Thread 0
		new Thread() {
			public void run() {
				// Cause class A to be initialized.
				new A();
			}
		}.start();
		try { Thread.sleep(1000); } catch (Exception e) {}
		// Thread 1
		new Thread() {
			public void run() {
				synchronized (C.c) {
					// Class A is being initialized so wait
					// for the initialization to finish.
					new A();
				}
			}
		}.start();
	}
}

class C {
	static C c = new C();
}

###@###.### 2004-02-13
I've added a testcase provided by customer SAP which deadlocks if you:

a) run it on a multi-CPU box.
b) run it 1-5 times; often it happens on the 1st attempt.


SUGGESTED FIX I have deleted lmdd.Mar6.tar from the attachments as it is an incorrect fix. Please see the other webrev, lmd.Apr7.tar, in the attachments.

SUGGESTED FIX Please see the webrev lmdd.Mar6.tar in the attachments.

EVALUATION Big thanks to Mattnias who provided the testcase from SAP. Now I'm able to reproduce the problem. This is the deadlock reproducible with the testcase:

Thread #13:
java/util/logging/LogManager.<clinit>() -> java/security/AccessController.doPrivileged() -> ... -> JVM_DoPrivileged -> java/util/logging/LogManager$1.run() -> InterpreterRuntime::_new() -> BLOCKED
Related lines of the source code:
LogManager.java(L169): manager.rootLogger = manager.new RootLogger();
LogManager.java(L961): private class RootLogger extends Logger {

Thread #14:
java/util/logging/Logger.<clinit>() -> java/util/logging/Logger.getLogger() -> java/util/logging/LogManager.getLogManager() -> BLOCKED
Related line of the source code:
Logger.java(L228): LogManager manager = LogManager.getLogManager();

Concurrently creating the rootLogger object during LogManager class initialization in Thread #13 and a Logger object in Thread #14 causes a deadlock. LogManager.<clinit> creates an object of the RootLogger class, which is a subclass of the Logger class. Logger.<clinit> invokes the static LogManager method getLogManager(), and so both class initializations wait for each other to complete. Please see the pstack output in attachments: Logmanager_deadlock.pstk.
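This mutual-initialization cycle can be sketched in isolation. In the hedged, self-contained example below, the class names P, Q, and SubQ are hypothetical stand-ins for LogManager, Logger, and RootLogger: P's static initializer constructs an instance of a Q subclass, while Q's static initializer calls back into a static member of P, and two threads entering from opposite sides deadlock on the two class-initialization locks.

```java
// ClinitCycleDemo.java -- hypothetical stand-ins: P ~ LogManager,
// Q ~ Logger, SubQ ~ RootLogger. Not the actual JDK code.
public class ClinitCycleDemo {
    static class P {
        static Object member = new Object();
        static {
            sleep(500);         // let the other thread enter Q.<clinit>
            new SubQ();         // needs Q initialized -> blocks on Q's init lock
        }
        static Object get() { return member; }
    }
    static class Q {
        static {
            sleep(500);         // let the other thread enter P.<clinit>
            Object o = P.get(); // needs P initialized -> blocks on P's init lock
        }
    }
    static class SubQ extends Q {}

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) {}
    }

    public static void main(String[] args) throws Exception {
        Thread a = new Thread(() -> P.get());  // triggers P.<clinit>
        Thread b = new Thread(() -> new Q());  // triggers Q.<clinit>
        a.setDaemon(true);
        b.setDaemon(true);
        a.start();
        b.start();
        a.join(3000);
        b.join(3000);
        // Both threads are still alive: each holds one initialization
        // lock and waits for the other, exactly as Threads #13 and #14 do.
        System.out.println("deadlocked: " + (a.isAlive() && b.isAlive()));
    }
}
```

The JVM detects initialization circularity only within a single thread; split across two threads as here, the cycle simply hangs.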

EVALUATION The description says: "The following two threads deadlock each other (the lock info in the thread dump is not very accurate). ... Basically "Thread-591" locks LogManager.manager and waits for the class initializer of LogManager to be finished. But "Thread-453" is inside the class initializer of LogManager and it wants to lock LogManager.manager." That is not correct. I think the thread dump is correct (but maybe not complete), and both threads "Thread-591" and "Thread-453" are waiting to lock the same LogManager.manager monitor. But it is completely unclear from the thread dump (in the attachment) which thread holds the LogManager.manager lock. The sample example in the description is very interesting, but it looks unrelated to the real cause of the deadlock. The most important part of the analysis is reproducing this problem with the real test. I was not able to reproduce it with nsk/stress/jck12a/jck12a014. There is no info in the description on how to reproduce the bug or on its reproducibility rate. It is possible that the test was modified to work around this bug, because this kind of stress test may be needed to catch other problems. It would be nice to run the old version of the test from the old testbase. This is preliminary input, because more analysis still needs to be done.

CONVERTED DATA BugTraq+ Release Management Values COMMIT TO FIX: dragon

EVALUATION This is related to the change in 1.4.2 to move logging initialization out of the system class (see 4770189 and others). The difference in 1.5 is that logging is automatically enabled by AWT. It may be possible to reduce the rate of occurrence by a small amount of tweaking. ###@###.### 2004-02-13

The cleaner thread is the cause of this deadlock. I can narrow down the lock scope; however, as this is due to a stress test, it is probably too risky to integrate at this point in the schedule. I'll carry on testing it for the next update. ###@###.### 2004-03-04
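For reference, one standard way to keep heavyweight setup and locking out of a class's own static initializer is the initialization-on-demand holder idiom. The sketch below is not the fix that was integrated; SafeManager and its members are hypothetical names used only to illustrate the pattern.

```java
// SafeManager.java -- hedged sketch of the initialization-on-demand
// holder idiom; SafeManager is a hypothetical name, not the actual fix.
public class SafeManager {
    private SafeManager() {}              // no locking during construction

    private static class Holder {
        // Holder.<clinit> runs on the first getInstance() call, so the
        // JVM's implicit initialization lock covers only this tiny nested
        // class, never the outer class that clients synchronize on.
        static final SafeManager INSTANCE = new SafeManager();
    }

    public static SafeManager getInstance() {
        return Holder.INSTANCE;
    }

    public synchronized void addLogger(String name) {
        // Callers lock the instance only after it is fully constructed.
    }

    public static void main(String[] args) {
        // Both calls return the same fully initialized singleton.
        System.out.println(getInstance() == getInstance());
    }
}
```

Because the singleton lives in a nested holder class, no thread can ever observe (or lock) a SafeManager instance while the class holding it is still mid-initialization, which removes the window that bites LogManager here.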