JDK-5091934 : Thread.sleep() on Windows has inconsistent behavior
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 5.0
  • Priority: P4
  • Status: Closed
  • Resolution: Duplicate
  • OS: windows_nt
  • CPU: x86
  • Submitted: 2004-08-25
  • Updated: 2017-09-14
  • Resolved: 2006-05-08
Related Reports
Duplicate :  
The current implementation of Thread.sleep(ms) on Windows behaves inconsistently, depending on the minimum timer resolution of the particular runtime platform.

I have attached two apps, one Java and one native Win32 (you'll need to link with winmm.lib to compile the native app).  The apps behave the same on my system (XP laptop, P4 3.4GHz) _except_ for sleep times in increments of 10 ms.  For those values, the Java app sleeps for about 16 ms, whereas the native app sleeps for the expected time.

What you end up with is inconsistent behavior for sleep(): an app will sleep for about the right amount of time for any value passed in except values that are multiples of 10 ms.  For example, sleep(9) will sleep for about 9 ms and sleep(11) will sleep for about 11 ms, but sleep(10) will sleep for about 16 ms (on my system).
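A minimal probe along the lines of the attached Java app can make the effect visible. This is a reconstruction for illustration, not the original attachment; the class and method names are made up:

```java
// SleepProbe: measure how long Thread.sleep(ms) actually takes.
// Hypothetical reconstruction of the attached Java test app.
public class SleepProbe {

    // Returns the measured elapsed time of Thread.sleep(ms), in milliseconds.
    static long measure(int ms) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(ms);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // On the reporter's XP machine, 9 and 11 come back close to the
        // requested value, while 10 comes back at roughly 16 ms.
        for (int ms : new int[] {9, 10, 11, 20}) {
            System.out.println("sleep(" + ms + ") took ~" + measure(ms) + " ms");
        }
    }
}
```

On a platform with a coarse default timer interrupt, the multiples of 10 ms stand out; exact numbers vary with hardware and OS version.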

EVALUATION Ongoing issues concerning the use of high-resolution timers under win32 are being tracked through 5005837.

The default timer interrupt period on Windows (at least NT4 onwards) is usually a nominal 10ms, but can be 15ms on some hardware. The multimedia timers provide a nominal minimum interrupt period of 1ms, but use of this is not recommended (2ms should be the minimum) and the lowest period an application should rely on is 3ms - see: http://www.microsoft.com/whdc/system/CEC/mm-timer.mspx

The use of these high-resolution timers in HotSpot is currently determined by two things:
(a) If the -XX:ForceTimeHighResolution option is used, then the timer interrupt period is set to 1ms when the VM loads, and is restored when the VM exits; otherwise
(b) If the sleep time is not a multiple of 10ms, then the interrupt period is set to 1ms for the duration of the sleep and then restored.

Note that simply using a 1ms period all the time is not a viable solution for many customers, due to the 10-15x increase in the interrupt processing load. While faster hardware may negate this factor one day, it has not yet been established that this is the case.

The policy in (b) is not satisfactory for two reasons:
1. It is speculated that performing many changes to the interrupt period causes Windows to lose track of the time, and for the time-of-day clock to drift as a result. This appears to be a problem in Windows, but needs further investigation.
2. The 10ms value is inappropriate when the actual period is 15ms, and it leads to unexpected inconsistencies when a thread sleeps for values around a 10ms boundary. Further, it is really the magnitude of the requested sleep time that is significant, not its divisibility into 10ms pieces: a 10ms sleep requires high resolution; a 1001ms sleep does not.
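The heuristic in (b) can be sketched as a one-line predicate. This is an illustrative Java translation of the policy described above, not the actual native code in os_win32.cpp:

```java
// SleepPolicy: illustrative sketch of heuristic (b) from the evaluation.
// HotSpot raises the Windows timer resolution to 1ms only when the
// requested sleep is NOT a multiple of 10 ms. (Hypothetical class name.)
public class SleepPolicy {

    static boolean needsHighResTimer(long ms) {
        // A multiple of 10 ms is assumed to be served well enough by the
        // default 10 ms interrupt period; anything else gets a 1 ms period.
        return ms % 10 != 0;
    }
}
```

This makes the inconsistency concrete: sleep(10) is judged not to need high resolution (and so overshoots to ~16 ms on a 15/16 ms-period machine), while sleep(1001) would needlessly raise the resolution, even though the magnitude of the request makes precision irrelevant there.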

SUGGESTED FIX Remove the code in os_win32.cpp's HighResolutionInterval that puts a condition check around the call to timeBeginPeriod(1) (and timeEndPeriod(1)).

EVALUATION Will look into post-Tiger... ###@###.### 2004-08-25