JDK-5005837 : rework win32 timebeginperiod usage
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 1.5.1,2.0,1.3.1,5.0
  • Priority: P5
  • Status: Closed
  • Resolution: Won't Fix
  • OS:
    generic,windows_nt,windows_2000,windows_xp
  • CPU: generic,x86
  • Submitted: 2004-03-01
  • Updated: 2019-08-28
  • Resolved: 2015-02-12
Related Reports
Duplicate :  
Duplicate :  
Relates :  
Relates :  
Relates :  
Relates :  
Relates :  
Description
The Win32 platform-specific os::sleep() operation calls timeBeginPeriod() and timeEndPeriod(), which are exported from winmm.dll.  

timeBeginPeriod and timeEndPeriod can cause (a) time-of-day clock skew, and (b) degraded performance because of the increased interrupt rate.  For sleep requests that are "long" (perhaps > 1 sec) or are a multiple of 10 msecs we should consider avoiding the timeBeginPeriod-timeEndPeriod pair.
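
As an illustration, a minimal sketch of the kind of guard described above; the function name and the thresholds are hypothetical, not the actual HotSpot code:

    #include <windows.h>
    #include <mmsystem.h>   // timeBeginPeriod/timeEndPeriod; link with winmm.lib

    // Raise the interrupt rate only for short sleeps that need sub-10ms accuracy.
    void sleep_with_optional_high_res(DWORD ms) {
        const bool wants_high_res = (ms < 1000) && (ms % 10 != 0);
        if (wants_high_res) timeBeginPeriod(1);   // request a 1 ms interrupt period
        Sleep(ms);
        if (wants_high_res) timeEndPeriod(1);     // restore the previous period
    }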

Remarks:

* See the java-3d timer/timertest demo
* See http://www.sysinternals.com/ntw2k/info/timer.shtml for a description of how timeBeginPeriod and timeEndPeriod are implemented. 

Thoughts:
* Defer loading winmm.dll
* We should measure the cost of the calls to timeBeginPeriod and timeEndPeriod.  If either operation is expensive we could (a) use process-specific globals such as "timerUsers" and "currentResolution" to avoid redundant timeBeginPeriod and timeEndPeriod calls.  We would define a timeBeginPeriod call as redundant if the current resolution is already >= the requested resolution.  (b) call timeBeginPeriod only to increase the resolution.  Periodically, perhaps in the watcher thread, if high-res timers haven't been required in the last N secs, restore (decrease) the resolution with timeEndPeriod.  That is, we'd defer calling timeEndPeriod.  The intent is to eliminate chains of calls to timeBeginPeriod and timeEndPeriod that simply increase and then decrease the timer resolution.  (A sketch of this bookkeeping appears after the code fragment below.)

* Before calling timeBeginPeriod() we should verify that the system supports the requested resolution via:

    #include <windows.h>
    #include <mmsystem.h>    // timeGetDevCaps/timeBeginPeriod; link with winmm.lib

    #define TARGET_RESOLUTION 1   // desired timer resolution, in milliseconds

    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR) {
        // Error; application can't continue.
    }
    // Clamp the requested resolution to the range the device supports.
    const UINT wTimerRes =
        min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
    timeBeginPeriod(wTimerRes);
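
The bookkeeping idea sketched in the bullet above could look roughly like this (hypothetical names and structure, tracking only the current resolution for brevity; not the HotSpot implementation):

    // Uses the same <windows.h>/<mmsystem.h> includes as the fragment above.
    static CRITICAL_SECTION timer_lock;   // assumed initialized once at VM startup
    static UINT current_resolution = 0;   // 0 means "no high-res request active"

    // Request at least 'ms' resolution; skip the call if we already run at the
    // same or a finer resolution (the "redundant" case described above).
    static void request_resolution(UINT ms) {
        EnterCriticalSection(&timer_lock);
        if (current_resolution == 0 || ms < current_resolution) {
            if (current_resolution != 0) timeEndPeriod(current_resolution);
            timeBeginPeriod(ms);
            current_resolution = ms;
        }
        LeaveCriticalSection(&timer_lock);
    }

    // Called periodically (e.g. from the watcher thread) when high resolution has
    // not been needed for N seconds; this is the deferred timeEndPeriod.
    static void relax_resolution() {
        EnterCriticalSection(&timer_lock);
        if (current_resolution != 0) {
            timeEndPeriod(current_resolution);
            current_resolution = 0;
        }
        LeaveCriticalSection(&timer_lock);
    }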

###@###.### 2004-03-01
###@###.### 10/8/04 14:08 GMT

Comments
I had hoped that by now (2019) the low frequency interrupt issue might have been resolved. Certainly on my laptop, which is 3 years old, the system already runs at the 1ms tick period. But this March 2019 blog indicates the issue still remains for many systems: https://hazelcast.com/blog/locksupport-parknanos-under-the-hood-and-the-curious-case-of-parking-part-ii-windows/ It further raises the question of whether the use of the HighResolutionInterval timer adjustment should be extended to all the timed-wait mechanisms. I'll note that the proposed changes under JDK-6313903 will partially extend it to Object.wait (by virtue of keeping it active for Thread.sleep).
28-08-2019

It turns out the ForceTimeHighResolution option has been incorrectly implemented. When DllMain is invoked the arguments have not yet been processed, so ForceTimeHighResolution is still false at that point and the timer period is never set to 1ms. However, because ForceTimeHighResolution is subsequently true, we disable the per-sleep changes to the timer, and so we in fact always use low resolution timing.

The fact that some people have reported a performance increase when they used ForceTimeHighResolution is more readily understandable now. If they were doing lots of small sleeps that were causing a 1ms timer to be used, they were no longer doing so, and the reduced rate of interrupts could result in better program throughput. The fact that the sleeps now sleep longer is probably of little consequence.

By moving the call of timeBeginPeriod to the init_2() method I was able to observe the correct operation of the 1ms timer. It is very interesting to note that when operating correctly, use of ForceTimeHighResolution results in much more stable/predictable sleep times: we no longer see a long first sleep followed by a super-short sleep. The measured times are almost exactly what we asked for (at least on the test system tulipwood.east). This would tend to suggest that the per-sleep changing of the timer period is itself a de-stabilising action, perhaps explainable if the effect of calling timeBeginPeriod is actually asynchronous with respect to the call and temporarily confuses the time management system while the test program both sleeps and queries the system time. That is very hard to prove one way or another.

A final note of interest. I discovered that on my laptop, running Windows 2000, the timer period is 1ms all the time. By booting into safe mode I observed the expected 10ms period. This means that some other piece of software I normally run has already requested a 1ms timer period. This also helps explain why some users haven't seen sleep behaviour change depending on whether the sleep time was a multiple of 10ms or not: they were always running at a high interrupt rate anyway.

To see the interrupt rate on Windows run perfmon, add a trace/monitor for interrupts/sec, change the scale to 1 and the Y axis to go from 0 to 1000. During typical operation you'll see around 130+ interrupts/sec. The change to the 1ms timer is very evident when it occurs.
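
A hedged sketch of the ordering problem described above (simplified, with hypothetical structure; not the actual HotSpot source):

    #include <windows.h>
    #include <mmsystem.h>                  // timeBeginPeriod; link with winmm.lib

    static bool ForceTimeHighResolution = false;   // set later, when the -XX arguments are parsed

    BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved) {
        if (reason == DLL_PROCESS_ATTACH && ForceTimeHighResolution) {
            timeBeginPeriod(1);            // never runs: the flag is still false at this point
        }
        return TRUE;
    }

    void os_init_2() {                     // stands in for os::init_2()
        // By now the -XX arguments have been parsed, so the check behaves as intended.
        if (ForceTimeHighResolution) {
            timeBeginPeriod(1);            // hold the 1ms period for the VM's lifetime
        }
    }
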
21-04-2017

The following MS KB article: http://support.microsoft.com/kb/821893/en-us alludes to the problem of the clock running fast on some systems: "The system clock may run fast when you use the ACPI power management timer as a high-resolution counter on Windows 2000-based, Windows XP-based, and Windows Server 2003-based computers", but the details are totally confused as it then goes on to talk about the system clock losing time. Anyway, the issue relates to the use of the power management timer (PMTimer) as a high-res timer on some platforms in ACPI mode. One of the workarounds suggested (though given the lack of detail it is hard to understand what is being worked around) is to: "Modify the program to call the timeBeginPeriod function at startup and to call the timeEndPeriod function on exit. This workaround eliminates repeated time increment changes."
21-04-2017

We don't have a real-world scenario where this is a problem; closing as WNF.
12-02-2015

The KB article only mentions Windows 2000 through 2003, but this might still be an issue on later versions.
24-03-2014

EVALUATION All ongoing issues concerning the use of high-resolution timers under win32 are being tracked through this bug: 5005837.

The default timer interrupt period on Windows (at least NT4 onwards) is usually a nominal 10ms but can be 15ms on some hardware. The multi-media timers provide a nominal minimum interrupt period of 1ms, but use of this is not recommended (2ms should be the minimum) and the lowest period an application should rely on is 3ms - see: http://www.microsoft.com/whdc/system/CEC/mm-timer.mspx

The use of these high-resolution timers in hotspot is currently supposed to be determined by two things:
(a) If the -XX:ForceTimeHighResolution option is used then the timer interrupt period is set to 1ms when the VM loads, and is restored when the VM exits; otherwise
(b) If the sleep time is not a multiple of 10ms then the interrupt period is set to 1ms for the duration of the sleep and then restored.

However the implementation of the ForceTimeHighResolution flag is incorrect and in fact its use will totally disable use of the high resolution timer - see CR 6435126.

Note that simply using a 1ms period all the time is not a viable solution for many customers due to the 10-15x increase in the interrupt processing load. While faster hardware may negate this factor one day, it has not yet been established that this is the case.

The policy in (b) is not satisfactory for two reasons:
1. It is speculated that performing many changes to the interrupt period causes Windows to lose track of the time and the time-of-day clock to drift as a result. This appears to be a problem in Windows, but needs further investigation.
2. The 10ms value is inappropriate when the actual period is 15ms, and it leads to unexpected inconsistencies when a thread sleeps for values around a 10ms boundary. Further, it is really the magnitude of the requested sleep time that is significant, not its divisibility into 10ms pieces: a 10ms sleep requires high resolution, a 1001ms sleep does not.

See also: http://blogs.sun.com/dholmes/entry/inside_the_hotspot_vm_clocks

Last evaluated: July 7, 2006 Last edit: October 5, 2006
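
For reason 2 above, a magnitude-based check might look like the following tiny sketch (the threshold and the name are hypothetical; this is not a proposed fix):

    // Decide on the magnitude of the request, not its divisibility into 10ms pieces:
    // a 10ms sleep benefits from high resolution, a 1001ms sleep does not.
    static bool needs_high_res_timer(unsigned long ms) {
        const unsigned long kLongSleepMs = 1000;   // hypothetical "long sleep" threshold
        return ms < kLongSleepMs;
    }
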
09-05-2006

EVALUATION This needs to be addressed in the post-Tiger time frame. ###@###.### 2004-03-02
02-03-2004