JDK-5005837 : rework win32 timebeginperiod usage

Submit Date:
Updated Date:
Project Name:
Resolved Date:
Won't Fix
Affected Versions:
Fixed Versions:


The Win32 platform-specific os::sleep() operation calls timeBeginPeriod() and timeEndPeriod(), which are exported from winmm.dll.  

timeBeginPeriod and timeEndPeriod can cause (a) time-of-day clock skew, and (b) degraded performance because of the increased interrupt rate.  For sleep requests that are "long" (perhaps > 1 sec) or that are a multiple of 10 msecs we should consider avoiding the timeBeginPeriod-timeEndPeriod pair.


* See the java-3d timer/timertest demo
* See http://www.sysinternals.com/ntw2k/info/timer.shtml for a description of how timeBeginPeriod and timeEndPeriod are implemented. 

* Defer loading winmm.dll
* We should measure the cost of the calls to timeBeginPeriod and timeEndPeriod.  If either operation is expensive we could (a) use process-specific globals "timerUsers" and "currentResolution" to avoid redundant timeBeginPeriod and timeEndPeriod calls (a timeBeginPeriod call is redundant if the current resolution is already >= the requested resolution), and (b) call timeBeginPeriod only to increase the resolution.  Periodically, perhaps in the watcher thread, if high-res timers haven't been required in the last N secs, restore (decrease) the resolution with timeEndPeriod.  That is, we'd defer calling timeEndPeriod.  The intent is to eliminate chains of calls to timeBeginPeriod and timeEndPeriod that simply increase and then decrease the timer resolution.

* Before calling timeBeginPeriod() we should verify that the system supports the requested resolution via:

    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR) {
        // Error; application can't continue.
        return;
    }
    const UINT wTimerRes =
        min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
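The reference-counted, deferred-restore scheme suggested above could be sketched as follows. This is a hypothetical illustration, not the VM's implementation: the winmm calls are replaced by stubs so the bookkeeping can be shown in isolation, and the names request_resolution, release_resolution and maybe_restore_resolution are invented.

```c
#include <assert.h>

typedef unsigned int UINT;

/* Stubs standing in for winmm.dll's timeBeginPeriod/timeEndPeriod. */
static UINT stub_period = 15;  /* pretend default interrupt period (ms) */
static void timeBeginPeriod_stub(UINT p) { if (p < stub_period) stub_period = p; }
static void timeEndPeriod_stub(UINT p)   { (void)p; stub_period = 15; }

static int  timerUsers = 0;          /* process-wide count of high-res users */
static UINT currentResolution = 15;  /* resolution currently in force (ms)  */

/* Raise the resolution only if the request is finer (a smaller period)
 * than what is already in force; otherwise the call would be redundant. */
static void request_resolution(UINT ms) {
    timerUsers++;
    if (ms < currentResolution) {
        timeBeginPeriod_stub(ms);
        currentResolution = ms;
    }
}

/* Drop a user, but defer the actual timeEndPeriod call. */
static void release_resolution(void) {
    assert(timerUsers > 0);
    timerUsers--;
}

/* Called periodically (e.g. from the watcher thread): restore the default
 * period only if no one has needed high resolution in the meantime. */
static void maybe_restore_resolution(void) {
    if (timerUsers == 0 && currentResolution < 15) {
        timeEndPeriod_stub(currentResolution);
        currentResolution = 15;
    }
}
```

A back-to-back sleep sequence then changes the period once on the way up and once, later, on the way down, rather than toggling it around every sleep.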

###@###.### 2004-03-01
###@###.### 10/8/04 14:08 GMT


We don't have a real-world scenario where this is a problem, closing as WNF.
The KB article only mentions Windows 2000 -> 2003; this might still be an issue on later versions though.
It turns out the ForceTimeHighResolution option has been incorrectly implemented. When DllMain is invoked the arguments have not yet been processed, so ForceTimeHighResolution is still false at that point and the timer period is never set to 1ms. However, because ForceTimeHighResolution is true by the time sleeps occur, we disable the per-sleep changes to the timer, and so we in fact always use low-resolution timing.

The fact that some people have reported a performance increase when they used ForceTimeHighResolution is more readily understandable now. If they were doing lots of small sleeps that previously caused the 1ms timer to be used, they were no longer doing so, and the reduced rate of interrupts could result in better program throughput. The fact that the sleeps now sleep longer is probably of little consequence.

By moving the call of timeBeginPeriod to the init_2() method I was able to observe the correct operation of the 1ms timer. It is very interesting to note that when operating correctly, use of ForceTimeHighResolution results in much more stable/predictable sleep times - we no longer see a long first sleep followed by a super-short sleep. The measured times are almost exactly what we asked for (at least on the test system tulipwood.east.) This would tend to suggest that the per-sleep changing of the timer period is itself a de-stabilising action - perhaps explainable if the effect of calling timeBeginPeriod is actually asynchronous with respect to the call and temporarily confuses the time management system while the test program both sleeps and queries the system time. That is very hard to prove one way or another.

A final note of interest. I discovered that on my laptop, running windows 2000, the timer period is 1ms all the time. By booting into safe mode I observed the expected 10ms period. This means that some other piece of software I normally run has already requested a 1ms timer period. This also helps explain why some users haven't seen sleep changing behaviour depending on whether the sleep time was a multiple of 10ms or not, because they were always running at a high interrupt rate anyway.

To see the interrupt rate on windows run perfmon, add a trace/monitor for interrupts/sec, change the scale to 1 and the Y axis to go from 0 to 1000. During typical operation you'll see around 130+ interrupts/sec. The change to the 1ms timer is very evident when it occurs.
The following MS KB article alludes to the problem of the clock running fast on some systems:

"The system clock may run fast when you use the ACPI power management timer as a high-resolution counter on Windows 2000-based, Windows XP-based, and Windows Server 2003-based computers"

but the details are totally confused as it then goes on to talk about the system clock losing time. Anyway the issue relates to the use of the power management timer (PMTimer) as a high-res timer on some platforms in ACPI mode. One of the workarounds suggested (though given the lack of detail it is hard to understand what is being worked-around) is to:

"Modify the program to call the timeBeginPeriod function at startup and to call the timeEndPeriod function on exit. This workaround eliminates repeated time increment changes."

All ongoing issues concerning the use of high-resolution timers under win32 are being tracked through this bug: 5005837.

The default timer interrupt period on Windows (at least NT4 onwards) is usually a nominal 10ms but can be 15ms on some hardware. The multimedia timers provide a nominal minimum interrupt period of 1ms, but use of this is not recommended (2ms should be the minimum) and the lowest period an application should rely on is 3ms - see:


The use of these high-resolution timers in hotspot is currently supposed to be determined by two things:
(a) If the -XX:ForceTimeHighResolution option is used then the timer interrupt period is set to 1ms when the VM loads, and is restored when the VM exits; otherwise
(b) If the sleep time is not a multiple of 10ms then the interrupt period is set to 1ms for the duration of the sleep and then restored
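The divisibility test in policy (b) amounts to the following check (a minimal sketch; the helper name is invented, and in the VM this decision sits inside the Win32 os::sleep rather than a standalone function):

```c
typedef unsigned long DWORD;  /* stand-in for the Windows typedef */

/* Policy (b): bump the interrupt period to 1ms for the duration of a
 * sleep only when the requested time is not a multiple of the nominal
 * 10ms default period, restoring it when the sleep completes. */
static int sleep_uses_high_res(DWORD ms) {
    return ms % 10 != 0;
}
```

Note that under this rule a 1001ms sleep would still engage the 1ms timer, which is exactly the inconsistency criticized in point 2 below.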

However the implementation of the ForceTimeHighResolution flag is incorrect and in fact its use will totally disable use of the high-resolution timer - see CR 6435126.

Note that simply using a 1ms period all the time is not a viable solution for many customers due to the 10-15x increase in the interrupt processing load. While faster hardware may negate this factor one day, it has not yet been established that this is the case.

The policy in (b) is not satisfactory for two reasons:
1. It is speculated that performing many changes to the interrupt period causes Windows to lose track of the time and the time-of-day clock to drift as a result. This appears to be a problem in Windows, but needs further investigation.
2. The 10ms value is inappropriate when the actual period is 15ms, and it leads to unexpected inconsistencies when a thread sleeps for values around a 10ms boundary. Further, it is really the magnitude of the requested sleep time that is significant, not its divisibility into 10ms pieces: a 10ms sleep requires high resolution, a 1001ms sleep does not.
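A magnitude-based test along the lines of point 2 could look like this. The helper name and the 1-second cutoff are assumptions for illustration (the earlier notes suggest "perhaps > 1 sec" counts as a long sleep); the right threshold would need measurement.

```c
typedef unsigned long DWORD;  /* stand-in for the Windows typedef */

/* Hypothetical replacement for the divisibility test: request the 1ms
 * timer only for short sleeps, where the default 10-15ms period would
 * dominate the requested time.  Cutoff of 1000ms is illustrative. */
static int sleep_needs_high_res(DWORD ms) {
    return ms < 1000;
}
```

Under this rule a 10ms sleep gets high resolution and a 1001ms sleep does not, matching the examples in point 2.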

See also: http://blogs.sun.com/dholmes/entry/inside_the_hotspot_vm_clocks

Last evaluated: July 7, 2006
Last edit: October 5, 2006

These need to be addressed in the post-Tiger time frame.
###@###.### 2004-03-02
