JDK-5005837 : rework win32 timebeginperiod usage

Details
Type: Enhancement
Submit Date: 2004-03-01
Status: Open
Updated Date: 2014-03-24
Project Name: JDK
Resolved Date:
Component: hotspot
OS: windows_nt,generic,windows_xp,windows_2000
Sub-Component: runtime
CPU: x86,generic
Priority: P5
Resolution: Unresolved
Affected Versions: 1.5.1,2.0,1.3.1,5.0
Targeted Versions: 9


Description
The Win32 platform-specific os::sleep() operation calls timeBeginPeriod() and timeEndPeriod(), which are exported from winmm.dll.  

timeBeginPeriod and timeEndPeriod can cause (a) time-of-day clock skew, and (b) degraded performance because of the increased interrupt rate.  For sleep requests that are "long" (perhaps > 1 sec) or are a multiple of 10 ms we should consider avoiding the timeBeginPeriod-timeEndPeriod pair.

Remarks:

* See the java-3d timer/timertest demo
* See http://www.sysinternals.com/ntw2k/info/timer.shtml for a description of how timeBeginPeriod and timeEndPeriod are implemented. 

Thoughts:
* Defer loading winmm.dll
* We should measure the cost of the calls to timeBeginPeriod and timeEndPeriod.  If either operation is expensive we could (a) use process-wide globals "timerUsers" and "currentResolution" to avoid redundant timeBeginPeriod and timeEndPeriod calls - we would define a timeBeginPeriod call as redundant if the current resolution is already as fine as, or finer than, the requested one; or (b) call timeBeginPeriod only to increase the resolution.  Periodically, perhaps in the watcher thread, if high-res timers haven't been required in the last N secs, restore (decrease) the resolution with timeEndPeriod.  That is, we'd defer calling timeEndPeriod.  The intent is to eliminate chains of calls to timeBeginPeriod and timeEndPeriod that simply increase and then decrease the timer resolution.  (Both this idea and the deferred loading of winmm.dll are sketched after the code example below.)

* Before calling timeBeginPeriod() we should verify that the system supports the requested resolution via:

    #include <windows.h>
    #include <mmsystem.h>         // timeGetDevCaps etc.; link with winmm.lib

    #define TARGET_RESOLUTION 1   // desired timer resolution, in ms

    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR) {
        // Error; application can't continue.
    }
    // Clamp the request to the range the device actually supports.
    const UINT wTimerRes =
        min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
    timeBeginPeriod(wTimerRes);   // pair with timeEndPeriod(wTimerRes)
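
A sketch of the first two bullets is below.  This is illustrative only, not proposed HotSpot source: the names load_winmm, begin_high_res, end_high_res, maybe_restore_resolution, timer_users_ and current_period_ are hypothetical, the "idle for N secs" check is elided, and a real implementation would need the VM's usual initialization and locking discipline.

    #include <windows.h>
    #include <mmsystem.h>

    // Deferred loading of winmm.dll: resolve the entry points lazily
    // instead of linking against winmm.lib at VM load time.
    typedef MMRESULT (WINAPI *TimePeriodFn)(UINT);
    static TimePeriodFn _timeBeginPeriod = NULL;
    static TimePeriodFn _timeEndPeriod   = NULL;

    static bool load_winmm() {
        HMODULE h = LoadLibraryA("winmm.dll");
        if (h == NULL) return false;
        _timeBeginPeriod = (TimePeriodFn)GetProcAddress(h, "timeBeginPeriod");
        _timeEndPeriod   = (TimePeriodFn)GetProcAddress(h, "timeEndPeriod");
        return _timeBeginPeriod != NULL && _timeEndPeriod != NULL;
    }

    // Process-wide timer state; timer_lock_ must be initialized once
    // at startup with InitializeCriticalSection().
    static CRITICAL_SECTION timer_lock_;
    static int  timer_users_    = 0;  // threads currently needing high res
    static UINT current_period_ = 0;  // 0 == system default period

    // Raise the resolution only if the request is finer than what is
    // already in effect; otherwise the call would be redundant.
    void begin_high_res(UINT period) {
        EnterCriticalSection(&timer_lock_);
        timer_users_++;
        if (current_period_ == 0 || period < current_period_) {
            if (current_period_ != 0) _timeEndPeriod(current_period_);
            _timeBeginPeriod(period);
            current_period_ = period;
        }
        LeaveCriticalSection(&timer_lock_);
    }

    // Note: does not call timeEndPeriod; restoration is deferred.
    void end_high_res() {
        EnterCriticalSection(&timer_lock_);
        timer_users_--;
        LeaveCriticalSection(&timer_lock_);
    }

    // Run periodically (e.g. from the watcher thread): if no thread
    // still needs high resolution, restore the default period.
    void maybe_restore_resolution() {
        EnterCriticalSection(&timer_lock_);
        if (timer_users_ == 0 && current_period_ != 0) {
            _timeEndPeriod(current_period_);
            current_period_ = 0;
        }
        LeaveCriticalSection(&timer_lock_);
    }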

###@###.### 2004-03-01
###@###.### 10/8/04 14:08 GMT


Comments
EVALUATION

This needs to be addressed in the post-Tiger (post-J2SE 5.0) time frame.
###@###.### 2004-03-02
2004-03-02
EVALUATION

All ongoing issues concerning the use of high-resolution timers under win32 are being tracked through this bug: 5005837.

The default timer interrupt period on Windows (at least NT4 onwards) is usually a nominal 10ms but can be 15ms on some hardware. The multimedia timers provide a nominal minimum interrupt period of 1ms, but use of this is not recommended (2ms should be the minimum) and the lowest period an application should rely on is 3ms - see:

http://www.microsoft.com/whdc/system/CEC/mm-timer.mspx 

The use of these high-resolution timers in hotspot is currently supposed to be determined by two things:
(a) If the -XX:ForceTimeHighResolution option is used then the timer interrupt period is set to 1ms when the VM loads, and is restored when the VM exits; otherwise
(b) If the sleep time is not a multiple of 10ms then the interrupt period is set to 1ms for the duration of the sleep and then restored (see the sketch below)
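
Policy (b) amounts to roughly the following (a minimal sketch, not the actual HotSpot code; os_sleep_win32 is a hypothetical name and error handling is omitted):

    #include <windows.h>
    #include <mmsystem.h>   // link with winmm.lib

    // Policy (b): raise the interrupt period to 1ms only for sleep
    // times that are not a multiple of 10ms, then restore it.
    void os_sleep_win32(DWORD ms) {
        const bool high_res = (ms % 10) != 0;
        if (high_res) timeBeginPeriod(1);
        Sleep(ms);
        if (high_res) timeEndPeriod(1);
    }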

However the implementation of the ForceTimeHighResolution flag is incorrect and in fact its use totally disables use of the high-resolution timer - see CR 6435126.

Note that simply using a 1ms period all the time is not a viable solution for many customers due to the 10-15x increase in the interrupt processing load. While faster hardware may negate this factor one day, it has not yet been established that this is the case.

The policy in (b) is not satisfactory for two reasons:
1. It is speculated that performing many changes to the interrupt period causes Windows to lose track of the time, resulting in time-of-day clock drift. This appears to be a problem in Windows itself, but needs further investigation.
2. The 10ms value is inappropriate when the actual period is 15ms, and it leads to unexpected inconsistencies when a thread sleeps for values around a 10ms boundary. Further, it is really the magnitude of the requested sleep time that is significant, not its divisibility into 10ms pieces: a 10ms sleep requires high resolution, a 1001ms sleep does not (see the sketch below).
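
A magnitude-based test, as point 2 suggests, would replace the divisibility check in the sketch above. The cutoff here is purely illustrative; choosing it (and relating it to the actual 10-15ms default period) is exactly what this CR leaves open:

    // Decide by the size of the request rather than its divisibility:
    // short sleeps need a finer interrupt period to be serviced
    // accurately, long ones do not.
    static bool needs_high_res(DWORD ms) {
        return ms <= 10;   // illustrative cutoff only
    }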

See also: http://blogs.sun.com/dholmes/entry/inside_the_hotspot_vm_clocks

Last evaluated: July 7, 2006
Last edit: October 5, 2006
2006-05-09
The KB article only mentions Windows 2000 through 2003, but this might still be an issue on later versions.
                                     
2014-03-24