JDK-4717583 : High resolution Thread.sleep() needs lower overhead on Windows
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 2.0,1.3.1
  • Priority: P4
  • Status: Closed
  • Resolution: Duplicate
  • OS: generic,windows_2000
  • CPU: generic,x86
  • Submitted: 2002-07-19
  • Updated: 2017-09-14
  • Resolved: 2006-05-09
Description
On the win32 platform, we try to implement high-resolution Thread.sleep() 
calls by using timeBeginPeriod(1)/timeEndPeriod(1) from the Windows 
multimedia library, winmm.dll.  On some machines using that dll causes 
a 4-second delay in VM startup.  See 4653558 for details.  The solution 
that was tried there was backed out with 4712392 because many important 
customers have come to depend on high-resolution sleeps.

On some Windows platforms, repeatedly changing the time period causes the 
Windows time-of-day clock to run fast.  See 4500388 for details.  The 
-XX:+ForceTimeHighResolution flag that was added for that seems instead 
to force low-resolution time.

Further, because we use the default time period for sleeps that are 
multiples of 10ms, even when the default time period is not 10ms, the 
results of Thread.sleep() are not monotonic in the requested duration.

As an example of the last two problems, consider the output of the 
attached ShortSleep100 test program (each line shows a requested sleep 
time in milliseconds and the measured total, in milliseconds, for 100 
such sleeps; a hypothetical reconstruction of the program follows the 
transcripts):

    $ $deployed/4712392/bin/java \
    >     -showversion -XX:+ForceTimeHighResolution ShortSleep100
    java version "1.4.1-rc"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1-rc-b16)
    Java HotSpot(TM) Client VM (build 20020715.2049-4712392-compiler1-product, mixed mode)
     
    0       0.0   
    10      1563.0
    9       1562.0
    1       1563.0
    2       1562.0
    4       1563.0
    6       1562.0
    8       1563.0
    9       1562.0
    20      3125.0
    $ $deployed/4712392/bin/java \
    >     -showversion -XX:-ForceTimeHighResolution ShortSleep100
    java version "1.4.1-rc"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1-rc-b16)
    Java HotSpot(TM) Client VM (build 20020715.2049-4712392-compiler1-product, mixed mode)
     
    0       0.0
    10      1562.0
    9       984.0
    1       204.0
    2       296.0
    4       485.0
    6       687.0
    8       875.0
    9       985.0
    20      3125.0    
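
The attached ShortSleep100 source is not reproduced here; a hypothetical 
reconstruction consistent with the transcripts (100 Thread.sleep() calls 
per requested duration, printing the request and the total elapsed 
milliseconds) might look like:

    public class ShortSleep100 {
        // Requested sleep durations (ms), matching the first column above.
        static final int[] REQUESTS = { 0, 10, 9, 1, 2, 4, 6, 8, 9, 20 };

        public static void main(String[] args) throws InterruptedException {
            for (int r = 0; r < REQUESTS.length; r++) {
                int ms = REQUESTS[r];
                long start = System.currentTimeMillis();
                for (int i = 0; i < 100; i++) {
                    Thread.sleep(ms);
                }
                // Total elapsed time (ms) for 100 sleeps of this length.
                double elapsed = System.currentTimeMillis() - start;
                System.out.println(ms + "\t" + elapsed);
            }
        }
    }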

Shouldn't the flag be named -XX:+ForceHighResolutionTime, to be more 
grammatically correct?

This whole section of code needs to be thought out and cleaned up.

Comments
EVALUATION Ongoing issues concerning the use of high-resolution timers under win32 are being tracked through 5005837.

The default timer interrupt period on Windows (at least NT4 onwards) is usually a nominal 10ms but can be 15ms on some hardware. The multimedia timers provide a nominal minimum interrupt period of 1ms, but use of this is not recommended (2ms should be the minimum) and the lowest period an application should rely on is 3ms - see: http://www.microsoft.com/whdc/system/CEC/mm-timer.mspx

The use of these high-resolution timers in HotSpot is currently determined by two things:

(a) If the -XX:+ForceTimeHighResolution option is used, the timer interrupt period is set to 1ms when the VM loads and is restored when the VM exits; otherwise

(b) If the sleep time is not a multiple of 10ms, the interrupt period is set to 1ms for the duration of the sleep and then restored.

Note that simply using a 1ms period all the time is not a viable solution for many customers, due to the 10-15x increase in the interrupt-processing load. While faster hardware may negate this factor one day, it has not yet been established that this is the case.

The policy in (b) is not satisfactory for two reasons:

1. It is speculated that performing many changes to the interrupt period causes Windows to lose track of the time, so that the time-of-day clock drifts as a result. This appears to be a problem in Windows itself, but needs further investigation.

2. The 10ms value is inappropriate when the actual period is 15ms, and it leads to unexpected inconsistencies when a thread sleeps for values around a 10ms boundary. Further, it is really the magnitude of the requested sleep time that is significant, not its divisibility into 10ms pieces: a 10ms sleep requires high resolution; a 1001ms sleep does not.
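
As a pure-Java illustration of point 2 (hypothetical; HotSpot's actual check lives in native code), policy (b) keys on divisibility rather than magnitude:

    // Policy (b)'s trigger: only sleeps that are not multiples of 10 ms
    // get the 1 ms interrupt period for their duration.
    static boolean usesHighResolution(long sleepMs) {
        return sleepMs % 10 != 0;  // sleep(1001) -> true, sleep(10) -> false
    }

So Thread.sleep(1001) needlessly pays the high-resolution cost, while Thread.sleep(10) never gets it even though it is short enough to need it.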
09-05-2006

WORK AROUND The specification for Thread.sleep() doesn't say anything about the resolution of the time period, so we could simply ignore the whole issue. But high-resolution sleeps seem to be a distinguishing feature of our VM, and they make a lot of people happy. ###@###.### 2002-07-19 ###@###.### 2002-11-15

Workaround provided by ###@###.###: We make sure we sleep for some number of milliseconds that is a multiple of 100. In our application we sleep from the current time to the end of some interval; since the computed duration ends in a random digit from 0 to 9, most sleeps triggered the problem. Now we round up to the next 100 ms, as sketched below.
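
A minimal sketch of that rounding workaround (the method name is hypothetical; the point is just to keep every duration a multiple of 10 ms so the VM stays on its default-resolution path):

    // Round the requested sleep up to the next multiple of 100 ms, as the
    // user's workaround does, so the VM never toggles the timer period.
    static void sleepRoundedUp(long millis) throws InterruptedException {
        long rounded = ((millis + 99) / 100) * 100;  // e.g. 204 -> 300
        Thread.sleep(rounded);
    }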
11-06-2004

PUBLIC COMMENTS High resolution Thread.sleep() needs lower overhead on Windows.
10-06-2004

SUGGESTED FIX Discussions with various helpful users suggest that sleeps don't have to have perfect 1ms resolution, and not all sleeps need that resolution: short sleeps need higher resolution than long sleeps. We were tending toward adding a threshold (e.g., a -XX:HighResolutionSleepThreshold=66 flag) that would default to something reasonable like 66ms (15 frames per second); sleeps longer than that could use the default time period, but sleeps below that threshold would use a higher-resolution period.

It was up for debate whether 1ms should be the high-resolution time period, or whether we could use something like 5% of the sleep request as the period, to save the overhead observed with 1ms sleeps. E.g., a Thread.sleep(66) call could set the time period to 3ms and be pretty happy, and might use 1/3 the timer overhead of a 1ms time period.

In the course of investigating this, I discovered that the Windows Multimedia SDK says you can't just ask for a 1ms time period (or any arbitrary time period), but must use something like:

    #define TARGET_RESOLUTION 1    // 1-millisecond target resolution

    TIMECAPS tc;
    UINT wTimerRes;

    if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR) {
        // Error; application can't continue.
    }
    wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
    timeBeginPeriod(wTimerRes);

(If TARGET_RESOLUTION is 1 and tc.wPeriodMin is >= 1, then max() will return tc.wPeriodMin, and if tc.wPeriodMin is <= tc.wPeriodMax (which it should be), then min() will also return tc.wPeriodMin; but that code is straight from their SDK document, so maybe there's something subtle going on. The math would be needed if TARGET_RESOLUTION weren't 1.)

This also points out that we are making calls like timeBeginPeriod(1L); where the SDK documents clearly say that milliseconds are counted in UINTs, not longs. The SDK also says one should check the result for the error return TIMERR_NOCANDO. ###@###.### 2002-07-19
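
For illustration, the threshold-plus-proportional-period idea could be expressed as a pure function (hypothetical Java sketch; the name periodFor, the 5% rule, and the clamping are taken from the discussion above, not from any actual HotSpot code):

    // Pick a timer-interrupt period for a sleep request.  Returns 0 to mean
    // "leave the default interrupt period alone"; otherwise the value (ms)
    // would be passed to timeBeginPeriod, clamped to the device's
    // [periodMin, periodMax] range as timeGetDevCaps requires.
    static int periodFor(long sleepMs, int thresholdMs,
                         int periodMin, int periodMax) {
        if (sleepMs > thresholdMs) {
            return 0;                       // long sleep: default period suffices
        }
        int target = (int) Math.max(1, sleepMs / 20);  // ~5% of the request
        return Math.min(Math.max(periodMin, target), periodMax);
    }

With a threshold of 66 and a device range of [1, 1000000], periodFor(66, 66, 1, 1000000) yields 3, matching the Thread.sleep(66)-with-a-3ms-period example above.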
19-07-2002