JDK-6435126 : ForceTimeHighResolution switch doesn't operate as intended
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 5.0u6,6
  • Priority: P3
  • Status: Closed
  • Resolution: Won't Fix
  • OS: windows,windows_xp
  • CPU: generic,x86
  • Submitted: 2006-06-07
  • Updated: 2019-08-28
  • Resolved: 2012-07-23
The fix for 4500388 introduced the ForceTimeHighResolution switch to make the VM request a 1ms timer interrupt period at startup, restore the original period at shutdown, and consequently disable the per-sleep changing of the timer period that would otherwise occur. This was applied to 1.3.1_04+, 1.4.0_02+, 5.0+ and 6.

However, the code to change the timer was placed in DllMain, which executes before the command-line switches have been processed, so ForceTimeHighResolution is always seen as false at that stage and the timer period is never set to 1ms. Because the flag also disables per-sleep changes to the timer, the net result is that using this flag actually disables all use of the high-resolution timer for sleep purposes.

You can use perfmon, on Windows, to display the interrupts/sec that the system is experiencing. Select a scale factor of 1 and set the Y axis to cover 0-1000. Under a normal 10ms timer period you will see around 100+ interrupts/sec. With the 1ms timer period you will see this shoot up to 1000+ interrupts/sec.
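If perfmon is not available, the effective timer granularity can also be estimated from Java itself. The sketch below is an illustration, not part of the original report; the class name and sample counts are arbitrary. It times a series of short Thread.sleep() calls: under a 10ms interrupt period a sleep(1) typically takes close to 10ms, while under a 1ms period it returns in roughly 1-2ms.

```java
// Illustrative sketch: infer the effective timer interrupt period by
// measuring how long short Thread.sleep() calls actually take.
public class TimerGranularity {

    // Average elapsed wall-clock time, in milliseconds, of `samples`
    // consecutive Thread.sleep(sleepMs) calls.
    static double averageSleepMillis(int sleepMs, int samples) throws InterruptedException {
        long totalNanos = 0;
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            Thread.sleep(sleepMs);
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / (samples * 1_000_000.0);
    }

    public static void main(String[] args) throws InterruptedException {
        double avg = averageSleepMillis(1, 50);
        System.out.printf("Thread.sleep(1) averaged %.2f ms%n", avg);
        // On Windows with a 10ms timer period, expect roughly 10ms here;
        // with a 1ms period, roughly 1-2ms.
    }
}
```

On modern Windows builds the exact numbers may differ, since scheduler behavior has changed since XP/2003, so treat the output as a rough indicator only.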

NOTE: Changing the timer period is a system-wide action, so Windows operates the timer at the shortest period requested by any running application. On some systems you will find that some other piece of software has already set a 1ms period, in which case the operation of the VM, with or without ForceTimeHighResolution, has no effect on the timer period.


An SDN user confirms they also see time speed-ups (that is, "system time goes too fast when calling sleep(45)"): --- I am confirming the bug on JDK 6 Update 6 running on Windows 2003 SP2. David Bala8ic (yes, that is an eight) ---

The following compares the cost of using ForceTimeHighResolution with an init2() solution vs. a defer-until-first-sleep solution, using refworkload's reference_client.

init2/result_fthr/: reference_client:

    Benchmark          Samples  Mean      Stdev  %Diff   P      Significant
    footprint3_real    10       27852.58  33.80    0.04  0.429  *
    jetstream_client   10          85.93   1.93   -5.88  0.002  Yes
    specjvm98_client   10         187.18   2.19   -0.12  0.789  *
    start_applet_awt   10         367.00   8.43    5.66  0.011  *
    start_application  10         225.00   8.19   -7.45  0.000  Yes
    start_application  10         282.80   5.01   -6.36  0.000  Yes
    startup3           10           3.05   0.04    9.96  0.000  Yes
    swingmark          10         197.99   9.17  -41.11  0.000  Yes
    swingmark_native   10         220.74  65.53  -30.37  0.001  Yes

lazy-init/result_fthr/: reference_client:

    Benchmark          Samples  Mean      Stdev  %Diff   P      Significant
    footprint3_real    10       27810.44  31.38    0.19  0.001  Yes
    jetstream_client   10          84.25   2.29   -7.72  0.000  Yes
    specjvm98_client   10         185.96   1.91   -0.77  0.063  *
    start_applet_awt   10         398.50  39.77   -2.44  0.517  *
    start_application  10         210.80   8.23   -0.67  0.709  *
    start_application  10         267.30   5.19   -0.53  0.628  *
    startup3           10           3.24   0.05    4.39  0.000  Yes
    swingmark          10         335.17   1.56   -0.30  0.136  *
    swingmark_native   10         316.01   1.29   -0.32  0.075  *

The variance across runs seems quite high, so take these as a rough guide only. The main take-away from the above is that start_application doesn't suffer when we use lazy setting of the interrupt period. startup3 also improves quite a bit, which can be explained by the fact that only 4 of the 6 apps used in startup3 actually use sleep and so cause the timer period to change.

EVALUATION This has been broken for too long. No use fixing it now.

EVALUATION ---------------------------------------- Re-targeting for Dolphin (Java 7) as part of the 5005837 work.

WORK AROUND Do not use ForceTimeHighResolution. Instead, at the start of the application, create and start a daemon thread that simply sleeps for a very long time (one that isn't a multiple of 10ms); this sets the timer period to 1ms for the duration of that sleep, which in this case is the lifetime of the VM:

    new Thread() {
        {
            this.setDaemon(true);
            this.start();
        }
        public void run() {
            while (true) {
                try {
                    Thread.sleep(Integer.MAX_VALUE);
                } catch (InterruptedException ex) {
                }
            }
        }
    };

Note that as the sleep never terminates, the timer period is never restored by the VM. This is not a problem, however, because it seems Windows tracks the changes to the timer in the process control block and removes the change when the process terminates. This particular feature of these APIs is not documented, but the use of the PCB is mentioned in one of the "Inside NT" articles, because it stops a process from calling timeEndPeriod if it never called timeBeginPeriod. It does make sense that the OS would not simply trust the application to do the right thing here.

EVALUATION Fixing this bug is on the one hand trivial, and on the other very tricky. The code change is trivial. The problem is that this switch has been failing to operate as intended for several years now, yet there have been no reports of this issue from anyone using the switch, so anyone using the switch must be satisfied with the way their application is behaving, and fixing this bug could therefore adversely affect those applications. It may be that we have to leave this switch as broken, a position that is more acceptable when one considers the simple workaround available.

SUGGESTED FIX Move the setting of the 1ms timer period out of DllMain and into the init_2() method (called after command-line arguments have been processed).