Saturday, April 29, 2017

Improving accuracy of timestamps based on system time

Run this small Java program on Windows. What output would you expect?


public class TimeAccuracyChecker
{
    public static void main(String[] args)
    {
        long first;
        long next;

        first = System.currentTimeMillis();
        // Busy-wait until the reported time changes, i.e. until the next timer tick.
        while (true)
        {
            next = System.currentTimeMillis();
            if (first != next)
                break;
        }
        System.out.println("First value: " + first + ", next value: " + next + ", difference: " + (next - first));
    }
}

The difference will vary from 15 to 16 msec:
C:\> java TimeAccuracyChecker

First value: 1447113411315, next value: 1447113411331, difference: 16
Why? Because on Windows, system time is advanced by interrupts from a timer (called "ticks"), which fire at equal intervals.
By default, that interval is 2^(-6) = 0.015625 seconds. That's called "precision -6":
C:\> w32tm /query /status
Leap Indicator: 0(no warning)
Stratum: 3 (secondary reference - syncd by (S)NTP)
Precision: -6 (15.625ms per tick)
Root Delay: 0.2208562s
Root Dispersion: 7.8352645s
ReferenceId: 0x1765BB44 (source IP:  23.101.187.68)
Last Successful Sync Time: 2015-11-10 01:29:10
Source: time.windows.com,0x9
Poll Interval: 11 (2048s)
(That's why a server which handled a message first can log a bigger timestamp than the server which handled it afterwards, with a difference of up to 16 msec - even if both servers are well synced to the same clock source.)
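
As a quick sanity check of that arithmetic:

public class PrecisionToMillis
{
    public static void main(String[] args)
    {
        int precision = -6; // as reported by w32tm /query /status
        double seconds = Math.pow(2, precision);
        System.out.printf("Precision %d => %f s = %.3f ms%n",
                precision, seconds, seconds * 1000);
        // Prints: Precision -6 => 0.015625 s = 15.625 ms
    }
}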


But wait, you'll say, doesn't Thread.sleep() work with millisecond precision? Right, it does. That's because it raises the timer resolution to 1 msec by calling the Windows function timeBeginPeriod() when the sleep starts, and restores the default by calling timeEndPeriod() when it finishes.
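
If you'd rather control this explicitly than rely on Thread.sleep()'s side effect, you can call the same pair of functions yourself. Here is a minimal sketch using JNA (assuming JNA 5+ is on the classpath; the Winmm interface below is my own hand-written mapping of the two winmm.dll functions, not something JNA ships):

import com.sun.jna.Library;
import com.sun.jna.Native;

public class ExplicitTimerResolution
{
    // Hand-written JNA mapping of the two functions we need from winmm.dll.
    public interface Winmm extends Library
    {
        Winmm INSTANCE = Native.load("winmm", Winmm.class);

        int timeBeginPeriod(int uPeriod); // request a minimum resolution, in msec
        int timeEndPeriod(int uPeriod);   // must match a previous timeBeginPeriod()
    }

    public static void main(String[] args)
    {
        Winmm.INSTANCE.timeBeginPeriod(1); // ask for 1 msec ticks
        try
        {
            // ... timestamp-sensitive work goes here ...
        }
        finally
        {
            Winmm.INSTANCE.timeEndPeriod(1); // restore the default
        }
    }
}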

Now, what's interesting is that this change is global at the system level. This means that if a single thread changes the timer resolution to a smaller value, it affects not only the other threads in its process - it affects all processes in the OS! The CPU(s) will now receive the timer interrupts at shorter intervals.

(That's why the output of the above example can change if you run it alongside IntelliJ IDEA on your workstation, for example.)
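
To watch this global effect live, here is a small observer sketch: it keeps measuring the gap between consecutive distinct System.currentTimeMillis() values. Start it, then launch or close an application that raises the timer resolution, and the reported tick length will change:

public class TickObserver
{
    public static void main(String[] args)
    {
        long prev = System.currentTimeMillis();
        int samples = 0;
        while (samples < 20)
        {
            long now = System.currentTimeMillis();
            if (now != prev) // crossed a tick boundary
            {
                // The first gap may be a partial tick; the rest are full ticks.
                System.out.println("Observed tick: " + (now - prev) + " msec");
                prev = now;
                samples++;
            }
        }
    }
}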

So if we're interested in improving the accuracy of timestamps in our program, all we have to do is launch a single background thread which does nothing but sleep.
Check this out:
public class TimeAccuracyWithBackgroundSleepChecker
{
    public static void main(String[] args)
    {
        long first;
        long next;
        
        // Background thread that does nothing but sleep; while it sleeps,
        // the JVM keeps the Windows timer resolution raised to 1 msec.
        Thread t = new Thread()
        {
            public void run()
            {
                try
                {
                    Thread.sleep(Integer.MAX_VALUE);
                } catch (InterruptedException e) {}
            }
        };
        t.start();
        
        first = System.currentTimeMillis();
        // Busy-wait until the reported time changes, as before.
        while (true)
        {
            next = System.currentTimeMillis();
            if (first != next)
                break;
        }
                
        System.out.println("First value: " + first + ", next value: " + next + ", difference: " + (next - first));
        t.interrupt(); // wake the sleeping thread so the program can exit
    }
}

Output:
C:\> java TimeAccuracyWithBackgroundSleepChecker
First value: 1447116447771, next value: 1447116447772, difference: 1
This has a price. Quote from MSDN:
However, it can also reduce overall system performance, because the thread scheduler switches tasks more often. High resolutions can also prevent the CPU power management system from entering power-saving modes. Setting a higher resolution does not improve the accuracy of the high-resolution performance counter.


The small TimerTool utility shows the current timer resolution on your machine and can change it. Check it out while running the examples.
