
To get a sense of how precise Instant.now() is on various machines, I ran a very simple test to see how often the clock value updates:

    import java.time.Instant;

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class Test {
        private static final Logger logger = LogManager.getLogger(Test.class.getName());

        public static void main(String... args) {
            while (true) {
                logger.info(Instant.now());
            }
        }
    }

On an Intel Core i9 laptop running Windows 11, most of the log lines printed the same timestamp, which only changed every 1 ms or so.

    18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
    18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
    18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
    18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
    18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
    18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
    18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
    18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
    18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
    18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z

On a Dell T620 server running Debian 12 (with chrony NTP), every timestamp was different and strictly increasing (roughly 5-10 µs apart).

    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585059578Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585065991Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585072460Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585078943Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585085285Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585091618Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585113372Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585122554Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585129166Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585135690Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585142432Z
    18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585148890Z

Both machines used Java 21.

I'm just curious why one machine can measure time with microsecond precision while the other can only manage millisecond precision. That's not just a little less precision - it's orders of magnitude less.

Microsecond precision starts to matter when you're comparing timestamps produced by different processes or machines, where you cannot rely on System.nanoTime().
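For illustration (with made-up timestamps a few microseconds apart), here is a minimal sketch of how millisecond resolution loses the ordering of events recorded on two different machines:

    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    public class PrecisionDemo {
        public static void main(String... args) {
            // Two hypothetical events from different machines, 5 microseconds apart.
            Instant a = Instant.parse("2025-04-15T22:18:04.585065991Z");
            Instant b = Instant.parse("2025-04-15T22:18:04.585070991Z");

            System.out.println(a.isBefore(b)); // true - order is clear at microsecond resolution

            // Truncated to milliseconds (what a coarse clock effectively records),
            // both events collapse to ...04.585Z and the ordering is lost.
            System.out.println(a.truncatedTo(ChronoUnit.MILLIS)
                    .equals(b.truncatedTo(ChronoUnit.MILLIS))); // true
        }
    }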

Is it a Windows vs Unix thing? Or is it more likely there's fundamentally better hardware clocks in the server?

  • Perhaps in the first case your computer is overburdened. If so, your JVM’s threads would not be scheduled on a CPU core for execution as often as on the second computer. Commented Apr 15 at 22:52
  • Thanks @Basil, but it's definitely not that. I ran the test again with the same results; the system CPU usage reported by Windows was in single-digit territory, and usage of other resources like memory was low. Commented Apr 15 at 22:54
  • 2
    @BasilBourque It doesn't have to do with CPU load. The timer resolution on Windows is lower (higher granularity) than on Linux. Commented Apr 16 at 7:31
  • 1
    Is it a Windows vs Unix thing? Or is it more likely there's fundamentally better hardware clocks in the server? It can be either or a combination. I read somewhere that clock precision in Windows can be configured somehow, but higher precision also costs more energy, so unless critical you probably don’t want it. I don’t remember the details. Commented Apr 16 at 7:57
  • 2
    Instant.now() is equivalent to Instant.now(Clock.systemUTC()), and the docs for Clock.system(), Clock.systemUTC(), and Clock.systemDefaultZone() all say “This clock is based on the best available system clock. This may use System.currentTimeMillis(), or a higher resolution clock if one is available.” I would guess the Windows clock is based on ticks (though that doesn’t explain why it jumped by 700 ns). Commented Apr 16 at 20:34

1 Answer


It's because Java's Instant.now() wraps whatever the OS gives you for the wall-clock, and Windows and Linux expose very different resolutions for that.

On Windows, Java ends up calling the Win32 GetSystemTimeAsFileTime API, which returns time in 100 ns units but is only updated on the kernel's system-timer interrupt. The default interrupt interval is about 15.6 ms, though a process can request a finer interval (down to 1 ms) via timeBeginPeriod, which is likely why you see the timestamp stick for ~1 ms (or more) between ticks rather than advancing continuously.

Windows 8+ added GetSystemTimePreciseAsFileTime for microsecond-level updates, but Java 21 doesn’t use that. (Source)

On Linux, Java calls the POSIX clock_gettime(CLOCK_REALTIME) function, which on modern distributions returns full nanosecond precision (subject to your hardware timer and kernel config). Typical resolutions you'll actually see are in the sub-microsecond to few-microsecond range. You can confirm your system's advertised resolution with clock_getres(2). (Source)
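If you want to see the effective granularity directly from Java, without touching any OS-specific APIs, a rough sketch is to sample Instant.now() in a tight loop and record the smallest non-zero step (the result is bounded below by the cost of the Instant.now() call itself):

    import java.time.Duration;
    import java.time.Instant;

    public class ClockResolution {
        public static void main(String... args) {
            Instant previous = Instant.now();
            long smallestStepNanos = Long.MAX_VALUE;
            int observedChanges = 0;

            // Sample the wall clock until we've seen 1000 changes, tracking
            // the smallest non-zero step between successive readings.
            while (observedChanges < 1_000) {
                Instant current = Instant.now();
                if (!current.equals(previous)) {
                    long step = Duration.between(previous, current).toNanos();
                    smallestStepNanos = Math.min(smallestStepNanos, step);
                    previous = current;
                    observedChanges++;
                }
            }

            System.out.println("Smallest observed step: " + smallestStepNanos + " ns");
        }
    }

On a machine like the Windows laptop above you would expect this to print something on the order of a millisecond, while on the Linux server it should land in the microsecond range.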

Under the hood, Instant.now() calls the native method getNanoTimeAdjustment(), whose documentation says:

The value returned has the best resolution available to the JVM on the current system. This is usually down to microseconds - or tenth of microseconds - depending on the OS/Hardware and the JVM implementation.

That's implemented in jvm.cpp as os::javaTimeSystemUTC(seconds, nanos).

On Windows, that calls GetSystemTimeAsFileTime.

Whereas on Linux, it calls clock_gettime(CLOCK_REALTIME, &ts);

The "right way" to get high precision in Java in a platform-independant manner is to use System.nanoTime(), which is good for meausuring elapsed time (but not wall-clock). On Windows that's using QueryPerformanceCounter.

I can't find a source explaining why Java does not use GetSystemTimePreciseAsFileTime on Windows platforms that provide it. Perhaps it's just simplicity of implementation.

