To get a sense of how precise `Instant.now()` is on various machines, I ran a very simple test to see how often the clock updated:
```java
import java.time.Instant;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Test {
    private static final Logger logger = LogManager.getLogger(Test.class.getName());

    public static void main(String... args) {
        // Log the current wall-clock time as fast as possible.
        while (true) {
            logger.info(Instant.now());
        }
    }
}
```

On an Intel Core i9 laptop running Windows 11, most of the log lines printed the same timestamp, which only changed every 1 ms or so:
```
18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
18:23:55.325 [main] INFO Test - 2025-04-15T22:23:55.325858100Z
18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
18:23:55.326 [main] INFO Test - 2025-04-15T22:23:55.326858800Z
```

On a Dell T620 server running Debian 12 (with chrony for NTP), every timestamp was different and strictly increasing, roughly 5-10 µs apart:
```
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585059578Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585065991Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585072460Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585078943Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585085285Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585091618Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585113372Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585122554Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585129166Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585135690Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585142432Z
18:18:04.585 [main] INFO Test - 2025-04-15T22:18:04.585148890Z
```

Both machines used Java 21.
I'm just curious why one machine can measure time with microsecond precision while the other can only manage millisecond precision. That's not just a little less precision; it's orders of magnitude less.
Microsecond precision starts to matter when you're comparing timestamps produced by different processes or machines, where you can't rely on `System.nanoTime()`.
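(For illustration, a minimal sketch of that distinction: `System.nanoTime()` has an arbitrary per-JVM origin, so its values are only meaningful relative to other calls in the same process, while `Instant.now()` is anchored to the epoch and can be compared across machines.)

```java
import java.time.Duration;
import java.time.Instant;

public class NanoVsInstant {
    public static void main(String[] args) throws InterruptedException {
        // nanoTime() values have an arbitrary origin that differs per JVM,
        // so only the *difference* between two calls in the same process is meaningful.
        long n1 = System.nanoTime();
        Thread.sleep(10);
        long n2 = System.nanoTime();
        System.out.println("Elapsed (same JVM): " + (n2 - n1) + " ns");

        // Instant.now() is anchored to the Unix epoch, so values from
        // different processes or machines can be compared directly.
        Instant i1 = Instant.now();
        Thread.sleep(10);
        Instant i2 = Instant.now();
        System.out.println("Elapsed (wall clock): " + Duration.between(i1, i2).toNanos() + " ns");
    }
}
```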
Is it a Windows vs. Unix thing? Or is it more likely that the server simply has a fundamentally better hardware clock?
`Instant.now()` is equivalent to `Instant.now(Clock.systemUTC())`, and the docs for `Clock.system()`, `Clock.systemUTC()`, and `Clock.systemDefaultZone()` all say: “This clock is based on the best available system clock. This may use System.currentTimeMillis(), or a higher resolution clock if one is available.” I would guess the Windows clock is tick-based (though that doesn’t explain why the sub-millisecond digits also moved by 700 ns between ticks).
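One rough way to test that guess (a sketch, not anything authoritative) is to sample `Instant.now()` in a tight loop and record the smallest non-zero increment the clock actually reports:

```java
import java.time.Duration;
import java.time.Instant;

public class ClockResolutionProbe {
    public static void main(String[] args) {
        // Instant.now() is documented as equivalent to Instant.now(Clock.systemUTC()).
        Instant previous = Instant.now();
        long smallestStepNanos = Long.MAX_VALUE;

        // Sample the clock many times and track the smallest non-zero step observed.
        for (int i = 0; i < 1_000_000; i++) {
            Instant current = Instant.now();
            long step = Duration.between(previous, current).toNanos();
            if (step > 0 && step < smallestStepNanos) {
                smallestStepNanos = step;
            }
            previous = current;
        }

        // On a tick-based clock this tends to come out around 1 ms; with a
        // high-resolution source it is typically a few microseconds or less.
        System.out.println("Smallest observed step: " + smallestStepNanos + " ns");
    }
}
```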