Using the code below I would have expected to see cnt be close to 0, but on Windows I only see values above 500,000.
from time import *

def test_time(f, c):
    cnt = 0
    for i in range(c):
        ps, ts = f(), f()
        if not ps - ts:
            cnt += 1
    return cnt

if __name__ == '__main__':
    res = test_time(perf_counter_ns, 1_000_000)
    print(res)  # usually returns a count of over 500k

On Linux this does not happen. I understand that the output resolution on Windows is limited to 100 ns increments. My question is whether I am missing something here, or whether there is a way this can be made to work on Windows.
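As a side note, the resolution the interpreter itself reports for this clock can be inspected with time.get_clock_info. A minimal sketch (the values in the comments are what I'd expect on typical Windows and Linux machines, not something verified here):

import time

# Inspect the clock backing perf_counter()/perf_counter_ns().
# On Windows this typically reports QueryPerformanceCounter() with a
# resolution of 1e-07 s (100 ns); on Linux, clock_gettime(CLOCK_MONOTONIC)
# with a resolution of 1e-09 s (1 ns).
info = time.get_clock_info('perf_counter')
print(info.implementation, info.resolution)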
EDIT Others might find High-precision clock in Python, suggested by @JonSG, helpful. It gives a good overview of precision time measurement in Python, but does not address the narrower question of why consecutive calls to perf_counter_ns can yield the same value on Windows but not on Linux.
EDIT I've tested this behaviour with Python 3.11, 3.12 and 3.13.
EDIT I've tested restricting the Python process on Windows to a single core and altering the process priority. Neither had an apparent impact on the number of collisions.
Why can consecutive calls to time.perf_counter_ns() yield the same value on Windows but not on Linux? One of the answers in the question you linked shows that Windows has a flat 100 ns timing for perf_counter_ns while Linux has ~70 ns.

Surely those 30 ns aren't the reason why Linux doesn't have value collisions on consecutive calls and Windows does?

If f() takes, let's say, 80 ns, then if the system's value changes every 70 ns you will never have duplicates, and if it changes every 100 ns you will have many duplicates. Two measurements 80 ns apart can easily fall within the same 100 ns window, but not within the same 70 ns window.
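To make that concrete, here is a toy simulation (not a real measurement: it assumes a hypothetical, perfectly fixed call cost of 80 ns and an idealised clock that only advances in fixed ticks) showing why an 80 ns gap never produces duplicates with a 70 ns tick but regularly does with a 100 ns tick:

# Toy model: a clock that only advances in fixed "tick_ns" steps, sampled by
# two back-to-back calls that each take "call_cost_ns" of real time.
# Both numbers are hypothetical, chosen only to illustrate the argument above.
def collisions(tick_ns, call_cost_ns, pairs=1_000_000):
    cnt = 0
    t = 0  # real elapsed time in ns
    for _ in range(pairs):
        first = (t // tick_ns) * tick_ns   # value the first call would return
        t += call_cost_ns                  # real time passes while f() runs
        second = (t // tick_ns) * tick_ns  # value the second call would return
        if first == second:
            cnt += 1
        t += call_cost_ns                  # cost of the second call before the next pair
    return cnt

print(collisions(70, 80))   # 0: an 80 ns gap always crosses a 70 ns tick boundary
print(collisions(100, 80))  # 200000: one pair in five stays inside the same 100 ns window

In reality the per-call cost is neither fixed nor exactly 80 ns, so the observed counts differ, but the mechanism is the same: a duplicate appears whenever both calls complete within a single tick of the underlying clock.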