For high-precision timing in Python
Quick summary
Among time.monotonic_ns(), time.perf_counter_ns(), and time.time_ns(), only time.perf_counter_ns() has sub-microsecond precision on both Linux and Windows, so it is the one to use.
The difference between resolution, precision, and accuracy is widely misunderstood, which leads people to believe that precise timing is easier and more accessible than it really is. In this context of software timing:
- Resolution = the smallest time difference the units can represent. Ex: 1 ns.
- Precision = the repeatability of the measurement. This is the smallest time difference you can repeatedly measure, and it's usually much larger than the resolution. Ex: 0.070 us (70 ns) typically on Linux, and up to 16000 us (16 ms), or ~230,000x worse, on Windows. (See the short sketch just after this list.)
- Accuracy = how close the measurement is to the true value. This has to do with how accurate your hardware clock's quartz crystal (or equivalent RC, ceramic, or PLL) oscillator is, and how well it's calibrated. We won't worry about this one here.
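To make the resolution vs. precision distinction concrete, here is a minimal sketch (mine, separate from the full test program below) that takes two back-to-back readings; the delta between them approximates the precision, even though the units have 1 ns resolution:

```python
import time

# Two back-to-back readings. Both values have 1 ns *resolution* (the units
# are nanoseconds), but the smallest delta you can repeatedly measure is the
# *precision*, which is much coarser than 1 ns.
t1_ns = time.perf_counter_ns()
t2_ns = time.perf_counter_ns()
print(f"delta = {t2_ns - t1_ns} ns")  # typically ~70-100 ns, not 1 ns
```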
Using my time_monotonic_ns__get_precision.py program below, here are my results, tested on a couple of high-end, 20-thread Dell Precision 5570 laptops: an Intel Core i9 (Linux) and a Core i7 (Windows 11). Your results will vary based on your hardware and OS:
```
-------------------------------------------------------------------------------
1. time.monotonic_ns()
-------------------------------------------------------------------------------
            Resolution    Precision
            ----------    ---------
Linux:      1 ns          0.070 us +/- 0.118 us        (70 ns +/- 118 ns)
Windows:    1 ns          16000.000 us +/- 486.897 us  (16 ms +/- 0.487 ms)

-------------------------------------------------------------------------------
2. time.perf_counter_ns() [Best!]
-------------------------------------------------------------------------------
            Resolution    Precision
            ----------    ---------
Linux:      1 ns          0.069 us +/- 0.070 us        ( 69 ns +/- 70 ns)
Windows:    1 ns          0.100 us +/- 0.021 us        (100 ns +/- 21 ns)

-------------------------------------------------------------------------------
3. time.time_ns()
-------------------------------------------------------------------------------
            Resolution    Precision
            ----------    ---------
Linux:      1 ns          0.074 us +/- 0.226 us        (74 ns +/- 226 ns)
Windows:    1 ns          10134.354 us +/- 5201.053 us (10.134 ms +/- 5.201 ms)
```
Notice that even though all 3 functions have 1 ns resolution, only time.perf_counter_ns() has sub-microsecond precision on both Linux and Windows. The other two functions have sub-microsecond precision on Linux, but terrible, millisecond-level precision on Windows.
Details
1. Python 3.7 or later
If using Python 3.7 or later, use the modern, cross-platform time module functions such as time.monotonic_ns(), time.perf_counter_ns(), and time.time_ns(), here: https://docs.python.org/3/library/time.html#time.monotonic_ns.
```python
import time

# For Unix, Linux, Windows, etc.
time_ns = time.monotonic_ns()     # note: unspecified epoch
time_ns = time.perf_counter_ns()  # **best precision**
time_ns = time.time_ns()          # known epoch

# Unix or Linux only; requires a clock ID argument
time_ns = time.clock_gettime_ns(time.CLOCK_MONOTONIC)

# etc. There are others. See the link above.
```
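For example, here is the typical pattern for timing a block of code with the best-precision counter (a minimal sketch; the code being timed is a placeholder):

```python
import time

start_ns = time.perf_counter_ns()
# ... code under test goes here ...
elapsed_ns = time.perf_counter_ns() - start_ns
print(f"Elapsed: {elapsed_ns} ns ({elapsed_ns / 1e6:.3f} ms)")
```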
See also this note from my other answer from 2016, How can I get millisecond and microsecond-resolution timestamps in Python?:
You might also try time.clock_gettime_ns() on Unix or Linux systems. Based on its name, it appears to call the underlying clock_gettime() C function which I use in my nanos() function in C in my answer here and in my C Unix/Linux library here: timinglib.c.
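As a small sketch of how to call it (Unix or Linux only; note that it requires a clock ID argument, e.g. time.CLOCK_MONOTONIC):

```python
import time  # Unix or Linux only

# Read the monotonic clock directly, and query its reported resolution
t_ns = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
res_sec = time.clock_getres(time.CLOCK_MONOTONIC)  # clock resolution, in seconds
print(t_ns, res_sec)
```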
As a quick test, you can run the following to get a feel for what the timing precision is on your particular hardware and OS. I have tested and run this on both Linux and Windows:
python/time_monotonic_ns__get_precision.py from my eRCaGuy_hello_world repo:
```python
#!/usr/bin/env python3

import os
import pandas as pd
import time

SAMPLE_SIZE_DEFAULT = 20000
# For cases where Windows may have really crappy 16 ms precision, we need a
# significantly larger sample size.
SAMPLE_SIZE_MIN_FOR_WINDOWS = 20000000

DEBUG = False  # Set to True to enable debug prints

def debug_print(*args, **kwargs):
    if DEBUG:
        print(*args, **kwargs)

def print_bar():
    debug_print("="*56, "\n")

def process_timestamps(timestamps_ns, output_stats_header_str):
    """
    Process the timestamps list to determine the time precision of the system.
    """
    # Create a pandas DataFrame for efficient analysis of large datasets
    df = pd.DataFrame({"timestamp_ns": timestamps_ns}, dtype='int64')
    debug_print(f"df original:\n{df}")
    print_bar()

    # Remove duplicate timestamps. On Linux, there won't be any, because it has
    # sub-microsecond precision, but on Windows, the dataset may be mostly
    # duplicates because repeated calls to `time.monotonic_ns()` may return the
    # same value if called in quick succession.
    df.drop_duplicates(inplace=True)
    debug_print(f"df no duplicates:\n{df}")
    print_bar()

    if len(df) < 2:
        print("Error: not enough data to calculate time precision. Try \n"
              "increasing `SAMPLE_SIZE` by a factor of 10, and try again.")
        exit(1)

    # Now calculate the time differences between the timestamps.
    df["previous_timestamp_ns"] = df["timestamp_ns"].shift(1)
    df = df.dropna()  # remove NaN row
    df["previous_timestamp_ns"] = df["previous_timestamp_ns"].astype('int64')
    df["delta_time_us"] = (
        df["timestamp_ns"] - df["previous_timestamp_ns"]) / 1e3
    debug_print(f"df:\n{df}")
    print_bar()

    # Output statistics
    mean = df["delta_time_us"].mean()
    median = df["delta_time_us"].median()
    mode = df["delta_time_us"].mode()[0]
    stdev = df["delta_time_us"].std()

    print(f">>>>>>>>>> {output_stats_header_str} <<<<<<<<<<")
    print(f"Mean:   {mean:.3f} us")
    print(f"Median: {median:.3f} us")
    print(f"Mode:   {mode:.3f} us")
    print(f"Stdev:  {stdev:.3f} us")
    print(f"FINAL ANSWER: time precision on this system: "
          + f"{median:.3f} +/- {stdev:.3f} us\n")

# =============================================================================
# 1. Test `time.monotonic_ns()`
# =============================================================================

SAMPLE_SIZE = SAMPLE_SIZE_DEFAULT
if os.name == 'nt':  # The OS is Windows
    if SAMPLE_SIZE < SAMPLE_SIZE_MIN_FOR_WINDOWS:
        SAMPLE_SIZE = SAMPLE_SIZE_MIN_FOR_WINDOWS
        print(f"Detected: running on Windows. Using a larger SAMPLE_SIZE of "
              f"{SAMPLE_SIZE}.\n")

# Gather timestamps with zero delays between them
# - preallocated list, so that no dynamic memory allocation will happen in the
#   loop below
timestamps_ns = [None]*SAMPLE_SIZE
for i in range(len(timestamps_ns)):
    timestamps_ns[i] = time.monotonic_ns()

process_timestamps(timestamps_ns, "1. time.monotonic_ns()")

# =============================================================================
# 2. Test `time.perf_counter_ns()`
# =============================================================================

SAMPLE_SIZE = SAMPLE_SIZE_DEFAULT
timestamps_ns = [None]*SAMPLE_SIZE
for i in range(len(timestamps_ns)):
    timestamps_ns[i] = time.perf_counter_ns()

process_timestamps(timestamps_ns, "2. time.perf_counter_ns()")

# =============================================================================
# 3. Test `time.time_ns()`
# =============================================================================

SAMPLE_SIZE = SAMPLE_SIZE_DEFAULT
if os.name == 'nt':  # The OS is Windows
    if SAMPLE_SIZE < SAMPLE_SIZE_MIN_FOR_WINDOWS:
        SAMPLE_SIZE = SAMPLE_SIZE_MIN_FOR_WINDOWS
        print(f"Detected: running on Windows. Using a larger SAMPLE_SIZE of "
              f"{SAMPLE_SIZE}.\n")

timestamps_ns = [None]*SAMPLE_SIZE
for i in range(len(timestamps_ns)):
    timestamps_ns[i] = time.time_ns()

process_timestamps(timestamps_ns, "3. time.time_ns()")
```
Here are my runs and output on a couple of high-end, 20-thread Dell Precision 5570 laptops: an Intel Core i9 (Linux) and a Core i7 (Windows 11).
On Linux Ubuntu 22.04 (python3 --version shows Python 3.10.12):
```
eRCaGuy_hello_world$ time python/time_monotonic_ns__get_precision.py
>>>>>>>>>> 1. time.monotonic_ns() <<<<<<<<<<
Mean:   0.081 us
Median: 0.070 us
Mode:   0.070 us
Stdev:  0.118 us
FINAL ANSWER: time precision on this system: 0.070 +/- 0.118 us

>>>>>>>>>> 2. time.perf_counter_ns() <<<<<<<<<<
Mean:   0.076 us
Median: 0.069 us
Mode:   0.068 us
Stdev:  0.070 us
FINAL ANSWER: time precision on this system: 0.069 +/- 0.070 us

>>>>>>>>>> 3. time.time_ns() <<<<<<<<<<
Mean:   0.080 us
Median: 0.074 us
Mode:   -0.030 us
Stdev:  0.226 us
FINAL ANSWER: time precision on this system: 0.074 +/- 0.226 us

real    0m0.264s
user    0m0.802s
sys     0m1.124s
```
On Windows 11 (python --version shows Python 3.12.1):
```
eRCaGuy_hello_world$ time python/time_monotonic_ns__get_precision.py
Detected: running on Windows. Using a larger SAMPLE_SIZE of 20000000.

>>>>>>>>>> 1. time.monotonic_ns() <<<<<<<<<<
Mean:   15625.000 us
Median: 16000.000 us
Mode:   16000.000 us
Stdev:  486.897 us
FINAL ANSWER: time precision on this system: 16000.000 +/- 486.897 us

>>>>>>>>>> 2. time.perf_counter_ns() <<<<<<<<<<
Mean:   0.101 us
Median: 0.100 us
Mode:   0.100 us
Stdev:  0.021 us
FINAL ANSWER: time precision on this system: 0.100 +/- 0.021 us

Detected: running on Windows. Using a larger SAMPLE_SIZE of 20000000.

>>>>>>>>>> 3. time.time_ns() <<<<<<<<<<
Mean:   9639.436 us
Median: 10134.354 us
Mode:   610.144 us
Stdev:  5201.053 us
FINAL ANSWER: time precision on this system: 10134.354 +/- 5201.053 us

real    0m8.301s
user    0m0.000s
sys     0m0.000s
```
The median value in each case is the most representative of the typical precision you can expect on your system, because the median removes both the time jitter and the outliers (unlike the mean, which averages out the time jitter but is still skewed by outliers).
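As a toy illustration of why the median is robust where the mean is not (made-up numbers, with one artificial outlier):

```python
import pandas as pd

# Four typical ~70 ns deltas plus one 50 us outlier (e.g., a context switch)
deltas_us = pd.Series([0.070, 0.070, 0.070, 0.070, 50.0])
print(deltas_us.mean())    # 10.056 us -> badly skewed by the single outlier
print(deltas_us.median())  # 0.070 us  -> the typical value
```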
These results demonstrate conclusively that only the time.perf_counter_ns() function has both sub-microsecond resolution and precision on both Windows and Linux, which is what I most needed to know.
Unspecified epoch:
Note that when using time.monotonic() or time.monotonic_ns(), the official documentation says:
The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid.
So, if instead of a precise relative timestamp you need an absolute datetime-type timestamp, containing information like the year, month, and day, then consider using datetime instead. See this answer here, my comment below it, and the official datetime documentation here, specifically for datetime.now() here. Here is how to get a timestamp with that module:
```python
from datetime import datetime

now_datetime_object = datetime.now()
```
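The returned object carries the absolute calendar fields, and can be converted back to a float Unix timestamp (a brief sketch using the standard datetime API; the printed values are illustrative):

```python
from datetime import datetime

now = datetime.now()
print(now.year, now.month, now.day)  # absolute calendar information
print(now.isoformat())               # e.g. '2023-08-07T14:14:36.123456'
print(now.timestamp())               # float seconds since the Unix epoch
```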
Do not expect it to have the resolution, precision, or monotonicity of time.clock_gettime_ns(), however. So, for timing small differences or doing precision timing work, prefer time.clock_gettime_ns() instead.
Another option is time.time(), which is also not guaranteed to have "better precision than 1 second". You can convert it back to a datetime using time.localtime() or time.gmtime(). See here. Here's how to use it:
```python
>>> import time
>>> time.time()
1691442858.8543699
>>> time.localtime(time.time())
time.struct_time(tm_year=2023, tm_mon=8, tm_mday=7, tm_hour=14, tm_min=14, tm_sec=36, tm_wday=0, tm_yday=219, tm_isdst=0)
```
Or, even better: time.time_ns():
```python
>>> import time
>>> time.time_ns()
1691443244384978570
>>> time.localtime(time.time_ns()/1e9)
time.struct_time(tm_year=2023, tm_mon=8, tm_mday=7, tm_hour=14, tm_min=20, tm_sec=57, tm_wday=0, tm_yday=219, tm_isdst=0)
>>> time.time_ns()/1e9
1691443263.0889063
```
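One caveat worth knowing (my note, not from the docs): dividing the integer nanosecond count by 1e9 goes through a 64-bit float, which has only ~15-16 significant digits, so for present-day epoch values the result is only good to roughly microsecond granularity. A quick check:

```python
import time

ns = time.time_ns()
sec_float = ns / 1e9                # float64: ~15-16 significant digits
ns_roundtrip = int(sec_float * 1e9)
print(ns - ns_roundtrip)            # typically off by up to a few hundred ns
```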
2. Python 3.3 or later
On Windows, in Python 3.3 or later, you can use time.perf_counter(), as shown by @ereOn here. See: https://docs.python.org/3/library/time.html#time.perf_counter. It is actually cross-platform, not Windows-only, and provides a timestamp with roughly 0.5 us resolution, in floating point seconds. Ex:
```python
import time

# For Python 3.3 or later
time_sec = time.perf_counter()  # cross-platform: Windows, Unix, Linux, etc.
time_sec = time.monotonic()     # also cross-platform (always available since Python 3.5)
```
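A minimal sketch of timing with this float-seconds counter (time.sleep() is used here as a stand-in for real work):

```python
import time

start_sec = time.perf_counter()
time.sleep(0.1)  # stand-in for the code being timed
elapsed_sec = time.perf_counter() - start_sec
print(f"Elapsed: {elapsed_sec:.6f} sec")  # ~0.1 sec
```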
3. Pre-Python 3.3 (ex: Python 3.0, 3.1, 3.2), or later
Summary:
See my other answer from 2016 here for 0.5 us resolution timestamps, or better, on Windows and Linux, for versions of Python as old as 3.0, 3.1, or 3.2 even! We do this by calling C or C++ shared object libraries (.dll on Windows, or .so on Unix or Linux) using the ctypes module in Python.
I provide these functions:
```python
millis()
micros()
delay()
delayMicroseconds()
```
Download GS_timing.py from my eRCaGuy_PyTime repo, then do:
```python
import GS_timing

time_ms = GS_timing.millis()
time_us = GS_timing.micros()
GS_timing.delay(10)                 # delay 10 ms
GS_timing.delayMicroseconds(10000)  # delay 10000 us
```
Details:
In 2016, I was working in Python 3.0 or 3.1 on an embedded project on a Raspberry Pi, which I also tested and ran frequently on Windows. I needed millisecond- and microsecond-resolution timestamps for some precise timing I was doing with ultrasonic sensors. The Python language at the time did not provide this resolution, and neither did any answer to this question, so I came up with this separate Q&A here: How can I get millisecond and microsecond-resolution timestamps in Python?. I stated in the question at the time:
I read other answers before asking this question, but they rely on the time module, which prior to Python 3.3 did NOT have any type of guaranteed resolution whatsoever. Its resolution is all over the place. The most upvoted answer here quotes a Windows resolution (using their answer) of 16 ms, which is 32000 times worse than my answer provided here (0.5 us resolution). Again, I needed 1 ms and 1 us (or similar) resolutions, not 16000 us resolution.
Zero, I repeat: zero answers here on 12 July 2016 had any resolution better than 16 ms for Windows in Python 3.1. So, I came up with this answer, which has 0.5 us or better resolution in pre-Python 3.3 on Windows and Linux. If you need something like that for an older version of Python, or if you just want to learn how to call C or C++ dynamic libraries from Python (.dll "dynamically linked library" files on Windows, or .so "shared object" library files on Unix or Linux) using the ctypes module, see my other answer here.
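For a flavor of the general ctypes technique, here is a minimal sketch of my own for Linux (not the actual GS_timing.py code); it assumes the POSIX clock_gettime() API in the system C library:

```python
import ctypes
import ctypes.util

# Mirror of the C `struct timespec` from <time.h>
class timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long),
                ("tv_nsec", ctypes.c_long)]

# Load the system C library and define the clock ID from the Linux headers
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
CLOCK_MONOTONIC = 1

def monotonic_ns():
    """Return a monotonic timestamp in ns by calling C's clock_gettime()."""
    ts = timespec()
    if libc.clock_gettime(CLOCK_MONOTONIC, ctypes.byref(ts)) != 0:
        raise OSError(ctypes.get_errno())
    return ts.tv_sec * 1_000_000_000 + ts.tv_nsec

print(monotonic_ns())
```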
References
- My Q&A: How can I get millisecond and microsecond-resolution timestamps in Python?
- My answer: How to iterate over Pandas DataFrames without iterating
- The time module documentation: https://docs.python.org/3/library/time.html
- The time.perf_counter() documentation and PEP 564 provide a wealth of information about timing performance on a wide variety of operating systems, including tables for resolution, etc.