
I have a simple decorator to track the runtime of a function call:

    import time

    def timed(f):
        def caller(*args):
            start = time.time()
            res = f(*args)
            end = time.time()
            return res, end - start
        return caller

This can be used as follows, and returns a tuple of the function result and the execution time.

    @timed
    def test(n):
        for _ in range(n):
            pass
        return 0

    print(test(900))  # prints (0, 2.69e-05)

Simple enough. But now I want to apply this to recursive functions. Applying the above wrapper to a recursive function results in nested tuples with the times of each recursive call, as is expected.

    @timed
    def rec(n):
        if n:
            return rec(n - 1)
        else:
            return 0

    print(rec(3))  # Prints ((((0, 1.90e-06), 8.10e-06), 1.28e-05), 1.90e-05)

What's an elegant way to write the decorator so that it handles recursion properly? Obviously, you could wrap the call in a timed function:

    @timed
    def wrapper():
        return rec(3)

This will give a tuple of the result and the time, but I want all of it to be handled by the decorator so that the caller does not need to worry about defining a new function for every call. Ideas?

  • You probably can achieve what you want by using sys._getframe, but that is an implementation detail of CPython, so it's not really part of the abstract Python language. Also, it's probably going to be quite slow.
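
For reference, a minimal sketch of that sys._getframe idea (an assumption about what the commenter had in mind, and CPython-specific): the wrapper walks up the caller's frames and skips the timing if it finds itself already on the stack.

    import sys
    import time

    def timed(f):
        def caller(*args):
            # CPython-specific: inspect caller frames; if this wrapper is
            # already active somewhere on the stack, we are inside a
            # recursive call, so just forward to f without timing.
            frame = sys._getframe(1)
            while frame is not None:
                if frame.f_code is caller.__code__:
                    return f(*args)
                frame = frame.f_back
            start = time.time()
            res = f(*args)
            end = time.time()
            return res, end - start
        return caller

Note that every wrapper produced by timed shares the same code object, so one timed function calling another would also be treated as recursion here, and the stack walk itself adds the overhead the comment warns about.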

5 Answers


The problem here isn't really the decorator. The problem is that the recursive calls inside rec need rec to be a function that just returns the value, while you want rec to be a function that returns the value along with the time. There's no clean way to reconcile that with a single rec function.

The cleanest option is to stop requiring rec to be two things at once. Instead of using decorator notation, assign timed(rec) to a different name:

    def rec(n):
        ...

    timed_rec = timed(rec)

If you don't want two names, then rec needs to be written to understand the actual value that the decorated rec will return. For example,

    @timed
    def rec(n):
        if n:
            val, runtime = rec(n-1)
            return val
        else:
            return 0

1 Comment

Yeah, I guess since the point of the decorator is to be called every time the function is called, it doesn't really fit with recursion by nature.

I prefer the other answers so far (particularly user2357112's answer), but you can also make a class-based decorator that detects whether the function has been activated, and if so, bypasses the timing:

    import time

    class fancy_timed(object):
        def __init__(self, f):
            self.f = f
            self.active = False

        def __call__(self, *args):
            if self.active:
                return self.f(*args)
            start = time.time()
            self.active = True
            res = self.f(*args)
            end = time.time()
            self.active = False
            return res, end - start

    @fancy_timed
    def rec(n):
        if n:
            time.sleep(0.01)
            return rec(n - 1)
        else:
            return 0

    print(rec(3))

(class written with (object) so that this is compatible with py2k and py3k).

Note that to really work properly, the outermost call should use try and finally. Here's the fancied up fancy version of __call__:

    def __call__(self, *args):
        if self.active:
            return self.f(*args)
        try:
            start = time.time()
            self.active = True
            res = self.f(*args)
            end = time.time()
            return res, end - start
        finally:
            self.active = False

2 Comments

This has thread-safety and reentrancy issues, though. For example, a call to rec in one thread can cause a call in another thread to omit the timing information.
@user2357112: agreed; that's why I prefer your answer. :-) (It's possible to get the current thread information, so that this cooperates with the threading module, but then you hit problems with green threads, etc.)
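
A rough sketch of the thread-aware variant alluded to in that comment, assuming a threading.local flag is acceptable (green threads would still need separate handling):

    import threading
    import time

    class fancy_timed(object):
        def __init__(self, f):
            self.f = f
            self._state = threading.local()  # per-thread "active" flag

        def __call__(self, *args):
            # Only the outermost call in each thread performs the timing.
            if getattr(self._state, "active", False):
                return self.f(*args)
            self._state.active = True
            try:
                start = time.time()
                res = self.f(*args)
                return res, time.time() - start
            finally:
                self._state.active = False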

You could structure your timer in a different way by *ahem* abusing contextmanager and function attributes a little...

    from contextlib import contextmanager
    import time

    @contextmanager
    def timed(func):
        timed.start = time.time()
        try:
            yield func
        finally:
            timed.duration = time.time() - timed.start

    def test(n):
        for _ in range(n):
            pass
        return n

    def rec(n):
        if n:
            time.sleep(0.05)  # extra delay to notice the difference
            return rec(n - 1)
        else:
            return n

    with timed(rec) as r:
        print(r(10))
        print(r(20))
    print(timed.duration)

    with timed(test) as t:
        print(t(555555))
        print(t(666666))
    print(timed.duration)

Results:

    # recursive
    0
    0
    1.5130000114440918

    # non-recursive
    555555
    666666
    0.053999900817871094

If this is deemed a bad hack I'll gladly accept your criticism.

2 Comments

Abusing the attributes of the timed function object to store the timing information causes problems if you try to time multiple things at once, though. Also, timed doesn't actually need func, and isn't really timing the execution of func; it's timing the contents of the with block.
In short, bad answer? I'm just learning some of these toolsets recently so I thought this could work, but figured there must be something I'm missing. Wasn't really thinking of multiple timers though so that's a good point.
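
For what it's worth, one way around the shared-attribute problem raised in the first comment is to yield a fresh state object per with block instead of stashing the timing on the timed function itself. A hypothetical sketch (timed_block and _Timing are made-up names, not from the answer above):

    from contextlib import contextmanager
    import time

    class _Timing:
        duration = None

    @contextmanager
    def timed_block():
        # Each use gets its own state object, so nested or concurrent
        # timers no longer clobber each other.
        t = _Timing()
        start = time.time()
        try:
            yield t
        finally:
            t.duration = time.time() - start

    # usage
    with timed_block() as outer:
        with timed_block() as inner:
            sum(range(10**6))
    print(inner.duration, outer.duration)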

Although it is not an overall solution to the problem of integrating recursion with decorators, for the purpose of timing alone I have verified that the last element of the nested tuple is the overall run time, since it is the time measured by the outermost recursive call. Thus if you had

    @timed
    def rec():
        ...

to get the overall runtime given the original function definitions you could simply do

rec()[1] 

Getting the result of the call, on the other hand, would then require recursing through the nested tuple:

    def get(tup):
        if isinstance(tup, tuple):
            return get(tup[0])
        else:
            return tup

This may be more complication than it's worth just to get the function's result.
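
For example, putting the two together (a hypothetical snippet that reuses the question's rec(3) and the get helper above, calling rec only once):

    res = rec(3)          # nested tuples, e.g. ((((0, ...), ...), ...), total)
    result = get(res)     # 0, the actual return value
    total_time = res[1]   # time of the outermost call, i.e. the overall runtime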



I encountered the same issue when trying to profile a simple quicksort implementation.

The main issue is that the decorator is executed on each function call, while we need something that keeps state so we can sum up all the calls at the end. Decorators alone are not the right tool for the job.

However, one idea is to abuse the fact that functions are objects and can have attributes. This is explored below with a simple decorator. Keep in mind that, by using the decorator syntax sugar (@), the function will keep accumulating its timings across calls.

    from typing import Any, Callable
    from time import perf_counter

    class timeit:
        def __init__(self, func: Callable) -> None:
            self.func = func
            self.timed = []

        def __call__(self, *args: Any, **kwds: Any) -> Any:
            start = perf_counter()
            res = self.func(*args, **kwds)
            end = perf_counter()
            self.timed.append(end - start)
            return res

    # usage
    @timeit
    def rec(n):
        ...

    if __name__ == "__main__":
        result = rec(4)  # rec result
        print(f"Took {sum(rec.timed):.2f} seconds")
        # Out: Took 3.39 seconds

        result = rec(4)  # rec result
        # timings between calls are accumulated
        print(f"Took {sum(rec.timed):.2f} seconds")
        # Out: Took 6.78 seconds

This brings us to a solution inspired by @r.ook: below is a simple context manager that stores each run's timing and prints the sum at the end (in __exit__). Note that, because each timing requires its own with statement, timings will not accumulate across different runs.

    from typing import Any, Callable
    from time import perf_counter

    class timeit:
        def __init__(self, func: Callable) -> None:
            self.func = func
            self.timed = []

        def __call__(self, *args: Any, **kwds: Any) -> Any:
            start = perf_counter()
            res = self.func(*args, **kwds)
            end = perf_counter()
            self.timed.append(end - start)
            return res

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_value, exc_traceback):
            # TODO: report `exc_*` if an exception gets raised
            print(f"Took {sum(self.timed):.2f} seconds")

    # usage
    def rec(n):
        ...

    if __name__ == "__main__":
        with timeit(rec) as f:
            result = f(4)  # rec result
        # Out: Took 3.39 seconds

