I saw a comment that led me to the question *Why does Python code run faster in a function?*.
I got to thinking and figured I would try it myself using the `timeit` module; however, I got very different results:
(note: `10**8` was changed to `10**7` to make things a little bit speedier to time)
```
>>> from timeit import repeat
>>> setup = """
def main():
    for i in xrange(10**7):
        pass
"""
>>> stmt = """
for i in xrange(10**7):
    pass
"""
>>> min(repeat('main()', setup, repeat=7, number=10))
1.4399558753975725
>>> min(repeat(stmt, repeat=7, number=10))
1.4410973942722194
>>> 1.4410973942722194 / 1.4399558753975725
1.000792745732109
```

- Did I use `timeit` correctly?
- Why are these results less than 0.1% different from each other, while the results from the other question were nearly 250% different?
- Does it only make a difference when using CPython-compiled versions of Python (like Cython)?
- Ultimately: is Python code really faster in a function, or does it just depend on how you time it?
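For context on the last question, the explanation usually given in the linked question is that CPython compiles a function's loop variable to fast local-slot bytecodes (`STORE_FAST`), while module-level code stores it in the globals dict (`STORE_NAME`). This difference can be checked deterministically with the `dis` module (a Python 3 sketch, assuming CPython, since the bytecode names are implementation details):

```python
import dis

def main():
    # Inside a function, `i` lives in a preallocated local slot.
    for i in range(10**7):
        pass

# Opcodes used by the function body.
func_ops = {ins.opname for ins in dis.get_instructions(main)}

# The same loop compiled as module-level ("exec") code.
module_code = compile("for i in range(10**7): pass", "<module>", "exec")
module_ops = {ins.opname for ins in dis.get_instructions(module_code)}

print("STORE_FAST" in func_ops)    # True: function writes i to a local slot
print("STORE_NAME" in module_ops)  # True: module-level code writes i to the globals dict
```

Whether that bytecode difference actually shows up in a benchmark depends on what the timed statement does, which may be why the measured gap here is so small.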