
In the code below, I compute the cube of the number 9999, once through a thread pool and once via a plain function call.

I time both approaches, and the plain call is far faster. I am running this on an 8th-gen Intel i7 with 16 GB of RAM, in a Python 2.7 terminal.

I am baffled by this. Maybe I am missing something. I hope this question is helpful for people in the future.

    import time
    from multiprocessing.pool import ThreadPool

    def cube():
        return 9999*9999*9999

    print "Start Execution Threading: "
    x = int(round(time.time() * 1000))
    pool = ThreadPool()
    for i in range(0, 100):
        result = pool.apply_async(cube, ())
        result = pool.apply_async(cube, ())
        result = pool.apply_async(cube, ())
        # print result.get()
    pool.close()
    pool.join()
    print "Stop Execution Threading: "
    y = int(round(time.time() * 1000))
    print y - x

    print "Start Execution Main: "
    x = int(round(time.time() * 1000))
    for i in range(0, 100):
        cube()
        cube()
        cube()
    print "Stop Execution Main: "
    y = int(round(time.time() * 1000))
    print y - x
%timeit 9999*9999*9999 reports 19.3 ns ± 3.18 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each). This is not a job for multiprocessing: the overhead of just spawning a pool makes the approach redundant. I'm not convinced this calculation would benefit from extra cores even in lower-level languages, but in Python you are physically spawning whole new Python processes, copying the namespace across to them, and there is nothing here that they can work on collaboratively across cores. Commented Nov 30, 2018 at 10:05

2 Answers


Using a pool means you start new threads, and that comes with considerable initialization overhead. As such, multi-threading only pays off, especially in Python, when you parallelize tasks which each take considerable time to execute (compared to that start-up cost) and which can be allowed to run asynchronously.

In your case, a simple multiplication is executed so quickly that threading will not pay off.
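To make the overhead concrete, here is a sketch of my own (not from the answer) that compares the cost of calling cube() directly against merely dispatching the same call through an already-created ThreadPool, so pool start-up is not even counted. The absolute numbers will vary by machine; the point is the ratio.

```python
import timeit
from multiprocessing.pool import ThreadPool

def cube():
    return 9999 * 9999 * 9999

# Cost of 10,000 direct calls.
direct = timeit.timeit(cube, number=10000)

# Cost of dispatching the same 10,000 calls through a pre-built pool.
pool = ThreadPool(4)

def via_pool():
    pool.apply_async(cube)

dispatched = timeit.timeit(via_pool, number=10000)
pool.close()
pool.join()

print("direct: %.4fs, via pool: %.4fs" % (direct, dispatched))
```

On a typical machine the dispatch alone costs one to two orders of magnitude more than the multiplication itself, before any thread has even run the function.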


3 Comments

Can you tell us, in general, how long a piece of code needs to run repeatedly before threading is warranted? As you pointed out, a multiplication is too cheap to justify threading. Would, say, reading 100 text files with 100 lines each warrant threading? Thanks again.
@PrabhakarShanmugam No CPU-bound task will be faster with threading; you need to make a distinction between threads and processes here. This is due to the Global Interpreter Lock.
@Prabhakar Shanmugam: With your programme you basically measured Python's start-up cost for the pool, so a task would need to take on the order of the difference you see before parallelizing pays off. But as roganjosh says: you need to start processes (not threads) if you want to gain computational speed, rather than parallelize I/O. The same library offers processes, too. roganjosh: you are definitely right there.

Because of from multiprocessing.pool import ThreadPool, you are using multi-threading, not multi-processing. CPython uses a Global Interpreter Lock (GIL) to prevent more than one thread from executing Python bytecode at the same time.

So, as your program is CPU-bound, you only add threading overhead with no benefit, because of the GIL. Multi-threading does make sense in Python for I/O-bound problems, because a thread can run while others are waiting for I/O completion.
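Here is a small sketch of my own illustrating the I/O-bound case, using time.sleep as a stand-in for a blocking network call or disk read (sleeping threads release the GIL, so they overlap):

```python
import time
from multiprocessing.pool import ThreadPool

def fake_io():
    time.sleep(0.1)  # pretend to wait on a socket or file

# Ten waits overlapped across ten threads.
start = time.time()
pool = ThreadPool(10)
for _ in range(10):
    pool.apply_async(fake_io)
pool.close()
pool.join()
threaded = time.time() - start

# The same ten waits, back to back.
start = time.time()
for _ in range(10):
    fake_io()
sequential = time.time() - start

print("threaded: %.2fs, sequential: %.2fs" % (threaded, sequential))
```

The threaded version finishes in roughly the time of a single wait, while the sequential version pays for all ten, which is exactly the situation where a ThreadPool earns its overhead.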

You could try true multiprocessing instead, since each Python process then has its own GIL, but I am still unsure of the gain, because communication between processes adds even more overhead...

