
I used multiprocessing in Python to run my code in parallel, like the following:

    result1 = pool.apply_async(set1, (Q, n))
    result2 = pool.apply_async(set2, (Q, n))

set1 and set2 are two independent functions, and this code runs inside a while loop.

Then I tested the running time. For a particular parameter, running the code sequentially takes 10 seconds, but running it in parallel takes only about 0.2 seconds. I used time.clock() to record the time. Why did the running time decrease so much? Intuitively, shouldn't the parallel time be somewhere between 5 and 10 seconds? I have no idea how to analyze this in my report... Can anyone help? Thanks

  • Do you call result1.get() eventually? Have you checked that both variants (sequential/parallel) produce the same result? You could use timeit.default_timer() instead of time.clock(). Or just call it from the command line: python -m timeit -s "from your_module import setup, run; setup()" "run()" Commented Dec 16, 2013 at 0:22
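The timer the comment recommends can be used like this. This is a minimal sketch; run_once is a hypothetical stand-in for one pass of the asker's loop. Unlike time.clock() on Unix, timeit.default_timer() measures wall-clock time, so it also counts time spent waiting on worker processes:

```python
import timeit

def run_once():
    # Hypothetical stand-in for one pass of the asker's while loop:
    # some CPU-bound busy work.
    sum(i * i for i in range(200_000))

start = timeit.default_timer()  # wall clock, not CPU time
run_once()
elapsed = timeit.default_timer() - start
print(f"elapsed: {elapsed:.4f}s")
```

Timing both the sequential and the parallel variant with the same wall-clock timer makes the two numbers directly comparable.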

1 Answer


To get a definitive answer, you need to show all the code and say which operating system you're using.

My guess: you're running on a Linux-y system, so that time.clock() returns CPU time (not wall-clock time). Then you run all the real work in new, distinct processes. The CPU time consumed by those doesn't show up in the main program's time.clock() results at all. Try using time.time() instead for a quick sanity check.


