This is a follow-up to this question. User Will suggested using a queue, and I tried to implement that solution below. The solution works just fine with j=1000, but it hangs as I try to scale to larger numbers. I am stuck and cannot determine why it hangs. Any suggestions would be appreciated. Also, the code is getting ugly as I keep messing with it; I apologize for all the nested functions.

def run4(j):
    """ a multicore approach using queues """
    from multiprocessing import Process, Queue, cpu_count
    import os

    def bazinga(uncrunched_queue, crunched_queue):
        """ Pulls the next item off the queue, generates its collatz length and
            puts the result on the crunched queue """
        num = uncrunched_queue.get()
        while num != 'STOP':    # signal that there are no more numbers
            length = len(generateChain(num, []))
            crunched_queue.put([num, length])
            num = uncrunched_queue.get()

    def consumer(crunched_queue):
        """ A process to pull data off the queue and evaluate it """
        maxChain = 0
        biggest = 0
        while not crunched_queue.empty():
            a, b = crunched_queue.get()
            if b > maxChain:
                biggest = a
                maxChain = b
        print('%d has a chain of length %d' % (biggest, maxChain))

    uncrunched_queue = Queue()
    crunched_queue = Queue()
    numProcs = cpu_count()

    for i in range(1, j):           # load up the queue with our numbers
        uncrunched_queue.put(i)
    for i in range(numProcs):       # put sufficient stops at the end of the queue
        uncrunched_queue.put('STOP')

    ps = []
    for i in range(numProcs):
        p = Process(target=bazinga, args=(uncrunched_queue, crunched_queue))
        p.start()
        ps.append(p)

    p = Process(target=consumer, args=(crunched_queue, ))
    p.start()
    ps.append(p)

    for p in ps:
        p.join()

1 Answer

You're putting 'STOP' poison pills into your uncrunched_queue (as you should), and having your producers shut down accordingly; on the other hand, your consumer only checks for emptiness of the crunched queue:

while not crunched_queue.empty(): 

(this working at all depends on a race condition, btw, which is not good)

When you start throwing non-trivial work units at your bazinga producers, they take longer. If all of them take long enough, your crunched_queue dries up, and your consumer dies. I think you may be misidentifying what's happening - your program doesn't "hang", it just stops outputting stuff because your consumer is dead.
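The race can be shown in isolation. This is a minimal sketch (the names slow_producer and the 0.5-second delay are invented for illustration): while a producer is still busy with its first work unit, the result queue is momentarily empty, so a while not q.empty() consumer loop would exit right away.

```python
from multiprocessing import Process, Queue
import time

def slow_producer(q):
    # simulate a non-trivial work unit before the first result appears
    time.sleep(0.5)
    q.put(42)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=slow_producer, args=(q,))
    p.start()
    # the producer is alive, but the queue is (momentarily) empty, so a
    # `while not q.empty()` consumer loop would exit immediately here
    print(q.empty())   # almost certainly True at this point
    p.join()
    print(q.get())     # the result arrives only after the check
```

In other words, Queue.empty() tells you the queue is empty *right now*, not that no more items are coming.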

You need to implement a smarter methodology for shutting down your consumer. Either look for n poison pills, where n is the number of producers (who accordingly each toss one in the crunched_queue when they shut down), or use something like a Semaphore that counts up for each live producer and down when one shuts down.
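The poison-pill count version might look something like this sketch (generateChain is replaced by a stand-in, since it lives in the original question): each producer drops one 'STOP' into the crunched queue as it exits, and the consumer blocks on get() until it has seen one per producer, so there is no emptiness race.

```python
from multiprocessing import Process, Queue, cpu_count

def producer(uncrunched_queue, crunched_queue):
    num = uncrunched_queue.get()
    while num != 'STOP':
        crunched_queue.put([num, num * 2])   # stand-in for the real chain length
        num = uncrunched_queue.get()
    crunched_queue.put('STOP')               # tell the consumer this producer is done

def consumer(crunched_queue, n_producers):
    stops_seen = 0
    maxChain, biggest = 0, 0
    while stops_seen < n_producers:          # exit only once every producer has signed off
        item = crunched_queue.get()          # blocking get -- no emptiness race
        if item == 'STOP':
            stops_seen += 1
            continue
        a, b = item
        if b > maxChain:
            biggest, maxChain = a, b
    print('%d has a chain of length %d' % (biggest, maxChain))

if __name__ == '__main__':
    uncrunched, crunched = Queue(), Queue()
    n = cpu_count()
    for i in range(1, 1000):
        uncrunched.put(i)
    for _ in range(n):
        uncrunched.put('STOP')
    procs = [Process(target=producer, args=(uncrunched, crunched)) for _ in range(n)]
    procs.append(Process(target=consumer, args=(crunched, n)))
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

The key change from the original consumer is the loop condition: it counts shutdown signals instead of polling empty(), so it keeps blocking through any lull while producers are crunching.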

1 Comment

I added the following to the end of bazinga(): crunched_queue.put(['STOP', 'STOP']), and I added a counter to consumer to stop after the appropriate number of stops has been received. This works just fine; however, the script does not appear to be using more than one processor (there is only one Python process in my Activity Monitor instead of numProcs+1), which makes runs incredibly slow when j>10000. The original version, where I broke the problem into chunks, split between the processors fine. Do you know why this one won't?
