This is a follow-up question to this. User Will suggested using a queue, and I tried to implement that solution below. The solution works just fine with j=1000, but it hangs as I try to scale to larger numbers. I am stuck here and cannot determine why it hangs. Any suggestions would be appreciated. Also, the code is starting to get ugly as I keep messing with it; I apologize for all the nested functions.
def run4(j):
    """ a multicore approach using queues """
    from multiprocessing import Process, Queue, cpu_count
    import os

    def bazinga(uncrunched_queue, crunched_queue):
        """ Pulls the next item off the queue, generates its Collatz chain
        length, and puts the (number, length) pair on the result queue """
        num = uncrunched_queue.get()
        while num != 'STOP':    # signal that there are no more numbers
            length = len(generateChain(num, []))
            crunched_queue.put([num, length])
            num = uncrunched_queue.get()

    def consumer(crunched_queue):
        """ A process to pull data off the queue and evaluate it """
        maxChain = 0
        biggest = 0
        while not crunched_queue.empty():
            a, b = crunched_queue.get()
            if b > maxChain:
                biggest = a
                maxChain = b
        print('%d has a chain of length %d' % (biggest, maxChain))

    uncrunched_queue = Queue()
    crunched_queue = Queue()
    numProcs = cpu_count()

    for i in range(1, j):       # load up the queue with our numbers
        uncrunched_queue.put(i)

    for i in range(numProcs):   # put sufficient stops at the end of the queue
        uncrunched_queue.put('STOP')

    ps = []
    for i in range(numProcs):
        p = Process(target=bazinga, args=(uncrunched_queue, crunched_queue))
        p.start()
        ps.append(p)

    p = Process(target=consumer, args=(crunched_queue, ))
    p.start()
    ps.append(p)

    for p in ps:
        p.join()
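For comparison, here is a minimal, self-contained sketch of the same producer/consumer pattern. It is not the code above: `collatz_length` is a stand-in for the `generateChain` helper from the earlier question, the consumer waits for one 'STOP' sentinel per worker instead of polling `empty()` (which can return True before the workers have produced anything), and the result comes back through a third queue that the main process drains before joining.

```python
# A sketch of a sentinel-terminated queue pipeline; names here
# (collatz_length, worker, consumer, run) are illustrative, not the original's.
from multiprocessing import Process, Queue, cpu_count

def collatz_length(n):
    """Length of the Collatz chain starting at n, counting n and the final 1."""
    length = 1
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        length += 1
    return length

def worker(todo, done):
    for num in iter(todo.get, 'STOP'):   # run until the sentinel appears
        done.put((num, collatz_length(num)))
    done.put('STOP')                     # tell the consumer this worker is done

def consumer(done, result, nworkers):
    biggest, max_chain = 0, 0
    stops = 0
    while stops < nworkers:              # expect one sentinel per worker
        item = done.get()
        if item == 'STOP':
            stops += 1
        else:
            num, length = item
            if length > max_chain:
                biggest, max_chain = num, length
    result.put((biggest, max_chain))

def run(j):
    todo, done, result = Queue(), Queue(), Queue()
    nprocs = cpu_count()
    for i in range(1, j):                # load up the work queue
        todo.put(i)
    for _ in range(nprocs):              # one stop per worker
        todo.put('STOP')
    procs = [Process(target=worker, args=(todo, done)) for _ in range(nprocs)]
    procs.append(Process(target=consumer, args=(done, result, nprocs)))
    for p in procs:
        p.start()
    answer = result.get()                # drain the result before joining
    for p in procs:
        p.join()
    return answer

if __name__ == '__main__':
    print('%d has a chain of length %d' % run(1000))
```

The design point of the sketch is that every queue is drained by someone before any `join()`, and termination is signalled explicitly with sentinels rather than inferred from `empty()`.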