That's what the terminate method is for, but you have to be careful how you use it. It kills the worker processes, but, perhaps surprisingly, it won't unblock a call that is already waiting on results, so you can only use it if you consume results incrementally with apply_async or imap_unordered calls. Closing the pool from another thread typically causes your calls into the pool to hang. In this example I set chunksize to 1, which is the preferred value when a single work item involves a significant amount of processing. You can set chunksize to something larger if each work item is cheap and you don't mind processing a few more items before you stop. But don't use the default... most items will be processed before anything makes it back to you.
import multiprocessing

def worker(item):
    print(item)
    return item

if __name__ == "__main__":
    with multiprocessing.Pool(4) as pool:
        # imap_unordered with chunksize=1 hands results back one at a
        # time, so we can stop as soon as we see the value we want.
        for i in pool.imap_unordered(worker, range(100), chunksize=1):
            if i == 10:
                print('terminate')
                pool.terminate()
                break
    print('done')
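If you'd rather go the apply_async route mentioned above, the same rule applies: never block on a result that isn't ready yet, or terminate can leave you hanging. Here is a rough sketch of that pattern; the polling loop, the 10 ms sleep, and the target value of 10 are my own illustration, not part of the original example.

import multiprocessing
import time

def worker(item):
    return item

if __name__ == "__main__":
    with multiprocessing.Pool(4) as pool:
        # apply_async never blocks the caller, so this thread stays
        # free to decide when to terminate.
        pending = [pool.apply_async(worker, (i,)) for i in range(100)]
        hit = False
        while pending and not hit:
            still_pending = []
            for r in pending:
                if not r.ready():
                    still_pending.append(r)
                elif r.get() == 10:  # get() is safe: ready() was True
                    print('terminate')
                    pool.terminate()
                    hit = True
                    break
            pending = still_pending
            time.sleep(0.01)  # don't spin a hot loop while polling
    print('done')

Polling with ready() instead of calling get() on every result is what keeps you from ever blocking on a task whose worker was just killed.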
Does the pool keep running the function on elements after the first true output? If later evaluations have to wait for earlier ones to finish like that, it's going to put a pretty big damper on how much you can benefit from multiprocessing.
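To make the ordering behaviour concrete: imap hands results back in submission order, so a result that finishes early still waits behind slower earlier items, while imap_unordered yields each result as soon as its worker finishes. Below is a small sketch illustrating that difference; the sleep times and the slow_if_even name are made up purely to make the effect visible.

import multiprocessing
import time

def slow_if_even(item):
    # Even items take ten times longer, so their results finish late.
    time.sleep(1.0 if item % 2 == 0 else 0.1)
    return item

if __name__ == "__main__":
    with multiprocessing.Pool(4) as pool:
        # imap preserves submission order: you always see 0, 1, 2, ...
        # even though the odd items finished long before the even ones.
        print(list(pool.imap(slow_if_even, range(8), chunksize=1)))
        # imap_unordered yields in completion order: the odd items
        # typically come back first (exact order depends on scheduling).
        print(list(pool.imap_unordered(slow_if_even, range(8), chunksize=1)))

Either way the workers keep pulling new items until you terminate the pool, which is why the example above calls terminate as soon as it sees the value it wants; a small chunksize just limits how much extra work gets done before you stop.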