This is the most common pattern for computing a table of results:

Table[function[p], {p, parameters}]

(regardless of how it is implemented; it could just as well be a Map)
The problem with this is that if the calculation is interrupted before it's finished, the partial results will be lost.
We can do this in a safely interruptible way like so:

results = {};
Do[AppendTo[results, {p, function[p]}], {p, parameters}]

If this calculation is interrupted before it finishes, the intermediate results are still preserved. We can easily restart the calculation later, for only those parameter values for which function[] hasn't been run yet.
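The restart can be sketched like this (assuming results survived in the session, or was saved with Put and reloaded with Get; done is just an illustrative name):

```mathematica
(* values already processed; results holds {p, function[p]} pairs *)
done = First /@ results;

(* run the loop only for parameter values not computed yet *)
Do[AppendTo[results, {p, function[p]}], {p, Complement[parameters, done]}]
```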
Question: What is the best way to achieve this when running calculations in parallel?
Assume that function[] is expensive to calculate and that the calculation time may differ between parameter values. The parallel jobs must be submitted in a way that makes the best use of the CPUs. The result collection must not be shared between the parallel kernels, as it may be a very large variable (i.e. I don't want as many copies of it in memory as there are kernels).
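For concreteness, here is a sketch of the kind of solution I'm imagining, based on ParallelSubmit and WaitNext, so that only the master kernel accumulates the results and each result is collected as soon as it arrives (I'm not sure this is the best way to schedule the jobs):

```mathematica
LaunchKernels[];
DistributeDefinitions[function];

results = {};
(* submit one job per parameter value; idle kernels pick up pending jobs *)
jobs = ParallelSubmit[{#, function[#]}] & /@ parameters;

(* collect results one by one on the master kernel only;
   aborting this loop keeps everything collected so far in results *)
While[jobs =!= {},
  {res, id, jobs} = WaitNext[jobs];
  AppendTo[results, res]
]
```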
Motivation: I need this because I want to be able to make my calculations time-constrained. I want to run the function for as many values as possible during the night; in the morning I want to stop it, see what I got, and decide whether or not to continue.
Notes:
I'm sure people will mention that AppendTo is inefficient and best avoided in loops. I don't think that is an issue here (given that the calculations run on the subkernels and function[] is expensive); it was just the simplest way to illustrate the problem. There could be other ways to collect results, e.g. building a linked list and flattening it out later. Sow/Reap are not applicable here because they don't make it possible to interrupt the calculation and keep the partial results.
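The linked-list variant mentioned above could look like this (lst is an arbitrary inert head I'm using purely for illustration):

```mathematica
results = lst[];  (* empty linked list *)

Do[results = lst[results, {p, function[p]}], {p, parameters}];

(* flatten the nested lst[...] structure, then convert to an ordinary list;
   the {p, function[p]} pairs are untouched because only head lst is flattened *)
pairs = List @@ Flatten[results, Infinity, lst]
```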
About the long running time: the most expensive parts of the calculations I'm running are implemented in C++ and called through LibraryLink, but they still take a very long time to finish.
