
So basically, what I'm trying to do is make the Parallel.ForEach repeat a log if an error is caught on the thread handling it.

Parallel.ForEach(concurrentLogs, parallelOptions, log =>
{
    try
    {
        // Do work
        Console.WriteLine(log);
    }
    catch (Exception ex)
    {
        concurrentLogs.Enqueue(log); // repeat this log
    }
});

Because when I'm debugging it, if a thread catches an error (e.g. an IOException), it won't repeat the same log anymore.

What could be a possible approach to this?

4 Comments
  • It depends on the type of concurrentLogs. If it is a BlockingCollection you need to use GetConsumingEnumerable(). Commented Nov 30, 2019 at 19:14
  • Do you want to repeat it forever until it succeeds? Commented Nov 30, 2019 at 19:15
  • Hello @ScottChamberlain, the concurrentLogs is a ConcurrentQueue. Commented Nov 30, 2019 at 19:17
  • Hi @FarhadJabiyev, yes you are right! Commented Nov 30, 2019 at 19:17

2 Answers


You are enumerating a ConcurrentQueue, whose enumerator is a snapshot of the collection and will not reflect the items you Enqueue later.

A quick solution would be to simply retry within the "foreach":

Parallel.ForEach(concurrentLogs, parallelOptions, log =>
{
    void DoWork(string item)
    {
        // Do work
        Console.WriteLine(item);
    }

    try
    {
        DoWork(log);
    }
    catch (Exception ex)
    {
        // or loop and keep count
        DoWork(log);
    }
});

Aside: as noted in the comments, this isn't the best way to handle retrying; you need to decide on a strategy. Polly is great for this sort of thing.
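For illustration, a minimal sketch of what a Polly-based retry could look like here (assuming the Polly NuGet package and its synchronous v7-style API; concurrentLogs, parallelOptions, and the Console.WriteLine placeholder are taken from the question):

using System;
using System.Threading.Tasks;
using Polly;

// Build the policy once, outside the loop:
// retry up to 3 times, waiting a little longer before each attempt.
var retryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetry(3, attempt => TimeSpan.FromSeconds(attempt));

Parallel.ForEach(concurrentLogs, parallelOptions, log =>
{
    // Execute re-runs the delegate according to the policy;
    // after the final failed attempt the exception propagates.
    retryPolicy.Execute(() =>
    {
        // Do work
        Console.WriteLine(log);
    });
});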

However, this suggests you don't want a ConcurrentQueue, or that you aren't using it effectively. You might want to look at a BlockingCollection, Channel, or ActionBlock (TPL Dataflow).
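A rough sketch of the BlockingCollection route, under my own assumptions about how it would be wired up (this is not the asker's actual code): GetConsumingEnumerable() keeps yielding items as they are added, so a failed log that is re-added really does get processed again, unlike the snapshot enumeration of a ConcurrentQueue.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var logs = new BlockingCollection<string>();
// ...producers call logs.Add(...) and, when finished, logs.CompleteAdding().
// Note: re-adding a failed item after CompleteAdding would throw, so a real
// implementation still needs a completion/retry-limit strategy.

var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = 4 };

// NoBuffering stops Parallel.ForEach from greedily buffering items
// it hasn't processed yet.
var partitioner = Partitioner.Create(
    logs.GetConsumingEnumerable(),
    EnumerablePartitionerOptions.NoBuffering);

Parallel.ForEach(partitioner, parallelOptions, log =>
{
    try
    {
        // Do work
        Console.WriteLine(log);
    }
    catch (Exception)
    {
        logs.Add(log); // unlike ConcurrentQueue, this re-added item WILL be consumed
    }
});

NoBuffering matters here because the default chunk partitioner would otherwise pull several items out of the collection ahead of time and hold them locally even when no thread is ready to process them.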


7 Comments

What if it fails again in the catch block?
Then you'd write it as a loop, keep count, or whatever strategy you want. I'm here to answer the question, not write production-ready code 🙂
@Eldar "if it fails again..." is kind of beside the point. The OP can simply add a counter and bail out after max retries. The key issue is that the design is probably "flawed"; the OP ought to consider alternatives. Stuart suggested a couple of good choices :)
@AnnaP. +1 Also, I can recommend using Polly for retry logic. Install it via NuGet, then declare the policy outside of the loop: var retryPolicy = Policy.Handle<Exception>().RetryForever(); and use it inside the loop: retryPolicy.Execute(() => DoWork(log));
Thanks a bunch for the quick reply @Stuart :)
Parallel.ForEach(concurrentLogs, parallelOptions, log =>
{
    bool success;
    do
    {
        success = true;
        try
        {
            // Do work
            Console.WriteLine(log);
        }
        catch (Exception ex)
        {
            success = false;
        }
    } while (!success);
});
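A bounded variant of the same idea (just a sketch, not the answerer's code; maxRetries is a hypothetical cap) that stops retrying after a fixed number of failures instead of looping forever:

Parallel.ForEach(concurrentLogs, parallelOptions, log =>
{
    const int maxRetries = 3; // arbitrary cap, pick what fits your scenario

    for (int attempt = 1; attempt <= maxRetries; attempt++)
    {
        try
        {
            // Do work
            Console.WriteLine(log);
            break; // success, stop retrying
        }
        catch (Exception)
        {
            if (attempt == maxRetries)
                throw; // or log the failure and give up
        }
    }
});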

2 Comments

@Eldar What's wrong with the memory though? You just repeat the operation. It's probably missing a retry policy (like "how long do I need to wait between attempts", "how do I report errors", and "after how many failures do I need to stop trying"), but in general it looks like a correct way to achieve this.
No, the first version of your code was enqueueing another log every time it failed while trying to handle that log. Once the while loop completed, the log would already be processed, so the logs enqueued in the catch block would be redundant. This version is OK, but again, as you stated, the while loop can be dangerous, and I like this answer more. You should add your comments' content into your answer.
