
I have some expensive (slow) operations that could run in parallel

SERVERS=$(cmd1) ROUTERS=$(cmd2) NETWORKS=$(cmd3) KEYPAIRS=$(cmd4)

I want to speed this up by running them in parallel, but without ugly hacks like redirecting their output to files and reading those files back after execution.

Is there a nice way to parallelize this in bash?

2 Answers


One thought that comes to mind straight away is temp files. Run your jobs in the background, redirect each to a file, wait for the jobs to complete, and then read the files.

( job1 > /tmp/job1.out 2> /dev/null ; echo $? > /tmp/job1.ret ) &
( job2 > /tmp/job2.out 2> /dev/null ; echo $? > /tmp/job2.ret ) &
( job3 > /tmp/job3.out 2> /dev/null ; echo $? > /tmp/job3.ret ) &
wait
if [[ $(cat /tmp/job1.ret) -eq 0 ]] ; then job1_out=$(cat /tmp/job1.out) ; fi
if [[ $(cat /tmp/job2.ret) -eq 0 ]] ; then job2_out=$(cat /tmp/job2.out) ; fi
if [[ $(cat /tmp/job3.ret) -eq 0 ]] ; then job3_out=$(cat /tmp/job3.out) ; fi
# the rest
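A slightly safer variant of the same idea (a sketch; `job1`-`job3` here are hypothetical stand-ins for your real commands) uses mktemp for unique paths and a trap for automatic cleanup, so stale files never collide between runs:

```shell
#!/usr/bin/env bash

# Stand-ins for the real slow jobs (hypothetical)
job1() { echo one; }
job2() { echo two; }
job3() { echo three; }

# Unique temp directory, removed automatically when the script exits
dir=$(mktemp -d) || exit 1
trap 'rm -rf "$dir"' EXIT

# Run every job in the background, capturing stdout and exit status
for j in job1 job2 job3; do
  ( "$j" > "$dir/$j.out" 2> /dev/null; echo $? > "$dir/$j.ret" ) &
done
wait

# Only keep the output of jobs that succeeded
if [[ $(cat "$dir/job1.ret") -eq 0 ]]; then job1_out=$(cat "$dir/job1.out"); fi
if [[ $(cat "$dir/job2.ret") -eq 0 ]]; then job2_out=$(cat "$dir/job2.out"); fi
if [[ $(cat "$dir/job3.ret") -eq 0 ]]; then job3_out=$(cat "$dir/job3.out"); fi
```

The trap fires on any exit path, so the temp files are cleaned up even if a job fails.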

1 Comment

Yep, but this creates temp files, which is not the best experience. I am still searching for alternatives without temp files.
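If temp files are the objection, one bash-only sketch is to start all four commands at once via process substitution and read each one afterwards; a read blocks only until its command finishes, so they run concurrently. (`cmd1`-`cmd4` below are hypothetical stand-ins for the real slow commands.)

```shell
#!/usr/bin/env bash

# Stand-ins for the four slow commands (hypothetical)
cmd1() { sleep 1; echo servers; }
cmd2() { sleep 1; echo routers; }
cmd3() { sleep 1; echo networks; }
cmd4() { sleep 1; echo keypairs; }

# Start all four at once; each process substitution runs in the background
exec 3< <(cmd1) 4< <(cmd2) 5< <(cmd3) 6< <(cmd4)

# Each read blocks until its command finishes, so total time is ~max, not the sum
SERVERS=$(cat <&3)
ROUTERS=$(cat <&4)
NETWORKS=$(cat <&5)
KEYPAIRS=$(cat <&6)

# Close the descriptors
exec 3<&- 4<&- 5<&- 6<&-
```

One caveat of this approach: unlike the temp-file version, the exit statuses of the substituted processes are not captured, so add your own sentinel to the output if you need to detect failures.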

I think parset from GNU Parallel is what you are looking for. In your case it would look like this:

parset "SERVERS ROUTERS NETWORKS KEYPAIRS" ::: cmd1 cmd2 cmd3 cmd4

1 Comment

This may work, but I am not willing to rely on tools that are not available by default on Linux or macOS, as I want to keep the solution free of dependencies.
