I have a bash script similar to:
```
NUM_PROCS=$1
NUM_ITERS=$2

for ((i=0; i<$NUM_ITERS; i++)); do
  python foo.py $i arg2 &
done
```

What's the most straightforward way to limit the number of parallel processes to NUM_PROCS? I'm looking for a solution that doesn't require packages/installations/modules (like GNU Parallel) if possible.
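One common approach, sketched below under the assumption that you're on bash 4.3 or newer (for `wait -n`), is to check the number of running background jobs before launching each new one and block until a slot frees up. `foo.py` is the placeholder workload from the question:

```shell
#!/usr/bin/env bash
# Sketch: cap the number of concurrent background jobs using only
# bash job control. Requires bash >= 4.3 for `wait -n`.
num_procs=$1
num_iters=$2

for ((i = 0; i < num_iters; i++)); do
  # If we've hit the cap, block until any one background job exits.
  while (( $(jobs -rp | wc -l) >= num_procs )); do
    wait -n
  done
  python foo.py "$i" arg2 &   # the workload from the question
done
wait   # let the final batch drain
```

`jobs -rp` lists the PIDs of currently running background jobs, so counting its lines gives the live job count; `wait -n` returns as soon as any single job finishes rather than waiting for all of them.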
When I tried Charles Duffy's latest approach, I got the following output from `bash -x`:

```
+ python run.py args 1
+ python run.py ... 3
+ python run.py ... 4
+ python run.py ... 2
+ read -r line
+ python run.py ... 1
+ read -r line
+ python run.py ... 4
+ read -r line
+ python run.py ... 2
+ read -r line
+ python run.py ... 3
+ read -r line
+ python run.py ... 0
+ read -r line
...
```

continuing with other numbers between 0 and 5, until too many processes were started for the system to handle and the bash script was shut down.
`seq` isn't a standardized command -- not part of bash, and not part of POSIX, so there's no reason to believe it'll be present or behave a particular way on any given operating system. And re: case for shell variables, keeping in mind that they share a namespace with environment variables, see the fourth paragraph of pubs.opengroup.org/onlinepubs/009695399/basedefs/… for POSIX conventions. `wait -n` was introduced in bash 4.3.
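Since `wait -n` only arrived in bash 4.3, a fallback for older shells is to run the jobs in fixed-size batches with a plain `wait`. This is a rougher sketch (a slow job holds up its whole batch, so it's less efficient than `wait -n`), again using the question's `foo.py` as the placeholder workload:

```shell
#!/usr/bin/env bash
# Sketch: batch-based fallback for bash < 4.3 (no `wait -n`).
# At most num_procs jobs run at once, but each batch must fully
# finish before the next one starts.
num_procs=$1
num_iters=$2

for ((i = 0; i < num_iters; i++)); do
  python foo.py "$i" arg2 &   # the workload from the question
  # Every num_procs launches, wait for the whole batch to finish.
  if (( (i + 1) % num_procs == 0 )); then
    wait
  fi
done
wait   # catch any partial final batch
```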