Bash version 4 has the coproc command that allows this to be done in pure bash, without named pipes:
    coproc cmd1
    eval "exec cmd2 <&${COPROC[0]} >&${COPROC[1]}"

(The eval is needed because a redirection written with a variable expansion, such as <&${COPROC[0]}, was rejected as illegal by my version of bash, 4.2.25; the later examples use the direct form.)

From the bash man page about the coproc command:
    coproc [NAME] command [redirections]
This creates a coprocess named NAME. If NAME is not supplied, the default name is COPROC.
The standard output of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[0]. The standard input of command is connected via a pipe to a file descriptor in the executing shell, and that file descriptor is assigned to NAME[1].
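To make the two descriptors concrete, here is a minimal sketch of my own (not part of the examples below) that talks to a single coprocess; plain cat is used because it copies lines through without stdio buffering:

    #!/bin/bash
    # run cat as a coprocess; bash puts the two pipe fds into the COPROC array
    coproc cat

    echo "reading cat's stdout from fd ${COPROC[0]}, writing to its stdin via fd ${COPROC[1]}" >&2

    echo "hello coproc" >&${COPROC[1]}   # this line goes to cat's stdin
    read -r reply <&${COPROC[0]}         # and comes straight back from cat's stdout
    echo "got back: ${reply}"

    kill "${COPROC_PID}"                 # done with the coprocess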
Start with a version that is easy to understand (but does not work): start the entire pipeline under coproc and use an extra cat for the plumbing.
    coproc { cmd1 | cmd2; }
    cat <&${COPROC[0]} >&${COPROC[1]}

Buffering is obviously something that needs to be taken care of, and plain cat here will likely break the loop because of it. So use stdbuf to disable buffering in cat:
    # this is the same as above
    coproc { cmd1 | cmd2; }
    # but replace the last line of the sample above with:
    stdbuf -i0 -o0 cat <&${COPROC[0]} >&${COPROC[1]}
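The same concern applies to the commands inside the pipeline: ordinary stdio filters switch to block buffering when writing to a pipe, and then the loop stalls. A sketch of how that could look, with stdbuf applied to the stages as well (grep and sed are just stand-in filters of my choosing, and nothing seeds this particular loop, it only shows the plumbing):

    # make every stage line-buffered, and cat completely unbuffered
    coproc { stdbuf -oL grep 'pattern' | stdbuf -oL sed 's/foo/bar/'; }
    stdbuf -i0 -o0 cat <&${COPROC[0]} >&${COPROC[1]}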
Better still, do away with cat entirely. Break the pipeline up into two steps: run the first section under coproc and connect the second section to the first:

    coproc cmd1
    cmd2 <&${COPROC[0]} >&${COPROC[1]}

This still assumes that cmd1 and cmd2 handle buffering correctly for the loop to work.
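As a self-contained illustration of that two-step pattern, here is a toy example of my own (not from the answer itself): the coprocess plays the role of cmd1 and the while loop plays cmd2; both use only bash built-ins, so buffering is not an issue:

    #!/bin/bash
    # cmd1: seed the loop with one value, then increment whatever comes back in
    coproc {
        echo 0
        while read -r n; do
            echo $(( n + 1 ))
        done
    }

    # cmd2: show each value on stderr and feed it back to the coprocess
    count=0
    while read -r n; do
        echo "round trip ${count}: value ${n}" >&2
        echo "${n}"
        (( ++count == 5 )) && break    # stop the demo after 5 round trips
    done <&${COPROC[0]} >&${COPROC[1]}

Each number travels shell -> coprocess -> shell, getting incremented once per pass, which is exactly the kind of IO loop the snippets above describe.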
Some other shells can do coproc as well.

That is all of the answer. Below is just some exploration and tests.
Here is an example that chains three commands rather than two, only to make it a little more interesting.
    # start the pipeline
    coproc { cmd1 | cmd2 | cmd3; }
    # reconnect STDOUT of `cmd3` to STDIN of `cmd1`
    stdbuf -i0 -o0 /bin/cat <&${COPROC[0]} >&${COPROC[1]}

Now get rid of cat and stdbuf and stay with pure bash. Break it up into two parts: launch the first pipeline under coproc, then launch the second part (either a single command or a pipeline), reconnecting it to the first:
    coproc { cmd1 | cmd2; }
    cmd3 <&${COPROC[0]} >&${COPROC[1]}

Proof of concept
File ./prog - a worker program participating in the looped IO. It is just a dummy prog that consumes, tags and re-prints lines. It uses subshells to avoid buffering problems, which may be overkill, but that is not the point here.
    #!/bin/bash
    # Only the instance started with param "1" produces the initial line of output
    let c=0
    sleep 2
    [ "$1" == "1" ] && ( echo start )
    while : ; do
        read line
        echo "$1:${c} ${line}" 1>&2
        sleep 2
        ( echo "$1:${c} ${line}" )
        let c++
        [ $c -eq 3 ] && exit
    done

File ./start_io_loop_with_cat - a demo launcher, the version using bash, cat and stdbuf:
    #!/bin/bash
    # start all 3 commands as one pipeline
    coproc { ./prog 1 \
        | ./prog 2 \
        | ./prog 3
    }
    # start cat without buffering to close the IO loop
    stdbuf -i0 -o0 /bin/cat <&${COPROC[0]} >&${COPROC[1]}

File ./start_io_loop_pure_bash - another demo launcher, this one using pure bash only. A real prog would have to deal with buffering internally anyway to avoid blocking; here ./prog does that itself with its subshells:
    #!/bin/bash
    # start the first 2 of the 3 commands in the pipeline
    coproc { ./prog 1 \
        | ./prog 2
    }
    # start the 3rd command, connecting its IO to 1 and 2 to close the IO loop
    ./prog 3 <&${COPROC[0]} >&${COPROC[1]}

Sample run:

    ~/iolooptest$ ./start_io_loop_pure_bash
    2:0 start
    3:0 2:0 start
    1:0 3:0 2:0 start
    2:1 1:0 3:0 2:0 start
    3:1 2:1 1:0 3:0 2:0 start
    1:1 3:1 2:1 1:0 3:0 2:0 start
    2:2 1:1 3:1 2:1 1:0 3:0 2:0 start
    3:2 2:2 1:1 3:1 2:1 1:0 3:0 2:0 start
    1:2 3:2 2:2 1:1 3:1 2:1 1:0 3:0 2:0 start
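A final side note based on the man page quote at the top: give the coprocess a name and the file descriptors end up in NAME[0]/NAME[1] instead of COPROC[0]/COPROC[1]. A hypothetical variant of the pure bash launcher (the name LOOP is my own choice):

    #!/bin/bash
    # same IO loop as start_io_loop_pure_bash, but with a named coprocess
    coproc LOOP { ./prog 1 \
        | ./prog 2
    }
    # the fds now live in the LOOP array (and the pid in LOOP_PID)
    ./prog 3 <&${LOOP[0]} >&${LOOP[1]}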