
I am looking for a way to clean up the mess when my top-level script exits.

Especially if I want to use set -e, I wish the background process would die when the script exits.

3 Comments
  • @DanielKaplan Try e.g. p=$(bash -c 'sleep 2 >/dev/null & echo $!'); sleep 1; ps -f -p "$p" to see that sleep 2 command is still running after bash has exited. Commented Dec 23, 2022 at 12:57
  • @DanielKaplan The sleep 2 command is running in background as a separate process; its command ends with &. Commented Jan 1, 2023 at 12:22
  • @jarno Apologies. I was incorrect about my first comment so I've deleted my others. Commented Jan 1, 2023 at 23:29

16 Answers

Answer (score 300, bounty +100)

This works for me (collaborative effort with the commenters):

trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT 
  • kill -- -$$ sends a SIGTERM to the whole process group, thus also killing descendants. The <PGID> in kill -- -<PGID> is the process group ID, which often, but not necessarily, is the PID that the $$ variable contains. In the few cases where PGID and PID differ, you can obtain the PGID in your script with ps or similar tools.

    For example: pgid="$(ps -o pgid= $$ | grep -o '[0-9]*')" stores PGID in $pgid.

  • Specifying signal EXIT is useful when using set -e (more details here).
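A runnable sketch of the effect (the file name and sleep duration are arbitrary; assumes Linux's setsid -w and pgrep are available). The throwaway script is run in its own process group via setsid so that kill -- -$$ cannot reach the shell you launch it from:

```shell
#!/usr/bin/env bash
# Demo: a child started with & is killed when the script exits.
# setsid -w runs the demo as its own process-group leader and waits for it,
# so kill -- -$$ only affects the demo script and its descendants.
demo=$(mktemp)
cat > "$demo" <<'EOF'
trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT
sleep 31337 &            # stand-in for a long-running background job
echo "script body done"
EOF
setsid -w bash "$demo" || true   # the script SIGTERMs itself, so a non-zero status is expected
sleep 0.5
n=31337
if pgrep -f "sleep $n" >/dev/null; then echo "child survived"; else echo "child was killed"; fi
rm -f "$demo"
```

Without the trap line, the sleep would keep running after the script exits, as the question's comments demonstrate.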


20 Comments

Should work well on the whole, but the child processes may change process groups. On the other hand it doesn't require job control, and may also get some grandchild processes missed by other solutions.
I don't quite understand -$$. It evaluates to -<PID>, e.g. -1234. In the kill manpage / builtin manpage a leading dash specifies the signal to be sent. However -- probably blocks that, but then the leading dash is otherwise undocumented. Any help?
@EvanBenn: Check man 2 kill, which explains that when a PID is negative, the signal is sent to all processes in the process group with the provided ID (en.wikipedia.org/wiki/Process_group). It's confusing that this is not mentioned in man 1 kill or man bash, and could be considered a bug in the documentation.
Why do we have two nested traps here?
@MohammedNoureldin The inner trap - SIGTERM will reset the current script SIGTERM response to the default kill behavior. Then, when kill -- -$$ is executed, the current script will receive SIGTERM and exit normally.
Answer (score 244)

To clean up some mess, trap can be used. It can provide a list of stuff executed when a specific signal arrives:

trap "echo hello" SIGINT 

but can also be used to execute something if the shell exits:

trap "killall background" EXIT 

It's a builtin, so help trap will give you information (works with bash). If you only want to kill background jobs, you can do

trap 'kill $(jobs -p)' EXIT 

Watch out to use single quotes ('), to prevent the shell from expanding the $() immediately.
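The quoting difference can be seen with a minimal sketch (the /tmp file names are arbitrary): with double quotes the substitution happens when the trap is set, with single quotes only when the trap fires:

```shell
#!/usr/bin/env bash
# Double quotes: $msg is expanded immediately, when the trap command runs.
cat > /tmp/trap_dq.sh <<'EOF'
msg=early
trap "echo msg=$msg" EXIT   # $msg expands NOW, baking in "early"
msg=late
EOF
# Single quotes: expansion is deferred until the trap actually fires.
cat > /tmp/trap_sq.sh <<'EOF'
msg=early
trap 'echo msg=$msg' EXIT   # $msg is read at exit time
msg=late
EOF
bash /tmp/trap_dq.sh   # prints: msg=early
bash /tmp/trap_sq.sh   # prints: msg=late
```

The same applies to $(jobs -p): double quotes would freeze the job list at trap-setting time, when it is usually still empty.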

7 Comments

then how do you kill only the children? (or am I missing something obvious)
killall kills your children, but not you
kill $(jobs -p) doesn't work in dash, because it executes command substitution in a subshell (see Command Substitution in man dash)
is killall background supposed to be a placeholder? background is not in the man page...
kill $(jobs -p) is good, but prints usage info for 'kill' when there are no background jobs. IMHO, the best way for bash is jobs -p | xargs -r kill
Answer (score 162, bounty +100)

Update: https://stackoverflow.com/a/53714583/302079 improves this by adding exit status and a cleanup function.

trap "exit" INT TERM trap "kill 0" EXIT 

Why convert INT and TERM to exit? Because both should trigger the kill 0 without entering an infinite loop.

Why trigger kill 0 on EXIT? Because normal script exits should trigger kill 0, too.

Why kill 0? Because nested subshells need to be killed as well. This will take down the whole process tree.
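A sketch of the pattern in a full script (illustrative names; run under setsid here precisely because, as the comments below warn, kill 0 would otherwise also take down the invoking shell's process group):

```shell
#!/usr/bin/env bash
# Demo: "kill 0" takes down the whole process group, including a nested
# subshell's child. setsid -w isolates the demo in its own group so the
# invoking shell is safe; timeout is a belt-and-braces guard.
demo=$(mktemp)
cat > "$demo" <<'EOF'
trap "exit" INT TERM
trap "kill 0" EXIT
( sleep 31001 & wait ) &   # nested subshell with its own child
echo "working"
EOF
timeout 5 setsid -w bash "$demo" || true
sleep 0.5
n=31001
pgrep -f "sleep $n" >/dev/null && echo "subshell child survived" || echo "whole tree was killed"
rm -f "$demo"
```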

12 Comments

The only solution for my case on Debian.
Neither the answer by Johannes Schaub nor the one provided by tokland managed to kill the background processes my shell script started (on Debian). This solution worked. I don't know why this answer is not more upvoted. Could you expand more about what exactly kill 0 means/does?
This is awesome, but also kills my parent shell :-(
This solution is literally overkill. kill 0 (inside my script) ruined my whole X session! Perhaps in some cases kill 0 can be useful, but this does not change the fact that it is not general solution and should be avoided if possible unless there is very good reason to use it. It would be nice to add a warning that it may kill parent shell or even whole X session, not just background jobs of a script!
While this might be an interesting solution under some circumstances, as pointed out by @vidstige this will kill the whole process group which includes the launching process (i.e. the parent shell in most cases). Definitely not something you want when you are running a script via an IDE.
Answer (score 25)

The trap 'kill 0' SIGINT SIGTERM EXIT solution described in @tokland's answer is really nice, but latest Bash crashes with a segmentation fault when using it. That's because Bash, starting from v. 4.3, allows trap recursion, which becomes infinite in this case:

  1. shell process receives SIGINT or SIGTERM or EXIT;
  2. the signal gets trapped, executing kill 0, which sends SIGTERM to all processes in the group, including the shell itself;
  3. go to 1 :)

This can be worked around by manually de-registering the trap:

trap 'trap - SIGTERM && kill 0' SIGINT SIGTERM EXIT 

A fancier way, which allows printing the received signal and avoids "Terminated:" messages:

#!/usr/bin/env bash

trap_with_arg() { # from https://stackoverflow.com/a/2183063/804678
  local func="$1"; shift
  for sig in "$@"; do
    trap "$func $sig" "$sig"
  done
}

stop() {
  trap - SIGINT EXIT
  printf '\n%s\n' "received $1, killing child processes"
  kill -s SIGINT 0
}

trap_with_arg 'stop' EXIT SIGINT SIGTERM SIGHUP

{ i=0; while (( ++i )); do sleep 0.5 && echo "a: $i"; done; } &
{ i=0; while (( ++i )); do sleep 0.6 && echo "b: $i"; done; } &

while true; do read; done

UPD: added a minimal example; improved stop function to avoid de-trapping unnecessary signals and to hide "Terminated:" messages from the output. Thanks Trevor Boyd Smith for the suggestions!

12 Comments

in stop() you provide the first argument as the signal number but then you hardcode what signals are being deregistered. rather than hardcode the signals being deregistered you could use the first argument to deregister in the stop() function (doing so would potentially stop other recursive signals (other than the 3 hardcoded)).
@TrevorBoydSmith, this would not work as expected, I guess. For example, the shell might be killed with SIGINT, but kill 0 sends SIGTERM, which will get trapped once again. This will not produce infinite recursion, though, because SIGTERM will be de-trapped during the second stop call.
Probably, trap - $1 && kill -s $1 0 should work better. I'll test and update this answer. Thank you for the nice idea! :)
Nope, trap - $1 && kill -s $1 0 wouldn't work either, as we can't kill with EXIT. But it is really sufficient to de-trap TERM, because kill sends this signal by default.
@Sapphire_Brick done, now it should be harder to misinterpret the message.
Answer (score 23)

trap 'kill $(jobs -p)' EXIT

I would make only minor changes to Johannes' answer and use jobs -pr to limit the kill to running processes and add a few more signals to the list:

trap 'kill $(jobs -pr)' SIGINT SIGTERM EXIT 

2 Comments

Why not kill the stopped jobs, too? In Bash the EXIT trap will run on SIGINT and SIGTERM as well, so the trap would be called twice in case of such a signal.
This works!! The other answers kill the calling process as well as the subprocesses, which is an issue if a script is called directly by a desktop environment or is called by another program you want to keep alive. Thank you!
Answer (score 13)

To be on the safe side I find it better to define a cleanup function and call it from trap:

cleanup() {
    local pids=$(jobs -pr)
    [ -n "$pids" ] && kill $pids
}

trap "cleanup" INT QUIT TERM EXIT

[...]

or avoiding the function altogether:

trap '[ -n "$(jobs -pr)" ] && kill $(jobs -pr)' INT QUIT TERM EXIT

[...]

Why? Because by simply using trap 'kill $(jobs -pr)' [...] one assumes that there will be background jobs running when the trap condition is signalled. When there are no jobs one will see the following (or similar) message:

kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec] 

because jobs -pr is empty. I ended up in that 'trap' myself (pun intended).

2 Comments

This test case [ -n "$(jobs -pr)" ] doesn't work on my bash. I use GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu). The "kill: usage" message keeps popping up.
I suspect it has to do with the fact that jobs -pr doesn't return the PIDs of the children of the background processes. It doesn't tear the entire process tree down, only trims off the roots.
Answer (score 8)
function cleanup_func {
    sleep 0.5
    echo cleanup
}

trap "exit \$exit_code" INT TERM
trap "exit_code=\$?; cleanup_func; kill 0" EXIT

# exit 1
# exit 0

Like https://stackoverflow.com/a/22644006/10082476, but with added exit-code

2 Comments

Where does exit_code come from in INT TERM trap?
@jarno IIUC, the EXIT trap is always invoked when the script exits. It sets the global variable exit_code to the exit code of the last command executed. After running cleanup_func, it then sends SIGTERM "to every process in the process group of the calling process", including itself (see kill(2)). SIGTERM is trapped and exits with $exit_code. Now, if you press ^C during script execution, $exit_code in the INT trap will be empty and exit will be invoked with the exit code of the last command: 130 (see help exit). exit triggers the EXIT trap: start from the top.
Answer (score 3)

A nice version that works under Linux, BSD and MacOS X. It first sends SIGTERM, and if the process is still around after 10 seconds, kills it with SIGKILL.

KillJobs() {
    for job in $(jobs -p); do
        # Ask politely first; if SIGTERM was delivered, schedule a SIGKILL
        # fallback that is a harmless no-op if the job exits in time.
        kill -s SIGTERM "$job" > /dev/null 2>&1 \
            && (sleep 10 && kill -9 "$job" > /dev/null 2>&1 &)
    done
}

TrapQuit() {
    # Whatever you need to clean up here
    KillJobs
}

trap TrapQuit EXIT

Please note that jobs does not include grandchild processes.


Answer (score 2)

I made an adaptation of @tokland's answer combined with the knowledge from http://veithen.github.io/2014/11/16/sigterm-propagation.html when I noticed that trap doesn't trigger if I'm running a foreground process (not backgrounded with &):

#!/bin/bash
# killable-shell.sh: Kills itself and all children (the whole process group) when killed.
# Adapted from http://stackoverflow.com/a/2173421 and http://veithen.github.io/2014/11/16/sigterm-propagation.html
# Note: Does not work (and cannot work) when the shell itself is killed with SIGKILL, for then the trap is not triggered.

trap "trap - SIGTERM && echo 'Caught SIGTERM, sending SIGTERM to process group' && kill -- -$$" SIGINT SIGTERM EXIT

echo $@
"$@" &
PID=$!
wait $PID
trap - SIGINT SIGTERM EXIT
wait $PID

Example of it working:

$ bash killable-shell.sh sleep 100
sleep 100
^Z
[1]  + 31568 suspended  bash killable-shell.sh sleep 100
$ ps aux | grep "sleep"
niklas   31568  0.0  0.0  19640  1440 pts/18   T    01:30   0:00 bash killable-shell.sh sleep 100
niklas   31569  0.0  0.0  14404   616 pts/18   T    01:30   0:00 sleep 100
niklas   31605  0.0  0.0  18956   936 pts/18   S+   01:30   0:00 grep --color=auto sleep
$ bg
[1]  + 31568 continued  bash killable-shell.sh sleep 100
$ kill 31568
Caught SIGTERM, sending SIGTERM to process group
[1]  + 31568 terminated  bash killable-shell.sh sleep 100
$ ps aux | grep "sleep"
niklas   31717  0.0  0.0  18956   936 pts/18   S+   01:31   0:00 grep --color=auto sleep


Answer (score 1)

I finally found a solution that appears to work in all cases, killing all descendants recursively regardless of whether they are jobs or sub-processes. The other solutions here all seemed to fail with things such as:

while ! ffmpeg ....
do
    sleep 1
done

In my situation, ffmpeg would keep running after the parent script exited.

I found a solution here for getting the PIDs of all child processes recursively, and used that in the trap handler thus:

cleanup() {
    # kill all processes whose parent is this process
    kill $(pidtree $$ | tac)
}

pidtree() (
    [ -n "$ZSH_VERSION" ] && setopt shwordsplit
    declare -A CHILDS
    while read P PP; do
        CHILDS[$PP]+=" $P"
    done < <(ps -e -o pid= -o ppid=)

    walk() {
        echo $1
        for i in ${CHILDS[$1]}; do
            walk $i
        done
    }

    for i in "$@"; do
        walk $i
    done
)

trap cleanup EXIT

The above, placed at the start of a bash script, succeeds in killing all child processes. Note that pidtree is called with $$, which is the PID of the bash script that is exiting, and the list of PIDs (one per line) is reversed using tac to try and ensure that parent processes are killed only after their children, to avoid possible race conditions in loops such as the example I gave.


Answer (score 1)

None of the answers here worked for me in the case of a continuous integration (CI) script that starts background processes from subshells. For example:

(cd packages/server && npm start &) 

The subshell terminates after starting the background process, which therefore ends up with parent PID 1.
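The reparenting is easy to observe with a sketch like the following (the sleep duration is arbitrary; on systems with a subreaper, such as some systemd setups, the new parent may not be PID 1 itself):

```shell
#!/usr/bin/env bash
# A background process launched from a subshell is orphaned as soon as the
# subshell exits: its parent becomes PID 1 (or a subreaper), while its PGID
# stays that of this script's process group.
n=31002
( sleep "$n" & )           # the subshell exits immediately after forking
sleep 0.3                  # give the kernel time to reparent the orphan
pid=$(pgrep -f "sleep $n" | head -n1)
if [ -n "$pid" ]; then
    ps -o pid=,ppid=,pgid= -p "$pid"   # note: PPID is no longer this script
    kill "$pid"            # clean up the demo process
fi
```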

With PPID not an option, the only portable (Linux and MacOS) and generic (independent of process name, listening ports, etc.) approach left is the process group (PGID). However, I can't just kill that because it would kill the script process, which would fail the CI job.

# Terminate the given process group, excluding this process. Allows 2 seconds
# for graceful termination before killing remaining processes. This allows
# shutdown errors to be printed, while handling processes that fail to
# terminate quickly.
kill_subprocesses() {
  echo "Terminating subprocesses of PGID $1 excluding PID $$"

  # Get all PIDs in this process group except this process
  # (pgrep on NetBSD/MacOS does this by default, but Linux pgrep does not)
  # Uses a heredoc instead of piping to avoid including the grep PID
  pids=$(grep -Ev "\\<$$\\>" <<<"$(pgrep -g "$1")")

  if [ -n "$pids" ]; then
    echo "Terminating processes: ${pids//$'\n'/, }"
    # shellcheck disable=SC2086
    kill $pids || true
  fi

  sleep 2

  # Check for remaining processes and kill them
  pids=$(grep -Ev "\\<$$\\>" <<<"$(pgrep -g "$1")")
  if [ -n "$pids" ]; then
    echo "Killing remaining processes: ${pids//$'\n'/, }"
    # shellcheck disable=SC2086
    kill -9 $pids || true
  fi
}

# Terminate subprocesses on exit or interrupt
# shellcheck disable=SC2064
trap "kill_subprocesses $$" EXIT SIGINT SIGTERM


Answer (score 0)

Universal solution which works also in sh (jobs there does not output anything to stdout):

trap "pkill -P $$" EXIT INT 

2 Comments

This only kills child processes. It wouldn't handle common cases like jobs started by a subshell (which end up with PPID 1). Killing the process group with -g would do that.
Well, I assumed typical scenario when sub-processes are responsible for killing their children.
Answer (score -1)

jobs -p does not work in all shells if called in a sub-shell, possibly unless its output is redirected into a file rather than a pipe. (I assume it was originally intended for interactive use only.)

What about the following:

trap 'while kill %% 2>/dev/null; do jobs > /dev/null; done' INT TERM EXIT

[...]

The call to "jobs" is needed with Debian's dash shell, which fails to update the current job ("%%") if it is missing.

1 Comment

Hmm, interesting approach, but it does not seem to work. Consider the script trap 'echo in trap; set -x; trap - TERM EXIT; while kill %% 2>/dev/null; do jobs > /dev/null; done; set +x' INT TERM EXIT; sleep 100 & while true; do printf .; sleep 1; done If you run it in Bash (5.0.3) and try to terminate, there seems to be an infinite loop. However, if you terminate it again, it works. Even in Dash (0.5.10.2-6) you have to terminate it twice.
Answer (score -1)

Just for diversity I will post a variation of https://stackoverflow.com/a/2173421/102484, because that solution leads to a "Terminated" message in my environment:

trap 'test -z "$intrap" && export intrap=1 && kill -- -$$' SIGINT SIGTERM EXIT 


Answer (score -1)

Another option is to have the script set itself as the process group leader, and trap a killpg on your process group on exit.

EDIT: a possible bash hack to create a new process group is to use setsid(1) but only if we're not already the process group leader (can query it with ps).

Placing this at the beginning of the script can achieve that.

# Create a process group and exec the script as its leader if necessary
[[ "$(ps -o pgid= $$)" -eq "$$" ]] || exec setsid /bin/bash "$0" "$@"

Then signaling the process group with kill -- -$$ would work as expected even when script is not already the process group leader.

3 Comments

How do you set the process as process group leader? What is "killpg"?
killpg is the C API to send a signal (= kill) to a process group, so it is exactly what the kill -- -$$ and kill 0 answers suggest; starting a new process group is the novel idea here, but it needs details on how to do this from bash...
setsid(1) can do it, and we can test whether we're the leader with ps. So the bash hack would be to add something like this to the beginning of the script: [[ "$(ps -o pgid= $$)" -eq "$$" ]] || exec setsid /bin/bash "$0" "$@"
Answer (score -4)

So wrap the launch of the script in another script, and have that wrapper run a killall (or whatever is available on your OS) command as soon as the inner script finishes.

