
I've tried running things like this:

subprocess.Popen(['nohup', 'my_command'], stdout=open('/dev/null', 'w'), stderr=open('logfile.log', 'a')) 

This works if the parent script exits gracefully, but if I kill the script (Ctrl-C), all my child processes are killed too. Is there a way to avoid this?

The platforms I care about are OS X and Linux, using Python 2.6 and Python 2.7.

6 Answers


The child process receives the same SIGINT as your parent process because it's in the same process group. You can put the child in its own process group by calling os.setpgrp() in the child process. Popen's preexec_fn argument is useful here:

subprocess.Popen(['nohup', 'my_command'], stdout=open('/dev/null', 'w'), stderr=open('logfile.log', 'a'), preexec_fn=os.setpgrp)

(preexec_fn is for un*x-oids only. There appears to be a rough equivalent for Windows "creationflags=CREATE_NEW_PROCESS_GROUP", but I've never tried it.)
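As a quick sanity check (POSIX only; `sleep` stands in for the OP's my_command here), you can confirm that the child really lands in its own process group by comparing process-group IDs:

```python
import os
import subprocess

# Put the child in its own process group so SIGINT sent to the
# parent's group (e.g. by Ctrl-C) no longer reaches it.
child = subprocess.Popen(["sleep", "30"], preexec_fn=os.setpgrp)

print(os.getpgid(child.pid) == child.pid)      # child leads its own group
print(os.getpgid(child.pid) != os.getpgid(0))  # distinct from our group

child.kill()   # clean up the demo child
child.wait()
```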


5 Comments

Thanks for your answer; it works for me! However, I am curious why my command stops (the process dies) after some point if I omit the stdout and stderr arguments.
Maybe the stdout and stderr buffers fill up and the process becomes deadlocked?
If you are using shell=True then creationflags=subprocess.CREATE_NEW_CONSOLE is probably what you want
Is nohup needed if you call setpgrp? Wouldn't the latter prevent the child from getting SIGHUP from the parent, as it is no longer part of the same process group?
It's not clear to me when these open() calls are closed, if at all. To me this is implicit behavior at best, and I would bundle them with with as written below.

The usual way to do this on Unix systems is to fork and exit if you're the parent. Have a look at os.fork().

Here's a function that does the job:

import os
import sys

def spawnDaemon(func):
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # parent process, return and keep running
            return
    except OSError, e:
        print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)

    os.setsid()

    # do second fork
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent
            sys.exit(0)
    except OSError, e:
        print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)

    # do stuff
    func()

    # all done
    os._exit(os.EX_OK)

6 Comments

If I fork, and then I kill one half of the fork (rather than allowing it to exit), will that kill the new process?
Okay, after further reading: this requires forking twice to avoid receiving signals? I'd quite like the parent process to remain interactive --- its job is to monitor the processes that it spawns --- which isn't possible if it has to disown the shell.
Thanks! I've added my implementation to your answer.
This is great as it sets the daemon parent process ID to 1 so that it's completely disconnected from the parent. The subprocess command I ran from the other answer was killed by my Torque job scheduler, even when changing its process group because the parent process ID still matched the dying process.
In this implementation the intermediate child is left as a zombie until the parent exits. To avoid that, you need to collect its return code in the parent process, e.g. by calling os.waitid(os.P_PID, pid, os.WEXITED) before returning in the main process.
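For anyone on Python 3, here is a sketch of the same double fork that also reaps the intermediate child so no zombie is left behind (the spawn_daemon name and the use of os.waitpid instead of os.waitid are my choices, not from the answer above):

```python
import os

def spawn_daemon(task):
    """Run task() in a fully detached daemon; returns in the original parent."""
    pid = os.fork()
    if pid > 0:
        # Original parent: reap the short-lived intermediate child so it
        # does not linger as a zombie, then keep running normally.
        os.waitpid(pid, 0)
        return

    os.setsid()            # first child: become a session leader

    if os.fork() > 0:      # intermediate child exits at once, so the
        os._exit(0)        # grandchild is adopted by init (PID 1)

    try:
        task()             # grandchild: do the actual work
    finally:
        os._exit(os.EX_OK)
```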

After an hour of various attempts, this works for me:

process = subprocess.Popen(["someprocess"], creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP) 

It's a solution for Windows.
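Combining this with the POSIX answers above, a cross-platform helper might look like the following sketch (the detached_popen name is mine; note that subprocess.DETACHED_PROCESS is only exposed since Python 3.7):

```python
import subprocess
import sys

def detached_popen(args, **kwargs):
    """Start args detached from this process's group/console."""
    if sys.platform == "win32":
        # Windows: no console window, own process group.
        kwargs["creationflags"] = (subprocess.DETACHED_PROCESS
                                   | subprocess.CREATE_NEW_PROCESS_GROUP)
    else:
        # POSIX: start a new session, which implies a new process group.
        kwargs["start_new_session"] = True
    return subprocess.Popen(args, **kwargs)
```

On POSIX this behaves like the process-group answers above; on Windows it relies on the flags from this answer.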

1 Comment

This works on Windows in 2021. Thanks!

Since Python 3.2 you can also use the start_new_session flag (POSIX only).

import subprocess

p = subprocess.Popen(["sleep", "60"], start_new_session=True)
ret = p.wait()

See start_new_session in Popen constructor
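Because start_new_session calls setsid() in the child, the child becomes a session leader, not merely a process-group leader; a small check (POSIX only, with sleep as a stand-in):

```python
import os
import subprocess

p = subprocess.Popen(["sleep", "60"], start_new_session=True)

# setsid() makes the child the leader of a brand-new session,
# so its session ID equals its own PID and differs from ours.
print(os.getsid(p.pid) == p.pid)
print(os.getsid(p.pid) != os.getsid(0))

p.kill()   # no need to wait the full 60 seconds in a demo
p.wait()
```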

1 Comment

Yes, but note that the parent process of p is still the calling process. And of course the OP does not want to call p.wait(). If p then fails while the calling process is still its parent, it will become a zombie process.

Another way is to make the subprocess ignore SIGINT.

import subprocess
import signal

subprocess.Popen(["sleep", "100"],
                 preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_IGN))

Using preexec_fn ensures that the parent process's own SIGINT handler is not changed. (If you changed the handler in the parent instead, you would need to restore it in the parent afterwards.)

Of course, this will only work if the subprocess does not proceed to reinstate the signal handler. In the following case where the subprocess installs a signal handler, the subprocess would still be killed:

import subprocess
import signal

process = subprocess.Popen(
    ["python", "-c",
     "import signal\n"
     "import time\n"
     "signal.signal(signal.SIGINT, signal.SIG_DFL)\n"
     "while True:\n"
     "    print(1)\n"
     "    time.sleep(1)"],
    preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_IGN))
process.wait()

Credit to https://stackoverflow.com/a/3731948/5267751 .
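To watch the ignored disposition survive exec, here is a hedged check with a plain sleep child, which (unlike the Python child above) never reinstalls a SIGINT handler:

```python
import os
import signal
import subprocess
import time

child = subprocess.Popen(
    ["sleep", "5"],
    preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_IGN))

os.kill(child.pid, signal.SIGINT)   # would normally terminate it...
time.sleep(0.5)
print(child.poll() is None)         # ...but the child is still running

child.terminate()                   # SIGTERM is not ignored, so this works
child.wait()
```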


import subprocess

with open('/dev/null', 'w') as stdout, open('logfile.log', 'a') as stderr:
    subprocess.Popen(['my', 'command'], stdout=stdout, stderr=stderr)

class subprocess.Popen(...)

Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function.

os.execvpe(file, args, env)

These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
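This is why the with form above is safe: Popen duplicates the file descriptors into the child before exec, so closing the parent's file objects afterwards does not affect the child's copies. A sketch, with a temporary file standing in for logfile.log:

```python
import subprocess
import tempfile

log = tempfile.NamedTemporaryFile(suffix=".log", delete=False)
log.close()

with open(log.name, "a") as stderr_file:
    # The child receives its own duplicate of this descriptor...
    child = subprocess.Popen(["sh", "-c", "sleep 0.2; echo oops >&2"],
                             stderr=stderr_file)
# ...so the parent's file object can be closed here while the child runs.

child.wait()
print(open(log.name).read())   # the child's output arrived anyway
```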

