
I have problems using the subprocess module to obtain the output of crashed programs. I'm using Python 2.7 and subprocess to call a program with strange arguments in order to trigger some segfaults. To call the program, I use the following code:

    import subprocess

    proc = subprocess.Popen(called, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    print out, err

called is a list containing the name of the program and the argument (a string of random bytes, excluding the NULL byte, which subprocess doesn't like at all).

The code behaves and shows me the stdout and stderr when the program doesn't crash, but when it does crash, out and err are empty instead of showing the famous "Segmentation fault".

I wish to find a way to obtain out and err even when the program crashes.

I also tried the check_output / call / check_call methods.
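For reference, this is roughly what the check_output attempt looks like (a sketch; called is the same list as above). check_output raises CalledProcessError on a non-zero or negative exit status, and its output attribute holds whatever stdout was captured before the crash:

    import subprocess

    try:
        out = subprocess.check_output(called, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        out = e.output                         # whatever was captured before the crash
        print "Exit code was:", e.returncode   # -11 when the child is killed by SIGSEGV
    print out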

Some additional information:

  • I'm running this script on 64-bit Arch Linux in a Python virtual environment (shouldn't matter here, but you never know :p)

  • The segfault happens in the C program I'm trying to run and is a consequence of a buffer overflow

  • The problem is that when the segfault occurs, I can't get the output of what happened with subprocess

  • I get the returncode right: -11 (SIGSEGV)

  • Using Python I get:

     ./dumb2 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
     ('Exit code was:', -11)
     ('Output was:', '')
     ('Errors were:', '')
  • While outside Python I get:

     ./dumb2 $(perl -e "print 'A'x50")
     BEGINNING OF PROGRAM
     AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
     END OF THE PROGRAM
     Segmentation fault (core dumped)
  • The return value in the shell is consistent: echo $? returns 139, i.e. 128 + 11 (SIGSEGV), which matches the -11 returncode (see the sketch after this list)
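To make the two conventions explicit, here is a small, purely illustrative helper (the describe_exit name is mine, invented for this sketch):

    def describe_exit(status):
        if status < 0:                    # Popen convention: -N for "killed by signal N"
            return "killed by signal %d" % -status
        if status > 128:                  # shell convention: 128 + N
            return "killed by signal %d" % (status - 128)
        return "exited normally with %d" % status

    print describe_exit(-11)   # what subprocess reports for SIGSEGV
    print describe_exit(139)   # what echo $? shows in the shell for the same crash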


3 Answers


"Segmentation fault" message might be generated by a shell. To find out, whether the process is kill by SIGSEGV, check proc.returncode == -signal.SIGSEGV.

If you want to see the message, you could run the command in the shell:

    #!/usr/bin/env python
    from subprocess import Popen, PIPE

    proc = Popen(shell_command, shell=True, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    print out, err, proc.returncode

I've tested it with shell_command="python -c 'from ctypes import *; memset(0,1,1)'", which causes a segfault, and the message is captured in err.

If the message is printed directly to the terminal, then you could use the pexpect module to capture it:

    #!/usr/bin/env python
    from pipes import quote
    from pexpect import run  # $ pip install pexpect

    out, returncode = run("sh -c " + quote(shell_command), withexitstatus=1)
    signal = returncode - 128  # 128+n
    print out, signal

Or, using the pty stdlib module directly:

    #!/usr/bin/env python
    import os
    import pty
    from select import select
    from subprocess import Popen, STDOUT

    # use pseudo-tty to capture output printed directly to the terminal
    master_fd, slave_fd = pty.openpty()
    p = Popen(shell_command, shell=True, stdin=slave_fd, stdout=slave_fd,
              stderr=STDOUT, close_fds=True)
    buf = []
    while True:
        if select([master_fd], [], [], 0.04)[0]:  # has something to read
            data = os.read(master_fd, 1 << 20)
            if data:
                buf.append(data)
            else:  # EOF
                break
        elif p.poll() is not None:  # process is done
            assert not select([master_fd], [], [], 0)[0]  # nothing to read
            break
    os.close(slave_fd)
    os.close(master_fd)
    print "".join(buf), p.returncode - 128

5 Comments

I tried with the shell option and I get the same behaviour: ./dumb2 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA ('Exit code was:', -11) ('Output was:', '') ('Errors were:', ''), while outside Python I get: ./dumb2 $(perl -e "print 'A'x50") BEGINNING OF PROGRAM Segmentation fault (core dumped)
That means the error message is printed directly to the terminal, outside of the shell's stdout. You could use the pexpect or pty modules to capture such output.
thanks, pexpect seems to be a good alternative, I will try that tomorrow and post the result
@Tic: I've tested the code on my machine and it works (it captures the message in err variable) i.e., you don't need pexpect at least on Ubuntu.
It doesn't work on mine, but pexpect returns the stdout so thank you :)

Came back here: it works like a charm with subprocess from Python 3, and if you are on Linux, there is a backport to Python 2 called subprocess32 which works quite well.
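A minimal sketch of the fallback import pattern (assuming subprocess32 has been installed, e.g. with pip install subprocess32):

    try:
        import subprocess32 as subprocess   # Python 2 backport of the Python 3 module
    except ImportError:
        import subprocess                   # plain stdlib module otherwise

    proc = subprocess.Popen(called, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()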

Older solution: I used pexpect and it works

    import pexpect

    def cmd_line_call(name, args):
        child = pexpect.spawn(name, args)
        child.expect(pexpect.EOF)   # wait for the end of the output
        out = child.before          # all the data before the EOF (stderr and stdout)
        child.close()               # that will set the return code for us
        # signalstatus and exitstatus are read as the same (for my purpose only)
        if child.exitstatus is None:
            returncode = child.signalstatus
        else:
            returncode = child.exitstatus
        return (out, returncode)

PS: a little slower (because it spawns a pseudo tty)
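A usage sketch with the ./dumb2 binary from the question (the 50 'A's argument is just the overflow trigger shown above):

    out, returncode = cmd_line_call("./dumb2", ["A" * 50])
    print out
    print "Exit code was:", returncode   # signalstatus, e.g. 11 when killed by SIGSEGV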

7 Comments

note: child.before is a string. It is not callable; remove ().
do not return signalstatus and exitstatus as the same value; they are not.
Thanks for the corrections, J.F. Sebastian :) Your answer is much more complete; I made it the accepted answer.
if you are only interested in whether it is zero or not then you could use child.status.
    import subprocess

    proc = subprocess.Popen(called, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(proc.stdout.read())
    print(proc.stderr.read())

This should work better. Personally, I'd go with:

    from subprocess import Popen, PIPE

    handle = Popen(called, shell=True, stdout=PIPE, stderr=PIPE)
    output = ''
    error = ''
    while handle.poll() is None:
        output += handle.stdout.readline() + '\n'
        error += handle.stderr.readline() + '\n'
    handle.stdout.close()
    handle.stderr.close()
    print('Exit code was:', handle.poll())
    print('Output was:', output)
    print('Errors were:', error)

And probably use epoll() if possible for stderr, as it sometimes blocks the call because it's empty, which is why I end up doing stderr=STDOUT when I'm lazy.
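A sketch of that lazy shortcut, assuming called is the list from the question; merging stderr into stdout means a single read can't block on an empty stderr pipe:

    from subprocess import Popen, PIPE, STDOUT

    handle = Popen(called, stdout=PIPE, stderr=STDOUT)  # stderr merged into stdout
    out, _ = handle.communicate()
    print('Exit code was:', handle.returncode)
    print('Output was:', out)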

8 Comments

Thanks for the quick answer. I get the same result: there is no output or error (except '\n'). In the non-segfaulting cases, I get something strange: the output is truncated, only the first line is shown.
@Tic And you're sure that the called application isn't segfaulting? This is not recommended as it can cause your application to hang if there is too much output, but replace output += ... and error += ... with pass and just let the loop go until the process is finished. After the while handle.poll() loop, do a sweep of the output with output = handle.stdout.read(), and the same for stderr, and see if that catches anything more. They should perform the same type of operation, but just give it a go and see if it helps.
Also, do a python -m trace --trace myscript.py and see if you get anything useful out of it; it should tell you where your segfault is happening and give you an idea of where to start.
-1: do not replace the .communicate() code with .stdout.read(); .stderr.read(). .read() shouldn't improve anything compared to .communicate(), but it can deadlock, unlike .communicate().
unless you are on Windows; specifying shell=True changes the meaning of called drastically.
