
I want to execute the bash command

 '/bin/echo </verbosegc> >> /tmp/jruby.log' 

in Python using Popen. The code does not raise any exception, but no change is made to jruby.log after execution. The Python code is shown below.

    >>> command = '/bin/echo </verbosegc> >> ' + fullpath
    >>> command
    '/bin/echo </verbosegc> >> /tmp/jruby.log'
    >>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
    >>> output = process.communicate()[0]
    >>> output
    '</verbosegc> >> /tmp/jruby.log\n'

I also printed process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has already finished.

  • Using split() this way is a code smell. Don't do it.

4 Answers


Just pass a file object if you want to append the output to a file; you cannot use shell redirection like >> unless you set shell=True:

    import subprocess

    command = ['/bin/echo', '</verbosegc>']
    with open('/tmp/jruby.log', 'a') as f:
        subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)

3 Comments

@CharlesDuffy, that is what I have done in my answer; I meant there is no way to use >> without shell=True.
Quite right, and as edited the commentary is no longer misleading; +1.
@shijiexu, I used check_call as I don't see the logic in using Popen when you want to redirect the output to a file.
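
As the last comment suggests, Popen works just as well here if you need the process object; a minimal sketch of that variant, assuming the same paths as the question:

    import subprocess

    command = ['/bin/echo', '</verbosegc>']
    with open('/tmp/jruby.log', 'a') as f:
        # stdout goes straight to the file object; no shell is involved
        process = subprocess.Popen(command, stdout=f, stderr=subprocess.STDOUT)
        process.wait()  # make sure the child has finished before the file closes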

The first argument to subprocess.Popen is the array ['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']. When the first argument to subprocess.Popen is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting >> /tmp/jruby.log to mean "write output to jruby.log".

In order to make the >> redirection work in this command, you'll need to pass command directly to subprocess.Popen() without splitting it into a list. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):

    command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)

3 Comments

This still fails when executing Popen:

    >>> process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/homes/sxu3/tools/python_install/lib/python2.7/subprocess.py", line 710, in __init__
        errread, errwrite)
      File "/homes/sxu3/tools/python_install/lib/python2.7/subprocess.py", line 1335, in _execute_child
        raise child_exception
    OSError: [Errno 2] No such file or directory
You're right: you need to add the shell=True argument to make Popen invoke a shell. I was careless. So: subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True, shell=True)
Arrays are usable both with and without shell=True, though their semantics vary wildly between the cases -- see my answer for an example of the former. :)
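
Putting the corrections from this comment thread together, a minimal sketch of the working shell=True form (assuming the question's paths):

    import subprocess

    command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
    # shell=True makes /bin/sh interpret the >> redirection
    process = subprocess.Popen(command, shell=True,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               close_fds=True)
    out, err = process.communicate()
    # out is empty here: the shell sent echo's output to the log file instead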

Consider the following:

    command = [
        'printf "%s\n" "$1" >>"$2"',  # shell script to execute
        '',                           # $0 in shell
        '</verbosegc>',               # $1
        '/tmp/jruby.log',             # $2
    ]
    subprocess.Popen(command, shell=True)

The first argument is a shell script referring to $1 and $2, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).


Of course, don't actually do anything like this in Python -- the native primitives for file IO are far more appropriate.
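
For reference, the pure-Python file IO the answer alludes to is a one-liner; a minimal sketch, assuming the question's path:

    # appending the line with Python's own file IO, no subprocess at all
    with open('/tmp/jruby.log', 'a') as f:
        f.write('</verbosegc>\n')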

Comments


Have you tried without splitting the command and using shell=True? My usual format is:

    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    output = process.stdout.read()  # or .readlines()
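
A hedged usage example of this pattern with a command that actually writes to stdout (ls here is just an illustration; for the question's command the shell performs the >> append itself, so the pipe stays empty):

    import subprocess

    process = subprocess.Popen('ls /tmp', stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT, shell=True)
    output = process.stdout.read()  # bytes; stderr is merged into stdout
    process.wait()  # reap the child after reading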

Comments
