
Is output buffering enabled by default in Python's interpreter for sys.stdout?

If the answer is positive, what are all the ways to disable it?

Suggestions so far:

  1. Use the -u command line switch
  2. Wrap sys.stdout in an object that flushes after every write
  3. Set PYTHONUNBUFFERED env var
  4. sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

Is there any other way to set some global flag in sys/sys.stdout programmatically during execution?


If you just want to flush after a specific write using print, see How can I flush the output of the print function?.

3 Comments

  • For `print` in Python 3, see this answer. Commented Oct 16, 2016 at 5:05
  • I think a drawback of -u is that it won't work for compiled bytecode or for apps with a __main__.py file as entry point. Commented Dec 21, 2016 at 9:18
  • The full CPython initialization logic is here: github.com/python/cpython/blob/v3.8.2/Python/… Commented May 11, 2020 at 6:20

16 Answers


From Magnus Lycka's answer on a mailing list:

You can skip buffering for a whole python process using python -u or by setting the environment variable PYTHONUNBUFFERED.
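One way to check that either option took effect (a sketch; it relies on the write_through attribute that CPython's TextIOWrapper exposes since 3.7, which is True for stdout exactly when buffering has been disabled):

```python
import os
import subprocess
import sys

# Ask a child interpreter whether its stdout text layer is in
# write-through mode; CPython sets this when buffering is disabled.
probe = "import sys; print(sys.stdout.write_through)"

with_u = subprocess.run([sys.executable, "-u", "-c", probe],
                        capture_output=True, text=True)
with_env = subprocess.run([sys.executable, "-c", probe],
                          capture_output=True, text=True,
                          env={**os.environ, "PYTHONUNBUFFERED": "1"})

print(with_u.stdout.strip(), with_env.stdout.strip())
```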

You could also replace sys.stdout with some other stream like wrapper which does a flush after every call.

class Unbuffered(object):
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def writelines(self, datas):
        self.stream.writelines(datas)
        self.stream.flush()
    def __getattr__(self, attr):
        return getattr(self.stream, attr)

import sys
sys.stdout = Unbuffered(sys.stdout)
print 'Hello'

15 Comments

Original sys.stdout is still available as sys.__stdout__. Just in case you need it =)
#!/usr/bin/env python -u doesn't work!! see here
__getattr__ just to avoid inheritance?!
Some notes to save some headaches: as I noticed, output buffering works differently depending on whether the output goes to a tty or to another process/pipe. If it goes to a tty, it is flushed after each \n; in a pipe it is buffered. In the latter case you can make use of these flushing solutions. In CPython (not in PyPy!): if you iterate over the input with for line in sys.stdin: ... then the for loop will collect a number of lines before the body of the loop is run. This behaves like buffering, though it's rather batching. Instead, do while True: line = sys.stdin.readline()
@tzp: you could use iter() instead of the while loop: for line in iter(pipe.readline, ''):. You don't need it on Python 3 where for line in pipe: yields as soon as possible.

I would rather put my answer in How can I flush the output of the print function? or in Python's print function that flushes the buffer when it's called?, but since they were marked as duplicates of this one (I do not agree), I'll answer it here.

Since Python 3.3, print() supports the keyword argument "flush" (see documentation):

print('Hello World!', flush=True) 

2 Comments

This is preferable to me. Buffering happens for a reason. Entirely disabling it comes at a cost.
@Akaisteph7 yeah this really seems like a much better solution and it's built right into Python now. I know this question was asked back when most people were using Python 2.x, but I think this should be the accepted answer.
# reopen stdout file descriptor with write mode
# and 0 as the buffer size (unbuffered)
import io, os, sys
try:
    # Python 3: open as binary, then wrap in a TextIOWrapper with write-through.
    sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
    # If flushing on newlines is sufficient, as of 3.7 you can instead just call:
    # sys.stdout.reconfigure(line_buffering=True)
except TypeError:
    # Python 2
    sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

Credits: "Sebastian", somewhere on the Python mailing list.

7 Comments

In Python 3 you can just override the name of the print function with a flushing one. It's a dirty trick though!
@meawoppl: you could pass the flush=True parameter to the print() function since Python 3.3.
Editing response to show response is not valid in recent version of python
@not2qubit: if you use os.fdopen(sys.stdout.fileno(), 'wb', 0) you end up with a binary file object, not a TextIO stream. You'd have to add a TextIOWrapper to the mix (making sure to enable write_through to eliminate all buffers, or use line_buffering=True to only flush on newlines).
If flushing on newlines is sufficient, as of Python 3.7 you can simply call sys.stdout.reconfigure(line_buffering=True)

Yes, it is.

You can disable it on the commandline with the "-u" switch.

Alternatively, you could call .flush() on sys.stdout on every write (or wrap it with an object that does this automatically)
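A minimal sketch of the manual-flush approach; write_flushed is a hypothetical helper, not part of any stdlib API:

```python
import sys

def write_flushed(text, stream=None):
    # Hypothetical helper: write, then flush immediately so the text
    # is not held back in the stream's buffer.
    stream = stream or sys.stdout
    stream.write(text)
    stream.flush()

write_flushed("working...\n")
```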

1 Comment

-u from python run command is best suited for me!

This relates to Cristóvão D. Sousa's answer, but I couldn't comment yet.

A straightforward way of using the flush keyword argument of Python 3 in order to always have unbuffered output is:

import functools
print = functools.partial(print, flush=True)

Afterwards, print will always flush the output directly (unless flush=False is given).

Note (a) that this answers the question only partially, as it doesn't redirect all output. But I guess print is the most common way of producing output to stdout/stderr in Python, so these two lines probably cover most use cases.

Note (b) that it only works in the module/script where you defined it. That can be an advantage when writing a module, since it doesn't mess with sys.stdout.

Python 2 doesn't provide the flush argument, but you can emulate a Python 3-style print function as described here: https://stackoverflow.com/a/27991478/3734258 .

3 Comments

Except that there is no flush kwarg in python2.
@o11c , yes you're right. I was sure I tested it but somehow I was seemingly confused (: I modified my answer, hope it's fine now. Thanks!
Yea buddy. functools.partial giddeeup.

The following works in Python 2.6, 2.7, and 3.2:

import os
import sys

buf_arg = 0
if sys.version_info[0] == 3:
    os.environ['PYTHONUNBUFFERED'] = '1'
    buf_arg = 1

sys.stdout = os.fdopen(sys.stdout.fileno(), 'a+', buf_arg)
sys.stderr = os.fdopen(sys.stderr.fileno(), 'a+', buf_arg)

3 Comments

Run that twice and it crashes on windows :-)
@MichaelClerx Mmm hmm, always remember to close your files xD.
Python 3.5 on Raspbian 9 gives me OSError: [Errno 29] Illegal seek for the line sys.stdout = os.fdopen(sys.stdout.fileno(), 'a+', buf_arg)
import gc, os, subprocess, sys

def disable_stdout_buffering():
    # Appending to gc.garbage is a way to stop an object from being
    # destroyed.  If the old sys.stdout is ever collected, it will
    # close() stdout, which is not good.
    gc.garbage.append(sys.stdout)
    sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

# Then this will give output in the correct order:
disable_stdout_buffering()
print "hello"
subprocess.call(["echo", "bye"])

Without saving the old sys.stdout, disable_stdout_buffering() isn't idempotent, and multiple calls will result in an error like this:

Traceback (most recent call last):
  File "test/buffering.py", line 17, in <module>
    print "hello"
IOError: [Errno 9] Bad file descriptor
close failed: [Errno 9] Bad file descriptor

Another possibility is:

import os, sys

def disable_stdout_buffering():
    fileno = sys.stdout.fileno()
    temp_fd = os.dup(fileno)
    sys.stdout.close()
    os.dup2(temp_fd, fileno)
    os.close(temp_fd)
    sys.stdout = os.fdopen(fileno, "w", 0)

(Appending to gc.garbage is not such a good idea because it's where unfreeable cycles get put, and you might want to check for those.)

3 Comments

If the old stdout still lives on sys.__stdout__ as some have suggested, the garbage thing won't be necessary, right? It's a cool trick though.
As with @Federico's answer, this will not work with Python 3, as it will throw the exception ValueError: can't have unbuffered text I/O when calling print().
Your "another possibility" seems at first like the most robust solution, but unfortunately it suffers a race condition in the case that another thread calls open() after your sys.stdout.close() and before your os.dup2(temp_fd, fileno). I found this out when I tried using your technique under ThreadSanitizer, which does exactly that. The failure is made louder by the fact that dup2() fails with EBUSY when it races with open() like that; see stackoverflow.com/questions/23440216/…

In Python 3, you can monkey-patch the print function, to always send flush=True:

_orig_print = print

def print(*args, **kwargs):
    _orig_print(*args, flush=True, **kwargs)

As pointed out in a comment, you can simplify this by binding the flush parameter to a value, via functools.partial:

import functools
print = functools.partial(print, flush=True)

5 Comments

Just wondering, but wouldn't that be a perfect use case for functools.partial?
Thanks @0xC0000022L, this makes it look better! print = functools.partial(print, flush=True) works fine for me.
@0xC0000022L indeed, I have updated the post to show that option, thanks for pointing that out
If you want that to apply everywhere, import builtins; builtins.print = partial(print, flush=True)
Oddly, this approach worked when nothing else did for Python 3.x, and I am wondering why the other documented approaches (use -u flag) do not work.

Yes, it is enabled by default. You can disable it by using the -u option on the command line when calling python.

Comments


You can also run Python with stdbuf utility:

stdbuf -oL python <script>

3 Comments

Line buffering (as -oL enables) is still buffering -- see f/e stackoverflow.com/questions/58416853/…, asking why end='' makes output no longer be immediately displayed.
True, but line buffering is the default (with a tty), so does it make sense to write code assuming output is totally unbuffered? Maybe it's better to explicitly print(..., end='', flush=True) where that's important. OTOH, when several programs write to the same output concurrently, the trade-off tends to shift from seeing immediate progress to reducing output mixups, and line buffering becomes attractive. So maybe it is better not to write explicit flush and to control buffering externally?
I think, no. Process itself should decide, when and why it calls flush. External buffering control is compelled workaround here

It is possible to override only the write method of sys.stdout with one that calls flush. A suggested implementation is below.

def write_flush(args, w=stdout.write):
    w(args)
    stdout.flush()

The default value of the w argument keeps a reference to the original write method. After write_flush is defined, the original write can be overridden:

stdout.write = write_flush 

The code assumes that stdout is imported this way: from sys import stdout.

Comments


You can also use fcntl to change the file flags on the fly.

import fcntl
import os

fl = fcntl.fcntl(fd.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC  # or os.O_DSYNC (if you don't care about file timestamp updates)
fcntl.fcntl(fd.fileno(), fcntl.F_SETFL, fl)

2 Comments

There's a windows equivalent: stackoverflow.com/questions/881696/…
O_SYNC has nothing at all to do with userspace-level buffering that this question is asking about.

One way to get unbuffered output would be to use sys.stderr instead of sys.stdout or to simply call sys.stdout.flush() to explicitly force a write to occur.

You could easily redirect everything printed by doing:

import sys
sys.stdout = sys.stderr
print "Hello World!"

Or to redirect just for a particular print statement:

print >>sys.stderr, "Hello World!" 

To reset stdout you can just do:

sys.stdout = sys.__stdout__ 

2 Comments

This might get very confusing when you then later try to capture the output using standard redirection, and find you are capturing nothing! p.s. your stdout is being bolded and stuff.
One big caution about selectively printing to stderr is that this causes the lines to appear out of place, so unless you also have timestamp this could get very confusing.

You can create an unbuffered file and assign this file to sys.stdout.

import sys
myFile = open("a.log", "w", 0)
sys.stdout = myFile

You can't magically change the system-supplied stdout, since it's supplied to your Python program by the OS.
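Note that Python 3 rejects open("a.log", "w", 0) with ValueError: can't have unbuffered text I/O; line buffering (buffering=1) is the closest text-mode equivalent. A sketch, using a throwaway file path:

```python
import os
import tempfile

# buffering=1 means the text stream flushes after every newline.
path = os.path.join(tempfile.mkdtemp(), "a.log")
log = open(path, "w", buffering=1)
log.write("first line\n")   # on disk as soon as the newline is written

with open(path) as f:
    print(f.read(), end="")  # visible without closing log first
log.close()
```

Assigning sys.stdout = log afterwards works the same way as in the answer above.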

1 Comment

You can also set buffering=1 instead of 0 for line-buffering.

Variant that works without crashing (at least on win32; Python 2.7, IPython 0.12) when called subsequently (multiple times):

import os
import sys

def DisOutBuffering():
    if sys.stdout.name == '<stdout>':
        sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
    if sys.stderr.name == '<stderr>':
        sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 0)

3 Comments

Are you sure this is not buffered?
Should you check for sys.stdout is sys.__stdout__ instead of relying on the replacement object having a name attribute?
this works great if gunicorn isn't respecting PYTHONUNBUFFERED for some reason.

(I've posted a comment, but it got lost somehow. So, again:)

  1. As I noticed, CPython (at least on Linux) behaves differently depending on where the output goes. If it goes to a tty, then the output is flushed after each '\n'
    If it goes to a pipe/process, then it is buffered and you can use the flush() based solutions or the -u option recommended above.

  2. Slightly related to output buffering:
    If you iterate over the lines in the input with

    for line in sys.stdin:
        ...

then the for implementation in CPython will collect the input for a while and then execute the loop body for a bunch of input lines. If your script is about to write output for each input line, this might look like output buffering but it's actually batching, and therefore, none of the flush(), etc. techniques will help that. Interestingly, you don't have this behaviour in pypy. To avoid this, you can use

while True:
    line = sys.stdin.readline()
    ...
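A variant of the readline loop, using iter() with an empty-string sentinel (mentioned in the comments on the accepted answer); a sketch with io.StringIO standing in for sys.stdin:

```python
import io

fake_stdin = io.StringIO("first\nsecond\n")

# readline is called once per iteration, so each line reaches the loop
# body as soon as it is read; '' (EOF) ends the loop.
lines = []
for line in iter(fake_stdin.readline, ''):
    lines.append(line.rstrip("\n"))

print(lines)  # → ['first', 'second']
```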

4 Comments

here's your comment. It might be a bug on older Python versions. Could you provide example code? Something like for line in sys.stdin vs. for line in iter(sys.stdin.readline, "")
for line in sys.stdin: print("Line: " +line); sys.stdout.flush()
it looks like the read-ahead bug. It should only happen on Python 2 and if stdin is a pipe. The code in my previous comment demonstrates the issue (for line in sys.stdin provides a delayed response)
BTW, the default being different for stdout depending on isatty() isn't just a Python thing -- that's standard C library behavior too.
