Here's the situation: yesterday I needed to debug an API issue, so I logged into the log server and opened a tmux session so that I could reconnect to my work later.
I ran tail -f data_log | grep keyword to watch for the bug, but couldn't figure it out at the time, so I decided to keep the tmux session around for later and closed the terminal pane.
Today my colleague told me that my tmux session, with tail -f data_log | grep keyword still running, had filled up the hard disk on that log server. I feel ashamed, guilty, and confused.
As I understand it, tail -f opens its own stdout file descriptor and writes the newly appended content of data_log to the terminal screen. So my questions are:
Can this stdout file descriptor receive an unbounded amount of data?
Where does this file descriptor store such a large amount of data? Is there a real file on disk backing it?
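To try to see this for myself, I inspected where tail's stdout fd actually points while the pipeline runs. This is a rough sketch assuming a Linux box with /proc; the data_log here is a throwaway temp file, not my real log:

```shell
#!/bin/sh
# Reproduce the pipeline against a temp file and look at tail's fd 1.
tmpdir=$(mktemp -d)
touch "$tmpdir/data_log"

tail -f "$tmpdir/data_log" | grep keyword &
sleep 1

# Find the tail process and read the symlink for its stdout fd.
tail_pid=$(pgrep -f "tail -f $tmpdir/data_log")
stdout_target=$(readlink "/proc/$tail_pid/fd/1")
echo "$stdout_target"   # prints something like pipe:[123456]

kill "$tail_pid"        # grep exits on its own once the pipe closes
```

On my machine the symlink shows tail's stdout is an anonymous pipe to grep, not a regular file, which is part of why I'm confused about where the data could pile up.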
Does tmux have anything to do with this issue?
If tmux has nothing to do with it: suppose I open a terminal running tail -f my_log and use crontab to append 1 byte to my_log per second. Does that mean 2 bytes are stored on disk every second (1 for the crontab append and 1 for tail)?
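I tried to test this hypothesis with a little experiment. It's only a sketch: my_log is a temp file, the writer is a shell loop instead of a crontab entry, and tail's output goes to /dev/null instead of a terminal:

```shell
#!/bin/sh
# Append 1 byte per second for 3 seconds while tail -f reads the file,
# then check how many bytes the file itself occupies.
f=$(mktemp)

tail -f "$f" > /dev/null &
tail_pid=$!

for i in 1 2 3; do
    printf 'x' >> "$f"   # stand-in for the crontab append
    sleep 1
done

size=$(wc -c < "$f")
echo "$size"             # -> 3: only the writer's bytes are in the file

kill "$tail_pid"
rm -f "$f"
```

The file only grew by the 3 bytes the writer appended, so I don't see where a second copy would be stored, which makes the disk exhaustion even more puzzling to me.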