Timeline for "Can I run out of disk space by creating a very large number of empty files?"
Current License: CC BY-SA 3.0
8 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Aug 31, 2017 at 7:38 | vote accept | luchonacho | | |
| Jul 21, 2017 at 12:35 | history edited | sourcejedi | CC BY-SA 3.0 | added 1 character in body |
| Jul 21, 2017 at 12:30 | history edited | sourcejedi | CC BY-SA 3.0 | added 1 character in body |
| Jul 21, 2017 at 12:23 | history edited | sourcejedi | CC BY-SA 3.0 | added 90 characters in body |
| Jul 14, 2017 at 1:34 | comment added | Peter Cordes | | BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h, @luchonacho. |
| Jul 14, 2017 at 1:30 | comment added | Peter Cordes | | XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule. (Unless you weight things by installed base, in which case it's probably accurate to say that most of the filesystems currently on disk on *nix systems in the wild have statically allocated inodes.) |
| Jul 13, 2017 at 19:42 | comment added | luchonacho | | So, are you saying that if I create 28786688-667980=28118708 empty files, I will in effect run out of inodes and "break my system"? |
| Jul 13, 2017 at 7:27 | history answered | sourcejedi | CC BY-SA 3.0 | |
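For context on the arithmetic in luchonacho's comment and the `df` behaviour Peter Cordes mentions, here is a minimal shell sketch of how to check inode headroom. The device names and the sample output layout are illustrative assumptions; the inode figures are the ones quoted in the comment above.

```sh
# Show inode usage rather than block usage (ext4 and most other filesystems).
df -i /
# Hypothetical output, using the figures from the comment above:
# Filesystem       Inodes  IUsed    IFree IUse% Mounted on
# /dev/sda1      28786688 667980 28118708    3% /
#
# Creating roughly IFree (28786688 - 667980 = 28118708) empty files would
# exhaust the inode table on a filesystem with statically allocated inodes,
# such as ext4, even while `df -h` still reports free blocks.

# On XFS, inodes are allocated dynamically; the percentage of space that may
# be used for inodes can be capped at mkfs time (5% here is just an example,
# and /dev/sdb1 is a hypothetical device):
mkfs.xfs -i maxpct=5 /dev/sdb1
```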