  • 3
    oh crap, that sounds exactly like it, and like a complete pain to fix. It's about a month to recopy. Can this be done without losing the contents? I'll have to research dir_index etc. more tomorrow. Wow, never would have thought of that. Commented Aug 10, 2015 at 12:54
  • Added the tune2fs command to disable the indexes, in case you want to try that (a sketch of the commands involved follows these comments). Commented Aug 10, 2015 at 12:57
  • 6
    Well spotted @steve. Unfortunately turning off dir_index will probably kill access performance with 70m files in one directory. Commented Aug 10, 2015 at 13:03
  • 4
    Yeah. I'm not needing peak performance, but an fs search for each file would be horrible. So now I'm looking at xfs or an array of 10k or so subfolders (see the sharding sketch after these comments). Subfolders are a reasonable solution; however, with ext4 I still run the risk of collisions. Does xfs suffer from the same issue? I read it uses a B+ tree, but that doesn't mean much to me as far as ensuring there's never a collision. There is a world of misinformation out there, and I've heard claims that it slows considerably over a million files, and claims that it doesn't. Commented Aug 10, 2015 at 18:35
  • 2
    I think this is a great answer, and I'd like to mark it as such, but I think it would be nice if we could come to a fix, not just a diagnosis. Does anyone know if xfs suffers from anything like this? I've read mixed reviews: that it scales fine, and that it doesn't beyond 1M files. Commented Aug 11, 2015 at 17:46
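
The tune2fs command referred to in the comments is presumably along these lines; treat this as a hedged sketch rather than the exact command from the answer. /dev/sdXN is a placeholder for the real device, the filesystem should be unmounted before changing feature flags, and an e2fsck pass afterwards checks and optimizes the existing directories.

    # Disable the dir_index feature on the (unmounted) ext4 filesystem;
    # /dev/sdXN is a placeholder for the actual device node.
    tune2fs -O ^dir_index /dev/sdXN

    # Force a check and optimize directories after the feature change.
    e2fsck -fD /dev/sdXN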
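
On the "array of 10k or so subfolders" idea: a common pattern is to shard files into subdirectories keyed by a hash prefix of the filename, so no single directory grows large enough to stress the directory index. The sketch below is illustrative, not from the original post; the /data path, the three-hex-digit prefix (4,096 buckets, roughly 17k files per bucket at 70M files), and the use of MD5 are all assumptions you can adjust.

    # Hypothetical layout: /data/<first 3 hex digits of md5(name)>/<name>
    name="example_file.dat"                                # placeholder filename
    hash=$(printf '%s' "$name" | md5sum | awk '{print $1}')
    dir="/data/${hash:0:3}"                                # 4,096 possible buckets
    mkdir -p "$dir"
    mv -- "$name" "$dir/"

Any stable hash works here; the point is simply to keep individual directory sizes bounded regardless of the total file count.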