This is perilously close to an opinion-based question and answer, but I'll try to provide some facts alongside my opinions.
- If you have a very large number of files in a folder, any shell-based operation that tries to enumerate them (e.g. `mv * /somewhere/else`) may fail because the expanded wildcard exceeds the kernel's argument-length limit, giving the classic "Argument list too long" error (there is a workaround sketch after this list). `ls` will also take longer to enumerate a very large number of files than a small number of files.
- The filesystem will be able to handle millions of files in a single directory, but people will probably struggle.
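Where `mv *` fails, the usual workaround is to stream the file names instead of expanding them on one command line. A minimal sketch, assuming GNU find and coreutils (`/big/dir` and `/somewhere/else/` are placeholder paths):

```
# Move files without relying on shell wildcard expansion.
# -maxdepth 1 stays in the top directory; -print0 / -0 cope with
# awkward filenames; mv -t names the destination so xargs can pass
# many source files per mv invocation.
find /big/dir -maxdepth 1 -type f -print0 | xargs -0 mv -t /somewhere/else/
```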
One recommendation is to split the filename into two-, three- or four-character chunks and use those as subdirectories. For example, `somefilename.txt` might be stored as `som/efi/somefilename.txt`. If you are using numeric names then split from right to left instead of left to right so that there is a more even distribution. For example, `12345.txt` might be stored as `345/12/12345.txt`.
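To make that concrete, here is a minimal bash sketch of the right-to-left split for the numeric example (the chunk sizes and the `12345.txt` name are just the example above; it assumes the stem is longer than three characters):

```
#!/usr/bin/env bash
# Place 12345.txt under 345/12/12345.txt: split the stem right-to-left
# into a 3-character chunk and whatever digits remain.
name="12345.txt"
stem="${name%.*}"              # 12345
last3="${stem: -3}"            # 345
rest="${stem:0:${#stem}-3}"    # 12
mkdir -p "$last3/$rest"
mv "$name" "$last3/$rest/$name"
```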
You can use the equivalent of `zip -j zipfile.zip path1/file1 path2/file2 ...` to avoid including the intermediate subdirectory paths in the ZIP file; `-j` ("junk paths") stores just the bare file names.
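Combined with the layout above, you could let `find` collect the files and `xargs` feed them to `zip` (a sketch assuming Info-ZIP's `zip`, which appends to an existing archive on repeated invocations):

```
# Archive every file in the split tree without its directory path.
# Repeated zip calls from xargs keep adding to the same zipfile.
find . -type f -name '*.txt' -print0 | xargs -0 zip -j zipfile.zip
```

Note that `-j` would clash if two files in different directories shared a basename, but that cannot happen when the directory is derived from the filename itself.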
If you are serving up these files from a webserver (I'm not entirely sure whether that's relevant), it is trivial to hide this structure behind a virtual directory using rewrite rules in Apache2. I would assume the same is true for Nginx.
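As a hedged illustration for Apache2 (the `/files/` URL prefix, `/data/` filesystem prefix and `.txt` suffix are invented for this sketch), a mod_rewrite rule in a server or vhost context could map the flat virtual name onto the split layout from the numeric example:

```
# Serve /files/12345.txt from /data/345/12/12345.txt.
# $1 = leading digits, $2 = next two, $3 = last three.
RewriteEngine On
RewriteRule "^/files/(\d*)(\d{2})(\d{3})\.txt$" "/data/$3/$2/$1$2$3.txt" [L]
```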