coreutils/src/chmod.c#process_file shows that chmod(1) always tries to set the mode and then checks back with fstatat(2).
Files are processed via fts(3), which has to stat(2) every traversed file system object beforehand to build its internal tree.
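If you want to observe this yourself, strace can make the per-file syscall pattern visible. A minimal sketch, assuming a Linux system with strace installed and a hypothetical test tree at ./some-tree:

    # trace file-related syscalls issued by a recursive chmod; look for the
    # per-file fchmodat/fstatat (or newfstatat) pairs in the output
    strace -f -e trace=file chmod -R 775 ./some-tree 2>&1 | less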
Unixlore features measurements in Speeding Up Bulk File Operations where chmod(1) is timed against a find/xargs approach: the latter wins by orders of magnitude.
Here is the command line, adapted to the original question:
    find . -print0 | xargs -0 chmod 775

Two reasons (a rough timing sketch follows the list below):
- File system traversal is decoupled from the operations on the files via the pipe between the two processes, which may even run on different cores.
- fts(3) work is minimized, because xargs(1) 'flattens' out the directory tree.
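A minimal way to reproduce such a comparison on your own data; ./some-tree is a hypothetical test directory, and the -P/-n variant assumes GNU xargs:

    # time a recursive chmod against the decoupled find/xargs pipeline;
    # repeat each run a few times to average out page-cache effects
    time chmod -R 775 ./some-tree
    time sh -c 'find ./some-tree -print0 | xargs -0 chmod 775'

    # GNU xargs can also spread the work over several chmod processes
    # (-P 4), batching at most 1000 paths per invocation (-n 1000)
    time sh -c 'find ./some-tree -print0 | xargs -0 -P 4 -n 1000 chmod 775'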
So yes: you should definitely use find / xargs for a simple solution.
Other options:
- Play with the umask and the source code of the process(es) writing the new files (a brief umask sketch follows this list).
- If you are using Linux, chances are your system has the inotify kernel subsystem enabled. In that case, you can script an efficient solution via inotifywait(1); a sketch follows below.
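A brief sketch of the umask idea, assuming the writing process creates files and directories with the usual default modes of 666/777:

    # with a umask of 002, new files come out as 664 and new directories
    # as 775, so no chmod pass is needed afterwards
    umask 002
    touch newfile   # -> rw-rw-r-- (664)
    mkdir newdir    # -> rwxrwxr-x (775)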
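And a minimal inotifywait(1) sketch, assuming inotify-tools is installed and new files appear under a hypothetical ./incoming directory (filenames containing newlines would break the read loop):

    # fix permissions as soon as something is created or moved into the
    # watched tree, instead of rescanning everything periodically
    inotifywait -m -r -e create -e moved_to --format '%w%f' ./incoming |
    while IFS= read -r path; do
        if   [ -d "$path" ]; then chmod 775 "$path"
        elif [ -f "$path" ]; then chmod 664 "$path"
        fi
    done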
Side note: unless you want execute permissions on your files, I'd suggest modifying the invocation like so:
    find . -type f -print0 | xargs -0 chmod 664
    find . -type d -print0 | xargs -0 chmod 775
A comment on priming the disk cache with find . -printf "": this might speed up the subsequent chmod operations, but it depends on the available memory and I/O load, so it may or may not help. Decoupling the traversal (find) from the chmod operation already provides for caching, so priming the cache beforehand might be superfluous.
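If you want to check whether priming helps on your system, here is a rough benchmark sketch (Linux only; dropping the page cache needs root, and ./some-tree is again a hypothetical test directory):

    # cold cache: drop the page cache, then time the pipeline directly
    sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
    time sh -c 'find ./some-tree -print0 | xargs -0 chmod 775'

    # primed cache: drop the page cache, prime it with a plain traversal,
    # then time the same pipeline again and compare
    sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
    find ./some-tree -printf ""
    time sh -c 'find ./some-tree -print0 | xargs -0 chmod 775'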