suggested -exec, explained variants
Source Link
user unknown
  • 10.8k
  • 3
  • 37
  • 59

Looping over find's output is often bad practice, especially in general-purpose scripts, where you don't know much about the circumstances in which they will be used.

In my daily work, I often know enough about the files below a subdirectory to rule out the possibility that a problematic filename might be hit. And I never use blanks, newlines, tabs, and the like in my own filenames; in my opinion that's a bad idea in the first place, begging for problems.

For example, I have a script to shrink images from my camera. The names are generic (IMG00001.JPG) and will never change, and neither the path nor the filenames contain blanks. If I get a new camera, though, I will need to change the script.
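To make that concrete, here is a tiny self-contained sketch of the idea; the directory layout is invented, and echo stands in for the actual image tool (e.g. ImageMagick's mogrify):

```shell
#!/bin/sh
# Sketch only: the directory and file names are made up, and 'echo'
# is a stand-in for a real shrink command such as ImageMagick's mogrify.
dir=$(mktemp -d)
touch "$dir/IMG00001.JPG" "$dir/IMG00002.JPG" "$dir/notes.txt"
# find iterates over the matches itself -- no shell loop required:
shrunk=$(find "$dir" -type f -name 'IMG*.JPG' -exec echo shrinking {} ";" | wc -l)
echo "shrunk $shrunk images"
rm -r "$dir"
```

Because the filenames are known in advance to be "clean", there is no need to worry about word splitting here.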

For ad hoc commands, issued and then forgotten, it's not a problem. On single-user machines, if you don't share your code, it shouldn't be a problem either.

When giving advice on SO, though, it is often a problem if you don't communicate the pitfalls and assumptions you are making, because other people might have such a toxic filename.

GNU find on Linux has four options to run commands on the files it finds, like

find . -type f -name ... -exec do-smth-with {} ";" 

and besides -exec, there are -execdir, -ok and -okdir for similar purposes. The -dir versions perform the action from the directory in which the file was found, instead of from your current directory, which is the recommended behavior. The -ok versions ask for confirmation before performing the command.
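A small demonstration of the -exec/-execdir difference (the temporary directory and file names are made up; -ok and -okdir are left out because they prompt interactively):

```shell
#!/bin/sh
# With -exec, {} is the path as found relative to the starting point;
# with -execdir, find changes into the file's directory first, so {}
# becomes ./basename. The sandbox directory here is invented.
dir=$(mktemp -d)
mkdir -p "$dir/sub"
touch "$dir/sub/a.txt"
# -exec sees the full path from the starting point:
exec_out=$(find "$dir" -name a.txt -exec echo {} ";")
# -execdir runs the command from $dir/sub and sees ./a.txt:
execdir_out=$(cd "$dir" && find . -name a.txt -execdir echo {} ";")
echo "$exec_out"
echo "$execdir_out"
rm -r "$dir"
```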

So in almost all cases, you don't need a for loop. find iterates over the results by itself and handles blanks and the like in filenames on its own.

Terminating the command with ";" can be replaced by a plus sign (without quotes) if the invoked command gracefully handles a large batch of files as parameters.
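You can observe the difference in the number of invocations like this (again with made-up temporary files):

```shell
#!/bin/sh
# With ";" the command runs once per file; with "+" find batches the
# files onto as few command lines as possible. Counting the echo lines
# shows this. The temporary files are invented for the demo.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"
# One invocation (and one output line) per file:
per_file=$(find "$dir" -type f -exec echo {} ";" | wc -l)
# All three files passed to a single echo -> one output line:
batched=$(find "$dir" -type f -exec echo {} + | wc -l)
echo "$per_file invocations vs $batched"
rm -r "$dir"
```

This is also why "+" is usually much faster: it behaves like xargs, avoiding one process per file.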

After the -exec expression, you may even append further options and actions to find, like:

find -name "*.html" -execdir wc {} + -ls 

to give a simple example.

I can't vouch for other implementations of find.
