
I have a directory containing multiple very large files; the total size of the directory is around 285G, and even ls -ltrh takes a long time just to list the contents. I want to delete everything in that directory as quickly as possible. I tried the approach below, which takes around 45 minutes to clear the files and the directory. Is there a faster way to do this?

    [loguser@npdlogmt01 DEVW]$ du -sh 2021-03-26_TEST
    285G    2021-03-26_TEST
    [loguser@npmt01 DEV]$ cat Delete_Find_test_v10.out
    + date
    Sun Apr 11 11:20:43 UTC 2021
    + find /op_reqs_logs/OPC/DEV/2021-03-26_TEST/ONLINE/V10 -type f -iname '*txt' -delete
    + date
    Sun Apr 11 11:20:44 UTC 2021
    + find /op_reqs_logs/OPC/DEV/2021-03-26_TEST/BATCH/V10 -type f -iname '*txt' -delete
    + date
    Sun Apr 11 12:03:55 UTC 2021
    + exit 0

    rm -rf 2021-03-26_TEST
    Depending on what kind of filesystem (ext4, xfs, btrfs, etc.) you're using, if there are lots of files in a directory (tens of thousands or more), it can sometimes be faster to rename the directory (e.g. from foo to foo.old), create a new directory with the same name (and the same ownership and permissions), and then rm -rf the old directory. It certainly allows you to start using the new directory before all the old files have been deleted, which minimises downtime. This also ensures that the newly created directory is of minimal size, which can improve performance when using the directory. Commented Apr 16, 2021 at 13:25
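
A minimal sketch of the rename-then-delete approach described in the comment above. The directory name and owner (loguser) are taken from the question; the group name and mode 755 are assumptions, so adjust them to match the original directory:

    # Rename the full directory out of the way (atomic on the same filesystem),
    # recreate an empty one, then delete the old tree in the background.
    mv 2021-03-26_TEST 2021-03-26_TEST.old
    mkdir 2021-03-26_TEST
    chown loguser:loguser 2021-03-26_TEST   # assumed owner/group from the prompt
    chmod 755 2021-03-26_TEST               # assumed mode; copy from the old directory
    rm -rf 2021-03-26_TEST.old &            # deletion proceeds while the new dir is in use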

1 Answer


The size of the files is less important than the number of files. It should be faster to delete one big file than many small files.

The speed should be mostly I/O bound; it's unlikely that another approach would be significantly faster.
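
If you want to confirm that the file count, rather than the total size, is the bottleneck, you can count the entries first; the path below is taken from the question's transcript:

    # Each unlink is a separate metadata operation, so the number of
    # files (not their combined size) dominates deletion time.
    find /op_reqs_logs/OPC/DEV/2021-03-26_TEST -type f | wc -l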
