
I can't determine exactly what file is eating up my disk.

First, I used the df command to list my mounted filesystems:

Filesystem              1K-blocks      Used Available Use% Mounted on
devtmpfs                 16438304         0  16438304   0% /dev
tmpfs                    16449868         0  16449868   0% /dev/shm
tmpfs                    16449868   1637676  14812192  10% /run
tmpfs                    16449868         0  16449868   0% /sys/fs/cgroup
/dev/mapper/fedora-root  51475068  38443612  10393632  79% /
tmpfs                    16449868       384  16449484   1% /tmp
/dev/sda3                  487652     66874    391082  15% /boot
/dev/mapper/fedora-home 889839636  44677452 799937840   6% /home

Then I ran du -h / | grep '[0-9\,]\+G'.

The problem is that this returns everything, including directories on other filesystems. I specifically want to inspect /dev/mapper/fedora-root, but when I try du -h /dev/mapper/fedora-root | grep '[0-9\,]\+G' I get no results.

I need to know what's eating up 79% of the / filesystem.

How can I solve this?

  • If you want a graphical tool, you can install ncdu. Commented May 24, 2019 at 8:23
  • @Panki in these situations I find the dirstat GUIs much more effective. qdirstat/kdirstat are my go-tos for seeing big files/folders. Commented May 24, 2019 at 21:07
  • du -h --max-depth=1 / | awk '$1 ~ /G/' | sort, find the next interesting largest dir and then drill down with the same command (replacing / with your new target). Commented May 24, 2019 at 22:56

4 Answers


My go-to command in this situation is:

du -m . --max-depth=1 | sort -nr | head -20 

To use this:

  1. cd into the top-level directory containing the files eating space. This can be / if you have no clue ;-)
  2. run du -m . --max-depth=1 | sort -nr | head -20. This will list the 20 biggest subdirectories of the current directory, sorted by decreasing size.
  3. cd into the biggest directory and repeat the du ... command until you find the BIG file(s)
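The steps above can be sketched end to end. Here it runs against a throwaway directory tree (the names `big` and `small` are invented for the demo) so the output is predictable; on a real system you would point it at / instead:

```shell
# Build a small tree with one obviously large subdirectory.
tmp=$(mktemp -d)
mkdir -p "$tmp/big" "$tmp/small"
dd if=/dev/zero of="$tmp/big/blob"   bs=1M count=5 2>/dev/null
dd if=/dev/zero of="$tmp/small/blob" bs=1K count=5 2>/dev/null

# The command from step 2: list the largest first-level
# subdirectories, biggest first (sizes in MiB).
du -m "$tmp" --max-depth=1 | sort -nr | head -20

rm -rf "$tmp"
```

Per the comment below about `-x`, on the real / you would likely run `du -xm / --max-depth=1 | sort -nr | head -20` so du doesn't descend into other mounted filesystems such as /home.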
  • I would consider adding -x to du for this question's scenario. This prevents it from crossing filesystem boundaries and so doesn't include the other filesystems on the machine like /home, or any NFS mounts, mounted ISO files, etc., all of which the OP would presumably want to exclude since they are trying to find something taking up space on one particular filesystem. Commented May 24, 2019 at 16:42
  • That's an interesting choice of options. Personally, I'd rather use du -sh */ | sort -h. du -m would round everything up to at least a megabyte. du -h will display sizes with suffixes like K, M, and G, and sort -h will take these suffixes into account. Choosing du -s */ instead of du --max-depth=1 . allows one to refine the glob. Using head discards results that might have taken work to add up, so I think it's better to let everything be printed in case the sizes are evenly distributed. Avoiding sort -r puts the bigger results closer to the new command-line prompt. Commented May 24, 2019 at 18:53
  • du -sch * | sort -h | less is a handy equivalent for --max-depth=1 Commented May 25, 2019 at 17:16
  • Thanks for this, I managed to isolate directories one by one until I pinpointed the right location. Commented May 29, 2019 at 4:43

ncdu is a great tool for this kind of problem, and it is available as a package in most distributions.


You can use -x if you want to stay on only one filesystem, without crossing mount points. For example, as root:

ncdu -x /home 

It's the command line equivalent of DaisyDisk, Baobab or WinDirStat.

It might take a long time to scan a large folder, but once the scan is done it is very fast to find the largest files.

  • Thank you, I used this on my development server and it was easy to use, but sadly I can't install it on our production machine. Commented May 29, 2019 at 4:44

If you have a feel for the actual size of the file, you can search for files larger than a specific size.

E.g., to find files larger than 10 MiB:

find /mounted/drive -size +10M 

Or

find /mounted/drive -size +10M -exec ls -lh {} + 
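As a sanity check of the -size syntax, here is a throwaway demonstration (the file names are invented). Only the file above the 10 MiB threshold is matched, since +10M means "strictly more than 10 MiB":

```shell
# Create one large and one tiny file in a scratch directory.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/large.bin" bs=1M count=12 2>/dev/null
dd if=/dev/zero of="$tmp/tiny.bin"  bs=1K count=1  2>/dev/null

# Only large.bin exceeds 10 MiB, so only it is printed.
find "$tmp" -size +10M -exec ls -lh {} +

rm -rf "$tmp"
```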

Httqm's suggestion is also good if the problem isn't one big file but a large collection of smaller files. That is, use du to show directory totals. Limiting with --max-depth is very useful with large directory trees:

du -m some/directory --max-depth=1 | sort -nr | head -20
du some/directory --max-depth=1 | sort -n | tail -21

The first will break a single directory down into its sub-directories; the second also gives you the total for the directory you're listing.

  • Instead of -exec ls .. you can also use -ls for some more info. Commented May 25, 2019 at 11:19

Use this command to find out which directories are the largest:

du -a / | sort -n -r | head 
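Because of -a, this lists individual files as well as directories, so the largest single files surface directly in the output. A throwaway demonstration (the directory and file names are invented):

```shell
# Scratch tree with one large file buried in a subdirectory.
tmp=$(mktemp -d)
mkdir -p "$tmp/logs"
dd if=/dev/zero of="$tmp/logs/huge.log" bs=1M count=8 2>/dev/null

# -a includes files, so huge.log itself appears in the listing,
# not just its parent directory.
du -a "$tmp" | sort -n -r | head

rm -rf "$tmp"
```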
  • I often use du -h | sort -h | less. Or leave out the less, because the largest directories will be at the end of the output. And the rest are there in the scrollback if you want them. Or use du -sch *. Commented May 25, 2019 at 17:14
