
df and du both report that my root partition, a 100GB SSD, is full: df shows no space remaining, and du reports 100G used, 85G of it in /home/steven alone. Summing the per-line disk usage reported by du, however, comes to less than 13G.

How can I fix this?

Specifically:

~ » du -sh ~
85G     /home/steven
~ » du -b ~ | wc -l
15041
~ » du -h ~ | sort -h | tail -n 1
85G     /home/steven    # 91088489808 bytes if using -b for du
~ » du -b ~ | sort -n | head -n 15040 | cut -f 1 | perl -ne 'BEGIN{$i=0;} $i+=$_; END{print $i.qq|\n|;}'
12735983847     # 11-12G, roughly

There's a huge discrepancy between 85G and 11G or 12G, obviously. I ran lsof +L1 and eliminated all of the processes with files marked deleted, but still no luck.

I have several soft links in $HOME pointing to directories (e.g., repos) on an external hard drive. Some Stack Exchange posts I read suggest that could be a factor, but I can't see how it applies here.

What should I do next?

  • What about a tune2fs -m 0 /dev/...? Commented Feb 4, 2015 at 21:04
  • No change in du after running that, unfortunately, but it did free up 5G per df. Commented Feb 4, 2015 at 21:10
  • 1
    Why would you filter out deleted files? They're still going to be on disk until the file handle is closed. Commented Feb 4, 2015 at 21:19
  • I'd also turn your reserved space back on unless this is on the /home filesystem. If it's on root you actually need that space, just probably not as much space as it reserved automatically. Commented Feb 4, 2015 at 21:23
  • @Bratchley, I ran kill -9 on all of the PIDs. Won't that release the file handles? Also, how much space would you recommend allocating? Commented Feb 4, 2015 at 21:37

2 Answers


du does a depth-first traversal of the given tree. By default, it prints one line per directory, showing each directory's inclusive disk usage — that is, the directory itself plus everything beneath it:

$ du ~
4       /home/bob/Videos
40      /home/bob/.cache/abrt
43284   /home/bob/.cache/mozilla/firefox
43288   /home/bob/.cache/mozilla
12      /home/bob/.cache/imsettings
48340   /home/bob/.cache
4       /home/bob/Documents
48348   /home/bob

If given the -a option, it will additionally show the size of every file.

With the -s option, it will show just the total size of each argument file or directory tree.

$ du -s ~
48348   /home/bob
$ du -s ~/*
4       /home/bob/Videos
4       /home/bob/Documents

So, when you ran

$ du -b ~ | wc -l
15041
$ du -b ~ | sort -n | head -n 15040 | cut -f 1 | \
    perl -ne 'BEGIN{$i=0;} $i+=$_; END{print $i.qq|\n|;}'
12735983847

you were summing the size of everything under your home directory multiple times, because the size on each line is inclusive of all of that directory's subdirectories. At the same time, by omitting the final line of du's output — the line for /home/steven itself — your sum left out every regular file in the top level of your home directory (du doesn't print a separate line for regular files without -a). So the sum never included your very large .xsession-errors file.
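The double counting is easy to reproduce with a throwaway directory (the path /tmp/du-demo and the 1 MiB file are demo assumptions; GNU du is assumed for -b):

```shell
# Summing every line of plain `du` counts each file once per ancestor
# directory listed; `du -s` counts it exactly once.
mkdir -p /tmp/du-demo/sub
head -c 1048576 /dev/zero > /tmp/du-demo/sub/file   # one 1 MiB file

sum_all=$(du -b /tmp/du-demo | cut -f 1 | awk '{s+=$1} END{print s}')
total=$(du -sb /tmp/du-demo | cut -f 1)
echo "sum of all lines: $sum_all"   # counts the file twice (sub and du-demo)
echo "du -sb total:     $total"     # counts it once
```

The difference between the two numbers is at least the size of the file, which is exactly the kind of inflation the 15040-line sum above produced.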

And when you ran

du -sb ~ returns 91296460205, but the sum of du -sb ~/* is only 1690166532 

your du -sb ~/* output didn't include any files or directories in your home directory whose names begin with a dot, because the shell's * glob skips them by default.
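You can see the glob skipping dot files with a throwaway directory (/tmp/glob-demo is an assumed path; the .[!.]* pattern is one portable way to match dot entries while excluding . and ..):

```shell
mkdir -p /tmp/glob-demo
touch /tmp/glob-demo/visible /tmp/glob-demo/.hidden

# * alone skips the dot file:
n_star=$(ls -d /tmp/glob-demo/* | wc -l)
# Adding an explicit dot pattern picks it up:
n_both=$(ls -d /tmp/glob-demo/* /tmp/glob-demo/.[!.]* | wc -l)
echo "$n_star $n_both"
```

In bash you can instead run shopt -s dotglob, after which * itself matches dot files and du -sb ~/* would include them.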

Both du ~ | tail -1 and du -s ~ should do a reasonable job of showing your home directory's disk usage (not including deleted-but-open files, of course), but if you want to sum up all the file sizes without relying on du, you can do something like this (assuming a modern find that supports the printf %s format to show the size in bytes):

find ~ -type f -printf '%s\n' | perl -ne 'BEGIN{$i=0;} $i+=$_; END{print $i.qq|\n|;}'
  • Thank you for the detailed reply, Mark. And for the -printf trick, which is much quicker than the -exec du -h '{}' \; that I used. Commented Feb 5, 2015 at 20:13

You're summing the files' byte sizes, but the filesystem allocates space in blocks, which are probably much larger than 1 byte. For an accurate count, round each file's size up to a multiple of the filesystem block size.

With GNU coreutils installed, you can run stat --file-system $HOME to find the block size of the filesystem.

On average, files will waste half a block. Multiply half a block by the number of files in $HOME and see if the result is close to 70GiB. If so, then your mystery is solved.
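The rounding itself can be sketched with find and awk; the path /tmp/round-demo and the 4096-byte block size are demo assumptions (GNU find is assumed for -printf):

```shell
bs=4096
mkdir -p /tmp/round-demo
head -c 1    /dev/zero > /tmp/round-demo/tiny   # 1 byte  -> occupies one block
head -c 5000 /dev/zero > /tmp/round-demo/two    # 5000 B  -> occupies two blocks

# Round each file's size up to a whole number of blocks, then sum.
rounded=$(find /tmp/round-demo -type f -printf '%s\n' |
  awk -v bs="$bs" '{ s += int(($1 + bs - 1) / bs) * bs } END { print s }')
echo "$rounded"   # 12288 = 4096 + 8192
```

Substitute your real block size from stat --file-system "$HOME" and run it against ~ to get the allocated (rather than apparent) total.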

5
  • Thanks for the block size tip. $HOME's block size is 4096 bytes. Including directories (find ~ | wc -l), $HOME has 81846 files--more than what du found. Commented Feb 5, 2015 at 1:27
  • .xsession-errors was 83GB, per du -h .xsession-errors. Oddly, that file was excluded from du -h ~/ | sort -h. Commented Feb 5, 2015 at 1:42
  • After deleting that file, du -sh / shows 20G now, but df shows 97% usage. Commented Feb 5, 2015 at 1:46
  • lsof +L1 shows that the .xsession-errors file descriptor is still open, held by gdm. Would it be a disaster if I truncate the file in /proc/pid/fd/? Commented Feb 5, 2015 at 1:49
  • sudo truncate -s 0 /proc/[pid]/fd/[fd] worked. Commented Feb 5, 2015 at 10:30
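The truncate-through-/proc trick from the last comment works because the /proc/PID/fd entry re-opens the same inode even after the name has been unlinked. A self-contained, Linux-only sketch (the path and the fd number 3 are demo assumptions; GNU coreutils stat and truncate are assumed):

```shell
f=/tmp/open-demo.log
head -c 1048576 /dev/zero > "$f"   # a 1 MiB stand-in for .xsession-errors
exec 3< "$f"                       # hold the file open on fd 3
rm "$f"                            # unlinked, but its blocks are still in use

before=$(stat -Lc %s /proc/$$/fd/3)
truncate -s 0 /proc/$$/fd/3        # reclaim the space without closing the fd
after=$(stat -Lc %s /proc/$$/fd/3)
echo "before=$before after=$after"
exec 3<&-                          # now release the descriptor
```

This is why df only dropped after the truncate: until then, the unlinked file's blocks stayed allocated to the open descriptor held by gdm.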
