
So df is telling me that I'm using 49G of 59G, so I set out to find where my space went.

```
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        59G   49G  7.9G  86% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            2.0G  4.0K  2.0G   1% /dev
tmpfs           396M  340K  396M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            2.0G     0  2.0G   0% /run/shm
none            100M     0  100M   0% /run/user
overflow        1.0M     0  1.0M   0% /tmp
```

I first tried du:

```
5.1G    .
2.3G    ./var
1.3G    ./usr
973M    ./var/log
834M    ./lib
736M    ./lib/modules
713M    ./var/lib
600M    ./root
592M    ./var/log/nginx
456M    ./var/www
```

Now, I'm no math major, but I'm pretty sure that's not 49G.

I tried ncdu, and that's telling me I'm using 5GB / 128GB.

  • Can you run du --apparent-size for us? ("although the apparent size is usually smaller, it may be larger due to holes in ('sparse') files, internal fragmentation, indirect blocks, and the like") Commented Aug 24, 2015 at 15:41
  • `129T .   128T ./proc   2.3G ./var   1.1G ./usr   980M ./var/log   800M ./lib   704M ./lib/modules   703M ./var/lib   599M ./root   595M ./var/log/nginx` Commented Aug 24, 2015 at 16:42
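For anyone reproducing the comparison in the comments above: the exact flags the commenter used aren't shown, so the following is only a sketch under assumptions (GNU du; -x keeps du on the root filesystem, so virtual mounts like /proc don't inflate the numbers the way the 128T figure above does).

```
# Sketch of the comparison above, assuming GNU du; the flags are assumptions,
# not the exact command used in the comments.
$ sudo du -shx /                   # allocated (on-disk) usage of the root filesystem
$ sudo du -shx --apparent-size /   # apparent size, staying off /proc and other mounts
```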

2 Answers


One cause of this is files that have been deleted from the disk but are still held open by a process. Although deleted, these files still count as space used on the disk.

You can check if you have any files in this state (and also which process is holding them open) with lsof.

```
$ lsof | grep deleted
open.pl   15220  steve    3r   REG   8,1        70   56099817 /home/steve/scratch/in.txt (deleted)
```

The 7th column (70 in the above case) is the size of the file in bytes. The first column (open.pl) is the process that's holding the file open. The PID of the process is the second column.
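If you want a rough total of how much space is tied up this way, you can sum that size column. A minimal sketch, assuming GNU awk and that every matching line is a regular file with a numeric size in column 7:

```
# Rough estimate (in GiB) of space held by deleted-but-still-open files.
# Assumes the size is in column 7, as in the lsof output above.
$ sudo lsof -nP | grep '(deleted)' | awk '{sum += $7} END {printf "%.1f GiB\n", sum/1024/1024/1024}'
```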

Typically this happens when huge log files are removed, but the process that was using them was not restarted.

To free this space, simply restart the service that is still holding the file(s) open.
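As a hedged example of the check-and-restart cycle (nginx is only a guess here, suggested by the asker's /var/log/nginx entry; substitute whatever process lsof actually reports):

```
# List open files whose on-disk link count is zero, i.e. deleted files
# that some process is still holding open.
$ sudo lsof +L1

# Then restart the offending service. nginx is an assumption, not a
# confirmed culprit.
$ sudo service nginx restart      # SysV/Upstart-style init
$ sudo systemctl restart nginx    # or, on systemd machines
```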

  • I think we have a winner... I see all my big log files in there... Commented Aug 24, 2015 at 16:43
  • Simply restart those services and that disk space will be returned (although not always instantaneously). Commented Aug 24, 2015 at 16:49
  • Restarting the service didn't help, but I was able to truncate the file as per this: unix.stackexchange.com/a/68532/96635. Thanks for pointing me in the right direction! Commented Aug 25, 2015 at 3:57
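For completeness, the truncation route mentioned in the last comment works by emptying the still-open file through the owning process's file descriptor under /proc. A minimal sketch, reusing the PID 15220 and fd 3 from the lsof example above (those values are illustrative, not the asker's actual process):

```
# Truncate the deleted-but-open file in place; the PID and fd number come
# from the lsof output (15220 and 3 are the example values above).
$ sudo truncate -s 0 /proc/15220/fd/3
# or, equivalently:
$ sudo sh -c '> /proc/15220/fd/3'
```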

There is always a bit "missing"... First of all, the inode tables, the filename lists in directories, and other filesystem metadata take up some space. I remember old floppies: they were 1.44MB, but with an MS-DOS FAT filesystem they could only hold about 1.38MB.

Second, Linux/Unix filesystems usually reserve a few percent of the space for root. That way, if users fill up the whole filesystem, there is still room for root to maneuver, and for logs belonging to root to be written. The default is 5%, which on a disk of a terabyte or two becomes quite a bit (it's safe to reduce it somewhat).

The tune2fs command lets you specify the percentage with the -m option: tune2fs -m 1 /dev/sda1 reserves just 1% (you should keep at least that much).
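To see what is currently being held back on the asker's root filesystem, you can query the reserved block count. This assumes /dev/vda1 (the device from the df output above) is an ext2/3/4 filesystem, which is what tune2fs applies to:

```
# Show the reserved-blocks setting; multiply it by the block size (commonly
# 4096 bytes) to get the reserved space. At the default 5%, a 59G filesystem
# holds back roughly 3G for root.
$ sudo tune2fs -l /dev/vda1 | grep -iE 'reserved block count|block size'
```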
