I have a weird problem on one of our servers. Almost half of my disk space is missing.
```
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       271G  122G  149G  46% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.8G  8.0K  3.8G   1% /dev/shm
tmpfs           3.8G  8.6M  3.8G   1% /run
tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/sda1       497M  120M  378M  24% /boot
tmpfs           778M     0  778M   0% /run/user/600
```

On the other hand, `du` shows only 6 GB used:
```
du -hs /
6.0G    /
```

This is a server where logs often fill the disk to 100%, so my first response was to restart the rsyslog daemon, but that had no effect. I also rebooted the server, so it can't be files that were deleted but are still held open by some process. I looked at https://serverfault.com/questions/299839/linux-disk-space-missing, where someone suggested running fsck on reboot, but that didn't help. On the same page, someone suggested looking for files on additional mount points, but there are none. I am looking for more suggestions.
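One more check I plan to try, related to the mount-point suggestion: data sitting underneath a directory that later had something mounted over it is invisible to `du` but still counted by `df`. Bind-mounting / to a scratch directory makes those files visible again. A rough sketch (/mnt/rootcheck is just a placeholder path):

```
# Bind-mount the root filesystem so files hidden underneath other
# mounts (/dev, /run, /boot, ...) become visible to du again.
mkdir -p /mnt/rootcheck
mount --bind / /mnt/rootcheck

# -x keeps du on this one filesystem, so the total is comparable to df
du -hxs /mnt/rootcheck

umount /mnt/rootcheck
rmdir /mnt/rootcheck
```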
The output of fdisk:
```
fdisk -l /dev/sda

Disk /dev/sda: 299.5 GB, 299506860032 bytes, 584974336 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x000f1d8a

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    17018879     7996416   82  Linux swap / Solaris
/dev/sda3        17018880   584974335   283977728   83  Linux
```
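To rule out a partitioning problem, I checked that the sda3 size from fdisk matches what df reports, so the unaccounted space has to be inside the filesystem itself:

```
# sda3 spans sectors 17018880..584974335, 512 bytes per sector
echo $(( (584974335 - 17018880 + 1) * 512 ))           # 290793193472 bytes
echo $(( (584974335 - 17018880 + 1) * 512 / 1024**3 )) # ~270 GiB, consistent with df's 271G
```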
The `lsblk` output:

```
lsblk -f /dev/sda
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
├─sda1 xfs          78e8f824-1a2a-4c60-ab7b-6126a192932d /boot
├─sda2 swap         bdbe969d-c59d-4956-ae69-71e2825f93dc [SWAP]
└─sda3 xfs          a9c9da10-5e99-4a14-a207-490e3f676617 /
```

`xfs_quota -x -c 'free -h -b'` output:

```
Filesystem           Size   Used  Avail Use% Pathname
/dev/sda3          270.7G 127.1G 143.5G  47% /
/dev/sda1          496.5M 119.0M 377.5M  24% /boot
```

`xfs_quota -x -c 'quota -h'` doesn't return anything; nobody has set any quotas. This is one server out of several hundred with the same configuration and partition layout deployed at our branch offices, but only this one has the problem. For specific reasons, it's the only one whose disk regularly fills to 100%. We delete some logs manually every week or two.
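Since the filesystem is XFS, I'm also considering checking the filesystem's own space accounting instead of relying on df alone. A sketch of what I have in mind (read-only queries; xfs_db results on a mounted filesystem may be slightly stale, and xfs_repair -n needs the filesystem unmounted, e.g. run from rescue media):

```
# Summarize free-space extents straight from the on-disk metadata
# (-r opens the device read-only)
xfs_db -r -c 'freesp -s' /dev/sda3

# No-modify consistency check; only valid on an unmounted filesystem
xfs_repair -n /dev/sda3
```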
The `fdisk` output makes little sense. For example, `sda1` does not use 512000 blocks of 512 bytes each; looking at the start and end sector addresses, it spans 1024000 sectors.

You did run `du -hs /` as root, didn't you?
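As far as I can tell, the fdisk numbers do add up once the Blocks column is read as 1 KiB blocks rather than 512-byte sectors:

```
echo $(( 1026047 - 2048 + 1 ))    # 1024000 sectors of 512 bytes for sda1
echo $(( 1024000 * 512 / 1024 ))  # 512000 KiB blocks, matching fdisk's Blocks column
```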