I have 2 XFS filesystems where space seems to disappear mysteriously.

The system (Debian) was installed many years ago (12 years ago, I think). The 2 XFS filesystems were created at that time. Since then, the system has been updated, both software and hardware, and both filesystems have been grown a few times. It’s now running an up-to-date 32-bit Debian Jessie, with a 64-bit 4.9.2-2~bpo8+1 Linux kernel from the backports archive.

Now, within days, I see the used space on those filesystems grow much more than the files on them can account for. I have checked with lsof +L1 that it is not caused by files that have been deleted but are kept open by some process. I can reclaim the lost space by unmounting the filesystems and running xfs_repair.
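
For reference, the lsof check looks roughly like this (using /home as the example mount point; on my system it simply comes back empty):

~# lsof +L1 /home
~#

An empty result means no process is holding open a file on that filesystem with a link count of 0; otherwise lsof would list the process, with NLINK 0 and the file name marked “(deleted)”.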

Here is a transcript that shows it:

~# df -h /home
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-home  2.0G  1.7G  361M  83% /home
~# du -hsx /home
1.5G    /home
~# xfs_estimate /home
/home will take about 1491.8 megabytes
~# umount /home
~# xfs_repair /dev/system/home
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
sb_fdblocks 92272, counted 141424
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
~# mount /home
~# df -h /home
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-home  2.0G  1.5G  521M  75% /home
~#

In this example, “only” 161MB were lost, but if I wait too long, the filesystem reaches 100% full and I have real problems…

If that matters, both filesystems are bind-mounted in a LXC container. (I don’t have any other XFS filesystem on this system.)
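
In case it helps, a bind mount like that is typically declared in the container’s config with an entry along these lines (the paths here are illustrative, not my exact configuration):

lxc.mount.entry = /home home none bind 0 0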

Does anybody have an idea why this happens, or how I should investigate?

  • No need to tell me I should use a different filesystem. I am already considering converting those to ext4. But after using XFS for more than 10 years with no problem, I’d still like to understand what’s happening. Commented Mar 16, 2017 at 20:15
  • Check out serverfault.com/questions/406069/…. Commented Mar 17, 2017 at 9:45
  • @FerencWágner Thanks for pointing this out, it was very interesting. Unfortunately, I just tried defragmenting with xfs_fsr -v, flushing with sync; echo 3 > /proc/sys/vm/drop_caches, and mounting the filesystem with the option allocsize=4096 (see the sketch after these comments). None of them could reclaim the lost space. On the other hand, xfs_repair could. So my problem may be unrelated to the “XFS Dynamic Speculative EOF Preallocation”. Commented Mar 22, 2017 at 22:24
  • I have the same problem: askubuntu.com/questions/865141/… It will be interesting to see if this kernel patch fixes the issue: patchwork.kernel.org/patch/9566285 Commented May 2, 2017 at 21:16
  • In my case, xfs_fsr -v just solved my problem. But I don't know the exact cause. Commented Feb 18, 2024 at 9:06
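
For reference, the workarounds mentioned in the comment above were along these lines (assuming /home as the affected mount point and the device from the transcript; none of them reclaimed the space here):

~# xfs_fsr -v /home
~# sync; echo 3 > /proc/sys/vm/drop_caches
~# umount /home
~# mount -o allocsize=4096 /dev/mapper/system-home /home

The allocsize=4096 option effectively pins the EOF preallocation to a fixed 4 KiB instead of letting it grow dynamically.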

1 Answer

xfs_admin -c 1 xxx 

Use lazy-counters mode on the XFS filesystem.
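
A rough sketch of how this could be applied, assuming the device from the question (xxx above is a placeholder for the block device; xfs_admin requires the filesystem to be unmounted, and xfs_info should afterwards report lazy-count=1 in its log section):

~# umount /home
~# xfs_admin -c 1 /dev/mapper/system-home
~# mount /home
~# xfs_info /home | grep lazy-count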

  • Welcome to the site. OP asked what they were doing wrong. A good answer should address the question asked, not just provide an unexplained solution. Commented Nov 8, 2019 at 12:09
