
As you can see in this dumpe2fs -h output (I snipped the tail but kept the head in case something there matters), I have more 'Free blocks' (about 86,000 more, in fact) than are reserved, but I get a "no space left on device" error even for a tiny file (echoing something into a file as a test).

Color me stumped.

 dumpe2fs 1.41.12 (17-May-2010)
 Filesystem volume name:
 Last mounted on:
 Filesystem UUID:          b7d8fde6-faa4-4c13-b310-32f302cc6db6
 Filesystem magic number:  0xEF53
 Filesystem revision #:    1 (dynamic)
 Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
 Filesystem flags:         signed_directory_hash
 Default mount options:    (none)
 Filesystem state:         clean
 Errors behavior:          Continue
 Filesystem OS type:       Linux
 Inode count:              9707520
 Block count:              38808000
 Reserved block count:     1940400
 Free blocks:              2026361
 Free inodes:              9583170
 First block:              0
 Block size:               4096
 Fragment size:            4096
 Reserved GDT blocks:      1014
 Blocks per group:         32768
 Fragments per group:      32768
 Inodes per group:         8192
 Inode blocks per group:   512
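Doing the arithmetic on those figures (a quick sanity check, using the 4096-byte block size reported above):

 # Space usable by non-root users = (free blocks - reserved blocks) * block size
 echo $(( (2026361 - 1940400) * 4096 / 1024 / 1024 ))   # prints 335 (MiB)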
  • Do you have quotas enabled or anything like that? Commented Feb 6, 2011 at 2:11
  • fsck would be a good idea. Commented Feb 6, 2011 at 15:17
  • lsof | grep deleted might also give a potential clue (see the sketch below). Commented Feb 26, 2012 at 9:47
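On that last suggestion: a deleted-but-still-open file keeps its blocks allocated until every process holding it closes it, so the space shows up as used even though no visible file accounts for it. A sketch of how to look for such files (run as root):

 lsof +L1                # open files with link count 0, i.e. deleted but still held open
 lsof | grep deleted     # the grep variant suggested in the comment above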

3 Answers


Your 160 GB partition is 94.78% full, and its file system is using the default reserved-block percentage (5%).

That leaves only about 0.22% of the disk available to non-root users (some 86,000 blocks of 4 KiB, i.e. roughly 350 MB). There is not much point trying to work out exactly why a tiny file triggers a disk-full error with so little headroom.

Your system might, at the same time, be creating log or temporary files that fill this space. Journaling might also play a role here, i.e. your tiny file isn't written directly but goes through an intermediate location that can require extra space.
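If the root reservation itself is the bottleneck, it can be lowered with tune2fs. A sketch, where /dev/sdXN is a hypothetical placeholder for the affected partition:

 tune2fs -m 1 /dev/sdXN                            # lower the root-reserved blocks from 5% to 1%
 tune2fs -l /dev/sdXN | grep -i 'reserved block'   # verify the new reserved block count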


You are probably experiencing disk corruption. Boot into single-user or recovery mode and run fsck on the affected partition(s).
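For an ext3 file system like this one, that would look something like the following (a sketch; /dev/sdXN is a hypothetical placeholder for the affected partition, which must not be mounted read-write while checking):

 umount /dev/sdXN       # fsck must not run on a read-write mounted file system
 e2fsck -f /dev/sdXN    # -f forces a full check even if the superblock says clean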

  • There is no evidence of disk corruption. Commented Feb 26, 2012 at 15:33

Please check the number of inodes available with

df -i /FILESYSTEM-IN-QUESTION 

If you are running out of inodes, you need to find the maze of twisty little files that are filling up the inode table and consolidate them.

If, for example, you had 9 million files in /tmp, this could cause the problem.
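One rough way to find such directories (a sketch, using the /tmp example above; adjust the path to the file system in question):

 df -i /tmp                        # inode usage on the file system holding /tmp
 find /tmp -xdev -type f | sed 's|/[^/]*$||' | sort | uniq -c | sort -rn | head
                                   # directories with the most files, largest count first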

  • The number of available inodes is already known. There are only 124,350 inodes used out of 9,707,520, i.e. 98.7% are free. This isn't the problem here. Commented Feb 26, 2012 at 9:37
