
I have around one hundred .tar files and they include millions of small files. I want to extract them to one directory in Linux.
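For reference, the extraction step described above can be sketched as a simple loop (the target path `/mnt/data/extracted` is made up; substitute your own):

```shell
#!/bin/sh
# Extract every .tar in the current directory into a single target directory.
# /mnt/data/extracted is a hypothetical path; replace it with your own.
mkdir -p /mnt/data/extracted
for f in *.tar; do
    tar -xf "$f" -C /mnt/data/extracted
done
```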

These .tar files are on a mounted LVM ext4 file system. But no matter whether I extract to the mounted FS or to my root directory, tar reports "No space left on device".

However, when I check the remaining space with df -h and df -i, there is plenty left (700G on my root partition and 30T on the mounted FS).

Therefore, what could be the problem here?

[Screenshots in the original question: output of df -h and df -ih]

  • Please update your question with the output of "df -ih" and "df -h". Also show the commands that you attempt to run along with the unaltered error messages. Commented Apr 7, 2024 at 5:08
  • How many millions is that and are they all extracted into the same directory? Generally you want to avoid more than a few thousand files in a single directory. Commented Apr 7, 2024 at 6:14
  • The case is special here as the program requires all the files to be in the same directory. Commented Apr 7, 2024 at 15:30
  • How many files exactly does the archive contain and in what partition are you attempting to extract it? Commented Apr 7, 2024 at 16:44
  • Please don't post pictures of text. Commented Apr 7, 2024 at 20:36

2 Answers


According to this blog entry, the author estimates that the maximum normal ext4 directory size is about 10 million files unless the large_dir filesystem option is enabled:

Suppose, not entirely hypothetically, that one day you discover your (Linux) kernel logging messages like this:

EXT4-fs warning (device md1): ext4_dx_add_entry:2461: Directory (ino: 102236164) index full, reach max htree level :2
EXT4-fs warning (device md1): ext4_dx_add_entry:2465: Large directory feature is not enabled on this filesystem

Congratulations, of a sort. You've managed to accumulate so many files in the directory that it has filled up, in a logical sense. Unfortunately, further attempts to create files will fail; in fact they are already failing, because that's how you get the error message. If you're lucky, your software is logging error messages and you're noticing them. Also, since your directory got so large, you may have an unpleasant surprise coming your way.

('Large' here is relative. The directory that this happened to was only about 575 MBytes as reported by 'ls -lh'; this is very large for a directory, but not that large for a modern file. The filesystem as a whole had tons of space free.)

I would suggest looking through the kernel logs using dmesg to see whether you are getting messages like this, and then considering whether you need to enable the large_dir feature using tune2fs with the -O large_dir option.
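As a sketch of that check and fix (the device name /dev/md1 is a placeholder; substitute the block device backing your filesystem):

```shell
#!/bin/sh
# Search the kernel log for the ext4 "index full" warning
# (dmesg may require root on some distributions).
dmesg | grep -i 'ext4.*index full'

# If the warning appears, enable the large_dir feature on the filesystem.
# /dev/md1 is a hypothetical device; use your own. This requires
# e2fsprogs 1.43+ and a kernel with large_dir support.
sudo tune2fs -O large_dir /dev/md1
```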

  • I found the warning in dmesg, thanks a lot! Commented Apr 7, 2024 at 22:54

There is enough disk space, but you may have reached the maximum number of inodes the file system can hold, because of the huge number of files on the disk.

Run df -ih to see the used inode percentage; is it perhaps at 100%?

  • The used inode percentage is at 4% and 12% in the mounted FS and the root one. Commented Apr 7, 2024 at 4:44
  • @Lesterth please add that to your question so it's ready for people to find and read Commented Apr 7, 2024 at 20:38
