This output suggests the filesystem has 28786688 inodes in total; once they are all used, the next attempt to create a file in the root filesystem (device /dev/sda2) will return ENOSPC ("No space left on device").
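
For context, those numbers come from df -i. A hypothetical run on such a system might look like this (only the Inodes total comes from the output discussed above; the used/free counts and the failing touch are made up purely for illustration):

    $ df -i /
    Filesystem       Inodes  IUsed    IFree IUse% Mounted on
    /dev/sda2      28786688 500000 28286688    2% /

    # once IFree reaches 0, creating a new file fails
    # even though df -h may still show free space:
    $ touch newfile
    touch: cannot touch 'newfile': No space left on device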

Explanation: in the original *nix filesystem design, the maximum number of inodes is fixed at filesystem creation time, and dedicated space is allocated for them. You can run out of inodes before you run out of space for data, or vice versa. The most common default Linux filesystem, ext4, still has this limitation. For information about inode sizes on ext4, see the manpage for mkfs.ext4.
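
As a rough sketch (assuming /dev/sda2 is that ext4 filesystem), you can inspect the fixed inode count and inode size with tune2fs; changing them effectively means re-creating the filesystem, e.g. with the mkfs.ext4 options -i (bytes-per-inode ratio) or -N (number of inodes):

    # show the inode geometry chosen at mkfs time
    sudo tune2fs -l /dev/sda2 | grep -Ei 'inode (count|size)'

    # example only - this re-creates the filesystem and DESTROYS its data:
    # allocate one inode per 16 KiB of space instead of the default ratio
    #sudo mkfs.ext4 -i 16384 /dev/sda2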

Linux supports other filesystems without this limitation. On btrfs, space for inodes is allocated dynamically. "The inode structure is relatively small, and will not contain embedded file data or extended attribute data." (ext3/4 allocates some space inside inodes for extended attributes.) Of course you can still run out of disk space by creating too much metadata / too many directory entries.

Thinking about it, tmpfs is another example where inodes are allocated dynamically. It's hard to know what the maximum number of inodes reported by df -i would actually mean in practice for these filesystems. I wouldn't attach any meaning to the value shown.
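
For example (behaviour can vary with kernel and coreutils versions, and the mount point below is just a placeholder), df -i on a btrfs filesystem typically reports zeros, which is a good hint that the column carries no useful information there:

    $ df -i /mnt/btrfs
    Filesystem     Inodes IUsed IFree IUse% Mounted on
    /dev/sdb1           0     0     0     - /mnt/btrfs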

"XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule.

"BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h" – Peter Cordes in a comment to this answer
