This output suggests that the filesystem on /dev/sda2 has 28786688 inodes in total; once they are all in use, any attempt to create another file will fail with ENOSPC ("No space left on device").
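A quick way to keep an eye on this is to compare inode usage with block usage for the same filesystem (a sketch; /dev/sda2 and the mount point are just the ones from the output above, substitute your own):

    df -i /     # Inodes, IUsed, IFree for the filesystem holding /
    df -h /     # block (data) usage for the same filesystem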
In the original *nix filesystem design, the maximum number of inodes is fixed at filesystem creation time, and dedicated space is set aside for them. You can run out of inodes before you run out of space for data, or vice versa. ext4, the most common default Linux filesystem, still has this limitation. For details on inode counts and sizes on ext4, see the mkfs.ext4 man page.
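If you know in advance that an ext4 filesystem will hold a very large number of small files, you can raise the inode count at mkfs time. A hedged example using standard mkfs.ext4 options (/dev/sdXN is a placeholder device):

    # reserve one inode per 4 KiB of space (more inodes than the default ratio)
    mkfs.ext4 -i 4096 /dev/sdXN

    # or request an absolute number of inodes instead
    mkfs.ext4 -N 50000000 /dev/sdXN

    # verify the result afterwards
    tune2fs -l /dev/sdXN | grep -i 'inode count'

Note that this only works at creation time; the inode count of an existing ext4 filesystem cannot be changed without recreating it.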
On btrfs, space is allocated dynamically. "The inode structure is relatively small, and will not contain embedded file data or extended attribute data." (ext3/4 allocates some space inside the inode for extended attributes.) Of course you can still run out of disk space by creating too much metadata or too many directory entries; it's just that you can't run out of inodes.
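On btrfs, df -i is therefore not very informative (the inode columns typically read 0); metadata consumption shows up in the btrfs-specific space reports instead. A sketch, assuming the filesystem is mounted at /mnt/btrfs:

    df -i /mnt/btrfs                   # inode columns are typically 0 on btrfs
    btrfs filesystem df /mnt/btrfs     # shows how much space the Metadata block groups use
    btrfs filesystem usage /mnt/btrfs  # more detailed breakdown of data vs. metadata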
Thinking about it, tmpfs is another example where inodes are allocated dynamically: the maximum reported by df -i is just the nr_inodes mount option (a cap, not pre-allocated space), so it's hard to say what that number actually means in practice.
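You can see this by setting the cap explicitly; the mount point and sizes below are arbitrary:

    mkdir -p /mnt/tmptest
    mount -t tmpfs -o size=64m,nr_inodes=1024 tmpfs /mnt/tmptest
    df -i /mnt/tmptest    # Inodes shows 1024; IUsed only grows as files are created
    umount /mnt/tmptest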
"XFS also allocates inodes dynamically. So does JFS. So did/does reiserfs. So does F2FS. Traditional Unix filesystems allocate inodes statically at mkfs time, and so do modern FSes like ext4 that trace their heritage back to it, but these days that's the exception, not the rule.
"BTW, XFS does let you set a limit on the max percentage of space used by inodes, so you can run out of inodes before you get to the point where you can't append to existing files. (Default is 25% for FSes under 1TB, 5% for filesystems up to 50TB, 1% for larger than that.) Anyway, this space usage on metadata (inodes and extent maps) will be reflected in regular df -h" – Peter Cordes in a comment to this answer