The file system typically keeps a count of used and free data blocks as part of normal operation; on the ext family of filesystems, for example, these counters live in the superblock. df simply reads this information.
Even if the file system doesn't maintain a real-time counter, it still needs a fast way to find free blocks when writing new data, and the same structure that makes allocation fast (typically a free-block bitmap or free list) also yields the number of free blocks cheaply.
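As a rough illustration (not df's actual source, but the same idea), a single statvfs() call returns those ready-made counters for a whole filesystem, with no tree walk involved:

```python
# One statvfs() call per filesystem: the kernel hands back the counters
# the filesystem already maintains, so the cost does not depend on how
# many files exist.
import os

st = os.statvfs("/")          # filesystem-wide counters, one syscall
block = st.f_frsize           # fundamental block size in bytes
total = st.f_blocks * block   # total size of the filesystem
free  = st.f_bfree * block    # free space (including root-reserved blocks)
avail = st.f_bavail * block   # free space available to unprivileged users

print(f"total={total} free={free} avail={avail}")
```

This is essentially what `df` prints per mount point, modulo formatting.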
In theory, a filesystem could keep such a used-space counter on a per-directory basis, too. However, there are a few problems.
If the count were kept for the whole subtree recursively, the filesystem would need to propagate every size change upward through an arbitrary number of parent directories, which would slow down all write operations. If it were kept only for the files immediately within the directory, a recursive walk of the tree would still be required to find the total size of a subtree.
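To see the contrast, here is a minimal du-style sketch: without a per-directory counter, the only way to total a subtree is to visit every entry. The cost grows with the number of files, not with the size of the answer (the directory created here is just a throwaway demo):

```python
# A simplified `du -s`: recursively sum st_blocks (allocated 512-byte
# units) over the whole tree. Every file must be stat()ed.
import os
import tempfile

def tree_usage(path):
    """Return allocated bytes for path and everything under it."""
    total = os.lstat(path).st_blocks * 512
    if os.path.isdir(path) and not os.path.islink(path):
        with os.scandir(path) as entries:
            for entry in entries:
                total += tree_usage(entry.path)
    return total

# Demo on a temporary directory containing one 8 KiB file.
root = tempfile.mkdtemp()
with open(os.path.join(root, "data"), "wb") as fh:
    fh.write(b"x" * 8192)
print(tree_usage(root))
```

Note that it sums allocated blocks rather than file lengths, matching du's behavior for sparse files.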
On Unix-like filesystems, hard links are an even bigger obstacle. Since a file can be linked to from multiple directories (or multiple times from the same directory), it has no unique parent directory. Where would the file's size be counted? Counting it in every directory that links to it would inflate the total usage, since the same file would be counted multiple times. Counting it in only one directory would be just as wrong: which directory should it be, and why should the others appear smaller than the data they reference?
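A small demonstration of the problem, and of the usual workaround: a hard-linked file has one inode and two names, so a naive per-directory sum would count it twice; du instead remembers (st_dev, st_ino) pairs and charges each inode only once. The paths below are throwaway temporaries:

```python
# One inode, two names: naive counting double-counts; deduplicating by
# (st_dev, st_ino), as du does, counts the file once.
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")
with open(a, "wb") as fh:
    fh.write(b"x" * 4096)
os.link(a, b)                    # second name for the same inode

sa, sb = os.stat(a), os.stat(b)
assert sa.st_ino == sb.st_ino    # one inode...
assert sa.st_nlink == 2          # ...with two links

seen = set()
total = 0
for path in (a, b):
    st = os.stat(path)
    key = (st.st_dev, st.st_ino)
    if key not in seen:          # count each inode only once
        seen.add(key)
        total += st.st_blocks * 512
print(total)
```

Without the `seen` set, the loop would report twice the file's real usage.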
In fact, files (i.e. inodes) on traditional Unix filesystems don't even know which directories they reside in, only the count of links to them (the number of names they have). In most usage that information isn't needed, since files are accessed primarily by name anyway. Storing it would also require an arbitrary amount of data in the inode, duplicating what the directories already contain.
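This is also why going from an inode back to its names takes a full directory scan, which is what `find -inum` does. A sketch of that lookup (demo paths are temporary):

```python
# The inode records only a link count, not its names, so mapping an
# inode number back to paths means walking the tree and comparing.
import os
import tempfile

def names_of_inode(root, ino):
    """Scan the tree under root for every name referring to inode ino."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            p = os.path.join(dirpath, name)
            if os.lstat(p).st_ino == ino:
                matches.append(p)
    return matches

# Demo: one inode with two names in different directories.
root = tempfile.mkdtemp()
sub = os.path.join(root, "sub")
os.mkdir(sub)
a = os.path.join(root, "a")
with open(a, "wb") as fh:
    fh.write(b"hello")
os.link(a, os.path.join(sub, "b"))
print(names_of_inode(root, os.lstat(a).st_ino))
```

The O(tree) cost of this lookup is the price of not duplicating name data inside the inode.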