Directories are usually implemented as files. They have an inode and a data area, but they are usually accessed (at least written to) through special system calls. Some systems allow reading directories with the usual read(2) system call (Linux doesn't; FreeBSD did when I last checked). The data area of the directory file then contains the directory entries. On ext4, the root directory also has an inode; it's fixed to inode number 2 (try `ls -lid /`).
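To see this from a program, here's a minimal C sketch using the portable stat(2) and readdir(3) interfaces (rather than raw read(2), which Linux refuses on directories). It prints the inode number of `/` and then the inode numbers recorded in its directory entries:

```c
/* Minimal sketch: a directory has an inode like any file,
 * and its data area holds (name, inode number) entries. */
#include <stdio.h>
#include <dirent.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    /* On ext4 the root directory is fixed to inode 2. */
    if (stat("/", &st) == 0)
        printf("/ has inode %llu\n", (unsigned long long)st.st_ino);

    /* Iterate over the entries stored in the directory's data blocks. */
    DIR *d = opendir("/");
    if (d == NULL)
        return 1;

    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        printf("%10llu  %s\n", (unsigned long long)e->d_ino, e->d_name);

    closedir(d);
    return 0;
}
```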
Having the directory act like a file makes it easy to allocate space for the directory entries, etc., as the functions to allocate blocks for files must be there anyway. Also, since directories use the same data blocks as files, allocated as needed, there's no need to partition space between file data and directory listings beforehand.
The internals of how directory entries are stored vary between file systems, and have evolved, for example, between ext2 and ext4. Modern systems use trees instead of linear lists for faster lookups (ext4's HTree directory index, for instance). Even the venerable FAT filesystem stores directories as files, though at least in older FAT variants the root directory is special. (The structure of the directory entries in FAT is of course different from Unix filesystems.)
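For illustration, this is roughly what the classic linear format looks like on disk; a simplified sketch of ext2's directory entry record (the real definition is `struct ext2_dir_entry_2` in the kernel sources):

```c
/* Simplified sketch of an ext2-style on-disk directory entry.
 * Entries are packed back to back in the directory file's data
 * blocks; rec_len is the offset from this entry to the next, so
 * deleting an entry just means growing the previous rec_len. */
#include <stdint.h>

struct ext2_dir_entry {
    uint32_t inode;     /* inode number; 0 marks an unused entry */
    uint16_t rec_len;   /* total record length, including padding */
    uint8_t  name_len;  /* length of the name below */
    uint8_t  file_type; /* regular file, directory, symlink, ... */
    char     name[];    /* the name itself, not NUL-terminated on disk */
};
```

Looking up a name in this format is a linear scan over the records, which is exactly what the tree-based indexes in newer filesystems avoid for large directories.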
> Hence, if this is the case, traversing from one directory to another requires the disk to read from seemingly arbitrary locations, which seems a little inefficient to me.
Yep. But frequently accessed directory entries (or the underlying data blocks) are likely to be cached by modern operating systems.
Saving the contents of all directories in one central place would require pre-allocating a large area, and lookups would still require disk seeks within that directory data area.