2025-11-19
Automatically scrubbing ZFS pools periodically on FreeBSD
We've been moving from OpenBSD to FreeBSD for firewalls. One advantage of this is that it gives us a mirrored ZFS pool for the machine's filesystems; we have a lot of experience operating ZFS and it's a simple, reliable, and fully supported way of getting mirrored system disks on important machines. ZFS has checksums, and you want to 'scrub' your ZFS pools relatively frequently to verify all of your data (in all of its copies) against these checksums. All of this is part of basic ZFS knowledge, so I was a little bit surprised to discover that none of our FreeBSD machines had ever scrubbed their root pools, despite some of them having been running for months.
It turns out that while FreeBSD comes with a configuration option to do periodic ZFS scrubs, the option isn't enabled by default (as of FreeBSD 14.3). Instead you have to know to enable it, which admittedly isn't too hard to find once you start looking.
FreeBSD has a general periodic(8) system for triggering things on a daily, weekly, monthly, or other basis. As covered in the manual page, the default configuration for this is in /etc/defaults/periodic.conf and you can override things by creating or modifying /etc/periodic.conf. ZFS scrubs are a 'daily' periodic setting, and as of 14.3 the basic thing you want is an /etc/periodic.conf with:
# Enable ZFS scrubs
daily_scrub_zfs_enable="YES"
FreeBSD will normally scrub each pool a certain number of days after its previous scrub (either a manual scrub or an automatic scrub through the periodic system). The default number of days is 35, which is a bit high for my tastes, so I suggest that you shorten it, making your periodic.conf stanza be:
# Enable ZFS scrubs
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="14"
There are other options you can set that are covered in /etc/defaults/periodic.conf.
(That the daily automatic scrubs happen some number of days after the pool was last scrubbed means that you can adjust their timing by doing a manual scrub. If you have a bunch of machines that you set up at the same time, you can get them to space out their scrubs by scrubbing one a day by hand, and so on.)
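For example, kicking off a manual scrub and checking on it looks like this (here I'm assuming the common FreeBSD root pool name of 'zroot'):

zpool scrub zroot
zpool status zroot    # shows scrub progress and, once done, the completion date

Once this scrub completes, the periodic system will count its threshold days from it.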
Looking at the other ZFS periodic options, I might also enable the daily ZFS status report, because I'm not certain if there's anything else that will alert you if or when ZFS starts reporting errors:
# Find out about ZFS errors?
daily_status_zfs_enable="YES"
You can also tell ZFS to TRIM your SSDs every day. As far as I can see there's no option to do the TRIM less often than once a day; I guess if you want that you have to create your own weekly or monthly periodic script (perhaps by copying the 801.trim-zfs daily script and modifying it appropriately). Or you can just do 'zpool trim ...' every so often by hand.
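If you do want a less frequent TRIM, a minimal sketch of the copy-and-modify approach might be something like this (it assumes the stock script lives in /etc/periodic/daily, as standard daily scripts do, and that you use the default 'local_periodic' directory; the weekly_* variable renames are my illustration, not anything FreeBSD ships):

mkdir -p /usr/local/etc/periodic/weekly
cp /etc/periodic/daily/801.trim-zfs /usr/local/etc/periodic/weekly/801.trim-zfs
# then edit the copy to use weekly_trim_zfs_* variables instead of the
# daily_trim_zfs_* ones, and put weekly_trim_zfs_enable="YES" in
# /etc/periodic.conf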
2025-11-17
A surprise with how '#!' handles its program argument in practice
Every so often I get to be surprised about some Unix thing. Today's surprise is the actual behavior of '#!' in practice on at least Linux, FreeBSD, and OpenBSD, which I learned about from a comment by Aristotle Pagaltzis on my entry on (not) using '#!/usr/bin/env'. I'll quote the starting part here:
In fact the shebang line doesn’t require absolute paths, you can use relative paths too. The path is simply resolved from your current directory, just as any other path would be – the kernel simply doesn’t do anything special for shebang line paths at all. [...]
I found this so surprising that I tested it on our Linux servers as well as a FreeBSD and an OpenBSD machine. On the Linux servers (and probably on the others too), the kernel really does accept the full collection of relative paths in '#!'. You can write '#!python3', '#!bin/python3', '#!../python3', '#!../../../usr/bin/python3', and so on, and provided that your current directory is in the right place in the filesystem, they all work.
(On FreeBSD and OpenBSD I only tested the '#!python3' case.)
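If you want to reproduce the basic case yourself, a minimal demonstration looks like this (a sketch; it assumes python3 is in /usr/bin on your system and that /tmp isn't mounted noexec):

cat >/tmp/demo <<'EOF'
#!python3
print("hello from a relative #! path")
EOF
chmod +x /tmp/demo
(cd /usr/bin && /tmp/demo)    # works; 'python3' resolves relative to /usr/bin
(cd /tmp && ./demo)           # fails unless you happen to have a /tmp/python3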
As far as I can tell, this behavior goes all the way back to 4.2 BSD (which isn't quite the origin point of '#!' support in the Unix kernel but is about as close as we can get). The execve() kernel implementation in sys/kern_exec.c finds the program from your '#!' line with a namei() call that uses the same arguments (apart from the name) as it did to find the initial executable, and that initial executable can definitely be a relative path.
Although this is probably the easiest way to implement '#!' inside the kernel, I'm a little bit surprised that it survived in Linux (in a completely independent implementation) and in OpenBSD (where the security people might have done a double-take at some point). But given Hyrum's Law, there are probably people out there who are depending on this behavior, so we're now stuck with it.
(In the kernel, you'd have to go at least a little bit out of your way to check that the new path starts with a '/' or use a kernel name lookup function that only resolves absolute paths. Using a general name lookup function that accepts both absolute and relative paths is the simplest approach.)
PS: I don't have access to Illumos-based systems, other BSDs (NetBSD, etc.), or macOS, but I'd be surprised if they had different behavior. People with access to less mainstream Unixes (including commercial ones like AIX) can give it a try to see if there are any Unixes that don't support relative paths in '#!'.
2025-11-04
Some notes on duplicating xterm windows
Recently on the Fediverse, Dave Fischer mentioned a neat hack:
In the decades-long process of getting my fvwm config JUST RIGHT, my xterm right-click menu now has a "duplicate" command, which opens a new xterm with the same geometry, on the same node, IN THE SAME DIRECTORY. (Directory info aquired via /proc.)
[...]
(See also a followup note.)
This led to @grawity sharing an xterm-native approach to this, using xterm's spawn-new-terminal() internal function that's available through xterm's keybindings facility.
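A minimal version of such a keybinding looks something like this in your X resources (the specific key chord here is my arbitrary choice):

XTerm*vt100.translations: #override \
    Ctrl Shift <Key>N: spawn-new-terminal()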
I have a long-standing shell function (imaginatively called 'spawn') that attempts to do this, but it's only available in environments where my shell is set up, so I was quite interested in the whole area and did some experiments. The good news is that xterm's 'spawn-new-terminal' works, in that it will start a new xterm and the new xterm will be in the right directory. The bad news for me is that that's about all it will do, and in my environment this has two limitations that will probably keep me from using it a lot.
The first limitation is that this starts an xterm that doesn't copy the command line state or settings of the parent xterm. If you've set special options on the parent xterm (for example, you like your root xterms to have a red foreground), this won't be carried over to the new xterm. Similarly, if you've increased (or decreased) the font size in your current xterm or otherwise changed its settings, spawn-new-terminal doesn't duplicate these; you get a default xterm. This is reasonable but disappointing.
(While spawn-new-terminal takes arguments that I believe it will pass to the new xterm, as far as I know there's no way to retrieve the current xterm's command line arguments to insert them here.)
The larger limitation for me is that when I'm at home, I'm often running SSH inside of an xterm in order to log in to some other system (I have a 'sshterm' script to automate all the aspects of this). What I really want when I 'duplicate' such an xterm is not a copy of the local xterm running a local shell (or even starting another SSH to the remote system), but the remote (shell) context, with the same (remote) current directory and so on. This is impossible to get in general and difficult to set up even for situations where it's theoretically possible. To use spawn-new-terminal effectively, you basically need either all local xterms or copious use of remote X forwarded over SSH (where the xterm is running on the remote system, so a duplicate of it will be as well and can get the right current directory).
Going through this experience has given me some ideas on how to improve the situation overall. Probably I should write a 'spawn' shell script to replace or augment my 'spawn' shell function so I can readily have it in more places. Then when I'm ssh'd in to a system, I can make the 'spawn' script at least print out a command line or two for me to copy and paste to get set up again.
(Two command lines is the easiest approach, with one command that starts the right xterm plus SSH combination and the other a 'cd' to the right place that I'd execute in the new logged in window. It's probably possible to combine these into an all-in-one script but that starts to get too clever in various ways, especially as SSH has no straightforward way to pass extra information to a login shell.)
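A first cut at such a script might be as simple as this sketch (where 'sshterm' is my own script and using $SSH_CONNECTION to detect 'am I logged in remotely' is an assumption about my environment):

#!/bin/sh
# spawn: locally, just start another xterm, which inherits our current
# directory; remotely, print command lines to copy and paste on the
# local side to recreate this context.
if [ -z "$SSH_CONNECTION" ]; then
    xterm &
else
    echo "sshterm $(hostname)"
    echo "cd '$(pwd)'"
fi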
2025-10-23
Two reasons why Unix traditionally requires mount points to exist
Recently on the Fediverse, argv minus one asked a good question:
Why does #Linux require #mount points to exist?
And are there any circumstances where a mount can be done without a pre-existing mount point (i.e. a mount point appears out of thin air)?
I think there are two answers: a general one about why requiring mount points to exist is a good idea (and why not requiring them would be complex), which you can argue about, and a second, historical answer based on how mount points were initially implemented.
The general problem is directory listings. We obviously want and need mount points to appear in readdir() results, but in the kernel, directory listings are historically the responsibility of filesystems and are generated and returned in pieces on the fly (which is clearly necessary if you have a giant directory; the kernel doesn't read the entire thing into memory and then start giving your program slices out of it as you ask). If mount points never appear in the underlying directory, then they must be inserted at some point in this process. If mount points can sometimes exist and sometimes not, it's worse; you need to somehow keep track of which ones actually exist and then add the ones that don't at the end of the directory listing. The simplest way to make sure that mount points always exist in directory listings is to require them to have an existence in the underlying filesystem.
(This was my initial answer.)
The historical answer is that in early versions of Unix, filesystems were actually mounted on top of inodes, not directories (or filesystem objects). When you passed a (directory) path to the mount(2) system call, all it was used for was getting the corresponding inode, which was then flagged as '(this) inode is mounted on' and linked (sort of) to the new filesystem mounted on top of it. Everything that dealt with mount points and mounted filesystems did so by inode and inode number; the paths weren't used any further, and the root inode of the mounted filesystem was quietly substituted for the mounted-on inode. All of the mechanics of this needed the inode and directory entry for the name to actually exist (and V7 required the name to be a directory).
I don't think modern kernels (Linux or otherwise) still use this approach to handling mounts, but I believe it lingered on for quite a while. And it's a sufficiently obvious and attractive implementation choice that early versions of Linux also used it (see the Linux 0.96c version of iget() in fs/inode.c).
Sidebar: The details of how mounts worked in V7
When you passed a path to the mount(2) system call (called 'smount()' in sys/sys3.c), it used the name to get the inode and then set the IMOUNT flag from sys/h/inode.h on it (and put the mount details in a fixed size array of mounts, which wasn't very big). When iget() in sys/iget.c was fetching inodes for you and you'd asked for an IMOUNT inode, it gave you the root inode of the filesystem instead, which worked in cooperation with name lookup in a directory (the name lookup in the directory would find the underlying inode number, and then iget() would turn it into the mounted filesystem's root inode). This gave Research Unix a simple, low code approach to finding and checking for mount points, at the cost of pinning a few more inodes into memory (not necessarily a small thing when even a big V7 system only had at most 200 inodes in memory at once, but then a big V7 system was limited to 8 mounts, see h/param.h).
2025-10-12
The early Unix history of chown() being restricted to root
A few years ago I wrote about the divide in chown() about who got to give away files, where BSD and V7 were on one side, restricting it to root, while System III and System V were on the other, allowing the owner to give them away too. At the time I quoted the V7 chown(2) explanation of this:
[...] Only the super-user may execute this call, because if users were able to give files away, they could defeat the (nonexistent) file-space accounting procedures.
Recently, for reasons, chown(2) and its history were on my mind, and so I wondered if the early Research Unixes had always had this restriction, or if it was added at some point.
The answer is that the restriction was added in V6, where the V6 chown(2) manual page has the same wording as V7. In Research Unix V5 and earlier, people could chown(2) away their own files; this is documented in the V4 chown(2) manual page and is what the V5 kernel code for chown() does. This behavior runs all the way back to the V1 chown() manual page, with an extra restriction that you can't chown() setuid files.
(Since I looked it up, the restriction on chown()'ing setuid files was lifted in V4. In V4 and later, a setuid file has its setuid bit removed on chown; in V3 you still can't give away such a file, according to the V3 chown(2) manual page.)
At this point you might wonder where the System III and System V unrestricted chown came from. The surprising (to me) answer seems to be that System III partly descends from PWB/UNIX, and PWB/UNIX 1.0, although it was theoretically based on V6, has pre-V6 chown(2) behavior (kernel source, manual page). I suspect that there's a story both to why V6 made chown() more restricted and also why PWB/UNIX specifically didn't take that change from V6, but I don't know if it's been documented anywhere (a casual Internet search didn't turn up anything).
(The System III chown(2) manual page says more or less the same thing as the PWB/UNIX manual page, just more formally, and the kernel code is very similar.)
2025-10-11
Maybe why OverlayFS had its readdir() inode number issue
A while back I wrote about readdir()'s inode numbers versus OverlayFS, which discussed an issue where for efficiency reasons, OverlayFS sometimes returned different inode numbers in readdir() than in stat(). This is not POSIX legal unless you do some pretty perverse interpretations (as covered in my entry), but lots of filesystems deviate from POSIX semantics every so often. A more interesting question is why, and I suspect the answer is related to another issue that's come up, the problem of NFS exports of NFS mounts.
What's common in both cases is that NFS servers and OverlayFS both must create an 'identity' for a file (a NFS filehandle and an inode number, respectively). In the case of NFS servers, this identity has some strict requirements; OverlayFS has a somewhat easier life, but in general it still has to create and track some amount of information. Based on reading the OverlayFS article, I believe that OverlayFS considers this expensive enough to only want to do it when it has to.
OverlayFS definitely needs to go to this effort when people call stat(), because various programs will directly use the inode number (the POSIX 'file serial number') to tell files on the same filesystem apart. POSIX technically requires OverlayFS to do this for readdir(), but in practice almost everyone that uses readdir() isn't going to look at the inode number; they look at the file name and perhaps the d_type field to spot directories without needing to stat() everything.
If there was a special 'not a valid inode number' signal value, OverlayFS might use that, but there isn't one (in either POSIX or Linux, which is actually a problem). Since OverlayFS needs to provide some sort of arguably valid inode number, and since it's reading directories from the underlying filesystems, passing through their inode numbers from their d_ino fields is the simple answer.
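If you want to see this in action, something like the following should work on Linux as root (a sketch; it assumes GNU 'ls' and 'stat', where a plain 'ls -i' normally reports readdir()'s d_ino, and whether the two numbers actually differ depends on how the overlay is constructed, with layers on separate filesystems being the classic trigger):

mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
touch /tmp/ovl/lower/file
mount -t overlay overlay \
  -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
  /tmp/ovl/merged
ls -i /tmp/ovl/merged                  # inode number via readdir()'s d_ino
stat -c '%i %n' /tmp/ovl/merged/file   # inode number via stat()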
(This entry was inspired by Kevin Lyda's comment on my earlier entry.)
Sidebar: Why there should be a 'not a valid inode number' signal value
Because both standards and common Unix usage include a d_ino field in the structure readdir() returns, they embed the idea that the stat()-visible inode number can easily be recovered or generated by filesystems purely by reading directories, without needing to perform additional IO. This is true in traditional Unix filesystems, but it's not obvious that you would do that all of the time in all filesystems. The on disk format of directories might only have some sort of object identifier for each name that's not easily mapped to a relatively small 'inode number' (which is required to be some C integer type), and instead the 'inode number' is an attribute you get by reading file metadata based on that object identifier (which you'll do for stat() but would like to avoid for reading directories).
But in practice if you want to design a Unix filesystem that performs decently well and doesn't just make up inode numbers in readdir(), you must store a potentially duplicate copy of your 'inode numbers' in directory entries.
2025-10-01
Readdir()'s inode numbers versus OverlayFS
Recently I re-read Deep Down the Rabbit Hole: Bash, OverlayFS, and a 30-Year-Old Surprise (via) and this time around, I stumbled over a bit in the writeup that made me raise my eyebrows:
Bash’s fallback getcwd() assumes that the inode [number] from stat() matches one returned by readdir(). OverlayFS breaks that assumption.
I wouldn't call this an 'assumption' so much as 'sane POSIX semantics', although I'm not sure that POSIX absolutely requires this.
As we've seen before, POSIX talks about 'file serial number(s)' instead of inode numbers. The best definition of these is covered in sys/stat.h, where we see that a 'file identity' is uniquely determined by the combination of the inode number and the device ID (st_dev), and POSIX says that 'at any given time in a system, distinct files shall have distinct file identities' while hardlinks have the same identity. The POSIX description of readdir() and dirent.h don't caveat the d_ino file serial numbers from readdir(), so they're implicitly covered by the general rules for file serial numbers.
In theory you can claim that the POSIX guarantees don't apply here since readdir() is only supplying d_ino, the file serial number, not the device ID as well. I maintain that this fails due to a POSIX requirement:
[...] The value of the structure's d_ino member shall be set to the file serial number of the file named by the d_name member. [...]
If readdir() gives one file serial number and a fstatat() of the same name gives another, a plain reading of POSIX is that one of them is lying. Files don't have two file serial numbers, they have one. Readdir() can return duplicate d_ino numbers for files that aren't hardlinks to each other (and I think legitimately may do so in some unusual circumstances), but it can't return something different than what fstatat() does for the same name.
The perverse argument here turns on POSIX's 'at any given time'. You can argue that the readdir() happens at one time and the stat() at another, and that the system is allowed to entirely change file serial numbers between the two. This is certainly not the intent of POSIX's language, but I'm not sure there's anything in the standard that rules it out, even though it makes file serial numbers fairly useless, since there's no POSIX way to get a bunch of them at 'a given time' so that they're guaranteed to be coherent with each other.
So to summarize, OverlayFS has chosen what are effectively non-POSIX semantics for its readdir() inode numbers (under some circumstances, in the interests of performance) and Bash used readdir()'s d_ino in a traditional Unix way that caused it to notice. Unix filesystems can depart from POSIX semantics if they want, but I'd prefer if they were a bit more shamefaced about it. People (ie, programs) count on those semantics.
(The truly traditional getcwd() way wouldn't have been a problem, because it predates readdir() having d_ino and so doesn't use it (it stat()s everything to get inode numbers). I reflexively follow this pre-d_ino algorithm when I'm talking about doing getcwd() by hand (cf), but these days you want to use the dirent d_ino and if possible d_type, because they're much more efficient than stat()'ing everything.)
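As an illustration, the pre-d_ino approach can be sketched at the shell level (illustrative only; real implementations are in C and also compare st_dev, and I'm using GNU stat's -c option):

# find the name of the current directory in its parent the old way:
# stat() every entry in '..' and look for our own inode number
me=$(stat -c %i .)
for entry in ../* ../.[!.]*; do
  [ "$(stat -c %i "$entry" 2>/dev/null)" = "$me" ] && echo "${entry#../}"
done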
2025-09-22
Unix mail programs have had two approaches to handling your mail
Historically, Unix mail programs (what we call 'mail clients' or 'mail user agents' today) have had two different approaches to handling your email, what I'll call the shared approach and the exclusive approach, with the shared approach being the dominant one. To explain the shared approach, I have to back up to talk about what Unix mail transfer agents (MTAs) traditionally did. When a Unix MTA delivered email to you, it originally did so by putting it into a single file in a specific location (such as '/usr/spool/mail/<login>') in a specific format, initially mbox; even then, this could be called your 'inbox'. Later, when the maildir mailbox format became popular, some MTAs gained the ability to deliver to maildir format inboxes.
(There have been a number of Unix mail spool formats over the years, which I'm not going to try to get into here.)
A 'shared' style mail program worked directly with your inbox in whatever format and location it was in. This is how the V7 'mail' program worked, for example. Naturally these programs didn't have to work on your inbox; you could generally point them at another mailbox in the same format. I call this style 'shared' because you could use any number of different mail programs (mail clients) on your mailboxes, provided that they all understood the format and that they all agreed on how to lock your mailbox against modifications, including against your system's MTA delivering new email right at the point where your mail program was, for example, trying to delete some.
(Locking issues are one of the things that maildir was designed to help with.)
An 'exclusive' style mail program (or system) was designed to own your email itself, rather than try to share your system mailbox. Of course it had to access your system mailbox a bit to get at your email, but broadly the only thing an exclusive mail program did with your inbox was pull all your new email out of it, write it into the program's own storage format and system, and then usually empty out your system inbox. I call this style 'exclusive' because you generally couldn't hop back and forth between mail programs (mail clients) and would be mostly stuck with your pick, since your main mail program was probably the only one that could really work with its particular storage format.
(Pragmatically, only locking your system mailbox for a short period of time and only doing simple things with it tended to make things relatively reliable. Shared style mail programs had much more room for mistakes and explosions, since they had to do more complex operations, at least on mbox format mailboxes. Being easy to modify is another advantage of the maildir format, since it outsources a lot of the work to your Unix filesystem.)
This shared versus exclusive design choice turned out to have some effects when mail moved to being on separate servers and accessed via POP and then later IMAP. My impression is that 'exclusive' systems coped fairly well with POP, because the natural operation with POP is to pull all of your new email out of the server and store it locally. By contrast, shared systems coped much better with IMAP than exclusive ones did, because IMAP is inherently a shared mail environment where your mail stays on the IMAP server and you manipulate it there.
(Since IMAP is the dominant way that mail clients/user agents get at email today, my impression is that the 'exclusive' approach is basically dead at this point as a general way of doing mail clients. Almost no one wants to use an IMAP client that immediately moves all of their email into a purely local data storage of some sort; they want their email to stay on the IMAP server and be accessible from and by multiple clients and even devices.)
Most classical Unix mail clients are 'shared' style programs, things like Alpine, Mutt, and the basic Mail program. One major 'exclusive' style program, really a system, is (N)MH (also). MH is somewhat notable because in its time it was popular enough that a number of other mail programs and mail systems supported its basic storage format to some degree (for example, procmail can deliver messages to MH-format directories, although it doesn't update all of the things that MH would do in the process).
Another major source of 'exclusive' style mail handling systems is GNU Emacs. I believe that both rmail and GNUS normally pull your email from your system inbox into their own storage formats, partly so that they can take exclusive ownership and don't have to worry about locking issues with other mail clients. GNU Emacs has a number of mail reading environments (cf, also) and I'm not sure what the others do (apart from MH-E, which is a frontend on (N)MH).
(There have probably been other 'exclusive' style systems. Also, it's a pity that as far as I know, MH never grew any support for keeping its messages in maildir format directories, which are relatively close to MH's native format.)
2025-09-14
The idea of /usr/sbin has failed in practice
One of the changes in Fedora Linux 42 is unifying /usr/bin and /usr/sbin, by moving everything in /usr/sbin to /usr/bin. To some people, this probably smacks of anathema, and to be honest, my first reaction was to bristle at the idea. However, the more I thought about it, the more I had to concede that the idea of /usr/sbin has failed in practice.
We can tell /usr/sbin has failed in practice by asking how many people routinely operate without /usr/sbin in their $PATH. In a lot of environments, the answer is that very few people do, because sooner or later you run into a program that you want to run (as yourself) to obtain useful information or do useful things. Let's take FreeBSD 14.3 as an illustrative example (to make this not a Linux-biased entry); looking at /usr/sbin, I recognize iostat, manctl (you might use it on your own manpages), ntpdate (which can be run by ordinary people to query the offsets of remote servers), pstat, swapinfo, and traceroute. There are probably others that I'm missing, especially if you use FreeBSD as a workstation and so care about things like sound volumes and keyboard control.
(And if you write scripts and want them to send email, you'll care about sendmail and/or FreeBSD's 'mailwrapper', both in /usr/sbin. There's also DTrace, but I don't know if you can DTrace your own binaries as a non-root user on FreeBSD.)
For a long time, there has been no strong organizing principle to /usr/sbin that would draw a hard line and create a situation where people could safely leave it out of their $PATH. We could have had a principle of, for example, "programs that don't work unless run by root", but no such principle was ever followed for very long (if at all). Instead programs were more or less shoved in /usr/sbin if developers thought they were relatively unlikely to be used by normal people. But 'relatively unlikely' is not 'never', and shortly after people got told to 'run traceroute' and got 'command not found' when they tried, /usr/sbin (probably) started appearing in $PATH.
(And then when you asked 'how does my script send me email about something', people told you about /usr/sbin/sendmail and another crack appeared in the wall.)
If /usr/sbin is more of a suggestion than a rule and it appears in everyone's $PATH because no one can predict which programs you want to use will be in /usr/sbin instead of /usr/bin, I believe this means /usr/sbin has failed in practice. What remains is an unpredictable and somewhat arbitrary division between two directories, where which directory something appears in operates mostly as a hint (a hint that's invisible to people who don't specifically look where a program is).
(This division isn't entirely pointless and one could try to reform the situation in a way short of Fedora 42's "burn the entire thing down" approach. If nothing else the split keeps the size of both directories somewhat down.)
PS: The /usr/sbin like idea that I think is still successful in practice is /usr/libexec. Possibly a bunch of things in /usr/sbin should be relocated to there (or appropriate subdirectories of it).
2025-08-04
The unusual way I end my X desktop sessions
I use an eccentric X 'desktop' that is not really a desktop in the usual sense but instead a window manager and various programs that I run (as a sysadmin, there's a lot of terminal windows). One of the ways that my desktop is unusual is in how I exit from my X session. First, I don't use xdm or any other graphical login manager; instead I run my session through xinit. When you use an xinit based session, you give xinit a program or a script to run, and when that program exits, xinit terminates the X server and your session; I'll call this final program your keystone program.
(If you gave xinit a shell script, whatever foreground program the script ended with was your keystone program.)
Traditionally, this keystone program for your X session was your window manager. At one level this makes a lot of sense; your window manager is basically the core of your X session anyway, so you might as well make quitting from it end the session. However, for a very long time I've used a do-nothing iconified xterm running a shell as my keystone program.
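In .xinitrc terms, the difference is only in what runs last. A stripped-down sketch of my arrangement (the xterm options here are stand-ins for my real setup):

#!/bin/sh
# the traditional arrangement would be a plain 'exec fvwm' here;
# instead, start fvwm early and put it in the background...
fvwm &
# ...and end with an iconified 'console' xterm as the keystone program.
# When I exit its shell, the xterm exits and xinit ends the session.
exec xterm -iconic -name console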
(If you look at FvwmIconMan's strip of terminal windows in my (2011) desktop tour, this is the iconified 'console-ex' window.)
The minor advantage to having an otherwise unused xterm as my session keystone program is that I can start my window manager basically at the start of my (rather complex) session startup, so that I can immediately have it manage all of the other things I start (technically I run a number of commands to set up X settings before I start fvwm, but it's the first program I start that will actually show anything on the screen). The big advantage is that using something else as my keystone program means that I can kill and restart my window manager if something goes badly wrong, and more generally that I don't have to worry about restarting it. This doesn't happen very often, but when it does happen I'm very glad that I can recover my session instead of having to abruptly terminate everything. And should I have to terminate fvwm, this 'console' xterm is a convenient idle xterm in which to restart it (or in general, any other program of my session that needs restarting).
(The 'console' xterm is deliberately placed up at the top of the screen, in an area that I don't normally put non-fvwm windows in, so that if fvwm exits and everything de-iconifies, it's highly likely that this xterm will be visible so I can type into it. If I put it in an ordinary place, it might wind up covered up by a browser window or another xterm or whatever.)
I don't particularly have to use an (iconified) xterm with a shell in it; I could easily have written a little Tk program that displayed a button saying 'click me to exit'. However, the problem with such a program (and the advantage of my 'console' xterm) is that it would be all too easy to accidentally click the button (and force-end my session). With the iconified xterm, I need to do a bunch of steps to exit; I have to deiconify that xterm, focus the window, and Ctrl-D the shell to make it exit (causing the xterm to exit). This is enough out of the way that I don't think I've ever done it by accident.
PS: I believe modern desktop environments like GNOME, KDE, and Cinnamon have moved away from making their window manager be the keystone program and now use a dedicated session manager program that things talk to. One reason for this may be that modern desktop shells seem to be rather more prone to crashing for various reasons, which would be very inconvenient if that ended your session. This isn't all bad, at least if there's a standard D-Bus protocol for ending a session so that you can write an 'exit the session' thing that will work across environments.