TL;DR 1: What's fast or not depends on the intended use case.
TL;DR 2: DOS is neither meant to handle stack overflow nor can it do so.
The PC was designed for a specific (low-end) use case that does not need high-speed event handling.
As Justme already noted, this question rests on several rather fundamental misconceptions about hardware and software design.
DOS on the 8088 had to process interrupts and return.
DOS neither processes nor manages interrupts (*1). Machine-specific drivers (aka the BIOS) do. And even they handle (on the IBM PC and by default) only keyboard, timer and floppy. Everything else is up to the application and/or application-specific third-party drivers.
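To illustrate that split: the only interrupt-related service DOS itself offers is reading and setting a vector. An application that wants, say, serial input under interrupt control has to install and service the handler all by itself. A minimal sketch of how that looked (NASM syntax, .COM program; COM1 on IRQ4 with the usual 3F8h base is an assumption for illustration, and unmasking IRQ4 at the 8259 plus programming the UART are left out):

```
        org 100h                ; .COM program: CS = DS = SS = PSP segment

start:  mov ax, 350Ch           ; INT 21h, AH=35h: get current vector 0Ch (IRQ4 = COM1)
        int 21h                 ; returns old handler in ES:BX
        mov [old_off], bx       ; save it so it can be restored on exit
        mov [old_seg], es

        mov ax, 250Ch           ; INT 21h, AH=25h: set vector 0Ch to DS:DX
        mov dx, isr
        int 21h

        ; ... main program runs here, consuming bytes the handler buffered ...

        lds dx, [old_off]       ; restore the original vector before terminating
        mov ax, 250Ch
        int 21h
        ret                     ; back to PSP:0000 -> INT 20h, program ends

isr:    push ax                 ; hardware interrupt: preserve everything we touch
        push dx
        mov dx, 3F8h            ; COM1 receive buffer register
        in  al, dx              ; fetch the byte; this also drops the UART's request
        ; ... store AL into an application-owned buffer ...
        mov al, 20h
        out 20h, al             ; non-specific EOI, or the 8259 keeps IRQ4 blocked
        pop dx
        pop ax
        iret

old_off dw 0
old_seg dw 0
```

DOS's part in all of this is exactly two INT 21h calls; everything that happens per interrupt is application (or driver) code.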
If a higher-priority interrupt occurred, the current handler was pushed and the higher-priority interrupt was serviced.
A well-designed CPU is of course capable of managing this. There is a general interrupt lock-out, fully independent of interrupt priority (*2), which means a subsequent interrupt is only accepted if software permits it (as Justme explained).
Priorities are not managed by the CPU itself but by the 8259 interrupt controller.
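Inside a handler that interplay looks roughly like this (a sketch only; the handler body is hypothetical):

```
isr2:   ; on entry the CPU has already cleared IF: no further INTR is accepted,
        ; regardless of 8259 priority, until software allows it again
        push ax
        ; ... time-critical part, e.g. reading the device register ...
        sti                     ; optional: from here on the 8259 may deliver
                                ; higher-priority requests (nested interrupt)
        ; ... longer, less critical processing ...
        mov al, 20h
        out 20h, al             ; EOI: same and lower priorities become deliverable again
        pop ax
        iret                    ; restores FLAGS, and with it the interrupted code's IF
```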
4.77 million seems like a lot of instructions per second,
Yes, it is, and it's at least 4 times what an 8088 can deliver, as 4.77 MHz is the clock frequency used in the PC, not its memory bandwidth or instruction rate.
The 8088 delivers a sustained rate of around 250 thousand instructions per second (= 0.25 MIPS) (*3).
I remember that the PS/2's 6550 UART had a 16-character input buffer to lighten the CPU load.
I assume you mean National's 16550. Also, this chip was already present on many 8088-era serial cards. After all, it helps there even more, doesn't it?
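For completeness: the FIFOs are not automatic either. The 16550 powers up with them disabled and behaves like an 8250/16450 until software writes its FIFO Control Register. A minimal sketch, again assuming COM1 at 3F8h (on an older 8250/16450 this write is simply ignored):

```
        mov dx, 3FAh            ; base+2 = FIFO Control Register (write-only)
        mov al, 0C7h            ; bit 0: enable FIFOs, bits 1-2: reset both FIFOs,
                                ; bits 6-7 = 11: receive interrupt at 14 bytes
        out dx, al              ; one interrupt now covers up to 14 received characters
```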
But when interrupts from several devices like video flooded in concurrently, did interrupts sometimes not get processed in time so that important work didn't get done, or a buffer overflowed?
Hard to imagine any use case for an IBM PC delivering data that fast. The PC was designed in 1980 for a 1980 desktop use case, not high-speed data acquisition.
Block-oriented devices worked well with DMA (think floppy) at up to 400 KiB/s, usually not delivering many interrupts (a floppy hardly manages 80 per second), while character-based devices were not much more demanding. At that time a 2400 Bd modem, that is at most 240 characters per second, was already top notch for personal desktop use. Sure, faster connections existed, but they were usually handled by special hardware (HDLC and network cards *4).
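To put a number on it: at 2400 Bd with one interrupt per received character, that is at most 240 interrupts per second, so a 4.77 MHz 8088 has about 4,770,000 / 240 ≈ 20,000 clock cycles - on the order of a thousand instructions (see *3) - between two characters. Plenty to fetch a byte from the UART and stash it in a buffer.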
What happened in that case?
Nothing.
Did the system crash with an error message like "Stack overflow," or what?
DOS does not contain any code to handle this. (*5)
If it crashed, no system message was given. It just crashed with random symptoms. After all, it was neither DOS's job to handle this, nor did DOS have any say at that point.
DOS does not support virtualisation.
Detecting such conditions requires some basic hardware mechanics, like trapping on addressing issues (segment overrun); these are usually associated with memory protection and virtualisation.
As a simple real-mode CPU the 8086 did not provide any hardware means to check for segment overrun (*6). No chance to supervise stack usage or similar events. Thus
DOS cannot have any code to handle this.
Because I don't remember that happening, and I wondered at the time why it didn't.
I'd say you were lucky to only have used applications that were designed to stay within their limits.
So what about the STACKS setting in CONFIG.SYS?
With DOS 3.2 MS introduced the STACKS=m,n setting which does provide multiple stacks for interrupt handling (See this Answer for details). While this does help to manage (stack) memory needs for concurrent hardware interrupts, it does not prevent or catch overflow issues. All it does is
- taking stack load off the application (and DOS) stack by
- providing a (somewhat) private stack space and
- providing a guaranteed minimum amount of stack space,
- doing so only for DOS drivers that actively use this mechanism.
it does not
- check or handle overflow
- change classic driver behaviour
- support application-specific interrupt handling
In addition, the default value for PC-DOS 3.x is '0,0', that is, no stacks are reserved and no driver can benefit from it, unless one adds the configuration option above.
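So to get anything out of it one had to spell it out in CONFIG.SYS, for example with nine stacks of 128 bytes each (the exact values here are just an illustration):

```
STACKS=9,128
```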
*1 - It does touch interrupt management a tiny bit by abstracting the installation of a new vector (INT 21h Function 25h).
*2 - Well, except NMI and Reset that is.
*3 - An 8088 needs (at least) 4 cycles to read a byte from memory - and that includes instruction bytes. While the fastest instructions only need two (like flag modification) or three (INC/DEC of a 16-bit register) clock cycles to execute, instructions are still at least one byte in size, so they can only be fed at one per 4 clocks, giving a maximum sustained speed of ~1.2 MIPS. In reality most instructions need more than 1 byte as well as more clocks. As a result a sustained MIPS number for the 8088 is more like 250 to 300 kIPS.
[And yes, fewer cycles per instruction than are needed to fetch it makes sense, as instruction fetch is independent of execution and handled via a 4-byte prefetch queue (6 bytes on the 8086). Thus long-running instructions leave bus bandwidth for prefetching, smoothing out longer execution times. This partially asynchronous design is a major reason the 8086 family could deliver (comparably) great performance.]
*4 - Which usually also fall into the block-oriented category; thus, while delivering up to 100 KiB/s (like Ethernet), their interrupt rate was again block-dependent, and additionally regulated by flow control.
*5 - Being a strict 8086 OS, DOS does not contain services that rely on anything the 8086 does not provide. Some parts, like drivers, may use features only provided by expanded ISAs like the 286 and above, but that is out of scope for DOS itself.
*6 - 80286 and above did, but DOS, as a strict 8086 OS, did not use it.