15

DOS on the 8088 had to process interrupts and return. If a higher-priority interrupt occurred, the current handler was pushed and the higher-priority interrupt was serviced. 4.77 million seems like a lot of instructions per second, and I remember that the PS/2's 6550 UART had a 16-character input buffer to lighten the CPU load.

But when interrupts from several devices like video flooded in concurrently, did interrupts sometimes not get processed in time and important work didn't get done, or a buffer overflowed?

What happened in that case?

[Photo: a motherboard, captioned "When life was simple."]

Did the system crash with an error message like "Stack overflow," or what? Because I don't remember that happening, and I wondered at the time why it didn't. I just figured that 4.77 million must be a very big number. A 1,000 bit per second stream would still get as many as 4,770 instructions to process each bit.

PS: I'm actually interested in how this was done on the 5150, which was the slowest — not on the more advanced 286 or 386. If anybody still used DOS 3.3 on the 5150, that's the configuration I would be most curious about.

11
  • 22
    What interrupts from video? Commented Nov 17 at 6:28
  • 14
    The graphics card on 8088 systems (CGA etc.) didn't have interrupts, not even a vertical blanking interrupt (which was only added on later graphics cards). Commented Nov 17 at 6:31
  • 11
    This is a perfectly good question, although not fully relevant. So if you downvote, say why. As seen by the answer from Justme it can be answered. Commented Nov 17 at 10:59
  • 8
    @ghellquist Hover over the down-vote button, and you will read "this question does not show any research effort". Commented Nov 17 at 12:17
  • 9
    How does a picture of a motherboard add to the question about CPU interrupt handling? Commented Nov 18 at 12:53

4 Answers

12

TL;DR1: What's fast or not depends on intended use case.

TL;DR2: DOS is neither meant to handle stack overflow nor can it do so.

The PC is designed for a specific (low end) use case not needing high speed event handling.

As Justme already noted, this question rests on several rather fundamental misconceptions about hardware and software design.

DOS on the 8088 had to process interrupts and return.

DOS neither processes nor manages interrupts (*1). Machine-specific drivers (a.k.a. the BIOS) do, and even they (on the IBM PC, by default) handle only the keyboard, timer and floppy. Everything else is left to applications and/or application-specific third-party drivers.

If a higher-priority interrupt occurred, the current handler was pushed and the higher-priority interrupt was serviced.

A well-designed CPU is of course capable of managing this. There is a general interrupt lock, fully independent of interrupt priority (*2), which means that a subsequent interrupt is only allowed if software permits it (as Justme explained).

Priorities are not managed by the CPU hardware but by the 8259 interrupt controller.

4.77 million seems like a lot of instructions per second,

Yes, it is, and it's at least 4 times what an 8088 can deliver, as 4.77 MHz is the clock frequency used in the PC, not its memory bandwidth or instruction rate.

The 8088 delivers a sustained rate of around 250 thousand instructions per second (= 0.25 MIPS) (*3).

I remember that the PS/2's 6550 UART had a 16-character input buffer to lighten the CPU load.

I assume you mean National's 16550. Also, this chip was already present in many 8088 serial cards. After all, it does help there even more, doesn't it?

But when interrupts from several devices like video flooded in concurrently, did interrupts sometimes not get processed in time and important work didn't get done, or a buffer overflowed?

Hard to imagine any use case for an IBM PC delivering data that fast. The PC was designed in 1980 for a 1980 desktop use case, not for high-speed data acquisition.

Block-oriented devices worked well with DMA (think floppy) at up to 400 KiB/s while usually not delivering many interrupts (a floppy generates barely 80 per second), and character-based devices were not much more pressing. At that time a 2400 Bd modem, that is around 240 characters per second, was already top notch for personal desktop use. Sure, faster connections existed, but they were usually handled by special hardware (HDLC and network cards, *4).

What happened in that case?

Nothing.

Did the system crash with an error message like "Stack overflow," or what?

DOS does not contain any code to handle this. (*5)

If it crashed, no system message was given. It just crashed with random symptoms. After all, it was neither DOS's job to handle this, nor did it have any say at that point.

DOS does not support virtualisation.

Detecting such conditions requires some basic hardware mechanics, like trapping on addressing issues (segment overrun). These are usually associated with virtualisation.

As a simple real-mode CPU, the 8086 did not provide any hardware means to check for segment overrun (*6), hence no chance to supervise stack usage or similar events. Thus

DOS cannot have any code to handle this.

Because I don't remember that happening, and I wondered at the time why it didn't.

I'd say you were lucky to have only used applications that were designed to stay within their limits.


So what about the STACKS setting in CONFIG.SYS?

With DOS 3.2 MS introduced the STACKS=m,n setting which does provide multiple stacks for interrupt handling (See this Answer for details). While this does help to manage (stack) memory needs for concurrent hardware interrupts, it does not prevent or catch overflow issues. All it does is

  • taking stack load off the application (and DOS) stack,
  • providing a (somewhat) private stack space,
  • providing a guaranteed minimum amount of stack space,
  • and it does so only for DOS drivers actively using this mechanism.

It does not

  • check or handle overflow
  • change classic driver behaviour
  • support application specific interrupt handling

In addition, the default value for PC-DOS 3.x is '0,0', that is, no stacks are reserved and no driver can benefit from the feature unless one adds the configuration option described above; an example is shown below.
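
For illustration, a single CONFIG.SYS line enables the mechanism; the numbers here are only an example, not a recommendation:

    STACKS=9,128

This reserves nine stacks of 128 bytes each for hardware interrupt handling. The documented ranges (at least in later DOS versions) are 8 to 64 stacks of 32 to 512 bytes each; 0,0 disables the feature.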


*1 - It does touch interrupt management a tiny bit by abstracting the installation of a new vector (INT 21h Function 25h).

*2 - Well, except NMI and Reset that is.

*3 - An 8088 needs (at least) 4 cycles to read a byte from memory - and that includes instruction fetches. While the fastest instructions only need two (like flag modification) or three (INC/DEC of a 16-bit register) clock cycles to execute, instructions are still at least one byte in size, so they can only be fed at one per 4 clocks, giving a maximum sustained speed of ~1.2 MIPS. In reality most instructions are more than 1 byte long and need more clocks as well. As a result, a realistic sustained figure for the 8088 is more like 250 to 300 kIPS.

[And yes, an instruction taking fewer cycles to execute than to fetch makes sense, as instruction fetch is independent of execution and handled via a prefetch queue (4 bytes on the 8088). Thus long-running instructions leave bus bandwidth for prefetching, smoothing out longer execution times. This partially asynchronous design is a major reason the 8086 family could deliver (comparatively) great performance.]

*4 - Which usually also fall into the block-oriented category; thus, while delivering up to 100 KiB/s (like Ethernet), their interrupt rate was again block-dependent, and additionally regulated by flow control.

*5 - Being a strict 8086 OS, DOS does not contain services relying on features the 8086 does not provide. Some parts, like drivers, may use features only provided by extended ISAs like the 286 and above, but that is out of scope for DOS itself.

*6 - 80286 and above did, but DOS, as a strict 8086 OS, did not use it.

4
  • 1
    Is there supposed to be a footnote *6? Commented Nov 19 at 6:26
  • 1
    @DrSheldon Oh, yes, screwed 5 and 6 Commented Nov 19 at 14:55
  • That was a very interesting answer. I always wondered what STACKS did. Thank you! Commented Nov 21 at 8:39
  • > I assume you mean National's 16550 ==== Been a long time, been a long time, been a long lonely, lonely, lonely time! Commented Nov 21 at 8:43
37

There are many false assumptions here about how this worked.

First of all, if an interrupt handler is currently executing, it cannot normally be interrupted by another interrupt, for two reasons. Entering an interrupt clears the CPU interrupt flag, so further interrupts are disabled even if the interrupt controller signals that new ones are pending; and the interrupt controller will not even signal new interrupts until software has acknowledged the current interrupt as handled, after which other pending interrupts can be signalled to the CPU.

So, if a higher-priority interrupt did occur during execution of a lower-priority one, the current handler was most definitely not pushed anywhere and the higher-priority interrupt was not serviced; it was kept pending until the current interrupt had been handled.

Also, there were few interrupts that DOS needed to handle. In fact none, as it was the BIOS that handled interrupts, and DOS simply used the BIOS via its standard API. An 8088 DOS machine such as the IBM 5150 basically had three hardware interrupts in use by the BIOS: the system tick timer (IRQ0), the keyboard (IRQ1) and the floppy controller (IRQ6). Interrupts were possible for devices like UARTs and the parallel port, but they were not used by the BIOS, and DOS used the BIOS for accessing these.

Now, if you run any DOS program, it can do whatever it wants, such as install IRQ handlers and enable them, so what happened when programs did that depends on how correctly their programmers implemented these things.
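
As a very rough sketch of what such a program typically did (NASM-style 8086 assembly for a tiny .COM program; COM1 on IRQ4, i.e. interrupt vector 0Ch, is assumed, and all error handling plus the actual UART programming are left out):

        org 100h                ; .COM program: CS = DS = SS on entry

    start:
        mov     ax, 350Ch       ; DOS INT 21h fn 35h: get current vector 0Ch
        int     21h             ; old handler returned in ES:BX
        mov     [old_off], bx
        mov     [old_seg], es

        mov     dx, handler     ; DS:DX -> new handler (DS = CS in a .COM program)
        mov     ax, 250Ch       ; DOS INT 21h fn 25h: set vector 0Ch
        int     21h

        in      al, 21h         ; 8259 interrupt mask register
        and     al, 11101111b   ; clear bit 4 -> unmask IRQ4
        out     21h, al

        ; ... main program: enable UART interrupts, consume the ring buffer ...
        ; before terminating, the old mask and vector must be restored
        ; (a resident driver would instead stay resident via INT 21h fn 31h)
        mov     ax, 4C00h
        int     21h

    handler:
        push    ax              ; a real handler must preserve every register it uses
        ; ... read the UART here and put the byte into a ring buffer
        ;     (remember DS is whatever was active when the interrupt hit) ...
        mov     al, 20h         ; non-specific End Of Interrupt
        out     20h, al         ; tell the 8259 this interrupt has been handled
        pop     ax
        iret

    old_off dw 0
    old_seg dw 0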

One thing is for sure: video at that time did not generate any interrupts; it was not wired to do so on CGA or MDA. EGA and VGA allowed for it, but it was extremely rarely used, because it was not mandatory, implementations from different companies could differ, and people might already have other adapter cards installed on the same IRQ line, so it might not be possible to use it at all.

What happens if an interrupt is missed depends on what it is.

So, basically: if you had an 8250 UART communications program that installed a custom interrupt handler for receiving data from a serial port, and some other interrupt took too long to execute, you would get an interrupt pending about a character being received. If that isn't handled in time, another character may come in, and when the interrupt finally gets executed, the code needs to ask the UART what the reason for the interrupt is. OK, there's a status flag for a character being received. But there's also a status flag for an overrun error, which means that one or more received bytes were lost. The program can then do anything it wants with that information.
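
A minimal sketch of the receive side of such a handler (8250/16550 register layout, COM1 base address 3F8h and NASM-style syntax assumed; buffer handling is omitted):

        mov     dx, 3F8h+5      ; Line Status Register of COM1
        in      al, dx
        test    al, 00000010b   ; bit 1: overrun error - at least one byte was lost
        jz      no_overrun
        ; ... note the error, e.g. increment an overrun counter ...
    no_overrun:
        test    al, 00000001b   ; bit 0: data ready
        jz      rx_done
        mov     dx, 3F8h        ; Receive Buffer Register
        in      al, dx          ; fetch the byte (this clears "data ready")
        ; ... store AL in the program's ring buffer ...
    rx_done:
        ; with a 16550 FIFO this would loop until "data ready" is clear;
        ; afterwards send the EOI to the 8259 and iret as usual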

For DOS and the BIOS, it's much simpler. If timer interrupts are not handled in time and several of them pile up, the handler will run only once when it gets time and the tick count increases by one, so timer ticks can be lost. If a keyboard interrupt occurs, the keyboard interface is kept on hold so the keyboard cannot send more data until the interrupt is handled (on an IBM 5150). If a floppy operation is being executed, the PC is basically waiting for the completion interrupt anyway, and there is a timeout, so the operation will fail, be cancelled/ignored and be considered an error (maybe there was no floppy in the drive).

So whether a buffer overflows, or the stack overflows, or whatever, comes down to how the program works with the hardware and what it does when something goes wrong.

Also, the CPU does not see a serial transmission at bit level but at byte level, so a 1000 bps transmission will generate a byte interrupt about 100 times per second (assuming 8N1 framing). And a CPU running at 4.77 MHz does not execute 4.77 million instructions per second, as most instructions take several clock cycles to execute.
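
As a rough illustration of the margin involved: at 1000 bps with 8N1 framing (10 bits per character) a character arrives about every 10 ms, and at the roughly 250,000 sustained instructions per second quoted in another answer that leaves on the order of 2,500 instructions between receive interrupts, which is plenty as long as no other handler or interrupts-disabled section hogs the CPU for longer than that.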

7

Given that any program can execute CLI and thus disable interrupts for as long as it likes -- yes, it is entirely possible to fail to respond to interrupts in a timely manner.

The effect depends on how the particular hardware reacts to not being able to get its interrupt serviced.
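
A trivial sketch of the pattern that causes this (assembly; the length of the protected section is the whole point):

    cli                     ; interrupts off
    ; ... lengthy code that must not be interrupted,
    ;     e.g. a software timing loop or timer reprogramming ...
    sti                     ; interrupts on again; anything that arrived in the
                            ; meantime is serviced only now, and a UART without
                            ; a FIFO may already have overrun and lost characters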

1
  • 1
    Isn't this answering the imaginary question "what could happen?" (what is theoretically possible) rather than the question asked: "what typically did happen back then?" (Refer to actual question for correct wording.) Commented Nov 18 at 12:10
5

In 1992 I created a pair of DOS TSRs (Terminate and Stay Resident programs) to share modems across a Novell network. One TSR was installed on the PC that had a modem, and it handled the 16550 UART interrupts. The other went on to a PC that wished to use one of the shared modems. Then a terminal program could be used to request an available modem and transparently use it as if it was connected to that PC. I got it to work well, which was an achievement over 30 years ago, with basically no tools, no way to debug, and only two books from the local bookstore to give me any info.

Both TSRs had to be able to stream bidirectionally through IPX, the Novell packet interface. It is basically equivalent to UDP. I constructed a simple streaming protocol with sliding windows, retransmission and send credits, a lot like TCP. I didn't use SPX because the additional driver took up too much memory on the anemic PCs that the customers used. But I didn't have any problems with lost interrupts in the networking code, as best I know. A lost interrupt would mean a chunk of data missing, and the streaming method would request a retransmission.

The UART handling code was less forgiving. The 16550 has a 16 byte buffer, and lost characters could not reasonably be requested again. Some characters would just go missing, and in the middle of an image, that was pretty damaging. The TSRs overall seemed reliable except in one particular situation, and I was not able to determine the cause. Still it was a useful system and also easy to use. I thought about modifying it to allow sharing printers, but I think Novell introduced that at some point, or Windows For Workgroups did or something.

The only case where DOS would hang was when I was developing the IPX handling and there was a missing line of code in the book I was using. After rebooting a hundred times and having no conceivable way to debug a memory-resident driver with no user interface, I went back to the bookstore to get another book. Added one line, and it worked.

2
  • 3
    I also debugged a case where a MUX was adding XON and XOFF flow control characters. I added escaping with DLE to the binary data transmission method to detect and correct for that issue. ASCII was well-designed because they knew they would need DLE. Commented Nov 19 at 17:46
