Self-taught and learning OS development - looking for advice and help
It gets pretty overwhelming sometimes, and I’m not always sure if I’m going in the right direction.
If anyone here is experienced and willing to offer some advice, guidance, or even just answer a few questions now and then, I’d really appreciate it.
Thanks a lot for reading.
Andrew S. Tanenbaum created the Minix OS and wrote some very good books on system-level design, though they date to the 1980s. They helped Linus Torvalds create Linux, for which the source code is not only easily available, but so are instructions on how to customise and build your own Linux kernel.
For design from the bottom up, one of the best sources is Edsger Dijkstra's T.H.E. OS nucleus, where he strips everything down to fundamental operations. There was a sample implementation in Dr. Dobb's Journal, back in the previous century.
A practical view of such a minimalist OS can be found in the Commodore Amiga Exec. The full Amiga OS consisted of several layers, but Exec was the underpinning that made it the first commercial pre-emptively multi-tasking PC OS. Exec was built out of a set of doubly-linked lists, and there were lists for task dispatching, library searches and pretty much any other low-level resource that you can think of. Although it wasn't specifically designed as OOP, it maps very well to C++, and in fact, at one point, SAS was marketing my implementation of C++ with Amiga OS support.
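To show how naturally the Exec style maps to C++, here is a minimal sketch of an intrusive doubly-linked list. The names (Node, List, addTail, remHead) echo Exec's API but are illustrative, not the actual Amiga headers:

```cpp
#include <cassert>

// Intrusive node: the links live inside the object itself, so a task,
// library, or message can sit on a list without extra allocation.
struct Node {
    Node* next = nullptr;
    Node* prev = nullptr;
};

struct List {
    Node* head = nullptr;
    Node* tail = nullptr;

    void addTail(Node* n) {          // append, in the spirit of Exec's AddTail()
        n->prev = tail;
        n->next = nullptr;
        if (tail) tail->next = n; else head = n;
        tail = n;
    }

    Node* remHead() {                // pop front, in the spirit of Exec's RemHead()
        Node* n = head;
        if (!n) return nullptr;
        head = n->next;
        if (head) head->prev = nullptr; else tail = nullptr;
        return n;
    }
};
```

A dispatcher built this way is just a List of ready tasks: remHead() picks the next one to run, addTail() requeues it when its turn is over.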
There are likely other more modern resources you can tap into, although I'm not informed of them.
A good start, however, would be to become familiar with threading, as unless you want to re-create DOS, you need to know how threads and thread resource management work. Another key thing to understand is interrupts.
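A first taste of threading can be had in ordinary user-space C++ before touching kernel code. This toy (all names are made up) shows the one lesson that matters most: shared state must be protected, here with a mutex:

```cpp
#include <thread>
#include <mutex>

// Two worker threads each increment a shared counter 'iters' times.
// The lock_guard serializes the increments; without it, the two
// read-modify-write sequences would race and updates would be lost.
long run_counter(int iters) {
    long counter = 0;
    std::mutex m;

    auto worker = [&] {
        for (int i = 0; i < iters; ++i) {
            std::lock_guard<std::mutex> lock(m);  // acquire, ++, release
            ++counter;
        }
    };

    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    return counter;   // exactly 2 * iters, because every increment was locked
}
```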
Experience keeps a dear School, but Fools will learn in no other.
---
Benjamin Franklin - Postal official and Weather observer
I'm learning all of this on my own, mostly from scattered books, online bits, and whatever I can find. So hearing from someone with this kind of deep background honestly means a lot. I’m still wrapping my head around interrupts, threading, and just how all the low-level parts work together.
I’ll definitely look into T.H.E., Minix, and Amiga Exec like you mentioned; I didn’t even know about some of these. If you have any suggestions on how I could approach these topics step by step, or what to focus on first, I’d really appreciate it.
Also, just to add I’m not coming from a computer science background, so I don’t have a structured path or formal learning. I’ve just been picking things up in whatever order I come across them, which I know isn’t ideal.
If anyone has advice on how I should approach this from the ground up, like what to focus on first or how to build a solid foundation, I’d really appreciate the guidance. It’s a bit overwhelming at times, so any help would mean a lot.
Thanks again for taking the time to share all this. It’s hard doing this solo, so guidance like this is gold.
The very first stored-program computers had no OS. Every single instruction that was executed had to be coded by hand and punched into paper tape, Hollerith cards or the like. That includes all of the I/O instructions, and I/O devices back then were really, really stupid. So stupid that the CPU had to read and decode each column of a punched card manually as it passed through the card reader.
Obviously, lots of programs would need to be able to read all 80 columns of a punched card and generally even do so for multiple cards, so people developed "read-a-card" subroutines. At first you'd just add a copy of the read-a-card routine to the back of your own program code and load and run that program.
Incidentally, to load your program, you'd generally key in a short bootstrap program using the console switches on the computer. This was true even for the early-generation PCs such as the Altair and IMSAI systems. The bootstrap would read in the application and jump to it. In my very first PC, which was about 2nd-generation, the bootstrap was in ROM that was called when you pressed the reset button so you didn't have to flip all the switches each time. Much more civilized.
As computers became more sophisticated they added tape and disk drives, which were more demanding in their programming. So you'd have a whole set of libraries to deal with.
Eventually, it got to the point where you just expected those drivers to be available to all apps and the first "monitor systems" were born.
Alongside this came smarter peripheral devices. Rather than relying on the CPU to handle all their work, you'd have some control electronics in the peripheral (or at least an attached control unit) that could handle the grunt work. The concept of interrupts came along at this point. Rather than have the CPU halt and wait for all 80 columns of a card to be read, instead the CPU would send a signal to the controller to read a card and the controller would notify the CPU when it was done so the (very expensive) CPU could continue to do computing work in the mean time.
Now you could just periodically poll the peripheral controller to determine when it was done, but what if the controller could tap the CPU on the shoulder, so to speak, and let it know when the data was ready? That's what interrupts are. They literally interrupt what the CPU was doing and notify the CPU that work was complete (or failed!) and the CPU could make a note of that and test later without having to talk to the controller.
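The flavor of this can be sketched in standard C++, with a second thread standing in for the I/O controller. It is only an analogy (a real interrupt preempts the CPU asynchronously; here the "CPU" checks a completion flag), and every name below is made up:

```cpp
#include <atomic>
#include <thread>

// The completion flag is the software stand-in for the interrupt line.
std::atomic<bool> io_done{false};

// A second thread plays the device controller: it "reads the card"
// and then raises the flag to say the transfer is finished.
void controller() {
    io_done.store(true);         // signal completion to the "CPU"
}

// The CPU kicks off the I/O and keeps doing useful work in the
// meantime, instead of halting until all 80 columns have arrived.
int busy_work_until_io() {
    io_done.store(false);
    std::thread dev(controller);
    int work = 0;
    while (!io_done.load())      // computing continues while I/O runs
        ++work;
    dev.join();
    return work;
}
```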
Now we're basically at the level of MS-DOS. You have a core program (the BIOS), which is typically in ROM, and a control program such as COMMAND.COM that's responsible for talking to the user and accepting commands as input text to be interpreted and run.
Interrupts became much more important when systems started multi-tasking. A multi-tasking supervisor is responsible for making the CPU more efficient by avoiding "dead" time - time spent waiting for I/O to complete. Most apps are I/O-bound, so they spend a lot of time waiting. In a multi-tasking system, a waiting task is put in a wait queue and the supervisor checks for tasks that no longer need to wait. It picks one and resumes that task until it, too, needs to wait. Or, alternatively, until a "yield" call is made (co-operative multitasking) or a time slice expires (pre-emptive multitasking), at which point the wait/ready queue is again polled for an eligible task to run.
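The ready-queue mechanics can be sketched as a toy round-robin dispatcher. This is illustrative only: each "task" is just a name and a count of time slices it still needs, whereas a real scheduler also saves and restores CPU state:

```cpp
#include <deque>
#include <string>
#include <utility>
#include <vector>

// Run tasks round-robin, one time slice at a time, until all finish.
// Returns the order in which slices were dispatched, for inspection.
std::vector<std::string>
run_round_robin(std::deque<std::pair<std::string, int>> ready) {
    std::vector<std::string> trace;
    while (!ready.empty()) {
        auto task = ready.front();       // pick the next ready task
        ready.pop_front();
        trace.push_back(task.first);     // "run" it for one time slice
        if (--task.second > 0)
            ready.push_back(task);       // slice expired: back of the queue
        // otherwise the task has finished and is simply dropped
    }
    return trace;
}
```

With task A needing two slices and B one, the dispatch order comes out A, B, A: each task gets a turn before anyone gets a second one.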
At that point, you've reached the level of the Amiga Exec. But there's more. You add memory-management hardware.
Hardware memory management comes in two varieties: 1) it keeps programs from stomping all over each other if someone did some sloppy coding. 2) it facilitates virtualization of RAM.
As CPUs got more powerful, but RAM was still expensive, it became desirable to run more tasks than could fit into RAM at the same time. So mechanisms were developed to swap out waiting tasks from RAM to disk and swap in a ready task from disk to RAM. This was the basis for many early time-sharing systems in particular, since the absolute slowest peripheral device on an interactive application is the drooling user sitting at the terminal.
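The bookkeeping behind virtualized RAM can be sketched as a toy page-table lookup. The sizes and names here are made up for readability (256-byte pages keep the arithmetic simple); real MMUs do this in hardware with multi-level tables:

```cpp
#include <cstdint>
#include <unordered_map>

constexpr uint32_t PAGE_SIZE = 256;   // toy page size

// Translate a virtual address to a physical one: split it into a page
// number and an offset, then look the page up in the table, which maps
// virtual pages to physical frames. A missing entry is a page fault,
// meaning the page is on disk (swapped out) and must be brought in.
int64_t translate(const std::unordered_map<uint32_t, uint32_t>& page_table,
                  uint32_t vaddr) {
    uint32_t vpage  = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    auto it = page_table.find(vpage);
    if (it == page_table.end())
        return -1;                                    // page fault
    return int64_t(it->second) * PAGE_SIZE + offset;  // frame base + offset
}
```

Because the table is the only thing that ties virtual pages to physical frames, the OS can move a page to disk, reuse the frame, and later reload the page into any free frame, all without the application noticing.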
The original Amiga units did not have a memory-management unit (MMU), so memory-stomping was all too common, generally resulting in the dreaded Guru Meditation Error. Later machines had the hardware, but the OS never got upgraded for it, since Commodore proved that a superior product doesn't guarantee commercial success and went defunct. However, Linux ran on high-end Amigas and could use their MMUs. By the time Linux came along, an MMU was pretty much standard on all computer hardware.
You can re-trace a lot of this yourself. There is a class of microcontroller known as the AVR which is quite inexpensive. They are low-end units by today's standards, although some approach or exceed the capabilities of 1960s mainframe computers. The most famous representative of this is the Arduino.
AVR systems don't have an OS. In the Arduino environment they have a minimal control program whose primary function is to call 2 user-coded functions named setup() and loop(). The setup() function is where you initialize things, including setting up the I/O ports. The loop() function is called repeatedly thereafter. That's it.
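That whole runtime can be sketched as a host-runnable stand-in. The real Arduino core also services timers and serial, and its loop truly never ends; here the repetition is capped so the sketch terminates on a desktop machine:

```cpp
// setup() and loop() are the two standard user-coded Arduino entry
// points; blink_count stands in for toggling an LED on a real board.
int blink_count = 0;

void setup() {            // runs once: configure pins, ports, etc.
    blink_count = 0;
}

void loop() {             // runs repeatedly thereafter
    ++blink_count;
}

// Stand-in for the hidden main() the Arduino core provides: call
// setup() once, then loop() over and over ('iterations' times here,
// forever on real hardware).
int run_sketch(int iterations) {
    setup();
    for (int i = 0; i < iterations; ++i)
        loop();
    return blink_count;
}
```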
Over the years, a lot of I/O support libraries have been published for AVRs, as they have become one of the primary drivers of the Internet of Things. So you can also take advantage of pre-written device drivers. For the record, this is mostly what I work on these days. I have all sorts of sensors and controllers built on them.
Just for info, programming for the Arduino has generally been done in a dialect of C++, but for some of the smarter microcontrollers, there's a dialect of Python known as CircuitPython that's popular.
Experience keeps a dear School, but Fools will learn in no other.
---
Benjamin Franklin - Postal official and Weather observer
Bhushitha Hashan wrote:Hey everyone, I’m teaching myself OS development and low-level programming completely on my own and I’ve got no formal background, just learning from books, online resources, and experimenting.
It gets pretty overwhelming sometimes, and I’m not always sure if I’m going in the right direction.
Hey! I don't have the deep low level experience that Tim has but I can at least give some "free advice". (The most abundant thing on the Internet...)
I started out self-taught, learning on my own too. I later got a "formal education", CS etc etc blah blah... but I feel I learned more practical skills on my own. I am biased obviously, but I don't think there's anything non-ideal about an unstructured learning path at all.
What is it you're interested in building? What is it that makes you interested in low level or OS development in particular?
What got me started is having something specific that I wanted to build. Without anything to restrain my ambitions I failed.. oh well. But I learned C as a byproduct.
When you said "low level" it made me think: Arduino (or something like it). That's not Linux, but it's very low level if that's all you're looking for. You can do some cool stuff with it like robotics. I haven't gone into that one yet myself but it's on my (already too long) TODO list.
Tim Holloway wrote:Let me go WAYYYY back in time, before even the dinosaurs.
Thank you so much, Mr. Tim, for this amazing reply. I truly appreciate the time, effort, and depth you put into explaining all this. Interestingly, these exact foundational stories and low-level details are what pulled me into this whole journey: trying to understand how things really work beneath the programs and applications we use every day.
Lately, I’ve been diving into topics like memory and operating systems, and I’ve kind of hit a wall around the concept of virtual memory. It’s been a bit confusing to wrap my head around how it works under the hood.
But your post reminded me why I started this journey in the first place, and it really encouraged me to keep going. So thank you again; this means a lot.
Lou Hamers wrote:I started out self taught learning on my own too. I later got a "formal education" CS etc etc blah blah.. but I feel I learned more practical skills on my own. I am biased obviously, but I don't think there's anything non-ideal about an unstructured learning path at all.
Thanks so much for the encouragement and for sharing your story, Mr. Lou. It really means a lot. It’s always reassuring to hear from someone who also started out self-taught and found their own path. It gives me hope that learning by doing can go a long way, even without a formal background.
As for what got me into low-level and OS development... honestly, it’s been a winding journey. I started out learning Java and diving into object-oriented programming and software engineering. But after a while, all the layers of abstraction started to frustrate me. I wanted to know what was really happening under the hood.
That curiosity pulled me deeper: I began exploring how applications communicate over networks, how containers like Docker and virtual machines work, and how operating systems control it all. Then came more questions: how does the OS boot? How does it manage memory and talk to hardware? What exactly happens when the CPU is idle? How does it go from an idle state to executing its first instruction? What does UEFI do? How are interrupts triggered?
And the deeper I went, the more fascinated I became.
Eventually, all that curiosity gave me this (admittedly overly ambitious for someone who only recently wrote their first `System.out.println`) dream: to one day build a tiny operating system of my own, even if it’s just for learning. I know it’s a huge goal, and I’m still in the early stages, currently stuck trying to wrap my head around interrupts and virtual memory. But I genuinely enjoy the process and want to keep pushing forward.
If anyone here is open to offering even small bits of guidance now and then, I’d be so happy to connect. I totally understand everyone has their own busy schedules, and I’d never want to be a bother, but even the smallest insight from experienced folks makes a huge difference to someone just starting out like me. Not just in OS development, but in any area of computer science.
Thanks again!!! Messages like yours are exactly what keep the motivation alive. 🙏
It's redundant and a little confusing.
We've no problems with quoting excerpts to make a point, but repeating the whole message just wastes space.
Experience keeps a dear School, but Fools will learn in no other.
---
Benjamin Franklin - Postal official and Weather observer
One simple OS is something like DOS, which doesn't have to worry about multi-tasking.
A really, REALLY basic OS might be a FORTH interpreter or something like that. I did one of those back when a Z-80 was state-of-the-art. Had a lot of fun with it.
There's a specialized kind of OS known as an RTOS (Real-Time Operating System) which is custom-designed to run specific tasks and use specific hardware, as opposed to general OS's like Linux and Windows. The ESP32 chip is normally programmed that way, layering user app code over a general kernel, but that's a non-trivial system as far as I'm concerned.
If you'd like a high-level RTOS approach, you MIGHT be able to find the Concurrent Pascal system. It pioneered some of the early concurrency-management concepts. It was designed in the late 1970s, I think, by Per Brinch Hansen. It's a specialized dialect of Pascal that contains constructs for task communication and synchronization.
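The monitor idea that Concurrent Pascal built into the language can be sketched in modern C++ as a lock and condition variable bundled together with the data they protect. Class and method names here are illustrative, not Brinch Hansen's:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Monitor-style mailbox: all shared state is private, and every public
// operation takes the lock, so callers can never touch the queue in an
// inconsistent state - the discipline Concurrent Pascal enforced by design.
class Mailbox {
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> messages;
public:
    void send(int msg) {
        std::lock_guard<std::mutex> lock(m);
        messages.push(msg);
        cv.notify_one();                 // "signal" in monitor terms
    }

    int receive() {                      // blocks until a message arrives
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return !messages.empty(); });  // "wait"
        int msg = messages.front();
        messages.pop();
        return msg;
    }
};
```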
I mentioned the Amiga Exec. Unfortunately, it's not open-source, but it has been copiously documented and because of its inherent simplicity, quite easy to re-invent. First you write a set of doubly-linked list routines...
I don't doubt I could come up with more, but that's a start.
Experience keeps a dear School, but Fools will learn in no other.
---
Benjamin Franklin - Postal official and Weather observer
Hi Bhushitha, this is a very impressive journey; respect for diving into OS development solo! Focus on small, achievable goals like building a bootloader or a basic kernel first. Use resources like OSDev and coderanch, the "Writing a Simple Operating System" tutorial, and join relevant forums or Discord communities. Document your progress, ask questions freely, and don't stress over perfection; learning happens in layers. Happy hacking, and feel free to reach out anytime you need help!
Regards
Ajay Hinduja Geneva, Switzerland (Swiss)
Bhushitha Hashan wrote:Hey everyone, I’m teaching myself OS development and low-level programming completely on my own and I’ve got no formal background, just learning from books, online resources, and experimenting.