Timeline for What was the earliest system to explicitly support threading based on shared memory?
Current License: CC BY-SA 4.0
33 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Nov 18, 2021 at 19:10 | comment added | cup | | In the 90s, SunOS had fibers, which ran one level below threads. No idea what all that was about, but users were advised to use threads instead of fibers even though fibers were quicker. |
| Feb 12, 2021 at 22:40 | vote accept | rwallace | | |
| Feb 12, 2021 at 20:49 | comment added | Peter Cordes | | @GrahamNye: Or making emulation correct if you're not willing to pay the perf cost, which qemu-user apparently isn't: it just JITs x86 loads/stores to plain ARM weakly-ordered loads/stores! However, ARMv8.3 should make it not too bad to translate all loads to ldapr (acquire "partial" order, unlike ARMv8.0 LDAR sequential-acquire) and all stores to stlr (release). But I assume that would block any speculative load reordering (which real x86 does, with rollback on memory-order mis-speculation, because most x86 loads aren't to shared memory). (See the litmus-test sketch after the timeline.) |
| Feb 12, 2021 at 19:12 | comment added | Graham Nye | | @RossPresser Total Store Ordering. A way of using memory (i.e. stores) similar to that of an x86 processor, thus making x86 emulation faster on Apple's M1 (ARM) processor. (reference) |
| Feb 12, 2021 at 16:57 | comment added | ninjalj | | @RossPresser: TSO = Total Store Order |
| Feb 12, 2021 at 16:13 | answer added | user20813 | | timeline score: 5 |
| Feb 12, 2021 at 15:26 | comment added | marshal craft | | I think modern "Windows" always incorporated this notion, coequally termed multi-tasking. It was necessary for the literal windows feature to work. But it would have been designed as processes, which all bid for execution time. No notion of threads until Intel created that. |
| Feb 12, 2021 at 15:21 | comment added | marshal craft | | As far as I can tell, a process in Windows terms is not a thread/parallel-pipeline notion at all. It's merely a data structure associated with one or more threads of execution. |
| Feb 12, 2021 at 15:01 | comment added | Ross Presser | | Stupid question: what is meant by this use of the acronym TSO? Time-Sharing Option seems wrong. Thread-state object? Thread-safe operations? Something else? |
| Feb 12, 2021 at 14:24 | history edited | user3840170 | CC BY-SA 4.0 | more precise title based on the question body; adjust tags |
| Feb 12, 2021 at 10:01 | answer added | Stilez | | timeline score: 1 |
| Feb 12, 2021 at 9:43 | answer added | rcgldr | | timeline score: 2 |
| Feb 12, 2021 at 0:28 | history | | | became hot network question |
| Feb 11, 2021 at 21:56 | comment added | supercat | | @moonwalker: For a single-core system, or a multi-core system with coherent memory, a mutex could be synthesized using "ordinary" memory accesses. Things like atomic test-and-set or compare-exchange could make things much more efficient, but some simple constructs like a "hand-off mutex" (once one side releases control, it won't reclaim it until the other side acquires and releases it) could be managed just fine with simple flags. (See the hand-off sketch after the timeline.) |
| Feb 11, 2021 at 20:53 | answer added | davidbak | | timeline score: 13 |
| Feb 11, 2021 at 20:52 | comment added | dave | | <digress> Why threads? Because you want either a basically asynchronous system model (like VMS) or the ability to run multiple synchronous execution threads. Single-threaded execution with synchronous system calls, a la Unix, is just miserable. </digress> |
| Feb 11, 2021 at 20:40 | answer added | RETRAC | | timeline score: 18 |
| Feb 11, 2021 at 20:34 | comment added | Graham Nye | | "but I digress" Don't worry, this is RC.SE. Digression is pretty much welcomed - as long as it's an interesting digression. |
| Feb 11, 2021 at 20:25 | comment added | rwallace | | @SolomonSlow Yep. Or put another way: it's easy to read the source code of a multithreaded program and think we understand it. But reaching a high degree of assurance that our understanding matches what's really going on, that there are no lurking heisenbugs? That's another order of difficulty altogether. |
| Feb 11, 2021 at 20:23 | answer added | Walter Mitty | | timeline score: 3 |
| Feb 11, 2021 at 19:40 | comment added | Solomon Slow | | Re: "threads...Bad Idea" Like many ideas, it depends on whose needs the idea meets. In an application that has to wait for several different, unsynchronized sources of input, IMO it is much easier for a person to read and understand the source code of a multi-threaded implementation than an event-driven version, assuming that the reader was trained, like most of us, to read and write pure procedural code before all else. The downside is, of course, that there are a lot of subtle ways for somebody who writes a multi-threaded application to get into deep trouble. |
| Feb 11, 2021 at 19:04 | answer added | Raffzahn | | timeline score: 3 |
| Feb 11, 2021 at 18:52 | answer added | manassehkatz-Moving 2 Codidact | | timeline score: 15 |
| Feb 11, 2021 at 18:14 | comment added | rwallace | | @moonwalker Yep. By 'the former case' I mean the case of 'you have to remember to use synchronization primitives or you have bugs'. |
| Feb 11, 2021 at 18:09 | comment added | moonwalker | | @rwallace in the former case you have bugs, not threads. If your code is working on your machine but breaking on your neighbor's, or working correctly in the summer but breaking in the winter, and the reason is you couldn't be bothered about thread safety - you didn't write a program, you wrote a bug. Again, if you want your code to always work correctly in a multi-threaded model you have two options - use synchronization primitives or don't share any data. |
| Feb 11, 2021 at 18:05 | answer added | John Dallman | | timeline score: 11 |
| Feb 11, 2021 at 18:03 | answer added | moonwalker | | timeline score: 20 |
| Feb 11, 2021 at 18:02 | answer added | dave | | timeline score: 18 |
| Feb 11, 2021 at 18:01 | comment added | tofro | | You got that history the wrong way round. Threads occurred first on Unix, notably Sun Solaris (as Lightweight Processes, LWP), before Windows even knew what that might be. |
| Feb 11, 2021 at 17:41 | comment added | rwallace | | @moonwalker Yep. But 'you have to remember to use synchronization primitives or you will get heisenbugs' is not the same thing as 'you have to remember to transfer ownership of the block of data with a message or your program will immediately visibly fail to work'. Roughly speaking, in the former case you have threads and in the latter case you have processes. |
| Feb 11, 2021 at 17:34 | comment added | moonwalker | | For multi-threaded access to shared data to be safe without synchronization primitives, your execution order has to be naturally deterministic, which, taking into account latency variations even for things like RAM access, would be rather difficult to achieve without bottlenecking the CPU on other elements of the computer system, and at that point you might as well not bother with multi-threading at all. |
| Feb 11, 2021 at 17:30 | comment added | moonwalker | | > 'thread A pokes a value into their shared memory space, thread B expects to read that value a few microseconds later without explicitly passing a message or copying or even transferring ownership of a block of memory' This will work safely only in the most trivial cases. To avoid bugs most of the time you'd need to explicitly use various synchronization primitives (e.g. mutexes in most languages or something like channels in Go), though incorrect use of those primitives can lead to bugs of its own. (A minimal mutex sketch appears after the timeline.) |
| Feb 11, 2021 at 16:24 | history asked | rwallace | CC BY-SA 4.0 | |
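
A note on the memory-ordering exchange above: the gap between qemu-user's plain load/store translation and the ldapr/stlr mapping Peter Cordes describes shows up in the classic message-passing litmus test. The sketch below is a hypothetical C++ illustration, not qemu code; it uses `std::memory_order` to stand in for the two translation strategies. Mapping every guest load to an acquire load and every guest store to a release store forbids the outcome `r1 == 1 && r2 == 0`, just as x86-TSO does, while relaxed (plain) loads and stores would allow it on a weakly-ordered machine such as ARM.

```cpp
// Message-passing litmus test. On x86-TSO the outcome r1==1 && r2==0 is
// impossible; with plain (relaxed) ARM loads/stores it can occur. Mapping
// loads to acquire and stores to release (the ldapr/stlr scheme mentioned
// in the comments) rules it out again. Hypothetical sketch, not qemu code.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> data{0}, flag{0};

// Pass std::memory_order_relaxed to model plain ARM loads/stores, or
// release (stores) / acquire (loads) to model the TSO-preserving mapping.
void writer(std::memory_order store_order) {
    data.store(1, store_order);  // guest store #1
    flag.store(1, store_order);  // guest store #2
}

void reader(std::memory_order load_order, int& r1, int& r2) {
    r1 = flag.load(load_order);  // guest load #1
    r2 = data.load(load_order);  // guest load #2
}

int main() {
    int r1 = 0, r2 = 0;
    std::thread t1(writer, std::memory_order_release);
    std::thread t2(reader, std::memory_order_acquire, std::ref(r1), std::ref(r2));
    t1.join();
    t2.join();
    // With release/acquire, seeing r1 == 1 guarantees r2 == 1.
    std::printf("r1=%d r2=%d\n", r1, r2);
}
```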
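
supercat's "hand-off mutex" can likewise be sketched in a few lines. This is an illustrative reconstruction, not code from the thread: a single turn flag is only ever written by its current owner, so on a coherent machine plain loads and stores suffice at the hardware level, with no test-and-set needed. `std::atomic` with acquire/release is used here only so the C++ itself is free of data races.

```cpp
// Hand-off mutex between exactly two threads using one simple flag.
// Each side writes the flag only to pass ownership, and proceeds only
// when the flag says it is its turn - so once a side releases control,
// it cannot reclaim it until the other side acquires and releases it.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> turn{0};  // 0: side A owns the shared data, 1: side B

void side(int me, int other) {
    for (int i = 0; i < 3; ++i) {
        // Spin until the other side hands us ownership.
        while (turn.load(std::memory_order_acquire) != me) { /* spin */ }
        std::printf("side %d works on the shared data\n", me);
        // Hand ownership to the other side.
        turn.store(other, std::memory_order_release);
    }
}

int main() {
    std::thread a(side, 0, 1);
    std::thread b(side, 1, 0);
    a.join();
    b.join();
}
```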
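
Finally, a minimal sketch of moonwalker's point about synchronization primitives, assuming a C++ setting rather than the Go channels also mentioned: the same shared-counter code is a data race without the lock and well-defined with it.

```cpp
// Two threads increment a shared counter. Without the lock_guard this is
// a data race (the classic heisenbug); with it, the result is always
// 200000. Illustrative sketch only.
#include <cstdio>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex m;

void add_many() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(m);  // remove this line -> race
        ++counter;
    }
}

int main() {
    std::thread t1(add_many), t2(add_many);
    t1.join();
    t2.join();
    std::printf("%d\n", counter);  // always 200000 with the lock held
}
```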