Timeline for Why (not) segmentation?

Current License: CC BY-SA 3.0

13 events
when | what | by | license | comment
Jun 12, 2017 at 13:32 comment added user Back in 1985 when the 386 was introduced, a 4 GiB address space was considered to be enormous. Remember that a 20 MiB hard disk was rather large at the time, and it still wasn't entirely uncommon for systems to come with only floppy disk drives. The 3.5" FDD was introduced in 1983, sporting a formatted capacity of a whopping 360 KB. (1.44 MB 3.5" FDDs became available in 1986.) To within experimental error, everybody back then thought of a 32-bit address space as we now think of 64 bits: physically approachable, but so large as to be practically infinite.
Nov 2, 2015 at 2:24 comment added Greg A. Woods 80386 address offsets are 32 bits! 80386 segments can be paged!
Dec 23, 2013 at 19:51 comment added supercat @zvrba: Even if a machine can handle 64 bit addresses directly, that doesn't mean that it can shuffle them around as efficiently as it could 32-bit object references. Many applications' performance is very dependent upon caching efficiency, and a cache of a given size will be able to hold twice as many 32-bit references as 64-bit references.
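The arithmetic behind supercat's caching point can be sketched in C. (The 64-byte cache line is an assumption for illustration; actual line sizes vary by CPU, and `refs_per_line` is a name invented here.)

```c
#include <stddef.h>

/* How many object references fit in one cache line?  Assuming a
   64-byte line (common, but CPU-dependent), a line holds twice as
   many 32-bit references as 64-bit ones. */
size_t refs_per_line(size_t ref_size_bytes) {
    const size_t cache_line_bytes = 64; /* assumption */
    return cache_line_bytes / ref_size_bytes;
}
```

With these assumptions, `refs_per_line(4)` gives 16 and `refs_per_line(8)` gives 8, which is the factor-of-two cache density the comment describes.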
Dec 23, 2013 at 19:40 comment added supercat @zvrba: I actually like 8086-style segmentation better than 80286-style. If the 80386 had provided a mode which combined 8086-style segmentation with paging, but had extended the segment registers to 32 bits, it would have been very easy for object-oriented frameworks to access up to 64GB using 32-bit object references. If e.g. the top 4 bits of a segment register were a selector and the bottom 28 bits were an offset that was scaled by an amount which could be set for the 16 segment groups, a framework could efficiently handle even more memory while retaining 32-bit object references.
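supercat's hypothetical scheme (never implemented in any real x86) could be decoded roughly as below; the names `scale_shift` and `linear_address` are inventions for this sketch, and the per-group scale is modeled as a left shift.

```c
#include <stdint.h>

/* Hypothetical decode: the top 4 bits of a 32-bit segment value pick
   one of 16 segment groups; the bottom 28 bits are a base offset
   scaled by a per-group amount (here, a shift set by the framework). */
static uint64_t scale_shift[16];

uint64_t linear_address(uint32_t seg, uint32_t offset) {
    uint32_t group = seg >> 28;                       /* 4-bit selector */
    uint64_t base  = (uint64_t)(seg & 0x0FFFFFFFu) << scale_shift[group];
    return base + offset;                             /* 32-bit offset added on */
}
```

With a scale shift of 4 (16-byte granules), each group spans 2^28 × 16 = 4 GiB, so 16 groups reach 64 GiB while object references stay 32 bits wide, matching the figure in the comment.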
Aug 11, 2011 at 9:59 comment added zvrba You have completely misunderstood the segmentation. In 8086 it might have been a hack; 80286 introduced protected mode where it was crucial for protection; in 80386 it was even further extended and segments can be larger than 64kB, still with the benefit of hardware checks. (BTW, 80286 did NOT have an MMU.)
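For reference, the original 8086 real-mode translation that the "hack" remark refers to is simple enough to state in one line of C (the function name is invented here):

```c
#include <stdint.h>

/* Real-mode 8086: the 16-bit segment is shifted left by 4 and added
   to the 16-bit offset, producing a 20-bit linear address.  Many
   segment:offset pairs alias the same linear address. */
uint32_t real_mode_linear(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}
```

For example, 0x1234:0x0005 and 0x1230:0x0045 both map to linear address 0x12345, which is exactly the aliasing that made real-mode segmentation painful to program against.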
Aug 10, 2011 at 21:09 comment added Deleted @Martin Beckett: Me too. I spent a whole week trying to "get" x86 segmentation once.
Aug 10, 2011 at 20:50 comment added Patrick Hughes @Martin Beckett lmao =) I'm with ya on that, brother.
Aug 10, 2011 at 19:15 comment added Martin Beckett Thanks, you have brought back all the repressed memories of doing image processing on segmented memory - this is going to mean more therapy!
Aug 10, 2011 at 18:54 history migrated from stackoverflow.com (revisions)
Aug 10, 2011 at 17:56 comment added Mr. Shickadance This all makes sense. From my experience it's quite easy to ask yourself 'Why?' when reading the Intel manuals. I'm just going to leave this open a bit before accepting this answer, to perhaps get more perspective. Thanks!
Aug 10, 2011 at 17:37 comment added parsifal It's been a long time since I paid attention to the details of the Intel memory architecture, but I don't think that the segmented architecture would provide any greater hardware protection. The only real protection that an MMU can give you is to separate code and data, preventing buffer overrun attacks. And I believe that's controllable without segments, via page-level attributes. You could theoretically restrict access to objects by creating a separate segment for each, but I don't think that's reasonable.
Aug 10, 2011 at 17:22 comment added Mr. Shickadance Ok, that all makes sense. However, reading the Intel documents one would be inclined to think segments could actually be used for greater hardware level protection against program bugs. Specifically section 3.2.3 of the Systems Programming Guide - are there advantages to the multi-segment model? Would it be correct to say Linux uses the protected flat model? (section 3.2.2)
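The "protected flat model" mentioned in section 3.2.2 comes down to segment descriptors with base 0 and a limit covering the full 4 GiB, so that segmentation is effectively neutralized and paging does the protection work. A sketch of the descriptor encoding (field layout per the Intel SDM; the helper name is invented here):

```c
#include <stdint.h>

/* Pack an 8-byte x86 segment descriptor.  The 20-bit limit and 32-bit
   base are split across fields; with the G flag set, the limit counts
   4 KiB pages, so limit 0xFFFFF spans the full 4 GiB. */
uint64_t make_descriptor(uint32_t base, uint32_t limit,
                         uint8_t access, uint8_t flags) {
    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFFu);                 /* limit 15:0  */
    d |= (uint64_t)(base & 0xFFFFFFu) << 16;          /* base 23:0   */
    d |= (uint64_t)access << 40;                      /* type/DPL/P  */
    d |= (uint64_t)((limit >> 16) & 0xFu) << 48;      /* limit 19:16 */
    d |= (uint64_t)(flags & 0xFu) << 52;              /* G/D bits    */
    d |= (uint64_t)((base >> 24) & 0xFFu) << 56;      /* base 31:24  */
    return d;
}
```

A flat ring-0 code segment is then `make_descriptor(0, 0xFFFFF, 0x9A, 0xC)`, i.e. the familiar 0x00CF9A000000FFFF, and the matching data segment uses access byte 0x92. This is the shape of descriptor a flat-model OS loads once and mostly forgets about.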
Aug 10, 2011 at 14:35 history answered parsifal CC BY-SA 3.0