Raffzahn

It's a different view on the same data, for a different purpose. Going by bit value instead of position/offset is a point of view more common with programmers operating on fixed-size integers. Since they are the majority of today's users (*1), this view is simply seen more often than any of the other ways.

  • Bit Position (Offset)

    Here the naming is based on the offset from the start of a multi-bit unit, starting at zero (0). Quite handy to describe a position within a larger unit, especially when those units have varying sizes. A good example is the IBM /360 (*2), with instructions of 2, 4, or 6 bytes in size. By naming bits according to position, an opcode always starts at offset zero:

    [image]

    (Compiled from the 1967 Student Text to IBM System/360 Architecture)

    Likewise, a memory operand will always have its first offset at 24 and its second (for memory-to-memory instructions) at 40. Of course, the same goes for data formats:

    [image]

    (Compiled from various figures of IBM System/360 Principles of Operation)

    While the IBM /360 did set the grounds for many things, it was not alone in using this scheme, as this excerpt for the 48-bit Telefunken TR-4 of 1959 shows (*3):

    [image]

    (Taken from the handouts of a February 1959 lecture about the TR-4 structure)

    TI followed those conventions with its 990 series of minicomputers (of which the 9900 is a single-chip implementation (*4)):

    [image]

    (Compiled from various figures of the October 1974 Preliminary 990 Computer Reference Manual)

  • Bit Count (Number)

    Essentially the same as before, with all the benefits, except it's counting, so the first one is noted as 1 instead of 0 :)

    IBM did so when counting them.

  • Bit Value

    That's something that only works if that multi-bit unit comes with a way to detect its lowest (last) item, so it can get a value of 2^0, commonly expressed as 0 by dropping the base.

    A seemingly good early example would be HP's 2100 series of minicomputers. Except they didn't pull it through all the way like IBM did: they only numbered the bits in a word right to left. But they stayed with that right-to-left order when numbering the bytes in a word and the words in a double word ... the latter being easy for a little-endian machine :))

    [image]
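The offset scheme described above can be sketched in a few lines (a minimal illustration; the `field` helper is my own, while the RR-format layout follows the /360 figures referenced in the text):

```python
# Offset-based (position) numbering, IBM /360 style: bit 0 is the most
# significant bit of the unit, so a field is located purely by its offset
# and length, regardless of the unit's total width.

def field(word, offset, length, width):
    """Extract `length` bits starting at bit `offset` (offset 0 = MSB)."""
    shift = width - offset - length        # distance of the field's LSB from the right
    return (word >> shift) & ((1 << length) - 1)

# An RR-format /360 instruction is 16 bits: opcode at offsets 0-7,
# R1 at 8-11, R2 at 12-15. Example: AR R6,R7 assembles to 0x1A67.
insn = 0x1A67
assert field(insn, 0, 8, 16) == 0x1A   # opcode always starts at offset zero
assert field(insn, 8, 4, 16) == 6      # R1
assert field(insn, 12, 4, 16) == 7     # R2
```

Note how the same helper works unchanged for 2-, 4-, or 6-byte instructions: only `width` changes, the opcode stays at offset zero.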

Or one could again see it as based on a different point of view: a versatile (hardware) system designer - like TI - will usually try to describe his design in abstract, generic terms, so different designs can be compared with ease, while a programmer working with the units the designer gave him may prefer a less generic view.

From a programmer's view, the first two are about indexing a bit array (*5), while the third is about assigning values within an integer. That, of course, only works in a narrowed-down environment where the size of that integer is known.

For why one or the other is used, many arguments can be made - one is the programmer's view, which, and that may surprise some, may or may not coincide with the hardware used. A great recent (*6) example here is the PowerPC with its fluid endianness: the hardware bit (and byte) order within a word may or may not be the same as the one seen by the programmer. Thus one needs a way to number bits in a more abstract way.

That, and the mentioned HP 2100, hints at a possible reason why so much modern documentation goes for value-based numbering: the evolution of modern machines and their understanding went through a bottleneck, much like the Indian tiger, namely the invention of very simple byte-oriented microprocessors. Where historical machines had serial data formats, byte sizes smaller than machine words, or byte-sized addressing but big-endian order for larger data items, the very simple minis and even simpler micros cut it all down to the byte as the basic unit.

That byte, being the CPU word of those micros and universally fixed at 8 bits (*7), was no longer a subdivision but seen as a monolithic building block for larger structures of words, which in turn are extensions of that byte - hiding most of that history, and the reasons to count differently.
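The split between indexing a bit array and valuing bits in an integer can be sketched as follows (a minimal sketch; the 16-bit width and the function names are assumptions of mine):

```python
# Value-based numbering only works when the unit's size is known,
# which is exactly the "narrowed-down environment" mentioned above.
WIDTH = 16

def pos_to_value(pos):
    """Offset/count style (0 = most significant bit) -> value style
    (bit n carries weight 2**n, so 0 = least significant bit)."""
    return WIDTH - 1 - pos

def value_to_pos(n):
    """The mapping is symmetric, hence its own inverse."""
    return WIDTH - 1 - n

# Position 0 (the MSB) is value-bit 15; testing it as an integer value:
msb_set = lambda word: bool(word & (1 << pos_to_value(0)))
assert pos_to_value(0) == 15 and value_to_pos(0) == 15
assert msb_set(0x8000) and not msb_set(0x7FFF)
```

Change `WIDTH` and every value-style number shifts, while offset-style numbers stay put - which is why offset numbering suits variable-size units and value numbering suits fixed-size integers.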

*1 - Plus the primitive nature of some so-called HLLs. Never underestimate the influence dumb tools have: 'If all you have is a hammer, then all you see is a nail.'

*2 - Yes, the very design that laid the groundwork for next to everything we see as canon today - including 8-bit bytes and bit numbering :))

*3 - Note that the footnote clarifies that character (and bit) order is different on tape.

*4 - Which in turn not only set the nomenclature for the 9900 manuals, but had its later models based on exactly that chip.

*5 - Divided by the good old discussion about an array starting at ZERO or ONE (Hello BASIC).

*6 - I.e. 1990s instead of the 1950s and 60s.

*7 - This byte orientation is so canonical today that many questions on RC.SE show a total lack of understanding that not only could there be different byte sizes, but also that there were machines without bytes at all. Even more, everything is seen as bytes, no matter what medium or format, as if there were nothing else but bytes.
