I'm surprised that the following hasn't been stated yet. All of the answers given so far seem to be "after the fact" explanations of something that is really based on the way people built most (not all) early computers as binary machines. 

To keep things compact here, I'll assume we are building a 4-bit (nibble-based) machine. We don't want to use more binary components to build a nibble than we need, because we are cheap, so we use four bistable components (relays, vacuum tubes, transistors). We think of the two states as, perhaps, zero and one rather than, say, red and green.
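If it helps to see the count, here is a tiny Python sketch (my own illustration, not anything from the original machines) enumerating the combinations of four two-state components:

```python
from itertools import product

# Four two-state components give 2**4 = 16 distinct combinations,
# from (0, 0, 0, 0) up to (1, 1, 1, 1).
states = ["".join(bits) for bits in product("01", repeat=4)]

print(len(states))            # 16
print(states[0], states[-1])  # 0000 1111
```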

So we have (0000) through (1111) as the possible combinations of the four components of a nibble. We can interpret them any way we like. Suppose we want to interpret them as integers. How shall we assign the different codes to integers? We could let (0000) represent 42, a fundamental universal constant, and (0001) represent 3, an approximation to pi, and so on, but we realize pretty soon that any computations we want to do with such an encoding would be pretty complex - and complexity in a machine costs money. We are cheap, remember.
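To make that cost concrete, here is a small Python sketch of my own (the arbitrary code table is made up, purely for illustration): decoding an arbitrary assignment needs a full lookup table, while decoding a binary assignment falls straight out of the bit positions.

```python
# Hypothetical arbitrary assignment of patterns to integers
# (only a few entries shown; a real one would need all sixteen).
arbitrary = {"0000": 42, "0001": 3, "0010": 7}

def decode_arbitrary(code):
    # Needs the whole table wired into the machine: one entry per pattern.
    return arbitrary[code]

def decode_binary(code):
    # Falls out of the bit positions: a handful of gates, no table at all.
    return sum(int(bit) << place for place, bit in enumerate(reversed(code)))

print(decode_arbitrary("0000"))                       # 42
print(decode_binary("0000"), decode_binary("1111"))   # 0 15
```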

So we notice that the codings are actually binary numbers. Well, almost nobody had used binary numbers before this, so we start to think about them seriously now that they have an application. I note that some early computers were actually decimal, not binary, but their designers recognized that decimal, while convenient, wasn't cheap.

Now that we are using binary numbers, not just a binary encoding, it becomes "obvious" that the codes represent zero through fifteen, not one through sixteen. Using (0000) to represent 1 just seems dumb at this level - and at this time.

Now, a bit later, we want to index an "array" of nibbles. How shall we do it? Well, we assign index numbers to the individual cells. Suppose we have four such cells. We could use (0001) through (0100) to index them (1 through 4), but now (a-ha) if we have sixteen nibbles to index and we start with one, we can only count to fifteen without either using another nibble or letting (0000) represent the last (sixteenth) cell. That seems dumb, so we (a-ha) index from zero. Now we can index sixteen cells with sixteen codes, and it is cheap and pretty natural. The only cost is a bit of confusion in the minds of beginning programmers, but others will pay that cost and our machines can stay cheap.
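Here is the same argument as a short Python sketch (again my own illustration, not how any particular machine did it): a 4-bit code reaches all sixteen cells when we index from zero, but only fifteen of them when we index from one.

```python
# Sixteen cells ("nibbles") to address with a single 4-bit code.
cells = [f"cell {n}" for n in range(16)]

# Zero-based: the sixteen codes (0000)..(1111) cover every cell exactly once.
zero_based = {format(code, "04b"): cells[code] for code in range(16)}
assert len(zero_based) == 16

# One-based: the codes (0001)..(1111) reach only fifteen cells; the
# sixteenth needs either a fifth bit or the odd convention that (0000)
# means "the last cell".
one_based = {format(code, "04b"): cells[code - 1] for code in range(1, 16)}
assert len(one_based) == 15
```

Sixteen codes, sixteen cells, no leftovers and no extra bit.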

No contest. Index from zero.

The other answers here explore why the arithmetic inside the machine is cheap this way, so I won't repeat it. But note that all of this happened before any languages were invented, other than the simplest machine coding with no abstraction facilities at all. It was just economics and engineering. Make it simple, keep it cheap.
