
Inspired by this question, I was trying to find out what exactly happens there (my answer was more intuitive, but I cannot exactly understand the why of it).

I believe it comes down to this (running 64 bit Python):

>>> sys.maxint
9223372036854775807
>>> float(sys.maxint)
9.2233720368547758e+18

Python uses the IEEE 754 floating-point representation, which effectively has 53 bits for the significand. However, as far as I understand it, the significand in the above example would require 57 bits (56 if you drop the implied leading 1) to be represented. Can someone explain this discrepancy?
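For what it's worth, a quick check (a sketch, assuming a 64-bit Python 2 build, where sys.maxint is 2**63 - 1) shows that the conversion does not round-trip:

>>> import sys
>>> int(float(sys.maxint)) == sys.maxint
False
>>> int(float(sys.maxint)) == 2**63
True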

2 Comments

  • Remember, floating point numbers in Python are stored in binary, not decimal. Commented Mar 25, 2011 at 11:38
  • @Gabe. Exactly: what you see is not what you get. float(sys.maxint) is a float, stored in binary internally, whose value is exactly 2**63, or sys.maxint + 1, or to be exact, 9223372036854775808.0. What's displayed when you type float(sys.maxint) is merely a decimal approximation to that floating-point value. Python never prints more than 17 significant digits for the repr of a floating-point value; to print the exact value here would require 19 significant digits. Commented Mar 25, 2011 at 14:40
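To make the last comment concrete, one way to see all 19 digits of the exact value (a sketch; %.19g simply asks the formatter for 19 significant digits):

>>> '%.19g' % float(sys.maxint)
'9223372036854775808'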

3 Answers


Perhaps the following will help clear things up:

>>> hex(int(float(sys.maxint)))
'0x8000000000000000L'

This shows that float(sys.maxint) is in fact a power of 2. Therefore, in binary its mantissa is exactly 1. In IEEE 754 the leading 1. is implied, so in the machine representation this number's mantissa consists of all zero bits.

In fact, the IEEE bit pattern representing this number is as follows:

0x43E0000000000000 

Observe that only the first three nibbles (the sign and the exponent) are non-zero. The significand consists entirely of zeroes. As such it doesn't require 56 (nor indeed 53) bits to be represented.
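If you want to reproduce that bit pattern yourself, one possible way is to go through the standard struct module (a sketch: '>d' packs the double as eight big-endian bytes, and '>Q' reinterprets those bytes as an unsigned 64-bit integer):

>>> import struct
>>> '0x%016X' % struct.unpack('>Q', struct.pack('>d', float(sys.maxint)))[0]
'0x43E0000000000000'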




You're wrong. It requires 1 bit.

>>> (9.2233720368547758e+18).hex()
'0x1.0000000000000p+63'

4 Comments

  • Very funny. That single 1 bit isn't much use without all the other zeros.
  • @David: Sure, but they're all zeros, and can remain so as far as the horizon and until the end of time.
  • @David: What's the difference between 1 and 1.000000000000000?
  • Hmm, I guess this is a case where less precision would be better - using fewer bits would erase the discrepancy. Not very practical, though.

When you convert sys.maxint to a float or double, the result is exactly 0x1p63, because the significand contains only 24 or 53 bits (including the implicit bit), so the trailing bits cause a round up. (sys.maxint is 2^63 - 1, and rounding it up produces 2^63.)

Then, when you print this float, some subroutine formats it as a decimal numeral. To do this, it calculates digits to represent 2^63. The fact that it is able to print 9.2233720368547758e+18 does not imply that the original number contains bits that would distinguish it from 9.2233720368547759e+18. It simply means that the bits in it do represent 9.2233720368547758e+18 (approximately). In fact, the next representable floating-point number in double precision is 9223372036854777856 (approximately 9.2233720368547778e+18), which is 2^63 + 2048. So the low 11 bits of these integers are not present in the double. The formatter merely displays the number as if those bits were zero.
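That 2048 spacing can be seen directly by nudging the bit pattern up by one unit in the last place (a sketch using struct; on Python 3.9+, math.ulp or math.nextafter would show the same thing):

>>> import struct
>>> bits = struct.unpack('>Q', struct.pack('>d', float(sys.maxint)))[0]
>>> struct.unpack('>d', struct.pack('>Q', bits + 1))[0] - float(sys.maxint)
2048.0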

