It mentions that "Limited-Precision Decimal" is "Basically the same as a IEEE 754 binary floating-point, except that the exponent is interpreted as base 10. As a result, there are no unexpected rounding errors. Also, this kind of format is relatively compact and fast, but usually slower than binary formats."
Is Python decimal implemented the same way?
No, Python's Decimal is an arbitrary-precision decimal. Conceptually, it's a pair of ints: one storing the significand, and one storing the power of ten. So, for example, Decimal('3.141592653589793238462643383') is stored as the pair of numbers (3141592653589793238462643383, -27), representing the number 3141592653589793238462643383 × 10⁻²⁷.
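You can see this pair directly with the as_tuple() method, which returns the sign, the significand's digits, and the exponent:

    >>> from decimal import Decimal
    >>> Decimal('3.141592653589793238462643383').as_tuple()
    DecimalTuple(sign=0, digits=(3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 6, 4, 3, 3, 8, 3), exponent=-27)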
As stated on the page you linked, Decimal “has the ability to increase the length of the significand (possibly also the exponent) as required”.
    >>> import decimal
    >>> from decimal import Decimal
    >>> Decimal(2).sqrt()
    Decimal('1.414213562373095048801688724')
    >>> decimal.getcontext().prec = 64
    >>> Decimal(2).sqrt()
    Decimal('1.414213562373095048801688724209698078569671875376948073176679738')
The advantage of this approach is that you can perform calculations to very high precision. The disadvantage is that storage of arbitrary-precision integers requires dynamic memory allocation and the overhead of running garbage collection (or manually calling free in C). Also, the more digits you use in a calculation, the more time it takes.
Limited-precision decimal uses a fixed-width format. For example, you could implement a decimal class with 1 bit for a sign, 50 bits to store a 15-decimal-digit significand, and 13 bits for an exponent (allowing exponents up to about ±4000), which can be stored in a 64-bit machine word. This approach gives less-precise calculations, but uses less memory and is faster compared to arbitrary precision.
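Here's a rough sketch of that layout in Python (a hypothetical packing of my own, not an existing library; the field widths and exponent bias follow the numbers above):

    # Hypothetical 64-bit layout: 1 sign bit | 50-bit significand | 13-bit biased exponent.
    EXP_BIAS = 4096  # maps exponents in [-4096, 4095] to unsigned 13-bit values

    def pack_fixed_decimal(sign, significand, exponent):
        """Pack the three fields into one 64-bit machine word."""
        assert sign in (0, 1)
        assert 0 <= significand < 10**15        # 15 decimal digits fit in 50 bits
        assert -EXP_BIAS <= exponent < EXP_BIAS
        return (sign << 63) | (significand << 13) | (exponent + EXP_BIAS)

    def unpack_fixed_decimal(word):
        """Recover (sign, significand, exponent) from the packed word."""
        sign = word >> 63
        significand = (word >> 13) & ((1 << 50) - 1)
        exponent = (word & 0x1FFF) - EXP_BIAS
        return sign, significand, exponent

    # 123450000000000 × 10⁻¹³ = 12.345, stored in a single value that fits in 64 bits
    word = pack_fixed_decimal(0, 123450000000000, -13)
    assert unpack_fixed_decimal(word) == (0, 123450000000000, -13)
    assert word < 2**64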
Anyhow, the arithmetic operations (+, -, *, /) on the Decimal class are implemented in such a way as to automatically find a common exponent for addition and subtraction (e.g., treating 1.23 + 45.6789 as 1.2300 + 45.6789 so the decimal places match), add/subtract exponents for multiplication and division, and round the significand if the maximum number of significant digits has been exceeded. This is conveniently hidden from the programmer, but it does take CPU cycles.
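For instance (using an artificially small precision so the rounding step is easy to see):

    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec = 6
    >>> Decimal('1.23') + Decimal('45.6789')      # exponents aligned before adding
    Decimal('46.9089')
    >>> Decimal('123456') + Decimal('0.789')      # exact sum has 9 digits, rounded to 6
    Decimal('123457')
    >>> Decimal('1.5E+3') * Decimal('2E-1')       # coefficients multiplied, exponents added
    Decimal('3.0E+2')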
why is it slower
See the previous section where I talked about scaling exponents and rounding to a specified number of significant digits. This must be done in software, whereas float calculations are done in hardware (unless you're working on a retrocomputer or embedded system without a floating-point unit). For Python-style arbitrary-precision decimal, there's the additional slowdown of having to dynamically allocate and deallocate memory for the numbers.
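An informal way to measure the difference yourself (the exact timings depend on your machine and Python version, so I won't quote any):

    import timeit

    # A multiply-add done with hardware binary floats...
    float_time = timeit.timeit(
        'x * y + z',
        setup='x, y, z = 1.1, 2.2, 3.3',
        number=1_000_000)

    # ...versus the same operation done in software with Decimal.
    decimal_time = timeit.timeit(
        'x * y + z',
        setup="from decimal import Decimal; "
              "x, y, z = Decimal('1.1'), Decimal('2.2'), Decimal('3.3')",
        number=1_000_000)

    print(f'float:   {float_time:.3f} s')
    print(f'Decimal: {decimal_time:.3f} s')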
and why isn't this representation always preferred over the IEEE 754 implementation?
Because often, having “exact” results is meaningless.
- Physical measurements have uncertainty. A man who weighs “200 pounds” may actually weigh between 197 and 203 pounds depending on how much food and water he happens to have in his body at the time. Even copies of the standard kilogram, specifically engineered to be as accurate as possible, have tolerances on the order of 100 µg, limiting measurements to 10 significant digits of accuracy. That makes IEEE 754 double precision (53 significant bits ≈ 15.95 significant digits) more than enough.
- Decimal is only exact for decimal numbers, i.e., rationals where the denominator is a product of 2's and 5's. It can't exactly represent non-decimal fractions like 1/3 or 1/29. If you need exact arbitrary fractions, there's another class (fractions.Fraction) for that; see the example below.
- Irrational-valued functions like sqrt, sin, or log can't be stored exactly (unless you have a full computer algebra system), so they must be approximated.
In these cases, there's little practical advantage to Decimal's higher precision. Using floats instead saves memory and CPU time, which can be noticeable if you're working with arrays of millions of numbers.
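For example, dividing by 3 gets rounded with Decimal, while fractions.Fraction keeps the result exact:

    >>> from decimal import Decimal
    >>> from fractions import Fraction
    >>> Decimal(1) / Decimal(3)            # rounded to the default 28 significant digits
    Decimal('0.3333333333333333333333333333')
    >>> Fraction(1, 3) + Fraction(1, 29)   # exact rational arithmetic
    Fraction(32, 87)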
Finally, why does using the exponent as base 10 prevent unexpected rounding errors?
Using a base-ten exponent doesn't prevent all rounding errors. For example,
    >>> Decimal(1) / 3 * 3
    Decimal('0.9999999999999999999999999999')
    >>> Decimal(2).sqrt() ** 2
    Decimal('1.999999999999999999999999999')
However, it prevents the particular class of rounding errors that result from float not exactly representing decimal fractions, e.g., that 0.01 is really 0x147AE147AE147B × 2⁻⁵⁹ = 0.01000000000000000020816681711721685132943093776702880859375. This tends to show up in financial calculations. For example, calculating sales tax (which is 8.25% where I live) on a $2 item:
    >>> 2.00 * 0.0825
    0.165
    >>> round(_, 2)
    0.17
    >>> Decimal('2.00') * Decimal('0.0825')
    Decimal('0.165000')
    >>> round(_, 2)
    Decimal('0.16')
The first round call is incorrect because 0.165 is really 0.1650000000000000077715611723760957829654216766357421875, which rounds up to 17 cents instead of down to the intended 16 cents (assuming the round-half-even rule).
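You can see these exact values for yourself by converting the floats to Decimal, which preserves the binary value exactly:

    >>> from decimal import Decimal
    >>> Decimal(0.01)            # the exact value of the binary float 0.01
    Decimal('0.01000000000000000020816681711721685132943093776702880859375')
    >>> Decimal(2.00 * 0.0825)   # the exact value handed to the first round() call
    Decimal('0.1650000000000000077715611723760957829654216766357421875')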