
Is there a robust way to test for equality of floating point numbers, or to generally ensure that floats that should be equal actually do equal each other to within the float's precision? For example, here is a distressing situation:

>>> np.mod(2.1, 2) == 0.1
False

I realize that this occurs because of the floating point error:

>>> np.mod(2.1, 2)
0.10000000000000009

I am familiar with the np.isclose(a, b, rtol, atol) function, but using it makes me uncomfortable since I might get false positives, i.e. being told that things are equal when they really are not. There is also the note in the docs that np.isclose(a, b) may differ from np.isclose(b, a), which is even worse.

I am wondering, is there a more robust way to determine equality of floats, without false positives/false negatives, without a==b being different from b==a and without having to mess around with tolerances? If not, what are the best practices/recommendations for setting the tolerances to ensure robust behavior?
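To make the asymmetry concern concrete: np.isclose scales its relative tolerance by the second argument only, while the standard library's math.isclose scales by the larger of the two, which makes it symmetric. A minimal sketch, using an exaggerated rtol so the asymmetry is visible:

```python
import math
import numpy as np

# np.isclose checks |a - b| <= atol + rtol * |b|: the tolerance is scaled
# by the second argument only, so swapping arguments can change the result.
print(np.isclose(10, 14, rtol=0.3, atol=0))  # True:  4 <= 0.3 * 14
print(np.isclose(14, 10, rtol=0.3, atol=0))  # False: 4 >  0.3 * 10

# math.isclose scales by max(|a|, |b|), so it is symmetric by construction:
print(math.isclose(10, 14, rel_tol=0.3))  # True
print(math.isclose(14, 10, rel_tol=0.3))  # True
```

With the default rtol=1e-5 the window of value pairs where the order matters is far narrower, but it does exist.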

  • With epsilon and absolute value: abs(np.mod(2.1, 2) - 0.1) < 1e-6? (returns True) Commented May 19, 2017 at 19:20
  • Use integers if you want to be exact. Otherwise you can sort the two values before passing them to isclose so that the call is symmetric Commented May 19, 2017 at 19:22
  • are equal when they really should not be is a puzzling phrase. What's your logic for determining whether they should be equal or not? Other than value. The derivation history? Commented May 19, 2017 at 19:58
  • You'll need to give examples where isclose (or alternatives) does not work (to your satisfaction). Otherwise you'll get downvotes and close votes. Commented May 19, 2017 at 20:20
  • The obvious example is a=1e-8; b=2e-8; np.isclose(a,b) --> True. To which you might say "Use np.isclose(a, b, atol=1e-9) --> False." But then, of course, a=1e-9; b=2e-9 returns True again. Also, as I've already mentioned isclose is not a symmetric relation according to the docs (I haven't yet found an example of it, but that doesn't mean that it will never occur). Commented May 19, 2017 at 21:03
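The false positive in that last comment comes from the default absolute tolerance, atol=1e-8, which swallows any difference of that magnitude near zero. A short sketch of both the failure and the usual remedy:

```python
import numpy as np

# With defaults (rtol=1e-5, atol=1e-8), |1e-8 - 2e-8| = 1e-8 <= atol,
# so the values compare as "close" even though b is twice a.
print(np.isclose(1e-8, 2e-8))          # True

# Removing the absolute term makes the comparison purely relative:
print(np.isclose(1e-8, 2e-8, atol=0))  # False
```

Setting atol=0 is the common recommendation when none of your values are legitimately zero; there is no single atol that works at every scale.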

2 Answers


You stated that you want the check to return True if their infinite-precision forms are equal. In that case you need to use an infinite-precision data structure. For example fractions.Fraction:

>>> from fractions import Fraction
>>> Fraction(21, 10) % 2 == Fraction(1, 10)
True

There is also (although slow!) support for arrays containing Python objects:

>>> import numpy as np
>>> arr = np.array([Fraction(1, 10), Fraction(11, 10), Fraction(21, 10),
...                 Fraction(31, 10), Fraction(41, 10)])
>>> arr % 2 == Fraction(1, 10)
array([ True, False,  True, False,  True], dtype=bool)

You just have to make sure you don't lose the infinite-precision objects (which isn't easy for several numpy/scipy functions).
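One gotcha worth spelling out: constructing a Fraction directly from a float captures the float's exact binary value, not the decimal you typed, so the rounding error sneaks back in. Construct from strings or integer pairs instead. A minimal sketch:

```python
from fractions import Fraction

# Fraction(2.1) snapshots the binary double nearest to 2.1,
# which is not exactly 21/10:
print(Fraction(2.1) == Fraction(21, 10))        # False

# A string (or an integer pair) preserves the intended decimal value:
print(Fraction('2.1') % 2 == Fraction('0.1'))   # True
```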

In your case you could even just operate on integers:

>>> 21 % 20 == 1
True
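If your inputs are decimals with a known number of places, you can do that scaling yourself: multiply into integers, round away the tiny float error, and compare exactly. A sketch, assuming one decimal place:

```python
# Work in tenths: 2.1 -> 21, 2 -> 20, 0.1 -> 1.
# round() absorbs the float representation error before the int conversion.
a = int(round(2.1 * 10))   # 21
print(a % 20 == 1)         # True
```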


The sympy library has some support for this kind of thing.

from sympy import *
a, b = GoldenRatio**1000/sqrt(5), fibonacci(1000)
print(float(a))  # prints 4.34665576869e+208
print(float(b))  # prints 4.34665576869e+208
print("Floats: ", float(a) - float(b))  # prints 0.0
print("More precise: ", N(fibonacci(100) - GoldenRatio**100/sqrt(5)))  # prints -5.64613129282185e-22

N allows you to specify the precision that you'd like (with some caveats). Additionally, sympy has the Rational class. For more info, see here.
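Applied to the question's original example, Rational gives the same exact behavior as fractions.Fraction. A minimal sketch, assuming sympy is installed:

```python
from sympy import Rational

# Rational arithmetic is exact, so the modulo comes out as exactly 1/10:
print(Rational(21, 10) % 2 == Rational(1, 10))  # True
```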

Note that the floating point standard uses a fixed number of bits for computation, which is what keeps both processing speed and memory footprint manageable. If you really need this kind of precision, especially if you're looking for exact equality, you should consider using a symbolic solver (like Mathematica, for example).

Python can do it, but it wasn't designed to do it.
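For completeness, the standard library's decimal module also handles this case exactly, provided you construct the values from strings rather than floats. A minimal sketch:

```python
from decimal import Decimal

# Decimal('2.1') stores exactly 2.1 in base ten, so the modulo is exact:
print(Decimal('2.1') % 2 == Decimal('0.1'))  # True
```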

2 Comments

I disagree with "Python can do it, but it wasn't designed to do it.", decimal.Decimal and fractions.Fraction show that Python has built-in modules designed especially to do it. However, if you meant "NumPy can do it, but it wasn't designed to do it" I would agree.
It certainly has built-ins that can do it, but I'd have to disagree that Python was designed around infinite/arbitrary-precision arithmetic. Mathematica was, by contrast: numerical computations in Mathematica support arbitrary precision by default. NumPy has no skin in the game whatsoever. Agreed on that point!
