Is there a robust way to test for equality of floating point numbers, or to generally ensure that floats that should be equal actually do equal each other to within the float's precision? For example, here is a distressing situation:
```
>>> np.mod(2.1, 2) == 0.1
False
```

I realize that this occurs because of floating point error:
```
>>> np.mod(2.1, 2)
0.10000000000000009
```

I am familiar with the `np.isclose(a, b, rtol, atol)` function, but using it makes me uncomfortable, since I might get false positives, i.e. be told that things are equal when they really should not be. There is also the note in the docs that `np.isclose(a, b)` may differ from `np.isclose(b, a)`, which is even worse.
I am wondering: is there a more robust way to determine equality of floats, without false positives/false negatives, without `a == b` being different from `b == a`, and without having to mess around with tolerances? If not, what are the best practices/recommendations for setting the tolerances to ensure robust behavior?
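For reference, two approaches that address parts of this (a sketch, not a definitive recipe): the standard library's `math.isclose`, whose test `abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)` is symmetric by construction, and ULP-based comparison, which asks whether two floats are within a few representable steps of each other. `within_ulps` below is an illustrative helper I am defining here, not a library function:

```python
import math
import numpy as np

x = np.mod(2.1, 2)  # 0.10000000000000009

# math.isclose uses max(|a|, |b|) as the reference scale, so it is
# symmetric: swapping the arguments cannot change the answer.
print(math.isclose(x, 0.1))  # True
print(math.isclose(0.1, x))  # True, same answer either way

def within_ulps(a, b, max_ulps=8):
    """True if a and b differ by at most max_ulps representable doubles.
    np.spacing(v) is the gap from v to the next representable float."""
    return abs(a - b) <= max_ulps * np.spacing(max(abs(a), abs(b)))

# x and 0.1 are 6 ulps apart, so this passes with max_ulps=8:
print(within_ulps(x, 0.1))
```

The ULP test is attractive because it is phrased in terms of "the float's precision" itself rather than an arbitrary decimal tolerance, but you still have to pick `max_ulps`, so it does not eliminate the judgment call, it only moves it to more natural units.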
**Comments:**

- `abs(np.mod(2.1, 2) - 0.1) < 1e-6`? (returns `True`)
- You could combine two calls to `isclose` so that they are symmetric.
- "are equal when they really should not be" is a puzzling phrase. What's your logic for determining whether they should be equal or not, other than their value? The derivation history?
- Explain why `isclose` (or alternatives) does not work (to your satisfaction). Otherwise you'll get downvotes and close votes.
- `a=1e-8; b=2e-8; np.isclose(a, b)` --> `True`. To which you might say "use `np.isclose(a, b, atol=1e-9)` --> `False`". But then, of course, `a=1e-9; b=2e-9` returns `True` again. Also, as I've already mentioned, `isclose` is not a symmetric relation according to the docs (I haven't yet found an example of it, but that doesn't mean it will never occur).
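To make that last comment concrete: the default `atol=1e-8` dominates the test whenever the values themselves are of order `1e-8` or smaller, so any fixed absolute tolerance is only meaningful at one scale. Setting `atol=0` turns `np.isclose` into a purely relative test that behaves identically at every scale (with the caveat that with `atol=0`, nothing compares "close" to exactly `0.0`):

```python
import numpy as np

# Default atol=1e-8 swamps the comparison for tiny numbers:
print(np.isclose(1e-8, 2e-8))             # True: 1e-8 <= 1e-8 + 1e-5 * 2e-8

# Hand-tuning atol just moves the problem one decade down:
print(np.isclose(1e-9, 2e-9, atol=1e-9))  # True again

# atol=0 gives a purely relative test, consistent across scales:
for scale in (1e-8, 1e-9, 1e-15):
    print(np.isclose(scale, 2 * scale, atol=0))  # False at every scale
```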