
I'm having the strangest behavior with an object generated by numpy.arange:

for i in numpy.arange(xo, xn+h, h): xs.append(float(i)) 

In this case, xo=1, xn=4, h=0.1.

Now, I expected xs[-1] to be exactly equal to 4.0 == float(4). However, I get the following:

>>> foo = xs[-1]
>>> foo == float(4)
False
>>> float(foo) == float(4)
False
>>> foo
4.0
>>> type(foo)
<type 'float'>
>>> int(sympy.ceiling(4.0)), int(sympy.ceiling(foo))
(4, 5)

What on earth is happening here?

Placing print type(i) in the for loop prints <type 'numpy.float64'>. Perhaps something is going on during the float(i) cast? Using numpy.asscalar doesn't change anything.

Using math.ceil(foo) instead of sympy.ceiling(foo) gives the same result (and that's the part I actually need to work).
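For reference, the mismatch is reproducible in a few lines (a minimal reproduction of the setup above, with xo, xn, and h inlined; the trailing digits come from the accumulated steps):

```python
import math

import numpy as np

# Rebuild the list from the question: xo=1, xn=4, h=0.1.
xs = [float(i) for i in np.arange(1, 4 + 0.1, 0.1)]
foo = xs[-1]

print(foo == 4.0)      # False: foo is a hair above 4.0
print(math.ceil(foo))  # 5
```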

  • numpy.float64 vs sympy.float? If you use sympy, you may run into problems like that, I guess. Commented Oct 22, 2013 at 7:32
  • @usethedeathstar I used the built-in float() for casting. Sympy was only used on the last line of the console I/O above. And, as I said, using math.ceil instead of sympy.ceiling gives the same result. Commented Oct 22, 2013 at 7:34
  • You should avoid np.arange with floating point numbers. Rather use np.linspace. Commented Oct 22, 2013 at 10:11
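As the last comment suggests, numpy.linspace avoids the issue because it computes each point from the two endpoints instead of accumulating steps, so the final element is exactly the stop value (a sketch using the question's values):

```python
import numpy as np

xo, xn, h = 1, 4, 0.1
n = int(round((xn - xo) / h)) + 1  # 31 evenly spaced points

xs = [float(x) for x in np.linspace(xo, xn, n)]
print(xs[-1] == 4.0)  # True: linspace hits the endpoint exactly
```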

2 Answers

In [10]: for i in numpy.arange(xo, xn+h, h):
   ....:     xs.append(float(i))
   ....:

In [11]: xs
Out[11]:
[1.0, 1.1, 1.2000000000000002, 1.3000000000000003, 1.4000000000000004,
 1.5000000000000004, 1.6000000000000005, 1.7000000000000006, 1.8000000000000007,
 1.9000000000000008, 2.000000000000001, 2.100000000000001, 2.200000000000001,
 2.300000000000001, 2.4000000000000012, 2.5000000000000013, 2.6000000000000014,
 2.7000000000000015, 2.8000000000000016, 2.9000000000000017, 3.0000000000000018,
 3.100000000000002, 3.200000000000002, 3.300000000000002, 3.400000000000002,
 3.500000000000002, 3.6000000000000023, 3.7000000000000024, 3.8000000000000025,
 3.9000000000000026, 4.000000000000003]

This is why you do not get the result you wanted: due to limited floating-point precision, the last element is 4.000000000000003 rather than exactly 4.0, so the equality test cannot return True. It also explains why a rounding operation like ceiling gives five instead of four.

Edit: to check whether x and y are the same (within some margin of error), you could do the following, though I think there is (or should be) something already in Python that does this for you:

def isnear(x, y, precision=1e-5):
    return abs(x - y) < precision

Edit 2: or, as ali_m said:

numpy.allclose(x, y, atol=1e-5)
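The built-in the answer is guessing at does exist: since Python 3.5, math.isclose performs exactly this kind of tolerance-based comparison (numpy.isclose is the elementwise counterpart):

```python
import math

foo = 4.000000000000003  # the endpoint value from the transcript above

print(foo == 4.0)              # False: the bit patterns differ
print(math.isclose(foo, 4.0))  # True: within the default relative tolerance
```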

6 Comments

Well, it strikes me as odd that after casting with float(xs[-1]) you get just 4.0 as the output. What would the best approach be to get it right, then? round(i)?
@Alex I think there is something implemented in Python to check whether x is near y; if you implement it yourself, you could try abs(x-y) < something.
@Alex: Instead of testing f1 == f2, where f1 and f2 are both floating point numbers, it is usually best to test abs(f1-f2) < eps, where eps is an appropriately small-sized number. In this case, a value of eps anywhere between 0.01 and 0.0000000000001 would be fine.
@Alex Does this answer your question, or do you need more info on it?
For convenience you could also use np.allclose(x, y, atol=eps)

This is not really strange - it is just the way that floating point arithmetic works because it cannot represent most values exactly. If you do repeated computations in floating point (arange() here is adding 0.1 to 1 and then adding 0.1 to that sum another 29 times) and if the numbers you are dealing with are not exactly representable in floating point, you won't get an exact answer at the end of the computation. The best article to read on this is What Every Computer Scientist Should Know About Floating-Point Arithmetic.
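The accumulation described here is easy to see with plain Python floats, no numpy required; summing 0.1 ten times already misses 1.0, because the double closest to 0.1 is not exactly one tenth and each addition rounds again:

```python
x = 0.0
for _ in range(10):
    x += 0.1  # each step incurs a small rounding error

print(x)         # 0.9999999999999999
print(x == 1.0)  # False
```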
