I'm designing an algorithm in Python, knowing that I'll want to translate it to C later.
However, mathematical operations in Python might not yield the same results as in C: for instance, 4294967295 + 1 == 0 in C for unsigned 32-bit integers, but not with plain Python integers, which have arbitrary precision. Therefore, I should not use Python integers in my design.
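For reference, since plain Python integers never overflow, one way to mimic C's uint32_t without NumPy is to mask after every operation. This is just a minimal sketch (the helper name `u32` is mine, not a standard API):

```python
# Plain Python integers are arbitrary-precision, so emulate uint32_t
# wraparound by reducing modulo 2**32 after each operation.
U32_MASK = 0xFFFFFFFF  # 2**32 - 1

def u32(x):
    """Reduce a Python int to the range of a C uint32_t."""
    return x & U32_MASK

print(4294967295 + 1)       # plain Python: 4294967296
print(u32(4294967295 + 1))  # C uint32_t semantics: 0
```

This works, but having to wrap every intermediate result is exactly the kind of clutter I'd like NumPy to spare me.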
Can I safely and easily use NumPy to reproduce C behavior? That is, if I perform the usual operations (+, -, *, /, %, and casts between float and int) on arrays of type np.uint32 or np.float64 for instance, am I guaranteed (or can I obtain this guarantee somehow) to get the same results as a C program using uint32_t and double?
I'm only interested in C's officially defined behavior; anything that is allowed to depend on the compiler or processor in C may also differ in NumPy, as if NumPy were just another compiler/processor. I'm asking in particular because NumPy has a NaN value, which is not always available in C.
EDIT after comments:
I'm looking in particular at this set of operations: +, -, *, /, %, and casting from float to int or the other way around.
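For the casting case, my own quick check (an observation, not a guarantee I found documented) suggests NumPy's `astype` truncates toward zero for in-range values, which matches what a C cast from double to int32_t does:

```python
import numpy as np

# C's (int32_t) cast on a double truncates toward zero (C11 6.3.1.4).
# NumPy's astype appears to behave the same for in-range values.
a = np.array([-1.9, -1.1, 1.1, 1.9], dtype=np.float64)
print(a.astype(np.int32))  # [-1 -1  1  1]
# Out-of-range values are undefined behavior in C, and NumPy likewise
# gives no portable promise for them, so I exclude that case.
```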
I've looked through the NumPy documentation to no avail, and have run a few tests myself. For instance:
**TEST 1: uint32 overflow** (`(uint32_t) 4294967295 + (uint32_t) 1 == 0` in C)
It does not seem to work with NumPy scalars:

```
>>> import numpy
>>> a = numpy.uint32(4294967295)
>>> type(a)
<class 'numpy.uint32'>
>>> a += 1
>>> a
4294967296
>>> type(a)
<class 'numpy.int64'>
```

But it does with NumPy arrays:
```python
import numpy
a = numpy.array([4294967295], dtype='uint32')
a += 1
print(a)
print(a.dtype)
```

Output:

```
[0]
uint32
```

But this specific case gives me no guarantee that it always works with arrays.
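To probe this a bit more broadly, one can compare NumPy's uint32 array arithmetic against exact Python integer arithmetic reduced modulo 2**32. This is again only an empirical check on a few samples, not a documented guarantee:

```python
import numpy as np

# Compare uint32 array ops against exact Python arithmetic mod 2**32.
xs = [0, 1, 2, 4294967294, 4294967295]
a = np.array(xs, dtype=np.uint32)
for y in (1, 2, 3, 4294967295):
    b = np.uint32(y)
    # NumPy integer arrays wrap silently; check +, -, * all agree
    # with modular arithmetic on these sample values.
    assert list(a + b) == [(x + y) % 2**32 for x in xs]
    assert list(a - b) == [(x - y) % 2**32 for x in xs]
    assert list(a * b) == [(x * y) % 2**32 for x in xs]
print("uint32 +, -, * wrap modulo 2**32 on all these samples")
```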
**TEST 2: negative integer division**

`-1 / 2 == 0` in C for int32.
But in "plain" NumPy:

```python
import numpy as np
two = np.int64(2)
mone = np.int64(-1)
print(mone / two)
print(mone // two)
```

gives:

```
-0.5
-1
```

I'm wondering whether there is some kind of "switch" in NumPy, or operators I could use, so that NumPy would give me 0 in the case above, for instance.
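I don't know of a global switch, but one workaround I've sketched (the helper names `c_div` and `c_mod` are mine) is to rebuild C's truncating division and remainder from NumPy's floor division, which keeps everything in exact integer arithmetic:

```python
import numpy as np

def c_div(a, b):
    """Integer division rounding toward zero, as C's / does (C99+)."""
    q, r = np.divmod(a, b)  # NumPy floors the quotient
    # Floor and truncation differ only when the exact quotient is
    # negative and inexact; bump the quotient one step toward zero then.
    return q + ((r != 0) & ((a < 0) != (b < 0)))

def c_mod(a, b):
    """Remainder taking the sign of the dividend, as C's % does."""
    return a - c_div(a, b) * b

print(c_div(np.int64(-1), np.int64(2)))  # 0, as in C
print(c_mod(np.int64(-7), np.int64(3)))  # -1, as in C
```

The same functions work elementwise on arrays, since they only use NumPy ufuncs.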