As for your second question, how to get 0.00 in Python: if you insist:
```python
>>> import struct
>>> def misinterpret_int_as_double(n):
...     int_bytes = struct.pack('i', n)
...     padding = b'\x00' * (struct.calcsize('d') - struct.calcsize('i'))
...     return struct.unpack('d', int_bytes + padding)[0]
...
>>> misinterpret_int_as_double(-145)
2.1219957193e-314
```
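Feeding that misread value back through Python's own string formatting reproduces the `0.00` that the buggy C program prints. A self-contained check (redefining the same helper):

```python
import struct

def misinterpret_int_as_double(n):
    # Pack the int into 4 bytes, zero-pad up to 8, and reread it as a double.
    int_bytes = struct.pack('i', n)
    padding = b'\x00' * (struct.calcsize('d') - struct.calcsize('i'))
    return struct.unpack('d', int_bytes + padding)[0]

# The misread value is a tiny positive subnormal, which %.2f rounds to zero.
result = '%.2f' % misinterpret_int_as_double(-145)
print(result)  # 0.00
```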
This is essentially what your C code is doing behind the scenes. C doesn't do any type-checking (at compile-time or run-time) on the arguments passed in the ... part of a call to a varargs function. What happens is that the memory that stores printf's arguments contains:
- A pointer to the string literal "%.2lf".
- The bytes representing the number -145. (On x86-32 or x86-64, these are the 4 bytes 6F FF FF FF.)
- Some garbage data. (In the Python code above, this is assumed to be all zeros, but in your C program it need not be.)
The printf function sees the %.2lf specifier and expects a double. So it interprets the next eight bytes, 6F FF FF FF xx xx xx xx (where xx = garbage), as one. For about 1/4 of the possible values of the garbage (including 00 00 00 00), the resulting number is non-negative and small enough to round to zero.
Note that I've assumed a bunch of stuff: that you have a little-endian system with a 4-byte int and an 8-byte double, that function arguments are passed in ascending order in memory, and that your code doesn't segfault. YMMV on other hardware/OS/compiler combinations. That's how it is with undefined behavior.
Python works differently, because it's strongly typed. If you pass the "wrong" type to str's % operator, it will automatically convert the operand to the correct type by calling the appropriate magic method (__float__, __str__, __int__, or whatever, depending on the format specifier).
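For example, %f converts any operand that defines __float__. The Meters class below is just a hypothetical illustration of that hook:

```python
class Meters:
    """Hypothetical example class; %f formatting will call __float__ on it."""
    def __init__(self, value):
        self.value = value

    def __float__(self):
        return float(self.value)

a = '%.2f' % -145        # the int is converted to a float first
b = '%.2f' % Meters(3)   # __float__ is invoked on the object
print(a)  # -145.00
print(b)  # 3.00
```

So -145 comes out as -145.00, not as reinterpreted bytes.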
The printf family is notoriously error-prone this way. To answer your second question: you can't "trick" Python like that; there's no way to get Python's formatting to treat -145 as anything other than what it is: an integer.