
I know this is a simple question, but it came up while I was coding and I am wondering how it works. So, my first question is: when printf is given an integer like below but is expecting a %f float value, why does it always output 0.000000? I am running this with GCC in a Linux terminal.

#include <stdio.h>

int main() {
    int a = 2, b = 5, result = 0;
    result = b / a * a;
    printf("%f\n", result);   // wrong specifier: %f with an int argument
}
// The above printf statement outputs 0.000000 every time.

Then when I use the code below and give printf a double when it is expecting an %i integer value, the output is always random/garbage.

#include <stdio.h>

int main() {
    double a = 2, b = 5, result = 0;
    result = b / a * a;
    printf("%i\n", result);   // wrong specifier: %i with a double argument
}
// The above printf statement outputs random numbers every time.

I completely understand the above code is incorrect since the printf output type is not the same as what I am inputting, but I expected it to act the same way for each form of error instead of changing like this. It just caught my curiosity, so I thought I would ask.

3 Comments
  • @Dinesh int a = 2, b = 5, result = 0; Commented Mar 23, 2014 at 2:57
  • This might satisfy your curiosity: stackoverflow.com/questions/2398791/… Commented Mar 23, 2014 at 3:02
  • printf("%f\n", ...); results in 0.000000 or -0.000000 for about 50% of all possible double values. There are lots of small doubles. It is more informative to use printf("%a\n", ...); or printf("%e\n", ...); when researching UB. Commented Sep 11, 2018 at 15:23

2 Answers


Basically, because if you interpret the bits that make up a small integer as if they were a double, they look like the double value 0. Whereas if you interpret the bits that represent a small double value as an integer, they look like something more interesting. Here is a page that describes how the bits are used to represent a double: http://en.m.wikipedia.org/wiki/IEEE_floating_point . With this link and a little patience, you can actually work out the integer value that a given double's bits would be interpreted as.
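To make the bit reinterpretation concrete, here is a minimal sketch (my own illustration, not part of the original answer; the variable names are made up) that copies the raw bytes with memcpy instead of relying on the mismatched printf. Keep in mind the mismatched printf itself is undefined behavior, and on many calling conventions it may not even read the register the argument was placed in, so the real output is not guaranteed to match this reinterpretation.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    int64_t small_int = 2;     /* bit pattern 0x0000000000000002 */
    double  small_dbl = 2.0;   /* bit pattern 0x4000000000000000 */

    double  int_bits_as_double;
    int64_t dbl_bits_as_int;

    /* Copy the raw bytes across without converting the value. */
    memcpy(&int_bits_as_double, &small_int, sizeof int_bits_as_double);
    memcpy(&dbl_bits_as_int, &small_dbl, sizeof dbl_bits_as_int);

    /* A tiny subnormal, roughly 9.9e-324, which %f would show as 0.000000 */
    printf("%e\n", int_bits_as_double);

    /* A huge integer: 4611686018427387904 */
    printf("%lld\n", (long long)dbl_bits_as_int);
    return 0;
}

The first line prints a tiny subnormal that %f would display as 0.000000; the second prints 4611686018427387904, the kind of "garbage" value the question describes.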


4 Comments

From the OP's question it sounds (to me) like he gets arbitrary values each time he runs the second fragment. So yes, the encoding of a double would "look interesting" if it were interpreted as an integer, but I'd imagine this double would be encoded the same way each time (at least on the same CPU / floating point settings or whatever). "I expected it to act the same way for each form of error instead of changing like this"
I doubt the value changes each time he runs the same program. I suspect he sees a different value for each different double that he uses, but that they change unpredictably.
So the result is due to how the data types are represented, meaning printf will try to interpret the data as the type named by the format specifier regardless of what is passed in, and the output just changes because of that. That makes sense. Thanks.
Note that int and double are not required to be passed as function arguments using the same approach (e.g. stack vs. FP stack). The double printed may have nothing to do with int result.

You used the wrong format specifiers. It should be:

int a = 2, b = 5, result = 0;
result = b / a * a;
printf("%d\n", result);

...

double a = 2, b = 5, result = 0;
result = b / a * a;
printf("%f\n", result);
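For completeness, here is a compilable version of both corrected fragments (the main wrapper, the renamed variables, and the output comments are my addition, not part of the original answer):

#include <stdio.h>

int main(void) {
    int ia = 2, ib = 5, iresult;
    iresult = ib / ia * ia;      /* integer division: 5/2 is 2, so the result is 4 */
    printf("%d\n", iresult);     /* prints 4 */

    double da = 2, db = 5, dresult;
    dresult = db / da * da;      /* floating-point division: 5/2 is 2.5, so the result is 5.0 */
    printf("%f\n", dresult);     /* prints 5.000000 */
    return 0;
}

Note the two corrected programs do not print the same number: with int operands, b/a truncates to 2 before being multiplied back, so the first prints 4 while the second prints 5.000000.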

2 Comments

I completely understand the above code is incorrect since the printf output type is not the same as what I am inputting
I know I am using it wrong. My question is why it acts differently with two different types of wrong input. I assumed both would produce garbage output, not 0.000000 for the first one. I ran into this by accident when I forgot to change the values. I'm just curious if anybody knows why it works this way.
