I know this is a simple question, but it came up while I was coding and I am wondering how it works. My first question: when printf is given an int like the code below, but the format string expects a %f (double) value, why does it always output 0.000000? I am compiling with GCC and running it in a Linux terminal.
```c
#include <stdio.h>

int main(void) {
    int a = 2, b = 5, result = 0;
    result = b / a * a;
    printf("%f\n", result); /* outputs 0.000000 every time */
    return 0;
}
```

Then, when I use the code below and give printf a double where it expects an %i integer value, the output is always random garbage.
```c
#include <stdio.h>

int main(void) {
    double a = 2, b = 5, result = 0;
    result = b / a * a;
    printf("%i\n", result); /* outputs a different random number every time */
    return 0;
}
```

I completely understand that the code above is incorrect, since the printf format specifier does not match the type of the argument I am passing, but I expected both kinds of mismatch to misbehave in the same way instead of one printing 0.000000 and the other printing garbage. It just caught my curiosity, so I thought I would ask.
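For reference, here is a minimal sketch with the specifiers matched to the argument types (the variable names here are just illustrative); compiling with GCC's -Wall, which enables -Wformat, should warn about both of the original mismatches.

```c
#include <stdio.h>

int main(void) {
    int ia = 2, ib = 5;
    int iresult = ib / ia * ia;    /* integer arithmetic: 5/2 truncates to 2, so 2*2 == 4 */
    printf("%d\n", iresult);       /* %d matches an int argument, prints 4 */

    double da = 2, db = 5;
    double dresult = db / da * da; /* floating-point: 5.0/2.0 == 2.5, so 2.5*2.0 == 5.0 */
    printf("%f\n", dresult);       /* %f matches a double argument, prints 5.000000 */
    return 0;
}
```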
With int a = 2, b = 5, result = 0;, calling printf("%f\n", ...) results in 0.000000 or -0.000000 for about 50% of all possible double bit patterns: a huge share of doubles have a very small magnitude, and %f rounds them all to six zero decimals. It is more informative to use printf("%a\n", ...) or printf("%e\n", ...) when researching UB.
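A quick illustration of that point (the tiny value below is just an arbitrary example): %f hides small magnitudes behind 0.000000, while %e and %a still show them.

```c
#include <stdio.h>

int main(void) {
    double tiny = 1e-9;   /* small magnitude, below %f's default six decimal places */
    printf("%f\n", tiny); /* prints 0.000000 */
    printf("%e\n", tiny); /* prints 1.000000e-09 */
    printf("%a\n", tiny); /* prints the exact hex form, e.g. 0x1.12e0be826d695p-30 */
    return 0;
}
```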