Your code has undefined behavior in every printf statement that uses a %d conversion: you pass a value of type float where printf expects a value of type int.
To evaluate the expression a+b, the compiler converts the int operand to type float and evaluates the addition using floating point computation. The type of the result is float too. The same applies to the other expressions.
Types int and float have different representations and may be passed in different ways to vararg functions such as printf. Passing a float value where printf expects an int value results in undefined behavior: anything can happen, and it is pointless to try to make sense of the surprising output.
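For illustration, here is a minimal sketch of the problematic pattern, assuming variables a of type int and b of type float as described above:

#include <stdio.h>

int main(void) {
    int a = 2;
    float b = 3;
    printf("%d\n", a + b);  /* undefined behavior: a + b has type float (promoted to double), but %d expects int */
    return 0;
}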
If you mean to use %d, you must convert the value to type int explicitly with a cast:
printf("%d \n", (int)(a + b));
Note that you can also use %.0f to output the float argument with no decimals, but the output may differ: printf rounds the value to the nearest integer, whereas the cast (int)(a + b) truncates it toward zero (and the cast itself is undefined behavior if the truncated value cannot be represented as an int).
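Here is a small sketch of the difference, with a value chosen only to make it visible:

#include <stdio.h>

int main(void) {
    double x = 2.0 / 3.0;     /* 0.666667 */
    printf("%.0f\n", x);      /* prints 1: rounded to the nearest integer */
    printf("%d\n", (int)x);   /* prints 0: truncated toward zero */
    return 0;
}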
Note finally that float values are implicitly promoted to double when passed to printf and other vararg functions. Using type double for all floating-point variables is recommended unless you target very specific applications such as computer graphics or embedded systems.
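A small sketch of this promotion: both arguments below reach printf as double, so %f matches either one.

#include <stdio.h>

int main(void) {
    float f = 1.5f;
    double d = 1.5;
    /* f is converted to double by the default argument promotions */
    printf("%f %f\n", f, d);
    return 0;
}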
Here is a modified version:
#include <stdio.h>

int main() {
    int a = 2;
    double b = 3;
    /* each expression is printed as a double, as a truncated int, and rounded with %.0f */
    printf("a+b: %f %d %.0f\n", a + b, (int)(a + b), a + b);
    printf("a*b: %f %d %.0f\n", a * b, (int)(a * b), a * b);
    printf("a/b: %f %d %.0f\n", a / b, (int)(a / b), a / b);
    printf("b/a: %f %d %.0f\n", b / a, (int)(b / a), b / a);
    getchar();
    return 0;
}
Output:
a+b: 5.000000 5 5
a*b: 6.000000 6 6
a/b: 0.666667 0 1
b/a: 1.500000 1 2
a+b (or any operation combining a and b) will be float. If you want to print it as an int, you'll need to cast it to an int, e.g. (int)(a+b). Using %d there invokes undefined behaviour: the argument isn't compatible with int, so you shouldn't use %d.