#include <stdio.h>
#include <conio.h>

int main() {
    int a = 2;
    float b = 4;

    clrscr();
    printf("%d \n", a + b);
    printf("%f \n", a + b);
    printf("%d \n", a * b);
    printf("%f \n", a * b);
    printf("%d \n", b / a);
    printf("%f \n", b / a);
    getch();
    return 0;
}

Output :-

-1293228905
6.000000
0
8.000000
0
2.000000

It seems like it's just printing values from somewhere near the edges of the int range.

  • The type of a+b (or any operation combining a and b) will be float. If you want to print it as an int, you'll need to cast it to an int, e.g. (int)(a+b). Commented Dec 29, 2022 at 13:55
  • All of the places you use %d there invoke undefined behaviour. The argument isn't compatible with int, so you shouldn't use %d. Commented Dec 29, 2022 at 14:10
  • Read the fine warnings. Think about them. What do they tell you? Why do they tell you that? Commented Dec 29, 2022 at 14:17

2 Answers


Your code has undefined behavior for all printf statements that use a %d conversion. The reason is that you pass a value of type float where printf expects a value of type int.

To evaluate the expression a+b, the compiler converts the int operand to type float and evaluates the addition using floating point computation. The type of the result is float too. The same applies to the other expressions.

Types int and float have different representations and may be passed in different ways to vararg functions such as printf. Passing a float value where printf expects an int value results in undefined behavior: anything can happen, and it is pointless to try to make sense of the surprising output.

If you mean to use %d, you must convert the value to type int explicitly with a cast:

 printf("%d \n", (int)(a + b)); 

Note that you can also use %.0f to output the float argument with no decimals, but the output will differ as printf will round the float value to the nearest integer, whereas (int)(a + b) will truncate the float value toward 0 if it can be represented as an int.

Note finally that float values are implicitly promoted to double when passed to printf and other vararg functions. Using type double for all floating-point variables is recommended unless you target very specific applications such as computer graphics or embedded systems.

Here is a modified version:

#include <stdio.h>

int main() {
    int a = 2;
    double b = 4;

    printf("a+b: %f %d %.0f\n", a + b, (int)(a + b), a + b);
    printf("a*b: %f %d %.0f\n", a * b, (int)(a * b), a * b);
    printf("a/b: %f %d %.0f\n", a / b, (int)(a / b), a / b);
    printf("b/a: %f %d %.0f\n", b / a, (int)(b / a), b / a);
    getchar();
    return 0;
}

Output:

a+b: 6.000000 6 6
a*b: 8.000000 8 8
a/b: 0.500000 0 0
b/a: 2.000000 2 2



In C, the %d specifier is used to print an int value, and %f is used to print a floating-point value.

In this line printf("%d \n", a + b);

you are adding an integer and a float, and then printing the result using the %d specifier. This shows the wrong result as output.

In this line printf("%d \n", a * b); you are multiplying an integer and a float and printing the result using the %d specifier. It also shows the wrong output. To print the result correctly, you should use %f.

