#include<iostream>
#include<stdio.h.>
using namespace std;
int main() {
    float f = 11.11;
    printf("%d", f);
}

When I run the code above in Dev-C++, it prints -536870912. When I run the same code on the Tutorials Point online compiler, I get a different value on every run. What could be the reason behind this?

  • Note: You are compiling using a C++ compiler, not a C compiler.

4 Answers


If the argument doesn't match the format specifier, it's undefined behaviour. Use %f to print a float.

printf("%f",f); 

You should be able to catch this sort of error easily with a good compiler. GCC produces:

warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘double’ [-Wformat=] 

for your code.

P.S.: There's a stray dot in your stdio.h header inclusion.
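For reference, a minimal corrected version of the program from the question might look like this (written as plain C, since only C library features are used):

#include <stdio.h>

int main(void) {
    float f = 11.11f;
    printf("%f\n", f);   /* %f expects a double; the float argument is promoted automatically */
    return 0;
}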


2 Comments

%f in fact expects a double, not a float, since C99 at least.
Indeed, but printf automatically promotes the float to double.

I'm not sure whether you're asking "Why did it not print 11?" or "Why did it print different answers under different compilers?"

The reason it didn't print 11 is that, as other answers have explained, %d doesn't print floats, and in printf calls there are no automatic conversions to fix up mismatches between passed and expected types (see the sketch after the list below).

And the reason it printed different things under different compilers is that it's undefined behavior, meaning anything can happen. Suppose you go out to your car, loosen all the lug nuts on the front wheel until it's about to fall off, get in, and drive down the road at high speed until the wheel does fall off, whereupon you lose control and crash into a ditch. And suppose you get a new car and do exactly the same thing tomorrow, except that by chance you crash into a tree instead. At this point there are two approaches you could take:

  1. Try to figure out what subtle factors caused you to crash into a ditch on one day, and a tree on another day. Why wasn't it more repeatable?
  2. Resolve not to try foolish experiments like this again.
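To make the "no automatic conversions" point concrete, here is a small sketch: printf will not convert the argument for you, so you either change the specifier or convert the argument yourself with a cast.

#include <stdio.h>

int main(void) {
    float f = 11.11f;

    printf("%f\n", f);      /* matches: the float is promoted to double, which %f expects */
    printf("%d\n", (int)f); /* matches: explicit conversion to int, prints 11 */
    /* printf("%d\n", f);      mismatch: undefined behaviour */
    return 0;
}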

1 Comment

@harishbisht29 Because %c is defined as accepting an int. Also because there are some automatic conversions that do happen to the arguments when you call printf: char and short are promoted to int, and float is promoted to double. These are called the default argument promotions, and they apply to any function that (like printf) accepts a variable number of arguments and so has no fixed prototype for those arguments.
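A small sketch of those default argument promotions in action (the values are arbitrary); each argument below is widened before printf ever sees it:

#include <stdio.h>

int main(void) {
    char c = 'A';
    short s = 7;
    float f = 1.5f;

    /* c and s are promoted to int, f is promoted to double, so
       %c, %d and %f all receive exactly the types they expect. */
    printf("%c %d %f\n", c, s, f);
    return 0;
}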

It's a float, use

printf("%f" , f); 

If you don't, it's undefined behaviour. To get a feel for why the output looks strange, you can try to cast an int with value -1 to an unsigned int.

You will get a huge value.
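A tiny sketch of that signed-to-unsigned conversion (which, as a comment below notes, is well defined, unlike the printf mismatch):

#include <stdio.h>

int main(void) {
    int i = -1;
    unsigned int u = (unsigned int)i; /* wraps around modulo UINT_MAX + 1 */
    printf("%u\n", u);                /* prints a huge value, 4294967295 with 32-bit int */
    return 0;
}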

3 Comments

%f in fact expects a double, not a float, since C99 at least.
@alk Since values smaller than double are promoted to double for varargs arguments, it doesn't really matter.
Although it may produce unexpected behavior, casting a negative signed integer to an unsigned integer is not UB.

There is a mismatch between the conversion specifier and the type of the variable here:

 printf("%d",f); 

Passing printf parameters other than those specified by the conversion specifiers in the format string provokes undefined behaviour. From that moment on, anything can happen.

To fix this, use the correct conversion specifier(s).

For floating-point variables, use the conversion specifier %f, which expects a double. However, when a float is passed to a variadic function like printf, it is promoted to a double, so %f works here.
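A short sketch of that promotion: with %f, a float argument and a double argument behave identically, because the float is widened before printf ever sees it.

#include <stdio.h>

int main(void) {
    float f = 11.11f;
    double d = 11.11;

    printf("%f\n", f); /* the float is promoted to double at the call */
    printf("%f\n", d); /* already a double */
    return 0;
}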
