
I tried this:

int a;
unsigned int *i = (unsigned int *)&a;
*i = ~0;
printf("%d\n", a);

I was expecting the answer to be positive since i is unsigned, but the program prints -1. Could you please explain, and point me to where I can read more about this?

  • My bet is on an incorrect printf format specifier... Commented Apr 26, 2019 at 19:57
  • @EugeneSh. Using %u as the format specifier, the output is 4294967295. So is the output compiler-dependent? Commented Apr 26, 2019 at 19:59
  • Is it good enough for you? How did you come to the conclusion it is compiler-dependent? The answer is actually yes, but probably not for the reason you are thinking of: it varies with the size of int. Commented Apr 26, 2019 at 19:59
  • I'm guessing you need to read up on two's complement: en.wikipedia.org/wiki/Two%27s_complement Commented Apr 26, 2019 at 20:00
  • I'm not sure where the confusion is here... you're printing with %d, which means the computer will fetch the 32-bit value from a and interpret it as a signed integer (since its upper bit is one, it will show a negative value...) Commented Apr 26, 2019 at 20:09

1 Answer


A more descriptive version of what's going on:

You declare a variable a, which (on your system) holds a 32-bit value. The compiler knows to treat a as a signed integer, but at runtime the only thing the computer knows is that it's storing 32 bits.

(a) -> 00000000 

Next, you set an unsigned int pointer i to point at that value.

 (a) --+
       +---> 00000000
(*i) --+

So both a and *i refer to the same 32-bit value; based on their types, however, that value can be interpreted differently...

You then set *i to ~0 (all bits set):

 (a) --+
       +---> FFFFFFFF
(*i) --+

And then you print a with %d. The fact that a is declared signed isn't what matters here; what matters is that you pass the bit pattern 0xFFFFFFFF as an argument to printf("%d", ...). At runtime, the C library sees the %d, which tells printf to expect a value representing a signed int(*), so it uses a signed integer-to-text conversion to produce human-readable output. Because the upper bit is one, that conversion produces a negative number.

I'm assuming you're using a system with a 32-bit int. Some systems use a 16-bit int, but the same principle applies there.

(*) -- depending on the system being compiled for, %d matches the width of int, which can be 32 or 16 bits... You shouldn't assume %d always means 32 bits.
