A more descriptive version of what's going on:
You declare a variable a, which holds a 32-bit value. The compiler knows to treat a as a signed integer, but at runtime all the computer knows is that it's storing 32 bits.
(a) -> 00000000
Next, you set a pointer to unsigned int, i, to point at that value.
(*i) ----> 00000000 <---- (a)
So both a and *i refer to the same 32-bit value; however, based on their types, that value can be interpreted differently.
You then set *i to ~0, which turns on every bit:
(*i) ----> FFFFFFFF <---- (a)
And then you print %d of a. Yes, a is signed, but that's not what matters here. What matters is that you pass the value 0xFFFFFFFF as an argument to printf("%d", ...). When printf executes, the libc code sees the %d; this tells it to expect a value representing a 32-bit signed integer(*), so it uses a signed integer-to-text conversion to produce human-readable output. Because the upper bit is one, that conversion prints a negative number.
I'm assuming you're using a system with a 32-bit int. Some systems use a 16-bit int, but the same principle applies there.
(*) Depending on the system you're compiling for, %d can represent either a 32-bit or a 16-bit integer; you shouldn't assume %d always means 32 bits.
A follow-up point about the format specifier: if you use %u instead, the output is 4294967295 (on a 32-bit int). printf fetches the same 32-bit value from a but interprets it as unsigned, so the printed text depends on the format specifier and the platform's int width, not on a's declared type.