Let's say we have the following two pieces of code:
```c
int *a = (int *)malloc(sizeof(*a));
int *b = (int *)malloc(sizeof(*b));
```

and

```c
int *a = (int *)malloc(2 * sizeof(*a));
int *b = a + 1;
```

Both of them allocate two integers on the heap, and (assuming normal usage) they should be equivalent. The first seems to be slower, as it calls malloc twice, while the second can result in more cache-friendly code. The second, however, is possibly insecure: we can accidentally overwrite the value b points to just by incrementing a and writing through the resulting pointer (or someone malicious can change the value b points to just by knowing where a is).
It's possible that the above claims are not true (for example, the speed claim is questioned here: Minimizing the amount of malloc() calls improves performance?), but my question is simply: can the compiler do this type of transformation, or is there something fundamentally different between the two according to the standard? If the transformation is possible, what compiler flags (let's say for gcc) can enable it?
[…] malloc() fragment than by the first. The first would need two lots of accounting overhead, so the chances are good that the values in a and b are more than 4 bytes apart in that case (probably at least 8 bytes apart on a 32-bit system and at least 16 bytes on a 64-bit system, but those numbers are endlessly fungible across implementations). Use sizeof(int) rather than assuming 4. Code like this is what makes porting from 32-bit to 64-bit far, far harder than it should be. (Sure, int is the same size on both, but other types differ considerably.)