Could someone explain this code to me, please? I received some bytecode from an assembler and now I have to use it in my virtual machine. The function below is called in the process, but I don't know how it works or what it is for.
static int32_t bytecode_to_int32(const uint8_t *bytes)
{
    /* Assemble four bytes, most significant first, into one
       32-bit value using shifts and bitwise OR. */
    uint32_t result = (uint32_t)bytes[0] << 24 |
                      (uint32_t)bytes[1] << 16 |
                      (uint32_t)bytes[2] <<  8 |
                      (uint32_t)bytes[3] <<  0;
    /* Reinterpret the bit pattern as a signed 32-bit integer. */
    return (int32_t)result;
}
It shifts bytes[0] into the most significant 8 bits, bytes[1] into the next 8 bits, and so on down to bytes[3] in the least significant 8 bits. In other words, it decodes a 32-bit integer stored in big-endian byte order (most significant byte first) from the byte stream. Because it builds the value with shifts rather than reading the four bytes as one word from memory, it produces the correct result regardless of the local machine's endianness. The final cast just reinterprets the unsigned result as a signed int32_t.
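Here is a minimal sketch showing it in action (the byte values are made up for illustration; the signed result for the all-0xFF case assumes a two's-complement host, which is what you'll have in practice):

#include <stdint.h>
#include <stdio.h>

/* The function from the question, unchanged. */
static int32_t bytecode_to_int32(const uint8_t *bytes)
{
    uint32_t result = (uint32_t)bytes[0] << 24 |
                      (uint32_t)bytes[1] << 16 |
                      (uint32_t)bytes[2] <<  8 |
                      (uint32_t)bytes[3] <<  0;
    return (int32_t)result;
}

int main(void)
{
    /* Big-endian encoding of 0x00000102: prints 258 on any host. */
    const uint8_t pos[4] = { 0x00, 0x00, 0x01, 0x02 };
    printf("%d\n", bytecode_to_int32(pos));   /* 258 */

    /* 0xFFFFFFFF reinterpreted as signed: prints -1 on a
       two's-complement machine. */
    const uint8_t neg[4] = { 0xFF, 0xFF, 0xFF, 0xFF };
    printf("%d\n", bytecode_to_int32(neg));   /* -1 */

    return 0;
}

Big-endian is the conventional "network byte order" for serialized formats, so an assembler emitting bytecode this way lets the same file be loaded correctly on both little-endian and big-endian machines.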