Even in older C standards, according to Wikipedia, `int` is guaranteed to be at least 16 bits wide, independently of the processor architecture. This goes along with the recommendation that `int` be "the integer type that the target processor is most efficiently working with".
So on 8- or 16-bit processor architectures, I would usually expect `int` to be the same type as `int16_t`, so both compile to exactly the same machine instructions. On architectures with wider registers, `int` arithmetic may be equally or more efficient. AFAIK certain RISC architectures in particular are optimized for 32- or 64-bit arithmetic, so 16-bit arithmetic may, in theory, be slightly slower on those architectures.
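As a minimal sketch of what I mean (the function names are made up for illustration): on a 16-bit target where `int` is 16 bits wide, both versions below should compile to essentially the same machine code, while on a 32- or 64-bit target the `int16_t` version may need extra narrowing/sign-extension instructions.

```c
#include <stdint.h>

/* On a 16-bit target where int == int16_t, these should be identical. */
int sum_int(int a, int b)
{
    return a + b;
}

int16_t sum_int16(int16_t a, int16_t b)
{
    /* Arithmetic is done at int width (integer promotion), then narrowed
     * back to 16 bits; on wider targets this narrowing may cost an extra
     * instruction or two. */
    return (int16_t)(a + b);
}
```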
However, in reality I would not expect this to matter for the vast majority of real-world programs and machines. If you want numbers, look at the instruction tables for various Intel/AMD/VIA processor architectures; there you can see how large (or small) the differences in CPU cycles are between the 16- and 32-bit variants of the same instruction.
If the implementation defines `typedef int int16_t;`, then `int` and `int16_t` are the same thing and have the same performance. Somewhat related question: What is the difference between `intXX_t` and `int_fastXX_t`?
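A quick way to see how this plays out on your own platform is a small sketch like the one below. Whether `int16_t` ends up as a typedef of `int` or of `short` depends on the ABI, and `int_fast16_t` is whatever the implementation considers fastest (often 32 or 64 bits on desktop targets).

```c
#include <stdint.h>
#include <stdio.h>

/* Prints the width of each type on the current platform. */
int main(void)
{
    printf("int:          %zu bytes\n", sizeof(int));
    printf("int16_t:      %zu bytes\n", sizeof(int16_t));
    printf("int_fast16_t: %zu bytes\n", sizeof(int_fast16_t));
    return 0;
}
```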