There is no difference. Let's take a look at a bit of assembly output from objdump -S binary:
int main() {
  40053c:   55                      push   %rbp
  40053d:   48 89 e5                mov    %rsp,%rbp
  400540:   48 83 ec 30             sub    $0x30,%rsp
  400544:   64 48 8b 04 25 28 00    mov    %fs:0x28,%rax
  40054b:   00 00
  40054d:   48 89 45 f8             mov    %rax,-0x8(%rbp)
  400551:   31 c0                   xor    %eax,%eax
    my_struct_t s;
    s.c.d = 3;
  400553:   c6 45 e9 03             movb   $0x3,-0x17(%rbp)
    a_struct_t d2;
    d2.d = 4;
  400557:   c6 45 df 04             movb   $0x4,-0x21(%rbp)
}
.... // the rest is not interesting, or at least I believe so
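The struct definitions themselves are not part of the dump. A minimal sketch that is consistent with the interleaved source lines (everything except the members c and d is my assumption) could look like this:

    /* Hypothetical definitions -- only the members c and d appear in the
       dump above; the rest of the layout is assumed. */
    typedef struct {
        char d;            /* the byte written by the movb instructions */
    } a_struct_t;

    typedef struct {
        a_struct_t c;      /* nested one level deep */
    } my_struct_t;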
Now, I am not an expert in x86 assembly, but we can clearly see that there is just one instruction for each of the two assignments: a single movb to a fixed offset from the base pointer.
And why is that so? If a struct's size and layout are fully known, then every member offset can be calculated during compilation. The access then boils down to "put value X in memory cell Y".
The level of nesting is not important: nested offsets simply add up to another compile-time constant.
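You can watch those offsets being computed at compile time with offsetof (a small sketch reusing the hypothetical types from above):

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { char d; } a_struct_t;
    typedef struct { a_struct_t c; } my_struct_t;

    int main(void) {
        /* offsetof expands to a compile-time constant; a nested access
           like s.c.d is just a sum of such constants. */
        printf("offsetof(my_struct_t, c) = %zu\n", offsetof(my_struct_t, c));
        printf("offsetof(a_struct_t, d)  = %zu\n", offsetof(a_struct_t, d));
        printf("offset of s.c.d          = %zu\n",
               offsetof(my_struct_t, c) + offsetof(a_struct_t, d));
        return 0;
    }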
As for general guidelines on performance tuning, let me quote Donald Knuth:
"Premature optimization is the root of all evil."
Better tune your algorithms and I/O than worry about the compiler's work. The compiler can do miracles in terms of optimisation if you allow it to with the right compilation flags (such as -O2 or -O3).
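As a quick illustration of that (assuming GCC or Clang; the exact output varies by version), the two stores from the example above are dead code, and with optimisation enabled they typically disappear entirely:

    /* Build and inspect with, e.g.:
       gcc -O2 -c opt_demo.c && objdump -d opt_demo.o
       s and d2 are never read, so at -O2 the compiler is free to drop
       both stores; main() usually reduces to "xor %eax,%eax; ret". */
    typedef struct { char d; } a_struct_t;
    typedef struct { a_struct_t c; } my_struct_t;

    int main(void) {
        my_struct_t s;
        s.c.d = 3;    /* dead store: likely eliminated at -O2 */
        a_struct_t d2;
        d2.d = 4;     /* dead store: likely eliminated at -O2 */
        return 0;
    }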