Currently sdcc evaluates arithmetic expressions on integers at compile time, but not arithmetic on pointers.
struct inner { unsigned char d; unsigned char e; };
struct outer { unsigned char b; struct inner c; } a;

a.c.e = 1 + 1 + 1;
The "1 + 1 + 1" is replaced by 3 at compile time, but &(a.c.e) is calculated at runtime as (&a + 1 + 1). It should be done at compile time.
The above code (which is attached, too) should be compiled with
sdcc -mz80 --no-peep to see the problem.
Some ports, such as hc08, do optimize the pointer calculation in code generation (z80 does it partially in code generation and partially in the peephole optimizer: we see one +1 folded into the loading of the constant, followed by an inc; the inc would be optimized away by the peephole optimizer), but I think this should be done at a higher level.
Doing it at a higher level would be cleaner and would handle more complex examples, too. It would also ease register pressure, since the register allocator would never have to allocate registers to intermediate results of the pointer calculation.