long g(long t)
{
    if (t >= 2000 && t < 4250)
        t = t - (24 * t - 38580) / 1000;
    return t;
}

int main()
{
    if (g(4000) != 3943) fail();
    pass();
}
Current SDCC will optimize the `24*t` to 16 bits because of the previous `if`, and the result is not as expected.
sdcc -v
SDCC : mcs51/z80/z180/r2k/r2ka/r3ka/sm83/tlcs90/ez80_z80/z80n/r800/ds390/pic16/pic14/TININative/ds400/hc08/s08/stm8/pdk13/pdk14/pdk15/mos6502/mos65c02/f8 TD- 4.5.2 #15411 (Linux)
published under GNU General Public License (GPL)
Any comments will be helpful.
P.S.: Edited by @spth, so the multiplication symbols are shown properly instead of being used as markup.
A little simpler version with assembly output:
gives for g:
Here the `if` is done with the correct 32-bit constants, but afterwards everything is indeed on 16 bits: the multiplication is done on 16 bits, the subtraction is implemented as adding 0x694c (note that 65536-0x694c=38580), and the 0x03e8 with which the final 16-bit division is performed is the original 1000 decimal.
Without the `t>=2000` check in the `if`, the expression would use longs. The condition checks that t>=2000 and t<4250. Now, with these limits, can `(24*t-38580)/1000` fit in 16-bit arithmetic? As 24*4250==102000, for unsigned arithmetic, avoiding one bit could be enough: `(12*t-19290)/500` <-- that's not done by SDCC now. If the original program were rewritten e.g. as
The results would be as expected (I get pass in this example).
My experiments show that the current logic in the compiler calculates the range in which it could use 16-bit arithmetic for the above expression, up to an upper limit of:
because
for one above that, it would use longs.
But the above limits are reasonable if the unchanged expression `24*t-38580` uses `__divuint`, i.e. `t=t-(24*(unsigned)t-(unsigned)38580)/1000;` would give correct answers for the input range `t>=2000 && t<4339`.

So if the compiler is not doing the "adjustment of the constants to use fewer bits" as in my first idea, `(12*t-19290)/500;`, but by design always keeps the constants as it finds them in the source, then the current bug is in not using `__divuint` for that expression in the range for which it estimated, under these assumptions, that all calculations should fit in 16 bits.

I've used your simplified version for a regression test ([r15441], disabled for now, until the bug is fixed). Definitely looks like an issue with generalized constant propagation.
`--nogenconstprop` is a workaround.

Related
Commit: [r15441]
A closer look revealed that generalized constant propagation itself is fine, it just happens to trigger a bug in the handling of unsigned BitInt divisions here.
Fixed in [r15555].
Related
Commit: [r15555]
I have to reopen this ticket: the fix caused a regression for ports that use support functions for shifts. I should have a fix by tonight.
Fixed the shift issue in [r15559].
Related
Commit: [r15559]