Hello All, I have one for you.
unsigned int x = 65535;
long y = (long)x;
// y is now -1!!!
Microsoft and GCC do it right: y = 65535.
What processor are you targeting?
This seems like a perfectly reasonable result to me. I'm not sure exactly what the relevant standard has to say about it, but I've seen this behavior on several micros in the past; I seem to recall it's one of those undefined or implementation-defined behaviors. At any rate, with VC++ or GCC on a PC these days, sizeof(unsigned int) == sizeof(long); on whatever micro you are targeting this is NOT necessarily the case, and sizeof(unsigned int) might instead equal sizeof(unsigned short int). I am also pretty sure those compilers are more compliant with the current C/C++ standards than SDCC.
65535 == 0xFFFF, which if reinterpreted as a signed 16-bit value is -1; if that is then cast from 16 bits to 32 bits, the sign bit is extended. What happens if you first cast to unsigned long?
unsigned int x = 65535 ;
long y = (long)(unsigned long)x ;
This should perform an unsigned conversion to the correct size first and then reinterpret the result as signed.
SDCC has got it right too.
First of all, this example does not even generate code for x or y. But if I change the function to return a long and add "return y;", it gives 0x0000FFFF. That is the only correct result, even when sizeof(int) < sizeof(long), which is the case for SDCC.
I tested mcs51 and z80. So the main question becomes which version of SDCC are you using?
Thank you both for taking the time to look at my problem.
I'm using SDCC 2.6.0 for the MCS8051.
It's true that the above function doesn't really create x or y. In the function where I saw this error, I really am using the long value, and therefore it was created (wrongly). I tried to write the simplest example I could to show the problem.
In my code I printed out the hex value of y (from above), and it showed that during the conversion from my uint16 to my int32 it sign-extended my uint. For numbers from $8000 up to $FFFF it would store $FFFF8000 up to $FFFFFFFF, which is not what I wanted or expected.
My advice is then: try to find out if this also occurs with the latest snapshot and your code. If so, file a bug report including source code which reproduces the bug.
Long John Silicon
This is not a Bug.
Microsoft and GCC are probably compiling for a 32-bit CPU and 32-bit addressing,
while SDCC is compiling for embedded work, usually a 16-bit MCU and 16-bit addressing.
So, if you assign 65535 to a 16-bit int,
then read this int back, you will get -1.
The 16-bit int range is -32768 to 32767.
If you want 65535 in a 16-bit int, then use unsigned int (range 0 to 65535) !
Just like assigning 255 to a (signed) char will give you -1.
The 8-bit signed char range is -128 to 127.
Unless you use unsigned char (range 0 to 255).
jlsilicon: "If you want 65535 in a 16-bit int, then use unsigned int (range 0 to 65535) !"
But jeronimo479 USED unsigned int :-) (unsigned int x = 65535;)
Typecast into an unsigned long.
When you typecast an unsigned value into a signed type, the compiler doesn't regard it in its previous context, but in its new context. Its typecast (new) context is a signed number, so it looks at what the bit pattern means as a signed value in the old size and carries that number over to the new context. In this case, x equals (2^16 - 1), but that bit pattern read as a signed 16-bit number is -1. Since the compiler is regarding the signed context, it takes the -1 from the int and makes the long equal to the same number. If instead you cast to unsigned long first, the compiler regards the value (2^16 - 1) and copies that into the long.