|
From: Martin W. <mai...@ma...> - 2012-08-04 22:34:31
|
After updating to the latest version of the MinGW GCC (4.7.0), one
of my regression tests started failing. I've isolated the problem
to the following test case:
#include <stdio.h>
#include <math.h>

volatile double scale = 3.0;

int main(int argc, char *argv[])
{
    printf("%20.20f\n", 123000.0 / pow(10.0, scale));
    printf("%d\n", (int)(123000.0 / pow(10.0, scale)));
    printf("%d\n", (int)(123000.0 / pow(10.0, 3.0)));
}
The output from this is:
123.00000000000000000000
122
123
whereas I would have expected the middle number to also be 123.
Is this allowed behaviour, or is it a bug? I realise double to int
conversion is not necessarily exact, but it appears that the double
value is an exact representation of the desired integer value, so I
would have expected it to survive conversion. As shown by the third
line of output, it does when the expression is evaluated at compile
time.
|
|
From: K. F. <kfr...@gm...> - 2012-08-05 00:08:39
|
Hello Martin!
On Sat, Aug 4, 2012 at 6:21 PM, Martin Whitaker
<mai...@ma...> wrote:
> After updating to the latest version of the MinGW GCC (4.7.0), one
> of my regression tests started failing. I've isolated the problem
> to the following test case:
>
> #include <stdio.h>
> #include <math.h>
>
> volatile double scale = 3.0;
>
> int main(int argc, char *argv[])
> {
> printf("%20.20f\n", 123000.0 / pow(10.0, scale));
> printf("%d\n", (int)(123000.0 / pow(10.0, scale)));
> printf("%d\n", (int)(123000.0 / pow(10.0, 3.0)));
> }
>
> The output from this is:
>
> 123.00000000000000000000
> 122
> 123
>
>
> whereas I would have expected the middle number to also be 123.
>
> Is this allowed behaviour, or is it a bug?
I don't claim to know for certain, but I would believe that this
behavior is allowed by the C standard. (I assume that you are
treating this as a C program, and I also believe that this behavior
would be allowed by the C++ standard.)
> I realise double to int
> conversion is not necessarily exact,
(I believe that double-to-int is required to be exact if the
double in question is exactly representable by an int, but
I don't think that that's the issue here.)
> but it appears that the double
> value is an exact representation of the desired integer value, so I
> would have expected it to survive conversion. As shown by the third
> line of output, it does when the expression is evaluated at compile
> time.
In an ideal world, one would like the run-time and compile-time
computations to agree.
But there are several other things that we would want. We would
want to be able to have a cross-compiler that gave the same result
as a native compiler. We would want the run-time computation to
be able to take advantage of floating-point hardware, even though
that can differ from platform to platform. We want the pow function
to be monotone and (close to, by floating-point standards) continuous.
Unfortunately, we can't have all of those things at the same time.
If I had to guess, I would guess that at compile time the compiler
says that the literal "3.0" is indeed an integer, and, in effect, calls
a "pow (double, int)" function, and gets the exact result. (Or maybe
the compiler isn't trying to be that smart, but it calls the pow in a
library statically linked into the compiler executable, and that pow
differs slightly from the pow in the DLL that your program uses at run
time.)
But the run-time calculation takes the contents of the volatile
variable scale, and plugs it into "pow (double, double)" and,
legitimately (*), gets a not-quite-accurate result that's almost
equal to the correct value of 1000, but is one or two bits larger.
Now the result of the division is one or two bits smaller than
the correct value of 123, and gets truncated down to 122 when
converted to an int.
(*) So, the question is whether it's legitimate for
pow (10.0, 3.0) != 1000.0
I think that the answer is yes, even though all of the doubles
in question are exactly representable.
One could hypothetically require that pow be "exact" in that
the result of pow be the representable double nearest the true
infinite-precision result. But this is impractical for a function
like pow, so the result of pow might be off by a couple of bits.
Now let's say that y is a double that is one bit smaller than
3.0 and that pow (10.0, y) is one bit larger than 1000.0.
The result of pow is not the nearest representable double,
but it's only off by a couple of bits, so we deem it good enough.
Now comes the problem. We could say that 3.0 is an integer
and exactly representable, and therefore it's a special case and
pow (10.0, 3.0) should be exact. But if we demand that -- not
really unreasonable -- we then have the problem that
pow (10.0, y) > pow (10.0, 3.0)
even though y < 3.0.
So, if we let pow, for reasons of practical efficiency, not be "exact"
in general, but then demand that pow be exact for certain special
cases (like pow (int, int)), we lose monotonicity and possibly
"continuity" (in some floating-point sense).
You have to compromise somewhere, and I'm pretty sure that the
standard permits the compromise that you see above.
(As a quality-of-implementation issue, one could argue that this
particular compromise is not the best choice, but it's not clear
to me that there is an obvious best choice.)
Happy Floating-Point Hacking!
K. Frank
|
|
From: Martin W. <mai...@ma...> - 2012-08-05 06:37:00
|
K. Frank wrote:
> Hello Martin!
>
> On Sat, Aug 4, 2012 at 6:21 PM, Martin Whitaker
> <mai...@ma...> wrote:
>> After updating to the latest version of the MinGW GCC (4.7.0), one
>> of my regression tests started failing. I've isolated the problem
>> to the following test case:
>>
>> #include <stdio.h>
>> #include <math.h>
>>
>> volatile double scale = 3.0;
>>
>> int main(int argc, char *argv[])
>> {
>> printf("%20.20f\n", 123000.0 / pow(10.0, scale));
>> printf("%d\n", (int)(123000.0 / pow(10.0, scale)));
>> printf("%d\n", (int)(123000.0 / pow(10.0, 3.0)));
>> }
>>
>> The output from this is:
>>
>> 123.00000000000000000000
>> 122
>> 123
>>
>>
>> whereas I would have expected the middle number to also be 123.
>>
>> Is this allowed behaviour, or is it a bug?
>
> I don't claim to know for certain, but I would believe that this
> behavior is allowed by the C standard. (I assume that you are
> treating this as a C program, and I also believe that this behavior
> would be allowed by the C++ standard.)
>
>> I realise double to int
>> conversion is not necessarily exact,
>
> (I believe that double-to-int is required to be exact if the
> double in question is exactly representable by an int, but
> I don't think that that's the issue here.)
>
I was misled by printing out the intermediate double value
and seeing it was an exact integer value. But as another
user has pointed out, the compiler is generating code that
does the calculations at a higher floating-point precision,
so the value being converted to an int is not necessarily
the same as the value printed out.
Thanks for the rest of your comments. As you say, Happy
Floating-Point Hacking!
Martin
|
|
From: Philip K. <phi...@we...> - 2012-08-05 10:36:51
|
I thought I had understood everything. But now it looks like I didn't get a thing.

You mean I should redistribute the sources? Isn't that pretty pointless in the age of the Internet? Wouldn't it suffice to give the credits in a top-level `README'?

No, I had hoped this discussion would lead to information. Now I'm more confused than ever.

To everybody that is really interested in answering my question: please leave four or five bullet points on what I have to do, and I will follow. Thank you.
|
|
From: Roger K. W. <ROG...@sa...> - 2012-08-05 13:39:57
|
On 08/04/2012 08:08 PM, K. Frank wrote:
> Hello Martin!
>
> On Sat, Aug 4, 2012 at 6:21 PM, Martin Whitaker
> <mai...@ma...> wrote:
>> After updating to the latest version of the MinGW GCC (4.7.0), one
>> of my regression tests started failing. I've isolated the problem
>> to the following test case:
>>
>> #include <stdio.h>
>> #include <math.h>
>>
>> volatile double scale = 3.0;
>>
>> int main(int argc, char *argv[])
>> {
>> printf("%20.20f\n", 123000.0 / pow(10.0, scale));
>> printf("%d\n", (int)(123000.0 / pow(10.0, scale)));
>> printf("%d\n", (int)(123000.0 / pow(10.0, 3.0)));
>> }
>>
>> The output from this is:
>>
>> 123.00000000000000000000
>> 122
>> 123
>>
>>
>> whereas I would have expected the middle number to also be 123.
>>
>> Is this allowed behaviour, or is it a bug?
> I don't claim to know for certain, but I would believe that this
> behavior is allowed by the C standard. (I assume that you are
> treating this as a C program, and I also believe that this behavior
> would be allowed by the C++ standard.)
>
>> I realise double to int
>> conversion is not necessarily exact,
> (I believe that double-to-int is required to be exact if the
> double in question is exactly representable by an int, but
> I don't think that that's the issue here.)
>
>> but it appears that the double
>> value is an exact representation of the desired integer value, so I
>> would have expected it to survive conversion. As shown by the third
>> line of output, it does when the expression is evaluated at compile
>> time.
> In an ideal world, one would like the run-time and compile-time
> computations to agree.
>
> But there are several other things that we would want. We would
> want to be able to have a cross-compiler that gave the same result
> as a native compiler. We would want the run-time computation to
> be able to take advantage of floating-point hardware, even though
> that can differ from platform to platform. We want the pow function
> to be monotone and (close to, by floating-point standards) continuous.
>
> Unfortunately, we can't have all of those things at the same time.
>
> If I had to guess, I would guess that at compile time the compiler
> says that the literal "3.0" is indeed an integer, and, in effect, calls
> a "pow (double, int)" function, and gets the exact result. (Or maybe
> the compiler isn't trying to be that smart, but it calls pow in a
> library statically linked into the compiler executable, and that pow
> differs slightly from the pow in dll that your program uses at run
> time.)
>
> But the run-time calculation takes the contents of the volatile
> variable scale, and plugs it into "pow (double, double)" and,
> legitimately (*), gets a not-quite-accurate result that's almost
> equal to the correct value of 1000, but is one or two bits larger.
> Now the result of the division is one or two bits smaller than
> the correct value of 123, and gets truncated down to 122 when
> converted to an int.
>
> (*) So, the question is whether it's legitimate for
>
> pow (10.0, 3.0) != 1000.0
>
> I think that the answer is yes, even though all of the doubles
> in question are exactly representable.
>
> One could hypothetically require that pow be "exact" in that
> the result of pow be the representable double nearest the true
> infinite-precision result. But this is impractical for a function
> like pow, so the result of pow might be off by a couple of bits.
>
> Now let's say that y is a double that is one bit smaller than
> 3.0 and that pow (10.0, y) is one bit larger than 1000.0.
> The result of pow is not the nearest representable double,
> but it's only off by a couple of bits, so we deem it good enough.
>
> Now comes the problem. We could say that 3.0 is an integer
> and exactly representable, and therefore it's a special case and
> pow (10.0, 3.0) should be exact. But if we demand that -- not
> really unreasonable -- we then have the problem that
>
> pow (10.0, y) > pow (10.0, 3.0)
>
> even though y < 3.0.
>
> So, if we let pow, for reasons of practical efficiency, not be "exact"
> in general, but then demand that pow be exact for certain special
> cases (like pow (int, int)), we lose monotonicity and possibly
> "continuity" (in some floating-point sense).
>
> You have to compromise somewhere, and I'm pretty sure that the
> standard permits the compromise that you see above.
>
> (As a quality-of-implementation issue, one could argue that this
> particular compromise is not the best choice, but it's not clear
> to me that there is an obvious best choice.)
>
>
> Happy Floating-Point Hacking!
>
>
> K. Frank
I have been bitten by this as well: different results among MinGW
versions of GCC vs. Linux versions, etc.
Using nearbyint() helps in some cases and also makes the
author's intent clear.
Thus the original post becomes:
#include <stdio.h>
#include <math.h>

volatile double scale = 3.0;

int main(int argc, char *argv[])
{
    printf("%20.20f\n", 123000.0 / nearbyint(pow(10.0, scale)));
    printf("%d\n", (int)(123000.0 / nearbyint(pow(10.0, scale))));
    printf("%d\n", (int)(123000.0 / nearbyint(pow(10.0, 3.0))));
}
--
Roger Wells, P.E.
SAIC
221 Third St
Newport, RI 02840
401-847-4210 (voice)
401-849-1585 (fax)
rog...@sa...
|
|
From: KHMan <kei...@gm...> - 2012-08-05 02:28:27
|
On 8/5/2012 6:21 AM, Martin Whitaker wrote:
> After updating to the latest version of the MinGW GCC (4.7.0), one
> of my regression tests started failing.
Interesting, it fails for me on MinGW gcc 3.4.5, MinGW 4.5.2 and
TDM gcc 4.5.2...
> I've isolated the problem
> to the following test case:
>
> #include<stdio.h>
> #include<math.h>
>
> volatile double scale = 3.0;
>
> int main(int argc, char *argv[])
> {
> printf("%20.20f\n", 123000.0 / pow(10.0, scale));
> printf("%d\n", (int)(123000.0 / pow(10.0, scale)));
> printf("%d\n", (int)(123000.0 / pow(10.0, 3.0)));
> }
>
> The output from this is:
>
> 123.00000000000000000000
> 122
> 123
>
>
> whereas I would have expected the middle number to also be 123.
>
> Is this allowed behaviour, or is it a bug? I realise double to int
> conversion is not necessarily exact, but it appears that the double
> value is an exact representation of the desired integer value, so I
> would have expected it to survive conversion. As shown by the third
> line of output, it does when the expression is evaluated at compile
> time.
Search for 'pow' in your math.h header file and you will find the
answer in a long comment block: "Excess precision when using a
64-bit mantissa for FPU math ops can cause unexpected results..."
Adding:

#include <fenv.h>
fesetenv(FE_PC53_ENV);

gives 123 for the second line. (Or save the current floating-point
environment first with fegetenv() if you need to restore it later.)
The assembly output around the pow() in the second line is:
flds 32(%esp)
fstpl (%esp)
call _pow
fldl 16(%esp) ; loads 123000.0
fdivp %st, %st(1) ; divide with pow() on x87 stack
So the intermediate result was kept on the x87 stack, and the
final result isn't exact I suppose.
glibc wraps the i386 pow instruction and tests for various special
cases and conditions, including appropriate integer arguments, in
order to guarantee exact results where possible.
--
Cheers,
Kein-Hong Man (esq.)
Kuala Lumpur, Malaysia
|
|
From: Martin W. <mai...@ma...> - 2012-08-05 06:19:23
|
KHMan wrote:
> On 8/5/2012 6:21 AM, Martin Whitaker wrote:
>> After updating to the latest version of the MinGW GCC (4.7.0), one
>> of my regression tests started failing.
>
> Interesting, it fails for me on MinGW gcc 3.4.5, MinGW 4.5.2 and
> TDM gcc 4.5.2...
>
Hmmm, yes, the test case also fails for me if I go back to the previous
version. The real code is of course more complex, so it's some other
code generation change that has suddenly exposed this issue.
> Search for 'pow' in your math.h header file and you will find the
> answer in a long comment block: "Excess precision when using a
> 64-bit mantissa for FPU math ops can cause unexpected results..."
>
> Adding:
>
> #include <fenv.h>
> fesetenv(FE_PC53_ENV);
>
> gives 123 for the second line. Or save it first. The assembly
> output around the pow() in the second line is:
>
> flds 32(%esp)
> fstpl (%esp)
> call _pow
> fldl 16(%esp)      ; loads 123000.0
> fdivp %st, %st(1)  ; divide with pow() on x87 stack
>
> So the intermediate result was kept on the x87 stack, and the
> final result isn't exact I suppose.
>
> glibc wraps the i386 pow instruction and tests for various special
> cases and conditions, including appropriate integer arguments, in
> order to guarantee exact results where possible.
>
That explains it. I was misled by printing out the double value and
seeing it was an exact integer. Saving the intermediate value in a
temporary variable fixes the problem.
Thanks!
Martin
|
|
From: Eli Z. <el...@gn...> - 2012-08-05 16:44:29
|
You replied to the wrong thread; hopefully, this will bring it back.

> Date: Sun, 05 Aug 2012 12:36:43 +0200
> From: Philip Köster <phi...@we...>
>
> I thought I had understood everything. But now it looks I didn't get a
> thing.
>
> You mean, I should redistribute the sources?

Not necessarily; see below.

> Isn't that pretty pointless in the ages of the Internet? Wouldn't it
> suffice to give the credits in a top-level `README'?

No.

> To everybody that is really interested in answering my question, please
> leave four or five bullet points on what I have to do, and I will
> follow. Thank you.

I will try.

You don't need to distribute sources of GCC components if you link
those components statically into your binaries. As Greg pointed out,
using the '-static-libgcc' switch to GCC when linking C and C++
programs, and in addition '-static-libstdc++' when linking C++
programs, will accomplish that.

If you do not use the above switches, then your binaries will by
default be linked against the DLL versions of the GCC libraries, and
then you will have to distribute the sources of those DLLs.

OK?
|
|
From: Earnie B. <ea...@us...> - 2012-08-05 17:18:33
|
On Sun, Aug 5, 2012 at 12:44 PM, Eli Zaretskii wrote:
> If you do not use the above switches, then your binaries will by
> default be linked against DLL versions of the GCC libraries, and then
> you will have to distribute the sources of those DLLs.

I will point out that linking the static versions of the libraries will
make your binaries larger, and they may take longer to load because of
that size. If your binaries execute multiple processes, you will want
to consider that the DLL versions may improve the user experience,
while the static versions may diminish it. But the measure of
experience with your binary depends solely on what that binary does.

--
Earnie
-- https://sites.google.com/site/earnieboyd
|
|
From: Philip K. <phi...@we...> - 2012-08-05 17:22:15
|
> You don't need to distribute sources of GCC components if you link
> those components statically into your binaries. As Greg pointed out,
> using the '-static-libgcc' switch to GCC when linking C and C++
> programs, and in addition '-static-libstdc++' when linking C++
> programs, will accomplish that. If you do not use the above switches,
> then your binaries will by default be linked against DLL versions of
> the GCC libraries, and then you will have to distribute the sources
> of those DLLs. OK?

Fair enough. In fact, what I'm doing is loading my DLL from Java via
JNI, and that DLL in turn makes use of MinGW functions. I might look
into this switch that I have never heard of, but as long as I
distribute the sources, I'm on the safe side, yes? That's the way I
understand your comment.
|
|
From: Eli Z. <el...@gn...> - 2012-08-05 18:05:29
|
> Date: Sun, 05 Aug 2012 19:22:04 +0200
> From: Philip Köster <phi...@we...>
>
> Fair enough. In fact, what I'm doing is load my DLL from Java via JNI,
> which in turn makes use of MinGW functions. I might look into this
> switch that I have never heard of, but as long as I distribute the
> sources, I'm on the safe side, yes?

Yes, AFAIK.
|