On 08/04/2012 08:08 PM, K. Frank wrote:
> Hello Martin!
> On Sat, Aug 4, 2012 at 6:21 PM, Martin Whitaker
> <mailing-list@...> wrote:
>> After updating to the latest version of the MinGW GCC (4.7.0), one
>> of my regression tests started failing. I've isolated the problem
>> to the following test case:
>> #include <stdio.h>
>> #include <math.h>
>> volatile double scale = 3.0;
>> int main(int argc, char *argv[])
>> {
>>     printf("%20.20f\n", 123000.0 / pow(10.0, scale));
>>     printf("%d\n", (int)(123000.0 / pow(10.0, scale)));
>>     printf("%d\n", (int)(123000.0 / pow(10.0, 3.0)));
>>     return 0;
>> }
>> The output from this is:
>> whereas I would have expected the middle number to also be 123.
>> Is this allowed behaviour, or is it a bug?
> I don't claim to know for certain, but I would believe that this
> behavior is allowed by the C standard. (I assume that you are
> treating this as a C program, and I also believe that this behavior
> would be allowed by the C++ standard.)
>> I realise double to int
>> conversion is not necessarily exact,
> (I believe that double-to-int conversion is required to be exact if
> the double in question has an integral value that is representable
> as an int, but I don't think that that's the issue here.)
>> but it appears that the double
>> value is an exact representation of the desired integer value, so I
>> would have expected it to survive conversion. As shown by the third
>> line of output, it does when the expression is evaluated at compile
>> time.
> In an ideal world, one would like the run-time and compile-time
> computations to agree.
> But there are several other things that we would want. We would
> want to be able to have a cross-compiler that gave the same result
> as a native compiler. We would want the run-time computation to
> be able to take advantage of floating-point hardware, even though
> that can differ from platform to platform. We want the pow function
> to be monotone and (close to, by floating-point standards) continuous.
> Unfortunately, we can't have all of those things at the same time.
> If I had to guess, I would guess that at compile time the compiler
> says that the literal "3.0" is indeed an integer, and, in effect, calls
> a "pow (double, int)" function, and gets the exact result. (Or maybe
> the compiler isn't trying to be that smart, but it calls pow in a
> library statically linked into the compiler executable, and that pow
> differs slightly from the pow in the DLL that your program uses at
> run time.)
> But the run-time calculation takes the contents of the volatile
> variable scale, and plugs it into "pow (double, double)" and,
> legitimately (*), gets a not-quite-accurate result that's almost
> equal to the correct value of 1000, but is one or two bits larger.
> Now the result of the division is one or two bits smaller than
> the correct value of 123, and gets truncated down to 122 when
> converted to an int.
> (*) So, the question is whether it's legitimate for
> pow (10.0, 3.0) != 1000.0
> I think that the answer is yes, even though all of the doubles
> in question are exactly representable.
> One could hypothetically require that pow be "exact", meaning that
> the result of pow is the representable double nearest the true
> infinite-precision result. But this is impractical for a function
> like pow, so the result might be off by a couple of bits.
> Now let's say that y is a double that is one bit smaller than
> 3.0 and that pow (10.0, y) is one bit larger than 1000.0.
> The result of pow is not the nearest representable double,
> but it's only off by a couple of bits, so we deem it good enough.
> Now comes the problem. We could say that 3.0 is an integer
> and exactly representable, and therefore it's a special case and
> pow (10.0, 3.0) should be exact. But if we demand that -- not
> really unreasonable -- we then have the problem that
> pow (10.0, y) > pow (10.0, 3.0)
> even though y < 3.0.
> So, if we let pow, for reasons of practical efficiency, not be "exact"
> in general, but then demand that pow be exact for certain special
> cases (like pow (int, int)), we lose monotonicity and possibly
> "continuity" (in some floating-point sense).
> You have to compromise somewhere, and I'm pretty sure that the
> standard permits the compromise that you see above.
> (As a quality-of-implementation issue, one could argue that this
> particular compromise is not the best choice, but it's not clear
> to me that there is an obvious best choice.)
> Happy Floating-Point Hacking!
> K. Frank
I have been bitten by this as well - different results among MinGW
versions of GCC, Linux versions of GCC, etc.
The use of nearbyint() seems to be useful in some cases and also makes
the author's intent clear.
Thus the original post becomes:
#include <stdio.h>
#include <math.h>
volatile double scale = 3.0;
int main(int argc, char *argv[])
{
    printf("%20.20f\n", 123000.0 / nearbyint(pow(10.0, scale)));
    printf("%d\n", (int)(123000.0 / nearbyint(pow(10.0, scale))));
    printf("%d\n", (int)(123000.0 / nearbyint(pow(10.0, 3.0))));
    return 0;
}
> MinGW-users mailing list
Roger Wells, P.E.
221 Third St
Newport, RI 02840