From: Emanuel F. <E.F...@op...> - 2017-01-20 04:07:45
Hi KHMan,

Thank you for your well-documented explanation - it shows how my "stubbornness" is actually grounded in reality. :-) Gosh, good Lord, I never even knew about -Ofast! :-D

And I guess kudos also to the IEEE: these "standards bodies" are often seen as stifling creativity, but in this case they're invaluable: I'd hate for our clients to report different results just because they were running on AMDs!

All the best,
Emanuel

P.S. Your "(a+b != b+a) for floating point" example is excellent: that's indeed the kind of "gymnastics" we often do to manage the floats' inherent IMprecision.

On 20-Jan-17 04:34, KHMan wrote:
> On 1/20/2017 9:53 AM, Emanuel Falkenauer wrote:
>> On 19-Jan-17 12:35, Peter Rockett wrote:
>> [snip snip]
>>> I think everybody (apart maybe from the OP) agrees how floating
>>> point numbers behave. Keith makes a good point about rounding.
>>> Can I toss in another feature: changing the compiler optimisation
>>> level often reorders instructions, meaning that rounding errors
>>> accumulate in different ways. So changing the optimisation level
>>> often slightly changes the numerical answers. :-\
>>
>> I agree that it could well (or even should?) be the case... but
>> it's not in my case - to my own pleasant surprise.
>>
>> I can build with -O3 to get the most juice for releases, or with
>> -O0 to debug... my logs are still the same (save for actual bugs).
>> I even compile with -mtune=native -march=native -mpopcnt -mmmx
>> -mssse3 -msse4.2 (native: I'm on Xeons), although I doubt very
>> much the Borland compiler knows anything about those
>> optimizations... and yet the latter's logs are still identical to
>> MinGW's.
>>
>> Honestly it beats me as well... but I'm sure glad it's the case! :-)
>
> Since nobody has filled this in...
>
> In the past it has been pointed out to me that gcc by default
> respects the possibility that (a+b != b+a) for floating point, so
> it does not attempt those kinds of reordering. So one ought to get
> consistent results from the FPU.
>
> By default, -O[0123s] is still math-strict [1]. -Ofast, a host of
> other math options, and some CPU-specific options break strictness
> for more speed. If -O[0123s] misbehaves, I guess it should be
> investigated as a potential bug.
>
> Agner Fog's manual 1, section 8 [2] gives a table of optimizations
> performed, including floating point optimizations, for many
> compilers including gcc and Borland. So it is likely that gcc did
> all optimizations while staying math-strict.
>
> [1] https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
> [2] http://www.agner.org/optimize/