From: al d. <ad...@fr...> - 2006-12-07 01:41:08
On Wednesday 06 December 2006 20:06, John Doe wrote:
> Concerning floating point, is the rounding mode set to 64-bit
> or 80-bit when the regressions are running on x86 machines?
> Having 80-bit enabled causes all kinds of issues for
> regressions because debug and optimized versions of code can
> have significantly different results.  On x86_64
> machines, the 387 instructions are bypassed.

That option is not always obeyed by the compilers.  It is really
nothing more than a minor nuisance in validating regressions.
There are things like ...

A number that should be zero, perhaps made by subtracting one
version of 3.3489 from another version of 3.3489, both produced
by some calculation.  The result is 6.2342e-16 in one version
and 6.2738e-16 in another.  What if one of the numbers is
-7.67842e-18 ?

A calculation uses the "sin()" function, which is implemented a
little differently on a different CPU, or with a different
compiler.  It might not show just looking at the number, but
then subtract them.....

If you are picky, it is possible to see the difference between
an Intel and an AMD processor.  It is easy to see the
difference between an AMD-64 in "long" mode vs. the same AMD-64
in 32-bit mode.

Compilers sometimes optimize by combining common subexpressions
or rearranging the order of an expression.  Usually this is
desirable because it leads to better performance and has no
undesirable effects.

Part of the point of regressions is to show these issues.  It is
important that they do.  Suppose I make a change to an
algorithm.  I want to see that effect, and will design tests to
show it.  Consider ...  How accurate is the time step control?
What error is added by something like "bypass"?  How do I prove
that this model parameter actually does something?

Tests are often used to compare one simulator against another.
In this case, it is guaranteed that there will be differences.
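The "number that should be zero" problem above is why a plain relative
comparison is the wrong tool near zero.  A minimal sketch (in Python,
using the exact values from this thread; the tolerances are illustrative
assumptions, not anything the regression suite actually uses):

```python
import math

def close(a, b, rel_tol=1e-9, abs_tol=1e-12):
    """Pass if the values agree to a relative tolerance, OR if both are
    negligibly small -- i.e. both are "a zero plus roundoff noise"."""
    return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)

# Two runs compute "3.3489 minus another version of 3.3489":
a = 6.2342e-16
b = 6.2738e-16

# A purely relative comparison calls these "different" -- they disagree
# in the first digit, even though both are just roundoff from ~3.3489:
print(math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0))   # False

# With an absolute floor, both count as zero and the comparison passes:
print(close(a, b))                                      # True

# Even a sign flip to -7.67842e-18 is fine once there is an absolute floor:
print(close(a, -7.67842e-18))                           # True
```

The point is that the absolute floor has to be chosen relative to the
magnitudes that went into the subtraction, which is exactly the judgment
a lax "basic functionality" test bakes in ahead of time.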
To make a real test, we make test cases that might exaggerate
the differences.

One way is to have two sets of tests.  One is for the developers,
and shows all of this.  The other is more lax, and just tests
basic functionality.  For that one, just pick the numbers so
they are not sensitive to this.

But even this is misleading.  Anyone serious knows that this
happens.  For education, it is important that students learn
that this happens, and that it is normal.

Welcome to floating point computer math.
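The two-tier idea can be sketched concretely.  Everything here is
hypothetical (the profile names, tolerances, and sample values are mine,
chosen to illustrate the split): the developer tier demands bit-exact
agreement so platform differences show, while the basic tier is loose
enough that last-bit noise from sin() or CSE reordering passes.

```python
import math

# Hypothetical tolerance profiles for the two sets of tests:
PROFILES = {
    "developer": dict(rel_tol=0.0, abs_tol=0.0),   # bit-exact: exposes CPU/compiler differences
    "basic":     dict(rel_tol=1e-4, abs_tol=1e-9), # lax: insensitive to roundoff noise
}

def compare(expected, actual, profile):
    """Compare two result vectors under the chosen tolerance profile."""
    tol = PROFILES[profile]
    return all(math.isclose(e, a, **tol) for e, a in zip(expected, actual))

# Same (imaginary) simulation on two machines; sin(1.0) differs in the
# last bits, and a should-be-zero value carries different roundoff:
run1 = [0.8414709848078965, 6.2342e-16]
run2 = [0.8414709848078967, 6.2738e-16]

print(compare(run1, run2, "basic"))      # True:  the lax tier ignores the noise
print(compare(run1, run2, "developer"))  # False: the strict tier shows it
```

The same input disagreement is a pass or a fail depending only on which
audience the test set is aimed at, which is the point of keeping two sets.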