From: K. F. <kfr...@gm...> - 2011-05-01 15:28:53
Hello Jon!

On Sun, May 1, 2011 at 10:17 AM, JonY <jo...@us...> wrote:
> On 5/1/2011 22:03, James K Beard wrote:
>> Now, about that math library problem...
>> ...
>> AFAIK, all currently produced FPUs are IEEE compliant. Are there some out
>> there that are still bucking the inevitable?
>
> -- Putting mingw lists back --
>
> I heard older AMDs (late 90s-200Xs era) don't have 80-bit FPUs, but they
> do have 64-bit FPUs, so it wasn't strange that results do differ
> slightly between CPUs. Maybe that has changed for newer CPUs.
>
> Does the long double epsilon 1.08420217248550443401e-19L accuracy
> absolutely require an 80-bit FPU?

If you're willing to do your floating-point math in software, then no.
But if you want to get 19-decimal-digit precision from your FPU, it
pretty much needs to be 80 bits. (You can always trade off exponent bits
for mantissa bits, but it's much better to stick with the IEEE choice.)

The 80-bit floating-point format has a 64-bit mantissa (and, I believe,
no implicit bit): log_base_10(2^64) = 19.27, so 64 bits gets you a
little over 19 decimal digits of precision. So to get 19 decimal digits
you need 64 mantissa bits plus exponent plus sign, which gives you
something very close to the 80-bit format, and it would be hard to
justify making a non-standard choice.

> IMHO, we don't need to produce results more accurate than that, but its
> a bonus if it doesn't impact performance too much.

Historically, the approach (within a compiler / platform family) has
been to support whatever floating point you can in software, and to use
floating-point hardware when it is available. If you want to do heavy
floating-point computation, it's up to you to make sure you have the
necessary floating-point hardware, or to eat the (significant)
performance hit.

Nowadays I would think that floating-point hardware is pretty much a
given on desktops and servers. Maybe this isn't true for some embedded
systems (that gcc targets) -- I just don't know.

Best.

K. Frank
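
P.S.  A minimal sketch of how to check what long double actually gives
you on a particular compiler/CPU, using only the standard <float.h>
macros.  It assumes a C99-ish compiler and a printf that understands
the "%Lg" long double length modifier (on MinGW that typically means
mingw-w64 with its ANSI-conforming stdio enabled); the numbers printed
depend entirely on the target's long double format.

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Mantissa width and guaranteed decimal digits for long double
         * on this compiler/target. */
        printf("LDBL_MANT_DIG = %d bits of mantissa\n", LDBL_MANT_DIG);
        printf("LDBL_DIG      = %d decimal digits\n", LDBL_DIG);

        /* Machine epsilon for long double.  For an x87-style 80-bit
         * format this is 2^-63, roughly 1.0842e-19 (the value quoted
         * above); for a plain 64-bit long double it is 2^-52, roughly
         * 2.2204e-16. */
        printf("LDBL_EPSILON  = %Lg\n", LDBL_EPSILON);

        return 0;
    }

On an 80-bit x87 long double you should see 64 mantissa bits and 18
guaranteed decimal digits; on a 64-bit long double, 53 bits and 15
digits.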