From: K. F. <kfr...@gm...> - 2011-04-09 15:03:42
Hi Jon and Ruben!

On Sat, Apr 9, 2011 at 9:47 AM, JonY <jo...@us...> wrote:
> ...
> On 4/9/2011 21:33, Ruben Van Boxem wrote:
>> Hi,
>>
>> Sorry for jumping into this discussion, but I don't seem to understand what
>> the advantage is of a non-hardware supported real number representation. If
>> you need the two (or a bit more) decimal places required for currency and
>> percentages, why not just use a big integer and for display divide by 100?
>> No more worries about precision, up to an arbitrarily determined number of
>> decimal places. Are the numbers so huge that they can't be stored in a
>> 128-bit integer, or are there stricter requirements precision-wise? Thanks!
>>
>> Ruben
>
> Sure, that is fine if your range is limited. It's the same reason floating
> point exists, but with more specific applications.
>
> No, 128-bit integers are too short after factoring in exponents.
> ...

Obviously it depends on the exact use case you need to address, and there
are some specialized situations in which decimal floating-point is
legitimately called for, but I think Ruben is basically right.

The money argument that has been mentioned a few times in this thread is a
red herring. If you need to perform money calculations accurate to the
penny, just do your calculations in pennies, and use exact integer
arithmetic. (Or do them in dollars, and use exact two-decimal-place
fixed-point arithmetic, which is really integer arithmetic by another
name.)

If you really need floating-point because of the range provided by the
exponent, then your money calculations won't be exact anyway. For example,
let's say you're using seven-digit decimal floating-point:

   $10,000,000.00 (exact) + $1,000.73 (exact) = $10,001,000.73 (???)

With seven-digit decimal floating-point your result won't be exact;
instead, you'll get $10,001,000.00, with the 73 cents being lost to
round-off error.
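That round-off is easy to reproduce. Here is a minimal sketch using Python's decimal module with the context precision set to seven significant digits; the module isn't part of the discussion above, it just stands in for any seven-digit decimal floating-point implementation:

```python
from decimal import Decimal, getcontext

# Emulate seven-digit decimal floating-point arithmetic.
getcontext().prec = 7

total = Decimal("10000000.00") + Decimal("1000.73")

# Only seven significant digits survive the addition, so the 73 cents
# are rounded away: the result compares equal to $10,001,000.00.
print(total)  # 1.000100E+7
```

Note that the rounding happens silently: the context's Inexact flag is set, but by default no exception is raised.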
If you claim you need floating-point because you need the range provided by
the exponent to represent a sum as large as $10,000,000, then you are no
longer doing calculations exact down to the penny (with seven-digit
floating-point).

Also, note that performing the example calculation, $10,000,000.00 +
$1,000.73, will not trigger any of the traps or exceptions that are
sometimes provided by floating-point hardware or software. This is not
overflow or underflow or some other exceptional floating-point condition
such as denormalization -- this is garden-variety round-off error that
"always" happens when performing floating-point calculations.

So, if you need exact money calculations, use fixed-point arithmetic
(essentially integer arithmetic), live within the well-defined finite
range, and throw an exception (or somehow signal the error condition) if
you overflow the finite range. (Or use fixed-point arithmetic based on
bignums, as Ruben suggested, and have essentially unlimited range, at the
cost of slower arithmetic.)

What, then, would be the advantage of using decimal floating-point? I
don't really know the history or what people were thinking when they built
those early decimal floating-point systems, but there is a (minor)
advantage in having the numbers people work with on paper be represented
exactly. I have 1.2345 * 10^10 and 7.6543 * 10^-12 written down on a piece
of paper and type them into my decimal computer. They are represented
exactly. Of course the sum and product of these numbers is not represented
exactly (with, say, seven-digit floating-point), so any advantage of having
used decimal floating-point is minor.

Decimal floating-point rarely buys you anything you really care about,
which is probably why almost all modern computers support binary
floating-point, but not decimal.

This does raise the question that Ruben alluded to: Why might someone
bother with implementing a decimal floating-point package for the gcc
environment?
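Ruben's alternative is straightforward to sketch. Keeping every amount as an integer count of pennies makes the same calculation exact; and since Python integers are arbitrary-precision bignums, this sketch also illustrates the unlimited-range variant (in C you would pick a fixed-width integer type and signal an error on overflow):

```python
# Represent money as an integer number of cents -- fixed-point
# arithmetic with two implied decimal places.
balance_cents = 1_000_000_000   # $10,000,000.00
deposit_cents = 100_073         # $1,000.73

# Exact integer addition: no rounding, ever.
total_cents = balance_cents + deposit_cents

# Divide by 100 only for display, as Ruben suggested.
print(f"${total_cents // 100:,}.{total_cents % 100:02d}")  # $10,001,000.73
```

The 73 cents that seven-digit floating-point silently discards are preserved exactly here.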
It's a fair amount of work and rather tricky to do it right, and if you
don't do it right, there's no point to it. Not to be critical, but why not
let decimal floating-point die a seemingly well-deserved death, as is
pretty much happening in the modern computing world? Implementing
floating-point arithmetic correctly is rather fussy work, and I, myself,
wouldn't have much taste for it.

Best.

K. Frank