From: Tim P. <ti...@ho...> - 2001-01-04 03:50:36
Adding to Finn Bock's reply, it's almost certainly the case that Scott
(Knight) is running on a platform with IEEE-754 floating-point hardware, and
that the results computed by CPython and Jython were identical. The
implementations do differ in how they choose to round the display, though;
CPython inherits the fine points of its float I/O from the platform C
library, while Java defines it more rigorously.
And you're going to get the same results in any language if you use floating
point: Java, Python, Jython, C, C++, Perl, Eiffel, LISP, Scheme, Fortran,
... it doesn't matter.
This link explains one of the fine points in deadly <wink> detail:
http://www.python.org/cgi-bin/moinmoin/RepresentationError

> test_var = 0.0
> limit = 0.1
> inc = 0.01
>
> while test_var <= limit:
>     print test_var
>     test_var = test_var + inc

Ever since someone first bumped into this in Fortran in the 1950's
(honest!), the "correct" way to write such a loop has been to use an integer
index (say "i"), and compute

    test_var = i * inc

inside the loop. Each floating-point operation introduces its own rounding
error, and in doing

    test_var = test_var + inc

these errors compound over time. In

    test_var = i * inc

you only suffer one rounding error in test_var.
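
For concreteness, here's the quoted loop redone that way; just a sketch,
using the same bounds Scott posted, with "i" as the integer index:

    i = 0
    limit = 0.1
    inc = 0.01

    while i * inc <= limit:
        test_var = i * inc    # computed fresh each time: one rounding error
        print test_var
        i = i + 1
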
However, none of that changes that (for example) 0.9 is not exactly
representable in binary floating-point arithmetic, no matter how you compute
it (see the link above).
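
A quick way to see that at the interpreter prompt (the exact digits depend
on your platform's doubles, but with IEEE-754 they won't all be zeroes after
the 9):

    print "%.20f" % 0.9    # shows the stored binary approximation
    print repr(0.9)        # repr gives 17 significant digits in CPython
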
Scott, if you're under the illusion <wink> that decimal fractions are a Good
Thing, you'll never be happy with hardware floating point arithmetic. You
should look into something like Java's BigDecimal class instead:
http://www2.hursley.ibm.com/decimalj/decdef.html
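
Under Jython you can use it directly. Here's a minimal sketch of the quoted
loop done with BigDecimal, so every value is an exact decimal (the methods
are the standard java.math.BigDecimal ones):

    from java.math import BigDecimal   # Jython only; no such module in CPython

    test_var = BigDecimal("0.0")
    limit = BigDecimal("0.1")
    inc = BigDecimal("0.01")

    while test_var.compareTo(limit) <= 0:
        print test_var                 # prints the exact decimal values
        test_var = test_var.add(inc)   # exact decimal addition, no rounding

Note the string constructor: BigDecimal("0.1") is exactly one tenth, while
BigDecimal(0.1) would faithfully preserve the binary approximation you're
trying to escape.
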
as-always-the-fast-drives-out-the-sane-ly y'rs - tim