Thanks for that. I hadn't played with decimal before. What we're seeing here simply reflects the finite precision of the original value for phi, and the precision to which the decimal module computes. There are about 8 arithmetic operations in id2, so the deterioration is maybe a bit worse than I expected.

If I get phi as:
    import decimal
    decimal.getcontext().prec = 800
    phi = (decimal.Decimal(5).sqrt() + 1) / 2
and take the length of phi as len(str(phi)) - 1, I see:

Phi is 800 digits long, 1.618...24292
The 1st identity is 3.0 and 797 more "0"s then 1
The 2nd identity is 2.9 and 796 more "9"s then 8
Equal? False
The difference is= -2.1E-798
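For reference, here is a minimal sketch of this kind of check. The identity used, phi**2 + 1/phi**2 == 3, is my own illustration (the original program isn't shown); any algebraic identity of the golden ratio would serve:

```python
import decimal

# Illustrative only: phi**2 + 1/phi**2 == 3 holds exactly for the golden
# ratio, so any departure from 3 measures accumulated rounding error in
# the decimal arithmetic.
decimal.getcontext().prec = 800
phi = (decimal.Decimal(5).sqrt() + 1) / 2

id1 = phi ** 2 + 1 / phi ** 2   # computed the long way, three roundings
id2 = decimal.Decimal(3)        # the exact algebraic value
print(len(str(phi)) - 1)        # 800 digits (the '.' is not counted)
print(id1 == id2)               # almost certainly False
print(id1 - id2)                # a tiny residue near the 800th digit
```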

Now in the hardware-based implementation, every float value has an exact representation in decimal; it's just that the exact representation may have ~750 significant digits. So how do we round correctly, and in a way that is short-repr friendly? CPython has had some fun with these issues. I'm relying on the test material including cases that prove awkward for this kind of conversion. It does seem to.
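To make that concrete (a sketch in CPython's decimal, not the Jython code itself): converting a float to Decimal is exact, and the pathological cases really do run to ~750 digits:

```python
from decimal import Decimal

# Every IEEE 754 double is p / 2**q for integers p, q, so the conversion
# Decimal(x) is exact -- no rounding happens at this step.
x = 5e-324                         # the smallest positive subnormal double
d = Decimal(x)                     # its exact decimal expansion
print(len(d.as_tuple().digits))    # 751 significant digits
print(repr(x))                     # yet the shortest repr is just '5e-324'
```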

Jeff Allen
On 18/04/2014 23:18, Vernon D. Cole wrote:
When you are ready to try some truly pathological numbers, I stumbled across a program that uses them.  They are decimal calculations that display rounding error 600+ digits away.  Find the code at

Phi is 624 digits long, 1.618...24292
The 1st identity is 2.9 and 621 more "9"s then 8
The 2nd identity is 3.0 and 620 more "0"s then 2
Equal? False
The difference is= 2.2E-622

Date: Wed, 09 Apr 2014 08:02:32 +0100
From: Jeff Allen <>
Subject: Re: [Jython-dev] Test failures for float
To: Jython Developers <>
Message-ID: <>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

I wrote a pretty good replacement for core.stringlib.Formatter, based on
java.text.DecimalFormat. It agrees with CPython for all "reasonable"
precisions, that is to say up to 16 significant figures, and no one has
had to take the log of anything. It can provide float.__str__ and
float.__repr__ more cleanly than at present, too.

But it fails tests like this: AssertionError:
'10000000000000000000000000000000000000000000000000' !=

I haven't come across any cases where the problem is that the actual
float value is different (considered at the bit level using
float.hex()); it's all in the output formatting. CPython appears to give
us the exact value -- there is always a finite answer, though it may
have a thousand digits. DecimalFormat, which is at bottom just
Double.toString(), gives us only enough digits to distinguish the value
from the next float bit pattern.
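The contrast is easy to see from CPython itself (an illustration, not the Jython code): fixed-precision formatting digs into the exact binary value, while repr stops at the shortest string that round-trips:

```python
# repr gives the fewest digits that round-trip to the same double, while
# explicit formatting is correctly rounded against the exact binary
# value, however many digits that takes.
x = 0.1
print(repr(x))              # '0.1' -- shortest form that round-trips
print(format(x, '.30f'))    # the exact-value digits begin to appear
assert float(repr(x)) == x  # the short form still recovers the same bits
```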

Experiments with BigDecimal suggest I can get exact conformance by that
route and it seems worth the attempt, so I'm going that way.