On 1/13/08, Nikodemus Siivola <nikodemus@...> wrote:
> ...which makes life a lot easier.
Up to a point, at least. :) Here's the underlying problem:
On x86 we normally perform single-float + integer arithmetic at
double precision, and then coerce the result back to single-float.
(The FILD instruction always gives us a double-float, and unless we do
MOVE-FROM-SINGLE it stays one. Or so it seems to me, and that would
also explain the behaviour observed below.)
During IR1 we derive the types for both
(+ <single> <integer>) ; uses double-precision
(+ <single> (FLOAT <integer> <single>)) ; uses single-precision
and get a mismatch for a number of unlucky arguments. The use of
double precision in the first case appears to be an (un)happy accident:
interval arithmetic gives us the double-precision result simply because
that's what the backend happens to do.
(+ 8172.0 (coerce -95195347 'single-float)) ; => -9.518717e7
(+ 8172.0 -95195347) ; => -9.5187176e7
(coerce (+ 8172.0 (coerce -95195347 'double-float)) 'single-float)
; => -9.5187176e7
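For anyone without SBCL at hand, the two evaluation paths can be
reproduced outside Lisp. Here's a Python sketch (my own illustration,
not SBCL code) that uses struct to round doubles to IEEE single
precision, mimicking what the x87 does in each case:

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest IEEE single-float."""
    return struct.unpack('f', struct.pack('f', x))[0]

a = 8172.0
n = -95195347

# Single-precision path: coerce the integer to single-float first,
# then add and round the sum back to single.
single_path = to_f32(to_f32(n) + a)   # -95187168.0

# Double-precision path: add exactly in double precision,
# round to single only once, at the end.
double_path = to_f32(float(n) + a)    # -95187176.0

print(single_path, double_path)
```

The integer -95195347 is not representable as a single-float (the ulp
at that magnitude is 8), so the early rounding in the first path loses
the low bits before the addition, while the second path rounds only
the final sum -- hence the two different answers above.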
I don't have an immediate idea of how to fix this, except by making sure
%SINGLE-FLOAT always creates an actual single-float, even if the result is
consumed immediately. ...but I'm not sure whether that's the right fix, or
whether the right thing would be to deal with this in IR1 (somehow).