## [Sbcl-devel] Complex floating-point arithmetic

**From:** Paul Khuong - 2009-06-20 11:28:50

Good evening. Before committing the SSE complex patch, I'd like your opinion on other changes I made in that area:

1. We currently compute complex-complex division with:

   ```lisp
   (let* ((rx (realpart x))
          (ix (imagpart x))
          (ry (realpart y))
          (iy (imagpart y)))
     (if (> (abs ry) (abs iy))
         (let* ((r (/ iy ry))
                (dn (* ry (+ 1 (* r r)))))
           (complex (/ (+ rx (* ix r)) dn)
                    (/ (- ix (* rx r)) dn)))
         (let* ((r (/ ry iy))
                (dn (* iy (+ 1 (* r r)))))
           (complex (/ (+ (* rx r) ix) dn)
                    (/ (- (* ix r) rx) dn)))))
   ```

   Knuth suggests computing `dn` as `(+ ry (* r iy))` or `(+ iy (* r ry))` in his tome on seminumerical algorithms. Indeed, squaring `r` only to cancel some of it away is fishy. Unless someone can think of a justification for the current implementation, I believe that change won't be a problem.

2. In any case, the implementation above is far from complete. Compared to, e.g., the behaviour mandated by the C99 specification, we fail to handle pretty much all the edge cases (except for gratuitous precision loss compared to a more naive implementation). On the other hand, in a performance-oriented application, the user probably wants something closer to the above than what GCC has to emit to conform to C99. I don't think C99 is noted for the sanity of its complexes either...

3. On a somewhat related note, the spec says that we should compute complex-float/real-float operations as though the real was first upgraded to a complex. That's not quite what happens, since we have some transforms that optimise such cases. Thus,

   ```lisp
   (* 1.0 #c(1.0 #.sb-ext:single-float-positive-infinity))
   => #c(1.0 #.SB-EXT:SINGLE-FLOAT-POSITIVE-INFINITY)
   ```

   but

   ```lisp
   (* #c(1.0 0.0) #c(1.0 #.sb-ext:single-float-positive-infinity))
   => #C(# #.SB-EXT:SINGLE-FLOAT-POSITIVE-INFINITY) ; or an FP condition.
   ```

Basically, respecting the spec to the letter here would probably imply generating slower code that sometimes returns NaNs or less precise results, or adding branches to check for zero imaginary parts (and probably other edge cases, as noted above) in complex/complex operations; but that's lossy, since 0.0 does not necessarily mean 0, etc. Is it time for a fast-math-style declaration? Other thoughts?

Paul Khuong
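The change proposed in item 1 can be sketched as a standalone function. The name `complex-div` is mine and the test values below are arbitrary; the structure is the quoted code, with `dn` computed as Knuth suggests:

```lisp
;; Smith-style complex division, with the denominator computed as
;; Knuth suggests: dn = ry + r*iy (or iy + r*ry) instead of
;; ry*(1 + r^2), avoiding the square-then-cancel of r.
;; COMPLEX-DIV is a hypothetical name; SBCL's real version lives in
;; its complex/complex division code.
(defun complex-div (x y)
  (let ((rx (realpart x)) (ix (imagpart x))
        (ry (realpart y)) (iy (imagpart y)))
    (if (> (abs ry) (abs iy))
        (let* ((r (/ iy ry))
               (dn (+ ry (* r iy))))     ; was (* ry (+ 1 (* r r)))
          (complex (/ (+ rx (* ix r)) dn)
                   (/ (- ix (* rx r)) dn)))
        (let* ((r (/ ry iy))
               (dn (+ iy (* r ry))))     ; was (* iy (+ 1 (* r r)))
          (complex (/ (+ (* rx r) ix) dn)
                   (/ (- (* ix r) rx) dn))))))

;; Example: (1+2i)/(3+4i) = (11+2i)/25 = 0.44 + 0.08i
(complex-div #c(1d0 2d0) #c(3d0 4d0))
```

Algebraically `(+ ry (* r iy))` equals `(* ry (+ 1 (* r r)))` when `r` is `(/ iy ry)`, so only the rounding behaviour differs; the rewritten form also saves a multiplication.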
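The discrepancy in item 3 falls out of the textbook product formula. A minimal sketch of where the NaN comes from, written out componentwise (SBCL-specific: it assumes `sb-int:with-float-traps-masked` and the `sb-ext` infinity constant, so the NaN is produced rather than an FP condition being signalled):

```lisp
;; Treating x = 1.0 as #c(1.0 0.0) and multiplying by
;; y = #c(1.0 +inf) with the full formula
;;   (a+bi)(c+di) = (ac - bd) + (ad + bc)i
;; evaluates (* 0.0 +inf), which is NaN, in the real part.
;; The real*complex transform instead just scales both components.
(sb-int:with-float-traps-masked (:invalid :overflow)
  (let ((a 1.0) (b 0.0)                  ; x as #c(1.0 0.0)
        (c 1.0) (d sb-ext:single-float-positive-infinity))
    (values (- (* a c) (* b d))          ; 1.0 - (0.0 * +inf) = NaN
            (+ (* a d) (* b c)))))       ; +inf
```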

**Re: [Sbcl-devel] Complex floating-point arithmetic**
**From:** Raymond Toy - 2009-06-25 17:38:17

> Knuth suggests computing `dn` as `(+ ry (* r iy))` or `(+ iy (* r ry))` in his tome on seminumerical algorithms. Indeed, squaring `r` only to cancel some of it away is fishy. Unless someone can think of a justification for the current implementation, I believe that change won't be a problem.

I agree. I don't know why I did that.

> 2. In any case, the implementation above is far from complete. Compared to, e.g., the behaviour mandated by the C99 specification, we fail to handle pretty much all the edge cases (except for gratuitous precision loss compared to a more naive implementation). On the other hand, in a performance-oriented application, the user probably wants something closer to the above than what GCC has to emit to conform to C99. I don't think C99 is noted for the sanity of its complexes either...

Fortunately, we're not bound by C99 rules. :-) I think Kahan was involved in the C99 spec, so we probably want to think about these cases.

> 3. On a somewhat related note, the spec says that we should compute complex-float/real-float operations as though the real was first upgraded to a complex. That's not quite what happens, since we have some transforms that optimise such cases. [...] Basically, respecting the spec to the letter here would probably imply generating slower code that sometimes returns NaNs or less precise results, or adding branches to check for zero imaginary parts (and

Difficult to say, I think. I know Kahan didn't want complex+real to be implemented as complex+complex, because that could change the sign of the imaginary part of the first complex. But the spec clearly says contagion should happen. Not following the spec in this case would have made the implementation easier for the special functions of a complex arg. However, doesn't SBCL already fail to follow the part of the spec that says `(imagpart x)` = `(* 0 x)`, i.e. that the imaginary part of a real is a zero with the same sign as x? I know cmucl doesn't follow the spec in this case. I guess there's room for many interpretations. Maybe the simplest approach is to follow Kahan's lead and not do contagion as the CLHS says we should?

Ray
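Kahan's point about the sign of the imaginary part can be seen with a signed zero. The snippet below is plain IEEE-754 arithmetic, not SBCL internals:

```lisp
;; Why complex + real should not be implemented as complex + complex:
;; padding the real operand with a 0.0 imaginary part adds 0.0 to the
;; complex operand's imaginary part, and under IEEE-754 default
;; rounding -0.0 + 0.0 is +0.0, so a negative-zero imaginary part
;; loses its sign.
(+ -0.0 0.0)                              ; => 0.0, not -0.0
(imagpart (+ #c(1.0 -0.0) #c(1.0 0.0)))  ; => 0.0: the sign is gone
```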