On Sat, 26 Aug 2000 09:00:34 Klaus Hartmann wrote:
>Never mind... The VC++ 6 Debugger just displayed a completely wrong floating-point value. A simple printf() displayed the correct one. That helped me to make the code work... finally. Just had to subtract/add a small epsilon from/to the AABB's min/max points.
That's quite odd... I was having a similar precision problem with my terrain system a few days ago. I had just converted the whole thing over to double precision to allow me to model planets with micrometer resolution, and yet I was still getting single-precision accuracy. I stepped through it in the debugger and saw the extra bits just being chopped off.
It turns out that the x87 internal precision on my CPU had somehow been set to 24-bit (single) precision, and a call to _control87() to reset the precision to 64 bits fixed the problem.
-Jake Cannell
--== Sent via Deja.com http://www.deja.com/ ==--