From: Ethan A M. <EAM...@gm...> - 2017-11-15 02:45:34
On Wednesday, 15 November 2017 01:44:04 you wrote:
> > > I'd say integer division is gradually becoming an anachronism,
> > > and the new trend seems to be not just fixed floating-point but
> > > full rational-number support or arbitrary- (even infinite-)
> > > precision calculations, to avoid frustrating behaviors like
> > > ceil(900e-7/12e-7) => 76.
> >
> > There you intrigue me. I have a patch series queued that adds 64-bit
> > integer arithmetic with full overflow handling everywhere. Would it
> > indeed be feasible to similarly upgrade the floating point evaluation
> > code to provide more graceful handling of very large numbers?
> > The int64 conversion was surprisingly easy, so I can believe that
> > modernizing the floating point would also be straightforward.
> > But I wouldn't know where to start. Do you have suggestions?
>
> That particular example will "misbehave" even with 64-bit doubles
> (or at least it did in my test, which is why I used that
> particular example).

It does not misbehave here in gnuplot (linux x86-64 platform).
I tried a few larger exponents at random, up to ceil(900e-101/12e-101),
without tripping an error. Is there some particular pattern that is
prone to error?

> But in general, the only reason I see not
> to convert from float to double is that some platforms wouldn't
> have built-in double support.

Gnuplot has used (double) everywhere since 2011, when we dropped support
for 16-bit Windows. The only exception is that when reading in binary
matrix files, the matrix is stored as (float). I don't know why that
was, but it's on my list of things to fix in 5.3.

> I don't work on esoteric platforms
> anymore, and obviously Win/Linux/Mac all handle double fine.

Exactly.

	Ethan