From: David K. <da...@gn...> - 2017-11-15 01:48:13
Tait <gnu...@t4...> writes:

>> > I'd say integer division is gradually becoming an anachronism,
>> > and the new trend seems to be not just fixed floating-point but
>> > full rational-number support or arbitrary- (even infinite-)
>> > precision calculations, to avoid frustrating behaviors like
>> > ceil(900e-7/12e-7) => 76.
>>
>> There you intrigue me.  I have a patch series queued that adds 64-bit
>> integer arithmetic with full overflow handling everywhere.  Would it
>> indeed be feasible to similarly upgrade the floating-point evaluation
>> code to provide more graceful handling of very large numbers?
>> The int64 conversion was surprisingly easy, so I can believe that
>> modernizing the floating point would also be straightforward.
>> But I wouldn't know where to start.  Do you have suggestions?
>
> That particular example will "misbehave" even with 64-bit doubles
> (or at least it did in my test, which is why I used that
> particular example).  But in general, the only reason I see not
> to convert from float to double is that some platforms wouldn't
> have built-in double support.

Since historically double had been the _only_ floating-point _expression_
format in K&R C (as opposed to the floating-point _storage_ format, which
could be just float), a platform without built-in double support would be
really peculiar once using it for C became a mainstream choice.

-- 
David Kastrup