
## Re: [Sbcl-help] Odd Division By Zero Error

From: Waldek Hebisch - 2012-07-09 20:39:30

```
Christophe Rhodes wrote:
> The issue at hand is whether the "extra" density of floats around 0
> should be used by the RNG. At first, it seems obvious that it should,
> because well, why not?

One paradigm for floating-point operations is that the operation is
first performed exactly and the result is then rounded to a
representable float. Sometimes doing so may cost a lot of compute time
and a lot of coding effort (when we require "correct rounding"), so IMHO
there are reasons to use approximations with higher error. But for an
RNG in single precision, correct rounding seems obtainable with
reasonable effort. Given that values close to zero have extremely small
probability, one can use approximations (say, rounding a 64-bit random
integer scaled to the unit interval) that are close enough but cheaper
to compute. So using the extra precision around zero fits well with the
philosophy of floating point.

> On the other hand, imagine a simple use of a RNG
> to generate samples from a distribution using the CDF and a lookup
> table: generate a float between 0 and 1 and transform according to the
> inverse of the CDF. Ignoring for the moment the actual generation of
> zeros, if the RNG exploits the wide range of floats around 0, the lower
> tail of the distribution will be much, much more explored than the upper
> tail, because the floating point resolution around 0 is far greater than
> it is around 1.

I am not sure what you mean by "more explored". If you mean that floats
close to zero have higher absolute precision, then that is what is
usually expected: taking the log of a uniform distribution, one wants to
get an exponential distribution. When the RNG uses uniform absolute
precision in the interval (0, 1), the result is a distorted tail of the
exponential distribution. When the RNG makes good use of the extra
absolute precision available close to 0, the tail of the transformed
distribution is much closer to the true exponential distribution.

Of course, the user may do something stupid, like using log(1 - x) with
x uniform in (0, 1) to generate the exponential distribution. Then the
extra effort spent close to 0 is wasted.

Given the above, I think an RNG which makes use of the extra precision
around 0 is better than one which does not. OTOH there are many uses of
random numbers: some are content with a low-quality random source but
are highly sensitive to speed, while others absolutely need high
quality. It is likely that a user will want a different speed/quality
tradeoff than the one provided by the default implementation. So I am
ready to use my own or third-party code if needed.

--
Waldek Hebisch
hebisch@...
```
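The two generator designs debated in this thread can be sketched in Python (this is not SBCL's actual RNG; the function names and the 1074-binade cutoff are my own choices for illustration). The first scales a random 53-bit integer onto a fixed grid; the second picks the binade `[2**-k, 2**(1-k))` with probability `2**-k` via coin flips and then fills the mantissa, so it can emit the denser floats near 0:

```python
import random

def uniform53():
    """Fixed-grid approach: a random 53-bit integer scaled to [0, 1).
    Every output is a multiple of 2**-53, so the absolute resolution is
    the same across the whole interval; the denser floats near 0 are
    never produced."""
    return random.getrandbits(53) * 2.0 ** -53

def uniform_dense():
    """Sketch of a generator that uses the extra density near 0: choose
    the binade [2**-k, 2**(1-k)) with probability 2**-k (a geometric
    draw via coin flips), then fill the 52-bit mantissa uniformly."""
    k = 1
    while random.getrandbits(1) == 0:
        k += 1
        if k > 1074:          # below the subnormal range; round to 0
            return 0.0
    return 2.0 ** -k * (1.0 + random.getrandbits(52) * 2.0 ** -52)
```

Both return values in [0, 1), but only the second can produce floats arbitrarily close to 0, which is exactly the extra absolute precision under discussion; the price is a variable number of random bits per call.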
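The log(x) versus log(1 - x) point can be made concrete with a small Python sketch (function names are my own). With -log(u), the deep exponential tail comes from tiny u, where doubles are dense; with -log(1 - u), it must come from u near 1, where consecutive doubles are 2**-53 apart, so the tail is effectively cut off near 53·ln(2) ≈ 36.7 no matter how good the RNG is:

```python
import math

def exp_from_small_u(u):
    # -log(u): tiny u (where floats are dense) maps to the far tail,
    # so extra RNG precision near 0 buys genuine tail resolution.
    return -math.log(u)

def exp_from_large_u(u):
    # -log(1 - u): the far tail needs u near 1, where the float grid
    # is coarse, so the tail is truncated; log1p avoids extra rounding.
    return -math.log1p(-u)

# Deepest tail value reachable by each form, at the extreme inputs:
deep_good = exp_from_small_u(5e-324)       # smallest subnormal double
deep_bad = exp_from_large_u(1 - 2.0 ** -53)  # largest double below 1
```

Here `deep_good` is about 744.4 while `deep_bad` is about 36.7, which is the "distorted tail" the message describes: the log(1 - x) form throws away whatever extra precision the RNG provides near 0.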