From: K S. <kis...@ya...> - 2011-03-07 17:27:03
Hi Kai,

--- On Mon, 7/3/11, Kai Tietz <kti...@go...> wrote:
>
> I meant y without decimal place and x without decimal place, which
> have for x ^ y always integer result. For x with decimal place and y
> without decimal place we can use here powi instead, which should do
> more exact rounding.
>

By decimal place, do you mean an integer type, or do you mean an
integral value (e.g. 3.0)? In our actual code, both arguments passed to
pow() are doubles, i.e. we are effectively calling pow(2.0, 3.0).

Your comments about using powi() for more precision raise a question
about the precision of the general pow() for double values. The result
for the general pow(2.0, 3.0) suggests it may not achieve the precision
we are assuming.

As I said, we have a "bounded real" type that is designed to deal with
the finite precision of any floating-point type. The idea is that we
have an upper and a lower bound that are supposed to enclose the exact
(infinite-precision) value of a real number. For the Windows
implementation, we perform the rounding using MS's _controlfp:
_controlfp(_RC_UP, _MCW_RC) to round up, for example. The assumption is
that the double value returned by a function is the closest double to
the exact answer, so that the rounded-up and rounded-down double values
will enclose the exact value.

For the case of pow(2.0, 3.0), the exact value of 8.0 can be
represented (and appears to be the value returned on other platforms).
The returned result of 7.9999999999999982 suggests that the general
pow() is not returning answers with the precision we are assuming,
because the rounded bounds are:

  lower: 7.9999999999999973
  upper: 7.9999999999999991

which do not enclose 8.0.

Do you know if the routines in the 1.0 runtime achieve this precision
for the general pow() function?
Thanks and cheers,

Kish

> >>
> >> I found this precision problem because in our system (ECLiPSe
> >> constraint logic programming language), we have a number type
> >> "bounded reals", which has an upper and lower bound, and the
> >> bounds are supposed to enclose the exact value of the real number
> >> (which might not be representable with finite precision). We
> >> obtain this from a double value by rounding it up and down, and
> >> using these as the bounds. In the case of the result of
> >> power(2.0, 3), the upper bound is still less than 8.0 after
> >> rounding up. [and is the same value as the lower bound on other
> >> platforms, which is why I knew it was about two times off the
> >> expected precision]
> >>
> >> > Yes, we changed some math-routines to satisfy the ISO-C
> >> > standard here. The MS routines aren't suitable in all cases
> >> > here. There is for example still an outstanding issue about the
> >> > bessel-functions, which aren't satisfying ISO-C in all cases.
> >> > Those routines are implemented in the older runtime via the
> >> > gdtoa implementation, which sadly has proven pretty slow for
> >> > IEEE operations and additionally had shown some issues with the
> >> > ISO-C standard, too.
> >> >
> >>
> >> Thanks for the explanation. Is there anything else done by
> >> MinGW-w64 other than the floating-point functions?
> >>
> >> >
> >> > Well, first you can fall back to 1.0 branch-math. This
> >> > implementation is slower, but in those cases more accurate. But
> >> > it also doesn't satisfy the ISO-C standard well. The second way
> >> > to fix that (and IMHO the better one) is to spend some effort on
> >> > the current implementation to improve it.
> >> >
> >>
> >> Do I need to compile w64-MinGW in order to get this, or can I
> >> specify this with a flag (for the compiler or linker?)?
>
> There are no compile-time flags for this.
> You would need to fall back to the 1.0 runtime.
>
> >> Thanks and cheers,
> >>
> >> Kish
>
> Regards,
> Kai