|
From: Joerg B. <em...@jb...> - 2010-07-07 06:08:24
|
Hello,
I have a problem with valgrind since I converted
my program to use long double instead of double at
critical points. Here is an example program:
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv) {
    long double x;
    char *ende;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <number>\n", argv[0]);
        return 1;
    }
    x = strtold(argv[1], &ende);
    printf("%.20LE\n", x);
    return 0;
}
Using gcc, I get the result
./test 0.345
3.44999999999999999999E-01
but
valgrind ./test 0.345
==2403== Memcheck, a memory error detector
==2403== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al.
==2403== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info
==2403== Command: ./test 0.345
==2403==
3.44999999999999973355E-01
Due to that precision loss, I cannot use valgrind any further. Some
numeric integrals etc. depend heavily on numeric precision...
Any hints? Anyone else?
Joerg
|
|
From: John R. <jr...@bi...> - 2010-07-07 12:05:27
|
> I have a problem with valgrind since I did convert
> my program to use long double instead double on
> critical points. Here is an example code:
[snipped]
> Due to that precision loss, I cannot further use valgrind. Some numeric
> integals etc. heavily depend on numeric precision...

valgrind handling only single- and double-precision floating point is
well documented. If 53 bits of fractional precision are not enough, then
the question becomes, "Why are 64 bits enough? Why aren't 2*53 bits, or
53+64 bits, or 2*64 bits, or more, required?"

If you must have both such high precision and valgrind, then convert
to software double precision. Represent each quantity by the sum of
two 'double's, such that the difference in exponents is near 53.
[Have fun with the many corner cases: exponent underflow, ...]

--
|
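Representing each quantity as the sum of two doubles is classic "double-double" arithmetic, built on error-free transformations such as Knuth's two-sum. A minimal sketch of the idea (the type and function names are illustrative, not from any post in this thread; compile with SSE arithmetic, the x86-64 default, since x87 excess precision breaks error-free transforms):

```c
/* Double-double value: represents hi + lo, with |lo| well below ulp(hi). */
typedef struct { double hi, lo; } dd;

/* Knuth's error-free two-sum: a + b == s + err exactly,
   where s is the rounded double sum and err the rounding error. */
static dd two_sum(double a, double b) {
    double s   = a + b;
    double bb  = s - a;
    double err = (a - (s - bb)) + (b - bb);
    return (dd){ s, err };
}

/* Add a plain double to a double-double, renormalizing the result. */
static dd dd_add_d(dd x, double y) {
    dd s = two_sum(x.hi, y);
    s.lo += x.lo;
    return two_sum(s.hi, s.lo);
}
```

Because every operation here is an ordinary double add or subtract, memcheck executes it at exactly the precision the programmer wrote; the corner cases John mentions (exponent underflow, overflow in intermediates) still need careful handling.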
|
From: Duncan S. <bal...@fr...> - 2010-07-07 14:33:27
|
Hi John,

>> I have a problem with valgrind since I did convert
>> my program to use long double instead double on
>> critical points. Here is an example code:
[snipped]

I also have this problem with some programs: running under valgrind
eventually causes different program behaviour due to extra rounding.

> If 53 bits of fractional precision are not enough, then the question
> becomes, "Why are 64 bits enough? Why aren't 2*53 bits, or 53+64 bits,
> or 2*64 bits, or more, required?"

Perhaps because careful analysis has determined that 53 bits is not
enough, but some greater number of bits is?

> If you must have both such high precision and valgrind, then convert
> to software double precision. Represent each quantity by the sum of
> two 'double's, such that the difference in exponents is near 53.
> [Have fun with the many corner cases: exponent underflow, ...]

One of the great advantages of valgrind is that it doesn't require
modifying source code. Your rather dismissive answer of "rewrite your
software" is not very helpful. It's not always feasible to rewrite
software.

Ciao,

Duncan.
|
|
From: John R. <jr...@bi...> - 2010-07-07 15:19:08
|
On 07/07/2010 05:23 AM, Duncan Sands wrote:
>>> I have a problem with valgrind since I did convert
>>> my program to use long double instead double on
>>> critical points. Here is an example code:
[snipped]
>
> I also have this problem with some programs: running under valgrind
> eventually causes different program behaviour due to extra rounding.
>
>> If 53 bits of fractional precision are not enough, then the question
>> becomes, "Why are 64 bits enough? Why aren't 2*53 bits, or 53+64 bits,
>> or 2*64 bits, or more, required?"
>
> Perhaps because careful analysis has determined that 53 bits is not
> enough, but some greater number of bits is?

The original poster gave no indication that such careful analysis had
been done, only that 53 bits apparently was not enough for the existing
software. If no fixed upper bound on the necessary precision is known
in advance, then it is possible that nothing will satisfy the original
poster, and the payoff for attempting a solution might be low.

>> If you must have both such high precision and valgrind, then convert
>> to software double precision. Represent each quantity by the sum of
>> two 'double's, such that the difference in exponents is near 53.
>> [Have fun with the many corner cases: exponent underflow, ...]
>
> One of the great advantages of valgrind is that it doesn't require
> modifying source code. Your rather dismissive answer of "rewrite
> your software" is not very helpful. It's not always feasible to
> rewrite software.

I gave an explicit technical suggestion that was new to the discussion,
can be done, and shows promise towards solving the apparent problem of
providing more than 53 bits of precision while working with valgrind.
I have implemented such software in the past, and indeed controlling
exponent range is a problem. Nevertheless, it allowed me to achieve
about 96 bits of precision in most cases [my particular application
needed 78 bits], have moderate speed, and play nicely with the software
tools of its day. (It was many years before valgrind; the software is
not available for distribution.)

--
|
|
From: John R. <jr...@bi...> - 2010-07-07 15:52:08
|
> Due to that precision loss, I cannot further use valgrind. Some numeric
> integals etc. heavily depend on numeric precision...

[This response is general and somewhat theoretical, but might aid in
understanding some big-picture aspects of memcheck and FP.]

What is the expected benefit from a valgrind that handles FP that is
wider than 'double'? Memcheck handles uninit FP very simply: if any
input bit to an FP operation is uninit, then all output bits are uninit.
There is no bit-by-bit analysis for an FP op like there is for [most]
integer operations. [Integer divide and remainder are not handled as
precisely by memcheck as integer addition, subtraction, and boolean ops.
There are some cryptography apps where this matters.]

In some respects memcheck of FP code is a schema operation where the
width of FP is an independent variable. If you can force the code to
execute the same path, then the width of FP does not matter to the
memcheck results. It may be possible to evade the width issue by
melding a code coverage analysis with the output of memcheck on the
code for 'double'.

--
|
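The contrast between the all-or-nothing FP rule and bit-precise integer tracking can be illustrated with a toy shadow-bit model (a simplification for exposition only, not valgrind's actual code; in the shadow words below, a 1 bit means "undefined"):

```c
#include <stdint.h>

/* Shadow ("V bits") convention: 1 = undefined, 0 = defined. */

/* FP rule: if any input bit is undefined, every output bit is. */
static uint64_t shadow_fp_op(uint64_t va, uint64_t vb) {
    return (va | vb) ? ~0ULL : 0ULL;
}

/* Integer AND can be tracked per bit: a result bit is defined
   whenever either operand contributes a defined 0 in that position,
   or both operand bits there are defined. */
static uint64_t shadow_and(uint64_t a, uint64_t va,
                           uint64_t b, uint64_t vb) {
    uint64_t def_zero = (~a & ~va) | (~b & ~vb);  /* defined-0 positions */
    return (va | vb) & ~def_zero;
}
```

This is why masking a partially-initialised integer with a defined constant can yield a fully defined result, while a single uninit bit entering an FP op poisons the whole output.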
|
From: Duncan S. <bal...@fr...> - 2010-07-07 16:55:54
|
Hi John,

>> Due to that precision loss, I cannot further use valgrind. Some numeric
>> integals etc. heavily depend on numeric precision...
>
> [This response is general and somewhat theoretical, but might aid
> in understanding some big-picture aspects of memcheck and FP.]
>
> What is the expected benefit from a valgrind that handles FP that is
> wider than 'double'? Memcheck handles uninit FP very simply: if any input
> bit to an FP operation is uninit, then all output bits are uninit.
> There is no bit-by-bit analysis for an FP op like there is for [most]
> integer operations. [Integer divide and remainder are not handled as
> precisely by memcheck as integer addition, subtraction, and boolean ops.
> There are some cryptography apps where this matters.]

The problem I have is that, due to extra rounding when running under
valgrind, eventually the program behaves differently, different code
paths are taken, etc., which can make valgrind useless for trying to
debug some failures, because the failure doesn't occur under valgrind.
This has nothing to do with tracking uninitialized bits etc. in
floating point numbers. In fact, if valgrind ignored floating point
numbers and just executed them natively, that would be fine with me.

Ciao,

Duncan.
|
|
From: John R. <jr...@bi...> - 2010-07-07 19:20:38
|
> [My case] has nothing to do with tracking uninitialized bits etc in floating
> point numbers. In fact if valgrind ignored floating point numbers and
> just executed them natively that would be fine with me.

What I distill from this is a request for an "FP mode" which checks
loads from memory to FP registers for valid [allocated] addresses and
[un]init bit values, but immediately marks the bits in the register as
all init or all uninit. FP operations track data flow, but again the
result bits are marked all init or all uninit. Stores from FP registers
to memory are checked for valid [allocated] addresses, and copy the
marking of the bits in register to the memory bytes. FP modes (width,
rounding, etc.) and operations never are changed from user values.

--
|
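The proposed mode can be sketched as a toy state model (the types and names here are invented for illustration; a real implementation would live in VEX IR instrumentation, not C structs):

```c
#include <stdbool.h>

/* One FP register in the proposed mode: the value is always the
   native result (user's width and rounding mode, untouched), plus a
   single all-or-nothing definedness flag. */
typedef struct { long double val; bool defined; } fpreg;

/* Load: the real tool would check the address against valid, allocated
   memory; the per-byte init state collapses to one flag. */
static fpreg fp_load(const long double *p, bool all_bytes_init) {
    return (fpreg){ *p, all_bytes_init };
}

/* Any FP op: computed natively, so results are bit-identical to an
   uninstrumented run; definedness merely flows through. */
static fpreg fp_add(fpreg a, fpreg b) {
    return (fpreg){ a.val + b.val, a.defined && b.defined };
}
```

The key property, matching Duncan's request, is that the value channel is never re-rounded: only the shadow flag is synthesized by the tool.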
|
From: Duncan S. <bal...@fr...> - 2010-07-08 08:21:33
|
Hi John,

>> [My case] has nothing to do with tracking uninitialized bits etc in floating
>> point numbers. In fact if valgrind ignored floating point numbers and
>> just executed them natively that would be fine with me.
>
> What I distill from this is a request for a "FP mode" which checks
> loads from memory to FP registers for valid [allocated] addresses
> and [un]init bit values, but immediately marks the bits in the register
> as all init or all uninit. FP operations track data flow, but again
> the result bits are marked all init or all uninit. Stores from FP
> registers to memory are checked for valid [allocated] addresses,
> and copy the marking of the bits in register to the memory bytes.
> FP modes (width, rounding, etc.) and operations never are changed
> from user values.

This sounds like a great plan.

Best wishes,

Duncan.
|
|
From: Nicholas N. <n.n...@gm...> - 2010-07-09 02:27:18
|
On Wed, Jul 7, 2010 at 4:08 PM, Joerg Bergmann <em...@jb...> wrote:
>
> I have a problem with valgrind since I did convert
> my program to use long double instead double on
> critical points.

This is bug 197915: https://bugs.kde.org/show_bug.cgi?id=197915

80-bit FP could be implemented; it's largely a matter of grunt work to
add the new type and all the relevant operations to VEX. People
complain sporadically, but rarely enough that nobody has done this
work yet.

Nick
|