From: Evan L. <sa2...@cy...> - 2007-08-09 08:46:17
Nicholas Nethercote wrote:
> Valgrind works at the binary level, and knows very little about C. It
> uses an assumption that sending undefined bytes to write() is bad, and
> that is very nearly always the case. Perhaps in your case it's not bad
> (although I'm still not convinced -- you're writing junk bytes to a
> file) in which case you have to work around the assumption, by using
> memset like John suggested, or one of the client requests that change
> the status of memory.
>
>> for(int i = 0; i < Dbits; i += 8) {
>> mask = *cptr++; // <--- VALGRIND REPORTS ERROR HERE
>> mask <<= i;
>> *dst |= mask;
>> }
>
> Does it really complain on that line? As John said, Valgrind shouldn't
> complain on that line, and the example errors you gave previously all
> involved calls to write().
The original link I posted was an old message from 2005 - I only posted
it because it looked like the same problem; I don't actually call
'write' anywhere.
But, assuming that calling write() on the undefined top bytes does cause a
problem, surely this is a normal operation anyway? If you're writing
long doubles to a file, you'll generally want to write an array of them
and read that array back at some later time. In that case you would
calculate the size of the array as "num_elements * sizeof(long
double)" and write out the whole lot, or do something equivalent, which
means the undefined padding bytes get written along with everything else.
The only practical fix is to manually compress the array on the way out,
and expand it on the way back, using special knowledge of the real size
of a long double.
I have no idea if this is possible (and I suspect it isn't), but
wouldn't teaching Valgrind that a long double really occupies
sizeof(long double) bytes, padding included, be a better fix?
Evan