From: Nicholas N. <nj...@cs...> - 2007-08-08 22:16:52
On Wed, 8 Aug 2007, Evan Lavelle wrote:
> But, my point is, Valgrind is being too smart. It 'knows' that a long
> double is 80 bits, but I can only rely on what the compiler tells me,
> which is that a long double is 128 bits. I don't (semantically) use the
> uninitialised bytes, because I only ever operate on long doubles using
> C++ operations; I just occasionally need to store and retrieve them from
> an unusual place. For this, I have to use sizeof, because that's what
> sizeof is for. However, Valgrind gives me hundreds of errors.
>
> Ideally, I think that I need to tell Valgrind to use C's size definition
> here, rather than its built-in knowledge. I could potentially find a
> system header somewhere that could tell me that a long double is
> actually 80 bits, but this wouldn't help: C would still store long
> doubles in 16 bytes of memory, and I couldn't correctly operate on
> arrays of long doubles. In other words, I still need sizeof (actually, I
> never operate on arrays, but you get the idea).
Valgrind works at the binary level and knows very little about C. It assumes
that passing undefined bytes to write() is a bug, and that is very nearly
always the case. Perhaps in your case it isn't (although I'm still not
convinced -- you're writing junk bytes to a file), in which case you have to
work around the assumption: either use memset as John suggested, or use one
of the client requests that change the definedness status of memory.
> for(int i = 0; i < Dbits; i += 8) {
> mask = *cptr++; // <--- VALGRIND REPORTS ERROR HERE
> mask <<= i;
> *dst |= mask;
> }
Does it really complain on that line? As John said, Valgrind shouldn't
complain on that line, and the example errors you gave previously all
involved calls to write().
Nick