From: Bart V. A. <bva...@ac...> - 2011-04-01 10:14:40
On Fri, Apr 1, 2011 at 4:03 AM, Bob Harris <rsh...@gm...> wrote:
> On Mar 31, 2011, at 7:55 PM, Tom Hughes wrote:
>> It's entirely expected that it may fail under valgrind when it doesn't
>> normally, because the memory layout is different; plus, every byte of
>> memory you allocate has a little over a byte (nine bits) of overhead in
>> the form of the shadow memory valgrind is using to track it, and so on.
>>
>> The basic message is that you will run out of memory more quickly under
>> valgrind than you do normally.
>
> Sure, understandable. But to go from a program that is able to
> allocate nearly 16G without valgrind to one that fails at 2.8G with
> it?
>
> The test program has this line:
>
>     m26=malloc((size_t)12615754384);
>
> That's 12.6G in a single block, and it works fine without valgrind.
> With valgrind, memory allocation starts to fail much earlier, and the
> 12.6G attempt also fails.
>
> And in fact, if I comment out that huge 12.6G block, compile on a
> 32-bit machine, and run (without valgrind), I get allocation failures
> starting at around 2.4G. This I expect on a 32-bit machine, from past
> experience. All this, together with valgrind giving me a message that
> seems to be telling me "my, grandmother, but that's a big block you
> are trying to allocate", is what makes me suspicious that valgrind is
> doing something that is limiting me to the equivalent of a 32-bit
> realm.

Have you already tried adding --freelist-vol=0 to the memcheck
command-line options?

See also http://www.valgrind.org/docs/manual/mc-manual.html for more
information.

Bart.
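A minimal sketch of the suggested invocation, assuming the test program
discussed above is compiled to a binary named "alloctest" (a hypothetical
name; substitute your own):

```shell
# --freelist-vol=0 shrinks memcheck's queue of freed-but-not-yet-reusable
# blocks to nothing, so freed memory is handed back to the allocator
# immediately instead of being held to catch use-after-free errors.
valgrind --tool=memcheck --freelist-vol=0 ./alloctest
```

Note the trade-off: with the freed-block queue disabled, memcheck is less
likely to detect accesses to recently freed memory, but it holds on to
less memory itself.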