From: Alan M. <ala...@jp...> - 2013-05-30 21:54:02
On 5/30/2013 11:22 AM, Alan Mazer wrote:
> On 5/30/13 10:16 AM, Dan Kegel wrote:
>> On Thu, May 30, 2013 at 9:15 AM, Alan Mazer <ala...@jp...> wrote:
>>> This type of error is occurring in a variety of places, always
>>> "definitely lost" memory at a function call.
>>>
>>> Anyone have any idea what I'm missing? I've tried multiple compilers
>>> and a bazillion different options (on the compiler and on valgrind).
>>> I'm stumped.
>> Have you looked at what the compiler is generating, or single-stepped
>> through that call?
>
> I just looked at the output from -S, but nothing looks odd. It very
> quickly jumps into strcmp calls near the top of the called routine.
>
> I've been running this under OS X, but I just tried it on a Linux
> machine and got more information there:
>
> ==15252== 8,233 bytes in 1 blocks are definitely lost in loss record 4 of 4
> ==15252==    at 0x4A07152: operator new[](unsigned long) (vg_replace_malloc.c:363)
> ==15252==    by 0x40E1A0: read_conf(char const*, char const*, char const*, char const*, unsigned char, SL_List<char*>*, char**) (parsing.cpp:295)
> ==15252==    by 0x41799A: main (main.cpp:404)
>
> On Linux, valgrind gives me a line number within the routine (line 295)
> that points at a legitimate memory leak.
>
> So I guess I just need to run valgrind on my Linux machine and avoid
> the Mac?

It looks like the problem is that valgrind can't locate the "new" usages within the routines. Looking at this more, I can see that the reported memory leaks are all valid, and the tracebacks are perfect except for the locations within the "new"-using routines; that part of the traceback is omitted in every message. Compiling with the optimizer enabled heightens the effect, so I thought I might have accidentally enabled some optimizations, but it doesn't look like it. It's something weird about g++ on OS X...