|
From: William S. <wsc...@ak...> - 2007-11-14 15:37:17
|
I'm trying to track down some missing memory using valgrind. So far, I'm not finding it. The problem could be some kind of pilot error, so for now I'll simply describe what I'm trying to do, and how:

 o Linux environment.
 o Trying to track down something holding or losing memory in a server
   written in c++.
 o Program shows an enormous jump in memory use - both vss and rss - when
   "work in progress" hits 40K - 50K records. (Memory jumps 50 megs.)
 o When WIP has been processed, memory use never goes back down.
 o Dynamic memory use:
   - vanilla C++ - objects being new'd/deleted.
   - certain amount of malloc/free.
   - large user of standard library strings.
 o After WIP has been processed, if I feed another 40K - 50K input
   records, memory use does not jump again - making me doubt that this is
   in fact a leak.

My suspicion is that it is the standard library allocator holding onto blocks of memory for reuse. But the valgrind summary - printed when the service exits - shows no memory use that comes close to explaining the high vss/rss size. The largest block of memory outstanding is due to "Deque". But that memory is 800KB, vs the 50 meg mystery.

Here's the valgrind command line I'm using:

./valgrind/coregrind/valgrind --leak-check=full --show-reachable=yes --trace-children=yes --log-file-exactly=/root/valgrind1 /a/bin/mybatcher

Any ideas?
|
From: Tom H. <to...@co...> - 2007-11-14 15:46:58
|
In message <473...@ak...>
William Schauweker <wsc...@ak...> wrote:
> The problem could be some kind of pilot error, so for now I'll simply
> describe what I'm trying to do, and how:
>
> o Linux environment.
> o Trying to track down something holding or losing memory in a server
> written in c++.
> o Program shows an enormous jump in memory use - both vss and rss - when
> "work in progress" hits 40K - 50K records. (Memory jumps 50 megs.)
> o When WIP has been processed, memory use never goes back down.
> o Dynamic memory use:
> - vanilla C++ - objects being new'd/deleted.
> - certain amount of malloc/free.
> - large user of standard library strings.
>
> o After WIP has been processed, if I feed another 40K - 50K input
> records, memory use does not jump again - making me doubt that this is
> in fact a leak.
>
> My suspicion is that it is the standard library allocator holding onto
> blocks of memory for reuse.
It could be the standard C++ library hanging on to it, but that
should show up in valgrind. More likely it is glibc hanging on to
it as malloc/free don't normally return freed memory to the system.
> But the valgrind summary - printed when the service exits - shows no
> memory use that comes close to explaining the high vss/rss size. The
> largest block of memory outstanding is due to "Deque". But that memory
> is 800KB, vs the 50 meg mystery.
That would square with glibc hanging on to it - the memory is not
allocated to anything but is part of the free space in the malloc
heap, so it will not show up as leaked memory or as still-allocated
memory, because it isn't.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
|
|
From: Howard C. <hy...@sy...> - 2007-11-14 16:06:11
|
Tom Hughes wrote:
> In message <473...@ak...>
> William Schauweker <wsc...@ak...> wrote:
>> My suspicion is that it is the standard library allocator holding onto
>> blocks of memory for reuse.
>
> It could be the standard C++ library hanging on to it, but that
> should show up in valgrind. More likely it is glibc hanging on to
> it as malloc/free don't normally return freed memory to the system.

Actually, blocks larger than a certain size are usually returned to the system. The default threshold in glibc 2.5 is 128K; malloc requests of this size or larger are satisfied using mmap() and the mmap'd regions are unmapped when freed.

By the way, tuning this threshold can have a noticeable impact on the degree of fragmentation in the heap, and can help malloc's performance on some workloads.

http://sourceware.org/ml/libc-alpha/2006-03/msg00033.html

The tests I reported here http://highlandsun.com/hyc/malloc/ all used the default glibc settings. glibc may have fared better in these tests if I had used a larger threshold, but I haven't had time to re-test.

--
Howard Chu
  Chief Architect, Symas Corp.  http://www.symas.com
  Director, Highland Sun        http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP     http://www.openldap.org/project/
|
From: William S. <wsc...@ak...> - 2007-11-14 17:10:40
|
Thank you both for the info. This definitely helps me figure out where to go next.

Howard Chu wrote:
> Actually, blocks larger than a certain size are usually returned to the
> system. The default threshold in glibc 2.5 is 128K; malloc requests of this
> size or larger are satisfied using mmap() and the mmap'd regions are unmapped
> when freed.
>
> By the way, tuning this threshold can have a noticeable impact on the degree
> of fragmentation in the heap, and can help malloc's performance on some
> workloads.
>
> http://sourceware.org/ml/libc-alpha/2006-03/msg00033.html