|
From: Natalie X. <na...@nc...> - 2004-03-02 13:58:19
|
Hi,

My server program launches a thread for each request from the client side. If I end the program after processing one request, I get the following message:

==30825== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 23 from 2)
==30825== malloc/free: in use at exit: 13362 bytes in 50 blocks.
==30825== malloc/free: 842 allocs, 792 frees, 784962 bytes allocated.
==30825== For counts of detected errors, rerun with: -v
==30825== searching for pointers to 50 not-freed blocks.
==30825== checked 272461496 bytes.

If the program ends after 10 requests, the message is:

==30887== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 39 from 2)
==30887== malloc/free: in use at exit: 13362 bytes in 50 blocks.
==30887== malloc/free: 6562 allocs, 6512 frees, 6207642 bytes allocated.
==30887== For counts of detected errors, rerun with: -v
==30887== searching for pointers to 50 not-freed blocks.
==30887== checked 272461240 bytes.

For the line:
==30887== malloc/free: 6562 allocs, 6512 frees, 6207642 bytes allocated.
What's the difference between "6207642 bytes allocated" and "6562 allocs"? Do those bytes get freed in the end?

The server program sometimes crashes, but not at the same client request each time. I'm trying to detect why the memory taken by the program, as seen from the "top" command, increases as the program runs. Does all memory get freed when a thread ends? Why does the program end with "Segmentation fault", but I can't see any errors from Valgrind?

Thanks,
|
|
From: Tom H. <th...@cy...> - 2004-03-02 14:30:06
|
In message <200...@ma...>
Natalie Xie <na...@nc...> wrote:
> Hi,
>
> My server program launches a thread for each request from the client side. If I
> end the program after processing one request, I get the following message:
> ==30825== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 23 from 2)
> ==30825== malloc/free: in use at exit: 13362 bytes in 50 blocks.
> ==30825== malloc/free: 842 allocs, 792 frees, 784962 bytes allocated.
> ==30825== For counts of detected errors, rerun with: -v
> ==30825== searching for pointers to 50 not-freed blocks.
> ==30825== checked 272461496 bytes.
>
> If the program ends after 10 requests, the message is:
> ==30887== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 39 from 2)
> ==30887== malloc/free: in use at exit: 13362 bytes in 50 blocks.
> ==30887== malloc/free: 6562 allocs, 6512 frees, 6207642 bytes allocated.
> ==30887== For counts of detected errors, rerun with: -v
> ==30887== searching for pointers to 50 not-freed blocks.
> ==30887== checked 272461240 bytes.
>
> For the line:
> ==30887== malloc/free: 6562 allocs, 6512 frees, 6207642 bytes allocated.
> What's the difference between "6207642 bytes allocated" and "6562 allocs"? Do
> those bytes get freed in the end?
The "bytes allocated" value is the number of bytes allocated from the
heap during the program's run using malloc and friends. It isn't a
high-water mark, though; it's the total amount ever allocated, so if
you allocate 10k, free it, and then allocate another 10k, you will
see 20k listed under that heading.
The "allocs" value is the total number of allocations made using
malloc and friends, so dividing that into the "bytes allocated" value
would give you the average size of an allocation request.
> The server program crashes sometime, but not at the same client's
> request at each time. I'm trying to detect why the memory taken by
> the program, seeing from command "top", increases as the program
> runs.
That sounds like a memory leak, in which case --leak-check=yes ought
to find it. Another possibility is simply that you have a data
structure that is growing without bound as the program runs. That
won't be found by the leak checker because it isn't a leak - the
program still has a handle on the memory.
> Does all memory get freed when a thread ends?
You mean memory malloced by a thread? No, it doesn't.
> Why does the program end with "Segmentation fault", but I can't see
> any errors from Valgrind?
Despite popular opinion to the contrary, valgrind isn't completely
omniscient and can't find every single problem in your programs. In
particular if you overrun a buffer on the stack it is unlikely to spot
it. Sometimes you have to fall back to traditional debugging techniques.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
From: Jeremy F. <je...@go...> - 2004-03-02 22:00:56
|
On Tue, 2004-03-02 at 06:24, Tom Hughes wrote:
> > Why does the program end with "Segmentation fault", but I can't see
> > any errors from Valgrind?
>
> Despite popular opinion to the contrary, valgrind isn't completely
> omniscient and can't find every single problem in your programs. In
> particular if you overrun a buffer on the stack it is unlikely to spot
> it. Sometimes you have to fall back to traditional debugging techniques.

Though the 2.0 habit of silently dying without reporting anything if the
client managed to SEGV itself was a bit of a bug. Fixed in 2.1.

J
|
|
From: Nicholas N. <nj...@ca...> - 2004-03-02 23:15:35
|
On Tue, 2 Mar 2004, Jeremy Fitzhardinge wrote:
> > Despite popular opinion to the contrary, valgrind isn't completely
> > omniscient and can't find every single problem in your programs. In
> > particular if you overrun a buffer on the stack it is unlikely to spot
> > it. Sometimes you have to fall back to traditional debugging techniques.
>
> Though the 2.0 habit of silently dying without reporting anything if the
> client managed to SEGV itself was a bit of a bug. Fixed in 2.1.

What particular case do you mean?

N
|
|
From: Jeremy F. <je...@go...> - 2004-03-03 01:17:02
|
On Tue, 2004-03-02 at 15:10, Nicholas Nethercote wrote:
> On Tue, 2 Mar 2004, Jeremy Fitzhardinge wrote:
> > > Despite popular opinion to the contrary, valgrind isn't completely
> > > omniscient and can't find every single problem in your programs. In
> > > particular if you overrun a buffer on the stack it is unlikely to spot
> > > it. Sometimes you have to fall back to traditional debugging techniques.
> >
> > Though the 2.0 habit of silently dying without reporting anything if the
> > client managed to SEGV itself was a bit of a bug. Fixed in 2.1.
>
> What particular case do you mean?

Well, it used to be that if the process did a bad memory access, it would
be killed by SIGSEGV, SIGBUS, SIGFPE or something, and everything would
just terminate. If you happened to be using memcheck at the time, then
you might get some warning, but otherwise you wouldn't - and you wouldn't
get any leak statistics or anything else. Now it catches all those
signals, and tries to do an orderly exit so you can see what happened.

J
|