From: jlh <jl...@gm...> - 2008-05-16 14:45:14

Hello list!

A program I'm writing apparently has memory leaks that I'd like to find.
However, the program has to process real-time data and it cannot process
them fast enough when running in valgrind. How would one deal with this
situation? Ideally, I'd even like to run it for a long time, like hours or
even days, so that as many different code paths as possible are taken.

I was once under the -- perhaps mistaken -- impression that I could
re-compile the entire program with valgrind support built in, which would
allow finding memory leaks without the slowdown. Is this possible, or am I
just badly mistaken about that? I couldn't find any information about it.
Would that make sense at all? I imagine that if I'm only interested in
finding memory leaks, it would be enough to instrument all calls to
malloc(), free(), etc. at compile (or link) time and then run at native
speed, i.e. without the CPU emulator.

This problem aside, I'm very happy with valgrind, as it has already helped
me fix some leaks.

Thanks for any help.

Thanks,
jlh

From: Bart V. A. <bar...@gm...> - 2008-05-16 14:51:06

On Fri, May 16, 2008 at 4:33 PM, jlh <jl...@gm...> wrote:
> A program I'm writing apparently has memory leaks that I'd like to
> find. However, the program has to process real time data and it
> cannot process them fast enough when running in valgrind.

What has already worked for me is to reduce the complexity of the input
signals for the real-time code, such that the real-time code takes less
time and completes in time even when run under Valgrind.

Bart.

From: Dallman, J. <joh...@si...> - 2008-05-16 15:10:30

> A program I'm writing apparently has memory leaks that I'd like to
> find. However, the program has to process real time data and it
> cannot process them fast enough when running in valgrind. How
> would one deal with this situation?

The best way is to capture some large set of the real-time data to disk,
and have the program read it from there. That way it can take as long as
it likes. Such a facility will be useful for testing in other ways too.

--
John Dallman
Parasolid Porting Engineer

From: Dirk S. <val...@ds...> - 2008-05-16 18:46:10

On Fri, 16 May 2008, jlh wrote:
> A program I'm writing apparently has memory leaks that I'd like to
> find. However, the program has to process real time data and it
> cannot process them fast enough when running in valgrind. How
> would one deal with this situation? Ideally, I'd even like to run
> it for a long time, like hours or even days, so that as many
> different code paths as possible are being taken.

One of our software projects has the same limitations. I added an
interface which allows storing the input data and timestamps, and also
the reverse (loading timestamps and data back from disk). This way I can
record data from e.g. one day and process it later with valgrind again
(taking a week :-). This way I can test everything except the real input
code (which ATM has a rarely occurring bug I have been trying to track
down for half a year now :-)

Though this method will probably only work when the software design
allows a more or less central data input, instead of lots of input
floating through lots of different interfaces into the software. On the
other hand, even if valgrind is slow, it sometimes is helpful enough.

Also there are link-library replacements for the malloc/free functions
which allow the tracking you want to a certain degree (don't remember
the name, "electric fence"?)

Ciao
--
http://www.dstoecker.eu/ (PGP key available)
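As a rough illustration of the record-and-replay interface Dirk describes, here is a minimal C sketch; the frame layout and the function names `record_frame()`/`replay_frame()` are hypothetical, invented for this example rather than taken from his project:

```c
#include <stdio.h>
#include <stdint.h>

/* One timestamped input frame; this layout is a made-up example. */
struct frame {
    uint64_t timestamp_us;    /* microseconds since start of recording */
    uint32_t len;             /* number of payload bytes that follow */
    unsigned char data[256];
};

/* Record one frame as it arrives from the live source. */
int record_frame(FILE *log, const struct frame *f)
{
    if (fwrite(&f->timestamp_us, sizeof f->timestamp_us, 1, log) != 1)
        return -1;
    if (fwrite(&f->len, sizeof f->len, 1, log) != 1)
        return -1;
    if (fwrite(f->data, 1, f->len, log) != f->len)
        return -1;
    return 0;
}

/* Load one frame back from disk.  When replaying under valgrind the
   timestamps can simply be ignored, so processing may take as long as
   it likes. */
int replay_frame(FILE *log, struct frame *f)
{
    if (fread(&f->timestamp_us, sizeof f->timestamp_us, 1, log) != 1)
        return -1;
    if (fread(&f->len, sizeof f->len, 1, log) != 1)
        return -1;
    if (f->len > sizeof f->data)
        return -1;            /* corrupt or truncated log */
    if (fread(f->data, 1, f->len, log) != f->len)
        return -1;
    return 0;
}
```

The processing loop then takes its frames from either the live source or replay_frame(); everything downstream of the input code can run under valgrind without real-time pressure, which matches Dirk's point that this only works with a reasonably central input path.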

> Also there are link library replacements for the malloc/free functions,
> which allow the tracking you want to a certain degree (don't remember the
> name, "electric fence"?)

Don't overlook the simple export MALLOC_CHECK_=2 (with a trailing
underscore), which causes the normal glibc to check for overrun and
double free. This does not catch use of uninitialized memory or other
cases that memcheck does, but MALLOC_CHECK_=2 sometimes is enough, and it
runs close to normal speed. See "info libc".

--
John Reiser, jreiser@BitWagon.com
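As a hedged illustration of John's suggestion (the function and the INJECT_OVERRUN switch below are invented for this example): build the snippet with the switch set to 1, run the binary with MALLOC_CHECK_=2 in the environment, and glibc aborts at the free() of the overrun block instead of silently corrupting the heap.

```c
#include <stdlib.h>
#include <string.h>

/* Set to 1 to make the buffer one byte too small for the '\0'. */
#define INJECT_OVERRUN 0

char *dup_string(const char *s)
{
    /* With INJECT_OVERRUN the terminating '\0' lands one byte past the
       end of the block; MALLOC_CHECK_=2 detects this when it is freed. */
    char *p = malloc(strlen(s) + 1 - INJECT_OVERRUN);
    if (p != NULL)
        strcpy(p, s);             /* always writes strlen(s)+1 bytes */
    return p;
}
```

MALLOC_CHECK_ only validates heap bookkeeping at malloc/free/realloc time, so, as John notes, it misses uninitialized reads and anything that never disturbs the allocator's metadata.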

From: jlh <jl...@gm...> - 2008-05-16 22:49:08

Dirk Stöcker <valgrind <at> dstoecker.de> writes:
> One of our software projects has same limitations. I added an interface,
> which allows to store the input data and timestamps and also the reverse
> (loading timestamps and data back from disk).

Interesting, but in some situations this might require quite some changes
to the application. The more I change the program, the more I'm at risk
of not triggering the memory leaks I'm hunting for. I will try and see if
I can do something in that direction. I suspect a certain part of the
program to leak; I can probably do this after isolating that part.

> Thought this method will probably only work, when the software design
> allows a more or less central data input instead of lots of input floating
> throught lots of different interfaces into the software.

My software as a whole interacts with another program, so it's quite
difficult to record the data and play it back later. But with that
isolation it should work.

> Also there are link library replacements for the malloc/free functions,
> which allow the tracking you want to a certain degree (don't remember the
> name, "electric fence"?)

Thanks, I had a look at electric fence and it's not about memory leaks,
but rather about buffer overruns. But there's a fork of efence called
DUMA which intercepts calls to memory functions to track down memory
leaks. However, its mode of operation is to segfault whenever something
bad is detected; it's meant to be run in a debugger, so you can
immediately have a look at what happened. I prefer the way valgrind
works: it collects all information but doesn't interrupt anything; then
at the end you get a long report.

Also, I somehow managed to overlook the most obvious candidate for
finding memory leaks: mtrace. It's actually part of glibc and was
already on my system. It still slows down the program quite a bit (DUMA
did as well), but much less than valgrind, since only the memory function
calls get slower while the rest runs at native speed. It seems to be a
great tool for C, but doesn't do well with C++. :(

Thanks for all replies! I think I will have to go with isolating parts of
the program that can run without the real-time pressure and run them with
valgrind.

Thanks,
jlh
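For completeness, the mtrace hookup jlh refers to is just a pair of glibc calls; the function below and its deliberate leak are example code, and the trace is only written when the MALLOC_TRACE environment variable names an output file:

```c
#include <mcheck.h>    /* glibc-specific: mtrace(), muntrace() */
#include <stdlib.h>

static char *leaked;   /* kept reachable: the leak here is deliberate */

int run_leaky_workload(void)
{
    mtrace();                 /* start logging allocations to the file
                                 named by $MALLOC_TRACE; a no-op if the
                                 variable is unset */
    char *tmp = malloc(64);   /* freed below: not reported */
    leaked = malloc(128);     /* never freed: reported by the mtrace script */
    free(tmp);
    muntrace();               /* stop logging */
    return leaked != NULL;
}
```

Run as `MALLOC_TRACE=trace.log ./prog`, then `mtrace ./prog trace.log` prints the file and line of each allocation that was never released; since only the malloc/free hooks cost anything, the rest of the program runs at native speed, as jlh observed.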

From: Nicholas N. <nj...@cs...> - 2008-05-17 00:29:43

On Fri, 16 May 2008, jlh wrote:
> Thanks, I had a look at electric fence and it's not about memory leaks,
> but rather about buffer overruns. But there's a fork of efence called
> DUMA which intercepts calls to memory functions to trace down memory
> leaks. But it's mode of operation is to segfault whenever something bad
> is detected; it's meant to be run in a debugger, so you can immediately
> have a look at what happened. I prefer the way valgrind works: It
> collects all information but doesn't interrupt anything; then at the end
> you get a long report.
>
> Also, I somehow managed to overlook the most obvious candidate for
> finding memory leaks: mtrace. It's actually part of glibc and already
> was on my system. It still slows down the program quite a bit (DUMA did
> as well), but much less than valgrind, since only the memory function
> calls get slower, but the rest runs at native speed. It seem to be a
> great tool for C, but doesn't do well with C++. :(

I think there are 1001 malloc-replacement libraries that provide some
level of checking. IIRC the mpatrol manual
(http://www.cbmamiga.demon.co.uk/mpatrol/) has an extensive list of them.

Nick

From: Dirk S. <val...@ds...> - 2008-05-17 20:23:10

On Fri, 16 May 2008, jlh wrote:
>> One of our software projects has same limitations. I added an interface,
>> which allows to store the input data and timestamps and also the reverse
>> (loading timestamps and data back from disk).
>
> Interesting, but in some situations this might require quite some changes
> to the application. The more I change the program, the more I'm at risk
> of not triggering the memory leaks I'm hunting for. I will try and see if
> I can do something in that direction.

Well, that interface is for general debugging (it allows reproducibility
of real-time data processing) and not for memory leak checking. Also it
was implemented at a very early software stage. :-)

> Also, I somehow managed to overlook the most obvious candidate for
> finding memory leaks: mtrace. It's actually part of glibc and already
> was on my system. It still slows down the program quite a bit (DUMA did
> as well), but much less than valgrind, since only the memory function
> calls get slower, but the rest runs at native speed. It seem to be a
> great tool for C, but doesn't do well with C++. :(

As another reply says: there are probably 1001 malloc replacements :-)

I often use mymalloc() in my C code, which is then either defined to be
malloc or points to a replacement function, selected with #ifdef
DEBUG_MEMORY. Usually the constructors and destructors of the classes,
and other allocating functions (and lots of others), also have an #ifdef
DEBUG'ed printf() inside, and a little Perl script to test whether the
constructions and destructions match is written very quickly. And I
always have an #ifdef DEBUG_OLD, which contains all the old debug
statements added for special debugging purposes (I never remove them,
unless they conflict with new code). Turning this on usually makes the
software 1-10 times slower, but sometimes it is very, very helpful (I
once got 1GB of debug data in 10 minutes :-).

I really cannot understand why nowadays everyone prefers debuggers and
forgets the good old printf.

Ciao
--
http://www.dstoecker.eu/ (PGP key available)
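Dirk's mymalloc() convention might look roughly like this; a sketch with invented names, not his actual code:

```c
#include <stdio.h>
#include <stdlib.h>

#ifdef DEBUG_MEMORY
/* Debug build: log every allocation and release with its pointer value,
   so a short script can pair ALLOC/FREE lines and print the leftovers. */
static void *mymalloc(size_t n)
{
    void *p = malloc(n);
    fprintf(stderr, "ALLOC %p %zu\n", p, n);
    return p;
}

static void myfree(void *p)
{
    fprintf(stderr, "FREE %p\n", p);
    free(p);
}
#else
/* Release build: the wrappers cost nothing. */
#define mymalloc malloc
#define myfree   free
#endif
```

A matching "Perl one-liner" in the spirit of the thread (illustrative only) would keep a hash keyed by pointer, insert on ALLOC, delete on FREE, and dump whatever remains at end of log; pointer reuse makes this approximate, which is part of why valgrind's report is so much more comfortable.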

From: jlh <jl...@gm...> - 2008-05-17 22:55:20

Dirk Stöcker <valgrind <at> dstoecker.de> writes:
> Usually also constructors and destructors of the classes, and other
> allocating functions (and lots of others) have an #ifdef DEBUG'ed printf()
> inside and a little Perl script to test if the constructions and
> destructions match is written very fast.

I do that a lot; I have a debug() function that is either defined as
printf() or as (void)0. But I don't use it for memory allocation logging.

> And I always have an #ifdef DEBUG_OLD, which contains all the old debug
> statements added for special debugging purposes (I never remove them,
> except they conflict with new code). Turning this on usually makes
> software 1-10 times slower, but sometimes is very very helpful (I once
> got 1GB debug data in 10 minutes :-).
> I really can not understand, why nowadays all prefer debuggers and forget
> the good old printf.

I don't see how filling the program with lots of printf()s and then
writing a Perl script that analyzes its output is different from using a
debugger. Doing that *is* using a debugger: the one you just wrote
yourself. And when there are good debuggers available already, why spend
time reinventing the wheel? In fact, mtrace works exactly like this: it
generates lots of output, and mtrace is a Perl script that analyzes it
and tells you at which lines memory has been allocated but never
released.

And valgrind is a really great tool. It points its finger at a line of
code and says "this malloc()ed memory has not been free()d anywhere".
Getting this kind of comfort is not trivial with self-written printf()s +
Perl.

jlh
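jlh's debug() switch is commonly written as a variadic macro; the exact definition below is a guess at what his version looks like:

```c
#include <stdio.h>

/* Compile with -DDEBUG to get the messages; without it the calls
   disappear entirely and their arguments are never evaluated. */
#ifdef DEBUG
#define debug(...) fprintf(stderr, __VA_ARGS__)
#else
#define debug(...) ((void)0)
#endif

int half(int x)
{
    debug("half(%d) called\n", x);   /* costs nothing in release builds */
    return x / 2;
}
```

The macro form (rather than a real function) is what lets the release build drop the calls completely, so leaving old debug statements in place, as Dirk does, carries no runtime cost.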

From: Dirk S. <val...@ds...> - 2008-05-18 12:11:05

On Sat, 17 May 2008, jlh wrote:
>> I really can not understand, why nowadays all prefer debuggers and forget
>> the good old printf.
>
> I don't see how filling the program with lots of printf()s and then
> writing a perl script that analyzes its output is different from using a
> debugger. Doing that *is* using a debugger: the one you just wrote
> yourself. And when there are good debuggers available already, why spend
> time reinventing the wheel? In fact, mtrace works exactly like this: It
> generates lots of output and mtrace is a perl script that analyzes it and
> tells you at which lines memory has been allocated but never released.

A debugger also needs to be told what to output, but unlike with printfs
you need to tell it every time again. The old printfs, on the other hand,
output stuff which has already been subject to intensive searches. Don't
know why, but usually that helps with later searches too. It seems bugs
are always grouped together in the software ;-)

Another advantage: it works everywhere, even on production machines :-)

P.S. "Writing a Perl script" is a bit strong. Usually these are one- to
three-liners.

> And valgrind is a really great tool. It points its finger to a line of
> code and says "this malloc()ed memory has not been free'ed anywhere".
> Getting this kind of comfort is not trivial with self-written printf()s +
> perl.

Yup, valgrind is a very fine piece of software. I'm very happy it exists.
The only problem is the speed :-) It was one of the first debug tools I
used when I switched from Amiga to Unix development (oh, many years ago,
time flies so fast, I'm getting old :-)

Ciao
--
http://www.dstoecker.eu/ (PGP key available)