From: Mike B. <mbr...@vi...> - 2003-06-06 15:26:58
> Valgrind's approach -- dynamically compiling and instrumenting all code
> every time -- is the most convenient for users.  No worrying about whether
> an executable is instrumented or anything.

This is also true of the other approach. The instrumented code blocks can be
kept in a disk cache which is consulted each time the executable is executed
with profiling enabled. There's no extra step for the user. The user clicks
on the same button regardless of whether the application was previously
instrumented.

> As for efficiency, it would be possible to have Valgrind save the code it
> generates, so you could reuse it the next time.

This is what Quantify does.

> But the dynamic
> compilation + instrumentation phase typically only takes up about 10% of
> the execution time for Valgrind (the Memcheck skin, at least) so doing
> this wouldn't help performance much, but it would make the implementation
> more complicated, and possibly make life more difficult for users.

I don't understand why it must be more difficult for the user.

Do you know what that percentage is when using the cache profiling skin?

BTW, why are they called "skins"? It makes it sound like something graphical.

> Actually, thinking more, Valgrind's just-in-time approach could have
> another efficiency advantage -- it only instruments the code that gets
> executed. This is good (especially space-wise) if you only use a small
> fraction of a great big library.

I agree this is an efficiency gain if you profile the executable only once
or perhaps a handful of times. However, the efficiency gain is wiped away if
you execute the same executable enough times. In any case, if it's only a
10% difference in performance, I'm not going to worry about it too much.

Thanks for the responses.

Mike Bresnahan
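
P.S. To make the disk-cache idea above more concrete, here is a rough sketch
in C. Everything in it is invented for illustration -- the cache key, the
/tmp file naming, and the instrument_block() placeholder -- it is not how
Quantify (or Valgrind) actually stores translations; the point is just that
a cache hit requires nothing extra from the user.

#include <stdio.h>
#include <sys/stat.h>

/* Placeholder: a real tool would rewrite the block with counters etc. */
static void instrument_block(const char *exe, long offset, const char *out_path)
{
    FILE *f = fopen(out_path, "wb");
    if (f) {
        fprintf(f, "instrumented copy of %s @ %ld\n", exe, offset);
        fclose(f);
    }
}

/* Return the path of the cached instrumented block for (exe, offset),
 * regenerating it only if no entry exists for this build of the executable. */
static const char *cached_block(const char *exe, long offset)
{
    static char path[512];
    struct stat exe_st, cache_st;

    if (stat(exe, &exe_st) != 0)
        return NULL;

    /* Key the cache entry on the executable's identity (inode + mtime)
     * and the block offset, so a rebuild invalidates old entries. */
    snprintf(path, sizeof path, "/tmp/profcache-%lu-%lu-%ld.bin",
             (unsigned long) exe_st.st_ino,
             (unsigned long) exe_st.st_mtime, offset);

    /* Cache hit: the instrumented block is already on disk. */
    if (stat(path, &cache_st) == 0)
        return path;

    /* Cache miss: first run, or the executable changed since last time. */
    instrument_block(exe, offset, path);
    return path;
}

int main(int argc, char **argv)
{
    const char *exe = argc > 1 ? argv[1] : "/bin/ls";
    const char *p = cached_block(exe, 0);
    printf("block for %s: %s\n", exe, p ? p : "(executable not found)");
    return 0;
}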