From: Bryan M. <om...@br...> - 2006-08-07 19:59:17
Patrick, thanks for the input. If the compiled object is available, there should be nothing to stop some "offline" analysis to determine the executable lines within it. It would be better if all of this could automagically be done by a GUI or some other pre/post-processing tool rather than as a set of manual steps, but done this way, ALL of the relevant information would be available to perform whatever analysis you wish. The other advantage is that this wouldn't impact the size or speed of the executable under test, and you could also cull this information from the "other" ;) OS if you really wanted to do the comparison.

Bryan "Brain Murders" Meredith

Patrick Ohly wrote:
> On Sat, 2006-08-05 at 12:52 +0100, Bryan Meredith wrote:
>>> Also, does it give percentage coverage?
>> Again, in the GUI, with both the source file and line coverage
>> information available, it will be simpler to generate any required
>> metrics (another long way of saying No).
>
> Actually this might be the right approach to handle a problem that no
> other coverage tool that I am aware of handles right: if you link the
> same object code, say from a static library, into different executables
> and then run those multiple times, then merging coverage information
> about the original object code can be very hard. Now, assuming that the
> same source code is compiled differently into different object files
> (think Linux and Windows, with and without debugging enabled, etc.) and
> then executed, the object-file-based approach completely fails -
> basically you have to merge information about the source code in this
> case, as you suggest.
>
> The drawback (and I suppose that was what Nicholas was pointing out) is
> then that you don't have information about the number of code lines
> compiled into the object files or executables, so you have to fall back
> to less reliable methods of source code analysis to have a baseline for
> the percentage of covered lines of code.
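[Editor's illustration] The merge-by-source-line scheme discussed above can be sketched roughly as follows. This is a minimal, hypothetical sketch, not any actual tool's implementation: coverage from each run is keyed by (source file, line number) so that runs of different executables, built from differently compiled objects, can be summed. The data, file names, and helper names are all made up for illustration; the baseline list of executable lines is assumed to come from offline analysis of the objects (for instance, DWARF line tables via `objdump --dwarf=decodedline`) or, less reliably, from scanning the source.

```python
from collections import Counter

def merge_coverage(*runs):
    """Sum per-line hit counts from several runs into one Counter.

    Each run maps (source_file, line_number) -> hit count, so runs of
    different executables built from the same sources merge naturally."""
    merged = Counter()
    for run in runs:
        merged.update(run)
    return merged

def percent_covered(merged, executable_lines):
    """Percentage of known executable lines hit at least once.

    executable_lines is the baseline set determined offline (e.g. from
    the objects' debug line info)."""
    hit = sum(1 for line in executable_lines if merged.get(line, 0) > 0)
    return 100.0 * hit / len(executable_lines)

# Two runs of two different executables linking the same library source:
linux_run = {("lib/foo.c", 10): 3, ("lib/foo.c", 12): 1}
win_run   = {("lib/foo.c", 10): 5, ("lib/foo.c", 14): 2}

merged = merge_coverage(linux_run, win_run)
known_lines = [("lib/foo.c", 10), ("lib/foo.c", 12),
               ("lib/foo.c", 14), ("lib/foo.c", 16)]
print(percent_covered(merged, known_lines))  # 3 of 4 lines hit -> 75.0
```

Keying on source lines rather than object-code addresses sidesteps the problem that the same source compiles to different code on different platforms, at the cost of needing a separate baseline for the denominator.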