From: John R. <jr...@bi...> - 2017-08-31 11:46:09
> My motivation is a huge binary which takes a lot of time to instrument
> and which is executed frequently during many, many test suite runs
> without any human intervention.

If you have only a hammer [memcheck], then everything begins to look like
a nail. Probably you should enlarge your toolbox instead of trying to
optimize memcheck.

Please be less coy about the huge binary. How big is it? How many shared
libraries? What is the total /usr/bin/size, particularly .text? What
programming languages does it employ? How much address space does it use
at run time? How much time does memcheck take? How many machines are you
running round-the-clock for this test suite? [Yes, the numerical answers
to each question do matter.]

Probably the software has very poor quality: few unit tests, undocumented
design and implementation strategy, little or no consideration of
testability. So:

- Apply profiling to the subroutine call graph.
- Use code coverage analysis.
- Look at the bugs in the last year.
- Look at the changes to the source code in the last year
  [each change is a proxy for a bug].
- Identify the 20% of the source that is responsible for 80% of the bugs.
- Attack that 20% using divide-and-conquer: there should be a three-level
  hierarchy of pieces. Develop unit tests for the lowest-level pieces and
  integration tests for each node in the hierarchy. Apply profiling +
  coverage + memcheck at each node.

(Expect 6 months. Hire two graduate students and a manager [perhaps
yourself: but it will take 25% of your time].)

--
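P.S. The "identify the 20% of the source responsible for 80% of the bugs"
step can be sketched mechanically from change history. Below is a minimal,
hypothetical illustration: it assumes the history has been flattened to one
file path per commit that touched the file (e.g. the non-empty lines of
`git log --name-only --pretty=format:`). The function name and the sample
data are invented for the example, not taken from any real project.

```python
from collections import Counter

def rank_by_churn(changed_files, top_fraction=0.2):
    """Rank files by how often they appear in the change history.

    `changed_files` is a flat list of file paths, one entry per commit
    that touched the file. Each change is treated as a proxy for a bug.
    Returns the most-changed `top_fraction` of distinct files, as
    (path, change_count) pairs, most-changed first.
    """
    counts = Counter(changed_files)
    ranked = counts.most_common()
    keep = max(1, round(len(ranked) * top_fraction))
    return ranked[:keep]

# Hypothetical change log: each entry stands for one commit
# that touched the named file.
sample = (
    ["src/parser.c"] * 40 +
    ["src/eval.c"] * 25 +
    ["src/io.c"] * 5 +
    ["src/util.c"] * 3 +
    ["docs/README"] * 2
)

# Print the hot 20% of files -- the candidates for the
# unit-test / integration-test / memcheck attack.
for path, n in rank_by_churn(sample):
    print(path, n)
```

With the sample data above, only src/parser.c falls in the top 20%; in a
real tree you would feed in a year of history and cross-check the result
against the bug tracker before committing six months of effort to it.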