From: Tilmann S. <til...@go...> - 2009-02-07 10:13:53
Hi Thomas,

On Tue, Feb 3, 2009 at 3:51 PM, Thomas Louis <lo...@ne...> wrote:
> I just got linked to CodeCover on my search for a code coverage tool
> for testing a big JEE application. Although it is a quite big
> application, we have only just started to establish automated testing
> routines. The biggest problem we have is to decide which parts of the
> code should be tested. The application is very stable in its current
> release, but with every update we are afraid of introducing new bugs.

Ok.

> It's not possible to test everything before a new release goes
> productive.

Because it would take too long to do all the manual tests? If so,
CodeCover has some features (per-test coverage, correlation matrix)
which allow you to optimize your manual test suite by giving you hints
about test cases which are likely to be redundant.

> So the following idea got into my mind: what if a code coverage tool
> could state how much of my _changed_ code I got tested? The changed
> code can easily be determined by an SVN diff (or by other versioning
> systems). The tests could be run automatically or manually. If running
> them manually, one could see which of the code passages that were
> changed between two software releases got covered by testing, and keep
> on testing until every change got tested.
>
> First I want to know what you think about that idea. Do you think it
> is a good idea to test only the differences between the stable code
> and the new code?

In my opinion it definitely makes sense to focus testing on the areas
where the code changed, especially if the effort you can spend on
testing is limited.

Note that code coverage in general only gives you hints about your test
suite. If your code coverage is low, this implies that your test suite
is bad, since it doesn't even touch the code to be tested. On the other
hand, high code coverage does not necessarily imply that your test
suite is good: it only shows one aspect of your test cases, namely
which code they covered. It tells you nothing about other aspects of
the quality of your test suite, e.g. the quality of the input data and
the expected results of the individual test cases.

> Do you think testing manually in combination with a code coverage tool
> makes sense?

Absolutely, CodeCover was designed with manual testing in mind.

> Can you make any suggestions of how to integrate that into CodeCover?

Basically, two things are needed:

(1) A mechanism to determine the pieces of code which changed, which is
essentially what diff does. It probably makes sense to reuse an
existing tool/library for this.

(2) A way to mark the pieces of code which should be covered, and the
functionality to use these marks. There is already support for adding
meta information to the elements of the MAST (this is the data
structure which contains the contents of a source file in a more
abstract representation suitable for a coverage tool), so it should not
be difficult to add a "this method should be covered" flag for the
methods identified by (1). Then a new mode could be added in which the
coverage is calculated relative to the marked methods/classes. I have
put two rough sketches of (1) and (2) at the end of this mail.

I'm not sure how much work this is in total; I would assume (1) is the
bigger part, but I don't see any major technical difficulties in
implementing such a feature. A contribution of such a feature would
certainly be welcome :)

> PS: Is it also okay to write in German here?

Please stick to English, as the list subscribers are from all over the
world :)

Greetings,
Tilmann
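
Sketch for (1), just to make the idea concrete: a small standalone
parser that pulls the changed line ranges out of a unified diff, such
as the one "svn diff" produces. This is not existing CodeCover code;
all class and method names are made up for this mail.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Extracts the changed line ranges per file from a unified diff
 *  (e.g. the output of "svn diff"). */
public class DiffRanges {

    /** A [first, last] line range in the new revision of a file. */
    public static class Range {
        public final int first, last;
        Range(int first, int last) { this.first = first; this.last = last; }
        @Override public String toString() { return first + "-" + last; }
    }

    // Hunk header of a unified diff: @@ -oldStart,oldLen +newStart,newLen @@
    private static final Pattern HUNK =
        Pattern.compile("^@@ -\\d+(?:,\\d+)? \\+(\\d+)(?:,(\\d+))? @@.*");

    public static Map<String, List<Range>> parse(BufferedReader diff)
            throws java.io.IOException {
        Map<String, List<Range>> result = new LinkedHashMap<>();
        String currentFile = null;
        String line;
        while ((line = diff.readLine()) != null) {
            if (line.startsWith("+++ ")) {
                // "+++ path/to/File.java   (revision 1234)" -> keep the path
                currentFile = line.substring(4).split("\\t| \\(")[0].trim();
                result.putIfAbsent(currentFile, new ArrayList<>());
            } else if (currentFile != null) {
                Matcher m = HUNK.matcher(line);
                if (m.matches()) {
                    int start = Integer.parseInt(m.group(1));
                    int len = m.group(2) != null ? Integer.parseInt(m.group(2)) : 1;
                    // A pure deletion (len == 0) yields an empty range,
                    // which a real implementation might treat specially.
                    result.get(currentFile).add(new Range(start, start + len - 1));
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Usage: svn diff -r STABLE:HEAD > changes.diff
        //        java DiffRanges changes.diff
        for (Map.Entry<String, List<Range>> e :
                parse(new BufferedReader(new FileReader(args[0]))).entrySet()) {
            System.out.println(e.getKey() + ": " + e.getValue());
        }
    }
}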
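Sketch for (2): CodeCover's real MAST classes look different, so the
types below are invented stand-ins. They only illustrate how a
"changed" flag on a method could feed into a coverage figure that is
computed relative to the changed code only.

import java.util.List;

/** Hypothetical sketch; none of these types exist in CodeCover as
 *  written. */
public class RelativeCoverage {

    /** Minimal stand-in for a MAST method node plus coverage data. */
    public static class MethodInfo {
        final String name;
        final int coveredStatements, totalStatements;
        final boolean markedAsChanged;   // the flag proposed in (2)
        public MethodInfo(String name, int covered, int total, boolean changed) {
            this.name = name;
            this.coveredStatements = covered;
            this.totalStatements = total;
            this.markedAsChanged = changed;
        }
    }

    /** Statement coverage over the marked (changed) methods only. */
    public static double changedCodeCoverage(List<MethodInfo> methods) {
        int covered = 0, total = 0;
        for (MethodInfo m : methods) {
            if (!m.markedAsChanged) continue;  // unchanged code is ignored
            covered += m.coveredStatements;
            total += m.totalStatements;
        }
        return total == 0 ? 1.0 : (double) covered / total;
    }

    public static void main(String[] args) {
        List<MethodInfo> methods = List.of(
            new MethodInfo("Order.calculateTotal", 8, 10, true),   // changed
            new MethodInfo("Order.toString",       0,  4, false),  // unchanged
            new MethodInfo("Invoice.send",         2,  9, true));  // changed
        System.out.printf("coverage of changed code: %.0f%%%n",
                100 * changedCodeCoverage(methods));
    }
}

Counting statements rather than whole methods would keep the new metric
comparable to the statement coverage CodeCover already reports.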