From: Jeff M. <je...@pr...> - 2008-04-27 16:06:57

On Apr 23, 2008, at 9:41 AM, Travis Swicegood wrote:

> There's no official code coverage yet. There's some institutional
> resistance to it here 'cause it is a false meter (i.e., a good TDD
> project will actually have well over 100% code coverage). People see
> "95%" and get all happy when there's still major areas not covered.
> I guess it's like any statistical data though; it's only as good as
> the person interpreting it.

Hi Travis,

When you say "well over," I think you mean "nearly." ;)

I've always been a bit skeptical of this claim about TDD test coverage. Maybe it's because of a character flaw in myself, but I've never been able to embrace the degree of TDD that would make this statement true. I usually adapt my level of testing to the task at hand.

For some things, it's test-first all the way, or at least example code first, which gets turned into tests later. For other things, I write a few tests shortly after implementing the feature. Maybe for small values of t, before or after doesn't matter that much? Call this the quantum theory of TDD.

For some things I use coverage reports to create tests for code that I, or someone else, has already written. And yes, some stuff I don't write tests for at all.

While one can control one's own habits of TDD, that gets much harder at an organizational level with geographic dispersion, contractor relationships, less educated developers, or open source. I find coverage reports handy in these situations.

I think the biggest problem I've faced with using coverage reports is the tendency to generate tests to match the implementation of a feature in order to achieve higher coverage, as opposed to testing public features or user-facing behavior. Over-testing the implementation makes the tests brittle and harder to change to match the code as it evolves. I always cringe when I see someone declare that some high percentage of coverage will be mandatory on their project. Maybe TDD avoids this pitfall? For me, this is similar to the question "How many licks does it take to get to the center of a Tootsie Pop?" I lack the discipline to find out.

Best Regards,

Jeff
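[Editor's note: the "coverage report as a starting point for tests" workflow Jeff describes can be sketched roughly as follows. The thread does not name a toolchain, so this is a minimal illustration in Python with the coverage.py package; the module names "calculator" and "test_calculator" are invented for the example, not taken from the thread.]

    # Rough sketch: run the tests that already exist under coverage, then list
    # the lines they never touched as candidates for new tests.
    # Assumes the third-party coverage.py package is installed, and that
    # "calculator" / "test_calculator" are hypothetical existing modules.
    import unittest

    import coverage

    cov = coverage.Coverage()
    cov.start()

    # Run whatever tests already exist for the legacy module.
    suite = unittest.defaultTestLoader.loadTestsFromName("test_calculator")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()

    # analysis2() returns (filename, executable lines, excluded lines,
    # missing lines, missing lines formatted as a string).
    _, _, _, missing, _ = cov.analysis2("calculator.py")
    print("Lines never executed by the existing tests:", missing)

Each line in "missing" is either a test waiting to be written or, as Jeff hints, a spot someone has quietly decided not to test at all.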
From: Travis S. <dev...@do...> - 2008-04-28 02:18:40

On Apr 27, 2008, at 11:06 AM, Jeff Moore wrote:

> When you say "well over," I think you mean "nearly." ;)
>
> I've always been a bit skeptical of this claim about TDD test coverage.

Nope. I chose that phrase carefully ;-) In a true TDD project, you will end up hitting the same line of code many times as various parts of the system are exercised during the course of a test suite run. As such, a simple "this line was hit, this line wasn't" doesn't provide much value to me. Showing the number of times a line of code was touched during a suite run provides much more value to someone like me, as it starts to show patterns where code is being touched less frequently and might be a candidate for refactoring into another class.

Of course, this type of testing is the complete opposite of Paddy Brady's style of behaviorist TDD, where each element is tested in a cocoon of mock objects to completely isolate it from other elements. With that approach you do get the brittle tests you referenced, as your testing harness has to take on much of the implementation detail of the SUT (system under test) in order to completely isolate objects.

> For other things, I write a few tests shortly after implementing the
> feature. Maybe for small values of t, before or after doesn't matter
> that much? Call this the quantum theory of TDD.

I love this! Test-last as quantum TDD, or QTDD :-D

> For some things I use coverage reports to create tests for code that I,
> or someone else, has already written.

As with seasons, every tool has its time. When entering a new project or taking over legacy code, I do think code coverage reports can provide useful information about which parts of the system should be treated with the most care when refactoring. Areas with little or no test coverage give you a map to undocumented features and expectations in the code prior to starting to refactor. There definitely is a place in a TDD worker's tool belt for code coverage reports, just not as a checkbox on the road toward making a project complete.

> And yes, some stuff I don't write tests for at all.

Yes, those are called prototypes, and "rm *" works well for them... ;-) Seriously though, I do hear you. There are occasionally things where the expense of writing and maintaining the tests outweighs the benefit that can be derived from them. When was the last time you wrote a test to verify your test worked? :-)

> While one can control one's own habits of TDD, that gets much harder
> at an organizational level with geographic dispersion, contractor
> relationships, less educated developers, or open source. I find
> coverage reports handy in these situations.

To a limited degree, yes, though I would say all of these are organizational/people problems and not so much TDD problems. I'm not sure about geographic dispersion, though. Are programmers in Michigan more likely to write tests than those in California? If so, why isn't Detroit the Silicon Valley of the Great Lakes?

Regarding contractor relationships, I think that's mainly an issue of poor contracts with contractors and/or the companies that supply them. If TDD is important to a company, no bid should be accepted without tests proving the work as part of the signed contract. I can't see how any company would accept work without having some sort of testing mechanism in place to prove it. Whether that's functional acceptance tests, unit tests, or some blend of the two doesn't really matter. At some point, someone is going to have to sign off on the work as complete, and if the process used to sign off on it is not automated, the company in question either has a very short shelf life in mind for the code or is asking for trouble.

Open source definitely is a different beast. That said, I do think it comes back to the project's culture. If the project is known for stable releases where things work as advertised with no fuss, the developers who choose to align themselves with that project will have no problem bending their personal habits to those of the project in order to contribute. On the other hand, if a project routinely ships stable code that has little more than a proof-of-concept test surrounding it (if that), then people will continue to contribute code with that level of testing around it.

> I think the biggest problem I've faced with using coverage reports is
> the tendency to generate tests to match the implementation of a
> feature in order to achieve higher coverage, as opposed to testing
> public features or user-facing behavior.

This is the reason Marcus argues against them. It's epitomized in the US by "No Child Left Behind": teaching as prep for tests does nothing to teach how to learn. Likewise, coding tests just to get coverage does nothing to improve the quality of the code and is a false metric.

> Over-testing the implementation makes the tests brittle and harder to
> change to match the code as it evolves. I always cringe when I see
> someone declare that some high percentage of coverage will be
> mandatory on their project.

I personally cringe when any project declares that it wants any particular percentage of coverage. Declaring "80%" means that you consider at least 20% of your code throwaway? It doesn't make sense to me. Either testing is important and you do it, or testing is marketing and you do just enough that there are tests to run so you can say "we have tests". For me it's an either/or; coverage is just a way to make the latter seem more appealing by doing just enough.

-T
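[Editor's note: Travis's point about hit counts rather than hit/miss can be made concrete with a small sketch. Most line-coverage tools only record whether a line ran at all; the per-line counts he wants can be gathered by hand with a tracing hook. The sketch below is illustrative only, again in Python (sys.settrace), and reuses the hypothetical "calculator" / "test_calculator" modules from the earlier example.]

    # Count how many times each line of the module under test runs during a
    # full suite run, not just whether it ran. Purely an illustration.
    import sys
    import unittest
    from collections import Counter
    from os.path import basename

    hits = Counter()

    def tracer(frame, event, arg):
        # Only trace lines inside the (hypothetical) module under test.
        if basename(frame.f_code.co_filename) != "calculator.py":
            return None
        if event == "line":
            hits[frame.f_lineno] += 1
        return tracer

    sys.settrace(tracer)
    try:
        suite = unittest.defaultTestLoader.loadTestsFromName("test_calculator")
        unittest.TextTestRunner().run(suite)
    finally:
        sys.settrace(None)

    # Lines touched only once or twice across the whole suite are the spots
    # Travis suggests looking at more closely.
    for line, count in sorted(hits.items(), key=lambda item: item[1]):
        print(f"calculator.py:{line} ran {count} time(s)")

Sorting ascending by count puts the rarely exercised lines, the ones Travis suggests might be candidates for refactoring into another class, at the top of the list.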
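[Editor's note: the "cocoon of mock objects" style Travis contrasts with his own can also be sketched briefly. This is a minimal, invented example using Python's unittest.mock; OrderService and the gateway are hypothetical names, not anything from the thread. It shows where the brittleness he and Jeff mention comes from: the test has to spell out how the object under test talks to its collaborator, not just what its caller observes.]

    # Mockist-style isolation: the collaborator is completely faked out, so the
    # test ends up asserting on implementation details (which method was called,
    # and with what arguments).
    import unittest
    from unittest.mock import Mock


    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place(self, amount):
            # If this is ever refactored to call charge() differently (say,
            # with a currency argument), the test below breaks even though the
            # observable behaviour is unchanged.
            return self.gateway.charge(amount)


    class OrderServiceTest(unittest.TestCase):
        def test_place_charges_gateway(self):
            gateway = Mock()
            gateway.charge.return_value = "receipt-1"

            service = OrderService(gateway)
            result = service.place(100)

            self.assertEqual(result, "receipt-1")
            # This assertion is the brittle part: it pins down how the SUT
            # talks to its collaborator, not just what the caller sees.
            gateway.charge.assert_called_once_with(100)


    if __name__ == "__main__":
        unittest.main()

Reshape charge() during a refactor and this test fails even if behaviour is preserved, which is exactly the coupling to implementation details being discussed.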