From: David G. <go...@py...> - 2004-06-21 22:39:58
[David Goodger]
>> * Failure output should generate a diff, like the unit tests do.

[Felix Wiemann]
> It does, but it's a unified_diff.

So it does.  I was looking in the wrong place; unit tests put their
diffs in stdout before unittest.py reports the failures.  Sorry.

> There is no point in dumping 30 KB of text to indicate two changed
> lines.  In fact, it would be a significant disadvantage.

Agreed.

>> * test_functional.py should be using docutils.core.publish_file,
>>   not .publish_string.  Letting .publish_file handle the file I/O
>>   will test that code, will allow for file encodings (testing that
>>   code too), and will simplify test_functional.py's code as well.
>>   It's re-implementing code that already exists, works, and should
>>   be tested.
>
> Then we are writing the output file just in order to re-read it.

I was about to say "but publish_file returns the string also!", but it
didn't.  It does now (in CVS).  I was thinking of the Writer.write()
method, which does.  So anyway, just say

    output = publish_file(...)

and you'll get both file I/O *and* the encoded string result.

> Not quite elegant, but maybe I can implement it.

As it was, I agree, not elegant at all.

>> I don't.  Many small unit tests make it easy to determine where the
>> problem is.  Most unit tests contain text explaining what is being
>> tested for.
>
> Doesn't the diff show quite exactly what was changed?

Not always.  The intent of the test may not be apparent from the
change.

>> And unit test modules can be run individually.
>
> You can also run test_functional.py.

But can you run an individual test of the many test_functional.py
normally runs?  On my machine, test_functional.py takes 8 seconds.
test_language.py takes half a second to run 48 tests.  Much better
turn-around makes for faster coding & debugging.

>>> If we have only minimal functional tests, we could as well
>>> hardcode them directly into a test case.
>>
>> But that wouldn't test the whole-system functionality.
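[For illustration, the "write the file *and* return the encoded
string" pattern being discussed can be sketched with stdlib-only code.
`render_to_file` here is a hypothetical stand-in, not the real
docutils API; `publish_file` takes docutils-specific arguments, but
the shape of the contract is the same.]

```python
import os
import tempfile

def render_to_file(text, path, encoding='utf-8'):
    # Hypothetical stand-in for docutils.core.publish_file: handle the
    # destination file I/O (exercising encoding on the way out) *and*
    # return the encoded result, so one call gives the caller both.
    encoded = text.encode(encoding)
    with open(path, 'wb') as f:
        f.write(encoded)
    return encoded

# Usage mirrors "output = publish_file(...)":
path = os.path.join(tempfile.mkdtemp(), 'out.txt')
output = render_to_file('some rendered document\n', path)
```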
> When using publish_string, we are testing everything except the I/O,
> which should be sufficient for usual unit tests.

But not for functional tests!  Please, please, stop bringing up unit
tests when I'm talking about functional tests! :-)

>>> The need for the ability to process large test data files is why I
>>> asked for an additional framework.
>>
>> That's fine, but we don't need to test lists over and over again.
>> Lists are tested by the unit test modules.  It's fine to have a few
>> large test files, for the reasons you pointed out (can view them,
>> etc.), but functional tests do not replace unit tests.
>
> Then what exactly could you catch with a unit test which you can't
> catch with a functional test, provided both unit and functional
> tests are equally exhaustive?

Specific bugs.  Edge cases.  Corner cases.  We currently have 800 unit
tests.  If there were 800 little functional tests that could be run
individually (or in related groups) and were immediately identifiable
by name, then maybe.  But there aren't, and I see no gain in
converting all 800 unit tests into functional tests.  To the contrary,
I see a potential loss if we do convert them, because of scope issues,
speed issues, potential new bugs introduced, and the sheer wasted
effort that could be put to much better use.

Perhaps I can't explain it well enough, but I know that both types of
test are useful, and complementary.

>> There could even be an option on the test_functional.py script, to
>> copy actual output/ files over to expected/.
>
> Dunno if it's worth the effort.  Copying is fast.

You may change your tune when there are 40 files to copy, from and to
different directories.  But that's functionality for later, when/if
it's needed.

-- 
David Goodger <http://python.net/~goodger>
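[P.S. The copy-over option mentioned above could be sketched like
this.  `accept_actual_output` and the directory names are hypothetical
illustrations of what a test_functional.py flag might do, not existing
code.]

```python
import os
import shutil

def accept_actual_output(actual_dir, expected_dir):
    # Hypothetical "--accept" helper for test_functional.py: copy each
    # freshly generated file from the actual-output directory over the
    # corresponding reference file, so a reviewed change becomes the
    # new expected output in one step.
    for name in os.listdir(actual_dir):
        src = os.path.join(actual_dir, name)
        if os.path.isfile(src):
            shutil.copy(src, os.path.join(expected_dir, name))
```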