[Pyunit-interest] comments on PyUnit
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-11 12:11:32
I have a few comments on PyUnit that have bubbled up as I've been writing the documentation for the Python Library Reference and have used it in more projects.

- Naming consistency: The TestCase class defines several methods for use by the individual test methods. These have names like assert*() and fail*(). Most of these methods have a name in only one of these groups, but assert_() and failUnless() are the same method. assertEqual() and assertEquals() are also synonyms, but both from the same group, and assertNotEqual() and assertNotEquals() are a similar pair.

  I'd like to see one set or the other become recommended usage, and offer names for all the functions. Using the two sets of names is confusing, especially since one set is the logical negation of the other. I'd also like to see only one of assert*Equal() and assert*Equals() be documented; having two names so similar seems unnecessary. Guido has indicated a preference for the fail*() names; I have no preference either way. The names provided in the latest version are:

        fail*()      | assert*()
        -------------+------------------------------------
        fail()       |
        failIf()     |
        failUnless() | assert_()
                     | assertRaises()
                     | assertEqual(), assertEquals()
                     | assertNotEqual(), assertNotEquals()

  Note that while there is a (small!) overlap, neither group completely covers the available functionality.

- Guido suggests using a unique exception for test failures, similar to the use of TestFailed in the Python regression test. This allows test failures to be distinguished from the traditional use of "assert" in Python modules. For backward compatibility it might be reasonable to subclass from AssertionError. Having all the fail*()/assert*() methods call fail() to actually raise the appropriate exception would be an easy way to isolate this, and would allow TestCase subclasses to do things differently should the need arise. (A rough sketch of this is appended after my signature.)

- There is no facility to skip a test without failing. This can be reasonable in cases where a platform does not support the full capability of the tested module. An example application of this is in test.test_largefile in the Python regression test -- if a platform does not support large files, it is not considered an error for the tests not to be completed. (A possible shape for such a facility is also sketched below.)

  -Fred

--
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations
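
To illustrate the second point, here is a minimal sketch of the "everything calls fail()" idea. The names TestFailure, MyTestCase, and the failureException attribute are mine, chosen for illustration; they are not part of PyUnit.

    # Sketch only: a distinct failure exception that still subclasses
    # AssertionError for backward compatibility, with the fail*()
    # helpers funnelling through fail().
    class TestFailure(AssertionError):
        """Raised by test helpers; distinguishable from a plain assert."""

    class MyTestCase:
        failureException = TestFailure   # a subclass may override this

        def fail(self, msg=None):
            # Single point where the failure exception is raised, so a
            # subclass can change the behaviour in one place.
            raise self.failureException(msg)

        def failUnless(self, expr, msg=None):
            if not expr:
                self.fail(msg)

        def failIf(self, expr, msg=None):
            if expr:
                self.fail(msg)

Because everything funnels through fail(), a subclass that wants a different exception type or extra reporting only needs to override the failureException attribute or fail() itself.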
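
For the third point, one possible shape for a skip facility is sketched below. This is entirely hypothetical; PyUnit has no TestSkipped exception or run_one() helper, and the platform check is only a stand-in for a real capability test.

    # Hypothetical sketch: a test raises TestSkipped, and the runner
    # records it separately instead of counting it as a failure.
    import sys

    class TestSkipped(Exception):
        """Raised by a test to signal it cannot run on this platform."""

    def run_one(test_func, outcomes):
        # Classify the result of a single test callable.
        try:
            test_func()
        except TestSkipped as exc:
            outcomes["skipped"].append((test_func.__name__, str(exc)))
        except AssertionError as exc:
            outcomes["failed"].append((test_func.__name__, str(exc)))
        else:
            outcomes["passed"].append(test_func.__name__)

    def test_large_files():
        # In the spirit of test.test_largefile: bail out without
        # failing when the platform cannot exercise the feature.
        if sys.maxsize <= 2**31 - 1:   # stand-in capability check
            raise TestSkipped("platform lacks large file support")
        # ... actual large-file checks would go here ...

    outcomes = {"passed": [], "failed": [], "skipped": []}
    run_one(test_large_files, outcomes)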