Re: [Pyunit-interest] comments on PyUnit
From: Guido v. R. <gu...@di...> - 2001-04-11 16:24:11
> I agree that recommending one consistent set of names is a good idea.
> The 'fail' methods have been around longer, but I think that the
> 'assert*' methods may be clearer for non-native English speakers, who
> must be used to the assert keyword already.
>
> On the whole, I would vote for the 'assert' methods, because I get the
> impression that they are more widely used at this point in time.

I disagree, for reasons see below.

> > The names provided in the latest version are:
> >
> >   fail*()      | assert*()
> >   -------------+------------------------------------
> >   fail()       |
> >   failIf()     |
> >   failUnless() | assert_()
> >                | assertRaises()
> >                | assertEqual(), assertEquals()
> >                | assertNotEqual(), assertNotEquals()
> >
> > Note that while there is a (small!) overlap, neither group
> > completely covers the available functionality.
>
> Right: 'failIf()' is redundant, and there is no 'assertionFailed()'.
>
> I suggest the following standard set:
>
>     fail()
>     assert_()
>     assertRaises()
>     assertEqual()
>     assertNotEqual()
>
> There's no great harm in documenting the synonym 'failUnless()', since
> 'assert_()' is slightly ugly.

(Which causes inconsistency and confusion, and got this whole thing
started.)

> > - Guido suggests using a unique exception for test failures, similar
> >   to the use of TestFailed in the Python regression test.  This
> >   allows the test errors to be distinguished from the traditional
> >   use of "assert" in Python modules.  For backward compatibility it
> >   might be reasonable to subclass from AssertionError.
>
> That doesn't help backward compatibility, because the framework would
> still have to catch 'AssertionError' in order to detect failures.
>
> I have yet to come across a situation where making this distinction
> between deeply buried AssertionErrors and deliberately provoked ones
> is important.  The worst that can currently happen is that some buried
> 'assert' statements can cause a test to look like it's failing rather
> than erroring.
>
> (I also like the idea that test writers can still use the 'assert'
> statement for their tests if it suits them.)

And that's exactly what I'm strongly opposing.  When "python -O" is
used, tests that use the assert statement will not be performed
correctly, because no code is compiled for assert statements in that
case.  It's a nasty trap to fall into, and the fact that the PyUnit
framework uses AssertionError makes it a common mistake.

Therefore, I'd like to:

(1) phase out the use of AssertionError in favor of a brand new
    PyUnit-specific exception

(2) phase out the use of method names starting with assert
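[A minimal sketch of the "python -O" trap described above, using the
unittest module of the time; the class and test names are illustrative.
Run normally, both tests fail; run with "python -O", the bare assert
statement is compiled away, so the first test silently passes while the
failUnless() version still fails.]

    import unittest

    class BuggyMathTest(unittest.TestCase):

        def testWithAssertStatement(self):
            # Trap: under "python -O" this statement compiles to
            # nothing, so the test body does no checking at all.
            assert 1 + 1 == 3, "arithmetic is broken"

        def testWithFailUnless(self):
            # Safe: failUnless() is an ordinary method call, so it runs
            # (and fails here) regardless of the -O switch.
            self.failUnless(1 + 1 == 3, "arithmetic is broken")

    if __name__ == "__main__":
        unittest.main()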
> > Having all the fail*()/assert*() methods call fail() to actually
> > raise the appropriate exception would be an easy way to isolate
> > this, and allow TestCase subclasses to do things differently
> > should the need arise.
>
> As a slight simplification to the code, it makes sense.  However, it
> would never make sense for TestCase subclasses to change that (fail())
> aspect of their behaviour, since the framework strictly defines which
> exception is considered a failure.

So that could also be made customizable (maybe through a class variable
that can be set by the constructor; or maybe, more crudely, as a global
in unittest).  (A sketch of the class-variable idea follows this
message.)

> > - There is no facility to skip a test without failing.  This can be
> >   reasonable in cases where a platform does not support the full
> >   capability of the tested module.  An example application of this
> >   is in test.test_largefile in the Python regression test -- if a
> >   platform does not support large files, it is not considered an
> >   error for the tests not to be completed.
>
> I thought about this recently, and I came to the conclusion that tests
> should be skipped by explicitly omitting them from a test suite, along
> the following lines:
>
>     def suite():
>         try:
>             import some_tests
>             return unittest.defaultTestLoader.loadTestsFromModule(some_tests)
>         except ImportError:
>             return TestSuite()  # empty suite

Sounds fine to me.

> I don't really like the alternative, which is adding a 'skipped'
> result.  The Python regression test suite is probably unusual in
> requiring the skipping of some tests.  Most test suites are more
> straightforward.

Disagree.  I think any sufficiently large system tends to grow optional
components that can only be tested on certain platforms.  But the
approach you suggest is fine.

> On the whole, at this late stage in the Python 2.1 release, I wouldn't
> be inclined to change anything much.

Unfortunately, I have to agree.

> Am I making any sense?  (Hint: if so, it's never happened before...)

Yes!  Glad to meet you, actually. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)
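[A minimal sketch of the class-variable idea mentioned in the message
above, assuming a runner that catches the class variable rather than a
hardcoded AssertionError.  The names TestFailure and failureException
are illustrative here; later versions of unittest did grow a
TestCase.failureException attribute along these lines.]

    import unittest

    class TestFailure(Exception):
        """A PyUnit-specific failure exception, per point (1) above."""
        pass

    class MyTestCase(unittest.TestCase):
        # The class variable: which exception signals a failure (as
        # opposed to an error).  A subclass, or a constructor argument,
        # could override it.
        failureException = TestFailure

        def fail(self, msg=None):
            # With every fail*()/assert*() helper funnelling through
            # fail(), the choice of exception is isolated in one place.
            raise self.failureException(msg)

    # To classify results correctly, the runner would catch
    # self.failureException instead of a hardcoded AssertionError.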