pyunit-interest Mailing List for the PyUnit testing framework
Brought to you by: purcell
From: Guido v. R. <gu...@di...> - 2001-04-11 18:49:02
> >> failUnless / assert_
> GvR> failUnless
> >> assertEqual / assertEquals
> GvR> failUnlessEqual
> >> assertNotEqual / assertNotEquals
> GvR> failIfEqual
> GvR> Also, assertRaises should be changed to failUnlessRaises, etc.
>
> I strongly dislike the new names. They are too long and express the intended behavior in negative form.
>
> assertRaises: "This code is correct if it raises an error."
> failUnlessRaises: "This code is not correct unless it raises an error."
>
> The former is a more natural way to express the meaning. I would suggest finding a different word for assert instead of finding ways to turn all the assert calls into a negative form.

Sigh, you're right.

> One possibility is verify. Another is check, although that word already has a meaning for unit testing.

Then all methods should get new names, right? It's a shame we can't use check (or can we?), otherwise that would be my preference. What does JUnit use?

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Guido v. R. <gu...@di...> - 2001-04-11 18:46:29
> Guido van Rossum writes:
> > The problem is that many test suites (e.g. the Zope tests for PageTemplates :-) use assert statements instead of calls to fail*() / assert*() methods.

Fred:
> From the compatibility side, those methods need to be supported, yes. Is there a reason they should raise AssertionError and not a subclass of it? I think the latter would help people learn that the two are not the same thing at a conceptual level. The assert*() methods should raise the new exception.
> The framework can still *catch* the base AssertionError -- that's different. The AssertionError is exactly right when using an assert statement, it's just not being used properly there. ;-) I'm less concerned about the PageTemplates tests than our DOM tests; that's a somewhat larger collection of tests!

This is fine, but since the framework still catches AssertionError, the gain is minimal.

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Jeremy H. <je...@di...> - 2001-04-11 18:40:07
[Apologies if these comments come through twice in slightly different form. I seem to be having mail delivery problems.]

>>>>> "GvR" == Guido van Rossum <gu...@di...> writes:

>> Yes. I propose that the Python Library Reference only document one name for each method. *Which* name should be decided here. This issue exists for the following pairs of names:
>>
>> failUnless / assert_
GvR> failUnless
>> assertEqual / assertEquals
GvR> failUnlessEqual
>> assertNotEqual / assertNotEquals
GvR> failIfEqual

GvR> Also, assertRaises should be changed to failUnlessRaises, etc.

I strongly dislike the new names. They are too long and express the intended behavior in negative form.

assertRaises: "This code is correct if it raises an error."
failUnlessRaises: "This code is not correct unless it raises an error."

The former is a more natural way to express the meaning. I would suggest finding a different word for assert instead of finding ways to turn all the assert calls into a negative form.

One possibility is verify. Another is check, although that word already has a meaning for unit testing.

Jeremy
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-11 18:38:36
Steve wrote:
> There's no great harm in documenting the synonym 'failUnless()', since 'assert_()' is slightly ugly.

Guido van Rossum writes:
> (Which causes inconsistency and confusion, and got this whole thing started.)

Yes. I propose that the Python Library Reference only document one name for each method. *Which* name should be decided here. This issue exists for the following pairs of names:

    failUnless / assert_
    assertEqual / assertEquals
    assertNotEqual / assertNotEquals

(Note that only one of these falls into the fail*()/assert*() discussion!)

I wrote:
> - Guido suggests using a unique exception for test failures, similar to the use of TestFailed in the Python regression test. This allows the test errors to be distinguished from the traditional use of "assert" in Python modules. For backward compatibility it might be reasonable to subclass from AssertionError.

Steve:
> That doesn't help backward compatibility, because the framework would still have to catch 'AssertionError' in order to detect failures.

I'm not sure I understand why -- do you *really* want to catch failures from assert statements, or is there some other motivation? I'd be inclined to group AssertionError separately from some new exception raised by test code.

Steve, in response to my comments on the exception to use:
> As a slight simplification to the code, it makes sense. However, it would never make sense for TestCase subclasses to change that (fail()) aspect of their behaviour, since the framework strictly defines which exception is considered a failure.

But they may want to do additional things when fail() is called.

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-11 18:37:12
Guido van Rossum writes:
> The problem is that many test suites (e.g. the Zope tests for PageTemplates :-) use assert statements instead of calls to fail*() / assert*() methods.

From the compatibility side, those methods need to be supported, yes. Is there a reason they should raise AssertionError and not a subclass of it? I think the latter would help people learn that the two are not the same thing at a conceptual level. The assert*() methods should raise the new exception.

The framework can still *catch* the base AssertionError -- that's different. The AssertionError is exactly right when using an assert statement, it's just not being used properly there. ;-) I'm less concerned about the PageTemplates tests than our DOM tests; that's a somewhat larger collection of tests!

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-11 18:22:12
Steve Purcell writes:
> I propose the following:
>
> - Add 'failUnlessRaises()' to TestCase
> - Add 'failIfEqual()' to TestCase
> - Add 'failUnlessEqual()' to TestCase (and probably 'failIfEqual()' too)

Was that second one "failIfNotEqual()"? I'd really rather only see one name for it. In particular, I'd like to see either "Unless" or "IfNot", but not both.

> - Document the 'fail*' methods
> - Keep the 'assert*' methods but don't mention them

Agreed -- there's no reason to actually remove them, and good reason to keep them.

> - Don't mention using the 'assert' statement
> - Make all 'fail*' and 'assert*' methods delegate to 'fail()', as suggested by Fred
> - Keep AssertionError as the 'blessed failure exception', but parameterise it at module level

This is reasonable. An alternate approach may be to subclass it, but still only watch for AssertionError when running tests:

    class TestFailed(AssertionError):
        """Exception raised by TestCase.fail()."""

> I think this can be safely done in time for this week's 2.1 release (though that's not *my* call to make).

This would need to be in almost immediately (can you get it done tonight?) so that enough people can test it from CVS before the release candidate is on Friday.

> If this is acceptable, I can make appropriate changes.

If you don't have time, I'll be glad to do this.

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
From: Guido v. R. <gu...@di...> - 2001-04-11 18:22:08
> Yes. I propose that the Python Library Reference only document one name for each method. *Which* name should be decided here. This issue exists for the following pairs of names:
>
> failUnless / assert_

failUnless

> assertEqual / assertEquals

failUnlessEqual

> assertNotEqual / assertNotEquals

failIfEqual

Also, assertRaises should be changed to failUnlessRaises, etc.

> I wrote:
> > - Guido suggests using a unique exception for test failures, similar to the use of TestFailed in the Python regression test. This allows the test errors to be distinguished from the traditional use of "assert" in Python modules. For backward compatibility it might be reasonable to subclass from AssertionError.
>
> Steve:
> > That doesn't help backward compatibility, because the framework would still have to catch 'AssertionError' in order to detect failures.
>
> I'm not sure I understand why -- do you *really* want to catch failures from assert statements, or is there some other motivation? I'd be inclined to group AssertionError separately from some new exception raised by test code.

The problem is that many test suites (e.g. the Zope tests for PageTemplates :-) use assert statements instead of calls to fail*() / assert*() methods.

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Guido v. R. <gu...@di...> - 2001-04-11 18:03:43
> At 12:26 PM 4/11/2001 -0500, Guido van Rossum wrote:
> >And that's exactly what I'm strongly opposing. When "python -O" is used, tests that use the assert statement will not be performed correctly, because no code is compiled for assert statements in that case. It's a nasty trap to fall into, and the fact that the PyUnit framework uses AssertionError makes it a common mistake.
>
> But isn't this the real problem?
>
> I had an advanced application a few years back in Objective-C, where GNU optimizations and macro assertions were 2 separate deals. At one point, I compiled with assertions off and tested the program. The gain in performance was 0.5% so I decided to ship the product to customers with optimization as well as assertions. Even if it had been 10%, I still would have kept assertions.
>
> The reason to keep assertions is that they raise exceptions which can then be caught and dealt with appropriately, such as notifying customer support, or skipping the processing of a troubled object within a larger set of objects.
>
> The alternative is that the program keeps chugging through an erroneous situation. It's doubtful that you covered that in your testing and that you know what the results will be. What's even more doubtful is that anything intelligent will happen past the point where the "missing assertion" was located.
>
> Also, it shouldn't be surprising that assertions don't slow software down by much. Your algorithms and caching and such are what's most important. Assertions pretty much ride along with your processing as extra instructions, not necessarily extra complexity.
>
> Finally, consider the flip side: Is there any reason to give to developers that their assertions MUST be disabled for optimization?

I agree 100% with you, and we're planning to separate the assertions from optimizations some time in the future. However, since it will still be possible to turn off assertions, it's still a bad idea to use AssertionError as the exception blessed by PyUnit.

--Guido van Rossum (home page: http://www.python.org/~guido/)
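To make the "python -O" point above concrete, here is a small, self-contained sketch (not taken from the thread; the function names are illustrative only) showing how a bare assert statement vanishes under -O while an explicit check still runs:

    # Run once with "python example.py" and once with "python -O example.py".

    def broken_add(a, b):
        return a - b   # deliberately wrong

    def test_with_assert_statement():
        # Compiled away entirely under -O: the broken function is never
        # checked, so this "test" silently passes.
        assert broken_add(2, 2) == 4

    def test_with_explicit_check():
        # An ordinary function call survives -O, so the failure is still
        # detected; this mimics what a failUnless()-style method does.
        if not broken_add(2, 2) == 4:
            raise AssertionError("broken_add(2, 2) != 4")

    if __name__ == "__main__":
        test_with_assert_statement()   # no error under -O
        test_with_explicit_check()     # always raises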
From: Guido v. R. <gu...@di...> - 2001-04-11 18:03:09
> Thanks Guido, points duly noted.

You're welcome. :-)

> I propose the following:
>
> - Add 'failUnlessRaises()' to TestCase
> - Add 'failIfEqual()' to TestCase
> - Add 'failUnlessEqual()' to TestCase (and probably 'failIfEqual()' too)
> - Document the 'fail*' methods
> - Keep the 'assert*' methods but don't mention them
> - Don't mention using the 'assert' statement
> - Make all 'fail*' and 'assert*' methods delegate to 'fail()', as suggested by Fred
> - Keep AssertionError as the 'blessed failure exception', but parameterise it at module level

All good. I agree that we shouldn't break existing code now.

> I think this can be safely done in time for this week's 2.1 release (though that's not *my* call to make).

The code freeze is planned to start Thursday (probably Thu afternoon), so you have approximately a day!

> The 'blessed' failure exception could be changed at a later date if that turns out to be desirable. However, I think that if the documentation does not encourage test writers to use the 'assert' statement, this will never be a problem.

I still think it would be a good idea to change the blessed exception later, to encourage users who are still using assert statements to fix their tests. But I agree that we shouldn't break their code with the 2.1 release.

> If this is acceptable, I can make appropriate changes.

Please do, and please be quick! :-)

> (Users should note that the proposed changes would have no effect on their existing tests.)

This, plus the fact that unittest wasn't part of Python 2.0, makes me comfortable with this change so close to the final release. Fred Drake has volunteered to make the documentation updates, including warnings against the use of the assert statement and the assert*() methods.

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Chuck E. <ec...@mi...> - 2001-04-11 17:47:08
At 03:58 PM 4/11/2001 +0200, Steve Purcell wrote:
>     def suite():
>         try:
>             import some_tests
>             return unittest.defaultTestLoader.loadTestsFromModule(some_tests)
>         except ImportError:
>             return TestSuite()  # empty suite

Perhaps:

    return SkippedTestSuite(reason)

would be better. Then you could display the reason (and count the number of tests skipped). My point being that SkippedTestSuite becomes part of unittest.

I agree with Guido that many systems will grow optional components. Webware is one example.

-Chuck
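SkippedTestSuite never became part of PyUnit; the sketch below is only a guess at what Chuck has in mind, with an assumed constructor signature, so that a runner could count skips and report their reasons instead of silently running an empty suite:

    import unittest

    class SkippedTestSuite(unittest.TestSuite):
        # Hypothetical class illustrating the suggestion above; not part of PyUnit.
        def __init__(self, reason):
            unittest.TestSuite.__init__(self)
            self.reason = reason   # human-readable explanation for the skip

    def suite():
        try:
            import some_tests   # placeholder for an optional test module
        except ImportError:
            return SkippedTestSuite("optional dependency for some_tests not installed")
        return unittest.defaultTestLoader.loadTestsFromModule(some_tests)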
From: Steve P. <ste...@ya...> - 2001-04-11 17:45:49
Thanks Guido, points duly noted.

I propose the following:

 - Add 'failUnlessRaises()' to TestCase
 - Add 'failIfEqual()' to TestCase
 - Add 'failUnlessEqual()' to TestCase (and probably 'failIfEqual()' too)
 - Document the 'fail*' methods
 - Keep the 'assert*' methods but don't mention them
 - Don't mention using the 'assert' statement
 - Make all 'fail*' and 'assert*' methods delegate to 'fail()', as suggested by Fred
 - Keep AssertionError as the 'blessed failure exception', but parameterise it at module level

I think this can be safely done in time for this week's 2.1 release (though that's not *my* call to make).

The 'blessed' failure exception could be changed at a later date if that turns out to be desirable. However, I think that if the documentation does not encourage test writers to use the 'assert' statement, this will never be a problem.

If this is acceptable, I can make appropriate changes.

(Users should note that the proposed changes would have no effect on their existing tests.)

Best wishes,

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
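A minimal sketch of two points in the proposal above -- funnelling every check through fail() and parameterising the failure exception at module level. The class and variable names are illustrative; this is not the PyUnit source:

    # Module-level "blessed" failure exception, swappable in one place.
    failureException = AssertionError

    class TestCaseSketch:
        def fail(self, msg=None):
            # Single choke point: every check ends up here.
            raise failureException(msg)

        def failIf(self, expr, msg=None):
            if expr:
                self.fail(msg)

        def failUnless(self, expr, msg=None):
            if not expr:
                self.fail(msg)

        def failUnlessEqual(self, first, second, msg=None):
            if first != second:
                self.fail(msg or "%r != %r" % (first, second))

        def failUnlessRaises(self, excClass, callableObj, *args, **kwargs):
            try:
                callableObj(*args, **kwargs)
            except excClass:
                return
            self.fail("%s not raised" % getattr(excClass, "__name__", excClass))

        # Backwards-compatible synonyms, kept but left undocumented.
        assert_ = failUnless
        assertEqual = failUnlessEqual
        assertRaises = failUnlessRaises

Because everything delegates to fail(), swapping AssertionError for a dedicated failure exception later is a one-line change.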
From: Chuck E. <ec...@mi...> - 2001-04-11 17:41:08
At 12:26 PM 4/11/2001 -0500, Guido van Rossum wrote:
>And that's exactly what I'm strongly opposing. When "python -O" is used, tests that use the assert statement will not be performed correctly, because no code is compiled for assert statements in that case. It's a nasty trap to fall into, and the fact that the PyUnit framework uses AssertionError makes it a common mistake.

But isn't this the real problem?

I had an advanced application a few years back in Objective-C, where GNU optimizations and macro assertions were 2 separate deals. At one point, I compiled with assertions off and tested the program. The gain in performance was 0.5% so I decided to ship the product to customers with optimization as well as assertions. Even if it had been 10%, I still would have kept assertions.

The reason to keep assertions is that they raise exceptions which can then be caught and dealt with appropriately, such as notifying customer support, or skipping the processing of a troubled object within a larger set of objects.

The alternative is that the program keeps chugging through an erroneous situation. It's doubtful that you covered that in your testing and that you know what the results will be. What's even more doubtful is that anything intelligent will happen past the point where the "missing assertion" was located.

Also, it shouldn't be surprising that assertions don't slow software down by much. Your algorithms and caching and such are what's most important. Assertions pretty much ride along with your processing as extra instructions, not necessarily extra complexity.

Finally, consider the flip side: Is there any reason to give to developers that their assertions MUST be disabled for optimization?

-Chuck
From: John J. L. <ph...@cs...> - 2001-04-11 16:42:06
On Wed, 11 Apr 2001, Steve Purcell wrote:
[...]

[Fred Drake wrote]
> > - Guido suggests using a unique exception for test failures, similar to the use of TestFailed in the Python regression test. This
[...]
> The worst that can currently happen is that some buried 'assert' statements can cause a test to look like it's failing rather than erroring.

In fact, I think it's better than that: asserts and test failures are more similar to each other than asserts and run-time errors are.

PyUnit's simplicity is its strong point -- I extended an old version of it with my own (rather messy) add-ons, and it was easy to do because it's simple; yesterday I upgraded to the latest version of PyUnit and was able to do so with a one-line change, despite the fact that my extensions violate the basic idea of PyUnit in an ugly-but-functional way.

John
From: Guido v. R. <gu...@di...> - 2001-04-11 16:24:11
> I agree that recommending one consistent set of names is a good idea. The 'fail' methods have been around longer, but I think that the 'assert*' methods may be clearer for non-native English speakers, who must be used to the assert keyword already.
>
> On the whole, I would vote for the 'assert' methods, because I get the impression that they are more widely used at this point in time.

I disagree, for reasons see below.

> > The names provided in the latest version are:
> >
> >   fail*()      | assert*()
> >   -------------+------------------------------------
> >   fail()       |
> >   failIf()     |
> >   failUnless() | assert_()
> >                | assertRaises()
> >                | assertEqual(), assertEquals()
> >                | assertNotEqual(), assertNotEquals()
> >
> > Note that while there is a (small!) overlap, neither group completely covers the available functionality.
>
> Right: 'failIf()' is redundant, and there is no 'assertionFailed()'.
>
> I suggest the following standard set:
>
>     fail()
>     assert_()
>     assertRaises()
>     assertEqual()
>     assertNotEqual()
>
> There's no great harm in documenting the synonym 'failUnless()', since 'assert_()' is slightly ugly.

(Which causes inconsistency and confusion, and got this whole thing started.)

> > - Guido suggests using a unique exception for test failures, similar to the use of TestFailed in the Python regression test. This allows the test errors to be distinguished from the traditional use of "assert" in Python modules. For backward compatibility it might be reasonable to subclass from AssertionError.
>
> That doesn't help backward compatibility, because the framework would still have to catch 'AssertionError' in order to detect failures.
>
> I have yet to come across a situation where making this distinction between deeply buried AssertionErrors and deliberately provoked ones is important. The worst that can currently happen is that some buried 'assert' statements can cause a test to look like it's failing rather than erroring.
>
> (I also like the idea that test writers can still use the 'assert' statement for their tests if it suits them.)

And that's exactly what I'm strongly opposing. When "python -O" is used, tests that use the assert statement will not be performed correctly, because no code is compiled for assert statements in that case. It's a nasty trap to fall into, and the fact that the PyUnit framework uses AssertionError makes it a common mistake.

Therefore, I'd like to:

(1) phase out the use of AssertionError in favor of a brand new PyUnit-specific exception

(2) phase out the use of method names starting with assert

> > Having all the fail*()/assert*() methods call fail() to actually raise the appropriate exception would be an easy way to isolate this, and allow TestCase subclasses to do things differently should the need arise.
>
> As a slight simplification to the code, it makes sense. However, it would never make sense for TestCase subclasses to change that (fail()) aspect of their behaviour, since the framework strictly defines which exception is considered a failure.

So that could also be made customizable (maybe through a class variable that can be set by the constructor; or maybe, more crudely, as a global in unittest).

> > - There is no facility to skip a test without failing. This can be reasonable in cases where a platform does not support the full capability of the tested module. An example application of this is in test.test_largefile in the Python regression test -- if a platform does not support large files, it is not considered an error for the tests not to be completed.
>
> I thought about this recently, and I came to the conclusion that tests should be skipped by explicitly omitting them from a test suite, along the following lines:
>
>     def suite():
>         try:
>             import some_tests
>             return unittest.defaultTestLoader.loadTestsFromModule(some_tests)
>         except ImportError:
>             return TestSuite()  # empty suite

Sounds fine to me.

> I don't really like the alternative, which is adding a 'skipped' result. The Python regression test suite is probably unusual in requiring the skipping of some tests. Most test suites are more straightforward.

Disagree. I think any sufficiently large system tends to grow optional components that can only be tested on certain platforms. But the approach you suggest is fine.

> On the whole, at this late stage in the Python 2.1 release, I wouldn't be inclined to change anything much.

Unfortunately, I have to agree.

> Am I making any sense? (Hint: if so, it's never happened before...)

Yes! Glad to meet you, actually. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)
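The "class variable" idea Guido floats above is easy to picture. The sketch below uses the attribute name failureException, which is what later versions of unittest eventually standardised on; at the time of this thread no name had been agreed, and the DOM-flavoured class names are purely illustrative:

    import unittest

    class DOMTestFailure(AssertionError):
        # Hypothetical project-specific failure exception; subclassing
        # AssertionError keeps tooling that catches AssertionError working.
        pass

    class DOMTestCase(unittest.TestCase):
        # fail() and the fail*/assert* helpers raise whatever this names.
        failureException = DOMTestFailure

        def test_example(self):
            self.assertEqual(1 + 1, 2)   # would raise DOMTestFailure on mismatch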
From: Steve P. <ste...@ya...> - 2001-04-11 13:59:56
Hi Fred, Guido and all,

Fred L. Drake, Jr. wrote:
> I'd like to see one set or the other become recommended usage, and offer names for all the functions. Using the two sets of names is confusing, especially since one set is the logical negation of the other. I'd also like to see only one of assert*Equal() and assert*Equals() be documented; having two names so similar seems unnecessary.
>
> Guido has indicated a preference for the fail*() names; I have no preference either way.

I agree that recommending one consistent set of names is a good idea. The 'fail' methods have been around longer, but I think that the 'assert*' methods may be clearer for non-native English speakers, who must be used to the assert keyword already.

On the whole, I would vote for the 'assert' methods, because I get the impression that they are more widely used at this point in time.

> The names provided in the latest version are:
>
>   fail*()      | assert*()
>   -------------+------------------------------------
>   fail()       |
>   failIf()     |
>   failUnless() | assert_()
>                | assertRaises()
>                | assertEqual(), assertEquals()
>                | assertNotEqual(), assertNotEquals()
>
> Note that while there is a (small!) overlap, neither group completely covers the available functionality.

Right: 'failIf()' is redundant, and there is no 'assertionFailed()'.

I suggest the following standard set:

    fail()
    assert_()
    assertRaises()
    assertEqual()
    assertNotEqual()

There's no great harm in documenting the synonym 'failUnless()', since 'assert_()' is slightly ugly.

> - Guido suggests using a unique exception for test failures, similar to the use of TestFailed in the Python regression test. This allows the test errors to be distinguished from the traditional use of "assert" in Python modules. For backward compatibility it might be reasonable to subclass from AssertionError.

That doesn't help backward compatibility, because the framework would still have to catch 'AssertionError' in order to detect failures.

I have yet to come across a situation where making this distinction between deeply buried AssertionErrors and deliberately provoked ones is important. The worst that can currently happen is that some buried 'assert' statements can cause a test to look like it's failing rather than erroring.

(I also like the idea that test writers can still use the 'assert' statement for their tests if it suits them.)

> Having all the fail*()/assert*() methods call fail() to actually raise the appropriate exception would be an easy way to isolate this, and allow TestCase subclasses to do things differently should the need arise.

As a slight simplification to the code, it makes sense. However, it would never make sense for TestCase subclasses to change that (fail()) aspect of their behaviour, since the framework strictly defines which exception is considered a failure.

> - There is no facility to skip a test without failing. This can be reasonable in cases where a platform does not support the full capability of the tested module. An example application of this is in test.test_largefile in the Python regression test -- if a platform does not support large files, it is not considered an error for the tests not to be completed.

I thought about this recently, and I came to the conclusion that tests should be skipped by explicitly omitting them from a test suite, along the following lines:

    def suite():
        try:
            import some_tests
            return unittest.defaultTestLoader.loadTestsFromModule(some_tests)
        except ImportError:
            return TestSuite()  # empty suite

I don't really like the alternative, which is adding a 'skipped' result. The Python regression test suite is probably unusual in requiring the skipping of some tests. Most test suites are more straightforward.

On the whole, at this late stage in the Python 2.1 release, I wouldn't be inclined to change anything much.

Am I making any sense? (Hint: if so, it's never happened before...)

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-11 12:11:32
I have a few comments on PyUnit that have bubbled up as I've been writing the documentation for the Python Library Reference and have used it in more projects.

- Naming consistency: The TestCase class defines several methods for use by the individual test methods. These have names like assert*() and fail*(). Most of these methods have a name in only one of these groups, but assert_() and failUnless() are the same method. assertEqual() and assertEquals() are also synonyms, but from the same group, and assertNotEqual() and assertNotEquals() are a similar pair.

  I'd like to see one set or the other become recommended usage, and offer names for all the functions. Using the two sets of names is confusing, especially since one set is the logical negation of the other. I'd also like to see only one of assert*Equal() and assert*Equals() be documented; having two names so similar seems unnecessary.

  Guido has indicated a preference for the fail*() names; I have no preference either way.

  The names provided in the latest version are:

    fail*()      | assert*()
    -------------+------------------------------------
    fail()       |
    failIf()     |
    failUnless() | assert_()
                 | assertRaises()
                 | assertEqual(), assertEquals()
                 | assertNotEqual(), assertNotEquals()

  Note that while there is a (small!) overlap, neither group completely covers the available functionality.

- Guido suggests using a unique exception for test failures, similar to the use of TestFailed in the Python regression test. This allows the test errors to be distinguished from the traditional use of "assert" in Python modules. For backward compatibility it might be reasonable to subclass from AssertionError.

  Having all the fail*()/assert*() methods call fail() to actually raise the appropriate exception would be an easy way to isolate this, and allow TestCase subclasses to do things differently should the need arise.

- There is no facility to skip a test without failing. This can be reasonable in cases where a platform does not support the full capability of the tested module. An example application of this is in test.test_largefile in the Python regression test -- if a platform does not support large files, it is not considered an error for the tests not to be completed.

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
From: Steve P. <ste...@ya...> - 2001-04-10 14:26:13
Jim Vickroy wrote:
> It appears that Py-unit creates a separate process (rather than thread) for test cases. Is that correct?

Nope. There are no additional threads or processes created by PyUnit. In what context are you seeing peculiar behaviour?

(Note that if you are testing multi-threaded programs on Linux, each thread will show up as a separate process in the output of 'ps')

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Jim V. <Jim...@no...> - 2001-04-10 14:17:21
It appears that Py-unit creates a separate process (rather than thread) for test cases. Is that correct?

Thanks,

-- jv
From: Steve P. <ste...@ya...> - 2001-04-06 14:15:12
Fred L. Drake, Jr. wrote:
> Also, is anyone working on coverage tools that can be used with PyUnit, and can be contributed to the project?

Yes, me! I'll try to use Neil Schemenauer's 'code_coverage.py'. I investigated it a while ago, but I want to be careful with the PyUnit integration because I don't want to break PyUnit's compatibility with Jython -- 'code_coverage.py' relies on inspecting code objects, and therefore does not work with Jython.

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-06 13:14:54
Michal Wallace writes:
> Have you actually tried this? The profiler is usually blazingly fast and has almost no overhead whatsoever (there's hooks for it directly in the VM). I run it as part of my cgi wrapper when I'm developing, and it's quite comfortable. Even if you had a ton of test cases, I would think it should take a half a second or so at the worst.

Steve Purcell writes:
> I based my comment on some experiences I had with coverage measurement; there, the callback overhead is much higher because a callback is made for each executed line, rather than for each function, as is the case when profiling.
>
> So on reflection, "significantly slower" is probably significantly overstated. :-)

I'd say that profiling is less of a performance hit than coverage analysis, but it still sucks cycles like a mad dog. I say this based on trying to improve performance in Grail, which requires a lot of work to get a page displayed.

The profiler provides a number of classes that can be used to collect information. There's even a HotProfile class which is touted as relatively fast, but the information it collects is simpler (which is why it is faster). Unfortunately, the data structure it creates is not compatible with the analysis tools in the pstats module. ;(

It may be that some post-processing on the data can be used to convert it to the format used by pstats. Another possibility is to write the data collector in C, and then push the data into Python structures after the measurement is complete.

Is anyone working on ideas like these? A *lot* of people can benefit from the work, and anyone serious about performance would want to use the profiler if it didn't hurt as bad as it does.

Also, is anyone working on coverage tools that can be used with PyUnit, and can be contributed to the project?

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
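For reference, the standard library later grew exactly the C data collector Fred asks about here (cProfile, added in Python 2.5). A rough sketch of profiling a test run with it follows; the module name "my_tests" is a placeholder, not something from this thread:

    import cProfile
    import pstats
    import unittest

    def run_suite():
        # "my_tests" stands in for whichever module holds the tests.
        suite = unittest.defaultTestLoader.loadTestsFromName("my_tests")
        unittest.TextTestRunner(verbosity=0).run(suite)

    if __name__ == "__main__":
        cProfile.run("run_suite()", "pyunit.prof")        # collect timing data
        stats = pstats.Stats("pyunit.prof")
        stats.sort_stats("cumulative").print_stats(20)    # 20 costliest calls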
From: Steve P. <ste...@ya...> - 2001-04-06 11:24:58
Michal Wallace wrote:
> On Fri, 6 Apr 2001, Steve Purcell wrote:
> > Yes, as Michal notes, using the profiler is the easiest option, though it will cause the tested code to run significantly slower.
>
> Have you actually tried this? The profiler is usually blazingly fast and has almost no overhead whatsoever (there's hooks for it directly in the VM). I run it as part of my cgi wrapper when I'm developing, and it's quite comfortable. Even if you had a ton of test cases, I would think it should take a half a second or so at the worst.

I based my comment on some experiences I had with coverage measurement; there, the callback overhead is much higher because a callback is made for each executed line, rather than for each function, as is the case when profiling.

So on reflection, "significantly slower" is probably significantly overstated. :-)

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
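The per-line callback Steve describes is the sys.settrace hook. The toy below (not Neil Schemenauer's code_coverage.py; the helper and function names are made up) illustrates why that is so much more expensive than the per-call profiler hook: it fires once for every executed line.

    import sys

    def make_line_counter(counts):
        # counts maps (filename, line number) -> number of executions
        def tracer(frame, event, arg):
            if event == "line":
                key = (frame.f_code.co_filename, frame.f_lineno)
                counts[key] = counts.get(key, 0) + 1
            return tracer   # keep tracing inside this frame
        return tracer

    def work():
        # Stand-in for running the real test suite.
        total = 0
        for i in range(3):
            total += i
        return total

    counts = {}
    sys.settrace(make_line_counter(counts))
    try:
        work()
    finally:
        sys.settrace(None)

    for (filename, lineno), hits in sorted(counts.items()):
        print("%s:%d executed %d time(s)" % (filename, lineno, hits))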
From: Michal W. <sa...@ma...> - 2001-04-06 11:04:20
On Fri, 6 Apr 2001, Steve Purcell wrote:
> Yes, as Michal notes, using the profiler is the easiest option, though it will cause the tested code to run significantly slower.

Have you actually tried this? The profiler is usually blazingly fast and has almost no overhead whatsoever (there's hooks for it directly in the VM). I run it as part of my cgi wrapper when I'm developing, and it's quite comfortable. Even if you had a ton of test cases, I would think it should take a half a second or so at the worst.

Cheers,

- Michal
------------------------------------------------------------------------
www.manifestation.com  www.sabren.net  www.linkwatcher.com  www.zike.net
------------------------------------------------------------------------
From: Steve P. <ste...@ya...> - 2001-04-06 08:55:23
Luigi Ballabio wrote:
> The question is: given a suite of test cases, would it be possible and/or of interest to anyone to display the time elapsed for running the single test cases, besides the total time for the suite? If enough interest exists, could it be made a (command-line) option?
> I know the above can be easily accomplished by running the cases one by one. However, I would like to keep the concept of "test suite for a module" as opposed to "set of unrelated tests for some functionalities of a module".

Yes, as Michal notes, using the profiler is the easiest option, though it will cause the tested code to run significantly slower. Other than that, writing a custom TestResult subclass and a corresponding TestRunner would allow you to record that kind of timing information.

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
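Steve's custom-TestResult suggestion is straightforward to sketch. The class below is illustrative only (TimingTestResult is not part of PyUnit) and relies only on the startTest()/stopTest() hooks that TestResult already provides:

    import time
    import unittest

    class TimingTestResult(unittest.TestResult):
        # Records wall-clock seconds for each test as it runs.
        def __init__(self):
            unittest.TestResult.__init__(self)
            self.timings = []   # list of (test description, seconds)

        def startTest(self, test):
            unittest.TestResult.startTest(self, test)
            self._started = time.time()

        def stopTest(self, test):
            self.timings.append((str(test), time.time() - self._started))
            unittest.TestResult.stopTest(self, test)

    # Usage sketch: "my_tests" is a placeholder module name.
    #   suite = unittest.defaultTestLoader.loadTestsFromName("my_tests")
    #   result = TimingTestResult()
    #   suite.run(result)
    #   for name, seconds in result.timings:
    #       print("%-60s %.3fs" % (name, seconds))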
From: Michal W. <sa...@ma...> - 2001-04-04 13:41:24
On Wed, 4 Apr 2001, Luigi Ballabio wrote:
> Hi all,
> feel free to blame me for not diving into the code and finding that I can implement this myself by subclassing TestResult :)
> The question is: given a suite of test cases, would it be possible and/or of interest to anyone to display the time elapsed for running the single test cases, besides the total time for the suite? If enough interest exists, could it be made a (command-line) option?
> I know the above can be easily accomplished by running the cases one by one. However, I would like to keep the concept of "test suite for a module" as opposed to "set of unrelated tests for some functionalities of a module".

Luigi,

Why not run the whole thing under the python profiler, and then only print results for your test cases?

Cheers,

- Michal
------------------------------------------------------------------------
www.manifestation.com  www.sabren.net  www.linkwatcher.com  www.zike.net
------------------------------------------------------------------------
From: Luigi B. <bal...@ma...> - 2001-04-04 11:07:17
Hi all,

Feel free to blame me for not diving into the code and finding that I can implement this myself by subclassing TestResult :)

The question is: given a suite of test cases, would it be possible and/or of interest to anyone to display the time elapsed for running the single test cases, besides the total time for the suite? If enough interest exists, could it be made a (command-line) option?

I know the above can be easily accomplished by running the cases one by one. However, I would like to keep the concept of "test suite for a module" as opposed to "set of unrelated tests for some functionalities of a module".

Bye,
Luigi