pyunit-interest Mailing List for PyUnit testing framework (Page 13)
From: Gareth R. <gd...@ra...> - 2001-04-24 14:28:22
Here's some discussion of PyUnit's handling of data-driven tests. I hope to
persuade you that improvement is needed.

By "data-driven tests" I mean groups of tests handled by the same code but
driven by tables or files of data. For example, a test case that reads
records from two databases and checks that they are consistent. Or a test
case that repeatedly calls a function under test with arguments taken from a
table.

Requirements for data-driven tests:

1. You mustn't have to know in advance how many tests will be made (you
   can't when checking files or databases).

2. You must be able to find as many errors as possible each time the test
   case is run: this is particularly important when the test case is
   expensive to set up.

Here are some suggestions for implementing data-driven tests, with analysis:

1. Write an ordinary unittest test case. This will stop when it finds the
   first error, thus failing requirement 2.

2. Declare a method for each test case, perhaps using tricks with 'eval', as
   in the manytests.py example distributed with PyUnit 1.3.1. This is only
   possible if you know in advance how many test cases there will be,
   failing requirement 1. Also the volume of output from the test runner is
   likely to be unhelpful and annoying (when you have 1000 test cases you
   don't want to see a report for each case, merely a report for each
   failure and a summary of the cases executed).

3. Collect errors as they are found yourself and then report them all at
   once at the end of the test. This loses much of the convenience of
   PyUnit.

4. Extend PyUnit by providing a mechanism for specifying that a test case be
   called multiple times. This might be done by having a subclass of
   unittest.TestCase which repeatedly calls its test method, with a numeric
   argument giving the number of times it's been called so far, and stops
   when the method returns false. For table-driven tests this might work,
   but for tests driven by files or databases I think it will make the tests
   very hard to write, as a test that is naturally written as a loop will
   have to be broken up into separate calls to the test method in order to
   fit this approach. Also this design means that it will be very easy to
   make mistakes that result in infinite loops.

5. Extend PyUnit by providing methods for reporting errors without aborting
   the test case. This could be done quite simply by providing a way for a
   test case to call the addFailure() method of the TestResult object. For
   example, you could add the line

       self.__result = result

   near the start of the TestCase.__call__ method, and add a method like
   this to the TestCase class:

       def addFailure(self, msg):
           # Raise an exception and catch it immediately so that we can
           # get a traceback.
           try:
               raise self.failureException, msg
           except self.failureException:
               self.__result.addFailure(self, self.__exc_info())

   A disadvantage of this approach is that it breaks the one-to-one
   correspondence between tests and failures. If you have 10 test cases and
   5 failures, it doesn't mean that you've had 5 successes: the 5 failures
   may have come from a single test case. I don't think that this
   correspondence is particularly important, but if you think there are
   users who will want to write data-driven tests that preserve the
   correspondence, you could have methods like addFailureIf(expr, msg) and
   so on that increment the number of test cases in the TestResult object
   when they are called, regardless of whether they succeed or fail.

I think that solution 5 has the most to recommend it.
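To illustrate, here is a sketch of how a table-driven test might use such a
method (hypothetical code: it assumes the addFailure() extension described
above has been added to TestCase, and the test table is invented):

    import unittest

    class SquareTest(unittest.TestCase):
        # Hypothetical table of (input, expected) pairs driving one
        # test method.
        cases = [(0, 0), (1, 1), (2, 4), (3, 9)]

        def testSquares(self):
            for arg, expected in self.cases:
                actual = arg * arg
                if actual != expected:
                    # Report this record's failure and carry on, instead
                    # of aborting the whole test case at the first bad
                    # record.
                    self.addFailure("square(%d): expected %d, got %d"
                                    % (arg, expected, actual))

The loop runs to completion regardless of how many records fail, which
satisfies both requirements above.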
I've successfully used this approach in my own unit tests. I would like to
see it, or something like it, included in future releases of PyUnit.
Solution 4 could be offered as another approach. However, I would need to
develop that solution and gain some experience using it before I was able to
recommend it.
From: Chuck E. <ec...@mi...> - 2001-04-23 18:57:46
At 12:47 PM 4/23/2001 -0600, Jim Vickroy wrote:
>Python is supposed to be a higher-level language. In my view, it would be
>nice to avoid having to know the details of a particular vendor's
>optimization implementations. To that end, I do not see the relevance of
>how GCC chooses to make optimizations available.

It would be relevant if the two GCC issues that I stated (cost of time and
bugs) ever cropped up in future versions of Python. While I hope they don't,
if an aggressive "optimize Python" effort gave me a substantial performance
improvement at the cost of a few flags, I would pay the price.

Even under that scenario you don't _have_ to know anything. Just type
"python foo.py" and go.

-Chuck
From: Jim V. <Jim...@no...> - 2001-04-23 18:47:59
Python is supposed to be a higher-level language. In my view, it would be
nice to avoid having to know the details of a particular vendor's
optimization implementations. To that end, I do not see the relevance of how
GCC chooses to make optimizations available.

Chuck Esterbrook wrote:
> At 05:30 PM 4/23/2001 +0200, Steve Purcell wrote:
> >Hence my suggestion that -O be used *only* to remove asserts. Then you
> >and I would not use '-O' ever, and the interpreter would deal with
> >optimisations in a manner transparent to our code.
>
> GCC didn't make optimization transparent for these two reasons:
>
> - Different degrees of optimization cost so much extra time during
>   compilation that you didn't want them during development.
>
> - Bugs were sometimes discovered in optimizations, so you wanted to turn
>   them off.
>
> Perhaps these problems could be avoided in Python, but more often than
> not I like to be able to turn off the "fancy stuff" when it's not working
> out for me.
>
> -Chuck
From: Chuck E. <ec...@mi...> - 2001-04-23 18:24:30
At 05:30 PM 4/23/2001 +0200, Steve Purcell wrote:
>Hence my suggestion that -O be used *only* to remove asserts. Then you and
>I would not use '-O' ever, and the interpreter would deal with
>optimisations in a manner transparent to our code.

GCC didn't make optimization transparent for these two reasons:

- Different degrees of optimization cost so much extra time during
  compilation that you didn't want them during development.

- Bugs were sometimes discovered in optimizations, so you wanted to turn
  them off.

Perhaps these problems could be avoided in Python, but more often than not I
like to be able to turn off the "fancy stuff" when it's not working out for
me.

-Chuck
From: Steve P. <ste...@ya...> - 2001-04-23 15:50:51
Fred L. Drake, Jr. wrote:
>     st = parser.suite(open("module.py").read())
>     pprint.pprint(st.totuple(1))
>
> The last item in each tuple that represents a concrete token gives
> the line number for that token (for multi-line tokens, it gives the
> line the token *ends* on). It's a pretty pathetic interface, but the
> information is there.

Thanks Fred, but I don't think even this is enough information, because it
doesn't tell whether or not the token will result in that line getting a
SET_LINENO instruction.

I seem to remember happening across that '1' parameter a while back, and
when I compared the results side-by-side with the SET_LINENO map for a given
file, it differed quite significantly. I've attached an example script which
shows the differences (it's an experimental hack I made ages ago).

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Steve P. <ste...@ya...> - 2001-04-23 15:32:59
Chuck Esterbrook wrote:
> At 04:37 PM 4/23/2001 +0200, Steve Purcell wrote:
> >If the interpreter needs to be told what optimisations to make by the
> >user, there is something wrong, and opportunities are then rife for code
> >to appear that *requires* certain optimisations. Next, there'd have to
> >be pragmas, right?
>
> I disagree. What kind of code are you writing that *requires* an
> optimization? Presumably an optimization is a change in code generation
> that results in exactly the same program semantics with (hopefully)
> faster execution.

That was my point. I don't think code should ever be tied to interpreter
optimisations. If user-controllable optimisations were added to the
interpreter, people would enter the mindset of "module x requires
optimisations y,z", and then next they would automatically but illogically
scream for the kind of pragmas that proved useless in Perl:

    use less memory;

Yeuch.

> Basically, if Python continues to couple -O with "disable assert", then I
> will never be able to use -O even in production, because the performance
> gain of removing assert is practically nothing and the value of catching
> errors early is substantial.
>
> If currently -O doesn't provide much performance gain, then I guess it's
> not a big deal. But if a future version does, then I will want to use it
> while keeping my asserts.

Hence my suggestion that -O be used *only* to remove asserts. Then you and I
would not use '-O' ever, and the interpreter would deal with optimisations
in a manner transparent to our code.

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-23 15:06:41
Steve Purcell writes:
> (The line number info doesn't seem to be available from the
> 'parser' module's ASTs.)

The methods of the syntax-tree objects that convert to tuples/lists accept a
parameter that tells them to include an additional item that gives the line
number. So you could do:

    import parser
    import pprint

    st = parser.suite(open("module.py").read())
    pprint.pprint(st.totuple(1))

The last item in each tuple that represents a concrete token gives the line
number for that token (for multi-line tokens, it gives the line the token
*ends* on). It's a pretty pathetic interface, but the information is there.

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
From: Jim V. <Jim...@no...> - 2001-04-23 15:03:40
Steve Purcell wrote:
.....
> I might be alone in thinking that one of the crappy things about Perl
> compared to Python is its plethora of command line switches, but I've
> never thought about Python's command line switches at all, and I'm happy
> about that.

Me too!

> Wouldn't the best (and simplest) option be to define '-O' to mean
> '__debug__ == 0 and assertions are disabled', and let the interpreter
> take care of optimisation? Java works that way: it declares that 'final'
> and 'private' methods *may* be optimised away, but allows the user no
> control of optimisation. (Other than offering a don't-blow-my-foot-off
> option '-nojit', of course.)

This seems reasonable to me also.

> If the interpreter needs to be told what optimisations to make by the
> user, there is something wrong, and opportunities are then rife for code
> to appear that *requires* certain optimisations. Next, there'd have to be
> pragmas, right?
>
> -Steve
From: Chuck E. <ec...@mi...> - 2001-04-23 14:58:26
At 04:37 PM 4/23/2001 +0200, Steve Purcell wrote:
>Guido van Rossum wrote:
> > I'll admit that -O is pretty bogus, but I really don't think that
> > "assertions disabled" and "generating optimized code" should be tied
> > forever to each other (just because they were once in C). Once we
> > have a decent optimizer, I can certainly see providing more control
> > over code generation options.
>
>I imagine that adding more command line switches for such variable
>optimisations would not be a benefit to the average user. I've never even
>seen any Python program that usually runs under '-O'. I imagine such
>beasts are rare; either Python is fast enough, or modules are re-written
>in C. '-O' doesn't add much in the way of middle ground.

Currently. But presumably a future version of Python could work much harder
under a -O directive to produce a faster .pyo.

>I might be alone in thinking that one of the crappy things about Perl
>compared to Python is its plethora of command line switches, but I've
>never thought about Python's command line switches at all, and I'm happy
>about that.
>
>Wouldn't the best (and simplest) option be to define '-O' to mean
>'__debug__ == 0 and assertions are disabled', and let the interpreter
>take care of optimisation? Java works that way: it declares that 'final'
>and 'private' methods *may* be optimised away, but allows the user no
>control of optimisation. (Other than offering a don't-blow-my-foot-off
>option '-nojit', of course.)
>
>If the interpreter needs to be told what optimisations to make by the
>user, there is something wrong, and opportunities are then rife for code
>to appear that *requires* certain optimisations. Next, there'd have to be
>pragmas, right?

I disagree. What kind of code are you writing that *requires* an
optimization? Presumably an optimization is a change in code generation that
results in exactly the same program semantics with (hopefully) faster
execution.

Consequently, removing an assertion is not a pure optimization. My program
will definitely behave differently without assertions because undesirable
conditions will progress past a point that they otherwise would not have.
Consequently, I won't be sure how my program will behave in such a
situation. Most likely I'll just get an exception, but perhaps outside
resources such as files or databases will be put in a state inconsistent
with what my program expects. Furthermore, I won't get the usual e-mails to
"su...@my..." that say "There are no users defined in the database" with
included information such as database name, user name, etc. Instead I'll get
something like "IndexError" with a deep traceback.

Basically, if Python continues to couple -O with "disable assert", then I
will never be able to use -O even in production, because the performance
gain of removing assert is practically nothing and the value of catching
errors early is substantial.

If currently -O doesn't provide much performance gain, then I guess it's not
a big deal. But if a future version does, then I will want to use it while
keeping my asserts.

-Chuck
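To make Chuck's scenario concrete, here is a minimal sketch (an invented
example; the module, data, and names are hypothetical) of the kind of
assertion that '-O' strips:

    def fetch_users(database):
        # Hypothetical lookup that returns an empty list when the
        # database is misconfigured.
        return database.get('users', [])

    database = {'name': 'prod', 'users': []}
    users = fetch_users(database)

    # "python example.py" stops here with a meaningful message that could
    # be mailed to a support address. "python -O example.py" compiles the
    # assert away, so the bad state progresses until the subscript below
    # raises a bare IndexError with a deep traceback.
    assert users, "There are no users defined in the database %r" \
           % database['name']

    first_user = users[0]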
From: Steve P. <ste...@ya...> - 2001-04-23 14:40:19
Jeremy Hylton wrote:
> > (*) Okay, SET_LINENO instructions are omitted, too, but it is hard to
> > imagine anyone having good cause to worry about the performance
> > implications of SET_LINENO.

I didn't know that. Skip Montanaro's code_coverage module relies on
SET_LINENO instructions in order to find out which lines of a module's
source code will potentially be passed to a 'settrace' function. That would
seem to mean that code run under '-O' could not be coverage tested. (The
line number info doesn't seem to be available from the 'parser' module's
ASTs.) That's bad news...

Guido van Rossum wrote:
> I'll admit that -O is pretty bogus, but I really don't think that
> "assertions disabled" and "generating optimized code" should be tied
> forever to each other (just because they were once in C). Once we
> have a decent optimizer, I can certainly see providing more control
> over code generation options.

I imagine that adding more command line switches for such variable
optimisations would not be a benefit to the average user. I've never even
seen any Python program that usually runs under '-O'. I imagine such beasts
are rare; either Python is fast enough, or modules are re-written in C. '-O'
doesn't add much in the way of middle ground.

I might be alone in thinking that one of the crappy things about Perl
compared to Python is its plethora of command line switches, but I've never
thought about Python's command line switches at all, and I'm happy about
that.

Wouldn't the best (and simplest) option be to define '-O' to mean
'__debug__ == 0 and assertions are disabled', and let the interpreter take
care of optimisation? Java works that way: it declares that 'final' and
'private' methods *may* be optimised away, but allows the user no control of
optimisation. (Other than offering a don't-blow-my-foot-off option '-nojit',
of course.)

If the interpreter needs to be told what optimisations to make by the user,
there is something wrong, and opportunities are then rife for code to appear
that *requires* certain optimisations. Next, there'd have to be pragmas,
right?

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
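For reference, line coverage via 'settrace' works roughly like the sketch
below (an illustrative reconstruction, not Skip Montanaro's actual module).
It records each 'line' event the interpreter reports; those events are
driven by SET_LINENO instructions, which is why code compiled under '-O'
starves such a tool of data:

    import sys

    executed_lines = {}

    def tracer(frame, event, arg):
        # Record every (file, line) pair the interpreter reports as
        # executed.
        if event == 'line':
            executed_lines[(frame.f_code.co_filename, frame.f_lineno)] = 1
        return tracer

    def demo():
        x = 1
        y = x + 1
        return y

    sys.settrace(tracer)
    demo()
    sys.settrace(None)
    print executed_lines.keys()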
From: Guido v. R. <gu...@di...> - 2001-04-23 13:32:06
> >>>>> "CE" == Chuck Esterbrook <ec...@mi...> writes:
>
>   CE> Finally, consider the flip side: Is there any reason to give to
>   CE> developers that their assertions MUST be disabled for
>   CE> optimization?
>
> I don't understand this question. The definition of optimization for
> Python is "assertions are disabled."(*) So "assertions must be disabled"
> is tautological.
>
> Jeremy
>
> (*) Okay, SET_LINENO instructions are omitted, too, but it is hard to
> imagine anyone having good cause to worry about the performance
> implications of SET_LINENO.

I'll admit that -O is pretty bogus, but I really don't think that
"assertions disabled" and "generating optimized code" should be tied forever
to each other (just because they were once in C). Once we have a decent
optimizer, I can certainly see providing more control over code generation
options.

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Jeremy H. <je...@al...> - 2001-04-23 12:46:58
>>>>> "CE" == Chuck Esterbrook <ec...@mi...> writes:

  CE> Finally, consider the flip side: Is there any reason to give to
  CE> developers that their assertions MUST be disabled for
  CE> optimization?

I don't understand this question. The definition of optimization for Python
is "assertions are disabled."(*) So "assertions must be disabled" is
tautological.

Jeremy

(*) Okay, SET_LINENO instructions are omitted, too, but it is hard to
imagine anyone having good cause to worry about the performance implications
of SET_LINENO.
From: Jeremy H. <je...@al...> - 2001-04-23 12:46:58
>>>>> "SP" == Steve Purcell <ste...@ya...> writes:

  SP> Thanks Guido, points duly noted. I propose the following:

  SP> - Add 'failUnlessRaises()' to TestCase
  SP> - Add 'failIfEqual()' to TestCase
  SP> - Add 'failUnlessEqual()' to TestCase (and probably
  SP>   'failIfEqual()' too)

These names are awkward because they use a negative expression: "It should
be the case that this code does not raise ValueError." assertRaises() is
clearer because it is in positive form, "It should be the case that this
code raises ValueError."

I'd rather find a synonym for assert than convert everything to fail.
verifyRaises() seems reasonable.

Jeremy
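As a small illustration of the positive form Jeremy describes (an invented
example; assertRaises() is the existing method under discussion):

    import unittest

    class ParseTest(unittest.TestCase):
        def testRejectsGarbage(self):
            # Reads as: "it should be the case that this code raises
            # ValueError."
            self.assertRaises(ValueError, int, "not a number")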
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-12 18:51:57
Steve Purcell writes:
> I decided against that; the extra level of traceback output is unsightly.
> The failure exception is now parameterised, and the framework is not
> written such that it would be safe and/or desirable to override fail() in
> subclasses.

Guido agrees with you, and I just wasn't thinking about the extra traceback
cruft.

> I also decided against this; fail() does not have a common-sense
> 'assert' counterpart, and 'assertNot' is better served by
> 'assert_(not expr)' for those users who prefer 'assert*' to 'fail*'.

Again, Guido agrees. His expected usage was different, but exactly
equivalent:

    if expr:
        self.fail(...)

(This has the advantage of being just slightly faster as well.)

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
From: Guido v. R. <gu...@di...> - 2001-04-12 18:39:39
> > > assertEquals(a,b)
> >
> > Why even bother with this? I'd say assert_(a == b) is good enough.
>
> Judging by the number of requests I've had for this, it seems that when
> an equality assertion of two objects fails, one almost always wants to
> see what the actual value was. So one ends up writing
>
>     self.assert_(a == b, "%s != %s" % (a, b))
>
> all over the place. Since I added 'assertEquals' I've used it enough that
> I understand why it was requested.

Excellent point. I withdraw the suggestion.

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Steve P. <ste...@ya...> - 2001-04-12 18:10:19
Guido van Rossum wrote:
> > assert_(expr)
>
> This is an ugly name, though (and that's what started this debate).

Hey, it's not my fault you bagged 'assert' first. :-)

> > assertEquals(a,b)
>
> Why even bother with this? I'd say assert_(a == b) is good enough.

Judging by the number of requests I've had for this, it seems that when an
equality assertion of two objects fails, one almost always wants to see what
the actual value was. So one ends up writing

    self.assert_(a == b, "%s != %s" % (a, b))

all over the place. Since I added 'assertEquals' I've used it enough that I
understand why it was requested.

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Guido v. R. <gu...@di...> - 2001-04-12 17:56:04
> Fred L. Drake, Jr. wrote:
> > I notice that it still isn't calling TestCase.fail() in all the
> > fail<Something>() methods -- were we still agreed on that? I can make
> > that change if needed.

Steve:
> I decided against that; the extra level of traceback output is unsightly.

Agreed.

> The failure exception is now parameterised, and the framework is not
> written such that it would be safe and/or desirable to override fail() in
> subclasses.

OK, fine.

> > Since Guido's suggested we document both naming families, we should
> > have a second name for this -- I suggest "assertNot()".
>
> I also decided against this; fail() does not have a common-sense
> 'assert' counterpart, and 'assertNot' is better served by
> 'assert_(not expr)' for those users who prefer 'assert*' to 'fail*'.

Agreed.

> A bit of relevant background: JUnit lives quite happily with the
> following:
>
>     fail()
>     assert(expr)
>     assertEquals(a,b)
>     assertSame(a,b)

But Java doesn't have an assert statement in the language!

> If I were starting PyUnit from scratch, I would provide only:
>
>     fail()
>     assert_(expr)

This is an ugly name, though (and that's what started this debate).

>     assertEquals(a,b)

Why even bother with this? I'd say assert_(a == b) is good enough.

>     assertRaises(exception,func,*args,**kwargs)

This is a useful utility.

But, the problems I pointed out with assert will remain as long as we have
names starting with assert... :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Steve P. <ste...@ya...> - 2001-04-12 17:10:42
Fred L. Drake, Jr. wrote:
> I notice that it still isn't calling TestCase.fail() in all the
> fail<Something>() methods -- were we still agreed on that? I can make
> that change if needed.

I decided against that; the extra level of traceback output is unsightly.
The failure exception is now parameterised, and the framework is not written
such that it would be safe and/or desirable to override fail() in
subclasses.

> > The complete set of fail*() methods, together with their 'assert*'
> > synonyms, is now:
> >
> >     fail()
> >     failIf()
>
> Since Guido's suggested we document both naming families, we should
> have a second name for this -- I suggest "assertNot()".

I also decided against this; fail() does not have a common-sense 'assert'
counterpart, and 'assertNot' is better served by 'assert_(not expr)' for
those users who prefer 'assert*' to 'fail*'.

A bit of relevant background: JUnit lives quite happily with the following:

    fail()
    assert(expr)
    assertEquals(a,b)
    assertSame(a,b)

If I were starting PyUnit from scratch, I would provide only:

    fail()
    assert_(expr)
    assertEquals(a,b)
    assertRaises(exception,func,*args,**kwargs)

Best wishes,

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Fred L. D. Jr. <fd...@ac...> - 2001-04-12 16:34:37
Steve Purcell writes:
> I have now checked the changes into both the Python and PyUnit CVS trees.

I notice that it still isn't calling TestCase.fail() in all the
fail<Something>() methods -- were we still agreed on that? I can make that
change if needed.

> The complete set of fail*() methods, together with their 'assert*'
> synonyms, is now:
>
>     fail()
>     failIf()

Since Guido's suggested we document both naming families, we should have a
second name for this -- I suggest "assertNot()".

-Fred

--
Fred L. Drake, Jr. <fdrake at acm.org>
PythonLabs at Digital Creations
From: Steve P. <ste...@ya...> - 2001-04-12 16:20:17
Guido van Rossum wrote:
> OK, that makes the most sense given our deadline. Sometimes, there's
> more than one way to do it! :-)

Ooh! That's a quote for the record books...

There's *always* more than one way to do it. The skill lies in carefully not
telling people about the ugly ways!

Best wishes,

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Guido v. R. <gu...@di...> - 2001-04-12 16:05:01
> Steve wrote:
> > > I think this can be safely done in time for this week's 2.1 release
> > > (though that's not *my* call to make).
>
> Guido replied:
> > The code freeze is planned to start Thursday (probably Thu afternoon),
> > so you have approximately a day!
>
> I have now checked the changes into both the Python and PyUnit CVS trees.
>
> The complete set of fail*() methods, together with their 'assert*'
> synonyms, is now:
>
>     fail()
>     failIf()
>     failUnless()      -- assert_()
>     failIfEqual()     -- assertNotEqual(), assertNotEquals()
>     failUnlessEqual() -- assertEqual(), assertEquals()
>
> There is a new attribute 'failureException' in the TestCase class; this
> attribute parameterises the exception that indicates test failures, and
> is set to 'AssertionError' by default. Subclasses may override it if they
> wish:
>
>     class TestFailed(Exception): pass
>
>     class StandardTestCase(unittest.TestCase):
>         failureException = TestFailed
>
> or even:
>
>     class StandardTestCase(unittest.TestCase):
>         class failureException(Exception): pass
>
> I will make a new PyUnit release at the weekend that will correspond to
> the 'unittest.py' included in the Python 2.1 final release.
>
> Best wishes,

Thanks, Steve!

> > Then all methods should get new names, right? It's a shame we can't
> > use check (or can we?), otherwise that would be my preference. What
> > does JUnit use?
>
> JUnit uses assert and assertEquals, which is a vote in favour of those
> names.
>
> Why don't we just bite the bullet and document the synonyms? Obviously
> tastes differ significantly in this area.

OK, that makes the most sense given our deadline. Sometimes, there's more
than one way to do it! :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Steve P. <ste...@ya...> - 2001-04-12 09:22:08
Steve wrote:
> > I think this can be safely done in time for this week's 2.1 release
> > (though that's not *my* call to make).

Guido replied:
> The code freeze is planned to start Thursday (probably Thu afternoon),
> so you have approximately a day!

I have now checked the changes into both the Python and PyUnit CVS trees.

The complete set of fail*() methods, together with their 'assert*' synonyms,
is now:

    fail()
    failIf()
    failUnless()      -- assert_()
    failIfEqual()     -- assertNotEqual(), assertNotEquals()
    failUnlessEqual() -- assertEqual(), assertEquals()

There is a new attribute 'failureException' in the TestCase class; this
attribute parameterises the exception that indicates test failures, and is
set to 'AssertionError' by default. Subclasses may override it if they wish:

    class TestFailed(Exception): pass

    class StandardTestCase(unittest.TestCase):
        failureException = TestFailed

or even:

    class StandardTestCase(unittest.TestCase):
        class failureException(Exception): pass

I will make a new PyUnit release at the weekend that will correspond to the
'unittest.py' included in the Python 2.1 final release.

Best wishes,

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
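As a usage sketch of the methods announced above (the test class and the
assertions in it are invented for illustration):

    import unittest

    class ArithmeticTest(unittest.TestCase):
        def testAddition(self):
            self.failUnless(1 + 1 == 2)     # synonym: assert_()
            self.failUnlessEqual(2 + 2, 4)  # synonyms: assertEqual(),
                                            #           assertEquals()
            self.failIfEqual(2 + 2, 5)      # synonyms: assertNotEqual(),
                                            #           assertNotEquals()
            self.failIf(1 + 1 == 3)         # no 'assert*' synonym

    if __name__ == '__main__':
        unittest.main()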
From: Steve P. <ste...@ya...> - 2001-04-12 07:09:23
Guido van Rossum wrote:
> > One possibility is verify. Another is check, although that word
> > already has a meaning for unit testing.
>
> Then all methods should get new names, right? It's a shame we can't
> use check (or can we?), otherwise that would be my preference. What
> does JUnit use?

JUnit uses assert and assertEquals, which is a vote in favour of those
names.

Why don't we just bite the bullet and document the synonyms? Obviously
tastes differ significantly in this area.

-Steve

--
Steve Purcell, Pythangelist
Get testing at http://pyunit.sourceforge.net/
Any opinions expressed herein are my own and not necessarily those of Yahoo
From: Guido v. R. <gu...@di...> - 2001-04-11 19:14:53
> >The problem is that many test suites (e.g. the Zope tests for
> >PageTemplates :-) use assert statements instead of calls to fail*() /
> >assert*() methods.
>
> So in the future do I have to switch from:
>
>     assert sess.hasFragments(self.userPages())
>
> to:
>
>     if not sess.hasFragments(self.userPages()):
>         self.fail()
> ?

No, you would write

    self.verifyTrue(sess.hasFragments(self.userPages()))

> Here's a proposal that I know will get shot down. :-)
>
> How about a "test" statement?
>
>     test sess.hasFragments(self.userPages())
>
> - Works like assert
> - Raises a TestError
> - No flags to turn it off
>
> One "soft" advantage is that it makes the concept of regression testing
> much more prominent. People will ask, "What is this 'test' statement I
> see in the language?" Then they will get a big lecture on unit testing.

Sorry, I don't see any reason why we should change the language for this,
despite your pitch. :-)

> (Aside: Glad to hear that optimizations and assertions will be separated
> in the future. Thanks. My favorite feature of Python over Objective-C is
> that Python gets better over time. (Objective-C stagnated.))

It's a dead language.

--Guido van Rossum (home page: http://www.python.org/~guido/)
From: Chuck E. <ec...@mi...> - 2001-04-11 18:54:07
At 02:23 PM 4/11/2001 -0500, Guido van Rossum wrote:
>The problem is that many test suites (e.g. the Zope tests for
>PageTemplates :-) use assert statements instead of calls to fail*() /
>assert*() methods.

So in the future do I have to switch from:

    assert sess.hasFragments(self.userPages())

to:

    if not sess.hasFragments(self.userPages()):
        self.fail()
?

Here's a proposal that I know will get shot down. :-)

How about a "test" statement?

    test sess.hasFragments(self.userPages())

- Works like assert
- Raises a TestError
- No flags to turn it off

One "soft" advantage is that it makes the concept of regression testing much
more prominent. People will ask, "What is this 'test' statement I see in the
language?" Then they will get a big lecture on unit testing.

(Aside: Glad to hear that optimizations and assertions will be separated in
the future. Thanks. My favorite feature of Python over Objective-C is that
Python gets better over time. (Objective-C stagnated.))

-Chuck