perlunit-users Mailing List for PerlUnit (Page 2)
Status: Beta
Brought to you by: mca1001
From: Roger D. <rog...@gl...> - 2006-06-07 11:14:00

I've created a script, much like TkRunner.pl, TestRunner, etc., to run individual test suites. Is there a way I can run an individual test from within a test suite in a similar manner?
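Since PerlUnit mirrors JUnit, one answer (an untested sketch against the stock Test::Unit API; the class and method names are illustrative) is to build a suite by hand containing only the named test, because `Test::Unit::TestCase->new($name)` creates a case that runs just that one method:

```perl
# Sketch only: run a single test method from a suite.
# Assumes MyModule::Test subclasses Test::Unit::TestCase
# and has a test_foo method.
use Test::Unit::TestRunner;
use Test::Unit::TestSuite;
use MyModule::Test;

my $suite = Test::Unit::TestSuite->empty_new("just one test");
$suite->add_test(MyModule::Test->new("test_foo"));

# do_run takes a suite object directly, where start() takes a class name
Test::Unit::TestRunner->new->do_run($suite);
```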
From: Roger D. <rog...@gl...> - 2006-05-10 16:16:32

You might try the files attached. These were faked up from the tests I'm currently writing, so I can't guarantee they'll work, but they're close to the truth.

Roger

At 10/05/2006 12:07:31, per...@li... wrote:
>Hi everyone,
>
>I'm new here, so a quick hello.
>
>I was just wondering if anyone had any examples of a simple setup using
>TestSuite integrated from start to finish, maybe just running one easy
>TestCase?
>
>Thanks in advance for your help.
>
>--
>With regards,
>
>Andrew Beaton
>
>[...]
From: Andrew B. <and...@in...> - 2006-05-10 11:07:24

Hi everyone,

I'm new here, so a quick hello. I was just wondering if anyone had any examples of a simple setup using TestSuite integrated from start to finish, maybe just running one easy TestCase?

Thanks in advance for your help.

--
With regards,

Andrew Beaton

Information in this electronic mail is confidential and may be legally privileged. It is intended solely for the addressee. Access to this mail by anyone else is unauthorised. If you are not the intended recipient any use, disclosure, copying or distribution of this message is prohibited and may be unlawful. When addressed to our customers, any information contained in this message is subject to intY's Terms & Conditions. Please rely on your own virus scanning and procedures with regard to any attachments to this message.

Scanned by MailDefender - managed email security from intY - www.maildefender.net
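A minimal start-to-finish setup of the kind asked for above might look like the following sketch (based on the documented Test::Unit::TestSuite subclassing idiom; package and method names here are illustrative, and this is untested):

```perl
# --- a minimal test case --------------------------------------
package ExampleTest;
use strict;
use base 'Test::Unit::TestCase';

sub set_up    { my $self = shift; $self->{answer} = 42 }   # fixture
sub test_easy { my $self = shift; $self->assert_equals(42, $self->{answer}) }

# --- a suite wrapping the case --------------------------------
package ExampleSuite;
use base 'Test::Unit::TestSuite';
sub name          { 'Example suite' }
sub include_tests { qw(ExampleTest) }

# --- run it -----------------------------------------------------
package main;
use Test::Unit::TestRunner;
Test::Unit::TestRunner->new->start('ExampleSuite');
```

Swapping Test::Unit::TestRunner for Test::Unit::TkTestRunner should give the GUI version of the same run.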
From: Roger D. <rog...@gl...> - 2006-05-09 12:36:51

As a lightweight alternative - and the easiest way forward - you can associate a Cascading Style Sheet with an XML document and achieve the same result in most modern browsers.

OTOH, using the Perl Document Object Model API for XML documents will give you a tool which doesn't mean learning a whole new language. See the XML::DOM package (http://search.cpan.org/~tjmather/XML-DOM-1.44/lib/XML/DOM.pm), which comes as standard with ActiveState Perl, and http://www.w3.org/DOM/.

Having used it a couple of times, I would regard the XSLT route as a last resort. I've heard that XSLT 2 is even more of a pain to learn and use than XSLT 1. Unless you are going to be doing XSLT full-time, that is; for small projects I think scripting is best. Client-side XSLT, the last I looked (and this was a while ago), suffered from the vagaries of all the browsers, and just as much hackery to cope with different implementations of XSLT parsers. XSL subsumes XSLT, but is far more complex. Unless you're using this stuff for proper desktop publishing, I'd be very wary.

Roger

At 07/05/2006 00:28:27, per...@li... wrote:
>Somebody asked off-list about generating HTML reports from unit test
>results, possibly using Test::Unit::Runner::XML. Here's the main part
>of my reply, in case it's of use to anyone else.
>
>[...]
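The lightweight CSS route Roger mentions amounts to one processing instruction at the top of the XML report. In this sketch the element names and the `report.css` file name are made up for illustration; only the `xml-stylesheet` instruction itself is standard:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="report.css"?>
<testsuite name="ExampleSuite">
  <testcase name="test_easy"/>
</testsuite>
```

The browser then renders the elements directly according to whatever display rules report.css defines, with no transformation step at all.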
From: Matthew A. <mc...@us...> - 2006-05-08 17:35:33

Somebody asked off-list about generating HTML reports from unit test results, possibly using Test::Unit::Runner::XML. Here's the main part of my reply, in case it's of use to anyone else.

[...]

The two routes that spring to mind are

- write XSL to transform the XML output to HTML
- use the T:U:Runner::XML module as a template to generate HTML instead

I suspect XSL would be easier if you were familiar with both XSL and Perl, and anyway it's a handy tool to know. From Perl, you can run xsltproc as a subprocess or go via XML::XSLT. Another route would be to get the web browser to do the XSLT for you - that would be the neatest solution.

http://www.google.co.uk/search?q=client-side+xslt looks promising but the w3schools link advocates use of ActiveX. I would try http://www.digital-web.com/articles/client_side_xslt/ .

This route probably needs less maintenance, and also you may find parts of the Ant(?) project already have something, because some of the Java folks are keen on XSL too.

Then again, if you just want to hack something together and make it work, it may be easier to start with Andrew's perl module.

[...]

I guess either solution is about the hundred-lines-of-code mark...

[...]

Matthew #8-)
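To make the first route concrete, here is a small hand-written XSLT 1.0 sketch that turns a test report into an HTML table. The `<testsuite>`/`<testcase>`/`<failure>` element names are assumptions for illustration, not the actual Test::Unit::Runner::XML output format, so the XPath expressions would need adjusting to the real report:

```xml
<!-- report.xsl: sketch, assuming testsuite/testcase/failure elements -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/testsuite">
    <html><body>
      <h1><xsl:value-of select="@name"/></h1>
      <table border="1">
        <tr><th>Test</th><th>Result</th></tr>
        <xsl:for-each select="testcase">
          <tr>
            <td><xsl:value-of select="@name"/></td>
            <!-- mark a case FAIL if it contains a failure element -->
            <td><xsl:if test="failure">FAIL</xsl:if>
                <xsl:if test="not(failure)">ok</xsl:if></td>
          </tr>
        </xsl:for-each>
      </table>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```

Run as a subprocess along the lines of `xsltproc report.xsl results.xml > report.html`, or reference it from the XML file with an `xml-stylesheet` processing instruction to let the browser do the work.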
From: Matthew A. <mc...@us...> - 2006-02-26 22:56:00

On Sun, Feb 19, 2006 at 03:24:13PM -0600, Joi Ellis wrote:
> On Sun, 19 Feb 2006, Matthew Astley wrote:
> > On Sat, Feb 18, 2006 at 02:14:13AM -0600, Joi Ellis wrote:
> > It would help the changelog if you could list the bugs fixed, it
> > may not be possible for me to figure this out from your changes.
>
> Is there a specific bug list you want me to look at? [...]

No, I understood that you had found new bugs and fixed them. So they won't be on any bug list... I don't need bug reports, I was just after some text to put in the changelog.

> > Failing this, we'll just have a slightly vague changelog. 8-)
>
> > > In all, I've worked on the following classes:
> > >
> > > Test/Unit/Exception.pm
> > > Test/Unit/HarnessUnit.pm
> > > Test/Unit/Loader.pm
> > > Test/Unit/Result.pm
> > > Test/Unit/Runner.pm
> > > Test/Unit/TkTestRunner.pm
> > > Test/Unit/UnitHarness.pm
>
> > How would you like to send these changes? [...]
>
> Whatever you like. I'm not sure how much time I can devote to it,

The first thing to do is get whatever you would like to send into the SF tracker (or somewhere else) so it doesn't ... fade away.

> Well, I got lazy and used RCS to track the changes I've made.
> That's where I got the ChangeLog I sent, with some manual edits to
> delete unimportant junk comments.

Sent? I haven't seen your changes yet, did I miss them? I looked in the SF patch tracker but can't see anything.

I think this is a perfectly valid approach to tweaking files when you want version control before you submit the patch. I guess it's why Arch, SVK and others are popular.

> I could probably generate some patch files but it may take me some
> time, [...]

That'd be good. Or just upload a tarball of the RCS ",v" files and your ChangeLog, or whatever you sent last time. I can deal with those.

> > Adding patch ticket(s) on Sourceforge will keep your work in view.
> > If your changes can be split by theme, sending several patch
> > tickets with smaller diffs would probably help me out - if you've
> > done a lot of work it may take me a while to catch up on all of it.
>
> That's a new feature since I last worked with SF. I presume that's
> a place where outsiders can put their patches? I could do that. Or
> I could create bug reports at cpan and upload patch files as
> attachments there, too.
>
> Which do you prefer?

I prefer the SF tracker. Actually I would be most happy with a redirection from rt.cpan.org to the SF tracker; there's no merit in having two bug systems for one package. 8-(

> > > I've created a subclass of TestCase.pm that includes versions of
> > > tests copied from Test::More as well. [...]
>
> No, I didn't intend MyTestCase to become part of the project, I just
> used that because I wanted to keep my extensions separate. I left
> it in Test::Unit for my own organization purposes inside Komodo.

OK. If you're willing to contribute it too, then perhaps I can pick stuff out of it as I get to reorganising the assertion code.

> [...] I added another assert last night, using code lifted from
> Test::Warn and rewritten to suit Perl Unit, so now I can capture and
> test for warn() and carp() from the method under test. This isn't
> tested much yet and I haven't yet written a test suite for it.

Ah yes, I have some like this. Chaos when Carp.pm loads afterwards. Sigh.

While the patch is open, feel free to just append another pair of (file + comment) to it. I guess I should get another release out soon after, so you can have 0.26 do what you're expecting.

> > > All this still passes all of the original Perl Unit tests when
> > > my custom versions are loaded.
> >
> > That's always good. Do your changes need additional testing or POD?
>
> It probably wouldn't hurt me to write test cases that show how 0.25
> fails to handle TAP, and make sure I've fixed the tests.

Well, if you don't get time, you can just say it's the maintainer's job. 8-]

> Indeed. It occurs to me that one could use the tests provided with
> the Test::Harness module under t/sample-tests.

Neat idea, I've added this to my TODO.

> > [...] plus at least one slightly scary bug (is_numeric,
> > rt.cpan.org #16130) in the queue already.
> [...]
> If you look around on Google you can see a bunch of projects where
> this particular linux "enhancement" is generating bug reports. I'd
> probably just make the test check to see if it's running under a
> linux kernel and adjust the test accordingly. It's not really a
> Perl Unit issue anyway.

True, it's not a PU issue, but I think it's important for test portability - the sort of thing the framework should warn the user about.

Thanks for the extra info and ideas,

Matthew #8-)
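For readers unfamiliar with the format under discussion: a TAP stream is plain text, a plan line plus one line per test, with `#`-prefixed diagnostics. The "bottled samples" idea above would amount to distributing small files like this hand-written one (illustrative, not the output of any particular runner):

```
1..4
ok 1 - set_up creates fixture
ok 2 - assert_equals on numbers
not ok 3 - warn() is captured
# Failed test 'warn() is captured'
# expected one warning, got none
ok 4 # SKIP no Tk available
```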
From: Joi E. <gy...@vi...> - 2006-02-19 21:28:47

On Sun, 19 Feb 2006, Matthew Astley wrote:
> On Sat, Feb 18, 2006 at 02:14:13AM -0600, Joi Ellis wrote:
>
> For other readers: TAP is "test anything protocol", see
> http://use.perl.org/~petdance/journal/22057

Even easier would be perldoc Test::Harness::TAP, since Test::Harness is bundled with current perls and it's documented there.

> > I've fixed some bugs, too, like HarnessUnit failing to pass any
> > useful diagnostic information back to the GUI.
>
> It would help the changelog if you could list the bugs fixed, it may
> not be possible for me to figure this out from your changes. Failing
> this, we'll just have a slightly vague changelog.

Is there a specific bug list you want me to look at? The work I did was mostly to make the gui usable for my particular project, but it's general enough. I've looked at the bugs list on cpan and on sourceforge. The stuff I've addressed isn't mentioned in any of those that I can see. I haven't filed any bug reports myself.

> > In all, I've worked on the following classes:
> >
> > Test/Unit/Exception.pm
> > Test/Unit/HarnessUnit.pm
> > Test/Unit/Loader.pm
> > Test/Unit/Result.pm
> > Test/Unit/Runner.pm
> > Test/Unit/TkTestRunner.pm
> > Test/Unit/UnitHarness.pm
>
> How would you like to send these changes? In exchange for agreeing to
> "the same terms as Perl" you can be listed in AUTHORS. Further
> enthusiasm runs the risk of being added as a SF developer. 8-)

Whatever you like. I'm not sure how much time I can devote to it; the past two weeks I've been home on medical leave from my regular job, so I've had an unusual amount of spare time lately. I already have one project on SourceForge, XPCGI. Haven't touched it in years, though. I'm tempted to port the thing to Perl/Tk now. ;)

> Output of "cvs diff -u" plus name or date of what you started hacking
> on would suit me quite well, but it won't include new files. Or you
> can just tarball whatever you have and let me sort it out.

Well, I got lazy and used RCS to track the changes I've made. That's where I got the ChangeLog I sent, with some manual edits to delete unimportant junk comments. I could probably generate some patch files, but it may take me some time; I get to go back to work tomorrow morning if the doctor agrees.

> Adding patch ticket(s) on Sourceforge will keep your work in view. If
> your changes can be split by theme, sending several patch tickets with
> smaller diffs would probably help me out - if you've done a lot of
> work it may take me a while to catch up on all of it.

That's a new feature since I last worked with SF. I presume that's a place where outsiders can put their patches? I could do that. Or I could create bug reports at cpan and upload patch files as attachments there, too.

Which do you prefer?

> > I've created a subclass of TestCase.pm that includes versions of tests
> > copied from Test::More as well.
> >
> > Test/Unit/MyTestCase.pm
>
> Thread fork here...
>
> Also, a minor point on naming (sorry). I think T:U:MyTestCase is
> unsuitable for project release because of the convention for naming
> things MyFoo in the POD, when intending to suggest that it is the
> user's own Foo under discussion. i.e. while your code is joining the
> project you're not a Joe Public user.

No, I didn't intend MyTestCase to become part of the project, I just used that because I wanted to keep my extensions separate. I left it in Test::Unit for my own organization purposes inside Komodo.

> Apart from T:U:Assert refactoring, possible solutions for the name: if
> T:U:MyTestCase only includes versions of tests copied from Test::More,
> it might be best named T:U:MoreTestCase or something like that. Or if
> you wanted some code ownership, T:U:JoiTestCase.

I don't care about ownership much. I added another assert last night, using code lifted from Test::Warn and rewritten to suit Perl Unit, so now I can capture and test for warn() and carp() from the method under test. This isn't tested much yet and I haven't yet written a test suite for it.

> > All this still passes all of the original Perl Unit tests when my
> > custom versions are loaded.
>
> That's always good. Do your changes need additional testing or POD?

It probably wouldn't hurt me to write test cases that show how 0.25 fails to handle TAP, and make sure I've fixed the tests.

> There may be an issue here with module dependencies - it isn't fair to
> expect the user to install the whole set of "Test::* modules that read
> or write TAP" in order to install Test::Unit.

No, it isn't. That sort of thing can be made optional both in the install and in the tests. None of the changes I've made to Perl Unit classes involve new Test::* modules. Even MyTestCase.pm doesn't require them, it just steals from them. ;)

I do have a mix of *.t files that use Test::More, Test::Pod, etc, which is one reason I've been hacking on the Tk gui so much. The gui can now handle output from non-Perl Unit tests in a usable fashion, which it utterly failed to do in 0.25.

> However, it does make a lot of sense for a perlunit developer to run
> tests against them if the project is to support them properly. It may
> even be a good idea to keep output from old versions of such modules,
> because we can't assume the user is up-to-date without a dependency,
> and testing can be a bit special when it comes to legacy support.
> Perhaps it's best to bottle some samples of TAP (!) and distribute
> them. Then make some additional code to require the relevant modules
> and bottle up some new TAP.

Indeed. It occurs to me that one could use the tests provided with the Test::Harness module under t/sample-tests. I haven't gone to that extreme yet, though. I've just been fixing what didn't work for my own project's tests.

> Testing TAP output against an assortment of code that reads it would
> be harder.

I expect a simple test for this would be to use 'prove' against the whole test suite. One might want to look around in Test::Harness's tests to see if they don't already have a TAP validation test.

> OK, this could get complicated. Perhaps it's best to start with
> something, then extend it later.
>
> > Is there any interest out there?
>
> I'm definitely interested, and yes this does mean to the extent of
> writing tests or maybe even POD. 9-)
>
> My roundtuitometer informs me that we have patches for TestDecorator
> and an XML (Ant compatible?) testrunner, plus at least one slightly
> scary bug (is_numeric, rt.cpan.org #16130) in the queue already.

I remember reading a bug report somewhere more general, but I don't remember exactly where it was. The gist of the issue is that the linux libc/glibc is_numeric routine recognizes one more format than other unices do, so it fails the test for that routine. If you look around on Google you can see a bunch of projects where this particular linux "enhancement" is generating bug reports. I'd probably just make the test check to see if it's running under a linux kernel and adjust the test accordingly. It's not really a Perl Unit issue anyway.

--
Joi Ellis    gy...@vi...

No matter what we think of Linux versus FreeBSD, etc., the one thing I really like about Linux is that it has Microsoft worried. Anything that kicks a monopoly in the pants has got to be good for something. - Chris Johnson
From: Matthew A. <mc...@us...> - 2006-02-19 20:42:27

On Sat, Feb 18, 2006 at 02:14:13AM -0600, Joi Ellis wrote:
> I've created a subclass of TestCase.pm that includes versions of tests
> copied from Test::More as well.
>
> Test/Unit/MyTestCase.pm

I too have a collection of assertion methods; I keep them in my own T:U:TestCase subclass, and yes, it's high time I shared it.

Subclassing is the obvious OO way to extend the code, but it is not without problems. To bring our code into the project, the obvious thing to do is just push the new methods up into a superclass so they're available to everyone... unless the subclass has a specific purpose. But then if the user wishes to have a subclass for doing local tricks, e.g. database testing utils, they'll need to do it twice.

I believe Test::Unit::Assert already contains too much stuff all in one place, leading to

- overwhelming docs - may give new users the impression that there is far too much to learn in one go
- no sensible place to put even *more* stuff
- accompanying problems in t/tlib/AssertTest.pm
- hard to share useful assertion code with other test frameworks
- could interfere with the user's choice of inheritance pattern

Leaving the file to grow without bound as we find more useful and exciting tests makes it difficult to manage the code and the method namespace.

For example, T:U:Assert->assert_deep_equals was "shamelessly pinched from Test::More and adapted to Test::Unit". This method has an assortment of bugs filed against it. I have applied patches, but when I look at all this, I wonder

- which version of Test::More did it come from? (I haven't investigated)
- is it fixed in Test::More, or is it an artifact of the "adapted to Test::Unit"?
- if I fix or extend assert_deep_equals, can I contribute back to Test::More?

I use assert_deep_equals quite a bit myself, but I would prefer to put some distance between maintenance of it and the rest of the framework.

Solutions so far:

- make a contrib/ directory containing T:U:TestCase subclasses, starting with mine and Joi's, and leave the user to pick among the code in the methods or use them as SUPER. Easy for us, but not great for code reuse.
- use mixin classes to supply[1] assertion methods, possibly in a dynamic way. But eww, mixins.
- symbol table pokery and/or AUTOLOAD sickness to do... what, dispatch methods by hand?
- continue growing Assert.pm; maybe I imagined a problem?

There are other test frameworks out there with their own assertion names, argument styles (including conflicting ones[2]) and neat tricks. A bit of competition is healthy, but I'm not interested in proving "our test framework is better than yours". It would be much more useful to make it easy for users to try one or another, including porting existing tests without too much fuss. It would also be good to make it possible for testrunners (GUI, automated continuous integration, continuous testing[3], random testing[4] etc.) to be shared between frameworks.

On my To Do list are such lofty aims as finding out what else is out there in test world, and then talking to the maintainers of the other tools. By comparison, this is what I've seen so far while looking at test frameworks for Common Lisp: each provides a set of features, but there's nowhere you can go to get all the features (whatever that might mean).

This leads me to think that the test framework should be doing less, but providing hooks for other packages to do the rest. More test runners, more assertions, more types of test failure (todo, skip, slow test filtering, test dependency).

Clearly this needs more thought and more input data. A start would be collecting people's testcase subclasses in a contrib directory, just to see what's there.

Matthew #8-)

--

[1] On the user side, this might cause a test to be written

    package MyTest;
    use strict;
    use warnings;
    use base 'Test::Unit::TestCase';
    use Test::Unit::AssertMixin::Foo;
    use Test::Unit::AssertMixin::Bar;

    sub test_with_foo {
        my $self = shift;
        $self->assert_foo( ... );
        $self->assert_bar( ... );
    }

The user may shuffle the incantation for mixins out to a project module or the suite generator, to reduce the per-class noise. On the perlunit side, the requirement would be for the mixin classes to register themselves during &import.

[2] http://sf.net/tracker/index.php?func=detail&aid=407833&group_id=2653&atid=102653
The conflict is against the Java flavour, and it's mostly an artifact of Perl's argument passing style, but still it's unfortunate:

    JUnit 3.5    has assert(message, condition);
    perlUnit .13 has assert(condition, message);

Maybe there are others, I haven't looked.

[3] http://pag.csail.mit.edu/continuoustesting/

[4] http://www.accesscom.com/~darius/software/clickcheck.html
http://www.findinglisp.com/blog/2004/10/random-testing.html
including link to a scary "razor sharp html fragments" article
From: Matthew A. <mc...@us...> - 2006-02-19 20:18:24

On Sat, Feb 18, 2006 at 02:14:13AM -0600, Joi Ellis wrote:
> Greetings! I downloaded PerlUnit 0.25 from SourceForge a few months
> ago for use in a personal perl project. I've been spending nearly
> as much time hacking on Perl Unit as I have my own project.

Cool! Thanks for letting us know.

> I've worked mostly on the Tk GUI and on the Test::Harness
> compatibility issues. I.e. my local copy now recognizes mostly-proper
> TAP format from all the other Test::* modules, and it produces
> mostly proper TAP output as well.

For other readers: TAP is "test anything protocol", see http://use.perl.org/~petdance/journal/22057

> I've fixed some bugs, too, like HarnessUnit failing to pass any
> useful diagnostic information back to the GUI.

It would help the changelog if you could list the bugs fixed; it may not be possible for me to figure this out from your changes. Failing this, we'll just have a slightly vague changelog.

> In all, I've worked on the following classes:
>
> Test/Unit/Exception.pm
> Test/Unit/HarnessUnit.pm
> Test/Unit/Loader.pm
> Test/Unit/Result.pm
> Test/Unit/Runner.pm
> Test/Unit/TkTestRunner.pm
> Test/Unit/UnitHarness.pm

How would you like to send these changes? In exchange for agreeing to "the same terms as Perl" you can be listed in AUTHORS. Further enthusiasm runs the risk of being added as a SF developer. 8-)

Output of "cvs diff -u" plus the name or date of what you started hacking on would suit me quite well, but it won't include new files. Or you can just tarball whatever you have and let me sort it out.

Adding patch ticket(s) on Sourceforge will keep your work in view. If your changes can be split by theme, sending several patch tickets with smaller diffs would probably help me out - if you've done a lot of work it may take me a while to catch up on all of it.

> I've created a subclass of TestCase.pm that includes versions of tests
> copied from Test::More as well.
>
> Test/Unit/MyTestCase.pm

Thread fork here...

Also, a minor point on naming (sorry). I think T:U:MyTestCase is unsuitable for project release because of the convention of naming things MyFoo in the POD when intending to suggest that it is the user's own Foo under discussion. I.e. while your code is joining the project, you're not a Joe Public user.

Apart from T:U:Assert refactoring, possible solutions for the name: if T:U:MyTestCase only includes versions of tests copied from Test::More, it might be best named T:U:MoreTestCase or something like that. Or if you wanted some code ownership, T:U:JoiTestCase.

> All this still passes all of the original Perl Unit tests when my
> custom versions are loaded.

That's always good. Do your changes need additional testing or POD?

There may be an issue here with module dependencies - it isn't fair to expect the user to install the whole set of "Test::* modules that read or write TAP" in order to install Test::Unit.

However, it does make a lot of sense for a perlunit developer to run tests against them if the project is to support them properly. It may even be a good idea to keep output from old versions of such modules, because we can't assume the user is up-to-date without a dependency, and testing can be a bit special when it comes to legacy support. Perhaps it's best to bottle some samples of TAP (!) and distribute them. Then make some additional code to require the relevant modules and bottle up some new TAP.

Testing TAP output against an assortment of code that reads it would be harder.

OK, this could get complicated. Perhaps it's best to start with something, then extend it later.

> Is there any interest out there?

I'm definitely interested, and yes this does mean to the extent of writing tests or maybe even POD. 9-)

My roundtuitometer informs me that we have patches for TestDecorator and an XML (Ant compatible?) testrunner, plus at least one slightly scary bug (is_numeric, rt.cpan.org #16130) in the queue already. I don't claim to operate a fair queueing algorithm for these things, but I will listen to requests from other users.

Matthew #8-)
From: Joi E. <gy...@vi...> - 2006-02-19 16:48:09

On Sun, 19 Feb 2006, Desilets, Alain wrote:
> What additional features do the extensions provide?

It's not so much new features as it is making the existing features work properly, especially with Test::Harness-style launchers. The 2-second summary is that it now works with Test::Harness .t files, it displays complete error messages in the GUI, and the GUI has had some usability tweaks made, primarily File/Restart and the ability to pre-load the test history list using parameters from the command line. The GUI is also running under Tk804 now, which it had problems with initially.

This also means that you can use 'prove' or 'make test' on Perl Unit tests, and they will generate proper TAP output for any test harness you might be using.

One major issue the GUI has is that it fails to re-load perl modules which have been changed since the GUI was launched. I've made some initial attempts to fix this, but decided the simplest thing to do would be to add File/Restart to just bypass the issue. This is only a problem when you're specifying tests to load via package::Test format, because the loader puts them into the same perl process as the GUI itself. This is where perl's efficiency bites us in the tookus. Java has this too, but JUnit provides a custom class loader to force test classes to be reloaded. Perl needs something like this.

If you specify a .t file instead of a class, the problem goes away because the test gets run in a child perl process, not the same process as the GUI is using. This is why I've spent so much time making the Test::Harness support function properly with the GUI.

Other side projects involved here are Komodo integration, and a stand-alone test stub/launcher generator which reads a perl module and generates a Perl Unit test stub file, plus a Test::Harness .t file to launch it. This is a work in progress.
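The reloading Joi describes JUnit doing with a custom class loader usually comes down, in Perl, to making the interpreter forget a module's %INC entry before requiring it again. A hand-rolled illustration (not code from PerlUnit, and the helper name is made up):

```perl
# Illustrative only: force a fresh compile of a possibly-changed module,
# roughly what a "custom class loader" would do for test classes.
sub force_reload {
    my ($class) = @_;
    (my $file = $class) =~ s{::}{/}g;   # Foo::Bar -> Foo/Bar.pm
    $file .= '.pm';
    delete $INC{$file};   # make perl forget the module was loaded
    local $^W = 0;        # quiet 'Subroutine redefined' warnings
    require $file;        # re-read and recompile from disk
    return $class;
}
```

Any lexical state captured by closures in the old copy would survive, which is one reason a full File/Restart is the more robust workaround.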
None of this stuff is tested on Win32, by the way, but the only thing I can can think of that may not port easily is the change to Test::Unit::Loader, where I tell it to redirect STDERR into STDOUT in the child perl's shell command. Here's a slightly-editted change long from my RCS tree: 2006-02-18 joi <joi@falcon> * TkTestRunner.pm: Added some useful information to the About box. Changed the length of the progress bar from 400 to 600, added a pack(). Pressing <return> in the entry box now runs the test. Tk804 fix: You can only select one failed test at a time. * HarnessUnit.pm: Commented out some non-TAP output. * MyTestCase.pm: Moved TAPComment up into the parent. * UnitHarness.pm: Rewrote parser to expect proper TAP format. Redirected STDERR into STDOUT so that the GUI can display it. Added extra parsing to try to always have a useful name for a failed test case in the GUI. Corrected handling of diagnostic output so that it is passed up to the GUI. Added a test failure if the Plan doesn't match the case count. TAP-compliant test harnesses need to do this. Adjusted logic so that a plan line may follow the tests, per TAP. Added check of result->should_stop so that the GUI can kill Harness tests without having to wait for the entire suite to complete. * Result.pm: Removed non-TAP output. Wrapped debug output with leading # per TAP. Added plan support. 2006-02-12 joi <joi@falcon> * UnitHarness.pm: Ripped out the $verbose junk and replaced it with Test::Unit::Debug. Test cases now suck STDERR in with STDOUT so the gui can parse them. This will probably break the Test::Unit tests, but it makes the gui much more usable. Removed check for $max, TAP does not require that every plan print its 1..N tests line at the start of the run, it is allowed to appear at the very end. Such systems broke UnitHarness. Rewrote the main parser (again.) It now recognizes Test::More output, Test::Pod output, and Test::Unit output. 
2006-02-02  joi  <joi@falcon>

	* TkTestRunner.pm: Added a Restart option to the File menu.
	This is useful for re-launching the test gui with the original
	parameters, to work around perl's refusal to re-load classes or
	tests which have been altered since the gui was launched.
	Altered @ARGV handling so that multiple test files or classes
	specified on the command line are pre-loaded into the history
	browser. When re-started, the browser figures out its current
	geometry and re-uses that information in the new gui.

2006-01-30  joi  <joi@falcon>

	* Loader.pm: Undid some of my previous work since it caused
	Loader to fail its own unit tests.

	* TkTestRunner.pm: Added a File/Restart command to make
	restarting the GUI easier. This is useful for the (many) cases
	where the Loader will refuse to reload a changed
	class-under-test.

2006-01-29  joi  <joi@falcon>

	* Runner.pm: Adds a default plan method. Now any Runner can run
	TAP tests and have the information, even if it doesn't use it.

	* TkTestRunner.pm: Added use of Test::Unit::Debug class.

	* UnitHarness.pm: Rewrote my initial fix so that it doesn't
	freeze the GUI. Also enabled the plan() call.

	* Result.pm: Adding the missing plan support so that
	UnitHarness can pass the information back to the Runners after
	the tests start running.

2006-01-28  joi  <joi@falcon>

	* UnitHarness.pm: Added use of Debug package.

2006-01-21  joi  <joi@falcon>

	* UnitHarness.pm: Tweaked further, to make the list of failed
	tests more readable, along with the details dialog presented
	for each. Also made it recognize proper TAP format better.

	* HarnessUnit.pm: Tweaked to follow TAP.pod documentation. The
	format of test results did not follow standard TAP, now it's
	closer.

	* MyTestCase.pm: Added an assert_zero method, and tweaked
	assert_isa.

2006-01-16  joi  <joi@falcon>

	* Loader.pm: Tweaked the compile routines so that they don't
	issue 'Subroutine redefined' warnings over STDERR while the gui
	is running.
	* Exception.pm: Quieted a warning that was showing up on
	STDERR.

	* Loader.pm: Fixes bug where tests specified by class name are
	never re-loaded once a gui has been launched.

	* UnitHarness.pm: Initial attempt at fixing errors displayed.
	Still rather ugly, but I'm not certain if I'm understanding
	things correctly.

2005-12-27  joi  <joi@falcon>

	* UnitHarness.pm: Fixed a runtime warning.

	* TkTestRunner.pm: Added get_message() output to the details
	window.

2005-12-23  joi  <joi@falcon>

	* HarnessUnit.pm: Fixed bug that caused the
	assert(test,message) message parameter to be ignored and not
	printed in both of the uppermost unit harnesses.

	* TkTestRunner.pm: Hides the useless 'rerun' button.

-- 
Joi Ellis    gy...@vi...

No matter what we think of Linux versus FreeBSD, etc., the one thing I
really like about Linux is that it has Microsoft worried.  Anything
that kicks a monopoly in the pants has got to be good for something.
   - Chris Johnson |
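[Editor's note] The module-reload problem discussed above has a well-known, if blunt, workaround in plain Perl: deleting a module's entry from %INC makes the next require recompile the file. This is only a sketch of that idiom, not code from the patches above, and the module name Test::MyCase is a placeholder; redefined subs and stale lexicals are exactly why a real class loader (as JUnit has) would be better.

```perl
# Force-reload a module by forgetting that it was ever loaded.
# Re-running the file redefines previously compiled subs in place,
# which is where 'Subroutine redefined' warnings come from and why
# closures over old lexicals can linger.
sub force_reload {
    my ($module) = @_;                 # e.g. 'Test::MyCase' (hypothetical)
    (my $file = $module) =~ s{::}{/}g; # Test::MyCase -> Test/MyCase
    $file .= '.pm';
    delete $INC{$file};                # forget the previous load
    local $SIG{__WARN__} = sub {};     # hush redefinition warnings
    require $file;                     # compile the file afresh
}
```

A GUI could call this on the class-under-test before each run, at the cost of the caveats noted in the comments.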
From: Joi E. <gy...@vi...> - 2006-02-18 08:18:44
|
Greetings! I downloaded PerlUnit 0.25 from SourceForge a few months ago for use in a personal perl project. I've been spending nearly as much time hacking on Perl Unit as I have my own project.

I've worked mostly on the Tk GUI and on the Test::Harness compatibility issues. I.e. my local copy now recognizes mostly-proper TAP format from all the other Test::* modules, and it produces mostly proper TAP output as well. I've fixed some bugs, too, like HarnessUnit failing to pass any useful diagnostic information back to the GUI.

In all, I've worked on the following classes:

	Test/Unit/Exception.pm
	Test/Unit/HarnessUnit.pm
	Test/Unit/Loader.pm
	Test/Unit/Result.pm
	Test/Unit/Runner.pm
	Test/Unit/TkTestRunner.pm
	Test/Unit/UnitHarness.pm

I've created a subclass of TestCase.pm that includes versions of tests copied from Test::More as well:

	Test/Unit/MyTestCase.pm

All this still passes all of the original Perl Unit tests when my custom versions are loaded. Is there any interest out there?

-- 
Joi Ellis    gy...@vi...

No matter what we think of Linux versus FreeBSD, etc., the one thing I
really like about Linux is that it has Microsoft worried.  Anything
that kicks a monopoly in the pants has got to be good for something.
   - Chris Johnson |
From: Matthew A. <mc...@us...> - 2005-10-22 18:16:10
|
I've just added to CVS Marek's patch from the RT bug at

  http://rt.cpan.org/NoAuth/Bug.html?id=4613

Changes will be visible at

  http://cvs.sourceforge.net/viewcvs.py/perlunit/src/Test-Unit/lib/Test/Unit/Assert.pm
  http://cvs.sourceforge.net/viewcvs.py/perlunit/src/Test-Unit/t/tlib/AssertTest.pm

once the public repository has updated.

The patch provides new assertions like this,

  my $object = Horse->new("Dobbin");
  $self->assert_isa('Horse', $object);
  $self->assert_can('gallop', $object);

and what I'm wondering is whether folks expect these to accept classnames in addition to instances of the classes,

  $self->assert_isa('Horse', 'Horse::Shire');
  $self->assert_can('gallop', 'Horse');

when @Horse::Shire::ISA = ('Horse') .

There are several options,

 1) accept classnames as well as objects, to match the behaviour of
    UNIVERSAL::isa and UNIVERSAL::can

 2) don't accept classnames because that's less likely to be useful
    in testing - this is what Marek's original code does

 3) same as 2) but change the names to assert_objisa and assert_objcan

For now I've left the code doing 2) but the installation tests require 1), so this will need resolving one way or another before the next release. It seemed best to ask the users what they expect the code to do...

Other questions that might arise later are

  do we need assert_not_isa and assert_not_can? or some way to make
  composite assertions, that would make this easier?

  do we need to break Test::Unit::Assert or its POD down into smaller
  pieces to make it less scary?

Suggestions welcome.

Matthew
#8-) |
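[Editor's note] For concreteness, option 1 above could look something like the following. This is an illustration only, not the code committed to CVS; the package name My::AssertIsa is hypothetical, and the sketch assumes it is mixed into a class that provides Test::Unit's fail() method.

```perl
# One possible shape for option 1: let the assertion accept either a
# blessed instance or a plain classname, mirroring UNIVERSAL::isa.
package My::AssertIsa;   # hypothetical mixin name
use strict;
use Scalar::Util qw(blessed);

sub assert_isa {
    my ($self, $class, $thing) = @_;
    # A blessed object or a non-reference classname string both
    # support the ->isa class/instance method; anything else fails.
    my $ok = (blessed($thing) || (defined($thing) && !ref($thing)))
             && eval { $thing->isa($class) };
    $self->fail("expected something that isa '$class', got '"
                . (defined $thing ? $thing : 'undef') . "'")
        unless $ok;
}
```

With @Horse::Shire::ISA = ('Horse'), this version would pass both $self->assert_isa('Horse', $object) and $self->assert_isa('Horse', 'Horse::Shire').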
From: Matthew A. <mc...@us...> - 2005-10-16 20:48:10
|
[crossposted to both lists, please pick one if you're replying]

Test::Unit 0.25 is released to SourceForge,

  https://sourceforge.net/project/showfiles.php?group_id=2653&package_id=2626&release_id=363716

and CPAN,

  http://search.cpan.org/~mcast/Test-Unit-0.25/

Here's a copy of the SourceForge release notes and change summary,

Mostly a bugfix release: all outstanding serious bugs are fixed, build and install should now be clean (previous version required "force install" under cpan shell). Updates for new features and minor bugs will follow.

 + significant fixes to assert_deep_equals wrt. circular structures,
   scalar references and comparison of undef; and installation tests
   for these
 + make the is_numeric test stricter, which will affect results from
   assert_equal; tests for is_numeric
 + first stab at a UML class diagram, check the doc/class-diagram.txt
   before relying on it too heavily
 + tidy up and clarify the copyright notices, author POD sections;
   hopefully in a consistent and fair way
 + improve "Show..." dialog in GUI
 + remove some old junk

Older changes,

 + changes to Test::Unit::Decorator
 + bugfix in T:U::Procedural
 + test and improve filtering

Please see ChangeLog or CVS logs for more details.

There are still some minor bugs against the package. I'll work through these too, but I think the next major work should be on the documentation.

Matthew
#8-) |
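[Editor's note] For anyone wondering what the assert_deep_equals fixes in this release cover, here is a hedged sketch of the kinds of comparison mentioned in the notes (circular structures, undef). The test class name is made up; the sketch assumes the 0.25 API with expected-then-actual argument order.

```perl
# Hypothetical test class exercising the fixed comparisons.
package DeepTest;
use strict;
use base 'Test::Unit::TestCase';

sub test_circular {
    my $self = shift;
    # Two structurally identical self-referential structures:
    # 0.25 should compare these without recursing forever.
    my $a = { name => 'node' }; $a->{self} = $a;
    my $b = { name => 'node' }; $b->{self} = $b;
    $self->assert_deep_equals($a, $b);
}

sub test_undef_inside {
    my $self = shift;
    # comparison of undef was one of the fixed cases
    $self->assert_deep_equals([1, undef], [1, undef]);
}

1;
```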
From: Matthew A. <mc...@us...> - 2005-06-21 22:35:01
|
[Repost 'cos I got the wrong 'From' address. Hopefully the first one will give up waiting for the moderator and go Away.] On Tue, Jun 21, 2005 at 04:29:41PM -0400, Desilets, Alain wrote: > I am a newbie to PerlUnit, but have used jUnit and pyUnit > extensively. Hello! > First, is there a way to get PerlUnit to print a traceback (with the > file name and line number of all the function and method calls in > the call stack) when an exception is raised? Um, I suspect there probably is, yes... but if I knew where the setting was, I've forgotten. 8-( Hmm, there was a thing about get_backtrace_on_fail. This was removed a while back, it claims to be deprecated if you call it but it's actually non-functional. I don't recall what the replacement or outcome was. Hmm, the "Error" base class contains $Error::Debug = 0; # Generate verbose stack traces at the top, so I've used that in the code at the bottom. I suspect there's a tidy way to get this out, probably involves adding a Listener to the TestRunner...? I've cheated by using Data::Dumper. I have a more funky testrunner script which I normally use. I'll have a poke about and add stacktrace info, erm, soon. I seem to have completely forgotten my SF password. Again. Not a good sign for a "maintainer". > This is the default behaviour of both jUnit and pyUnit, but it seems > PerlUnit only prints the message of the exception that was raised. > This is sometimes enough to identify the source of the problem, but > not if the error happens deep inside the system. I know the problem. For some reason it doesn't trouble me much, if I can figure out what trick it is I'm using, I'll explain. Also in favour of short messages: it's good to have a concise description of the problem, provided the detail is still available. What would be really handy is having a place to dump extra data from the test. Something built in to (a subclass of) the TestCase, so stack traces are just a C-x C-f (file load) away, or a refresh of a web page. 
> Secondly, how do you run a suite of tests? I have tried 3 different
> approaches, and none give me exactly what I want. See sample code
> below with inlined comments.

> --- Sample Code:
>
> use strict;
>
> use Test::Unit::TestRunner;
> use Test::Unit::TestSuite;
> use Test::Unit::TestCase;
>
>
> package FooBar;
> our @ISA = qw(Test::Unit::TestCase);
>
> sub new {
>     my ($class, @args) = @_;
>     my $self = $class->SUPER::new(@args);
>     return $self;
> }

As far as I'm aware, there should be no need to override 'new' for any of the commonly used classes. You can just use the inherited one.

> sub test_fail {
>     my ($self) = @_;
>     $self->fail("failed in FooBar.");
> }
>
> package FooBar2;
> our @ISA = qw(Test::Unit::TestCase);
>
> sub new {
>     my ($class, @args) = @_;
>     my $self = $class->SUPER::new(@args);
>     return $self;
> }
>
> sub test_fail {
>     my ($self) = @_;
>     $self->fail("failed in FooBar2.");
> }

Otherwise, looks OK so far...

> package MyTestSuite;
> use base qw(Test::Unit::TestSuite);
>
> sub new {
>     my ($class) = @_;
>     my $self = $class->SUPER::empty_new();
>     no strict;
>     $self->add_test(FooBar);
>     $self->add_test(FooBar2);
>     use strict;
>     return $self;
> }
>
> sub name { 'My very own test suite' }

You can do this more easily with the first example in the Test::Unit::TestSuite POD, just provide the name() as you've done plus an include_tests(),

  sub include_tests { return ( "FooBar", "FooBar2" ); }

or whatever shorthand you like for such things.

> package main;
>
> my $runner = Test::Unit::TestRunner->new();
>
> ##########################################################################
> # APPROACH 1: Implement subclass MyTestSuite of TestSuite, and feed its
> # name to TestRunner::start().
> # MyTestSuite will wrap a bunch of test cases into a suite.
> #
> # RESULT:
> # WORKS, BUT ALL ERRORS REPORTED AS THOUGH THEY WERE FROM THE SAME
> # TEST CASE
> #
> # I need to see errors from different TestCases, because I often have TestCases
> # that test different subclasses of a same root class, and these TestCases
> # use the same methods inherited from a common TestCase subclass.
> ##########################################################################
>
> #no strict;
> #$runner->start(MyTestSuite);
> #use strict;

That should work. I'm puzzled by the 'no strict' thing, you can just pass the classname as a string...? But that's not relevant.

So I saved it out and tried it,

  mca1001@doorstop:/tmp$ perl mail.pl
  .F.F
  Time:  0 wallclock secs ( 0.00 usr +  0.00 sys =  0.00 CPU)

  !!!FAILURES!!!
  Test Results:
  Run: 2, Failures: 2, Errors: 0

  There were 2 failures:
  1) mail.pl:19 - test_fail(FooBar)
  failed in FooBar.
  2) mail.pl:33 - test_fail(FooBar2)
  failed in FooBar2.
  Test was not successful.

which looks about right to me. Two test methods called test_fail, in their respective classes. Did the 'make test' run OK? (Apart from the broken tests) What are you expecting to see different to this?

> ##########################################################################
> # APPROACH 2: Create an empty TestSuite, add TestCases to it and feed it to
> # TestRunner::start()

This would be equivalent to the subclassing approach, but for the fact that the TestRunner is expecting a classname, not an instance of the TestSuite. You passed an instance of the suite to $runner->start ,

> #   Couldn't load Test::Unit::TestSuite=HASH(0x18255a4) in any of the
> #   supported ways at ../../../perlib//Test/Unit/Loader.pm line 68.

I vaguely remember doing this too... the clue is that it's trying to load a class named "Test::Unit::TestSuite=HASH(0x18255a4)", and that's the stringified object. I think it should either catch this case and give a more helpful message, or cope with it.
The TestRunner will apparently also accept the names of files to pass to Test::Unit::UnitHarness, in various forms.

> ##########################################################################
> # APPROACH 3: Run each TestCase using a different TestRunner.

You should only need the one TestRunner instance. That's the convenience of having the TestSuite, pile it all in and let it run.

Hacking your code about to have some subclassing of the first TestCase, (and almost no comments)

----8<----
use strict;

package FooBar;
use base 'Test::Unit::TestCase';

sub test_fail {
    my ($self) = @_;
    $self->fail("failed in FooBar's test_fail,\n \$self = '$self', an instance of ".ref($self));
}

package FooBar2;
use base 'FooBar';

sub test_pass { }

package Wibble3;
use base 'FooBar2';

package MyTestSuite;
use base qw(Test::Unit::TestSuite);

sub include_tests {qw{FooBar FooBar2 Wibble3}}
sub name { 'My very own test suite' }

package main;
use Test::Unit::TestRunner;
use Data::Dumper;

my $runner = Test::Unit::TestRunner->new();
$Error::Debug = 1;
$runner->start('MyTestSuite'); # Approach 1
print "-"x70,"\n", Data::Dumper->Dump([$runner], ['runner']);
----8<----

Tests being run will be

  FooBar->test_fail
  FooBar2->test_fail  [inherited from FooBar]
  FooBar2->test_pass
  Wibble3->test_fail  [inherited from FooBar]
  Wibble3->test_pass  [inherited from FooBar2]

and the output I get looks like this,

  mca1001@doorstop:/tmp$ perl mail.pl
  .F.F..F.
  Time:  0 wallclock secs ( 0.00 usr +  0.00 sys =  0.00 CPU)

  !!!FAILURES!!!
  Test Results:
  Run: 5, Failures: 3, Errors: 0

  There were 3 failures:
  1) mail.pl:8 - test_fail(FooBar)
  failed in FooBar's test_fail,
   $self = 'FooBar=HASH(0x8164260)', an instance of FooBar
  2) mail.pl:8 - test_fail(FooBar2)
  failed in FooBar's test_fail,
   $self = 'FooBar2=HASH(0x81644e8)', an instance of FooBar2
  3) mail.pl:8 - test_fail(Wibble3)
  failed in FooBar's test_fail,
   $self = 'Wibble3=HASH(0x81643a4)', an instance of Wibble3
  Test was not successful.

which again looks right to me.
After this, there follows a messy printout of the internals of the TestRunner and your Test::Unit::Failure instances, which do contain the trace info. I set the failure method to explain what the $self is. The TestCases follow the standard form of inheritance, that each of the subclasses will see the test_fail method unless you override it. Short answer is, I can't tell what it is that's broken in your first approach, but I will say that this system isn't hugely forgiving if you step off the expected usage path. Sorry about that. Matthew #8-) |
From: Desilets, A. <Ala...@nr...> - 2005-06-21 20:29:55
|
Hi all,

I am a newbie to PerlUnit, but have used jUnit and pyUnit extensively. I have two questions.

First, is there a way to get PerlUnit to print a traceback (with the file name and line number of all the function and method calls in the call stack) when an exception is raised? This is the default behaviour of both jUnit and pyUnit, but it seems PerlUnit only prints the message of the exception that was raised. This is sometimes enough to identify the source of the problem, but not if the error happens deep inside the system.

Secondly, how do you run a suite of tests? I have tried 3 different approaches, and none give me exactly what I want. See sample code below with inlined comments.

Thx

Alain Désilets

--- Sample Code:

use strict;

use Test::Unit::TestRunner;
use Test::Unit::TestSuite;
use Test::Unit::TestCase;


package FooBar;
our @ISA = qw(Test::Unit::TestCase);

sub new {
    my ($class, @args) = @_;
    my $self = $class->SUPER::new(@args);
    return $self;
}

sub test_fail {
    my ($self) = @_;
    $self->fail("failed in FooBar.");
}

package FooBar2;
our @ISA = qw(Test::Unit::TestCase);

sub new {
    my ($class, @args) = @_;
    my $self = $class->SUPER::new(@args);
    return $self;
}

sub test_fail {
    my ($self) = @_;
    $self->fail("failed in FooBar2.");
}

package MyTestSuite;
use base qw(Test::Unit::TestSuite);

sub new {
    my ($class) = @_;
    my $self = $class->SUPER::empty_new();
    no strict;
    $self->add_test(FooBar);
    $self->add_test(FooBar2);
    use strict;
    return $self;
}

sub name { 'My very own test suite' }

package main;

my $runner = Test::Unit::TestRunner->new();

##########################################################################
# APPROACH 1: Implement subclass MyTestSuite of TestSuite, and feed its
# name to TestRunner::start().
# MyTestSuite will wrap a bunch of test cases into a suite.
#
# RESULT:
# WORKS, BUT ALL ERRORS REPORTED AS THOUGH THEY WERE FROM THE SAME
# TEST CASE
#
# I need to see errors from different TestCases, because I often have TestCases
# that test different subclasses of a same root class, and these TestCases
# use the same methods inherited from a common TestCase subclass.
##########################################################################

#no strict;
#$runner->start(MyTestSuite);
#use strict;

##########################################################################
# APPROACH 2: Create an empty TestSuite, add TestCases to it and feed it to
# TestRunner::start()
#
# RESULT:
# I get the following error message:
#
#    Couldn't load Test::Unit::TestSuite=HASH(0x18255a4) in any of the
#    supported ways at ../../../perlib//Test/Unit/Loader.pm line 68.
##########################################################################

#my $suite = Test::Unit::TestSuite->empty_new();
#no strict;
#$suite->add_test(FooBar);
#use strict;
#$runner->start($suite);

##########################################################################
# APPROACH 3: Run each TestCase using a different TestRunner.
#
# RESULT:
# ONLY REPORTS ERRORS FROM THE FIRST TestCase
##########################################################################

#no strict;
#$runner = Test::Unit::TestRunner->new();
#$runner->start(FooBar);
#print "-- Doing: FooBar2\n";
#$runner = Test::Unit::TestRunner->new();
#$runner->start(FooBar2);
#use strict; |
From: Andrew A. <aa...@ma...> - 2004-12-30 07:52:03
|
RK> So, question 1. Anyone out there...?

I guess so.

RK> Question 2. I played a bit with Test::Unit awhile ago and quite liked
RK> it. I thought I read once there is/might be integration of Test::Unit
RK> with Test::More or Test::Harness?

Test::Unit::HarnessUnit. It's a test runner which outputs in the same format that Test::Harness expects. Look at

  http://search.cpan.org/src/ASPIERS/Test-Unit-0.24/t/assert.t

RK> The reason I ask both questions is because like many (some?) I am
RK> feeling the need to add automated testing to my good development habits
RK> so I am doing some test module shopping. I like the way Test::Unit
RK> worked but Test::More and Test::Harness seem to be the "standard" way in
RK> the Perl community.

RK> Thus I am torn. If Test::Unit has been abandoned then that may answer
RK> my question for me. On the other hand, if maintenance/advancement
RK> continues and Test::Unit test scripts can easily be incorporated with
RK> Test::Harness, etc. then I would much rather be liking that...

Test::Unit is abandoned in some way, but it still works. %)

aa29 |
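[Editor's note] The pattern pointed at here (a .t wrapper around a Test::Unit class, as in the referenced assert.t) is small enough to sketch. 'MyTest' is a placeholder for your own Test::Unit::TestCase subclass, and the library path is an assumption about your layout.

```perl
#!/usr/bin/perl -w
# A Test::Harness-compatible launcher for a Test::Unit test class,
# in the style of Test-Unit's own t/assert.t. HarnessUnit prints
# ok/not ok lines, so 'prove' and 'make test' can consume the run.
use strict;
use lib 't/tlib';                 # wherever MyTest.pm lives (assumed)
use Test::Unit::HarnessUnit;

my $runner = Test::Unit::HarnessUnit->new();
$runner->start('MyTest');         # class name, loaded for you
```

Save this as, say, t/mytest.t and the class's results flow through the standard Perl test toolchain.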
From: Robert K. <rku...@ob...> - 2004-12-29 22:02:38
|
So, question 1. Anyone out there...? Question 2. I played a bit with Test::Unit awhile ago and quite liked it. I thought I read once there is/might be integration of Test::Unit with Test::More or Test::Harness? The reason I ask both questions is because like many (some?) I am feeling the need to add automated testing to my good development habits so I am doing some test module shopping. I like the way Test::Unit worked but Test::More and Test::Harness seem to be the "standard" way in the Perl community. Thus I am torn. If Test::Unit has been abandoned then that may answer my question for me. On the other hand, if maintenance/advancement continues and Test::Unit test scripts can easily be incorporated with Test::Harness, etc. then I would much rather be liking that... Robert Kuropkat |
From: Conrad H. <co...@ne...> - 2004-12-14 16:57:24
|
Hi there,

I'm hoping to use Test::Unit in a project I'm about to start writing, but would like suggestions (or maybe pointers to examples) on using it in an OO environment. The POD gives a very basic introduction to Test::Unit::TestCase and how it allows you to inherit tests, but it's not clear to me how to use this feature.

Case in point: ChildClass inherits from an abstract ParentClass. All descendants of ParentClass should implement commonMethod, which (for example) should fail cleanly on certain inputs. It seems to me that the test of commonMethod's failure semantics could be usefully implemented in the tests for ParentClass, so all ChildClasses can benefit from that test. *But* I'm not sure how to do this cleanly in Test::Unit.

At the moment I was *thinking* of structuring stuff like this:

  Parent.pm
  Child.pm         isa Parent
  Test::Parent.pm  isa Test::Unit::TestCase
  Test::Child.pm   isa Test::Parent

.. but then should I be (in Test::Child::set_up) storing a list of instances of Child objects in some bit of $self that's known about by Test::Parent so Test::Parent can run the commonMethod tests on them? i.e. something like this:

  {
      package Test::Child;
      sub set_up {
          my $self = shift;
          $self->{TEST_INSTANCE} = Child->new;
      }
  }

  {
      package Test::Parent;
      sub test_commonMethodFailure {
          my $self = shift;
          # Try invoking commonMethod badly..
          eval { $self->{TEST_INSTANCE}->commonMethod('bad parameter') };
          # Look at $@ to see that an error's been generated...
      }
  }

Does this make sense? Is this storing-child-instances-in-some-contractually-agreed-location-for-the-parent-to-test thing the best way of taking advantage of inheritance in Test::Unit, or is there a better way?

Having written the question down, I can see that there's nothing inherently Perl-y about this, so maybe other OO unit testing frameworks have already answered this question - anybody suggest a reference please?

Thanks for your time..

Conrad |
From: Jean-Louis L. <jl...@so...> - 2004-10-27 20:49:25
|
> I'm tempted to cross-post to perlunit-devel... maybe you could reply
> about the patch/refactor to devel, and discuss the actual use of
> decoupled test cases on user?

Ok, I'll post to perlunit-devel soon. As for actual use and other non-codish issues, please read on.

> In the general case it makes sense to shuffle the order of the tests
> methods: AIUI the design intention is that tests should be independent
> of each other.

My attention was first attracted to this issue because a colleague of mine dismissed PerlUnit for that reason four years ago. I wanted to know the full story. I quickly found out about @TESTS, then realized it didn't work.

After doing some reading on junit.org, I accept the theoretical reasons against ordering in the context of xUnit. However there are *practical* reasons why you would want to. For example, some tests may take a much longer time to run. And for what I plan to use PerlUnit for, this is a very real concern. The test method execution time may vary between hours, days and even weeks. So I want to run the 'short' tests first, and either terminate the suite or start analysis of failed short tests while the longer ones are still running.

>> If I take the time to make a proper patch, will it be incorporated
>> and will a new version be released?
>
> Yes and no. We will happily accept patches, and future releases will
> happen at some point.

I'm happy enough if my changes, provided they are accepted, make it into the repository.

> More importantly for your patch, I think the fix should either be to
> the documentation, or a wider change to the code. Will you stay for a
> chat before leaving your patch? 8-)

I'm evaluating test harnesses for a customer. I had some concerns which are addressed by the content of your reply, how quickly it came, indeed that any reply came at all ;-) And then I need a harness for a product my company is developing on its own. So if I decide to go with PerlUnit, it's likely to be a long-term relationship...
> The JUnit guys discuss test execution order starting at
> http://groups.yahoo.com/group/junit/message/4063
>
> and ending with a removal of the OP's need to fix order at
> http://groups.yahoo.com/group/junit/message/4068

Thanks for the pointer. BTW, there seem to be differences between PerlUnit and JUnit. As far as I can see right now I can't duplicate this code from the aforementioned thread (http://groups.yahoo.com/group/junit/message/4065):

  public class MyTestCase extends TestCase {
      public static Test suite() {
          final TestSuite suite = new TestSuite();
          suite.addTest(new MyTestCase("myTestMethod1"));
          suite.addTest(new MyTestCase("myTestMethod2"));
          suite.addTest(new MyTestCase("myTestMethod3"));
          suite.addTest(new MyTestCase("myTestMethod4"));
      }
  }

...would (naively) translate to:

  my $suite = Test::Unit::TestSuite->new;
  $suite->add_test(FooBar->new('test_foo'));
  my $testrunner = Test::Unit::TestRunner->new();
  $testrunner->start($suite);

...but this results in:

  Couldn't load Test::Unit::TestSuite=HASH(0x86d2668) in any of the
  supported ways at /usr/lib/perl5/site_perl/5.8.3/Test/Unit/Loader.pm
  line 68.

-- 
Jean-Louis Leroy
Sound Object Logic
http://www.soundobjectlogic.com |
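[Editor's note] The "Couldn't load ... HASH" error above comes from handing start() a suite instance where a loadable name is expected. A hedged sketch of one way to keep the JUnit suite() idiom while giving the runner a name it can load (package and method names here are illustrative; FooBar and test_foo come from the sample code earlier in the thread):

```perl
# A package whose suite() class method builds the explicit, ordered
# suite. Test::Unit::Loader looks for suite() when given a classname,
# so the runner gets a string it can load instead of a stringified
# hash reference.
package OrderedSuite;
use strict;
use Test::Unit::TestSuite;

sub suite {
    my $suite = Test::Unit::TestSuite->empty_new('ordered tests');
    # One TestCase object per named test method, JUnit-style:
    $suite->add_test(FooBar->new('test_foo'));
    return $suite;
}

package main;
use Test::Unit::TestRunner;
Test::Unit::TestRunner->new->start('OrderedSuite');  # a name, not an object
```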
From: Matthew A. <mc...@us...> - 2004-10-27 00:37:18
|
Anyone else care to join in? I need another brain!

I'm tempted to cross-post to perlunit-devel... maybe you could reply about the patch/refactor to devel, and discuss the actual use of decoupled test cases on user?

On Tue, Oct 26, 2004 at 02:17:28PM +0200, Jean-Louis Leroy wrote:
> I've found out a trivial bug in Test::Unit,

You're right that @TESTS doesn't behave as documented. The rest of the problem is more complex, but it makes sense to: correct the docs, ensure the users can do what they _need_ to do, hopefully give them a little nudge to help differentiate between wanting ordered tests and actually needing them.

> which btw solves this problem:
> http://sourceforge.net/mailarchive/forum.php?thread_id=3232369&forum_id=2441

Thanks for digging out the reference. 8-)

> Test::Unit::TestCase::list_tests does grab the testsub names from
> @TESTS as documented, but then sticks them in a hash, probably to
> remove duplicates:

The comments here are relevant,

  # Returns a list of the tests run by this class and its superclasses.
  # DO NOT OVERRIDE THIS UNLESS YOU KNOW WHAT YOU ARE DOING!

> sub list_tests {
>     my $class = ref($_[0]) || $_[0];
>     # ...
>     my %tests = map {$_ => ''} @tests;
>     return keys %tests;
> }

I don't know the exact reason for the shouty warning. I think it's justified but needs another 300 words of explanation.

Hmm, lots of things going on in this method. Looking at the code and pondering what's needed:

In the general case it makes sense to shuffle the order of the tests methods: AIUI the design intention is that tests should be independent of each other. This includes side effects, previous return values and so on. Illustrations below.

I thought we had discussed the explicit use of rand() in shuffling the tests, but can't find mention in Google.
rand() could produce unnerving intermittent failures if there were linkage between tests, but is better than tests that fail or not depending on what other test methods are present, the names of the tests and the version of Perl. The introspection (get_matching_methods calls) can only produce duplicates when a test has been overridden from its superclasses. It makes sense to filter these out: the overridden methods can't be called by name and when overriding a method one expects it to disappear from the bottom layer. I can't think why one might override a test, but that's beside the point. So class methods and superclass methods are shuffled together and de-duped and this is fine so far. When @TESTS is set, the superclasses' tests are _appended_ to the ones listed in @TESTS, then shuffled. Failing to include them now is likely to confuse people so we'd better not change that. But for the shuffle, you could put the superclass tests first by including their names in @TESTS at the front. Having a method to get them might be convenient. Superclass test methods are added in an order depending on how they're shuffled in their class and the inheritance order; unless you also fixed the sort for these classes by whatever means. Test names still need de-duping and your %seen will do this. Or do they? Maybe you put the name of one test in @TESTS several times, because you wanted it run several times? This isn't how it was intended to work, but TMTOWTDI says maybe it should be possible. Most people would still want supermethods de-duped. So we have test ordering (by hash, random, as specified, sorted), and the relative ordering of tests vs. inherited tests. The only order that can't easily be reconstructed is "as specified", so it makes sense to preserve this where possible. We don't want repeat runs of overriding tests, but may want repeat runs explicitly in @TESTS. We want to keep the current default of hash ordered tests. 
Switching to random ordering for the default needs wider discussion.

One solution would be a Test::Unit::OrderPreservedTestCase which contains the current list_tests method, minus the %tests but plus a de-dup filter on the inherited tests. Then the Test::Unit::TestCase inherits from the thing with the tedious name and does the %tests thing. This pushes the complications out into the already complicated class structure.

Another solution would involve methods such as

  list_local_tests_ordered   # returns @TESTS or introspects class
  list_super_tests_ordered   # recurses on @ISA classes
  list_tests_ordered .....   # combines (@local, @super), de-duping
                             #   @super but not @local
  list_tests                 # get ordered list, shuffle by hash

Then we add good comments about why it's done like this and how to get the most from it.

Folks determined to sort test names can override list_tests with a pass-through method. Folks wanting proper randomisation can override list_tests to shuffle properly. Default behaviour stays the same, except @TESTS works as documented. Folks wishing to skip even numbered test methods after the shuffle can do this, provided the thread on the other CPU shuffles with the same random seed.

Any more?

> I experimented a fix in a subclass of T::U::TestCase:
>
> sub list_tests {
>     my $class = ref($_[0]) || $_[0];
>     # ...
>     my %seen;
>     return grep { !$seen{$_}++ } @tests;
> }

I'm glad this will fix your problem. I think we need something more flexible for the default behaviour, and there may be better ways to fix your problem depending on exactly what it is.

> I see that the latest release of the module dates back to 2002,

Yes. There are a couple of changes since then in CVS, but nothing exciting.

> and the latest message on this ML is one year old.

There's more recent activity on the perlunit-devel list, for small values of 'activity'.

> Is the module still alive? Who's in charge?

Many developers have been quiet for a while, I can't speak for them.
I believe Piers has the CPAN upload password. I am using perlunit again at (new) work and have some interest in at least the bugfixes. Of which this is one.

> If I take the time to make a proper patch, will it be incorporated
> and will a new version be released?

Yes and no. We will happily accept patches, and future releases will happen at some point. Exactly who is "we" and when things happen mostly depends on our round-tuit schedulers.

More importantly for your patch, I think the fix should either be to the documentation, or a wider change to the code. Will you stay for a chat before leaving your patch? 8-)

The JUnit guys discuss test execution order starting at

  http://groups.yahoo.com/group/junit/message/4063

and ending with a removal of the OP's need to fix order at

  http://groups.yahoo.com/group/junit/message/4068

The Ant (build tool) guys have a go too,

  http://www.mail-archive.com/us...@an.../msg08101.html

but that's broken. Ask Google for "run junit tests in a certain order" while it's still in the cache, or try http://web.archive.org .

There's more about the design at

  http://www.martinfowler.com/bliki/JunitNewInstance.html

but not much discussion of the ordering.
A more concrete illustration involving databases, typed off the top of
my head so please excuse typos:

    package SortmeTest;
    use strict;
    use base 'Test::Unit::TestCase';

    our @TESTS = qw( test_insert test_select );
    # assume this works as documented, else do
    #   sub list_tests { return sort $_[0]->SUPER::list_tests }
    # just like the code tells you not to

    my $dbh;

    sub set_up {
        my $self = shift;
        $self->SUPER::set_up;       # always a good idea
        $dbh ||= new DBI(gubbins);  # ensure we have a database handle
    }

    sub test_insert {
        my $self = shift;
        my $expect = 1;  # daft for a simple example, but a good pattern
        my $actual = $dbh->do("insert into tble (a, b) values (3, 42)");
        $self->assert_equals( $expect, $actual );
    }

    sub test_select {
        my $self = shift;
        my $rows = $dbh->selectall_arrayref("select b from tble where a = 3");
        $self->assert_equals(1, scalar @$rows);     # check num. rows
        $self->assert_equals(42, $rows->[0]->[0]);  # check the b value
    }

    END {
        if ($dbh) {
            $dbh->rollback;  # so the insert doesn't fail tomorrow
            undef $dbh;
        }
    }

There are many places where you have to force the design to do
something odd:

- global database handle

  Can't use an object instance variable because different tests have
  different objects. May prefer not to have two database connections
  because it will be slower. Can't use the transaction to do
  insert/select/rollback, except on one handle.

- sorting the method execution order

  Arises from the dependence of test_select on the preceding
  test_insert. If you maintain the list of test methods independently
  of the method definitions, they can get out of step. This will bite
  you.

- the magic number 42 appears in two places

  Fine for a small constant, but what about a 70-character text
  constant? Use another global? Risk a typo?

- use of END instead of tear_down

  If you run in the Tk test runner, this could be hours later, since
  you'll be examining the test output before quitting perl. Suppose
  you're on Oracle and you have a unique index.
The uncommitted record will cause a subsequent run of the test suite
to hang until you close the window on the first one. (By default? I
don't know about disabling the behaviour.) Very confusing. Fixing this
one by having a tear_down which checks $self->name before deciding
whether to rollback... well, this way lies further madness.

OK, so I have set up my straw man. He won't mind being "mended" with a
small diff:

    package ImprovedTest;
    use strict;
    use base 'Test::Unit::TestCase';

    sub set_up {
        my $self = shift;
        $self->SUPER::set_up;  # always a good idea
        $self->{dbh} = new DBI(gubbins);
    }

    sub dbh { $_[0]->{dbh} }  # if you like accessors

    sub test_fortytwo {
        my $self = shift;
        my $num = 42;
        $self->part_insert($num);
        $self->part_select($num);
    }

    sub part_insert {
        my ($self, $val) = @_;
        my $expect = 1;  # daft for a simple example, but a good pattern
        my $actual = $self->dbh->do("insert into tble (a, b) values (3, $val)");
        $self->assert_equals( $expect, $actual );
    }

    sub part_select {
        my ($self, $val) = @_;
        my $rows = $self->dbh->selectall_arrayref("select b from tble where a = 3");
        $self->assert_equals(1, scalar @$rows);       # check num. rows
        $self->assert_equals($val, $rows->[0]->[0]);  # check the b value
    }

    sub tear_down {
        my $self = shift;
        $self->SUPER::tear_down;
        my $dbh = $self->dbh;
        if ($dbh) {
            $dbh->rollback;  # so the insert doesn't fail tomorrow
        }
    }

Does this example make things any clearer?

- one dbh per test object = one dbh per insert/select/rollback

  We have isolation.

- test suite runs your tests independently

  Currently in series but in no particular order, but if you're on a
  multi-processor machine then maybe you'd prefer to run them in
  parallel? Or, with a cluster, on separate machines? These features
  are just a twinkle in the eye for now. One day somebody with a
  three-hour test suite will implement them because it would save time
  to do so... and might be easier than optimising the tests.
- change your magic constants with ease

  I can add a new test like this,

      sub test_elephant {
          my $self = shift;
          my $string = "elephant";
          $self->part_insert($string);
          $self->part_select($string);
      }

  when I decide that the tble shall also accept strings.

- rollback happens ASAP, whether tests pass/fail/die

  In a general way, for any other test methods we declare. This starts
  to look like a base class My::DatabaseTestCase if you have many
  groups of tests to write.

- doesn't even try the select if the insert fails

  The failure exception or die takes control off to the framework and
  then to tear_down. No point trying the select; you expect it to
  fail. If you want to ensure that the select _does_ fail when run
  without the insert, that's a separate test:

      sub test_selectfail {
          my $self = shift;
          $self->assert_raises('Test::Unit::Failure',
                               sub { $self->part_select("Elvis") });
      }

  It's probably a useful test, too: a fresh dbh, select, failure,
  catch the failure and be happy with it.

I hope this gives you some ideas. Sorry if it's too simple for you,
but I haven't seen the tests you already have.

Matthew #8-) |
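The claim that rollback happens whether tests pass, fail, or die can be seen without a database at all. Here is a pure-Perl sketch of the pattern — no DBI, no Test::Unit; the "framework" is just an eval, and the handle is a plain flag — showing why a tear_down-style hook beats END-based cleanup:

```perl
use strict;
use warnings;

my @log;           # records what happened, in order
my $dbh_open = 1;  # stands in for a real database handle

sub rollback { $dbh_open = 0; push @log, 'rollback' }

# What a test framework does for each test: run it, trap any failure,
# then run the tear_down hook regardless of the outcome.
sub run_test {
    my ($test) = @_;
    eval { $test->() };                 # assertion failures arrive as die()
    push @log, 'caught failure' if $@;
    rollback();                         # tear_down always runs, and runs now
}

run_test(sub { push @log, 'insert'; die "assert_equals failed\n" });

# @log is now: insert, caught failure, rollback
# With END-based cleanup, the rollback would wait until perl exits --
# hours later if you're sitting in the Tk runner looking at results.
```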
From: Jean-Louis L. <jl...@so...> - 2004-10-26 12:16:47
|
Hello,

I've found out a trivial bug in Test::Unit, which btw solves this
problem:
http://sourceforge.net/mailarchive/forum.php?thread_id=3232369&forum_id=2441

Test::Unit::TestCase::list_tests does grab the testsub names from
@TESTS as documented, but then sticks them in a hash, probably to
remove duplicates:

    sub list_tests {
        my $class = ref($_[0]) || $_[0];
        # ...
        my %tests = map {$_ => ''} @tests;
        return keys %tests;
    }

I experimented with a fix in a subclass of T::U::TestCase:

    sub list_tests {
        my $class = ref($_[0]) || $_[0];
        # ...
        my %seen;
        return grep { !$seen{$_}++ } @tests;
    }

I see that the latest release of the module dates back to 2002, and
the latest message on this ML is one year old. Is the module still
alive? Who's in charge? If I take the time to make a proper patch,
will it be incorporated and will a new version be released?

Thanks,

Jean-Louis Leroy
Sound Object Logic
http://www.soundobjectlogic.com |
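The bug above is easy to demonstrate outside the module. A standalone sketch contrasting the two implementations — the @tests list here is made up for illustration, with a deliberate duplicate so the de-dup matters:

```perl
use strict;
use warnings;

# The documented order, as it would come from @TESTS:
my @tests = qw( test_insert test_select test_delete test_insert );

# Current behaviour: de-dups, but keys() returns the list in hash
# order, so the documented @TESTS order is lost.
my %tests = map { $_ => '' } @tests;
my @hash_order = keys %tests;

# Proposed fix: grep with a %seen hash de-dups just the same,
# but preserves first-occurrence order.
my %seen;
my @kept_order = grep { !$seen{$_}++ } @tests;
# @kept_order is (test_insert, test_select, test_delete)
```

Both versions return three unique names; only the grep version is guaranteed to return them in the order they were declared.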
From: Bret P. <br...@pe...> - 2003-10-16 21:59:17
|
At 10:27 PM 10/14/2003, Dmitry Diskin wrote:
>Neat :) As far as I can see, the only drawback of this approach is the
>waste of time?

I'd like to second this approach. If you think it is a waste of time,
just remove the first two tests. If you want to keep them, then they
must be worth the time. I can see reasons either way.

Bret

_____________________________________
Bret Pettichord, Software Tester
Book - www.testinglessons.com
Consulting - www.pettichord.com
Blog - www.io.com/~wazmo/blog
Hotlist - www.testinghotlist.com |
From: Phlip <pl...@sy...> - 2003-10-15 11:37:09
|
> >>sub test_003_delete {
> >
> >     test_002_select();
> >
> >>... delete the above record and do some tests
> >>}
> >
> > ;-)
>
> Neat :) As far as I can see, the only drawback of this approach is
> the waste of time?

As one develops, one hits the test button after changing 1 to 10
lines. If that gets slow, one spends a few minutes figuring out why
and speeding it up. The method of last resort is to split the current
folder, so its local tests stay fast. Run the global tests before and
after integrating.

If the database can't run three transactions per test run, replace it?

--
Phlip |
From: Dmitry D. <dd...@ic...> - 2003-10-15 03:27:14
|
Phlip wrote:
>>Can you suggest a structure of tests for testing insert, select and
>>delete operations? I suppose that I can write something like this:
>>
>>sub test_001_insert {
>>... insert record and do some tests
>>}
>>
>>sub test_002_select {
>
>     test_001_insert();
>
>>... select record inserted in previous test and do some tests
>>}
>>
>>sub test_003_delete {
>
>     test_002_select();
>
>>... delete the above record and do some tests
>>}
>
> ;-)

Neat :) As far as I can see, the only drawback of this approach is the
waste of time?

--
Dmitry. |
From: Phlip <pl...@sy...> - 2003-10-14 21:43:24
|
> Can you suggest a structure of tests for testing insert, select and
> delete operations? I suppose that I can write something like this:
>
> sub test_001_insert {
> ... insert record and do some tests
> }
>
> sub test_002_select {

      test_001_insert();

> ... select record inserted in previous test and do some tests
> }
>
> sub test_003_delete {

      test_002_select();

> ... delete the above record and do some tests
> }

;-)

--
Phlip |