From: Alias <ali...@gm...> - 2006-01-11 12:47:21
Hi Marcus,

Thanks for your response - I'm glad someone else is listening! I've been doing lots of reading on this, and I agree that integration testing is the correct term for what I'm talking about. This is definitely linked closely with unit testing, and I'm very keen to approach the problem of reliably testing UI, which is generally accepted as the Achilles heel of unit testing. That's a real shame, because the benefits are quite apparent to me.

I'll keep the list informed of any pertinent discoveries or conclusions I reach.

Thanks,
Alias

On 1/11/06, コグラン マーカス <ma...@de...> wrote:
> Hi Alias,
>
> Extremely well said. You have mirrored many of my own, and I'm sure many others', thoughts. I'm not sure about the hows, but the whys are definitely all there.
>
> Perhaps the first step could be a change of definition/mindset. How about 'Integration Tests' instead of 'Unit Tests' for what you're talking about? Yeah, I know - same problem, different name - but there are a lot of presumptions and expectations attached to a name, especially something like 'Unit Test'. After all, a unit test should be a unit test. That's what it was designed for. That's what it's good at.
>
> Anyway, enough rambling. I would definitely be interested in discussing this point more with anyone interested.
>
> Hope you're all well,
>
> Marcus.
>
> ma...@de...
>
> -----Original Message-----
> From: asu...@li... [mailto:asu...@li...] On Behalf Of asu...@li...
> Sent: Saturday, January 07, 2006 1:24 PM
> To: asu...@li...
> Subject: Asunit-users digest, Vol 1 #110 - 1 msg
>
> Today's Topics:
>
>   1. Re: Re: Testing asynchronous functions (Alias)
>
> --__--__--
>
> Message: 1
> Date: Fri, 6 Jan 2006 14:23:31 +0000
> From: Alias <ali...@gm...>
> To: asu...@li...
> Subject: Re: [Asunit-users] Re: Testing asynchronous functions
> Reply-To: asu...@li...
>
> Hi Luke,
>
> Sorry for the delay - I've been really busy leading up to the holidays, and my Gmail is so flooded with mailing lists that I sometimes miss messages.
>
> I'm thinking that EventDrivenTestCase is the way to go. What we need is a test case that will basically listen to events - that means that the teardown & setup stuff would have to be called every time.
>
> I know what you're saying about testing too much. The thing, though, is that I think there's a fundamental limit to unit testing which makes it less useful than it could be. This (admittedly angry, ranty and perhaps a little over-the-top) article begins to touch on an important point:
> http://www.pyrasun.com/mike/mt/archives/2004/07/11/15.41.04/index.html
>
> Essentially, unit tests are great, but where they fall down is their inability to deal with interactions between different parts of the system. Ideally, I'd like to write my tests and then run my application as intended, with the unit tests running as well, so that I can actually test the entire system as it runs. Each test case can be run once at the start to test each class in isolation, but lots of tests need to be run in some kind of production environment. I guess there is a line of distinction to be drawn between exception handling, acceptance tests and unit testing, but for me unit testing would be much more useful if I could write it in conjunction with the UI code and then compile, click stuff, and if unit tests fail, get exact info about which tests have failed and why.
> Sometimes it just doesn't make sense for a class to be tested completely in isolation - some problems only become apparent when classes are used in conjunction, even though all the tests pass.
>
> Unit testing for individual classes has been very useful to me, and I'd like to extend that level of coverage to the UI as far as possible. I know it's pretty much impossible to test human interaction completely accurately, but it must surely be possible to test things in a step-by-step manner.
>
> However, I'm beginning to think a different methodology is needed for the kind of testing I'm thinking of. Here are my problems with unit testing:
>
>   * Unit testing doesn't really make sense for UI development - I won't go into why this is, because it's pretty self-apparent
>   * Unit testing tests such small pieces of functionality that it is only really useful in the context of single, complex classes which have simple, defined inputs & outputs and minimal interaction with other classes
>   * Unit testing can't easily be used to test asynchronous functions
>   * It is difficult, if not impossible, to unit test every piece of functionality in your classes while they are interacting in a production environment
>   * It is entirely possible (and perhaps likely) that an application could be totally broken and badly engineered despite passing a huge number of unit tests
>
> This is not to say that unit testing is useless or a waste of time. However, in its current form, its use in UI development is severely limited. Let's look at some of the goals we would expect from a perfect testing system:
>
>   * Test code must be completely separate from production code, so that when I am confident that my application works, I can remove the test harness and be reasonably sure that the application will still work reliably - unit testing accomplishes this very well.
>   * I need to be able to test an application as it is running - that is, in its production form, in a live environment, doing actual, specific tasks - not just passing tests that I myself have designed
>   * I would like to produce detailed feedback and logs from the tests themselves, rather than having a separate logging process
>
> Although it is possible to accomplish many of these things in ASUnit, it's difficult to do so in the framework as it stands. Hence, I think these problems are probably to do with the very concept of unit testing itself, rather than any specific implementation.
>
> What I'm thinking of, as a solution, is this: what I need is the ability to call some kind of separate test functionality from inside my production code. For example:
>
> // I've just received a response from an AMF gateway
> var myAmfResult:Object = response;
> trace(RuntimeTest.runTest("testAmfResponseIsValid", this, myAmfResult));
>
> My ContinuousTest class would then run the appropriate test on the live instance (passed as a parameter to the test), and from then on you're in similar territory to unit tests, except you don't have the teardown/setup functionality - this is because you wouldn't have it in production either, and you can't rely on always having a fresh instance of something to test on. Ideally, I'd like to combine my runtime tests with the unit tests (maybe in the same class file?) so that the entire testing system can be removed without leaving any test code in the application.
>
> This saves the work that would otherwise be consumed in creating complex test cases & mocks, essentially working around the limitations of the unit test framework.
>
> Note the use of the trace() function - this is deliberate, because we can use the "remove trace actions" compiler directive to detach the runtime test.
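The RuntimeTest idea above can be sketched out. This is a minimal illustration in TypeScript rather than ActionScript; the name RuntimeTest.runTest comes from the email itself, but the registry shape, the register method, and the PASS/FAIL result format are assumptions, not anything the thread specifies:

```typescript
// Hypothetical sketch of the proposed RuntimeTest: production code calls a
// named check against a live object and gets a log string back, so a
// trace-stripping build step can remove the call without side effects.

type RuntimeCheck = (subject: unknown, data: unknown) => void;

class RuntimeTest {
  private static tests = new Map<string, RuntimeCheck>();

  // Register a check once, e.g. alongside the ordinary unit tests.
  static register(name: string, check: RuntimeCheck): void {
    RuntimeTest.tests.set(name, check);
  }

  // Called from production code; returns a string instead of throwing,
  // so stripping the trace/log call detaches the runtime test entirely.
  static runTest(name: string, subject: unknown, data: unknown): string {
    const check = RuntimeTest.tests.get(name);
    if (!check) return `[RuntimeTest] UNKNOWN TEST: ${name}`;
    try {
      check(subject, data);
      return `[RuntimeTest] PASS: ${name}`;
    } catch (e) {
      return `[RuntimeTest] FAIL: ${name} - ${(e as Error).message}`;
    }
  }
}

// Example check: validate an AMF-style response object (field name assumed).
RuntimeTest.register("testAmfResponseIsValid", (_subject, data) => {
  const result = data as { code?: string };
  if (typeof result !== "object" || result === null || !result.code) {
    throw new Error("response missing 'code' field");
  }
});
```

A production call site would then look like `console.log(RuntimeTest.runTest("testAmfResponseIsValid", this, myAmfResult))`, mirroring the trace() example in the email; because the check runs on the live instance, there is no setup/teardown, exactly as the proposal describes.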
> Each call to RuntimeTest could return the test results, or simply an empty string, and pass the output to a logger if we didn't want it to clutter up the output panel.
>
> Does this make sense? What do you guys think?
>
> Cheers,
> Alias
>
> On 12/8/05, Luke Bayes <lb...@gm...> wrote:
> > Hey Rob,
> >
> > Sorry about the delayed response on my part. I've been talking with Ali about this issue and giving it some thought myself.
> >
> > What I'm hearing is that you need a single TestCase that will execute some action, pause for an arbitrary amount of time, and then perform assertions around each test method, rather than once for a test case.
> >
> > My first guess is that this should be possible by overriding the runBare method of TestCase. If you're working in AS 3, this might have a larger impact because of how we implemented test failures. In AS 2, failures were simply transmitted to the Runner from anyone that wanted to pass them; in the AS 3 build, failures are transmitted by "throwing" a TestFailure object up the call stack (this is closer to the way JUnit works). This may cause some problems if the called methods are no longer in the same call stack that the runner initiated... It might still work fine - I'm just not sure. At a minimum, there should be a method in TestCase or Assert that allows us to transmit a failure directly - the overridden runBare could then pass failures up to that method...
> >
> > At the end of the day, it still feels like you're trying to test too much at once. If you're trying to test features of a preloader, there should be a way to do that using the AsynchronousTestCase as provided. But I would encourage you to decouple the testing of the preloader functionality from the testing of the features being loaded. Admittedly, this can be a complex problem and I believe even Java developers still struggle with it.
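The runBare override Luke describes can be sketched as well - a thrown TestFailure caught and recorded directly, so a subclass whose callbacks run outside the runner's original call stack can still report failures. The names TestCase, TestFailure, runBare and PausableTestCase come from the thread; this TypeScript rendering and all method bodies are assumptions for illustration, not ASUnit's actual implementation:

```typescript
// Hypothetical sketch: assertion failures are "thrown" up the call stack as
// TestFailure objects (the AS 3 / JUnit style described in the email), and an
// overridable runBare catches and records them instead of letting them escape.

class TestFailure extends Error {
  constructor(message: string) {
    super(message);
    // Keep instanceof checks working regardless of compile target.
    Object.setPrototypeOf(this, TestFailure.prototype);
  }
}

class TestCase {
  failures: string[] = [];

  protected assertTrue(condition: boolean, message: string): void {
    if (!condition) throw new TestFailure(message);
  }

  // Single test body for brevity; a real framework iterates test methods.
  protected runTest(): void {}

  // The hook to override: catch the thrown TestFailure and record it
  // directly, rather than relying on the runner's call stack.
  runBare(): void {
    try {
      this.runTest();
    } catch (e) {
      if (e instanceof TestFailure) this.failures.push(e.message);
      else throw e;
    }
  }
}

class PausableTestCase extends TestCase {
  protected runTest(): void {
    this.assertTrue(1 + 1 === 2, "arithmetic still works");
    this.assertTrue(false, "deliberate failure to demonstrate reporting");
  }
}
```

Running `new PausableTestCase().runBare()` leaves the recorded failure message in `failures` rather than aborting the run, which is the "transmit a failure directly" behavior the email asks for.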
> >
> > If you're interested in getting into the sources, please let me know, as the CVS repository on SourceForge is no longer up to date. It was getting really messy in there, so we moved over to Subversion.
> >
> > Also, if you're wanting to add these features, let's come up with a different name, since AsynchronousTestCase is already being used for the feature that allows pausing prior to the test case execution. Maybe PausableTestCase? EventDrivenTestCase? IntervalDrivenTestCase? Or perhaps we should just refactor these features into the existing AsynchronousTestCase?
> >
> > Another possible alternative is that you could simply use what is already there and implement a unique test case class for each asynchronous test that you want to verify...
> >
> > Let me know what you think -
> >
> > Luke Bayes
> > www.asunit.org
>
> --__--__--
>
> _______________________________________________
> Asunit-users mailing list
> Asu...@li...
> https://lists.sourceforge.net/lists/listinfo/asunit-users
>
> End of Asunit-users Digest