From: Johannes N. <joh...@gm...> - 2006-01-30 00:10:24
|
Not much to say about it:
in asunit.framework.Assert
in the assertTrue function
throw IllegalOperationError("Invalid argument count");
should be
throw new IllegalOperationError("Invalid argument count");
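For context, a minimal AS3 sketch of how that corrected guard might sit inside an assertTrue implementation; everything around the throw statement is assumed for illustration, and only the corrected line itself comes from the report above:

package {
    import flash.errors.IllegalOperationError;

    // Hypothetical stand-in for the real Assert class - not AsUnit source.
    public class AssertSketch {
        public static function assertTrue(... args):void {
            if (args.length < 1 || args.length > 2) {
                // Without "new", IllegalOperationError(...) is evaluated as a cast
                // rather than a constructor call, so no error object is created.
                throw new IllegalOperationError("Invalid argument count");
            }
            var message:String = (args.length == 2) ? String(args[0]) : "";
            var condition:Boolean = Boolean(args[args.length - 1]);
            if (!condition) {
                // The real framework reports the failure through its runner; trace stands in here.
                trace("Assertion failed: " + message);
            }
        }
    }
}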
cheers
--
j:pn
|
|
From: Luke B. <lb...@lu...> - 2006-01-27 03:55:17
|
Hey Andreas,

Thanks so much for your contribution! We'll be sure to include this change in the next release.

Luke Bayes
www.asunit.org <http://www.asunit.org/>

_____

From: Peter Andreas Mølgaard
Sent: Tuesday, January 24, 2006 11:18 AM
Subject: AsUnit & Flex 1.5

Hi Luke and Ali,

Thank you very much for having been so privileged to benefit from the AsUnit framework. I have, however, encountered a bug when using it from within Flex 1.5 apps.

Class file: com.asunit.util.LocalConnClient
Line number: 56

Problem Desc: /com/asunit/util/LocalConnClient.as:52 A return statement is required in this function.
Code:

return function() {
    arguments.unshift(fname);
    arguments.unshift("execResolve");
    arguments.unshift(ref.serverId);
    return ref.execMethod.apply(this, arguments);
}

Solution:
Desc.: Cast the function instance prior to returning it.
Code:

return Function( function() {
    arguments.unshift(fname);
    arguments.unshift("execResolve");
    arguments.unshift(ref.serverId);
    return ref.execMethod.apply(this, arguments);
} )

I don't know why, and it was only sometimes that I experienced it, until I made an explicit cast. If you can somehow use it, fine; it's just a small change that helped me work around the issue. Keep up the good work.

De bedste hilsner / Best regards,
Peter Andreas Mølgaard
|
From: Alias <ali...@gm...> - 2006-01-22 13:45:51
|
Hi Luke,
> - Before you go too far, you should make absolutely certain that compiling
> with "trace" turned off, actually omits objects (other than string
> primitives) that were referenced in trace statements from the SWF. I suspect
> that the only way to remove trace statements is to perform the removal
> "after" the source has been tokenized. I could certainly be wrong about
> this, but it is possible that the compiler does not re-run dependency
> analysis after trace removal. If this is the case, wrapping these references
> in trace statements probably would not work.
I've done some cursory tests - it seems that a singleton class,
wrapped in a trace statement *does* get removed by the compiler if it
is only referenced by the trace statements. My test code below:
//class
class com.proalias.SomeSingleton {

    private static var $instance : SomeSingleton;

    private function SomeSingleton() {
    }

    public function helloWorld() : String {
        return ("hello world");
    }

    /**
     * @return singleton instance of SomeSingleton
     */
    public static function get instance() : SomeSingleton {
        if ($instance == null)
            $instance = new SomeSingleton();
        return $instance;
    }
}
//in .fla
import com.proalias.SomeSingleton;
trace("someSingleton:"+SomeSingleton.instance.helloWorld());
Does anyone see any potential problems with this approach?
Let me know,
Alias
|
|
From: Luke B. <lb...@gm...> - 2006-01-20 17:08:22
|
Rob, Answers inline below:

> Ok - I wouldn't be proposing to create a situation whereby a project
> becomes dependent on, or needs to ship with, AsUnit. The integration
> testing stuff would be another part of the test harness, or would
> allow you to easily connect the asserts to your own debug/reporting
> code. There are three different user stories here:
>
> 1) I want to do test driven development, and then do functional
> testing as I go, and know if I break something.

This sounds like an excellent goal to me!

> 2) I want to have accurate debug information when my code breaks

This also sounds like something we should work toward.

> 3) I want to ship the product with this debug information to the
> client, so that I can get accurate bug reports

This makes sense to me. I wonder if a simple logging utility would work for this. Then whenever an exception is caught at the top of your thread, you could spit out the last one or more log entries for bug reports?

> So, it's important to separate these concerns. The first two are my
> main concern - the third one has come out of the process of figuring
> out how to do the first two. Shipping with asserts on is a totally
> separate issue to having a more reliable test harness - I wouldn't
> want to create a system which forces the user to do one or the other.

This makes a lot of sense to me. I was misunderstanding you, and thought that the 3rd was actually the goal...

> However, as the owners of the project, the licensing is absolutely
> your decision.

Unfortunately, I'm not sure if that's true anymore. I could be wrong on this point, but I'm afraid that since we have released these sources under the LGPL, any "derivative works" must be under the same or better license. I think the only way we could change the license at this point is if we had kept a non-LGPL branch in place that no one else contributed to. Since there actually have been very few contributions, we could probably branch the sources from the date of release and get it up to speed faster than starting from scratch - but there would need to be a really compelling reason to go through this.

> That's also an excellent point. Do you know of any formal measurements
> for reliability requirements? Say,
> financial/military/consumer/experimental?

Sadly, no, but I have always been fascinated by the incredible rise in product costs for financial/military/medical work of all kinds. It seems ludicrous what companies charge for hospital equipment, but the QA that has to go into those products is amazingly expensive. Software is probably doubly so. I suspect there are formal descriptions for software requirements - probably similar to database normalization...

1st normal form == prototype
2nd normal form == consumer grade
3rd normal form == financial
4th normal form == medical/military

I would be surprised if there isn't something like that available.

Thanks,

Luke Bayes
www.asunit.org
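A rough sketch of the kind of logging utility suggested above for the third user story - a hypothetical AS3 ring buffer (none of these names are part of AsUnit) that keeps the last handful of entries so a top-level exception handler can dump them into a bug report:

// Hypothetical sketch, not part of AsUnit.
package com.example.logging {

    public class RecentLog {

        private static const MAX_ENTRIES:int = 50;
        private static var entries:Array = [];

        // Record a message, discarding the oldest entry once the buffer is full.
        public static function log(message:String):void {
            entries.push(new Date().toString() + " " + message);
            if (entries.length > MAX_ENTRIES) {
                entries.shift();
            }
        }

        // Called from a top-level catch block to build a bug-report payload.
        public static function dump():String {
            return entries.join("\n");
        }
    }
}

A top-level handler could then trace RecentLog.dump(), or post the string to a server, whenever an unexpected error reaches it.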
|
From: Alias <ali...@gm...> - 2006-01-20 10:23:32
|
Hi Luke,

Thanks for your mail.

> - In a desktop application, one has much more freedom in terms of file size,
> but the swf apps that we have developed tend to be very sensitive to even a
> few kilobytes of inflation. I believe the non-visual AsUnit 2.x framework is
> 7KB to 14KB right now.
>
> - The SWF runtime is relatively slow in terms of memory and CPU usage. I
> haven't yet encountered a complex SWF application that left much in terms of
> system resources to spare - especially during periods of heavy activity, the
> very times when you would most likely encounter failures.

Both fair points, no argument there.

> - Before you go too far, you should make absolutely certain that compiling
> with "trace" turned off, actually omits objects (other than string
> primitives) that were referenced in trace statements from the SWF. I suspect
> that the only way to remove trace statements is to perform the removal
> "after" the source has been tokenized. I could certainly be wrong about
> this, but it is possible that the compiler does not re-run dependency
> analysis after trace removal. If this is the case, wrapping these references
> in trace statements probably would not work.

That's an excellent suggestion, I'll look into that. My impression was that it was a precompile directive, but now I think about it, that's a totally unfounded assumption.

> - Finally - and most importantly, there may be licensing issues associated
> with including AsUnit inside of a commercial product. I'm not a lawyer, so I
> could be wrong on this issue, but the way that I understand the LGPL, is
> that if one ships a library covered by this license (as AsUnit is), it must
> be compiled separately from the associated product - and linked
> "dynamically" at runtime. This works fine for a test framework, because you
> don't actually "ship" the test harness as part of the product. It is (today)
> compiled separately. This issue will probably prevent most projects from
> being able to include AsUnit at runtime. On this topic - if this idea gains
> traction - we will definitely consider moving to a different license in the
> future - but I'm unclear about what freedom we really have in this issue (at
> least for AsUnit 2.x)...
>
> Overall, I definitely support the idea of extending AsUnit to work in the
> context of integration (or UI) tests, but I'm not convinced that we should
> do much work with the idea that people are going to ship AsUnit in released,
> production code.

Ok - I wouldn't be proposing to create a situation whereby a project becomes dependent on, or needs to ship with, AsUnit. The integration testing stuff would be another part of the test harness, or would allow you to easily connect the asserts to your own debug/reporting code. There are three different user stories here:

1) I want to do test driven development, and then do functional testing as I go, and know if I break something.
2) I want to have accurate debug information when my code breaks.
3) I want to ship the product with this debug information to the client, so that I can get accurate bug reports.

So, it's important to separate these concerns. The first two are my main concern - the third one has come out of the process of figuring out how to do the first two. Shipping with asserts on is a totally separate issue to having a more reliable test harness - I wouldn't want to create a system which forces the user to do one or the other.

However, as the owners of the project, the licensing is absolutely your decision.

> - I think we should also consider reliability requirements. Different
> projects can have wide variance in terms how reliable they must be. For
> example, it is catastrophic when there is a software failure in the space
> shuttle or a pace maker. I'm not sure that most web apps actually have the
> same kind of uptime requirements. I'm not saying that failures should be
> taken lightly - I'm just saying that one needs to evaluate the cost of that
> security and make appropriate trade offs.

That's also an excellent point. Do you know of any formal measurements for reliability requirements? Say, financial/military/consumer/experimental?

Let me know,
Alias
|
From: Matt F. <mat...@gm...> - 2006-01-20 01:56:13
|
Yep that looks like its going to be really good, any ideas when it will be released? On 1/19/06, Ali Mills <ali...@gm...> wrote: > I just read on Martin Fowler's bliki > (http://martinfowler.com/bliki/TestDouble.html) that Gerard Meszaros > is working on a book to capture patterns for using the various Xunit > frameworks. The book outline can be found at > http://tap.testautomationpatterns.com:8080/Book%20Outline.html. > > Could be an interesting book... > > > Ali > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log fi= les > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://sel.as-us.falkag.net/sel?cmdlnk&kid=103432&bid#0486&dat=121642 > _______________________________________________ > Asunit-users mailing list > Asu...@li... > https://lists.sourceforge.net/lists/listinfo/asunit-users > |
|
From: Luke B. <lb...@gm...> - 2006-01-19 20:35:22
|
Sorry - one more point about the inclusion of test code in production software... - I think we should also consider reliability requirements. Different projects can have wide variance in terms how reliable they must be. For example, it is catastrophic when there is a software failure in the space shuttle or a pace maker. I'm not sure that most web apps actually have the same kind of uptime requirements. I'm not saying that failures should be taken lightly - I'm just saying that one needs to evaluate the cost of that security and make appropriate trade offs. Thanks, Luke Bayes www.asunit.org |
|
From: Luke B. <lb...@gm...> - 2006-01-19 20:11:11
|
Hey Alias,

I just wanted to make some observations related to shipping test code in a production environment, and specifically to the inclusion of AsUnit in a commercial product.

- In a desktop application, one has much more freedom in terms of file size, but the swf apps that we have developed tend to be very sensitive to even a few kilobytes of inflation. I believe the non-visual AsUnit 2.x framework is 7KB to 14KB right now.

- The SWF runtime is relatively slow in terms of memory and CPU usage. I haven't yet encountered a complex SWF application that left much in terms of system resources to spare - especially during periods of heavy activity, the very times when you would most likely encounter failures.

- Before you go too far, you should make absolutely certain that compiling with "trace" turned off, actually omits objects (other than string primitives) that were referenced in trace statements from the SWF. I suspect that the only way to remove trace statements is to perform the removal "after" the source has been tokenized. I could certainly be wrong about this, but it is possible that the compiler does not re-run dependency analysis after trace removal. If this is the case, wrapping these references in trace statements probably would not work.

- Finally - and most importantly, there may be licensing issues associated with including AsUnit inside of a commercial product. I'm not a lawyer, so I could be wrong on this issue, but the way that I understand the LGPL, is that if one ships a library covered by this license (as AsUnit is), it must be compiled separately from the associated product - and linked "dynamically" at runtime. This works fine for a test framework, because you don't actually "ship" the test harness as part of the product. It is (today) compiled separately. This issue will probably prevent most projects from being able to include AsUnit at runtime. On this topic - if this idea gains traction - we will definitely consider moving to a different license in the future - but I'm unclear about what freedom we really have in this issue (at least for AsUnit 2.x)...

Overall, I definitely support the idea of extending AsUnit to work in the context of integration (or UI) tests, but I'm not convinced that we should do much work with the idea that people are going to ship AsUnit in released, production code.

I may very well be out of touch with what other folks want - so perhaps other people could weigh in on this issue? Is there anyone out there that would ship test code in an actual product? Anyone that most definitely would not?

Thanks,

Luke Bayes
www.asunit.org
|
From: Ali M. <ali...@gm...> - 2006-01-18 23:12:58
|
I just read on Martin Fowler's bliki (http://martinfowler.com/bliki/TestDouble.html) that Gerard Meszaros is working on a book to capture patterns for using the various Xunit frameworks. The book outline can be found at http://tap.testautomationpatterns.com:8080/Book%20Outline.html. Could be an interesting book... Ali |
|
From: Alias <ali...@gm...> - 2006-01-18 11:55:30
|
Hi Luke, That's cool, that was what I figured. Good to know for sure though, Cheers, Alias On 1/17/06, Luke Bayes <lb...@gm...> wrote: > Hey Rob, > > Sorry for the confusion, the XmlTransport class has been replaced with > asunit.framework.TestCaseXml... > > Essentially, it's just a wrapper for an XML objecy that has a built-in > onload handler - which calls "onXmlLoaded" on the concrete test case. > > Please let us know if you have any other questions. > > > Thanks, > > Luke > > > > On 1/17/06, Alias < ali...@gm...> wrote: > > > > Hi All, > > > > I've been wondering - in Luke's recent blog post on Asynchronous testin= g: > > > http://lukebayes.blogspot.com/2005/11/testing-asynchronous-functionality.= html > > > > the example code makes mention of a class called XmlTransport. This > > looks really useful, but doesn't appear to be in any of my ASUnit > > sources. Could I just have a bad version, or is this just a stub/not > > meant to be there? Am I looking in the wrong places? > > > > I'm beginning to feel a bit stupid... > > > > Cheers, > > Alias > > > > > > ------------------------------------------------------- > > This SF.net email is sponsored by: Splunk Inc. Do you grep through log > files > > for problems? Stop! Download the new AJAX search engine that makes > > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > > > http://sel.as-us.falkag.net/sel?cmdlnk&kid=103432&bid#0486&dat=121642 > > _______________________________________________ > > Asunit-users mailing list > > Asu...@li... > > https://lists.sourceforge.net/lists/listinfo/asunit-users > > > > |
|
From: Alias <ali...@gm...> - 2006-01-18 10:22:41
|
Hi Ali, David,
Re: FlashForward - I really hope you guys win!
Anyway -
Re: shipping code with assertions on...
First off, that discussion applies to C/C++, where the game is a
little different. However, there are a few things that are worth
bearing in mind:
C assertions are not the same as ASUnit assertions - in C/C++, the
program will completely stop when an assertion fails, which we
obviously do not want. However, what we *do* want is for the program
to quietly tell us (the developers) that it's in trouble, and let us
know about it, real quick-like.
I've thought of a couple of solutions for this.
All assertions/test methods would be called through a singleton via
trace statements, something like this:
trace(Sys.printLn(IntegrationTester.assert("MyClassTestCase","testDataIsValid")));

Let me unwrap that function a little, just for clarity:

trace(
    Sys.printLn(                                  <-- this will ensure we direct any failed
                                                      assertions to a logger or trace output
        IntegrationTester.getInstance.runTest(    <-- we call a unit test from production code
            "MyClassTestCase", "testDataIsValid", this   <-- specify the test case, test method,
                                                             and pass a reference to the object
                                                             being tested
        )
    )
);
Each class would have to import IntegrationTester (or whatever we end
up calling it), but if we turn off trace actions, it should be left
out by the compiler anyway. Worst case you ship with a few bytes of
extra pcode ;)
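To illustrate the shape of the singleton being proposed here: IntegrationTester does not exist in AsUnit, so every name and behaviour below is an assumption, including the idea that a test method signals failure by throwing an Error (which matches the AS3 framework behaviour discussed elsewhere in this thread, not the AS2 one). A minimal AS2 sketch:

// Hypothetical AS2 sketch only - not part of AsUnit.
class com.example.IntegrationTester {

    private static var instance:IntegrationTester;
    private var testCases:Object;

    private function IntegrationTester() {
        testCases = {};
    }

    public static function getInstance():IntegrationTester {
        if (instance == null) {
            instance = new IntegrationTester();
        }
        return instance;
    }

    // A bootstrap class registers test case instances by name at startup.
    public function addTestCase(name:String, testCase:Object):Void {
        testCases[name] = testCase;
    }

    // Look up the named test case and run one test method against a live object.
    public function runTest(caseName:String, methodName:String, subject:Object):String {
        var testCase:Object = testCases[caseName];
        if (testCase == null || testCase[methodName] == undefined) {
            return "No such test: " + caseName + "." + methodName;
        }
        try {
            testCase[methodName](subject);
        } catch (e:Error) {
            return "FAILED " + caseName + "." + methodName + ": " + e.message;
        }
        return ""; // quiet output when the assertion holds
    }
}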
Once the code is ready to be released, you could either:
a)Turn off trace actions, and send it as is
b)Leave the integration testing framework intact, and maybe have it
report failed assertions to your server, either silently, or
explicitly, in a kind of Mozilla Quality Feedback way. I
think this would be very useful in a pre-release environment.
Now, this approach appeals to me for a few reasons,
1) Test code reuse - in theory, if your tests are written well, they
should be able to pass in production the same as they do in isolation,
or if they can't, you should *know* this, and know not to call them
from production code. I'm pretty certain that at least 30% of your
test methods should be able to run on production code.
2) Continuous human interaction feedback - it's all very well to have
all the code tested in isolation, but, as we all know, other humans
will *always* find ways to break code which we can never conceive of
by ourselves. If you are working with the XP/Agile workflow, you
should be regularly releasing functional builds to the
client/stakeholders, which they can manually test, and when that code
breaks, you will have a much better chance of knowing why.
3) My UI tests and integration tests can be developed in parallel,
rather than leaving the integration until all the unit tests are done,
at which point the project design may be past the point of no return.
4) I'm using eclipse, and I'm in the habit of calling
trace("ClassName.methodName(params)")
at the top of my code, and I have eclipse code templates to do this,
so using a similar template to call test methods would be pretty
trivial. It would also be backward compatible with flash 6, which is
pretty important to a lot of people.
Anyway - how does that sound?
However, I have one major concern. It is possible that the production
code would somehow become dependent on the testing code, so it might
be necessary to enforce some kind of code hygiene or perhaps a cloning
algorithm (like the XmlTestcase uses) to prevent this, or maybe just
regularly test with tests turned off as well as on.
I'm eager to hear your views on this,
Sincerely,
Alias
On 1/17/06, Ali Mills <ali...@gm...> wrote:
> Hey Alias,
>
> Thanks for the congratulations. We're pretty excited that AsUnit was
> nominated and accepted to the film festival for its technical merit.
> Participating at Flashforward as both presenters and film festival
> finalists is a great way to start the new year!
>
> As far as your post goes, it makes sense. And, your suggestion of
> using "trace()" to remove test code from shipping production code is
> an interesting one. Luke and I have thought about ways to use AsUnit
> for integration testing. We're sure that you could use the framework
> to do so, but we don't have any solid conclusions on exactly how just
> yet. Some integration testing solutions that I've heard of weave test
> code into production code - an unappealing practice that I'd push
> strongly to avoid in any released AsUnit integration testing package.
> It sounds like your idea of using "trace()" would remove some test
> code but what about the rest? How would you remove the test cases?
>
> What's your feeling on weaving tests into production code? Do you
> feel that it's acceptable to ship a small amount of test code?
>
>
> Ali
>
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> for problems? Stop! Download the new AJAX search engine that makes
> searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> http://sel.as-us.falkag.net/sel?cmdlnk&kid=103432&bid#0486&dat=121642
> _______________________________________________
> Asunit-users mailing list
> Asu...@li...
> https://lists.sourceforge.net/lists/listinfo/asunit-users
>
|
|
From: David H. <wa...@us...> - 2006-01-17 21:56:08
|
On Tue, Jan 17, 2006 at 10:54:36AM -0800, Ali Mills wrote: > What's your feeling on weaving tests into production code? Do you > feel that it's acceptable to ship a small amount of test code? Some discussion on that topic, http://c2.com/cgi/wiki?ShipWithAssertionsOn dave -- http://david.holroyd.me.uk/ |
|
From: Ali M. <ali...@gm...> - 2006-01-17 18:54:38
|
Hey Alias,

Thanks for the congratulations. We're pretty excited that AsUnit was nominated and accepted to the film festival for its technical merit. Participating at Flashforward as both presenters and film festival finalists is a great way to start the new year!

As far as your post goes, it makes sense. And your suggestion of using "trace()" to remove test code from shipping production code is an interesting one. Luke and I have thought about ways to use AsUnit for integration testing. We're sure that you could use the framework to do so, but we don't have any solid conclusions on exactly how just yet. Some integration testing solutions that I've heard of weave test code into production code - an unappealing practice that I'd push strongly to avoid in any released AsUnit integration testing package. It sounds like your idea of using "trace()" would remove some test code, but what about the rest? How would you remove the test cases?

What's your feeling on weaving tests into production code? Do you feel that it's acceptable to ship a small amount of test code?

Ali
|
From: Luke B. <lb...@gm...> - 2006-01-17 18:10:46
|
Hey Rob,

Sorry for the confusion, the XmlTransport class has been replaced with asunit.framework.TestCaseXml...

Essentially, it's just a wrapper for an XML object that has a built-in onload handler - which calls "onXmlLoaded" on the concrete test case.

Please let us know if you have any other questions.

Thanks,

Luke

On 1/17/06, Alias <ali...@gm...> wrote:
>
> Hi All,
>
> I've been wondering - in Luke's recent blog post on Asynchronous testing:
>
> http://lukebayes.blogspot.com/2005/11/testing-asynchronous-functionality.html
>
> the example code makes mention of a class called XmlTransport. This
> looks really useful, but doesn't appear to be in any of my ASUnit
> sources. Could I just have a bad version, or is this just a stub/not
> meant to be there? Am I looking in the wrong places?
>
> I'm beginning to feel a bit stupid...
>
> Cheers,
> Alias
>
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> for problems? Stop! Download the new AJAX search engine that makes
> searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> http://sel.as-us.falkag.net/sel?cmdlnk&kid=103432&bid#0486&dat=121642
> _______________________________________________
> Asunit-users mailing list
> Asu...@li...
> https://lists.sourceforge.net/lists/listinfo/asunit-users
>
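Going only by the description above, a concrete test case built on asunit.framework.TestCaseXml might look roughly like this in AS2-style syntax; the exact API beyond the onXmlLoaded callback (its signature, how the XML source is configured, and the availability of assertTrue on the subclass) is assumed, so treat this as a sketch rather than documented usage:

import asunit.framework.TestCaseXml;

// Sketch only: assumes TestCaseXml loads its XML and then calls onXmlLoaded
// on the concrete test case, as described in the message above.
class com.example.ConfigXmlTest extends TestCaseXml {

    // Signature assumed; the thread only says this method is called on the concrete test case.
    public function onXmlLoaded():Void {
        // By the time this runs, the wrapped XML object should be populated,
        // so ordinary assertions can be made against it here.
        assertTrue("xml loaded callback reached", true);
    }
}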
|
From: Alias <ali...@gm...> - 2006-01-17 17:12:22
|
Hi All,

I've been wondering - in Luke's recent blog post on Asynchronous testing:

http://lukebayes.blogspot.com/2005/11/testing-asynchronous-functionality.html

the example code makes mention of a class called XmlTransport. This looks really useful, but doesn't appear to be in any of my ASUnit sources. Could I just have a bad version, or is this just a stub/not meant to be there? Am I looking in the wrong places?

I'm beginning to feel a bit stupid...

Cheers,
Alias
|
From: Alias <ali...@gm...> - 2006-01-14 18:49:04
|
Hi Ali,
I appreciate your concerns. However - the type of UI based testing
that passage refers to is different to what I'm describing - that type
of test uses an approach where the mouse input is blindly mimicked by
software - by recording user input or whatnot. This approach is indeed
hugely unreliable, because the testing software doesn't really
understand what's going on - if the layout of the UI changes, for
example, the tests invariably fail.
The type of tests I'm talking about would be of a similar nature to
unit tests, except that instead of running on clean instances of
objects in isolation, they would run the same tests during
human-driven testing, and do the same types of checking, except during
runtime.
Say I have a testcase to validate the results of an array lookup; my unit test function would look something like this:

function testArrayLookUp() {
    var instance:ArrayLookup = new ArrayLookup();
    // instantiate a fresh ArrayLookup object
    var lookUp:Array = instance.getArrayData;
    // do the checks via some asserts:
    assertTrue("check that the array is not undefined", lookUp != undefined);
    // ... etc.
}

However, in a live environment, I'd need to run similar tests inside my application. So - how about we simply bypass the setup/teardown step, and pass an instance in as a parameter? Then the integration test would look like this:

function testArrayLookUp(instance:ArrayLookup) {
    // instead of creating a new instance and invoking setup/teardown,
    // we do the checks on the instance passed in as a parameter
    var lookUp:Array = instance.getArrayData;
    // do the checks via some asserts:
    assertTrue("check that the array is not undefined", lookUp != undefined);
    // ... etc.
}

Then, in our production code, we can do something like this:

// create the instance as it would be in a normal production environment
var myArrayLookup:ArrayLookup = new ArrayLookup();
// run the same tests on it as you would in a mock/unit test environment
trace(runtimeIntegrationTest(myTestCase, testArrayLookUp, this));
Then, when we get a real live user to sit in front of the application
and do user testing, if and when it breaks, we've got the same data as
we would if we'd had a unit test fail - or at least we get some good
time savings on test reuse.
Obviously you might want to have different tests running in
production, but it would make sense to be able to leverage the ASUnit
framework for this kind of work.
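As an aside, one possible shape for the runtimeIntegrationTest helper used in the snippet above - purely illustrative, since nothing like it ships with AsUnit, and it assumes a test method that throws an Error on failure:

// Hypothetical AS2 helper, not part of AsUnit.
function runtimeIntegrationTest(testCase:Object, testMethod:Function, subject:Object):String {
    try {
        // Run the test method in the scope of the test case, handing it the live
        // instance instead of letting setup/teardown build a fresh one.
        testMethod.call(testCase, subject);
    } catch (e:Error) {
        return "INTEGRATION FAILURE: " + e.message;
    }
    return ""; // an empty string keeps the Output panel quiet when the checks pass
}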
Am I making sense?
Let me know,
Alias
p.s. congratulations on the FlashForward 2006 technical merit nomination!
http://www.flashforwardconference.com/finalists
On 1/14/06, Ali Mills <ali...@gm...> wrote:
> I was just reading http://www.xprogramming.com/testfram.htm and came
> upon the excerpt, "I don't like user interface-based tests. In my
> experience, tests based on user interface scripts are too brittle to
> be useful. When I was on a project where we used user interface
> testing, it was common to arrive in the morning to a test report with
> twenty or thirty failed tests. A quick examination would show that
> most or all of the failures were actually the program running as
> expected. Some cosmetic change in the interface had caused the actual
> output to no longer match the expected output. Our testers spent more
> time keeping the tests up to date and tracking down false failures and
> false successes than they did writing new tests." This excerpt does a
> good job of describing a situation that I fear - a situation where UI
> changes (which are bound to occur again and again) put me in a
> position where I'm spending more time maintaining my tests than is
> necessary.
>
> Integration testing is really an interesting piece of the testing
> puzzle. While I'm not entirely convinced of its overall value,
> clearly many, many others are. Alias, it sounds like you feel a need
> for it, and I'm excited that you'll keep the list up-to-date on your
> conclusions.
>
>
> Ali
>
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> for problems? Stop! Download the new AJAX search engine that makes
> searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> http://ads.osdn.com/?ad_idv37&alloc_id=16865&opclick
> _______________________________________________
> Asunit-users mailing list
> Asu...@li...
> https://lists.sourceforge.net/lists/listinfo/asunit-users
>
|
|
From: Ali M. <ali...@gm...> - 2006-01-14 06:21:11
|
I was just reading http://www.xprogramming.com/testfram.htm and came upon the excerpt, "I don't like user interface-based tests. In my experience, tests based on user interface scripts are too brittle to be useful. When I was on a project where we used user interface testing, it was common to arrive in the morning to a test report with twenty or thirty failed tests. A quick examination would show that most or all of the failures were actually the program running as expected. Some cosmetic change in the interface had caused the actual output to no longer match the expected output. Our testers spent more time keeping the tests up to date and tracking down false failures and false successes than they did writing new tests." This excerpt does a good job of describing a situation that I fear - a situation where UI changes (which are bound to occur again and again) put me in a position where I'm spending more time maintaining my tests than is necessary.

Integration testing is really an interesting piece of the testing puzzle. While I'm not entirely convinced of its overall value, clearly many, many others are. Alias, it sounds like you feel a need for it, and I'm excited that you'll keep the list up-to-date on your conclusions.

Ali
|
From: Alias <ali...@gm...> - 2006-01-11 12:47:21
|
Hi Marcus, Thanks for your response, I'm glad someone else is listening! I've been doing lots of reading on this, and I agree that Integration testing is the correct term for what I'm talking about. This is definitely linked closely with unit testing, and I'm very keen to approach the problem of reliably testing UI, which is generally accepted as being the achilles heel of unit testing, which is a real shame, because the benefits are quite apparent to me. I'll keep the list informed of any pertinent discoveries or conclusions I reach. Thanks, Alias On 1/11/06, コグラン マーカス <ma...@de...> wrote: > Hi Alias, > > Extremely well said. You have mirrored many of my own, and I'm sure many other's, thoughts. I'm not sure about the hows, but the whys are definitely all there. > > Perhaps the first step could be a change of definition/mind set. How about 'Integration Tests' instead of 'Unit Tests' for what you're talking about. Yeah, I know. Same problem, different name, but there are a lot of presumptions and expectations attached to a name, especially something like 'Unit Test'. After all, a unit test should be a unit test. That's what it was designed for. That's what it's good at. > > Anyway, enough rambling. I would definitely be interested in discussing this point more with anyone interested. > > Hope you're all well, > > Marcus. > > ma...@de... > > -----Original Message----- > From: asu...@li... [mailto:asu...@li...] On Behalf Of asu...@li... > Sent: Saturday, January 07, 2006 1:24 PM > To: asu...@li... > Subject: Asunit-users digest, Vol 1 #110 - 1 msg > > Send Asunit-users mailing list submissions to > asu...@li... > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.sourceforge.net/lists/listinfo/asunit-users > or, via email, send a message with subject or body 'help' to > asu...@li... > > You can reach the person managing the list at > asu...@li... > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Asunit-users digest..." > > > Today's Topics: > > 1. Re: Re: Testing asynchronous functions (Alias) > > --__--__-- > > Message: 1 > Date: Fri, 6 Jan 2006 14:23:31 +0000 > From: Alias <ali...@gm...> > To: asu...@li... > Subject: Re: [Asunit-users] Re: Testing asynchronous functions > Reply-To: asu...@li... > > Hi Luke, > > Sorry for the delay, I've been really busy leading up to the holidays > and my Gmail is so flooded with mailing lists I sometimes miss > messages. > > I'm thinking that EventDrivenTestCase is the way to go. What we need > is a testcase that will basically listen to events - that means that > the teardown & setup stuff would have to be called every time. > > I know what you're saying about testing too much. The thing though, is > that I think there's a fundamental limit to unit testing which makes > it less useful than it could be. This (admittedly angry, ranty and > perhaps a little over the top) article begins to touch on an important > point: > http://www.pyrasun.com/mike/mt/archives/2004/07/11/15.41.04/index.html > > Essentially, unit tests are great, but where they fall down is their > inability to deal with interactions between different parts of the > system. Ideally, I'd like to write my tests, and then run my > application as intended, with the unit tests running as well, so that > I can actually test the entire system as it runs. Each test case can > be run once at the start to test each class in isolation, but lots of > tests need to be run in some kind of production environment. 
I guess > there is a line of distinction to be drawn between exception handling, > acceptance tests and unit testing, but for me the unit testing would > be much more useful if I could write it in conjunction with the UI > code and then compile, click stuff, and if unit tests fail then give > me exact info about which tests have failed and why. Sometimes it just > doesn't make sense for a class to be tested completely in isolation - > some problems only become apparent when they are used in conjunction, > even though all the tests are passed. > > Unit testing for individual classes has been very useful to me, and > I'd like to extend that level of coverage to the UI, as far as > possible - I know it's pretty much impossible to test human > interaction stuff completely accurately, but it must surely be > possible to test stuff in a step by step manner. > > However - I'm beginning to think a different methodology is needed for > the kind of testing I'm thinking of. Here are my problems with unit > testing: > > * Unit testing doesn't really make sense for UI development - I > won't go into why this is, because it's pretty self apparent > * Unit testing only tests such small pieces of functionality that > it only really useful in the context of single, complex classes which > have simple, defined inputs & output and minimal interaction with > other classes > * Unit testing can't easily be used to test asynchronous functions > * It is difficult, if not impossible, to unit test every piece of > functionality in your classes while they are interacting in a > production environment > * It is entirely possible (and perhaps likely) that an application > could be totally broken and badly engineered, despite passing a huge > number of unit tests > > This is not to say that unit testing is useless or a waste of time. > However, in it's current form, its use in UI development is severely > limited.Let's look at some of the goals would expect from a perfect > testing system: > > * Test code must be completely seperate from production code, so > that when I am confident that my application works, I can remove the > test harness & be reasonably sure that the application will still work > reliably - unit testing accomplishes this very well. > * I need to be able to test an application as it is running - that > is, in it's production form, in a live environment, doing actual, > specific tasks - not just passing tests that I myself have designed > * I would like to produce detailed feedback and logs from the > tests themselves, rather than having a seperate logging process > > Although it is possible to accomplish many of these things in ASUnit, > it's difficult to do so in the framework as it stands. Hence, I think > that it's probably the case that these problems are more to do with > the very concept of unit testing itself, rather than any specific > implementation. > > What I'm thinking of, as a solution, is this - what I need is the > ability to call some kind of seperate test functionality from inside > my production code. 
For example: > > //I've just recieved a response from an AMF gateway - > var myAmfResult:Object =3D response; > trace(RuntimeTest.runTest("testAmfResponseIsValid",this,myAmfResult)); > > > My ContinousTest class would then run the appropriate test on the live > instance (passed as a parameter to the test) and from then on, you're > in similar territory to unit tests, except you don't have the > teardown/setup functionality - this is because you wouldn't have this > in production either, and you can't rely on always having a fresh > instance of something to test on. Ideally, I'd like to combine my > runtime tests with the unit tests, (maybe in the same class file?) so > that the entire testing system can be removed without leaving any test > code in the application. > > This saves the work that would be otherwise consumed in creating > complex testcases & mocks, essentially working around the limitations > of the unit test framework. > > Note the use of the trace() function - this is deliberate, becasue we > can use the "remove trace actions" compiler directive to detach the > runtime test. Each call to RuntimeTest could return the test results, > or simply an empty string and pass the output to a logger if we didn't > want it to clutter up the output panel. > > Does this make sense? What do you guys think? > > Cheers, > Alias > > > > > > > > > > On 12/8/05, Luke Bayes <lb...@gm...> wrote: > > Hey Rob, > > > > Sorry about the delayed response on my part. I've been talking with Ali > > about this issue and giving it some thought myself. > > > > What I'm hearing is that you need a single TestCase that will execute som= > e > > action, pause for an arbitrary amount of time, and then perform assertion= > s > > around each test method, rather than once for a test case. > > > > My first guess is that this should be possible by overriding the runBare > > method of TestCase. If you're working in AS 3, this might have a larger > > impact because of how we implemented test failures. In AS 2, failures wer= > e > > simply transmitted to the Runner from anyone that wanted to pass them, in > > the AS 3 build, failures are transmited by "throwing" a TestFailure objec= > t > > up the call stack (This is closer to the way JUnit works). This may cause > > some problems if the called methods are no longer in the same call stack > > that the runner initiated... It might still work fine - I'm just not sure= > . > > At a minimum, there should be a method in TestCase or Assert that allows = > us > > to transmit a failure directly - the overriden runBare could then pass > > failures up to that method... > > > > At the end of the day, it still feels like you're trying to test too much= > at > > once. If you're trying to test features of a preloader, there should be a > > way to do that using the AsynchronousTestCase as provided. But I would > > encourage you to decouple the testing of the preloader functionality from > > the testing of the features being loaded. Admittedly, this can be a compl= > ex > > problem and I believe even java developers still struggle with it. > > > > If you're interested in getting into the sources, please let me know as t= > he > > CVS repository on sourceforge is no longer up to date. It was getting rea= > lly > > messy in there so we moved over to Subversion. > > > > Also, if you're wanting to add these features, let's come up with a > > different name since AsynchronousTestCase is already being used for the > > feature that allows pausing prior to the test case execution. 
Maybe > > PausableTestCase? EventDrivenTestCase? IntervalDrivenTestCase? Or perhaps= > , > > we should just refactor these features into the existing > > AsynchronousTestCase? > > > > Another possible alternative is that you could simply use what is already > > there and implement a unique test case class for each asynchronous test t= > hat > > you want to verify... > > > > Let me know what you think - > > > > > > > > Luke Bayes > > www.asunit.org > > > > > > > > > > > > > > > > > > > > > > > > --__--__-- > > _______________________________________________ > Asunit-users mailing list > Asu...@li... > https://lists.sourceforge.net/lists/listinfo/asunit-users > > > End of Asunit-users Digest > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click > _______________________________________________ > Asunit-users mailing list > Asu...@li... > https://lists.sourceforge.net/lists/listinfo/asunit-users > |
|
From: <ma...@de...> - 2006-01-11 05:31:47
|
Hi Alias, Extremely well said. You have mirrored many of my own, and I'm sure many other's, thoughts. I'm not sure about the hows, but the whys are definitely all there. Perhaps the first step could be a change of definition/mind set. How about 'Integration Tests' instead of 'Unit Tests' for what you're talking about. Yeah, I know. Same problem, different name, but there are a lot of presumptions and expectations attached to a name, especially something like 'Unit Test'. After all, a unit test should be a unit test. That's what it was designed for. That's what it's good at. Anyway, enough rambling. I would definitely be interested in discussing this point more with anyone interested. Hope you're all well, Marcus. ma...@de... -----Original Message----- From: asu...@li... [mailto:asu...@li...] On Behalf Of asu...@li... Sent: Saturday, January 07, 2006 1:24 PM To: asu...@li... Subject: Asunit-users digest, Vol 1 #110 - 1 msg Send Asunit-users mailing list submissions to asu...@li... To subscribe or unsubscribe via the World Wide Web, visit https://lists.sourceforge.net/lists/listinfo/asunit-users or, via email, send a message with subject or body 'help' to asu...@li... You can reach the person managing the list at asu...@li... When replying, please edit your Subject line so it is more specific than "Re: Contents of Asunit-users digest..." Today's Topics: 1. Re: Re: Testing asynchronous functions (Alias) --__--__-- Message: 1 Date: Fri, 6 Jan 2006 14:23:31 +0000 From: Alias <ali...@gm...> To: asu...@li... Subject: Re: [Asunit-users] Re: Testing asynchronous functions Reply-To: asu...@li... Hi Luke, Sorry for the delay, I've been really busy leading up to the holidays and my Gmail is so flooded with mailing lists I sometimes miss messages. I'm thinking that EventDrivenTestCase is the way to go. What we need is a testcase that will basically listen to events - that means that the teardown & setup stuff would have to be called every time. I know what you're saying about testing too much. The thing though, is that I think there's a fundamental limit to unit testing which makes it less useful than it could be. This (admittedly angry, ranty and perhaps a little over the top) article begins to touch on an important point: http://www.pyrasun.com/mike/mt/archives/2004/07/11/15.41.04/index.html Essentially, unit tests are great, but where they fall down is their inability to deal with interactions between different parts of the system. Ideally, I'd like to write my tests, and then run my application as intended, with the unit tests running as well, so that I can actually test the entire system as it runs. Each test case can be run once at the start to test each class in isolation, but lots of tests need to be run in some kind of production environment. I guess there is a line of distinction to be drawn between exception handling, acceptance tests and unit testing, but for me the unit testing would be much more useful if I could write it in conjunction with the UI code and then compile, click stuff, and if unit tests fail then give me exact info about which tests have failed and why. Sometimes it just doesn't make sense for a class to be tested completely in isolation - some problems only become apparent when they are used in conjunction, even though all the tests are passed. 
Unit testing for individual classes has been very useful to me, and I'd like to extend that level of coverage to the UI, as far as possible - I know it's pretty much impossible to test human interaction stuff completely accurately, but it must surely be possible to test stuff in a step by step manner. However - I'm beginning to think a different methodology is needed for the kind of testing I'm thinking of. Here are my problems with unit testing: * Unit testing doesn't really make sense for UI development - I won't go into why this is, because it's pretty self apparent * Unit testing only tests such small pieces of functionality that it only really useful in the context of single, complex classes which have simple, defined inputs & output and minimal interaction with other classes * Unit testing can't easily be used to test asynchronous functions * It is difficult, if not impossible, to unit test every piece of functionality in your classes while they are interacting in a production environment * It is entirely possible (and perhaps likely) that an application could be totally broken and badly engineered, despite passing a huge number of unit tests This is not to say that unit testing is useless or a waste of time. However, in it's current form, its use in UI development is severely limited.Let's look at some of the goals would expect from a perfect testing system: * Test code must be completely seperate from production code, so that when I am confident that my application works, I can remove the test harness & be reasonably sure that the application will still work reliably - unit testing accomplishes this very well. * I need to be able to test an application as it is running - that is, in it's production form, in a live environment, doing actual, specific tasks - not just passing tests that I myself have designed * I would like to produce detailed feedback and logs from the tests themselves, rather than having a seperate logging process Although it is possible to accomplish many of these things in ASUnit, it's difficult to do so in the framework as it stands. Hence, I think that it's probably the case that these problems are more to do with the very concept of unit testing itself, rather than any specific implementation. What I'm thinking of, as a solution, is this - what I need is the ability to call some kind of seperate test functionality from inside my production code. For example: //I've just recieved a response from an AMF gateway - var myAmfResult:Object =3D response; trace(RuntimeTest.runTest("testAmfResponseIsValid",this,myAmfResult)); My ContinousTest class would then run the appropriate test on the live instance (passed as a parameter to the test) and from then on, you're in similar territory to unit tests, except you don't have the teardown/setup functionality - this is because you wouldn't have this in production either, and you can't rely on always having a fresh instance of something to test on. Ideally, I'd like to combine my runtime tests with the unit tests, (maybe in the same class file?) so that the entire testing system can be removed without leaving any test code in the application. This saves the work that would be otherwise consumed in creating complex testcases & mocks, essentially working around the limitations of the unit test framework. Note the use of the trace() function - this is deliberate, becasue we can use the "remove trace actions" compiler directive to detach the runtime test. 
Each call to RuntimeTest could return the test results, or simply an empty string and pass the output to a logger if we didn't want it to clutter up the output panel. Does this make sense? What do you guys think? Cheers, Alias On 12/8/05, Luke Bayes <lb...@gm...> wrote: > Hey Rob, > > Sorry about the delayed response on my part. I've been talking with Ali > about this issue and giving it some thought myself. > > What I'm hearing is that you need a single TestCase that will execute som= e > action, pause for an arbitrary amount of time, and then perform assertion= s > around each test method, rather than once for a test case. > > My first guess is that this should be possible by overriding the runBare > method of TestCase. If you're working in AS 3, this might have a larger > impact because of how we implemented test failures. In AS 2, failures wer= e > simply transmitted to the Runner from anyone that wanted to pass them, in > the AS 3 build, failures are transmited by "throwing" a TestFailure objec= t > up the call stack (This is closer to the way JUnit works). This may cause > some problems if the called methods are no longer in the same call stack > that the runner initiated... It might still work fine - I'm just not sure= . > At a minimum, there should be a method in TestCase or Assert that allows = us > to transmit a failure directly - the overriden runBare could then pass > failures up to that method... > > At the end of the day, it still feels like you're trying to test too much= at > once. If you're trying to test features of a preloader, there should be a > way to do that using the AsynchronousTestCase as provided. But I would > encourage you to decouple the testing of the preloader functionality from > the testing of the features being loaded. Admittedly, this can be a compl= ex > problem and I believe even java developers still struggle with it. > > If you're interested in getting into the sources, please let me know as t= he > CVS repository on sourceforge is no longer up to date. It was getting rea= lly > messy in there so we moved over to Subversion. > > Also, if you're wanting to add these features, let's come up with a > different name since AsynchronousTestCase is already being used for the > feature that allows pausing prior to the test case execution. Maybe > PausableTestCase? EventDrivenTestCase? IntervalDrivenTestCase? Or perhaps= , > we should just refactor these features into the existing > AsynchronousTestCase? > > Another possible alternative is that you could simply use what is already > there and implement a unique test case class for each asynchronous test t= hat > you want to verify... > > Let me know what you think - > > > > Luke Bayes > www.asunit.org > > > > > > > > > > --__--__-- _______________________________________________ Asunit-users mailing list Asu...@li... https://lists.sourceforge.net/lists/listinfo/asunit-users End of Asunit-users Digest |
|
From: Alias <ali...@gm...> - 2006-01-06 14:23:58
|
Hi Luke,

Sorry for the delay - I've been really busy leading up to the holidays, and my Gmail is so flooded with mailing lists that I sometimes miss messages.

I'm thinking that EventDrivenTestCase is the way to go. What we need is a test case that will basically listen to events - that means that the teardown & setup stuff would have to be called every time.

I know what you're saying about testing too much. The thing, though, is that I think there's a fundamental limit to unit testing which makes it less useful than it could be. This (admittedly angry, ranty and perhaps a little over the top) article begins to touch on an important point: http://www.pyrasun.com/mike/mt/archives/2004/07/11/15.41.04/index.html

Essentially, unit tests are great, but where they fall down is their inability to deal with interactions between different parts of the system. Ideally, I'd like to write my tests and then run my application as intended, with the unit tests running as well, so that I can actually test the entire system as it runs. Each test case can be run once at the start to test each class in isolation, but lots of tests need to be run in some kind of production environment. I guess there is a line of distinction to be drawn between exception handling, acceptance tests and unit testing, but for me unit testing would be much more useful if I could write it in conjunction with the UI code, then compile, click stuff, and, if unit tests fail, get exact information about which tests have failed and why. Sometimes it just doesn't make sense for a class to be tested completely in isolation - some problems only become apparent when classes are used in conjunction, even though all the tests pass.

Unit testing of individual classes has been very useful to me, and I'd like to extend that level of coverage to the UI as far as possible - I know it's pretty much impossible to test human interaction completely accurately, but it must surely be possible to test things in a step-by-step manner. However, I'm beginning to think a different methodology is needed for the kind of testing I have in mind. Here are my problems with unit testing:

* Unit testing doesn't really make sense for UI development - I won't go into why this is, because it's pretty self-apparent
* Unit testing tests such small pieces of functionality that it is only really useful in the context of single, complex classes which have simple, defined inputs & outputs and minimal interaction with other classes
* Unit testing can't easily be used to test asynchronous functions
* It is difficult, if not impossible, to unit test every piece of functionality in your classes while they are interacting in a production environment
* It is entirely possible (and perhaps likely) that an application could be totally broken and badly engineered despite passing a huge number of unit tests

This is not to say that unit testing is useless or a waste of time. However, in its current form, its use in UI development is severely limited. Let's look at some of the goals we would expect from a perfect testing system:

* Test code must be completely separate from production code, so that when I am confident that my application works, I can remove the test harness and be reasonably sure that the application will still work reliably - unit testing accomplishes this very well.
* I need to be able to test an application as it is running - that is, in its production form, in a live environment, doing actual, specific tasks - not just passing tests that I myself have designed.
* I would like to produce detailed feedback and logs from the tests themselves, rather than having a separate logging process.

Although it is possible to accomplish many of these things in AsUnit, it's difficult to do so in the framework as it stands. Hence, I think these problems have more to do with the very concept of unit testing itself than with any specific implementation.

What I'm thinking of, as a solution, is this: what I need is the ability to call some kind of separate test functionality from inside my production code. For example:

// I've just received a response from an AMF gateway -
var myAmfResult:Object = response;
trace(RuntimeTest.runTest("testAmfResponseIsValid", this, myAmfResult));

My RuntimeTest class would then run the appropriate test on the live instance (passed as a parameter to the test), and from then on you're in similar territory to unit tests, except you don't have the teardown/setup functionality - this is because you wouldn't have it in production either, and you can't rely on always having a fresh instance of something to test on. Ideally, I'd like to combine my runtime tests with the unit tests (maybe in the same class file?) so that the entire testing system can be removed without leaving any test code in the application. This saves the work that would otherwise be consumed in creating complex test cases & mocks, essentially working around the limitations of the unit test framework. Note the use of the trace() function - this is deliberate, because we can use the "remove trace actions" compiler directive to detach the runtime test. Each call to RuntimeTest could return the test results, or simply an empty string and pass the output to a logger if we didn't want it to clutter up the output panel.

Does this make sense? What do you guys think?

Cheers, Alias

On 12/8/05, Luke Bayes <lb...@gm...> wrote:
> Hey Rob,
>
> Sorry about the delayed response on my part. I've been talking with Ali about this issue and giving it some thought myself.
>
> What I'm hearing is that you need a single TestCase that will execute some action, pause for an arbitrary amount of time, and then perform assertions around each test method, rather than once for a test case.
>
> My first guess is that this should be possible by overriding the runBare method of TestCase. If you're working in AS 3, this might have a larger impact because of how we implemented test failures. In AS 2, failures were simply transmitted to the Runner from anyone that wanted to pass them; in the AS 3 build, failures are transmitted by "throwing" a TestFailure object up the call stack (this is closer to the way JUnit works). This may cause some problems if the called methods are no longer in the same call stack that the runner initiated... It might still work fine - I'm just not sure. At a minimum, there should be a method in TestCase or Assert that allows us to transmit a failure directly - the overridden runBare could then pass failures up to that method...
>
> At the end of the day, it still feels like you're trying to test too much at once. If you're trying to test features of a preloader, there should be a way to do that using the AsynchronousTestCase as provided. But I would encourage you to decouple the testing of the preloader functionality from the testing of the features being loaded. Admittedly, this can be a complex problem, and I believe even Java developers still struggle with it.
>
> If you're interested in getting into the sources, please let me know, as the CVS repository on SourceForge is no longer up to date. It was getting really messy in there, so we moved over to Subversion.
>
> Also, if you're wanting to add these features, let's come up with a different name, since AsynchronousTestCase is already being used for the feature that allows pausing prior to the test case execution. Maybe PausableTestCase? EventDrivenTestCase? IntervalDrivenTestCase? Or perhaps we should just refactor these features into the existing AsynchronousTestCase?
>
> Another possible alternative is that you could simply use what is already there and implement a unique test case class for each asynchronous test that you want to verify...
>
> Let me know what you think -
>
> Luke Bayes
> www.asunit.org
|
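[Editor's note] For illustration, here is a minimal ActionScript 2 sketch of what the RuntimeTest class proposed in the message above might look like. Only the class name and the runTest(name, scope, data) call are taken from the message; the registration mechanism, the result strings, and the assumption that a failing test signals failure by throwing an Error are illustrative guesses, not AsUnit API.

---------------------------------------------------------------------------
// Hypothetical sketch - not part of AsUnit.
class RuntimeTest {

	// Registry of runtime test objects, keyed by test method name (assumed design).
	private static var tests:Object = new Object();

	public static function addTest(methodName:String, testObject:Object):Void {
		tests[methodName] = testObject;
	}

	// Runs one named test against a live instance and returns a short
	// result string that can be passed straight to trace().
	public static function runTest(methodName:String, scope:Object, data:Object):String {
		var testObject:Object = tests[methodName];
		if (testObject == null || typeof testObject[methodName] != "function") {
			return "RuntimeTest: no test registered for '" + methodName + "'";
		}
		try {
			// Hand the live instance and its data to the test method;
			// there is deliberately no setUp/tearDown and no fresh fixture.
			testObject[methodName](scope, data);
			return "RuntimeTest: " + methodName + " passed";
		} catch (e:Error) {
			return "RuntimeTest: " + methodName + " FAILED - " + e.message;
		}
	}
}
---------------------------------------------------------------------------

Because runTest returns a string, the trace(RuntimeTest.runTest(...)) call from the message works unchanged, and the result could instead be handed to a logger if the output panel should stay clean.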
|
From: Luke B. <lb...@lu...> - 2005-12-14 19:04:55
|
Hey Kevin,

Yes! We do have a build of AsUnit that supports ActionScript 2.0. I don't think it uses the Output Panel, but it does integrate as a custom extension panel into the Flash IDE (MX 2004 and version 8).

Simply go to this URL: http://sourceforge.net/project/showfiles.php?group_id=108947 and choose AsUnit 2.x alpha. Even though this build says it's an alpha, it should work fine for your needs. This build should have an .mxp installer that will require the Macromedia Extension Manager in order to install properly.

Please let us know on the mailing list if you have any more questions.

Thanks,

Luke Bayes
www.asunit.org

-----Original Message-----
To: luk...@us...
Message body follows:

Hi Luke, I am learning about the AsUnit framework. Do you guys have any early build that supports AS 2.0 without installing the extension? I encountered this sample code: http://www.debreuil.com/FrameworkDocs/UnitTestingOverview.htm I'd like to know if it's possible to output the errors in the Output Panel in Flash 8. Thanks a lot.
|
|
From: Luke B. <lb...@gm...> - 2005-12-08 19:08:50
|
Hey Rob,

Sorry about the delayed response on my part. I've been talking with Ali about this issue and giving it some thought myself.

What I'm hearing is that you need a single TestCase that will execute some action, pause for an arbitrary amount of time, and then perform assertions around each test method, rather than once for a test case.

My first guess is that this should be possible by overriding the runBare method of TestCase. If you're working in AS 3, this might have a larger impact because of how we implemented test failures. In AS 2, failures were simply transmitted to the Runner from anyone that wanted to pass them; in the AS 3 build, failures are transmitted by "throwing" a TestFailure object up the call stack (this is closer to the way JUnit works). This may cause some problems if the called methods are no longer in the same call stack that the runner initiated... It might still work fine - I'm just not sure. At a minimum, there should be a method in TestCase or Assert that allows us to transmit a failure directly - the overridden runBare could then pass failures up to that method...

At the end of the day, it still feels like you're trying to test too much at once. If you're trying to test features of a preloader, there should be a way to do that using the AsynchronousTestCase as provided. But I would encourage you to decouple the testing of the preloader functionality from the testing of the features being loaded. Admittedly, this can be a complex problem, and I believe even Java developers still struggle with it.

If you're interested in getting into the sources, please let me know, as the CVS repository on SourceForge is no longer up to date. It was getting really messy in there, so we moved over to Subversion.

Also, if you're wanting to add these features, let's come up with a different name, since AsynchronousTestCase is already being used for the feature that allows pausing prior to the test case execution. Maybe PausableTestCase? EventDrivenTestCase? IntervalDrivenTestCase? Or perhaps we should just refactor these features into the existing AsynchronousTestCase?

Another possible alternative is that you could simply use what is already there and implement a unique test case class for each asynchronous test that you want to verify...

Let me know what you think -

Luke Bayes
www.asunit.org
|
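[Editor's note] To make the runBare suggestion above concrete, here is a rough ActionScript 2-style sketch. PausableTestCase is just one of the names floated in the message; runCurrentTest and recordFailure are hypothetical stand-ins for "invoke the current test method" and the proposed "transmit a failure directly" hook, and the import path is assumed - none of this is existing AsUnit API.

---------------------------------------------------------------------------
import asunit.framework.TestCase; // assumed import path

// Sketch only - not part of AsUnit.
class PausableTestCase extends TestCase {

	// Subclasses override this with the test body that should run
	// inside the guarded block (hypothetical hook).
	public function runCurrentTest():Void {
	}

	// Stand-in for the proposed method that transmits a failure directly.
	private function recordFailure(failure):Void {
		trace("Test failure: " + failure);
	}

	// Overridden runBare: a failure thrown from the test body is caught
	// here and forwarded explicitly instead of propagating up the stack.
	public function runBare():Void {
		setUp();
		try {
			runCurrentTest();
		} catch (failure) {
			// in the AS3 build this would be the thrown TestFailure object
			recordFailure(failure);
		}
		tearDown();
	}
}
---------------------------------------------------------------------------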
|
From: <za...@ar...> - 2005-12-08 09:30:41
|
Hi,

I use the following interval-based workaround for asynchronous unit testing:

---------------------------------------------------------------------------
import asunit.framework.TestCase; // assumed import path for the AsUnit TestCase

/**
 * Asynchronous unit test example.
 */
class my.package.ExampleTestFoo extends TestCase {

	private var className:String = "my.package.ExampleTestFoo";
	private var nIntervalID1:Number;
	private var nCounter1:Number = 0;
	private var example1:Example;

	public function setUp():Void {
		trace("### START ExampleTestFoo ###");
	}

	public function tearDown():Void {
		trace("### END ExampleTestFoo ###");
	}

	public function callTestFoo(theTest:ExampleTestFoo):Void {
		if (theTest.nCounter1 == 4) {
			// remove interval
			clearInterval(theTest.nIntervalID1);
		}

		// call test method
		theTest.testFoo();
	}

	public function testFoo():Void {
		// increase the counter
		this.nCounter1++;

		// first call of this method just initializes and prepares for testing
		if (this.nCounter1 == 1) {

			// set up interval, time between calls = 2000 ms
			this.nIntervalID1 = setInterval(callTestFoo, 2000, this);

			// call the asynchronous method
			example1.loadAndPlay("myswf.swf", "part2");
		}

		// second call of this method tests the basic setup and calls some more asynchronous methods
		else if (this.nCounter1 == 2) {

			assertTrue("assert the basic setup is okay", example1.isRunning());

			// call some more asynchronous methods
			example1.loadData("myData.xml");
		}

		// third call of this method executes the abc tests
		else if (this.nCounter1 == 3) {

			// GENERAL TESTS

			assertTrue("assert abc is okay", example1.getStatus() == 1);
		}

		// fourth call of this method executes the xyz tests
		else if (this.nCounter1 == 4) {
			assertTrue("assert xyz is okay", example1.getStatus() == 57);
		}
	}
}
---------------------------------------------------------------------------

Yes, it's more work to set up the tests :-(

Also, because an interval is used, it's not very precise (i.e. you cannot rely on the test method being called at exactly the time specified in the interval), but for me this has worked so far.

Best regards,
Jan
|
|
From: Alias <ali...@gm...> - 2005-12-05 15:39:22
|
Hi Luke,

Sorry it's taken so long to reply - this has taken some thought on my part. The XML example provided is all well and good, as are your points on when and what should be tested in isolation. However, I believe there is still at least one common case where more asynchronous functionality is needed.

Imagine I have a system which loads arbitrary swfs and parses them to create transitions, to extract data, or to perform some other task. Even though I am loading the swf directly from the filesystem, and even though I could perhaps get it loaded so quickly that it wouldn't need to wait at all, the same code could still fail in a live environment. This is pretty common with preloader code. Hence, I need to be able to test that events & callbacks fire in the correct sequence, and that these events have access to credible mock objects or test data.

What I really need in AsUnit is a thread-like test case which will simply wait indefinitely (or, better, time out after 5 seconds or so) until its callee returns a response or calls a method on it, before calling the teardown method. Maybe I should start looking at writing an AsynchronousTestCase class?

Cheers,
Alias Cummins

On 11/26/05, Luke Bayes <lb...@gm...> wrote:
> Hey Rob,
>
> I really appreciate this question and feel that a lot of people would benefit from the answer, so I posted a response to it on my blog. Please check out: http://lukebayes.blogspot.com and let me know what you think.
>
> Thanks,
>
> Luke Bayes
> www.asunit.org
|
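[Editor's note] A rough ActionScript 2 sketch of the kind of waiting test case described above: the test registers a timeout and fails unless the callee reports back first. TimeoutTestCase, startAsync, finishAsync, the five-second default and the import path are all illustrative assumptions, not AsUnit API.

---------------------------------------------------------------------------
import asunit.framework.TestCase; // assumed import path

// Hypothetical sketch - not part of AsUnit.
class TimeoutTestCase extends TestCase {

	private var nTimeoutID:Number;

	// Begin waiting; the test fails if finishAsync() is not called
	// within timeoutMs milliseconds (default 5000).
	public function startAsync(timeoutMs:Number):Void {
		if (isNaN(timeoutMs)) {
			timeoutMs = 5000;
		}
		nTimeoutID = setInterval(this, "onTimeout", timeoutMs);
	}

	// Called by the class under test (or its event handler / callback)
	// once the asynchronous work has completed; assertions can then run.
	public function finishAsync():Void {
		clearInterval(nTimeoutID);
	}

	private function onTimeout():Void {
		clearInterval(nTimeoutID);
		// a deliberately failing assertion marks the test as failed
		assertTrue("asynchronous callee did not respond within the timeout", false);
	}
}
---------------------------------------------------------------------------

As with the interval-based workaround posted above, the failure here is raised from an interval callback rather than from the runner's own call stack, so how cleanly it is reported would depend on the framework build being used.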
|
From: Luke B. <lb...@gm...> - 2005-12-02 00:33:57
|
That sounds great, Don! Your help will definitely be appreciated. We'll look forward to hearing from you - and make sure to have fun at the party... ;-)

Luke Bayes
www.asunit.org
|