From: Peter K. <ps...@cs...> - 2002-12-12 05:42:40
Hello,

My name is Peter Keller, and I was directed here by Noel Welsh to talk about Scheme Unit and a testing infrastructure of similar design that I wrote for the Chicken Scheme compiler (before I knew about Scheme Unit). I think that if the ideas in Scheme Unit and what I have are combined, it will be a very good thing.

Here is an API description, with some theory-of-operation notes, for the testing infrastructure: http://www.call-with-current-continuation.org/chicken-manual.pdf Look in the table of contents under "Additional Files : test-infrastructure.scm". It is fairly comprehensive, but the manual is missing a couple of things that are in the CVS repository and not released yet (minor things, all told). If you download build 1082 of Chicken, the files that start with test-infrastructure* are the ones to look at.

I read the Scheme Unit paper just today and discovered that Scheme Unit and my code do about 90% of the same things, but I seem to have implemented many of the "future plan" features you were thinking about, and I have some more elaborate control structures than Scheme Unit. Also, Scheme Unit appears to be PLT specific. My code is pure R5RS (with hygienic macros) and loads with the full feature set into several popular Scheme implementations without modification: there is a separate implementation-specific loader file, then the portable code base proper, and then an implementation-specific code base for certain test structures that only make sense in the local implementation.

One of the few real differences between our code bases is that you chose to handle things like divide-by-zero errors and I had not (technically, you chose option three in the paper about what kinds of things an assertion should return, and I chose option two).
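[Editor's note: a minimal sketch of what an "option two" assertion looks like in pure R5RS. The names here are hypothetical illustrations, not the actual test-infrastructure.scm API: the assertion never signals an error itself, it evaluates its expression and returns a result object recording pass or fail, which a later analysis pass can inspect.]

```scheme
;; A result object is a tagged list: (result <kind> <quoted-expr>).
(define (make-result kind expr)
  (list 'result kind expr))

(define (result-kind r) (cadr r))   ; 'pass or 'fail
(define (result-expr r) (caddr r))  ; the asserted expression, quoted

;; Hygienic R5RS macro: evaluate the expression and return a
;; result object instead of raising an error on failure.
(define-syntax expect-true
  (syntax-rules ()
    ((_ expr)
     (if expr
         (make-result 'pass 'expr)
         (make-result 'fail 'expr)))))

;; (result-kind (expect-true (= (+ 1 1) 2)))  => pass
;; (result-kind (expect-true (= 1 2)))        => fail
```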
I had not handled them because I noticed that different Scheme implementations either did or did not throw any sort of exception when something like a divide by zero happened, and I could not find a pure R5RS way of handling it. That might just be stupidity on my part. If so, I can still stay with option two, but catch the error, handle it intelligently, and return a result object at the assertion level, so the appearance of option two holds.

Another major difference is that I have elaborate (but not confusingly complex) control structures that let you terminate a test case by effectively calling a lexically scoped continuation of the test package that contains it. The scope of the termination is preserved and can be analyzed later. Also, I do not have a tightly coupled prologue/epilogue structure like the one you use for your expectations; instead I implemented an atexit()-like feature where you dynamically build a queue of operations to perform when you leave the test case or test package. This, like the termination functions, is lexically scoped.

The "future plan" features I implemented were an optional warning syntax for test packages, so you can supply warning objects (usually strings) on test packages, test cases, and expectations; a let-like binding form for test packages and test cases; and a "todo"/"side-effect"/"skip" system that lets you keep track of work you still need to do, mark code you only want run for its side effects, and skip code altogether.

Of course, the evaluation of the test suites you generate with my system is decoupled from the analysis and output generation of the results of the tests. Basically, a large in-memory tree of the results is created that you can analyze any way you please.

Can you check out my work and see what you think? An overriding goal of mine is to keep the code R5RS conformant so it works in as many implementations as possible.

Thanks.
-pete
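[Editor's note: a minimal R5RS sketch of the two control-flow ideas described above: early termination through a captured continuation, and an atexit()-style cleanup queue that runs when the test case is left. All names are hypothetical, and the real infrastructure exposes these through macro-based test-package forms; plain procedures are used here for brevity.]

```scheme
;; Run a test case. `proc` receives `terminate`, the continuation
;; of the test case (calling it ends the case immediately with the
;; given value), and `at-exit`, which queues a thunk to run when
;; the case is left -- normally or via termination.
(define (run-test-case proc)
  (let ((cleanups '()))
    (define (at-exit thunk)
      (set! cleanups (cons thunk cleanups)))
    (let ((result
           (call-with-current-continuation
            (lambda (terminate)
              (proc terminate at-exit)))))
      ;; Run queued cleanups LIFO, like atexit().
      (for-each (lambda (t) (t)) cleanups)
      result)))

;; Usage: terminating early still runs the queued cleanups.
(run-test-case
 (lambda (terminate at-exit)
   (at-exit (lambda () (display "cleaning up") (newline)))
   (terminate 'terminated)
   'never-reached))   ; => terminated
```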