Here is a rough sketch of some of the requirements for JMLTests. (These are lifted from an email discussion and are very rough at this point.)
JMLTests should support:
- Cross-platform development--The behavior must be the same across Windows, Mac, and Linux. JML developers regularly use all three platforms. This will likely require being able to extract and compare against normalized file paths. The tests will also need to be categorized by Java version, JML version, and possibly JML feature level.
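For example, output comparison could route every tool-produced path through a normalizer before matching against expected-results files. A minimal sketch, assuming a simple separator-replacement strategy (the `PathNormalizer` class is illustrative, not existing JML code):

```java
public class PathNormalizer {
    // Rewrite Windows-style separators to '/' so a single expected-output
    // file can be compared byte-for-byte on Windows, Mac, and Linux.
    // (Illustrative helper; a real implementation would also need to
    // normalize absolute-path prefixes that vary per machine.)
    public static String normalize(String output) {
        return output.replace('\\', '/');
    }

    public static void main(String[] args) {
        System.out.println(normalize("src\\jml\\Foo.java")); // prints src/jml/Foo.java
    }
}
```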
- Test-driven development--Can we run individual tests quickly and conveniently?
- Testing a variety of different JML tools--Can we specify tests that are common to all sorts of tools, plus tests that are just for RAC, ESC, etc.? JMLTests should allow (at least) testing static analysis tools and checking run-time code (i.e., compile code using the tool, then execute the generated code and check its output).
- A significant (and common) set of tests should constitute a syntax and type-checking validation suite--that is, a set of tests for the JML checker.
- A mechanism for tracking regressions--Different JML tools will pass different sets of tests. With JML2 the tests were developed in parallel with the code, so all the tests passed all the time. With new tools we need to track which tests are known to pass and which are known to fail so that we can notice when new code breaks previously working behavior.
- Checking of expected results at a variety of granularities--For some tools it may be sufficient to just record whether the test is a positive one (shouldn't generate an error) or a negative one (should generate an error). Other tools may need to check whether some phrase appears in the output; alternatively we can categorize the kind of error message and relevant line and character positions expected from a (failing) test.
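At the finest granularity, an expected-results entry might record the error kind together with the line and column where it should occur, and match that against the tool's diagnostic output. A sketch under two assumptions (both hypothetical, not an existing JML format): expected results are `ExpectedError` records, and diagnostics look like `file:line:col: kind ...`:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExpectedError {
    // Hypothetical expected-results entry: error kind plus source position.
    final String kind;
    final int line;
    final int column;

    ExpectedError(String kind, int line, int column) {
        this.kind = kind;
        this.line = line;
        this.column = column;
    }

    // Assumed diagnostic shape: "File.java:LINE:COL: kind ...".
    static final Pattern DIAG = Pattern.compile("^[^:]+:(\\d+):(\\d+):\\s*(\\S+)");

    boolean matches(String diagnostic) {
        Matcher m = DIAG.matcher(diagnostic);
        return m.find()
            && Integer.parseInt(m.group(1)) == line
            && Integer.parseInt(m.group(2)) == column
            && m.group(3).equals(kind);
    }

    public static void main(String[] args) {
        ExpectedError e = new ExpectedError("syntax.error", 3, 12);
        System.out.println(e.matches("Foo.java:3:12: syntax.error unexpected token")); // prints true
    }
}
```

A coarser positive/negative check then reduces to asking whether the diagnostic list is empty, so both granularities can share one mechanism.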
- Checking a variety of run configurations--Examples of run configurations include just compiling the input, or compiling the input and then running the output.
- Efficient execution of large test suites--In JML2, starting a new Virtual Machine instance for each test case was extremely expensive. The test framework should have sufficient abstraction that individual JML-based tools can run multiple test cases using a single VM instance if the tools are so capable.
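One way a tool can exploit this is to hold a single in-process compiler and invoke it repeatedly, which the standard `javax.tools` API already supports. A rough sketch (the `compileTwice` helper and the temp-file setup are illustrative only; a real harness would feed it test inputs and capture diagnostics):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.nio.file.Files;
import java.nio.file.Path;

public class InProcessCompile {
    // Obtain the JDK's in-process compiler once and reuse it for every
    // test case, instead of forking a fresh JVM per compilation.
    static int compileTwice() throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        Path dir = Files.createTempDirectory("jmltests");
        Path src = dir.resolve("Hello.java");
        Files.write(src, "class Hello {}".getBytes("UTF-8"));
        // run() returns 0 on success; two calls share one VM instance.
        int first = compiler.run(null, null, null, "-d", dir.toString(), src.toString());
        int second = compiler.run(null, null, null, "-d", dir.toString(), src.toString());
        return first + second; // 0 when both compilations succeed
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compileTwice()); // prints 0
    }
}
```

Note that this requires running on a JDK (not a bare JRE), where `ToolProvider.getSystemJavaCompiler()` is non-null.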
- Easy addition of new tests--Adding basic positive or negative test cases should be as easy as just dropping a new *.java file into the appropriate directory (and perhaps a new expected results file if appropriate).
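Test discovery can then be a simple directory scan. A sketch, assuming a hypothetical convention in which a sibling `.expected` file marks a test as negative (the `TestDiscovery` class and the naming convention are illustrative, not an existing JML2 convention):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TestDiscovery {
    // Scan a directory for *.java test inputs. A test is negative
    // (expected to produce errors) if a sibling ".expected" file
    // exists, positive otherwise. (Illustrative convention only.)
    static List<String> discover(File dir) {
        List<String> tests = new ArrayList<String>();
        File[] files = dir.listFiles();
        if (files == null) return tests;
        for (File f : files) {
            if (f.getName().endsWith(".java")) {
                File expected = new File(dir, f.getName().replace(".java", ".expected"));
                tests.add(f.getName() + (expected.exists() ? " [negative]" : " [positive]"));
            }
        }
        Collections.sort(tests); // deterministic order across platforms
        return tests;
    }

    // Demo: build a tiny temp layout and discover its tests.
    static List<String> demo() throws Exception {
        File dir = java.nio.file.Files.createTempDirectory("jmltests").toFile();
        new File(dir, "Positive.java").createNewFile();
        new File(dir, "Negative.java").createNewFile();
        new File(dir, "Negative.expected").createNewFile();
        return discover(dir);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```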
- A test (or a group of tests at least) should have a brief comment indicating its purpose.
- ... others to be extracted by reviewing existing JML2 makefiles ... (In brief, the Makefile system generates JUnit test suites that use plain text files to define inputs and expected results. A variety of environment variables and makefile dependencies are used to control the tests. My vision is to replace the information that was embedded in the environment variables and makefiles with XML files so that different tools can process that information as needed.)
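To make the vision concrete, the makefile/environment-variable information might move into a per-test (or per-suite) descriptor along these lines. Every element and attribute name here is hypothetical, offered only as a starting point for discussion:

```xml
<!-- Hypothetical test descriptor; all names are illustrative only. -->
<jml-test name="quantifier-syntax" tool="checker" kind="negative">
  <input file="QuantifierSyntax.java"/>
  <java-version min="1.4"/>
  <run-configuration>compile</run-configuration>
  <expected>
    <error line="12" column="8" kind="syntax.error"/>
  </expected>
  <purpose>Checks that a malformed \forall quantifier is rejected.</purpose>
</jml-test>
```

Different tools could then read the same descriptors and ignore the parts that do not apply to them.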
- Given that there are some nice frameworks that already exist - e.g. JUnit, TestNG - we should concentrate on the tests and the format for representing them. That will be more than enough work.
- I suggest starting with JML syntax/name lookup/typechecking tests.