What I would want to do

  • Mats Henricson

    Mats Henricson - 2001-02-28

    So, what is it we want to achieve? What I'm doing right now is three things:

    1. I want a way of specifying that a particular test should run n times
    2. I want a way of specifying that a particular test should run in m parallel threads
    3. I want a way of specifying that the performance of a particular test should be logged.

    There must be a gazillion other uses, but that is what I do right now.
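
    Just to make 1-3 concrete, something like the following could probably be done already with the stock junit.extensions classes (TimedTest is a made-up decorator, and FooBarTest stands for any ordinary TestCase subclass); the XML would then just be a front end for building this kind of suite:

    import junit.extensions.ActiveTestSuite;
    import junit.extensions.RepeatedTest;
    import junit.extensions.TestDecorator;
    import junit.framework.Test;
    import junit.framework.TestResult;
    import junit.framework.TestSuite;

    // Made-up decorator that logs wall-clock time for whatever it wraps.
    class TimedTest extends TestDecorator {
        public TimedTest(Test test) { super(test); }

        public void run(TestResult result) {
            long start = System.currentTimeMillis();
            basicRun(result);
            System.out.println(getTest() + " took "
                + (System.currentTimeMillis() - start) + " ms");
        }
    }

    public class FooBarSuite {
        public static Test suite() {
            TestSuite all = new TestSuite();

            // 1. Run the FooBarTest class 10 times.
            all.addTest(new RepeatedTest(new TestSuite(FooBarTest.class), 10));

            // 2. Run the same tests in 3 parallel threads: ActiveTestSuite
            //    starts each added test in its own thread.
            ActiveTestSuite parallel = new ActiveTestSuite();
            for (int i = 0; i < 3; i++) {
                parallel.addTest(new TestSuite(FooBarTest.class));
            }

            // 3. Log how long the parallel run takes.
            all.addTest(new TimedTest(parallel));
            return all;
        }
    }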

    Then the question is whether an XML specification should be attached to a test or to a test case. I think it should be per test, but that you should also be able to specify it for a test case, in which case it would trickle down and apply to all tests in that test case, unless (?) there is an overriding XML specification for a contained test.
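
    To make the trickle-down idea concrete, the effective settings for one test could be resolved roughly like this (all names made up; null simply means "not specified at this level"):

    // Made-up holder for the attributes at one level of the spec.
    class TestSpec {
        Boolean logPerformance;
        Integer numberOfRuns;

        // The test's own attribute wins, then the test case's, then a default.
        static boolean effectiveLogPerformance(TestSpec testCase, TestSpec test) {
            if (test != null && test.logPerformance != null)
                return test.logPerformance.booleanValue();
            if (testCase != null && testCase.logPerformance != null)
                return testCase.logPerformance.booleanValue();
            return false;                       // built-in default
        }

        static int effectiveNumberOfRuns(TestSpec testCase, TestSpec test) {
            if (test != null && test.numberOfRuns != null)
                return test.numberOfRuns.intValue();
            if (testCase != null && testCase.numberOfRuns != null)
                return testCase.numberOfRuns.intValue();
            return 1;                           // built-in default
        }
    }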

    What do you think?

    I'm pretty strong in Java and pretty weak in XML, so don't expect any XML miracles from me. I don't know if I have any support from my employer yet, so I have to investigate that.

    Mats

     
    • Mats Henricson

      Mats Henricson - 2001-02-28

      Here is a first cut of a silly XML spec for a JUnit test:

      <JUnitTest logPerformance="true" numberOfRuns="10">
          <testName>The FooBar test</testName>
          <testClass>com.gazoo.foo.bar.FooBarTest</testClass>
          <testFunction logPerformance="false">testFoo</testFunction>
          <testFunction numberOfRuns="2">testBar</testFunction>
      </JUnitTest>

      It should be possible to specify that a test case should pick all test*() functions in a specific class, but there should also be a way of specifying that a test case should pick only a few test functions. It should also be possible to specify that all tests should have their performance characteristics measured and saved to a DB (which means that we should also have a JDBC URL/driver/userid/password in the XML file, which in turn means we need a way to encrypt the password - but that is not high priority). There should be a way of specifying that some tests should run n times, etc.
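
      The "all test*() functions vs. only a few" part should be easy enough with reflection. A rough sketch (SpecSuiteBuilder is a made-up name), assuming the <testFunction> names end up in a Vector of method names:

      import java.lang.reflect.Constructor;
      import java.util.Vector;
      import junit.framework.Test;
      import junit.framework.TestCase;
      import junit.framework.TestSuite;

      class SpecSuiteBuilder {
          static Test build(Class testClass, Vector testFunctionNames) throws Exception {
              if (testFunctionNames == null || testFunctionNames.isEmpty()) {
                  // No <testFunction> elements: pick up all test*() methods,
                  // exactly as TestSuite already does by reflection.
                  return new TestSuite(testClass);
              }
              // Only the listed functions: a JUnit TestCase takes the method
              // name in its constructor and runs just that one method.
              TestSuite suite = new TestSuite();
              Constructor ctor = testClass.getConstructor(new Class[] { String.class });
              for (int i = 0; i < testFunctionNames.size(); i++) {
                  String name = (String) testFunctionNames.elementAt(i);
                  suite.addTest((TestCase) ctor.newInstance(new Object[] { name }));
              }
              return suite;
          }
      }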

      Whatever, I'm just throwing some ideas around.

       
      • Bill la Forge

        Bill la Forge - 2001-02-28

        > It should also be possible to specify that all tests should have their performance
        > characteristics measured and saved to a DB (which means that we should also have a JDBC
        > URL/driver/userid/password in the XML file, which in turn means we need a way to encrypt
        > the password - but that is not high priority).

        As for saving output to a DB, I'd like to side-step that for the moment. Sounds like
        phase-2. I do like the idea. But I'm unfamiliar with this area.

        > There should be a way of specifying that some tests should
        > run n times, etc.

        Especially important when dealing with race conditions!

        Now let me add my own wants...

        Top of my list--I really want to specify multiple input data for the same test, where the input objects are defined in an XML file. And I want to be able to specify the expected output objects in a file as well.

        If a test is defined by the files in a directory, then I want to be able to provide a directory
        of directories and have all the tests run.

        I want to be able to specify file name patterns like input*.xml for input files and expected*.xml for output, so I can specify multiple sets of data in the same directory.

        I want to be able to run several different tests on the same collection of input files, where those input files are used to construct the objects being used as input to the tests.
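
        The file-name pattern part of this seems straightforward; here's a rough sketch (DataDirectoryScanner and runOneCase are made-up names, and runOneCase is deliberately left empty) of pairing input*.xml files with their expected*.xml counterparts in one test directory:

        import java.io.File;
        import java.io.FilenameFilter;

        class DataDirectoryScanner {
            static void scan(File testDir) {
                // All files in the directory matching input*.xml.
                File[] inputs = testDir.listFiles(new FilenameFilter() {
                    public boolean accept(File dir, String name) {
                        return name.startsWith("input") && name.endsWith(".xml");
                    }
                });
                if (inputs == null) return;
                for (int i = 0; i < inputs.length; i++) {
                    // "input042.xml" -> "expected042.xml"
                    String expectedName = "expected"
                        + inputs[i].getName().substring("input".length());
                    File expected = new File(testDir, expectedName);
                    if (expected.exists()) {
                        runOneCase(inputs[i], expected);
                    }
                }
            }

            static void runOneCase(File input, File expected) {
                // Build the input objects from 'input', run the test, and
                // compare the result against the objects built from 'expected'.
            }
        }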

         
        • Bill la Forge

          Bill la Forge - 2001-03-01

          Mats,

          I've been thinking about the kind of thing you're talking about and it sounds a lot like the junit extensions for ant: http://jakarta.apache.org/

          Examples

               <junit>
                 <test name="my.test.TestCase" />
               </junit>

          Runs the test defined in my.test.TestCase in the same VM. No output will be generated unless the test fails.

               <junit printsummary="yes" fork="yes" haltonfailure="yes">
                 <formatter type="plain" />
                 <test name="my.test.TestCase" />
               </junit>

          Runs the test defined in my.test.TestCase in a separate VM. At the end of the test a single line summary will be printed. A detailed report of the test
          can be found in TEST-my.test.TestCase.txt. The build process will be stopped if the test fails.

               <junit printsummary="yes" haltonfailure="yes">
                 <classpath>
                   <pathelement location="${build.tests}" />
                   <pathelement path="${java.class.path}" />
                 </classpath>

                 <formatter type="plain" />

                 <test name="my.test.TestCase" haltonfailure="no" outfile="result" >
                   <formatter type="xml" />
                 </test>

                 <batchtest fork="yes">
                   <fileset dir="${src.tests}">
                     <include name="**/*Test*.java" />
                      <exclude name="**/AllTests.java" />
                   </fileset>
                 </batchtest>
               </junit>

          Runs my.test.TestCase in the same VM (ignoring the given CLASSPATH); only a warning is printed if this test fails. In addition to the plain text
          test results, an XML result for this test will be output to result.xml.

          For each matching file in the directory ${src.tests} a test is run in a separate VM. If a test fails, the build process is aborted. Results are collected in
          files named TEST-name.txt.

           
