XML output?

  • Kent Dahl

    Kent Dahl - 2001-07-23

    The summary page says: "Test output is in XML for automatic testing", but I'll be darned if I can find it.

    • Bastiaan Bakker

      Bastiaan Bakker - 2001-07-24

      Right, the author most likely meant "will be in XML". I've corrected it.



      • Kent Dahl

        Kent Dahl - 2001-07-25

        Are there any clear plans as to what the XML output should or will include? (Any documents?)

        I've got a hankering for some XML-ish log output, and am pondering writing a quick hack based on TextTestResult, spewing out something like:


                <Test name="testFoo">
                      <!-- debug output from the test itself
                           (whatever it spews to std::cout/cerr)
                           Maybe put this in one of those CDATA sections? -->
                      <Failure description="Expected 5, got 4..." />
                </Test>

                <Test name="testBar">
                      <Error description="Toilet blew up." />
                </Test>

                <!-- summary section for readability -->
                <Error description="Toilet blew up."
                       test="testBar" />
                <Failure description="Expected 5, got 4..."
                         test="testFoo" />

                <Statistics runTests="2"
                            errors="1" />

        Someone got something that does this, or anything more clever? Thoughts, ideas...?
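
        Roughly, the hack I'm picturing is nothing fancier than collecting the outcomes, escaping the strings and printing the elements at the end. A standalone sketch of the summary part (the Outcome struct and the function below are placeholders of my own, not CppUnit classes):

            #include <iostream>
            #include <string>
            #include <vector>

            // Placeholder for whatever the real TestResult would hand us.
            struct Outcome
            {
                std::string test;         // e.g. "testFoo"
                std::string description;  // e.g. "Expected 5, got 4..."
                bool        isError;      // error ("blew up") vs. assertion failure
            };

            // Escape the characters that are special inside attribute values.
            std::string xmlEscape (const std::string &s)
            {
                std::string out;
                for (std::string::size_type i = 0; i < s.size (); ++i)
                {
                    switch (s[i])
                    {
                    case '&': out += "&amp;";  break;
                    case '<': out += "&lt;";   break;
                    case '>': out += "&gt;";   break;
                    case '"': out += "&quot;"; break;
                    default:  out += s[i];
                    }
                }
                return out;
            }

            // Print the summary section plus the <Statistics> element.
            void printSummary (const std::vector<Outcome> &results, int runTests)
            {
                int errors = 0;
                for (std::vector<Outcome>::size_type i = 0; i < results.size (); ++i)
                {
                    const Outcome &o = results[i];
                    if (o.isError)
                        ++errors;
                    std::cout << "<" << (o.isError ? "Error" : "Failure")
                              << " description=\"" << xmlEscape (o.description) << "\""
                              << " test=\"" << xmlEscape (o.test) << "\" />\n";
                }
                std::cout << "<Statistics runTests=\"" << runTests << "\""
                          << " errors=\"" << errors << "\" />\n";
            }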

    • Duane Murphy

      Duane Murphy - 2001-07-27

      I would make some simple changes to this syntax.

      <Error description="Toilet blew up." test="testBar"/>

      would become:

      <Error test="testBar">
      Toilet blew up.
      </Error>

      This allows the description to span more than one line.

    • Kent Dahl

      Kent Dahl - 2001-07-27

      I thought attributes were allowed to be multiline?
      (Ugly as it may be...)

      But personally, I would just have to nit-pick and take the next logical step:

      <Error test="testBar">
        <Description>Toilet blew up.</Description>
      </Error>


      An issue that bothers me is what to do with whatever the testcases find appropriate to tell the world. Should one grab the std::cout output and put it in a CDATA section, escape it or what?
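
      For the std::cout side of it, the plainest trick I can think of is to swap out the stream's buffer for the duration of the test and then dump whatever was captured inside a CDATA section, splitting any literal "]]>" so the section stays well-formed. A sketch (plain iostreams, nothing CppUnit-specific, and the <Output> element name in the comment is made up):

          #include <iostream>
          #include <sstream>
          #include <string>

          // Run a test body while std::cout is redirected into a string.
          std::string runAndCapture (void (*testBody) ())
          {
              std::ostringstream captured;
              std::streambuf *old = std::cout.rdbuf (captured.rdbuf ());
              testBody ();                 // the test writes to std::cout as usual
              std::cout.rdbuf (old);       // restore the real buffer
              return captured.str ();
          }

          // Wrap captured text in CDATA; "]]>" is the only sequence that
          // cannot appear inside, so break it up if the output contains it.
          std::string asCData (std::string text)
          {
              std::string::size_type pos = 0;
              while ((pos = text.find ("]]>", pos)) != std::string::npos)
              {
                  text.replace (pos, 3, "]]]]><![CDATA[>");
                  pos += 15;               // skip past the replacement
              }
              return "<![CDATA[" + text + "]]>";
          }

          // Usage, e.g. inside the <Test> element (made-up <Output> element):
          //   std::cout << "  <Output>" << asCData (runAndCapture (&someTest))
          //             << "</Output>\n";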

      Part of the reason I ask is that I envision a time when the Context object mentioned in the CppUtx design will creep in, and then I hope test writers would have some way of easily grouping their own output in XML, essentially extending the debug output format.

      It might be a hangover from the bad practice of writing macro-toggled debug output statements up the wazoo in code, but I find myself adding explicit debug output in my unit tests, which I then read when tests fail. (Esp. when I'm stuck using assertImplementation over assertEquals et al., such as with strings... hint-hint :-)

      Oh, btw, is there any particular reason that test suites and other test cases that iterate over sub-suites don't have some way of telling the test result that they start/stop? Calling TestResult::startTest might be quite wrong, but I would like some supported notification.

      • Gary Granger

        Gary Granger - 2001-07-30

        > I find myself adding explicit debug output in my unittests

        Yes, I wanted to do the same thing when I first started using UnitTest. I made modifications to add a testOutput() method to TestResult, then changed the run() method of Test to accept the TestResult pointer as a parameter. TestCase provides implementations of setUp, tearDown, and runTest which take the TestResult but by default simply call the original methods, which do not take a TestResult. Here is a code excerpt:

            // These are the methods called by default implementation of run
            // (TestResultInterface *).  Override any of these methods if the test
            // case method needs access to the result, such as for generating
            // output.  Otherwise the test case only needs to override the simpler
            // (and the original) methods which do not have the test result
            // parameter.
            virtual inline void runTestResult    (TestResultInterface *)
            { runTest (); }
            virtual inline void setUpResult      (TestResultInterface *)
            { setUp (); }
            virtual inline void tearDownResult   (TestResultInterface *)
            { tearDown (); }

            virtual inline void setUp () {}
            virtual inline void tearDown () {}
            virtual inline void runTest () {}

        This allows me to write tests which stream output to the result, and then my test runner has options to enable or disable output:

            void testDimensions (TestResultInterface *result)
            {
                TestOutput out (this, result);

                assert (Dimension::Time().getName() == "time");
                assert (Dimension::Time().getTitle() == "Time");
                assert (Dimension::Time().getUnits() == Units::Seconds());
                out << "Volume: " << Dimension::Volume().getUnits ().toBaseText ()
                    << endl;
                out << " Speed: " << Dimension::Speed().getUnits ().toBaseText ()
                    << endl;
                Units density = Units::Kilograms() / (Units::Meters() ^ 3);
                assert (Dimension::MassDensity().getUnits () == density);
            }

        I did it this way for a couple reasons.  One, I could control the test output from the test runner without changing the test.  Two, the output could be passed easily via the CORBA interface I implemented and displayed on the remote (Java GUI) test runner.
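
        Stripped down to the idea, TestOutput is little more than an ostream-ish shim that buffers whatever the test streams into it and passes the text on through the result's testOutput() hook (signatures simplified here):

            #include <sstream>
            #include <string>

            // Test and TestResultInterface are the (modified) framework classes.
            class TestOutput
            {
            public:
                TestOutput (Test *test, TestResultInterface *result)
                    : m_test (test), m_result (result) {}

                // Flush whatever is still buffered when the test method returns.
                ~TestOutput ()
                {
                    if (m_result && !m_buffer.str ().empty ())
                        m_result->testOutput (m_test, m_buffer.str ());
                }

                // Stream values into the internal buffer.
                template <class T>
                TestOutput &operator<< (const T &value)
                {
                    m_buffer << value;
                    return *this;
                }

                // Overload so that "out << endl" compiles too.
                TestOutput &operator<< (std::ostream &(*manip) (std::ostream &))
                {
                    m_buffer << manip;
                    return *this;
                }

            private:
                Test                *m_test;
                TestResultInterface *m_result;
                std::ostringstream   m_buffer;
            };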

        I have since learned about Log4Cpp and started using it, so perhaps that would be a better option for logging test output.  I envision UnitTest automatically creating category hierarchies which parallel the test hierarchy and providing a simple output stream object for tests to log to.
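
        Something along these lines, with each category named after the test's place in the suite hierarchy so its output can be switched on and off through the usual Log4Cpp configuration (the category name is invented, and an appender is assumed to be configured elsewhere):

            #include <log4cpp/Category.hh>

            void logFromTest ()
            {
                // One category per test, named after its path in the hierarchy.
                log4cpp::Category &log =
                    log4cpp::Category::getInstance ("AllTests.DimensionTest.testDimensions");
                log.info ("Volume: m^3");   // takes the place of streaming to TestOutput
            }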

        If interested, the source for my changes is at



