From: Stefan S. <se...@sy...> - 2006-03-10 15:52:31
Tony Graham wrote:
> Stefan Seefeld <se...@sy...> writes:
> ...
>
>> I've been meaning to look at the test suite for quite a while now. I'm still
>> not sure I understand everything. How are results validated? Is there some
>> program that checks whether the generated PDF is correct or not?
>
> Validation is by wetware, not software.
>
> You get to look at the output and decide if it is correct.

Oh well, that's what I was afraid you would say.

> After the first time that you run the tests, you can put the resulting .pdf
> and .png files in the testsuite/ref directory, and you will then get an
> indication when any test output changes, but it's still up to you to decide
> what's correct and what isn't.

Ok.

> Back at the XSL 1.0 CR stage, Norm Walsh had to print out and eyeball hundreds
> of samples from multiple vendors' test results. Since different vendors had
> used different fonts when running each other's test suites, there was never
> any guarantee that two outputs of running one test case were going to be in
> any way similar.
>
> I had to set up a new instance of the testing module recently, and even I
> found it clunky and frustrating. It does need to be improved -- and since
> it's just XSLT, Perl, and Make, it could be improved by nearly anybody. It
> especially needs improvement in configuration/reuse since we're now back to
> having two xmlroff backends that can produce output in multiple formats.

For the purpose of measuring xmlroff's own state, would it be possible to
somehow fix at least some of the parameters that could affect the output
without affecting the pass/fail result (such as fonts), and then automate
the validation?

I'm developing QMTest (http://www.codesourcery.com/qmtest), and I'm
considering generating a test suite for XSL-FO from the existing code. The
XSL-FO processor would just be a parameter specified in a config file
(e.g. 'xmlroff'). However, this would be only partly useful if the
validation step always reported 'pass', i.e. it would be good if at least
some of the validation could be automated, too.

The way I imagine this working: run the test suite once, generating output
(PDF or whatever) without validation. Then visually validate all tests,
putting the approved output into the test suite as the reference for
subsequent runs. Once this is done, subsequent runs can validate by
comparing (diffing) actual output against the reference output. Of course,
this only works if the free parameters (fonts, for example) can be
controlled well enough that generating reference output makes sense. (Such
reference output could be generated per renderer backend if the output is
too different.)

What do you think about such a strategy?

Regards,
        Stefan
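
The strategy described above (render once, visually approve the output, store
it as a reference, then diff later runs against it) can be sketched in a few
lines of Python. This is only an illustration under assumptions: the directory
layout (per-backend subdirectories under testsuite/ref/ and testsuite/out/)
and the exact xmlroff command-line options are guesses, not part of the
existing XSLT/Perl/Make testsuite or of QMTest itself.

    # Sketch of reference-based validation for xmlroff test output.
    # Paths, the backend names, and the xmlroff invocation are assumptions.
    import filecmp
    import subprocess
    import sys
    from pathlib import Path

    def run_case(fo_file: Path, out_dir: Path, backend: str) -> Path:
        """Render one test case and return the path of the generated PDF."""
        out_pdf = out_dir / fo_file.with_suffix(".pdf").name
        # Hypothetical invocation; check the options against the real
        # xmlroff command line before using this.
        subprocess.run(
            ["xmlroff", "--backend", backend, "-o", str(out_pdf), str(fo_file)],
            check=True,
        )
        return out_pdf

    def validate(out_pdf: Path, ref_dir: Path) -> str:
        """Compare generated output against the stored reference, if any."""
        ref_pdf = ref_dir / out_pdf.name
        if not ref_pdf.exists():
            # First run: no reference yet, so this case still needs eyeballing.
            return "UNTESTED"
        # Byte-for-byte comparison is deliberately naive; see note below.
        return "PASS" if filecmp.cmp(out_pdf, ref_pdf, shallow=False) else "FAIL"

    if __name__ == "__main__":
        backend = sys.argv[1] if len(sys.argv) > 1 else "gp"
        ref_dir = Path("testsuite/ref") / backend   # per-backend references
        out_dir = Path("testsuite/out") / backend
        out_dir.mkdir(parents=True, exist_ok=True)
        for fo_file in sorted(Path("testsuite").glob("*.fo")):
            result = validate(run_case(fo_file, out_dir, backend), ref_dir)
            print(f"{result:8} {fo_file.name}")

Byte-for-byte comparison of PDFs is fragile because PDF files embed creation
timestamps; the per-page .png output already placed in testsuite/ref would be
a more robust candidate for diffing.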