From: doug s. <hig...@ho...> - 2013-07-15 20:52:52
> > Doug - very interesting that you got this working, especially as direct
> > comparisons (bit by bit) do not work, as you know.
> >
> > Don Brutzman - hope you get a chance to look at what Doug did - great
> > for conformance tests between browsers.

This type of testing won't say if something is right or wrong, only if
something changed. The fixtures need to be regenerated on each platform,
and perhaps the recordings as well, if the opengl-statusbar vs.
gui-statusbar offsets the mouse too much or the window size can't be
matched.

-Doug

more...

> I just wish it was a bit less clumsy, with a simple push-button gui.

I reworked it into 5 Perl scripts:
run_record.pl
run_fixture.pl
run_playback.pl
run_compare.pl
compare.pl

http://dug9.users.sourceforge.net/web3d/testing/recording2.zip
http://dug9.users.sourceforge.net/web3d/testing/testing2.zip

And you put a list of (new / additional) scenes you want to test into a
file, with "scenefilename testname" on each record:

tlist.txt
1.wrl t1
..
52.x3d t104

tlist.txt is only read by run_record.pl. run_fixture.pl, run_playback.pl,
and run_compare.pl scan the /recording folder for the list of .fwplay
files to test.

For 104 tests (1-52.wrl + 1-52.x3d):
run_playback.pl - takes 145 seconds to run
run_compare.pl - takes 35 seconds to run
Total time: 3 minutes. This would be run after each small change in code.

Output looks like:
..
t77 PASS
t78
t79 PASS
t9 PASS
t90 PASS
t91 PASS
t92 PASS
t93 PASS
t94 PASS
t95 PASS
t96 PASS
t97 PASS
t98 PASS
t99 FAIL

If something FAILs, you roll back your last change and try another way.

See also REFACTORING:
http://en.wikipedia.org/wiki/Code_refactoring
http://refactoring.com/catalog/index.html

So what's missing?
a) some way to echo the name of the scene file when a test fails
b) some way to generate / add / remove / update / delete tests
c) chaining of playback and compare
d) compare summary stats
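For items a) and d) above, the compare step could echo the scene file name
on a failure and print summary counts at the end. Since exact bit-by-bit
comparison does not work across platforms, a tolerant compare is one
workaround. A minimal sketch of the idea (the actual scripts are Perl;
this is Python for illustration, and the function names, thresholds, and
the (testname, scenefile, fixture, snapshot) layout are all hypothetical,
not part of the existing scripts):

```python
def frames_match(fixture, snapshot, max_delta=4, max_diff_fraction=0.01):
    """Tolerant compare: buffers match if they are the same length and no
    more than max_diff_fraction of the samples differ by more than
    max_delta.  (Exact bit-by-bit comparison fails across platforms.)"""
    if len(fixture) != len(snapshot):
        return False
    differing = sum(1 for a, b in zip(fixture, snapshot)
                    if abs(a - b) > max_delta)
    return differing <= max_diff_fraction * len(fixture)

def run_compare(tests):
    """tests: list of (testname, scenefile, fixture_pixels, snapshot_pixels).
    Prints PASS/FAIL per test, echoes the scene file on failure (item a),
    and returns (passed, failed) summary counts (item d)."""
    passed = failed = 0
    for testname, scenefile, fixture, snapshot in tests:
        if frames_match(fixture, snapshot):
            print(testname, "PASS")
            passed += 1
        else:
            # echo the scene file name so a FAIL can be traced to its scene
            print(testname, "FAIL", "-", scenefile)
            failed += 1
    print("summary: %d passed, %d failed of %d" %
          (passed, failed, passed + failed))
    return passed, failed
```

Item c), chaining of playback and compare, would then just be a small
wrapper that runs the playback step and this compare step back to back.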