From: doug s. <hig...@ho...> - 2013-06-06 13:15:32
>> identical to pass. So there's 4 steps to testing:
>> A) generate automated test. In our case that's the .fwplay, manually generated.
>> B) generate a test fixture before refactoring code, by playing the automated test
>> C) after refactoring, generate a playback file
>> D) compare the B and C output files. They should be the same if your
>> refactoring was good. If a test fails, roll back your refactoring
>> changes and try again.

> Aren't we better off generating the testing upfront of any changes?

Yes.

A. Once per lifetime, and for all platforms, you generate the list of tests and walk through them, generating the /recording/*.fwplay files, one per test. You .zip them and send them to the developers on other platforms.

B. At least once per platform (with touch-ups after manually tested functional changes), or once before each major refactoring effort: replay the recordings with -F --fixture into the /fixtures files.

C. After each tiny refactoring change, rerun with -P --playback into /playback.

D. After each tiny refactoring change, run a (yet to be developed) perl script that compares the fixture and playback files and shows PASS or FAIL for each test.

> These could be stored as fixtures in tandem with the tests dir?

Yes: tests/recording, tests/fixtures, tests/playback.

/recording   the <testname>.fwplay files generated with -R --recording -N <testname>
/fixtures    the <testname>.000x.bmp image files and <testname>.log files generated with -F --fixture -N <testname>
/playback    same as /fixtures, except in a different directory

The perl compare script would verify that the files in /fixtures are the same as those in /playback.

> Also, they could be used as a sort of integration testing whenever any
> other change had been done to any other bit of the code?

Yes, in the following way: you may be trying to improve the functionality in one tiny area. You think/hope you aren't hurting something else.
You run the tests, and in the area you are improving, the tests _should_ fail where you made noticeable improvements. All other tests should pass, with no changes. If you see unrelated tests failing, then you know you have side effects.

Let's say there are no side effects, and some tests are failing due to noticeable improvements. Then you would re-run -F --fixture on just those tests directly and positively impacted by the improvement, then re-run the playback and comparisons, and everything should pass again.

> It need not only be for the potential refactoring job coming ahead ...

Right - improvements will cause test failures, but with no side effects, only in the few tests you know should benefit. Then re-run -F on just those tests.

> This is all of course assuming I've understood everything which has
> recently been accomplished ...
>
> a) We can play any given .wrl file and generate fixtures out of them.
> b1) Those fixtures are replay-able on different architectures ( Win /
> Linux / OSX )

Correct. I haven't tried OSX, and there are still lots of scenarios this doesn't cover on the wireless devices and desktop - such as the browser plugin, or SAI/EAI - but it's a very essential, core testing approach that will help developers improve the structure of the main code.

> b2) The playing can also generate snapshots that can be used for
> comparison with previous snapshots ( to be validated )

Correct. The perl compare script can be used to compare the /fixtures and /playback image and .log files.

> Doug, are my assertions correct?

Yes.

> Also, is it worth it if we'd start condensing all this new testing
> functionality into some sort of text document to serve as documentation
> for future reference, rather than it being scattered through emails?

Hmm - I think if you run freewrl --help you should get a brief reminder, except for the "`" and ESC keys, which are internal commands I forgot to add help text for - I'll work on that. I wonder where more detailed help would go. On a web page?
Here's something for users: http://freewrl.sourceforge.net/use.html but testing is more for developers. Maybe a separate HTML page for testing? Also, there are still some parts missing - like perl examples.
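
Since the perl examples are still missing, here is a rough shell stand-in for the step-D comparison: byte-compare each file in tests/fixtures against its counterpart in tests/playback and print PASS or FAIL per file. Only the directory names and the PASS/FAIL idea come from this thread; the function name and everything else is an illustrative assumption, not the eventual perl script.

```shell
# compare_tests [FIXTURES_DIR] [PLAYBACK_DIR]
# Byte-compares every fixture file against the same-named playback file
# and prints one PASS/FAIL line per file.  A sketch, not the future
# perl script; directory defaults follow the layout discussed above.
compare_tests() {
    fixtures=${1:-tests/fixtures}
    playback=${2:-tests/playback}
    status=0
    for f in "$fixtures"/*; do
        name=$(basename "$f")
        if cmp -s "$f" "$playback/$name"; then
            echo "PASS $name"
        else
            echo "FAIL $name"   # differs, or missing in /playback
            status=1
        fi
    done
    return $status              # nonzero if any test failed
}
```

Running it after a playback pass would give the per-test PASS/FAIL summary described in step D; a nonzero exit status means at least one test failed.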
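
The selective re-fixture step (re-running -F --fixture and then -P --playback on just the tests an intentional improvement was meant to change) could be sketched as a small shell helper. The -F/-P/-N flags are from this thread, but the exact freewrl argument shape is an assumption; setting FREEWRL=echo gives a dry run without freewrl installed.

```shell
# refresh_fixtures TESTNAME...
# Rebuild the expected output (-F --fixture) and the candidate output
# (-P --playback) for only the named tests.  Flags are from the thread;
# how freewrl actually wants them combined is an assumption.
refresh_fixtures() {
    for t in "$@"; do
        ${FREEWRL:-freewrl} -F --fixture -N "$t"    # regenerate expected images/log
        ${FREEWRL:-freewrl} -P --playback -N "$t"   # regenerate candidate output
    done
}
```

After this, re-running the comparison over /fixtures and /playback should show everything passing again, as described above.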