From: Paulo E. C. <pau...@gm...> - 2013-06-06 10:22:13
On 01/06/13 21:55, doug sanden wrote:
> identical to pass. So there's 4 steps to testing:
> A) generate automated test. In our case that's the .fwplay manually generated.
> B) generate a test fixture before refactoring code, by playing the automated test
> C) after refactoring, generate a playback file
> D) compare B and C output files. They should be the same if your refactoring was good. If a test fails, roll back your refactoring changes, and try again.

Aren't we better off generating the tests up front of any changes? These could be stored as fixtures in tandem with the tests dir. They could also serve as a sort of integration testing whenever any other change has been made to any other part of the code; it need not be only for the potential refactoring job coming ahead ...

This is all of course assuming I've understood everything which has recently been accomplished ...

a) We can play any given .wrl file and generate fixtures out of it.
b1) Those fixtures are replay-able on different architectures (Win / Linux / OSX).
b2) The playing can also generate snapshots that can be used for comparison with previous snapshots (to be validated).

Doug,

Are my assertions correct?

Also, would it be worth condensing all this new testing functionality into some sort of text document, to serve as documentation for future reference rather than being scattered through emails?
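A minimal sketch of step D, the fixture-vs-playback comparison. Everything here is an assumption for illustration: the directory layout, file names, and the `compare_runs` helper are hypothetical and not part of freewrl itself; the idea is just that a pre-refactor fixture directory and a post-refactor playback directory should contain byte-identical snapshot files.

```python
# Hypothetical harness for step D: compare the fixture output captured
# before refactoring (B) against the playback output captured after (C).
# Directory layout and file names are assumptions, not freewrl conventions.
import hashlib
from pathlib import Path


def digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def compare_runs(fixture_dir: str, playback_dir: str) -> list:
    """Return the names of snapshot files that differ or are missing.

    An empty list means the refactoring preserved behaviour for this test.
    """
    mismatches = []
    fixture = Path(fixture_dir)
    playback = Path(playback_dir)
    for f in sorted(fixture.iterdir()):
        g = playback / f.name
        if not g.exists() or digest(f) != digest(g):
            mismatches.append(f.name)
    return mismatches
```

Such a script could run over every stored fixture in the tests dir, which would give exactly the kind of integration check described above: any change anywhere in the code that alters rendering output would surface as a non-empty mismatch list.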