From: Steve H. <sh...@zi...> - 2002-07-25 18:37:30
----- Original Message -----
From: "why the lucky stiff" <yam...@wh...>

> Steve Howell (sh...@zi...) wrote:
> > All of the YAML-driven specification-based tests for the pure Python YAML
> > implementation now reside in TestingSuite/spec.yml, which also drives a
> > great deal of the testing for the Ruby implementations.
>
> BRILLIANT!! I can't believe this is actually coming together so quickly.
> Thanks for adopting the suite so quickly.
>
> You'll probably want to comb through the Wiki page for some specifics.
> Especially the end of the page about adding the suite to your
> implementation. There's some details there that either need to be worked
> out for PyYaml or disputed and corrected. Such as:

Permit me to respond in email first, but as we resolve issues, I will
update the Wiki.

> 1. The testing suites should go in the /yts/ directory of your
> distribution along with a test.sh which runs the tests. This is
> arbitrary, but I want it to be consistent.

Hmmm... I'm curious why you didn't call the folder in Perforce "yts"
instead of "TestingSuite." Although TestingSuite seems like a better name
to me, and Clark and I have both configured our Perforce clients to use
that name. :)

As for the mechanism of running the tests, I don't necessarily think we
need to standardize across languages. For example, when Brian adds the
Perl tests, it might be more natural for him to put them in a folder
called "t," since that's the Perl convention. Of course, there's nothing
to stop him from making a soft link and distributing a test.sh as well.
I'm just not sure it's worth the trouble.

> 2. The test.sh should return a valid YAML document with results of the
> tests. See the Wiki page for how this should look. You can run the
> test.sh in YAML4R to see how it all should work.
>
> The point is to give us a simple interface for testing all the
> implementations simultaneously for generating parallel results.

Hmmm, slight difference in philosophy here too.
I actually think the tests in spec.yml should be assumed to be 100%
passing for the latest implementation. That's how I'm doing it in Python,
anyway: if I can't get a test passing, I change its key to
"not_yet_for_python." This makes the spec.yml file a little more
self-documenting, although you have to trust that the implementer has made
everything pass. And, of course, I am not providing any quick, clean way
for Python folks to run my tests and see the stats, so I do see the
benefit of your approach. I need to think on this a little bit.

It shouldn't be hard to get a more compliant yts driver working, but for
now my higher priority is porting some of my code-driven tests to YAML,
and I also need to knock out a few YAML features.

Thanks,

Steve
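A rough sketch of the spec.yml-driven approach described above, in Python. The entry layout, the handling of the "not_yet_for_python" key prefix, and the shape of the emitted YAML results document are all assumptions for illustration; the real schema lives in TestingSuite/spec.yml and on the Wiki page.

```python
# Hypothetical spec.yml-driven test runner. Entries stand in for parsed
# spec.yml tests; the real file's structure may differ.

def run_suite(entries, parse):
    """Run each test entry through the YAML loader under test.

    entries: list of dicts with 'name', 'yaml' (source text), 'expected'.
    parse:   the implementation's loader, called as parse(yaml_text).
    Returns (passed, failed, skipped) lists of test names.
    """
    passed, failed, skipped = [], [], []
    for entry in entries:
        name = entry["name"]
        # Honor the renaming convention: keys the implementer has marked
        # as not yet passing are skipped rather than counted as failures.
        if name.startswith("not_yet_for_python"):
            skipped.append(name)
            continue
        try:
            result = parse(entry["yaml"])
        except Exception:
            failed.append(name)
            continue
        (passed if result == entry["expected"] else failed).append(name)
    return passed, failed, skipped


def results_as_yaml(passed, failed, skipped):
    """Emit a small YAML document of results, in the spirit of the
    test.sh interface (the exact schema is on the Wiki; this is a guess)."""
    lines = [
        "---",
        "pass: %d" % len(passed),
        "fail: %d" % len(failed),
        "skip: %d" % len(skipped),
    ]
    if failed:
        lines.append("failed:")
        lines.extend("  - %s" % name for name in failed)
    return "\n".join(lines)
```

A driver like this keeps both philosophies visible: the skip count surfaces the "trust the implementer" caveat, while the YAML output gives the simple cross-implementation interface the test.sh proposal is after.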