From: Waylan L. <wa...@gm...> - 2008-03-24 17:33:43
I recently came up with a little different approach to testing python-markdown and would like some feedback. Sorry if this is a little long.

Unrelated to my work on python-markdown, I was checking out [python-nose][] to see what the recent hype was all about. Turns out it's basically a clone of [py.test][] that claims to be "less magic". Now, I had personally never used py.test, so that didn't mean much to me, although "less magic" sounded good. Anyway, in browsing through the nose documentation I stumbled upon [Test generators][] (scroll to the bottom of that page), which I don't remember seeing before. It turns out [py.test has them too][]. Simply put, you write one test which runs through a loop and generates (using `yield`) a separate (sandboxed) unit test for each cycle of the loop.

[python-nose]: http://code.google.com/p/python-nose/
[py.test]: http://codespeak.net/py/dist/test.html
[Test generators]: http://code.google.com/p/python-nose/wiki/WritingTests
[py.test has them too]: http://codespeak.net/py/dist/test.html#generative-tests-yielding-more-tests

Immediately, I saw this as an interesting way to test markdown output. The test class should be able to loop through a list of files in a given directory, and each file's output could be its own separate unit test. I threw some code together (see attachment) and it Just Works(TM). Cool!

This is by no means production ready. For starters, I'm only doing an `output_string == expected_output_string` type check, and some of the existing tests (4, to be precise) fail because of some minor whitespace issues (I think the DOM behavior was changed after those tests were created). There is no diff of the failing output produced, nor a nice HTML report like we get in the current framework. I'm not going to spend the time to work those things out until I know this will be used. Although I did find [xmldiff][], which may be an interesting approach regardless of the testing framework used.

[xmldiff]: http://www.logilab.org/859

There are some interesting possibilities this opens up, though. I seem to recall someone recently complaining on this list about there not being any API tests. This would make it easy to integrate them in with the syntax tests. Additionally, the way I'm initiating the markdown class, it's easy to override in a subclass with your own custom args for that batch of tests. It certainly is more flexible in that respect. In fact, I *will most likely* be adopting something much like this for some of my extensions (particularly meta-data and the other extensions it supports) regardless of what happens with python-markdown's testing.

I should mention that the current testing framework runs each test multiple times for performance and memory usage measurements. I imagine we would lose that if we went with something like this. There's also the fact that neither nose nor py.test is part of the standard Python library. However, I would imagine the average user will never run the tests anyway, so do we even care?

So what do you think? Is this something worth pursuing or not?

P.S.: If you're testing this out, just copy the attached file into the "tests" directory. Assuming python-nose is installed on your system, run `nosetests` with no args from within the "tests" directory, and nose will find the tests and run them. I should also mention that I wasn't too concerned with the color I painted this shed. I just wanted to get something that worked. So if you have any better names for things, please share.
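For anyone who wants a feel for the generator approach without opening the attachment, here's a minimal sketch of the idea. This is *not* the attached code; the file layout, class name, and keyword arguments are placeholders I made up for illustration:

    import os
    import markdown

    class TestSyntax(object):
        """Yield one test per input file found in `location`.

        Note: nose does not support generator methods on
        unittest.TestCase subclasses, so this is a plain class.
        """

        # Subclasses can point at another directory or pass their own
        # arguments to markdown (e.g. a list of extensions to enable).
        location = '.'
        markdown_kwargs = {}

        def check_file(self, input_path, expected_path):
            output = markdown.markdown(open(input_path).read(),
                                       **self.markdown_kwargs)
            expected = open(expected_path).read()
            assert output == expected, 'Output does not match %s' % expected_path

        def test_files(self):
            # Each yield becomes a separate, individually reported test.
            for name in os.listdir(self.location):
                if not name.endswith('.txt'):
                    continue
                expected = os.path.splitext(name)[0] + '.html'
                yield (self.check_file,
                       os.path.join(self.location, name),
                       os.path.join(self.location, expected))

The nice part is that a subclass only has to override `location` or `markdown_kwargs` (say, to enable an extension) to get its own batch of generated tests, without touching the loop itself.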
--
----
Waylan Limberg
wa...@gm...