From: Leyne, S. <sl...@at...> - 2001-05-15 15:13:20
David, see my comments below:

> -----Original Message-----
> From: David Jencks [mailto:dav...@ea...]
> Sent: Tuesday, May 15, 2001 12:25 AM
> To: fir...@li...
> Subject: RE: [Firebird-test] Catching up on postings
>
> > <Sean>
> > That's great news!
> >
> > I'm wondering though, why did you choose XML as the file format, as
> > compared to straight text?
> > </Sean>
>
> <david>
> XML structure/parsers give you the ability to find just what you want
> in an arbitrarily large document without having to write a parser.
> Navigation is also plausible. It should be very easy to provide nice
> html views of resultsets from the xml translation. The xml structure
> makes it easy to pinpoint where the differences between the old and
> new results are... give a path to the elements. A big reason though is
> probably because this started life as an ant task, and ant build files
> are xml based.
> </david>

<Sean>
David, are you proposing to store the test results in a separate file
from the original script, or in the same file? Personally, I think that
the results should be stored separately.

I can appreciate the benefits of using XML for storing the results; it
would allow for a quick analysis of the elements to determine
discrepancies.

I must admit that I really prefer the simplicity of straight text for
the testing scripts themselves; perhaps I need to see a simple example
to appreciate the benefits of the structured layout (a rough, made-up
sketch appears at the end of this message).
</Sean>

> > > I don't think reliance on isql is appropriate.
> >
> > <Sean>
> > Before Ann has a chance to ask, why not?
> >
> > ISQL needs to be tested, so why not use it as part of the tests?
> > </Sean>
>
> <david>
> I suppose you are right that it needs testing. I have not had good
> results with isql; maybe I don't know how to make it commit. It
> doesn't usually show me what's in the database, put there by
> interclient. Also, as far as I know, you can't test parameterized
> queries with it. Testing threading/parallel execution is, as far as I
> know, impossible. It seems to me that testing the ways client programs
> will use the db is more important than testing a scripting engine. So
> for my money, the most important testing is through jdbc drivers, odbc
> drivers, ibo, ibx, bde, c code that uses the dynamic sql interface,
> static sql, and gdml, in approximately that order (the first 3 can be
> shuffled).
> </david>

<Sean>
Ah, there's your problem! You're using InterClient! <grin>

The only problem I have with your points above is that the initial
focus is on testing the engine functionality. Using any data access
layer (jdbc, odbc, IBO...) introduces into the testing scheme possible
'artifacts'/errors related to the data access layer rather than to
engine functionality. The current ISQL approach virtually eliminates
these types of errors, saving testers from hunting down 'false
positive' results.

Furthermore, when it comes to diagnosing anomalies in data results,
ISQL has been used as the "gold standard" for determining whether a
problem is engine or data access layer related.

Don't get me wrong, I agree that data access layer testing is necessary
and should be considered as part of the overall goal.
</Sean>

> > <Sean>
> > Don't understand what you mean about extracting the SQL?
> > </Sean>
>
> <david>
> Most of the TCS I looked at (not very much) consisted of a simple sql
> statement plus a c program to run it and produce text output;
> elsewhere in TCS is the expected output. The format of the output is
> highly dependent on the actual c program used.
> To me, the important parts are, in order: (1) the SQL, (2) the logical
> structure and contents of the results, (3) the logical setup
> information, (4) the c program, (5) the text layout of the results.
>
> Finding the SQL and putting it into the xml format I am proposing is
> fairly easy. Finding the expected results is a little harder. I don't
> know how to automatically translate the expected results into the xml
> result set format I propose. I think someone has to look at it and
> decide if the meaning is the same. I suppose it might be possible to
> take the xml and transform it into text of the same format as TCS, and
> compare that... I'll have to think about this some more.
> </david>

<Sean>
David, it was my understanding that the suite involved sending the
script commands to ISQL and then capturing the generated results. I
wasn't aware of the associated c programs, so perhaps I (and others)
need an education about the functionality/implementation of the TCS
suite and the role of the C programs.

Accordingly, I _had_ been thinking it was actually fairly
straightforward to capture the results (although this may need some
more thought). First, you successfully run the existing TCS suite
against an existing engine release (0.9-4); this ensures that the
engine is producing the expected results. Then you run the new program
using the new text/xml scripts and store the results. These results
become the new "standard"/expected results by which all subsequent test
runs can be judged.

This doesn't work for new tests which we create, but it should get us
over the hurdle of creating the initial/current results.
</Sean>

> <david>
> Here are some things I think are important or a good idea.
>
> 1. Separate logical content of tests and results from formatting
>    details.
> 2. Separate platform/execution context from content of test.
> 3. Concentrate on testing drivers used by application programs.
> 4. Have as much of the testing framework work on all platforms as
>    possible, with as little modification as possible.
> 5. File based version control.
> 6. Test suite updates available in small pieces (not the whole tcs db).
>
> I hope you can comment on my samples posted in a different message.
> Thanks!
> </david>

<Sean>
I don't think I fully understand what you mean in #2.

WRT #3, I would suggest that the efforts should be concentrated on
testing the engine and _then_ on testing the data drivers.
</Sean>
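P.S. For anyone trying to picture the "structured layout" being
discussed: the sketch below is purely illustrative. It is my own guess,
not David's actual format (his samples are in a separate message), and
every element and attribute name in it is made up. It only shows the
general idea of an XML test script with its expected results kept in a
separate file:

    <!-- hypothetical test script, e.g. employee-select.xml -->
    <test name="employee-select">
      <setup>
        <sql>CREATE TABLE employee (id INTEGER, name VARCHAR(30))</sql>
        <sql>INSERT INTO employee VALUES (1, 'Smith')</sql>
        <sql>INSERT INTO employee VALUES (2, 'Jones')</sql>
      </setup>
      <statement>SELECT id, name FROM employee ORDER BY id</statement>
    </test>

    <!-- hypothetical expected results, stored in a separate file,
         e.g. employee-select-results.xml -->
    <resultset test="employee-select">
      <row><col name="ID">1</col><col name="NAME">Smith</col></row>
      <row><col name="ID">2</col><col name="NAME">Jones</col></row>
    </resultset>

With a layout along these lines, a difference between old and new
results can be reported as a path to the differing element (for
example, the second column of the first row), which is the kind of
pinpointing David describes above.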