From: Pavel C. <pc...@us...> - 2002-07-03 11:02:22
Hi all,

First, a small recap about black box testing:

<recap>
The purpose is to verify the correctness and limits of the final product. That means:

1) Correct input produces a correct result.
2) Incorrect input produces a correct error result.
3) The product reacts correctly to limit conditions and values (memory, disk space, file size, values at behaviour-switching boundaries etc.).
4) The product reacts correctly to unexpected conditions (power failure, low memory, I/O errors).
5) Consistency. Test results from 1)-4) are consistent over time and with different orders of execution. This also means concurrency tests.
6) Usability (performance, ergonomics, documentation).

If possible, automated test systems are used for tests in areas 1), 2), 3) and 5); areas 4) and 6) usually depend completely on manual labour.

All bugs and issues detected by the QA department are tracked in some sort of problem resolution system. This system is a crucial part of the whole QA/development cycle, and no implemented QA procedure can work successfully for long without it.
</recap>

Current status:

We have a problem tracking system, but it's not connected to TCS nor to the QA cycle in any way.

We also have an automated TCS released by Borland with around 390 tests (please refer to the attached text for the complete list of test groups and the number of tests in each group). Tests are stored in an FB database. The test cases do not cover even the major functionality areas, and mostly fall into the first test category. Tests mostly use ESQL for access to the database, instead of DSQL, which is the prevailing approach today.

Worse, we cannot trust many of the tests, as it seems that the common practice was to write a test case and then capture the output (from the tested engine!! in the case of new features). This approach doesn't warrant "correctness". Correctness can only be established by human labour, i.e. someone has to take the input data, apply the tested algorithm to it and write down the expected results. This should also be verified by another person. It's obvious that a declarative approach for input data and expected results would work better than a procedural one (a rough sketch of what I mean follows at the end of this message). Well, it's a paranoid approach that assumes the tested software cannot be trusted, but I can't imagine any other, "less paranoid" approach to QA that would work for mission-critical software.

TCS is hard to operate, and writing new tests requires almost the same skills (or more in certain areas) from QA people as from core developers. The entry barrier for any apprentice QA developer is very high.

Some notes:

For anything near QA "certification", we absolutely must cover testing in area 1) in full, plus the most important cases in 3) and 4). Other areas can follow later, but they have to be considered when we plan our QA strategy and the tools we use.

We'll need reference tests for SQL, the stored procedure language and the DSQL (most used) interface. All expected results of tests for correctness have to be "correct".

We'll need a more friendly TCS. The open source QMTest (www.codesourcery.com) is an option, but you can suggest other tools (including home-grown) as well.

Questions for you:

1) Do you think that the current problem tracking system (and how we use it) is suitable for our needs?
2) If not, what changes do you recommend (technical or procedural)?
3) What are your recommendations and requirements for a TCS?
4) What test cases (groups) should we build, and in what order/priority?
5) Are you willing to write some test cases? If you are, in what area?
6) If yes, what practice do you suggest (what will work best for you)? This includes requirements for the TCS.
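Finally, to make the "declarative approach" mentioned above a bit more concrete, here is a rough sketch (Python; the field names and the harness around it are purely illustrative assumptions, not anything that exists today) of what a single declarative test case could look like - input data and expected result written down by hand, with all the procedural work left to the harness:

# One declarative test case: everything is plain data that a second
# person can verify against the SQL specification by hand.
SAMPLE_TEST = {
    "id": "agg-sum-001",
    "setup_sql": "CREATE TABLE T1 (N INTEGER); "
                 "INSERT INTO T1 VALUES (1); INSERT INTO T1 VALUES (2);",
    "test_sql": "SELECT SUM(N) FROM T1;",
    # Expected result derived from the specification by hand,
    # never captured from the engine under test.
    "expected_rows": [(3,)],
}

def check(test, actual_rows):
    # Compare rows returned by whatever interface ran test_sql
    # against the hand-verified expectation.
    if list(actual_rows) == test["expected_rows"]:
        return "PASS: %s" % test["id"]
    return "FAIL: %s expected %r, got %r" % (
        test["id"], test["expected_rows"], list(actual_rows))

# e.g. check(SAMPLE_TEST, [(3,)]) -> 'PASS: agg-sum-001'

The point is only that the test itself carries no procedural code at all, so it can be reviewed (and re-verified) without running anything.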
Your comments would be greatly appreciated.

Best regards
Pavel Cisar
http://www.ibphoenix.com
For all your up-to-date Firebird and InterBase information
From: Paul R. <pr...@ib...> - 2002-07-03 13:38:34
Pavel Cisar wrote:

> Tests mostly use ESQL for access to the database, instead of DSQL,
> which is the prevailing approach today.

How would you see us testing DSQL? I'd trust the results of an ESQL test over a DSQL test any day. Writing good (I mean correct) DSQL tests is not easy. At least gpre knows what it is doing.

This is not a reason to ignore testing DSQL, especially as gpre generates undocumented API calls in some cases. But having played a little with writing gpre scripts, I think the learning curve is shallower. Test writers can concentrate on writing code that will accomplish a test. Writing DSQL will turn into a test of the programmer's C/C++ abilities - at least in the beginning. I think one of the goals of the QA group is to create an environment where tests can be created quickly and easily by many developers. ESQL makes this a lot easier.

> We'll need a more friendly TCS. The open source QMTest (www.codesourcery.com)
> is an option, but you can suggest other tools (including home-grown) as
> well.

Have you seen QAT yet? (http://qat.sourceforge.com). It is another open source test harness, this time written in Java. I haven't played with it seriously, but the fact that it is in Java makes it more interesting to me.

My main objection (in fact my only objection) to QMTest is that it requires Python. It is another thing to install and configure, and it quite probably requires learning a bit of Python, too. That said, I'm not stuck on this. A harness that is easily configurable and allows easy extension and addition to the test 'database' is the most important thing. We need to reduce the barriers to test writing as much as possible.

Paul
--
Paul Reeves
http://www.ibphoenix.com
Supporting users of Firebird and InterBase
From: John B. <bel...@cs...> - 2002-07-03 17:09:56
Hi,

On Wednesday, July 3, 2002, at 06:31 AM, Paul Reeves wrote:

> Pavel Cisar wrote:
>
>> Tests mostly use ESQL for access to the database, instead of DSQL,
>> which is the prevailing approach today.
>
> How would you see us testing DSQL? I'd trust the results of an ESQL
> test over a DSQL test any day. Writing good (I mean correct) DSQL tests
> is not easy. At least gpre knows what it is doing.

One of the things that would help here is the ability to take generic SQL scripts and run them through all the different interfaces (ESQL, DSQL, etc.). This would allow us to drive common aspects of our different interfaces from the same test script. Write once and test everywhere. Obviously it will only work for the common subset of interface functionality (DSQL doesn't have blob access, for example). But a lot of the tests in TCS are glorified SQL scripts already and would benefit from this (a rough sketch of such a driver is in the P.S. below).

> [...]
>
>> We'll need a more friendly TCS. The open source QMTest
>> (www.codesourcery.com) is an option, but you can suggest other tools
>> (including home-grown) as well.
>
> Have you seen QAT yet? (http://qat.sourceforge.com). It is another open
> source test harness, this time written in Java. I haven't played with it
> seriously, but the fact that it is in Java makes it more interesting to me.

At one point I had ported most, if not all, of TCS to DejaGnu. So I guess that is also a consideration. But the interface on that is admittedly poor compared to some of the GUI-based test harnesses.

> My main objection (in fact my only objection) to QMTest is that it
> requires Python. It is another thing to install and configure, and it
> quite probably requires learning a bit of Python, too. That said, I'm
> not stuck on this. A harness that is easily configurable and allows
> easy extension and addition to the test 'database' is the most important
> thing. We need to reduce the barriers to test writing as much as
> possible.

I think the barrier to entry for the test suite can be higher than that of the engine. For example, I wouldn't have a problem if the would-be tester had to install one or two additional software packages to test (5-10 would be excessive), as long as the process is clearly documented, uses only free tools (of course), and only takes a few minutes aside from download time. Our testers should not have to learn any additional programming language just to run the tests. It is a reasonable expectation that they need to know some language to write tests.

-John
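P.S. To make the "write once, test everywhere" idea a bit more concrete, here is a rough sketch in Python. The esql_runner binary, the exact isql switches, and the assumption that every back-end prints results in one canonical format are all assumptions of mine, not existing pieces:

import subprocess

def run_backend(cmd):
    # Run one interface back-end and capture everything it prints.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    return out.decode("latin-1", "replace")

def run_everywhere(script, database, expected_file):
    # One generic SQL script, one hand-verified expected output,
    # several interface back-ends producing the same canonical output.
    backends = {
        "dsql": ["isql", "-user", "SYSDBA", "-password", "masterkey",
                 "-i", script, database],
        "esql": ["./esql_runner", script, database],  # hypothetical binary
    }
    expected = open(expected_file).read()
    results = {}
    for name, cmd in backends.items():
        results[name] = "PASS" if run_backend(cmd) == expected else "FAIL"
    return results

# e.g. run_everywhere("select_basic.sql", "test.gdb", "select_basic.out")

The interesting (and hard) part is of course the esql_runner side, i.e. getting an ESQL program to execute a generic script at all - see below.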
From: Pavel C. <pc...@us...> - 2002-07-03 19:48:30
Hi,

On 3 Jul 2002 at 10:09, John Bellardo wrote:

> One of the things that would help here is the ability to take generic
> SQL scripts and run them through all the different interfaces (ESQL,
> DSQL, etc.). This would allow us to drive common aspects of our
> different interfaces from the same test script. Write once and test
> everywhere. Obviously it will only work for the common subset of
> interface functionality (DSQL doesn't have blob access, for example).
> But a lot of the tests in TCS are glorified SQL scripts already and
> would benefit from this.

This would be really nice. But pardon my ignorance (I'm not an ESQL guru), how can we run arbitrary SQL scripts via ESQL? I don't think this is possible directly, so we would need some transformation tool (yet another kind of gpre)?

> At one point I had ported most, if not all, of TCS to DejaGnu. So I
> guess that is also a consideration. But the interface on that is
> admittedly poor compared to some of the GUI-based test harnesses.

My main objection to a DejaGnu/expect solution is that IMHO it really doesn't solve the main problem - how tests are written. Actually, TCS is not so bad once you get it running, but tests written in a mixture of shell script, arcane TCS commands, environment variables (both OS and TCS), C/C++, ESQL etc., using various external tools (thank god all standard, free or part of the TCS suite), and all packed into a single text file (sorry, I mean a BLOB in the test database ;-) are really a nightmare. Of course, DejaGnu/expect could make the learning curve a bit shallower, but at the expense of having to rewrite all the tests. If we have to write all tests from scratch anyway, I'd like to try to find a more "test writer friendly" solution first. But DejaGnu is still in play as a backup solution.

> I think the barrier to entry for the test suite can be higher than that
> of the engine. For example, I wouldn't have a problem if the would-be
> tester had to install one or two additional software packages to test
> (5-10 would be excessive), as long as the process is clearly documented,
> uses only free tools (of course), and only takes a few minutes aside
> from download time. Our testers should not have to learn any additional
> programming language just to run the tests. It is a reasonable
> expectation that they need to know some language to write tests.

I agree, but an easy-to-set-up and easy-to-operate test harness would be a plus. Actually, the test harness is not supposed to be used by more than a few people - platform keepers/packagers and QA specialists (of course, it would be nice if anyone could verify our results, but it's not required).

But we do not have enough good tests (and we are speaking about thousands of tests here) and we need them quickly. That is the main problem we face, regardless of the concrete test harness we settle on. Since the Firebird project's inception we have learned that skilled C/C++ developers are too scarce to be wasted on something like test cases, even if we could convince them to write them. But there are many Delphi, Java, PHP and SQL developers flying around the Firebird project who are not C/C++ developers, and thus disqualified from core engine development, waiting for an opportunity to do something useful. We should come up with a solution that allows these developers to write some tests in their spare time. Not much work, not much learning, not much intellectual load. It's IMO the only way we can get the tests we need, and in time.
For comparison: if I were paid to work on test cases full time, and I wrote (and verified) ten tests a day, it would take me almost two months just to replace the existing TCS tests (around 390 of them). And we'll need ten times more tests just to match Borland's "certification". We need an approach that either allows me to write more tests per day (not feasible - a new test must be _designed_, written and tested, and that is a lot of handwork in itself), or allows us to bring in more people (usually new, not already involved in Firebird development). I can't see any other way than lowering the entry barrier for test writers. But how?

Well, declarative tests can work for some tested areas (common SQL, PSQL) but not for all (events, ISQL etc.). But how should a "declarative test" be "declared"? How should the other tests be built? That is the question, dear Horatio :) I have some ideas (one rough sketch is in the P.S. below), but I'd greatly appreciate any suggestions about how to make test writing more sexy for anyone.

Best regards
Pavel Cisar
http://www.ibphoenix.com
For all your up-to-date Firebird and InterBase information
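P.S. One possibility I keep coming back to, purely as a sketch to argue about (the file layout, the section names and the parser below are nothing more than my assumptions): a test is a small plain-text file that a Delphi/PHP/SQL developer can write without knowing C, and the harness does the rest.

# A test declaration file might look like this (layout is only a proposal;
# note that "--" is also the SQL comment lead-in, so a real format would
# have to reserve a few marker names or use something else entirely):
#
#   -- id
#   concat-001
#   -- setup
#   CREATE TABLE T2 (S VARCHAR(10));
#   INSERT INTO T2 VALUES ('fire');
#   -- test
#   SELECT S || 'bird' FROM T2;
#   -- expect
#   firebird

def parse_declaration(text):
    # Split the file into named sections on the "-- name" marker lines.
    sections, current = {}, None
    for line in text.splitlines():
        if line.startswith("-- "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return dict((name, "\n".join(body).strip())
                for name, body in sections.items())

The harness would then feed "setup" and "test" to whatever interface is being exercised and diff the result against "expect". The writer never sees the harness internals.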
From: John B. <bel...@cs...> - 2002-07-03 21:21:20
Pavel,

On Wednesday, July 3, 2002, at 12:49 PM, Pavel Cisar wrote:

> Hi,
>
> On 3 Jul 2002 at 10:09, John Bellardo wrote:
>
>> One of the things that would help here is the ability to take generic
>> SQL scripts and run them through all the different interfaces (ESQL,
>> DSQL, etc.). This would allow us to drive common aspects of our
>> different interfaces from the same test script. Write once and test
>> everywhere. Obviously it will only work for the common subset of
>> interface functionality (DSQL doesn't have blob access, for example).
>> But a lot of the tests in TCS are glorified SQL scripts already and
>> would benefit from this.
>
> This would be really nice. But pardon my ignorance (I'm not an ESQL
> guru), how can we run arbitrary SQL scripts via ESQL? I don't think this
> is possible directly, so we would need some transformation tool (yet
> another kind of gpre)?

There would need to be a small transformation tool, but it shouldn't be too complex. It would take the SQL as input and spit out a .e (or .epp) file (a rough sketch is in the P.S. below).

> [...]

-John
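P.S. For illustration, a rough Python sketch of the kind of transformation I have in mind. The ESQL boilerplate here is deliberately simplified (no SET DATABASE / CONNECT, no host variables, naive statement splitting that would break on semicolons inside strings or procedure bodies), so take it as an illustration of the idea rather than working gpre input:

E_TEMPLATE = """\
#include <stdio.h>

int main(void)
{
    /* EXEC SQL SET DATABASE / CONNECT omitted for brevity. */
    EXEC SQL WHENEVER SQLERROR GOTO bail;
%s
    EXEC SQL COMMIT RELEASE;
    printf("OK\\n");
    return 0;
bail:
    printf("SQL error\\n");
    return 1;
}
"""

def sql_to_esql(sql_text):
    # Wrap each statement of a generic SQL script in EXEC SQL.
    stmts = [s.strip() for s in sql_text.split(";") if s.strip()]
    body = "\n".join("    EXEC SQL %s;" % s for s in stmts)
    return E_TEMPLATE % body

if __name__ == "__main__":
    import sys
    # hypothetical usage: python sql2e.py test.sql > test.e, then gpre + cc
    sys.stdout.write(sql_to_esql(open(sys.argv[1]).read()))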