From: Pavel C. <pc...@us...> - 2002-07-03 11:02:22
Hi all,

First, a small recap about black box testing:

<recap>
The purpose is to verify the correctness and limits of the final product. That means:

1) Correct input produces a correct result.
2) Incorrect input produces a correct error result.
3) The product reacts correctly to limit conditions and values (memory, disk space, file size, values at behaviour-switching boundaries, etc.)
4) The product reacts correctly to unexpected conditions (power failure, low memory, I/O errors).
5) Consistency. Test results from 1)-4) are consistent in time and with different orders of execution. This also means concurrency tests.
6) Usability (performance, ergonomics, documentation).

If possible, automated test systems are used for the 1), 2), 3) and 5) tests; 4) and 6) usually depend completely on manual labour.

All bugs and issues detected by the QA department are tracked in some sort of problem resolution system. This system is a crucial part of the whole QA/development cycle, and no implemented QA procedure can work successfully for long without it.
</recap>

Current status:

We have a problem tracking system, but it's not connected to the TCS or to the QA cycle in any way.

We also have an automated TCS released by Borland with around 390 tests (please refer to the attached text for the complete list of test groups and the number of tests in each group). The tests are stored in an FB database; the test cases do not cover even the major functionality areas, and mostly fall into the first test category. The tests mostly use ESQL for access to the database, instead of DSQL, which is the prevailing approach today.

Worse, we cannot trust many tests, as it seems that the common practice was to write a test case and then capture the output (from the tested engine!! in the case of new features). This approach doesn't warrant "correctness". That can be achieved only by human labour, i.e. someone has to take the input data, apply the tested algorithm to it and write down the expected results, and this should then be verified by another person. It's obvious that a declarative approach to input data and expected results would work better than a procedural one (a rough sketch of what I mean follows after the questions below). Well, it's a paranoid approach that assumes the tested software cannot be trusted, but I can't imagine any other, "less paranoid" approach to QA that would work for mission-critical software.

The TCS is hard to operate, and writing new tests requires almost the same skills from QA people as from core developers (or more in certain areas). The entry barrier for any apprentice QA developer is very high.

Some notes:

For anything near QA "certification", we absolutely must cover testing in area 1) in full and the most important cases in 3) and 4). Other areas can follow later, but they have to be considered when we plan our QA strategy and the tools to be used.

We'll need reference tests for SQL, the stored procedure language and the DSQL interface (the most used one). All expected results of tests for correctness have to be "correct". We'll also need a friendlier TCS. The open source QMTest (www.codesourcery.com) is an option, but you can suggest other tools (including home-grown ones) as well.

Questions for you:

1) Do you think that the current problem tracking system (and how we use it) is suitable for our needs?
2) If not, what changes do you recommend (technical or procedural)?
3) What are your recommendations and requirements for the TCS?
4) What test cases (groups) should we build, and in what order/priority?
5) Are you willing to write some test cases? If you are, in what area?
6) If yes, what practice do you suggest (what will work best for you)? This includes requirements for the TCS.
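To make the "declarative" idea more concrete, here is a minimal sketch in Python of how a test case could be stored and checked. It assumes the standard isql command-line tool is on the PATH; all names in it (run_case, CASES, the sample database path) are only illustrative, not a proposal for concrete TCS internals:

    # Declarative test cases: pure data, no procedural test code.
    import subprocess

    CASES = [
        {
            "id": "basic-select-1",
            "sql": "SELECT 1 AS answer FROM rdb$database;",
            # The expected output is written down by hand and verified by
            # a second person -- never captured from the tested engine.
            # (Exact isql output formatting elided in this sketch.)
            "expected": "...",
        },
    ]

    def run_case(case, database, user="sysdba", password="masterkey"):
        # Feed the SQL to isql on stdin and capture everything it prints.
        proc = subprocess.run(
            ["isql", "-user", user, "-password", password, database],
            input=case["sql"], capture_output=True, text=True)
        actual = proc.stdout.strip()
        ok = (actual == case["expected"].strip())
        print("%s: %s" % (case["id"], "PASS" if ok else "FAIL"))
        return ok

    if __name__ == "__main__":
        # The database path is hypothetical.
        results = [run_case(c, "localhost:/qa/test.fdb") for c in CASES]
        print("%d of %d cases passed" % (sum(results), len(results)))

The point of such a design is that the input and the expected output are plain data, so a second person can verify a test case's correctness without reading any harness code, and adding a new test requires no programming at all.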
Your comments would be greatly appreciated.

Best regards
Pavel Cisar
http://www.ibphoenix.com
For all your up-to-date Firebird and InterBase information