Hi all,
Thank you for your interest in the Firebird QA efforts! I'm very sorry
about my silence after I invited you to this discussion, but I was forced
to attend to other things not related to the Firebird project. It seems,
though, that I'll have more than enough time to continue with this topic
in the coming weeks, and because FB2 development is moving forward and
we can hope for a 1.5 release in the next few months, this topic (and
real progress on it) is more important than ever.
First and foremost, I'd like to inform you in detail about the current
state of Firebird QA, and about the ideas and work from recent months to
improve it. As you may know, quality is not an easy target, and there
are many books and techniques for achieving it in software projects.
Because the list of QA techniques that are currently used (or that we
think should be used) in the Firebird project is quite long, I'm going
to start a separate message thread for each technique to keep the
discussion clean and on topic. But here is a brief version of the list.
a) White-box testing (WBT) by developers
1) Peer review.
2) Code audit
3) Development test cases.
4) Design by Contract.
Problems discovered by developers are rarely tracked in the Bug Tracking
System; they are usually fixed immediately.
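As an illustration of item a-4) above, Design by Contract can be
approximated even without special language support by stating pre- and
postconditions as assertions. The following Python sketch uses a
hypothetical page-buffer routine (not actual Firebird code) just to show
the shape of the technique:

```python
def allocate_page_buffer(page_size):
    """Return a zeroed buffer for one database page.

    Contract: the caller must pass a sane page size (precondition);
    in return, the function guarantees a buffer of exactly that size
    (postcondition). Both sides of the contract are checked here.
    """
    # Precondition: page size is a power of two in a plausible range.
    assert 1024 <= page_size <= 16384, "precondition violated: size out of range"
    assert page_size & (page_size - 1) == 0, "precondition violated: not a power of two"

    buffer = bytearray(page_size)

    # Postcondition: the buffer has exactly the requested size.
    assert len(buffer) == page_size, "postcondition violated: wrong buffer size"
    return buffer
```

A contract violation fails loudly at the point of the broken assumption
(e.g. `allocate_page_buffer(1000)` trips the precondition), which is
exactly the kind of defect developers catch and fix immediately rather
than file in a tracker.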
b) Black-box testing (BBT) by "QA department"
The purpose of the "QA department" is to verify the correctness and
limits of the final product. That means:
1) Correct input produces a correct result.
2) Incorrect input produces a correct error result.
3) Product reacts correctly to limit conditions and values (memory,
disk space, file size, values at behaviour-switching boundaries, etc.)
4) Product reacts correctly to unexpected conditions (power failure,
low memory, I/O errors)
5) Consistency. Test results from 1)-4) are consistent over time and
across different orders of execution. This also means concurrency tests.
6) Usability (performance, ergonomics, documentation)
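To make categories 1)-3) concrete, here is a minimal automated
black-box test sketch in Python. The routine under test is a toy
stand-in for any product entry point (it is not real Firebird code);
the point is the test structure, which exercises each category through
the public interface only:

```python
def parse_smallint(text):
    """Toy stand-in for a product entry point: parse a SQL SMALLINT."""
    value = int(text)  # raises ValueError on incorrect input
    if not -32768 <= value <= 32767:
        raise OverflowError("value out of SMALLINT range")
    return value

def run_black_box_tests():
    # 1) Correct input produces a correct result.
    assert parse_smallint("123") == 123

    # 2) Incorrect input produces a correct error result.
    try:
        parse_smallint("abc")
        raise SystemExit("expected ValueError for bad input")
    except ValueError:
        pass

    # 3) Limit values at the behaviour-switching boundary.
    assert parse_smallint("32767") == 32767   # last accepted value
    try:
        parse_smallint("32768")               # first rejected value
        raise SystemExit("expected OverflowError past the boundary")
    except OverflowError:
        pass

run_black_box_tests()
```

Tests of category 5) would then rerun such suites in different orders
and under concurrent load, checking that the results stay the same.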
If possible, automated test systems are used to perform tests 1), 2),
3) and 5). Tests 4) and 6) usually depend entirely on manual labour.
All bugs and issues detected by the QA department are tracked in some
sort of problem resolution system. This system is a crucial part of the
whole QA/development cycle, and no implemented QA procedure can work
successfully for long without it. Due to its nature, I'll break this
topic into the following threads:
1) Automated QA - Test Control System issues
2) Problem tracking
3) Organization and procedural issues
If you have any comments or suggestions related to these lists, don't
hesitate to share them in a direct reply to this message.
Best regards
Pavel Cisar
http://www.ibphoenix.com
For all your up-to-date Firebird and
InterBase information