From: Philippe M. <mak...@fi...> - 2008-12-15 15:20:18
Hi all,

Today I committed a new version of the tests to avoid false FAILs related to the change from SQLCODE to SQLSTATE.

Results are much cleaner now for 2.5. There is still some cleaning to do, but for now I prefer to leave the remaining ones as they are, at least so as not to hide changes that came with the new ODS, new functions, etc.

With the new version there are still 41 FAILs out of 625 tests under Windows, but that's really better than before the cleaning ;) At least the results can be used now without too much noise from false FAILs.

--
Philippe Makowski   http://www.ibphoenix.com
Supporting users of Firebird
Tel +33 (0) 561058813
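For context on what avoiding a "false FAIL" means in practice: the sketch below, in plain Python, shows one way an expected-output comparison can be made tolerant of the SQLCODE-to-SQLSTATE switch. It is not code from the QA suite; the function names and the exact message formats are assumptions used purely for illustration.

    import re

    # Lines such as "Statement failed, SQLCODE = -803" (older engines) or
    # "Statement failed, SQLSTATE = 23000" (Firebird 2.5) are version
    # dependent, so they are dropped before comparing outputs.
    _STATUS_LINE = re.compile(r"^Statement failed, (SQLCODE|SQLSTATE) = \S+$")

    def normalize_errors(output):
        """Return error output with version-dependent status lines removed."""
        kept = [line.rstrip() for line in output.splitlines()
                if not _STATUS_LINE.match(line.strip())]
        return "\n".join(kept).strip()

    def outputs_match(expected, actual):
        """Compare expected and actual error output after normalization."""
        return normalize_errors(expected) == normalize_errors(actual)

    # The same constraint violation reported by two engine versions
    # compares equal once the status line is ignored:
    old_style = ("Statement failed, SQLCODE = -803\n"
                 "violation of PRIMARY or UNIQUE KEY constraint")
    new_style = ("Statement failed, SQLSTATE = 23000\n"
                 "violation of PRIMARY or UNIQUE KEY constraint")
    assert outputs_match(old_style, new_style)

Whether a suite filters such lines at compare time or simply stores updated expected output, the goal is the same: a change in how errors are reported should not show up as a test regression.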
From: Mark O'D. <mar...@gm...> - 2008-12-20 00:53:36
Hi Philippe

On 16/12/08 02:20, Philippe Makowski wrote:
> Hi all,
>
> Today I committed a new version of the tests to avoid false FAILs related to the change
> from SQLCODE to SQLSTATE.
>
> Results are much cleaner now for 2.5. There is still some cleaning to do, but for now I
> prefer to leave the remaining ones as they are, at least so as not to hide changes that
> came with the new ODS, new functions, etc.
>
> With the new version there are still 41 FAILs out of 625 tests under Windows, but that's
> really better than before the cleaning ;) At least the results can be used now without
> too much noise from false FAILs.

Just for comparison, I have updated my copy to the latest svn revision, and running the two sets separately on Linux I get:

functional:
--- STATISTICS ----------------------------------------
490 tests total
  1 (  0%) tests ERROR
 51 ( 10%) tests FAIL
438 ( 89%) tests PASS

For the bugs subset:
--- STATISTICS ----------------------------------------
139 tests total
 12 (  9%) tests ERROR
 21 ( 15%) tests FAIL
106 ( 76%) tests PASS

I have yet to look into the FAILs, but the two I did look at seemed to be cases where the result comparison set needed to be updated. I will also try to run the older TCS suite.

Cheers

- Mark
From: Philippe M. <mak...@fi...> - 2008-12-20 09:36:18
Mark O'Donohue [08-12-20 01.53]:
> Just for comparison, I have updated my copy to the latest svn revision, and running the
> two sets separately on Linux I get:

SuperServer 32 bit?

> functional:
> --- STATISTICS ----------------------------------------
> 490 tests total
>   1 (  0%) tests ERROR
>  51 ( 10%) tests FAIL
> 438 ( 89%) tests PASS

I get this:

--- STATISTICS ---------------------------------------------------------------
490 tests total
 47 ( 10%) tests FAIL
443 ( 90%) tests PASS

> For the bugs subset:
> --- STATISTICS ----------------------------------------
> 139 tests total
>  12 (  9%) tests ERROR
>  21 ( 15%) tests FAIL
> 106 ( 76%) tests PASS

I get this:

--- STATISTICS ---------------------------------------------------------------
139 tests total
 15 ( 11%) tests FAIL
124 ( 89%) tests PASS

You should not have any ERRORs; there is something wrong with your setup.

> I have yet to look into the FAILs, but the two I did look at seemed to be cases where the
> result comparison set needed to be updated.

Yes, all of that has to be checked carefully. There are still false FAILs (the ODS change, for example), but as I said, the Alpha stage is not a good one for that.

And now I have some work to do with 2.1.2 (58 fixed bugs to check).
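A side note on the ERROR/FAIL distinction Philippe is relying on here: in harnesses of this kind, ERROR normally means the test could not be executed at all (an environment or setup problem), while FAIL means it ran but produced unexpected output. The sketch below is a minimal illustration in plain Python, not the QA suite's actual logic; run_test and expected_output are hypothetical names.

    def classify(run_test, expected_output):
        """Classify one test run as PASS, FAIL or ERROR.

        ERROR: the test could not be executed (setup/environment problem).
        FAIL : the test ran, but its output did not match the expected output.
        PASS : the test ran and the output matched.
        """
        try:
            actual = run_test()          # hypothetical callable running one test
        except Exception:
            return "ERROR"               # nothing to compare -- likely a setup issue
        return "PASS" if actual == expected_output else "FAIL"

That is why Philippe points at the setup rather than the server: an ERROR usually means the harness never got a result to compare.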
From: Mark O'D. <mar...@gm...> - 2008-12-21 09:04:13
Hi Philippe

On 20/12/08 20:36, Philippe Makowski wrote:
> SuperServer 32 bit?

That was Classic Server 32 bit.

> You should not have any ERRORs; there is something wrong with your setup.

Yes, I will look at these.

> Yes, all of that has to be checked carefully. There are still false FAILs (the ODS change,
> for example), but as I said, the Alpha stage is not a good one for that.

Yes, I understand. My usual experience is running these all the time, with bug fixes, and having the developers closely involved in writing the test scripts.

> And now I have some work to do with 2.1.2 (58 fixed bugs to check).

OK, I will post what I find as I look through the FAILs, and presumably that will help update the expected results where needed.

Thanks for the help!

Cheers

- Mark