From: Sean D. <se...@da...> - 2004-05-21 20:34:46
On Thu, May 20, 2004 at 05:00:43PM -0500, Carl McAdams wrote:
>
> I agree with Sean on the NA return. As Sean mentions, some conformance
> tests expect errors to be properly returned.
>
> In a failing test case, it is typical for the return codes to be
> interpreted and passed to the output as a string. When this doesn't occur,
> the test may have been terminated early, e.g. by a segmentation fault.

This change is now checked into hpitest. An exit code of 77 from the test
results in the NA state being set for that test.

	-Sean

> Carl McAdams
>
> On 05/20/2004 12:53 PM, Sean Dague <se...@da...> wrote (sent by
> ope...@li...urceforge.net; please respond to openhpi-devel)
> To: ope...@li...
> Subject: Re: [Openhpi-devel] Third return code for conformance tests
>
> On Thu, May 20, 2004 at 11:00:42AM -0400, David Judkovics wrote:
> > Good idea.
> >
> > Would it make sense to expand this further?
> >
> > Do the current tests capture the return codes for the library calls?
> > Should the test try to translate the HPI error codes and return those
> > as well?
>
> I don't think that makes sense. It is the responsibility of a conformance
> test to understand what a valid return is at any point. Most tests will be
> PASS or FAIL. Otherwise, logic has to be put into the test framework to
> know what the right exit code was, which is the wrong place to put it.
>
> However, look at the hotswap tests that currently exist. If the plugin
> being used doesn't support managed hotswap, all the tests fail. We could
> make them PASS if the Capability wasn't there, but that doesn't really
> tell the full story.
>
> Another instance: you have a negative test for saHpiSensorReadingGet. You
> want to run it on a Resource that *does not* have SENSOR capability to
> ensure that you get SA_ERR_HPI_INVALID_CMD returned, which is a PASS; any
> other error is a FAIL. However, if you can't find a Resource without
> sensors, then the test is meaningless, and should return NA (or something).
>
> If you look at the automake test documentation, it states the following:
>
> "The number of failures will be printed at the end of the run. If a given
> test program exits with a status of 77, then its result is ignored in the
> final count. This feature allows non-portable tests to be ignored in
> environments where they don't make sense."
>
> I think we should also use that standard of exiting 77 if the test is NA,
> and adjust the test infrastructure accordingly.
>
> -Sean
>
> --
> __________________________________________________________________
>
> Sean Dague                Mid-Hudson Valley
> sean at dague dot net     Linux Users Group
> http://dague.net          http://mhvlug.org
>
> There is no silver bullet. Plus, werewolves make better neighbors
> than zombies, and they tend to keep the vampire population down.
> __________________________________________________________________
>
> #### C.DTF has been removed from this note on May 20, 2004 by Carl McAdams
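As a concrete illustration of the convention above, here is a minimal
sketch of a test following the 0/1/77 exit-code rule, in the spirit of
the saHpiSensorReadingGet example quoted earlier. The precondition
helper is a hypothetical placeholder (a real hpitest case would walk
the RPT instead); only the exit-code convention itself comes from this
thread.

/*
 * Three-way exit-code convention for conformance tests:
 *   0 = PASS, 1 = FAIL, 77 = NA.
 * Automake's test harness ignores any test that exits with status 77,
 * so NA results do not count against the final tally.
 */
#include <stdio.h>

#define TEST_PASS 0
#define TEST_FAIL 1
#define TEST_NA   77  /* automake treats exit 77 as "ignored" */

/* Hypothetical placeholder: a real test would scan the RPT for a
 * Resource *without* SENSOR capability. */
static int find_resource_without_sensors(void)
{
        return 0;  /* pretend no such Resource was found */
}

int main(void)
{
        if (!find_resource_without_sensors()) {
                fprintf(stderr, "no Resource without sensors; NA\n");
                return TEST_NA;
        }

        /* ... call saHpiSensorReadingGet() on that Resource and
         * verify that SA_ERR_HPI_INVALID_CMD comes back ... */

        return TEST_PASS;
}

Listed in an automake TESTS variable, such a test is reported as
ignored rather than failed whenever it exits 77.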
--
__________________________________________________________________

Sean Dague                Mid-Hudson Valley
sean at dague dot net     Linux Users Group
http://dague.net          http://mhvlug.org

There is no silver bullet. Plus, werewolves make better neighbors
than zombies, and they tend to keep the vampire population down.
__________________________________________________________________