Support for Test Runs?

  • Julius Gawlas

    Julius Gawlas - 2012-11-23

    Let's say we have a TestCatalog with several TestCases. Then we create a TestPlan that includes some or all of these TestCases. This is a recipe for what we want tested.

    Now a tester will execute such a TestPlan by performing manual or automated testing. Each attempt at executing a TestCase is considered a TestRun. A TestRun can fail for various reasons (hardware, operator error, a real bug found, etc.). Often the tester will make several such TestRuns before considering the test case failed or successful. Each TestRun can have its own properties and artifacts (who ran it, logs, documents, dumps, etc.). In the end it is very valuable to keep track of all these runs.
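    For illustration, the model described above might be sketched like this (all class and field names are mine, purely hypothetical, not the plugin's actual data model):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    name: str

@dataclass
class TestRun:
    """One execution attempt of a test case within a plan."""
    case: TestCase
    outcome: str                      # e.g. "failed", "passed"
    reason: Optional[str] = None      # hardware, operator error, real bug, ...
    operator: Optional[str] = None    # who ran it
    artifacts: List[str] = field(default_factory=list)  # logs, dumps, documents

@dataclass
class TestPlan:
    cases: List[TestCase]
    runs: List[TestRun] = field(default_factory=list)

    def record_run(self, run: TestRun) -> None:
        """Keep every attempt, not just the latest outcome."""
        self.runs.append(run)

    def history(self, case: TestCase) -> List[TestRun]:
        """All recorded attempts for one test case, in order."""
        return [r for r in self.runs if r.case == case]
```

    The point of the sketch is that every attempt is stored, so repeated failures before an eventual success all stay visible in the history.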

    How would you advise modeling such behavior in TestManager?


  • Roberto Longobardi

    Hi Julius,
    sorry for the late answer.

    As you are already doing, different runs of the same test case (actually of the same test catalog or sub-catalog) are usually modeled as Test Plans in the plugin.

    Since these are new features, I just want to recap how things work with test plans.

    You may already have discovered this, but you can define a new test plan from a test catalog page, and you have several options there.

    As far as which test cases are part of the test plan (i.e. test run), you have two options:
    1) It will contain all the test cases currently in the catalog, plus any test cases added to the catalog in the future. This also means that any test case removed from the catalog will no longer appear in the test plan (and its execution outcome will be deleted as well).
    2) You can select individual test cases to include in the plan. Changes to the catalog - test case additions and deletions - will not be reflected in the plan. If you choose this option, you can still add test cases to the plan later on.

    As far as the version of the test case descriptions that will be included in the plan, you also have two options:
    1) The plan will always point to the latest test case description (i.e. wiki page) versions, so it reflects any changes made to them.
    2) The plan will "freeze" the current test case descriptions, so that you'll always be able to see what a test case description looked like at the time you added it to the plan (and ran it). You can still upgrade any test case in the plan to its latest version at any time.
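    To make the difference between the two versioning options concrete, here is a minimal sketch (my own simplification, not the plugin's implementation): a plan entry either follows the latest wiki version or pins the version that was current when it was added.

```python
# A catalog maps test case names to an ordered list of description versions.
catalog = {"TC_LOGIN": ["v1 text", "v2 text"]}

def plan_entry(name, freeze):
    """Option 1: pinned_version=None means 'always latest'.
    Option 2: freeze pins the version current at add time."""
    versions = catalog[name]
    return {"name": name,
            "pinned_version": len(versions) if freeze else None}

def description(entry):
    versions = catalog[entry["name"]]
    if entry["pinned_version"] is None:               # option 1: follow latest
        return versions[-1]
    return versions[entry["pinned_version"] - 1]      # option 2: frozen snapshot

def upgrade(entry):
    """Re-pin a frozen entry to the current latest version."""
    if entry["pinned_version"] is not None:
        entry["pinned_version"] = len(catalog[entry["name"]])
```

    With this sketch, editing the catalog's description leaves a frozen entry unchanged until you explicitly upgrade it, while a "latest" entry changes immediately.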

    So, it is common to create one test plan for each "collective test run" of your catalog(s).
    In a continuous integration scenario, for example, you may want to create one test plan for each build.

    Now to your question.

    You say a tester runs the same scenario several times, until it succeeds, and you wish to keep track of all the intermediate outcomes.

    Well, keeping track of changes to a test case's status is actually already possible within the test case page. Every time you change the test status, the change is recorded in the plan, and you can examine the status change history in the corresponding section at the bottom of the test case page.

    Since it is a wiki page, you can also attach log files, screenshots and other artifacts to the test case.

    Anyway, there are two problems here:

    1) You cannot record multiple successive occurrences of the same test case status. For example, if a test case fails several times in a row, you cannot record each individual failure.

    2) Anything you attach to a test case page while looking at a specific plan is actually attached to the test case page itself, so you'll also find it when looking at the test case in the catalog or in any other plan.
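    The first limitation is what you'd expect from a history that only records transitions. A simplified sketch of that behavior (my own illustration, not the plugin's code):

```python
def record_status(history, status, author):
    """Append an entry only when the status actually changes,
    as a change-on-transition history does."""
    if not history or history[-1]["status"] != status:
        history.append({"status": status, "author": author})
        return True
    return False  # a repeated identical status leaves no trace

history = []
record_status(history, "FAILED", "julius")      # recorded
record_status(history, "FAILED", "julius")      # dropped: same status again
record_status(history, "SUCCESSFUL", "julius")  # recorded
```

    Three attempts, but only two history entries survive: the second consecutive failure is indistinguishable from the first.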

    I don't have a ready solution for these two problems at the moment. The second one is especially hard to fix.

    For the first one, though, I may implement an enhancement to let you record successive occurrences of the same test case status.
    I've opened enhancement ticket 10683 on Trac-Hacks for this.

    For the second, you can certainly attach logs and screenshots to the tickets you open as a consequence of the repeatedly failing test case.

    BTW, any help is greatly appreciated and welcome, so if you feel the plugin can be improved in any way, let's discuss it, and don't hesitate to provide patches.


