
Proposed design criteria for a Turbomachinery Test Harness

2022-03-19
2022-07-11
  • Martin Beaudoin

    Martin Beaudoin - 2022-03-19

    What do we mean by a Test Harness:

    A Test Harness is a suite of scripts, test cases, and interfaces that a user or developer of OpenFOAM/foam-extend can use to check the validity of selected computations or simulations against a given state of the source code.

    A dedicated Turbomachinery Test Harness will specialize in testing Turbomachinery test cases, solvers, interfaces, boundary conditions, etc.

    Do we need to start from scratch?

    The foam-extend platform already provides a generic Test Harness based on Kitware CTest/CDash.

    This test harness basically:

    • Compiles a given release of the foam-extend source code and reports compilation issues
    • Runs the full suite of foam-extend tutorials for a single time step and reports runtime issues
    • Publishes the compilation and run results over a public Web service hosted on SourceForge.Net

    We can see an example of a foam-extend Test Harness run right here:

    Over the years, many dashboards were created for various versions of foam-extend. You can browse them all here:

    Unless a better technological solution than Kitware CTest/CDash is suggested, it is proposed that the Turbomachinery Test Harness be based on that platform. This is obviously a topic open for discussion.

    List of specifications:

    Based on the past experience of developing and using the foam-extend Test Harness, here is a list of specifications I would like to propose for the Turbomachinery Test Harness:

    • Developed as a standalone application, under its own Git repository hosted on the TurboWG project
    • Compatible with various versions and flavours of the foam-extend/OpenFOAM platforms
    • Based on the latest versions of CTest/CDash + a dedicated suite of scripts and applications
    • Fully automated so it can run at regular intervals from dedicated test servers
    • Needs to run daily, weekly or automagically when updates to the source code are published
    • The CDash service should be hosted on the TurboWG dedicated but public Web space
    • Needs to detect and report on compilation issues
    • Needs to detect and report on basic run-time issues
    • Needs to detect and report issues for both sequential and parallel runs
    • Needs to compare solutions from sequential and parallel runs for a given test case
    • Needs to detect and report on solutions or results discrepancies from a set of reference results (more on this later)
    • Needs to run the various tests to completion, not just for 1 or 2 timesteps
    • Needs to be flexible enough so anyone can easily add some new tests
    • Needs to run on both Windows and Linux/Unix (??) (Will run on Linux first, for sure...)
    • Needs to report basic but useful information about the host running the test to help remote debugging and support (compiler version, Third-Party software versions (MPI, metis, etc), operating system flavour and version, etc)
    • Exhaustive design and user documentation published as up-to-date companion documents to the Test Harness source code (Git + LaTeX)
    • Introductory documentation published as a Technical Note for the OpenFOAM Journal
    • Am I forgetting something?...
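    As a concrete illustration of the host-reporting item above, here is a minimal Python sketch that gathers basic host information for the dashboard report. The exact fields collected, and the choice of Python over bash or CTest built-ins, are assumptions for illustration only, not part of the proposal:

```python
import platform
import shutil

def collect_host_info():
    """Gather basic host details to attach to a dashboard submission.

    The field set here is a placeholder; the spec above only asks for
    compiler version, third-party versions (MPI, metis, etc.), and the
    operating system flavour and version.
    """
    return {
        "os": platform.system(),
        "os_release": platform.release(),
        "machine": platform.machine(),
        "python": platform.python_version(),
        # Paths of tools the harness depends on, if found on PATH
        # (None when the tool is missing, which is itself useful to report).
        "mpirun": shutil.which("mpirun"),
        "cmake": shutil.which("cmake"),
    }

if __name__ == "__main__":
    for key, value in collect_host_info().items():
        print(f"{key}: {value}")
```

    Reporting the `None` entries as well would let a remote maintainer spot a missing MPI or CMake installation at a glance.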

    Reference or Golden Simulation Results

    In order to detect simulation errors or discrepancies at the solution level once a given test case has run to completion, we will need to provide some kind of Reference or "Golden" results for the Test Harness to compare against.

    Those Golden results will obviously vary from test case to test case, so a generic mechanism will need to be put in place so that the test harness does not have to deal with the specifics of validating a given test case.

    Traditionally, most of the test cases we exchange or publish have Allrun and Allclean scripts to handle the specifics of running the case.
    I propose to add an optional AllCheck script to any test case handled by the Test Harness. If this script is present, the Test Harness will run it and report the result as yet another Pass/Fail flag.
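    To make the AllCheck idea concrete, here is a minimal Python sketch of what such a script could check. The probed quantity, the golden value, and the tolerance are all hypothetical placeholders; a real AllCheck would read them from the case's postProcessing output and its reference data:

```python
# Hypothetical golden value and tolerance for this test case; a real
# AllCheck script would read these from reference data shipped with the case.
GOLDEN_PRESSURE = 101325.0
REL_TOLERANCE = 1e-3

def check_probe(value, golden=GOLDEN_PRESSURE, rel_tol=REL_TOLERANCE):
    """Return True if a probed value matches the golden one within a relative tolerance."""
    return abs(value - golden) <= rel_tol * abs(golden)

def allcheck(probed_value):
    """Report Pass/Fail and return the script's exit code (0 = Pass)."""
    if check_probe(probed_value):
        print("AllCheck: PASS")
        return 0
    print("AllCheck: FAIL")
    return 1

# Example: a probed value within 0.1 % of the golden one passes.
exit_code = allcheck(101350.0)
```

    The Test Harness would only look at the exit code (e.g. `sys.exit(allcheck(...))` in a real script), which maps naturally onto the CTest Pass/Fail model.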

    The kind of useful checks one could implement in this script could be (please add your favourite suggestion here):

    • Check the overall number of iterations needed to reach completion.
    • Check for flux imbalance across a given interface
    • Report the value of a given solution at prescribed locations (pressure, velocity magnitude, etc) and compare to a Golden solution pre-computed and stored somewhere publicly accessible.
    • Compare against some experimental results stored somewhere publicly accessible
    • Compute and report about some statistical analysis of a given field or solution
    • Compute and compare with the result from an oracle or AI-based expert agent
    • Compute and compare against an analytic solution if available
    • Evaluate the duration of a parallel run (obviously site specific)
    • Compare the sequential results with the results obtained from a parallel run
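    For the sequential-vs-parallel comparison above, one simple metric is the relative L2 norm of the field difference. The sketch below is an illustration only, with placeholder data standing in for values sampled from the two runs; the sampling mechanism and the tolerance are assumptions:

```python
import math

def rel_l2_diff(field_a, field_b):
    """Relative L2 norm of the difference between two sampled fields."""
    if len(field_a) != len(field_b):
        raise ValueError("fields must be sampled at the same points")
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(field_a, field_b)))
    ref = math.sqrt(sum(a ** 2 for a in field_a))
    return diff / ref if ref > 0.0 else diff

def fields_agree(field_a, field_b, tol=1e-6):
    """Pass/fail flag for the sequential-vs-parallel comparison."""
    return rel_l2_diff(field_a, field_b) <= tol

# Placeholder data standing in for values sampled from the two runs.
seq = [1.0, 2.0, 3.0]
par = [1.0, 2.0, 3.0]
print(fields_agree(seq, par))  # identical fields pass
```

    The same function could serve the Golden-solution comparison, with the reference field loaded from publicly stored data instead of a second run.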

    The Golden solutions based on full-field computations will sometimes require correspondingly large storage space. The Files section of the TurboWG site on SourceForge.Net could be used to store those large files.

    The Golden solutions should not necessarily be frozen for all time. As the quality of the solvers evolves from release to release, those Golden solutions may need to be adjusted as well. So we will need some kind of mechanism to tag the various versions of a Golden solution, so that older versions of solvers or applications can still access a useful Reference solution, which might not necessarily be the latest one available.
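    A minimal sketch of such a tagging mechanism, assuming a simple lookup table mapping releases to golden-solution tags (the release names, tag names, and fallback behaviour are all hypothetical, since the actual scheme is still to be decided):

```python
# Hypothetical registry mapping solver releases to golden-solution tags.
GOLDEN_TAGS = {
    "foam-extend-4.0": "golden-v1",
    "foam-extend-4.1": "golden-v1",
    "foam-extend-5.0": "golden-v2",
}

def golden_tag_for(release, registry=GOLDEN_TAGS, fallback="golden-v1"):
    """Resolve which golden-solution tag a given release should compare against.

    Unknown releases fall back to a default tag so the harness can still
    run, rather than failing to find any reference at all.
    """
    return registry.get(release, fallback)

print(golden_tag_for("foam-extend-4.1"))  # -> golden-v1
print(golden_tag_for("foam-extend-5.0"))  # -> golden-v2
```

    In practice the registry could simply be a small file stored next to the Golden data, with the tags pointing at versioned directories in the Files section.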

    We will probably need to develop some specific applications for doing those kinds of comparisons. Here, some already available tools like PyFOAM or swak4Foam could be put to good use. Otherwise, developing dedicated C++ applications for the test harness could make sense as well.

     

    Last edit: Martin Beaudoin 2022-05-05
  • Håkan Nilsson

    Håkan Nilsson - 2022-03-19

    I think it would be great to continue in this direction, since you have already done quite a lot and already have experience with this. For the TurboWG test harness, I think it is better to limit the tests to those that the group is interested in. Otherwise problems related to turbomachinery may drown in problems that are not related to turbomachinery. We need to figure out how to check that simulations not only run, but also give reasonable results. For instance, warnings if GGI flux errors are larger than some chosen value. We probably also need to create some automatic post-processing for visual/manual comparison with reference solutions, unless we can find a way they can be automatically compared. I think you have already covered this in your description.

    I would like to have some variants of the mixer case with/without inlet/outlet in this suite, as we are presently testing. I would also like to have all the axialTurbine cases. All of these for both foam-extend and the ESI version. Those are very fast and very coarse cases, which should detect some problems. Every now and then we can also run the larger validation cases.

     
  • Martin Beaudoin

    Martin Beaudoin - 2022-03-19

    Complete list of useful test cases for the Turbomachinery Test Harness

    Let's build a list of all the test cases we want to monitor through the Test Harness

    Cases | Description | Comments
    ----- | ----------- | --------
    Mixer Cases | | Need full list
    axialTurbine | SRF, MRF, DyM | (let's expand that list)
    ERCOFTAC Conical Diffuser | | A set of very nice examples for automatically testing against Golden results
    ERCOFTAC Centrifugal Pump | |
    Single Channel Pump | |
    Timisoara Swirl Generator | |
    Francis-99 | | A great example of a large, complex, industrial-grade mesh
    Dellenback Abrupt Expansion | |
    Synthetic test cases specifically for testing AMI/GGI/mixingPlane | | Testing scalability and performance of algorithms
     

    Last edit: Martin Beaudoin 2022-03-19
  • Greg Burgreen

    Greg Burgreen - 2022-03-19

    Combining just (https://github.com/casey/just) with CTest/CDash might be a powerful combination.

     
    • Martin Beaudoin

      Martin Beaudoin - 2022-03-20

      This is interesting, I will look it up.
      Have you been using 'just' yourself for your work?

      A consideration to keep in mind is that this would be adding a dependency on yet another external "ThirdParty" package to the OpenFOAM ecosystem if people want to use the TurboWG Test Harness. I've never been too shy about this while collaborating on the foam-extend project. But adding a new package has to be well worth it when the alternative is still to write traditional Makefiles or bash scripts.

      Thanks for pointing this out. I will certainly check it out.

       
      • Martin Beaudoin

        Martin Beaudoin - 2022-03-20

        I've been poking around the 'just' package on github.

        • A pre-compiled version of this package is not available for my workstation's flavour of Linux: Fedora 33. (I should upgrade to Fedora 35, I know...). So I need to compile it.
        • This package is written using the Rust programming language.
        • So one has to install Rust on their workstation or cluster, which is yet another dependency to take into account.
        • Compiling a Rust package also requires adding some additional dependencies like the Cargo package manager, etc.
        • Taking care of all those dependencies is easy for developers or advanced users who are admins on their own workstation.
        • For regular users, or supercomputer/cluster users with limited admin powers, this is not an insurmountable hurdle, but they will probably need to talk to their friendly sysadmin.
        • As the Rust language will apparently make its way into the Linux kernel source code, those dependencies might become pre-installed de facto on most Linux systems, but I don't think we are there yet.

        So I like the idea of adding a powerful command runner like 'just', but for the time being I might look at other solutions for the Test Harness.

        After all, we may already have all the necessary and powerful tools to do a pretty decent job (PyFOAM, CMake, CTest, python, bash). But this is certainly debatable, so I am eager to see other comments about this.

         

        Last edit: Martin Beaudoin 2022-03-20
  • Martin Beaudoin

    Martin Beaudoin - 2022-03-20

    I wonder if there is not something similar to the 'just' package, but written in python...??

     
  • Greg Burgreen

    Greg Burgreen - 2022-03-20

    I have not worked with just. I came across it a few months ago, and it seemed to have a clean syntax and to be capable.

    Another Python-based runner is fireworks (https://materialsproject.github.io/fireworks/). I did work with this package several years ago; I chose it over Luigi.

    The clearest first step may be to define one simple test case and build out a simple test harness that does one simple thing using bash scripts, and then start building out increasing complexity and capability. This would provide a basic capability that could be replicated and compared using other approaches, with a view to selecting the optimal one. Otherwise, the test harness ideas remain a bit abstract (at least, to me). Maybe foam-extend already has such examples.

     
    • Martin Beaudoin

      Martin Beaudoin - 2022-03-21

      Maybe foam-extend already has such examples already.

      It certainly does. I wrote it myself using Kitware CMake/CTest/CDash and contributed it to the foam-extend project years ago. See the testHarness directory located in the root directory of any foam-extend installation.

      All the stuff I have written above (Specifications, etc) is based on this experiment and on ideas on how to improve it.

      And you are right, the objective is to start with a basic setup and build on top of it. The foam-extend Test Harness will be this first iteration. This whole discussion will help find ways to improve the design.

       
  • Greg Burgreen

    Greg Burgreen - 2022-03-20

    Uber simple Python runner: https://github.com/mikeevmm/sane

     
  • Greg Burgreen

    Greg Burgreen - 2022-03-21

    Excellent, Martin.

     
  • Martin Beaudoin

    Martin Beaudoin - 2022-05-05

    The first version of the TurboWG Test Harness was contributed under the new TestHarness Git repository.

    Currently, only the tutorials are instrumented, but you can run the test harness with either the foam-extend or the ESI OpenFOAM version.

    The other TurboWG test cases will be added to this test harness as soon as possible.

    See the README.txt file for more information.

     
  • Martin Beaudoin

    Martin Beaudoin - 2022-07-11

    Link to the current CDash service used for the TurboWG test harness:

    https://my.cdash.org/viewSubProjects.php?project=TurboWG

     
