
V_GetCmp() call in Parallel code

madhu
2009-06-17
2013-04-25
  • madhu

    madhu - 2009-06-17

    Hi,

    First, I thank the developer(s), x-flow, of the code!
    It is very useful for young researchers like me who are entering the area of CFD.

    I am reading the code now. I can see the V_GetCmp() function call in both the serial and parallel versions. This function is defined in LASPACK, which supports only single-processor operation.

    My questions are: (1) Why should we use this function call in the parallel version when PETSc, a powerful package, provides such options?

    (2) Can we not avoid using V_GetCmp() in the serial code? I also find very little explanation of this function call in the LASPACK reference manual. In what way does it help us?

    Could you please explain the above?

    Thanks.

    Madhu

     
    • x-flow

      x-flow - 2009-06-17

      Both codes use the same implementation so that they have a common interface. Of course, you can get and set the values of a vector in LASPACK without using the V_GetCmp() and V_SetCmp() functions.

      In serial code, you can simply use x.Cmp[i] to get and set values.

      In the parallel code, the V_GetCmp function just calls the corresponding PETSc equivalent.

      To sum up, I think the code is more readable this way.
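
      Roughly, the idea is something like the sketch below. This is not the actual OpenFVM source, just an illustration of the common-interface point; the wrapper name GetVectorValue and the SERIAL macro are made up, and only the LASPACK and PETSc calls themselves are real library functions.

      ```c
      /* Sketch of a common get interface: serial builds map onto LASPACK,
       * parallel builds onto the PETSc equivalent. Illustrative only. */
      #ifdef SERIAL
      #include "laspack/vector.h"      /* LASPACK Vector with its Cmp array */

      double GetVectorValue(Vector *x, size_t i)
      {
          /* V_GetCmp(x, i) returns the same component, with an extra check */
          return x->Cmp[i];
      }
      #else
      #include <petscvec.h>

      double GetVectorValue(Vec x, PetscInt i)
      {
          PetscScalar v;

          /* fetch one locally owned component of a distributed PETSc Vec */
          VecGetValues(x, 1, &i, &v);
          return (double) v;  /* assumes PETSc built with real scalars */
      }
      #endif
      ```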

      I don't know if this answered your question.

      Regards,

      x-flow

       
    • madhu

      madhu - 2009-06-18

      Thank you very much for your reply.

      My point is that the function V_GetCmp() is a LASPACK function, and LASPACK is for single-processor calculations.

      PETSc is for parallel processing and must have a function equivalent to LASPACK's V_GetCmp().
      If we use such a function, we can eliminate the use of LASPACK completely in the parallel version. This keeps the code clean, without too many libraries and their function calls.

      Overall, what I want to say is that for the parallel version it is better to use only PETSc for all calculations and function calls, and to avoid using LASPACK.

      I hope you see what I am trying to say.

      Once again, many thanks for your efforts in developing the code. I am learning a lot through OpenFVM. I hope this forum grows bigger and bigger.

      Regards,
      Madhu

       
      • x-flow

        x-flow - 2009-06-18

        I agree with you.

        The code could be refactored: the LASPACK V_GetCmp and V_SetCmp calls could be removed completely, because they perform a check every time they are called, so accessing x.Cmp[i] directly is faster.
        Equivalent PETSc functions could be used. Another optimization in parallel would be to group vectors such as hu, hv and hw for parallel communication, as in the sketch below.
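
        One possible way to do that grouping (just a sketch, not OpenFVM code: the cell count and values are placeholders, the variable names come from this post, and it assumes the three fields share the same cell layout and PETSc 3.x calling conventions) is to store hu, hv and hw interleaved in a single blocked PETSc vector, so that one assembly covers all three components:

        ```c
        /* Sketch: pack hu, hv, hw into one PETSc Vec with block size 3,
         * so a single assembly (one round of communication) handles all three. */
        #include <petscvec.h>

        int main(int argc, char **argv)
        {
            Vec         huvw;
            PetscInt    ncells = 8;   /* local cells per process (placeholder) */
            PetscInt    c, rstart, bstart;
            PetscScalar vals[3];

            PetscInitialize(&argc, &argv, NULL, NULL);

            VecCreate(PETSC_COMM_WORLD, &huvw);
            VecSetSizes(huvw, 3 * ncells, PETSC_DECIDE);
            VecSetBlockSize(huvw, 3);
            VecSetFromOptions(huvw);

            VecGetOwnershipRange(huvw, &rstart, NULL);
            bstart = rstart / 3;              /* first global cell owned here */

            for (c = 0; c < ncells; c++) {
                PetscInt cell = bstart + c;   /* global cell (block) index */
                vals[0] = 1.0;                /* hu for this cell (placeholder) */
                vals[1] = 2.0;                /* hv */
                vals[2] = 3.0;                /* hw */
                /* one blocked insert instead of three separate VecSetValues calls */
                VecSetValuesBlocked(huvw, 1, &cell, vals, INSERT_VALUES);
            }

            /* one communication/assembly step instead of three */
            VecAssemblyBegin(huvw);
            VecAssemblyEnd(huvw);

            VecDestroy(&huvw);
            PetscFinalize();
            return 0;
        }
        ```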

        In the meantime, if somebody wants to do these alterations...

        Regards,

        x-flow.

         
    • sdcpune

      sdcpune - 2009-08-22

      Hi Billy,

      How do I get parallel processing working? Do we need a machine with multiple cores (processors), or can it also work with multiple machines connected over a network?
      Also, I assume the Linux binary provided has parallel processing implemented (and _NOT_ the Windows version) - please confirm?

      ====
      Thanks for OpenFVM - it looks very simple to get started with, and the examples work well. The most interesting part was that Gmsh was chosen to supplement the solver with pre- and post-processing. For pre-processing it is no doubt a great tool, with geometry creation and much more. Initially I thought it would be difficult to get going with post-processing (I am used to ParaView), but Gmsh seems a good choice, so OpenFVM needs nothing other than Gmsh to start getting your hands dirty with CFD.

       
      • x-flow

        x-flow - 2009-08-22

        The parallel version is implemented only on Linux. I have used MPICH, LAM/MPI and OpenMPI.
        OpenFVM uses MPI, so it can work with multicore processors or on a cluster of machines connected over a network. To get it working you have to install PETSc along with your favorite MPI implementation. Once installed, make sure PETSc is set up correctly. Build the parallel version, then run:

        mpirun -np 1 ../OpenFVM lid d 2

        to create two domains for 2 processors. Then type:

        mpirun -np 2 ../OpenFVM lid f 2

        to run the solver on those 2 processors.
        Hope this helps,

        Billy.

         
    • sdcpune

      sdcpune - 2009-08-24

      Thank you very much. I am getting more and more interested in parallel processing (I did not know that all these tools already existed).

      Sorry, without taking too much of your time, could you quickly comment on the following:
      Let's say we have an existing solver written in (cross-platform) C++ with a hex mesh; what would be the effort to make it compatible with MPI (to take advantage of parallel processing)?

      I will take a deeper look into the code to understand the design better. Thanks

       
    • x-flow

      x-flow - 2009-08-25

      It depends on the structure of the code.

      Also, if you use a library like PETSc it should be easy to do. On the other hand, if you use MPI directly it will be more time consuming.
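
      To give a feel for the difference, here is a minimal generic sketch of the PETSc route (not OpenFVM code; the 1D system is just a placeholder, and it assumes recent PETSc 3.x calling conventions). You assemble a distributed matrix and vectors and call KSPSolve; PETSc does the MPI communication for you, so the same source runs with mpirun on one or many processes.

      ```c
      /* Sketch: solve a toy tridiagonal system in parallel with PETSc.
       * No explicit MPI calls appear anywhere in the user code. */
      #include <petscksp.h>

      int main(int argc, char **argv)
      {
          Mat      A;
          Vec      x, b;
          KSP      ksp;
          PetscInt i, rstart, rend, n = 100;

          PetscInitialize(&argc, &argv, NULL, NULL);

          /* distributed matrix: each process owns a contiguous block of rows */
          MatCreate(PETSC_COMM_WORLD, &A);
          MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
          MatSetFromOptions(A);
          MatSetUp(A);

          MatGetOwnershipRange(A, &rstart, &rend);
          for (i = rstart; i < rend; i++) {
              MatSetValue(A, i, i, 2.0, INSERT_VALUES);
              if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
              if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
          }
          MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
          MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

          /* vectors with the same parallel layout as the matrix */
          MatCreateVecs(A, &x, &b);
          VecSet(b, 1.0);

          /* Krylov solver; method and preconditioner come from the command line */
          KSPCreate(PETSC_COMM_WORLD, &ksp);
          KSPSetOperators(ksp, A, A);
          KSPSetFromOptions(ksp);
          KSPSolve(ksp, b, x);

          KSPDestroy(&ksp);
          VecDestroy(&x);
          VecDestroy(&b);
          MatDestroy(&A);
          PetscFinalize();
          return 0;
      }
      ```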

       
    • sdcpune

      sdcpune - 2009-08-25

      Thanks for your reply. PETSc should be the way to go by default (assuming parallel computation always), and LASPACK for serial.
      The command-line argument "r" is for mesh renumbering, "d" for domain decomposition and "f" for the solver.

      Very quickly, can you confirm the benefits of running a decomposed case on a single-processor machine? We saw such a case with OpenFOAM, where the 4 processes were launched simultaneously (on our machine, which is single core and has no MPI, PETSc etc. installed).
      My question is: does it provide any gain in solver computation time?
      Of course this is somewhat of an OpenFOAM-related question and quite logical - we will try to find the answer ourselves, but any comments will be helpful in diving into this subject. Without parallel computing our life would seem to be paralyzed.

       
      • x-flow

        x-flow - 2009-08-26

        Hi,

        There is no benefit to running a decomposed case in serial. In fact, that option has been removed in the latest version, which removes the serial version's dependency on the Metis library and makes it easier to build.

         
