Hi all,
I have been struggling to run a radial turbine simulation case in parallel with foam-extend 3.0 and the density-based solver transonicMRFDyMFoam. The setup is as follows:
For the stator, the left and right patches are of type cyclicGgi, the top and bottom patches are walls, and the outlet patch is of type overlapGgi.
For the rotor, the left and right patches are of type cyclicGgi, the top and bottom patches are walls, and the inlet patch is of type overlapGgi, coupled with the stator outlet.
A sliding mesh is applied to the rotor, i.e. the rotor mesh rotates by a given angle at each time step, so the stator outlet and rotor inlet interface gradually slide apart until they are fully uncovered.
Due to the mesh size (even the coarse mesh has about 2 M cells), running on a single core is not practical: it would take me about 45 years to compute one revolution.
So I tried to run the case in parallel, and that is where the problems start.
At first I simply decomposed the computational domain into several parts (for example 2, 4, 6, 8) with the scotch or metis method. The case crashed immediately, so I suspected a bug. To reproduce it in isolation, I created a test mesh (Tmesh, attached to this ticket) consisting of only two 20-degree cylindrical blocks, representing the stator and the rotor respectively.
The patch names for the stator and rotor are Sin, Sout, Sleft, Sright, Stop, Sbot, Rin, Rout, Rleft, Rright, Rtop and Rbot, which are self-explanatory.
The patch types for the test case are: cyclicGgi (Sleft, Sright; Rleft, Rright) and overlapGgi (Sout, Rin). The remaining patches are walls plus the inlet and outlet patches.
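For reference, the GGI entries in constant/polyMesh/boundary of the test case look roughly like the following sketch. The face counts, zone names and angles are placeholders in the style of the foam-extend GGI examples; the exact files are in the attached case.

    Sout
    {
        type            overlapGgi;
        nFaces          400;            // placeholder
        startFace       11200;          // placeholder
        shadowPatch     Rin;            // coupled with the rotor inlet
        zone            SoutZone;
        rotationAxis    (0 0 1);
        nCopies         18;             // 360 / 20 degrees per passage
    }

    Sleft
    {
        type            cyclicGgi;
        nFaces          200;            // placeholder
        startFace       11600;          // placeholder
        shadowPatch     Sright;
        zone            SleftZone;
        bridgeOverlap   false;
        rotationAxis    (0 0 1);
        rotationAngle   20;             // sector angle of the block
        separationOffset (0 0 0);
    }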
Note also that in some of the following tests I switched the cyclicGgi patches to wall, to check whether the failure is caused by cyclicGgi, by overlapGgi, or by both.
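The decomposeParDict used across these tests is essentially the following (numberOfSubdomains was varied; FoamFile header omitted):

    // system/decomposeParDict (excerpt)
    numberOfSubdomains  2;          // also tested with 3, 4, 6, 8

    method              scotch;     // also tested with metis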
I have done the following tests:
-> 1 processor, both cyclicGgi and overlapGgi: Pass
-> 2 processors, both cyclicGgi and overlapGgi: Fail
The case only runs when there are no processor patches, i.e. when the rotor region belongs entirely to processor 0 and the stator region entirely to processor 1. As soon as decomposePar cuts the overlapGgi interface, it fails immediately.
I have also tried to find which line of the solver crashes.
It is the first call to h.correctBoundaryConditions() that stops the parallel run. After adding a number of debug flags to the foam-extend source code itself, I found that the final failure occurs in Pstream.
However, even when the case can be run on 2 processors in parallel, the information does not seem to be passed across the overlapGgi interface.
-> 3 processors, with scotch, both cyclicGgi and overlapGgi: Fail
At this point I found that if I supply the h field values from the 0 folder and change all gradientEnthalpy boundary conditions to zeroGradient, h.correctBoundaryConditions() passes, but the information is still not transferred across the overlapGgi interface (see the sketch below).
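A sketch of the 0/h file after that workaround; the internal value is a placeholder, and which wall patches carried gradientEnthalpy is only illustrative here (the GGI patches keep their coupled type):

    // 0/h (excerpt)
    internalField   uniform 300000;         // placeholder value

    boundaryField
    {
        Stop
        {
            type            zeroGradient;   // was gradientEnthalpy
        }
        Sbot
        {
            type            zeroGradient;   // was gradientEnthalpy
        }
        Sout
        {
            type            overlapGgi;     // coupled patches unchanged
        }
        // ... remaining patches analogous
    }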
I also found a bug report from the solver author himself --> #58. He suggested there are two situations that cannot be parallelised: first, when the communication type is blocking; second, when more than two processors share the overlapGgi interface.
I switched the communication type to nonBlocking (see below) and found that the solver then runs on 3 processors without supplying the h values from the 0 folder, but the information is still not passed across the overlapGgi interface, even when the interface is shared by only two processors.
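The communication type was changed in the installation's global controlDict, roughly like this:

    // $WM_PROJECT_DIR/etc/controlDict (excerpt)
    OptimisationSwitches
    {
        commsType       nonBlocking;    // was blocking
        // ... other switches left at their defaults
    }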
-> 3 processors, overlapGgi master and slave patches on one core; overlapGgi only, with the cyclicGgi interfaces changed to wall: Fail
At this point I found another useful link --> #280, where Hrv replied that cyclicGgi may fail with global face zones. I then tested without global face zones (i.e. removing the decomposeParDict entry shown below), and it still failed.
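For clarity, by "global face zones" I mean the globalFaceZones entry in decomposeParDict, which keeps the listed face zones on every processor. A sketch of the entry I removed for this test (zone names are placeholders matching the test-mesh patches):

    // system/decomposeParDict (excerpt) -- removed for this test
    globalFaceZones
    (
        SoutZone
        RinZone
        SleftZone
        SrightZone
        // ... one zone per GGI patch
    );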
-> 3 processors, the overlapGgi interface and the cyclicGgi interfaces each kept on a single core, like an "H": Fail
"H"type decomposition is just avoid the gloable face zone method. However, failled all.
-> 4 processors, with nonBlocking communication: Fail for all
-> Built the density-based solver on foam-extend 3.1 and foam-extend 3.2 and retested: Fail for all
At this point I am fairly sure this is a bug in foam-extend itself. Could you please suggest a solution? This bug really limits turbomachinery simulations.
Thanks!
Janry