Hello,
With your help I now have a 3D HDG code for Navier-Stokes running, I think. To verify the results I have implemented the flow-past-a-cylinder test case (see, for example, the 3D benchmark problem in Lehrenfeld and Schoeberl, CMAME 307 (2016)), and I would like to compute the lift and drag coefficients by evaluating an integral over the obstacle, given the computed velocity vector and pressure. I have tried a couple of things:
c_drag = Integrate ( drag_integrand, mesh, BND )
which gives me a Segmentation fault, or
c_drag = Integrate ( drag_integrand, mesh.Boundaries("object") )
which doesn't work either, because only the following argument types are accepted:
(cf: ngsolve.fem.CoefficientFunction, mesh: ngsolve.comp.Mesh, VOL_or_BND: ngsolve.comp.VorB=VorB.VOL, order: int=5, region_wise: bool=False, element_wise: bool=False)
I've tried a few other options, but all with similar error messages. Could someone point me to where I might find documentation or an example to compute an integral over an object?
thanks,
Sander
Actually I think the first one should work. Make sure you only use the components of your gridfunction that are defined on the elements. Otherwise you could use an inner product of your result function and a gridfunction set beforehand to (1,0) or (0,1) on your obstacle surface.
all the best,
Philip
Integrate(drag_integrand, mesh, BND) does not work since we iterate over boundary elements (i.e. 2D elements), but the drag_integrand (which involves a derivative) depends on the dofs of the volume element. One possibility for a direct evaluation is an interpolation of the derivative to an H1 finite element space (via the GridFunction.Set function).
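In code, that interpolation route could look something like the following sketch (assuming drag_integrand is a scalar CoefficientFunction built from volume quantities only, with any normal-vector factor applied separately, and that the obstacle boundary is named "object"):

Vpost   = H1(mesh, order=3)              # postprocessing space, order is a free choice
drag_h1 = GridFunction(Vpost)
drag_h1.Set(drag_integrand)              # interpolate the derivative expression via GridFunction.Set
vals    = Integrate(drag_h1, mesh, BND, region_wise=True)
c_drag  = vals[mesh.GetBoundaries().index("object")]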
But simpler and also more accurate is the 'functional technique' as Philip proposed:
\int_{\Gamma_{Dir}} \partial u/\partial n \, v = \langle f - A u, v \rangle \quad \forall v
Set v as the characteristic function on the boundary of interest (thanks to HDG this function is in the space), and evaluate the residual on the right-hand side.
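A rough sketch of this residual evaluation (X, a, f and gfu stand for the compound space, the assembled forms and the solution from the script; a is assumed to be assembled without static condensation, and the component index of the facet x-velocity is a placeholder you have to adapt):

vtest = GridFunction(X)
# characteristic test function: facet x-velocity = 1 on the obstacle, 0 elsewhere
vtest.components[2].Set(CoefficientFunction(1.0), definedon=mesh.Boundaries("object"))
res = gfu.vec.CreateVector()
res.data = f.vec - a.mat * gfu.vec            # discrete residual f - A u
drag_force = InnerProduct(res, vtest.vec)     # = f(v) - a(u,v), up to sign the drag force
# divide by the benchmark normalization (e.g. 2/(rho*Ubar**2*D) in 2D) to get the coefficient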
Joachim
Thank you again for the help. I don't yet have values comparable to the benchmark results, but in the process I found that I'm not correctly computing an integral on the Neumann boundary. I have an integral of the form
\int_{\Gamma_N} (uf0 \cdot n)(uf \cdot vf)
where uf and vf are trial and test functions that live only on element edges (HDG) and uf0 is the value of uf from the previous time step. I found an example on the NGSolve pages where in DG you could add this to the bilinear form using
a += SymbolicBFI( (udg0*n)*(udg*vdg), BND, skeleton=True, definedon=mesh.Boundaries("Neumann"))
but this won't work for HDG where uf0, uf and vf are defined in
Vbar = FacetFESpace(mesh, order=k, dirichlet="wall|object|inlet")
One thing that compiles and runs is the following:
Cbar = FacetFESpace(mesh, order=1, dirichlet="Neumann")
ee = GridFunction(Cbar)
ee.Set(CoefficientFunction(1.0), definedon=mesh.Boundaries("Neumann"))
a += SymbolicBFI(ee*(uf0*n)*(uf*vf), element_boundary=True)
although I think this is incorrect (the solution at the boundary doesn't behave as it should). Is there a better way to add an integral over a Neumann boundary to the bilinear form?
thanks,
Sander
Hi Sander,
with HDG it becomes simple: Let X be the product space L2 x Facet, and let
Then
or
is meaningful (in contrast to SymbolicBFI (u*v, BND) ).
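A minimal sketch of that construction (the space definitions follow Sander's earlier post; the trial/test tuples are my guess at the omitted snippets):

V    = L2(mesh, order=k)
Vbar = FacetFESpace(mesh, order=k, dirichlet="wall|object|inlet")
X    = FESpace([V, Vbar])
u, uhat = X.TrialFunction()
v, vhat = X.TestFunction()
# a boundary integral involving only the facet unknowns uhat, vhat is well defined,
# e.g. SymbolicBFI(uhat*vhat, BND, definedon=mesh.Boundaries("Neumann")),
# whereas SymbolicBFI(u*v, BND) is not, since u lives on the volume elements.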
Your construction looks interesting, and I think it should work as well. Can you provide a complete example?
Joachim
Hi Joachim,
Attached is an HDG implementation of Navier-Stokes in 2D. This is the HDG formulation described in the paper by Labeur and Wells, http://dx.doi.org/10.1137/100818583
I'm just using simple Backward Euler in time and lagging the velocity in the nonlinear advection term (like a Picard iteration), which should be first order in time.
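Written out, one step of that scheme solves (roughly), with u^n the velocity from the previous step used in the advection term,

(u^{n+1} - u^n)/\Delta t + (u^n \cdot \nabla) u^{n+1} - \nu \Delta u^{n+1} + \nabla p^{n+1} = f, \qquad \nabla \cdot u^{n+1} = 0,

so the nonlinearity is frozen at the old time level, each step stays linear, and the overall accuracy is first order in time.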
The code can be written better (I should separate the Stokes part from the advection part, and I still have to figure out how to avoid recomputing the Stokes part of the matrix over and over again in the time loop), but the issues I'm trying to solve first are adding the Neumann boundary integral (lines 212-217) and the computation of lift and drag, which I'm sure is still going wrong (lines 249-268). If you run the simulation, you see that at the outflow the flow does not leave the domain as it should. I tried your suggestion (see the commented lines 219, 220), but that unfortunately didn't run.
thanks,
Sander
Hi Sander,
you have done quite some stuff in ngs-py. ...
The pitfall was that the Set command sets the values at the "inlet" boundary, but zeros the function everywhere else. If you just leave out this command you get the right solution (in particular at the outflow).
I also found the problem with the boundary integral of facet functions (lines 219, 220): it's the trace operator (see the How-To 'The Trace() operator'). Currently we have to use ProxyFunction().Trace() to integrate facet functions on the boundary. Since it's not completely intuitive why we need the Trace operator here (and it causes some overhead in the ngs-py script), I will make it optional here as well, as in the H1 case, in the coming days.
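Applied to the Neumann term from the earlier posts, that would look something like this sketch (uf0, uf, vf and the boundary name as above; the exact integrator flags may need adjusting to your formulation):

n = specialcf.normal(mesh.dim)
a += SymbolicBFI((uf0*n) * (uf.Trace()*vf.Trace()), BND,
                 definedon=mesh.Boundaries("Neumann"))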
Some hints to your implementation:
Since you want to use static condensation, which we do directly in the assembling loop, you cannot precompute the Stokes matrix and sum it up with the advection term in the time loop.
I propose to define the BilinearForm before the time loop and keep the BilinearForm object during the time loop. A call to a.Assemble() recomputes the matrix. It uses the current value of the involved GridFunction, i.e. the previous velocity; for this, you have to copy the state un.vec to uold.vec. The advantage is that you avoid matrix re-allocation, and it may also keep the symbolic factorization pattern of the direct solver.
* Your expressions are already quite complex (do a 'print (face_terms)' to dump out the tree). Here a
face_terms.Compile()
or
face_terms.Compile(realcompile=True)
may lead to serious performance improvement. The first one linearizes the tree and finds common leafs, the second one generates C++ code and compiles it just in time (not completely implemented, may or may not work).
Best, Joachim
Hi Joachim,
Perfect - the flow at the outflow now does exactly what it's supposed to. I am busy following your instructions to optimize the code, also in preparation for some 3D simulations. As you suggested, I define the BilinearForm before the time loop. My time loop now is just:
while t < tend:
    step += 1
    t += float(dt)
    print(' Time step: ', step, ' time: ', t)
    a.Assemble()
    f.Assemble()
    # For solver
    c = Preconditioner(type="direct", bf=a, flags={"inverse" : "pardiso"})
    c.Update()
    # solve system
    BVP(bf=a, lf=f, gf=UN, pre=c, maxsteps=3, prec=1e-10).Do()
    UOld.vec.data = UN.vec
On my 3rd time step, though, I get a segmentation fault. I am assuming this is because I'm not copying my data correctly from UN to UOld, because if I include
# Bi-linear form
a = BilinearForm(X, flags={"eliminate_internal" : True})
a += SymbolicBFI(element_terms)
a += SymbolicBFI(face_terms, element_boundary=True)
a += SymbolicBFI(Neumann_term, element_boundary=True)
# Linear form
f = LinearForm(X)
f += SymbolicLFI(sources)
in the time loop again, everything works fine. Does NGS-py have a copy operator to copy from UN to UOld?
thanks,
Sander
the vector copy
UOld.vec.data = UN.vec
is the right thing to do.
I suspect the preconditioner: a preconditioner registers itself at the bilinear form. Whenever the bilinear form gets reassembled, it also updates the preconditioner. The crash may arise because the preconditioner c gets deleted, but the bilinear form still holds a pointer to it (which is not yet a shared_ptr).
Also move the preconditioner outside the time loop.
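Put together with the earlier hints, the loop could then be structured roughly like this (a sketch reusing the names and flags from Sander's snippet):

# set up forms and preconditioner once, before the time loop
a = BilinearForm(X, flags={"eliminate_internal" : True})
a += SymbolicBFI(element_terms)
a += SymbolicBFI(face_terms, element_boundary=True)
a += SymbolicBFI(Neumann_term, element_boundary=True)
f = LinearForm(X)
f += SymbolicLFI(sources)
c = Preconditioner(type="direct", bf=a, flags={"inverse" : "pardiso"})

while t < tend:
    step += 1
    t += float(dt)
    a.Assemble()        # uses the previous velocity stored in UOld
    f.Assemble()        # reassembling a also updates the registered preconditioner c
    BVP(bf=a, lf=f, gf=UN, pre=c, maxsteps=3, prec=1e-10).Do()
    UOld.vec.data = UN.vec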
Joachim
That was indeed the issue - everything runs now!
Sander
Super! Maybe you also want to link the working example here. The Trace()-operator for the FacetFESpace is now optional.
Yes, of course, attached is the working version. With this code I find:
max CD = 3.18
min CD = 3.1
max CL = 0.93
min CL = -0.954
Compared to Table 3 from your paper, Lehrenfeld, Schoeberl, CMAME 307 (2016) 339-361, I'm a little off. You find approximately
max CD = 3.23
min CD = 3.16
max CL = 0.98
min CL = -1.02
Of course this could depend on the time stepping - my time stepping is only first order. It may also be because my approach to compute the lift and drag is less accurate than the functional approach (which I haven't been able to get to work).
Sander
Hi Sander,
I found a few points for improvement, an updated version of your file is attached.
The major source of the error might be not using curved elements. Via
mesh.Curve(3)
you create curved elements with a polynomial mapping of order 3.
The cf.Compile() does not modify the CoefficientFunction cf, but returns a new object, the compiled version of cf. There was a bug in realcompile=True, please update your ngsolve.
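In the script that means rebinding the expressions, e.g.:

element_terms = element_terms.Compile()   # Compile returns a new, compiled CoefficientFunction
face_terms    = face_terms.Compile()      # the original objects are left unchanged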
You can easily switch on task-based shared-memory parallelization using the TaskManager:
with TaskManager():
    while t < tend:
        a.Assemble()
        ....
For this type of problem we are usually better off using a higher order k (e.g. 4) instead of global mesh refinement. You have to try it out.
Best, Joachim
Hi Joachim,
Thanks for the update. As suggested, I updated ngsolve and ran the file. I had to change umfpack back to pardiso (does this make much of a difference?). With umfpack I get error messages:
catch in AssembleBilinearform 2: DirectPreconditioner: needs a sparse matrix (or has memory problems)
Traceback (most recent call last):
File "<string>", line 156, in <module>
RuntimeError: DirectPreconditioner: needs a sparse matrix (or has memory problems)in Assemble BilinearForm 'bfa'
The other error message is related to
# Bi-linear form
a = BilinearForm(X, flags={"eliminate_internal" : True})
a += SymbolicBFI(element_terms.Compile(realcompile=True))
a += SymbolicBFI(face_terms.Compile(realcompile=True), element_boundary=True)
a += SymbolicBFI(Neumann_term.Compile(realcompile=True), element_boundary=True)
I get
solve bvp
cg solve for real system
0 0.683346
1 7.78915e-15
Solution time = 0.0316722 sec wall time
Iterations: 2
compute internal element ...
c_drag: 0.6136733766874822
c_lift: -0.0021911121846012512
setting all_dofs_together 0
/opt/netgen/bin/ngscxx: 2: /opt/netgen/bin/ngscxx: ccache: not found
Traceback (most recent call last):
File "<string>", line 200, in <module>
RuntimeError: problem calling compiler
Finished executing navierstokes.py
Thank you for using NGSolve
which I assume is an installation error? I installed NGSolve using the description found here:
https://gitlab.asc.tuwien.ac.at/jschoeberl/ngsolve-docu/wikis/installdebian
regards,
Sander
The Compile(realcompile=True) call invokes our compiler wrapper ngscxx to compile automatically generated C++ code for the function. Compile(False) does a run-time linearization of the function. In your example Compile(False) gave me an overall speedup of two, and Compile(True) another factor of two - I just mention it for motivation.
When you build ngsolve yourself, the ngscxx wrapper inherits the compiler flags from the ngsolve cmake configuration.
When you install the precompiled ngsolve, you get some generic default flags. It should be working now. If not, have a look inside ngscxx.
Joachim