Hello again,
Here is my second question:
I am solving the Stokes equations with a smooth exact solution:
-laplace(u) + grad(p) = f
div(u) = 0            (*)
u|_bdry = g_bdry
mean(p) = 0
C. Lehrenfeld's Hdiv space is used. The implementation is done without static condensation.
Again, I am using the "number" fespace to take care of the average-zero pressure constraint.
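Roughly, the relevant part of my setup looks like this (just a sketch, not the attached script: the manufactured solution, the tangential facet space and the viscous/facet terms of Lehrenfeld's method are left out, and the mesh size, order and boundary numbering are placeholders of mine):
:::python
from netgen.geom2d import unit_square
from ngsolve import *

mesh = Mesh(unit_square.GenerateMesh(maxh=0.25))
order = 3

# Hdiv velocity, discontinuous pressure, and the one-dof "number" space
# acting as a Lagrange multiplier for the zero-mean pressure constraint
V = FESpace("hdiv", mesh, order=order, dirichlet=[1, 2, 3, 4])
Q = FESpace("l2", mesh, order=order-1)
N = FESpace("number", mesh)
X = FESpace([V, Q, N])

u, p, p0 = X.TrialFunction()
v, q, q0 = X.TestFunction()

a = BilinearForm(X, flags={"eliminate_internal": False})
# ... viscous and facet terms of the Hdiv-HDG method go here ...
a += SymbolicBFI(div(u)*q + div(v)*p)    # divergence constraint
a += SymbolicBFI(p0*q + p*q0)            # ties mean(p) to the scalar multiplier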
The convergence rates don't make sense this time:
order = 1 looks fine,
order = 2 gives me the following rates:
0.96, 9.59e-03 | 0.60, 2.83e-01
3.02, 1.18e-03 | 2.16, 6.32e-02
0.04, 1.15e-03 | 0.75, 3.77e-02 <-----------------this doesn't make any sense
6.01, 1.79e-05 | 3.46, 3.42e-03
3.01, 2.22e-06 | 2.04, 8.29e-04
order = 3 gives me:
4.48, 7.07e-04 | 3.48, 4.26e-02
4.12, 4.05e-05 | 3.28, 4.37e-03
-5.41, 1.72e-03 | -3.97, 6.84e-02 <-----------------this doesn't make any sense
5.72, 3.27e-05 | 3.47, 6.17e-03 <-----------------this doesn't make any sense
Then I did another experiment: I relaxed the average-zero pressure condition by replacing the divergence-free equation (*) with
div(u) + gamma*p = gamma * exact_p
I commented out the average-zero pressure line and added back a pressure mass matrix (the lines ending with #FIXME wrong pde).
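In terms of the forms, that change amounts to roughly the following (again only a sketch, reusing the names from the snippet above; exact_p is the manufactured pressure from my script, and the value of gamma here is just a placeholder):
:::python
# compound space without the "number" component; the p0*q + p*q0 terms are dropped
X = FESpace([V, Q])
u, p = X.TrialFunction()
v, q = X.TestFunction()

gamma = 1e-6    # placeholder value, not the one from my actual runs

a = BilinearForm(X, flags={"eliminate_internal": False})
# ... viscous and facet terms as before ...
a += SymbolicBFI(div(u)*q + div(v)*p + gamma*p*q)   # div(u) + gamma*p ...

f = LinearForm(X)
# ... body-force term for f as before ...
f += SymbolicLFI(gamma * exact_p * q)               # ... = gamma * exact_p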
This time, I got neat convergence rates.
e.g. order = 3 gives me:
4.39, 4.44e-04 | 2.13, 2.40e-02
4.04, 2.71e-05 | 3.35, 2.35e-03
4.03, 1.66e-06 | 2.99, 2.95e-04
4.01, 1.03e-07 | 3.00, 3.69e-05
Finally, I modified the boundary conditions so that the pressure is determined:
I changed the normal-velocity boundary condition to the following one:
(grad(u)*n)*n - p = something
I commented out the average-zero and gamma terms, used the Hdiv space without the boundary constraint, and added back the corresponding boundary source term to the LFI (the lines ending with #FIXME weird bdry).
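The structural change is roughly this (again just a sketch; g_n is my shorthand here for the manufactured boundary data (grad(u_exact)*n)*n - p_exact, and the exact trace syntax for the Hdiv test function on the boundary may depend on the NGSolve version):
:::python
n = specialcf.normal(mesh.dim)

V = FESpace("hdiv", mesh, order=order)   # no dirichlet flag on the normal component anymore
# (rebuild X, the trial/test functions and the bilinear form as before,
#  without the average-zero and gamma terms)

f += SymbolicLFI(g_n * (v.Trace()*n), BND)   # boundary source from (grad(u)*n)*n - p = g_n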
This time, nice convergence rates again:
e.g. order = 3 gives me:
4.48, 6.05e-04 | 3.04, 3.31e-02
4.04, 3.67e-05 | 3.18, 3.65e-03
4.01, 2.27e-06 | 3.00, 4.56e-04
4.00, 1.42e-07 | 3.00, 5.70e-05
The above tests lead me to believe there is an internal bug in the "number" fespace...
But I am very confused why the average-zero "number" fespace trick works for the Laplace case, but not for the Stokes case...
Best,
Guosheng
Here is the sample code:
Hey Guosheng!
I tried your example and observed the same problems with ndof=8443. When you draw the pressure you can see that something goes wrong there. At the moment I don't know exactly what's happening, but I don't think it's the "number" FESpace, because I tried the same with Taylor-Hood and everything worked fine.
Last edit: Philip Lederer 2016-07-18
Hey Guosheng!
Joachim tried the same on his Mac and had no problem. We realised that he uses the UMFPACK solver instead of Pardiso, so we think this is what leads to the observed problems. You can either regularise by adding
a += SymbolicBFI(1e-8 * p0 * q0)
to your BilinearForm, or change the signs in your bilinear form to get a symmetric system:
:::python
a = BilinearForm(V, flags={"eliminate_internal": False}, symmetric=True)
a += SymbolicBFI(InnerProduct(gradu, gradv) + p*div(v) + div(u)*q)
a += SymbolicBFI(InnerProduct(gradu.trans * n, tang(vhat-v)), element_boundary=True)
a += SymbolicBFI(InnerProduct(gradv.trans * n, tang(uhat-u)), element_boundary=True)
a += SymbolicBFI(5*(order+1)*(order+1)/h * InnerProduct(tang(vhat-v), tang(uhat-u)), element_boundary=True)
a += SymbolicBFI(p0*q + p*q0)
Somehow the problems (at least on my computer) disappear, maybe also because of the flag "symmetric=True", as now the symmetric Pardiso is called. Don't forget to change the sign of the pressure where you calculate the pressure error... it took me a while to realize that ;)
Another possibility is to use a CG solver:
c = Preconditioner(a, type="direct", flags={"inverse": "pardiso"})
c.Update()
bvp = BVP(bf=a, lf=f, gf=sol, pre=c, maxsteps=20)
bvp.Do()
...and finally, of course you can also install umfpack :D
all the best,
Philip
Last edit: Philip Lederer 2016-07-19
Hi Philip,
Oh... it's a solver issue...thanks!!
I was trying to install umfpack... mumps... no luck... I give up.
Should have done that in Brenel with you guys... :(
All right, I will symmetrize the system whenever possible when using Pardiso... I used to trust direct solvers 100%, not anymore.
Here is a funny test:
I used the CG solver with a Pardiso preconditioner.
I tested the same problem as in my original post, solving it either with or without static condensation.
Both approaches give me the same error up to order 3; I noticed the direct approach (no static condensation) needs a few CG iterations to converge for order = 3, while the statically condensed approach needs just 1 solve.
However, for order = 4 or 5 the direct approach fails to converge, while there is no issue for the statically condensed approach.
It seems the Pardiso preconditioner + CG solver is a good and safe way to tell whether the solver was doing the "right" thing... :)
Best,
Guosheng
At least it works now...
For UMFPACK download the package from here:
http://faculty.cse.tamu.edu/davis/suitesparse.html
then do a make (maybe you have to install some additional packages).
After that add
-DUSE_UMFPACK=ON
and
-DCMAKE_PREFIX_PATH="...path to the SuiteSparse folder where you ran make"
to your cmake flags for ngsolve and compile!
Now umfpack should be available like this:
inv_a = a.mat.Inverse(V.FreeDofs(), inverse="umfpack")   # factorize the free dofs with UMFPACK
res = f.vec.CreateVector()
res.data = f.vec - a.mat * gfu.vec                       # residual w.r.t. the current data in gfu
gfu.vec.data += inv_a * res                              # add the correction
(just tested it again, indeed, umfpack works!)
Good!
I will try once again then.... :)
Since we are on the solver discussion.
What other (cool) solvers, besides your in-house ones, are hooked into ngsolve or planned to be?
Is there any plan for adding Trilinos or PETSc in the near future?
Philip,
Just an update: I installed umfpack as you suggested.
It works and is more stable, but it is really slow and memory-consuming compared with Pardiso... did you observe similar behavior of the solver?
Best,
Guosheng
...note to myself: log in before you answer ;)
side remark:
If you update NGSolve now, you will observe that the "error barrier" is decreased a bit, so that you obtain your optimal-order eoc for one additional level of refinement ;).
Why is that the case?
You improved pardiso??
No, something simpler. A technicality in how to compute the gradient of Hdiv functions sufficiently accurately.