From: Roy Stogner <roystgnr@ic...> - 2012-09-13 20:31:05

My favorite preconditioner has always been ILU-4 (wrapped in block Jacobi in parallel), and that's worked well enough that, except for a couple of applications, I've never bothered trying to do better.

Now I'm discovering that ILU-4 fails horribly on our little old vector_fe_ex3 (curl-curl) example. I have no idea why one step of ILU converges in a hundred-odd iterations while four steps doesn't make it to the square root of the tolerance after 5000 iterations, but clearly I need new LIBMESH_OPTIONS.

Any suggestions?

-- Roy
From: Paul T. Bauman <ptbauman@gm...> - 2012-09-13 20:40:56

If you have SuperLU installed, I've found

-pc_type bjacobi -sub_pc_type ilu -sub_pc_mat_solver_package superlu

to be much better than the standard ILU. Note that this needs to be at least PETSc 3.3-p2, because there was a bug where SuperLU only got used in the first iteration.

On Thu, Sep 13, 2012 at 3:30 PM, Roy Stogner <roystgnr@...> wrote:
> Now I'm discovering that ILU-4 fails horribly on our little old
> vector_fe_ex3 (curl-curl) example. [...] clearly I need
> new LIBMESH_OPTIONS. Any suggestions?
From: Derek Gaston <friedmud@gm...> - 2012-09-13 20:45:24

I haven't looked at that problem and don't know much about curl-curl spaces... but if the matrix would respond to multigrid at all, I highly recommend compiling PETSc with Hypre support and using:

-pc_type hypre -pc_hypre_type boomeramg

For problems where it works, it can't be beat...

Derek

On Thu, Sep 13, 2012 at 2:30 PM, Roy Stogner <roystgnr@...> wrote:
> Now I'm discovering that ILU-4 fails horribly on our little old
> vector_fe_ex3 (curl-curl) example. [...]
> Any suggestions?
From: John Peterson <jwpeterson@gm...> - 2012-09-13 20:46:30

On Thu, Sep 13, 2012 at 2:40 PM, Paul T. Bauman <ptbauman@...> wrote:
> If you have SuperLU installed, I've found -pc_type bjacobi -sub_pc_type ilu
> -sub_pc_mat_solver_package superlu to be much better than the standard ILU.
> Note that this needs to be at least PETSc 3.3-p2, because there was a bug
> where SuperLU only got used in the first iteration.

Is SuperLU doing full (complete) LU or ILU? If it's the former, that would explain why it's better...

Anyway, it's probably fine for small examples like this one to use -pc_type lu. That way our examples don't require SuperLU/Hypre just to run.

-- John
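John's -pc_type lu suggestion amounts to using a direct solve as the preconditioner. As a concrete illustration of what a full LU solve with partial pivoting does, here is a minimal pure-Python sketch (dense, illustrative only; this is not PETSc or libMesh code, and the matrix and helper names are invented):

```python
def lu_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] for row in A]      # work on copies
    x = b[:]
    for k in range(n):
        # partial pivoting: bring the largest remaining |pivot| to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
            x[i] -= m * x[k]
    # back substitution
    for k in range(n - 1, -1, -1):
        x[k] = (x[k] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [6.0, 10.0, 8.0]    # chosen so the exact solution is [1, 2, 3]
x = lu_solve(A, b)
```

With -ksp_type preonly -pc_type lu, PETSc applies exactly one such "preconditioner" application per solve, i.e. a direct solve.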
From: Paul T. Bauman <ptbauman@gm...> - 2012-09-13 20:49:26

On Thu, Sep 13, 2012 at 3:46 PM, John Peterson <jwpeterson@...> wrote:
> Is SuperLU doing full LU or ILU? If it's the former, that
> would explain why it's better...

I'm pretty certain it's ILU. This capability was added to PETSc in the not-too-distant past. In the eyeball norm, the performance was comparable (actually, SuperLU seemed much faster where the work per processor was not super small).

> Anyway, it's probably fine for small examples like this one to use
> -pc_type lu. That way our examples don't require SuperLU/Hypre just
> to run.

This is what I usually do, but with the external packages. The internal PETSc LU doesn't handle pivoting.
From: John Peterson <jwpeterson@gm...> - 2012-09-13 20:57:00

On Thu, Sep 13, 2012 at 2:49 PM, Paul T. Bauman <ptbauman@...> wrote:
> This is what I usually do, but with the external packages. The internal
> PETSc LU doesn't handle pivoting.

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCLU.html

-pc_factor_nonzeros_along_diagonal - permutes the rows and columns to try to put a nonzero value along the diagonal.

That's pivoting, right?

-- John
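Why pivoting matters at all: elimination divides by the diagonal entry, so a perfectly nonsingular matrix with a zero in the pivot position breaks it, while a row permutation (the essence of pivoting) fixes it. A toy pure-Python sketch (invented 2x2 example, not PETSc's algorithm):

```python
def solve2_no_pivot(A, b):
    """2x2 Gaussian elimination with no pivoting; divides by A[0][0]."""
    m = A[1][0] / A[0][0]          # breaks when A[0][0] == 0
    u11 = A[1][1] - m * A[0][1]
    y1 = b[1] - m * b[0]
    x1 = y1 / u11
    x0 = (b[0] - A[0][1] * x1) / A[0][0]
    return [x0, x1]

A = [[0.0, 1.0], [1.0, 1.0]]   # nonsingular, but zero pivot in position (0,0)
b = [2.0, 3.0]

try:
    solve2_no_pivot(A, b)
    failed = False
except ZeroDivisionError:
    failed = True               # no pivoting: elimination breaks down

# "pivoting": permute rows so a nonzero lands on the diagonal, then solve
x = solve2_no_pivot([A[1], A[0]], [b[1], b[0]])   # succeeds: x == [1.0, 2.0]
```

The PETSc option above does this permutation once, up front, rather than dynamically during the factorization.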
From: Paul T. Bauman <ptbauman@gm...> - 2012-09-13 20:58:57

On Thu, Sep 13, 2012 at 3:56 PM, John Peterson <jwpeterson@...> wrote:
> -pc_factor_nonzeros_along_diagonal - permutes the rows and columns to
> try to put a nonzero value along the diagonal.
>
> That's pivoting, right?

Huh, apparently I haven't paid too close attention to their additions there. Thanks for that.
From: Jed Brown <jed@59...> - 2012-09-13 23:17:34

That is poor man's pivoting. The diagonal may not stay positive during factorization. Real sparse direct solves are significantly complicated by the need for dynamic pivoting.

On Sep 13, 2012 3:57 PM, "John Peterson" <jwpeterson@...> wrote:
> -pc_factor_nonzeros_along_diagonal - permutes the rows and columns to
> try to put a nonzero value along the diagonal.
>
> That's pivoting, right?
From: David Knezevic <dknezevic@se...> - 2012-09-13 21:06:48

On a related issue, I wanted to mention the 3D linear elasticity example that I added: systems_of_equations_ex6. It seems very slow with the default iterative solver in PETSc (ILU + GMRES, right?). The example takes 26 seconds on my laptop.

But this is a symmetric problem, so you'd expect CG to be good. Calling

./systems_of_equations_ex6-opt -ksp_type cg

runs fast, but gives the wrong answer (I should add some code to check the final residual...).

The best thing for me is just a direct solver:

./systems_of_equations_ex6-opt -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package umfpack

But I guess there must be a good KSP/PC combo for this simple problem; any recommendations?

Best,
David
From: John Peterson <jwpeterson@gm...> - 2012-09-13 21:10:01

On Thu, Sep 13, 2012 at 3:06 PM, David Knezevic <dknezevic@...> wrote:
> But this is a symmetric problem, so you'd expect CG to be good. Calling
> ./systems_of_equations_ex6-opt -ksp_type cg
> runs fast, but gives the wrong answer (I should add some code to check
> the final residual...)

I'd check to be sure your matrix really is symmetric, too. Perhaps something like boundary conditions made it nonsymmetric?

-- John
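Checking symmetry of an assembled matrix is cheap (PETSc also has MatIsSymmetric() for this). The same idea in a pure-Python sketch, with toy matrices invented here; the second mimics a Dirichlet condition imposed on rows only, which is one common way boundary conditions break symmetry:

```python
def max_asymmetry(A):
    """Largest |A[i][j] - A[j][i]| over all entries of a dense matrix."""
    n = len(A)
    return max(abs(A[i][j] - A[j][i]) for i in range(n) for j in range(n))

# symmetric stiffness-like matrix
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]

# same matrix after zeroing row 0 only (row-wise Dirichlet BC):
# the (0,1)/(1,0) pair no longer matches, so symmetry is lost
K_bc = [[1.0, 0.0, 0.0],
        [-1.0, 2.0, -1.0],
        [0.0, -1.0, 2.0]]
```

If max_asymmetry comes back nonzero, CG's symmetry assumption is violated and the "fast but wrong" behavior is unsurprising.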
From: Paul T. Bauman <ptbauman@gm...> - 2012-09-13 21:12:29

On Thu, Sep 13, 2012 at 4:06 PM, David Knezevic <dknezevic@...> wrote:
> But this is a symmetric problem, so you'd expect CG to be good.

Yes, in another lifetime when I was doing nonlinear elasticity, CG was quite good.

> Calling
> ./systems_of_equations_ex6-opt -ksp_type cg
> runs fast, but gives the wrong answer

That's alarming...

Is ICC the default preconditioner with CG? If not, give that a try (maybe with -pc_factor_levels 4 or something).
From: David Knezevic <dknezevic@se...> - 2012-09-13 21:17:00

On 09/13/2012 05:12 PM, Paul T. Bauman wrote:
> Is ICC the default preconditioner with CG? If not, give that a try
> (maybe with -pc_factor_levels 4 or something)

Ahh, "-pc_type icc" did the trick, thanks! I'll add that to the Makefile.

David
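For reference, the iteration that -ksp_type cg runs relies on A being symmetric positive definite. A minimal unpreconditioned CG in pure Python (a sketch on an invented 2x2 SPD system, not PETSc's implementation):

```python
def cg(A, b, tol=1e-12, maxit=100):
    """Unpreconditioned conjugate gradients for a dense SPD matrix A."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                       # residual of the zero initial guess
    p = r[:]
    rr = dot(r, r)
    for it in range(maxit):
        Ap = matvec(p)
        alpha = rr / dot(p, Ap)    # requires p^T A p > 0, i.e. A SPD
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol * tol:
            return x, it + 1
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x, maxit

A = [[4.0, 1.0], [1.0, 3.0]]       # SPD
b = [1.0, 2.0]
x, its = cg(A, b)                  # exact answer after at most 2 iterations
```

On a nonsymmetric A, the short recurrences above are simply invalid, which is why CG can report fast "convergence" to a wrong answer.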
From: Roy Stogner <roystgnr@ic...> - 2012-09-13 22:09:07

On Thu, 13 Sep 2012, Derek Gaston wrote:
> if the matrix would respond to multigrid at all I highly
> recommend compiling PETSc with Hypre support and using:
> -pc_type hypre -pc_hypre_type boomeramg
>
> For problems where it works it can't be beat...

"Where it works" is the tricky part. It seems to fly on the curl-curl problem, which is great; it failed to converge on a Navier-Stokes problem I just tried, which rules it out as a default LIBMESH_OPTIONS for me but still makes it a reasonable candidate for further consideration...

...except that the very *first* time I tried PETSc with BoomerAMG, like 5 years ago, it was "converging" to inaccurate answers on some of the runs I tried. I'm 99% sure they'll have fixed whatever was wrong by now, but once bitten, twice shy...

-- Roy
From: John Peterson <jwpeterson@gm...> - 2012-09-13 22:21:53

On Thu, Sep 13, 2012 at 4:09 PM, Roy Stogner <roystgnr@...> wrote:
> except that the very *first* time I tried PETSc with BoomerAMG, like 5
> years ago, it was "converging" to inaccurate answers on some of the
> runs I tried.

You get the wrong solution with BoomerAMG on Stokes or Navier-Stokes because the preconditioner is singular.

Try passing -ksp_monitor_true_residual when you run that way, and you'll see that it's not really converging. Original thread here:

http://permalink.gmane.org/gmane.comp.mathematics.libmesh.user/2187

-- John
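The effect John describes can be shown in miniature: with a singular preconditioner M, the preconditioned residual M*r can vanish while the true residual r stays large, which is exactly what -ksp_monitor_true_residual exposes. A toy sketch with invented numbers (not BoomerAMG's actual behavior):

```python
# Toy 2x2 system and a deliberately rank-deficient "preconditioner"
# that annihilates the second component.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 1.0]
x = [0.0, 0.0]                     # a (bad) iterate

apply_M = lambda v: [v[0], 0.0]    # singular: kills the e2 direction

r = [b[0] - x[0], b[1] - x[1]]     # true residual = [0, 1]
pr = apply_M(r)                    # preconditioned residual = [0, 0]

true_norm = max(abs(v) for v in r)
prec_norm = max(abs(v) for v in pr)
# prec_norm is 0 even though true_norm is 1: a solver monitoring only the
# preconditioned residual would report convergence that never happened.
```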
From: Dmitry Karpeev <karpeev@mc...> - 2012-09-13 23:11:40

The curl-curl operator is notoriously difficult because of the form of its nullspace, which the AMG algorithms have to take carefully into account. From what I understand, ML tries to do something intelligent about curl-curl, but as far as I know that requires going around the plain-vanilla API that PETSc taps into. Also, there is currently no libMesh API to take in a nullspace and serve it to PETSc (or Trilinos?), although that's something that can easily be fixed.

Jed should be able to comment on the issues with curl-curl and AMG in more detail.

Dmitry.

On Thu, Sep 13, 2012 at 5:09 PM, Roy Stogner <roystgnr@...> wrote:
> "Where it works" is the tricky part. It seems to fly on the curl-curl
> problem, which is great; it failed to converge on a Navier-Stokes
> problem I just tried [...]
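The nullspace Dmitry mentions consists of gradient fields: curl(grad f) = 0 for any scalar f, so gradients are invisible to the curl-curl operator, and an AMG hierarchy that ignores them falls apart. A quick finite-difference check in pure Python (sample field chosen here purely for illustration):

```python
def curl(F, p, h=1e-4):
    """Central-difference curl of a vector field F: R^3 -> R^3 at point p."""
    dF = lambda i, j: (F(*[p[k] + (h if k == j else 0.0) for k in range(3)])[i]
                       - F(*[p[k] - (h if k == j else 0.0) for k in range(3)])[i]) / (2 * h)
    return [dF(2, 1) - dF(1, 2),    # dFz/dy - dFy/dz
            dF(0, 2) - dF(2, 0),    # dFx/dz - dFz/dx
            dF(1, 0) - dF(0, 1)]    # dFy/dx - dFx/dy

# G is the gradient of the scalar field f(x, y, z) = x*y + z**2
G = lambda x, y, z: [y, x, 2.0 * z]

c = curl(G, (0.3, -0.7, 1.2))
# every component is ~0: gradient fields lie in the nullspace of curl
```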
From: Jed Brown <jed@59...> - 2012-09-13 23:22:29

On Thu, Sep 13, 2012 at 4:16 PM, David Knezevic <dknezevic@...> wrote:
> Ahh, "-pc_type icc" did the trick, thanks! I'll add that to the Makefile.

Use -ksp_monitor_true_residual. If ILU does something meaningfully different from ICC, the system is almost certainly nonsymmetric. Check boundary conditions.

But for elasticity, it's well worth getting algebraic multigrid working. Hypre does classical AMG, which isn't much good for systems. Use PETSc 3.3 or later and call MatSetNearNullSpace() to provide the rigid body modes, then use either -pc_type gamg or -pc_type ml. Either should do fine (and they can usually be tuned to do about the same algorithm, but have different defaults). If you use a nodal basis, you can stuff the nodal coordinates in a vector with a suitable block size and call MatNullSpaceCreateRigidBody().
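The rigid body modes Jed refers to are the three translations plus three infinitesimal rotations built from nodal coordinates. A pure-Python sketch in the spirit of (but not identical to) what MatNullSpaceCreateRigidBody() computes from a coordinate vector:

```python
def rigid_body_modes(coords):
    """Six rigid body modes for 3D nodal coordinates [(x, y, z), ...].
    Each mode is a flat vector of length 3 * n_nodes: (dx, dy, dz) per node."""
    modes = []
    # three translations: every node moves by the same unit displacement
    for d in range(3):
        m = []
        for _ in coords:
            m.extend(1.0 if k == d else 0.0 for k in range(3))
        modes.append(m)
    # three infinitesimal rotations: displacement = axis x position
    for axis in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]):
        m = []
        for (x, y, z) in coords:
            m.extend([axis[1] * z - axis[2] * y,
                      axis[2] * x - axis[0] * z,
                      axis[0] * y - axis[1] * x])
        modes.append(m)
    return modes

coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
modes = rigid_body_modes(coords)   # 6 modes, each of length 12
```

These vectors span the near-nullspace of the (unconstrained) elasticity operator, which is what GAMG/ML need to build good coarse spaces.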
From: David Knezevic <dknezevic@se...> - 2012-09-13 23:25:58

On 09/13/2012 07:22 PM, Jed Brown wrote:
> But for elasticity, it's well worth getting algebraic multigrid
> working. [...] Use PETSc 3.3 or later and call MatSetNearNullSpace() to
> provide the rigid body modes, then use either -pc_type gamg or -pc_type ml.

OK, thanks, I'll look into this.

David
From: Jed Brown <jed@59...> - 2012-09-13 23:32:14

Yup, curl-curl is a big challenge for normal preconditioners. There are two choices that work reliably: edge-based smoothers (Hiptmair; Arnold, Falk, and Winther) and auxiliary space preconditioners. ML and Hypre have support for different variants of the latter, but they require you to provide some custom operators. PETSc hasn't had many direct requests for Maxwell, but we could add an interface without much work (providing a common interface to the auxiliary space methods from ML and Hypre).

Is this frequency-domain? Do you have a purely negative shift, or is there damping? The high-frequency, low-damping case is notoriously difficult. Jack Poulson and Paul Tsuji have done some of the best recent work on that problem. Their approach is pretty reliable, but it's heavyweight and requires decomposition into layers with PML boundary conditions.

On Thu, Sep 13, 2012 at 6:11 PM, Dmitry Karpeev <karpeev@...> wrote:
> The curl-curl operator is notoriously difficult because of the form of its
> nullspace, which the AMG algorithms have to take carefully into account.
> [...]
> Jed should be able to comment on the issues with curl-curl and AMG in more
> detail.
From: Paul T. Bauman <ptbauman@gm...> - 2012-09-14 00:35:15

On Thu, Sep 13, 2012 at 6:32 PM, Jed Brown <jed@...> wrote:
> Is this frequency-domain? Do you have a purely negative shift, or is there
> damping?

That example is a very simple, real-valued curl curl u + u = f, where f is based on a smooth manufactured solution. It's only there to test that I implemented the Nedelec elements correctly.
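The manufactured-solution workflow Paul describes (pick u, apply the operator to get f, solve, check the error) can be sketched in 1D with the scalar analogue -u'' + u = f. A pure-Python finite-difference version (illustrative only; this is not the libMesh example):

```python
import math

def solve_mms(n):
    """Solve -u'' + u = f on (0, pi) with u(0) = u(pi) = 0 by second-order FD.
    Manufactured solution u(x) = sin(x), so f(x) = 2*sin(x). Returns max error."""
    h = math.pi / (n + 1)
    xs = [h * (i + 1) for i in range(n)]
    # tridiagonal system: (2/h^2 + 1) u_i - (1/h^2)(u_{i-1} + u_{i+1}) = f_i
    a = [-1.0 / h**2] * n          # sub-diagonal
    d = [2.0 / h**2 + 1.0] * n     # diagonal
    c = [-1.0 / h**2] * n          # super-diagonal
    f = [2.0 * math.sin(x) for x in xs]
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n):
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        f[i] -= m * f[i - 1]
    u = [0.0] * n
    u[-1] = f[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (f[i] - c[i] * u[i + 1]) / d[i]
    return max(abs(u[i] - math.sin(xs[i])) for i in range(n))

e1 = solve_mms(20)
e2 = solve_mms(40)
# roughly halving h should cut the error by about a factor of 4
```

The same pattern, with a vector u and the curl-curl operator, is what verifies the Nedelec implementation.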
From: Jed Brown <jed@59...> - 2012-09-14 21:04:02

On Thu, Sep 13, 2012 at 7:35 PM, Paul T. Bauman <ptbauman@...> wrote:
> That example is a very simple, real-valued curl curl u + u = f, where f
> is based on a smooth manufactured solution. It's only there to test that I
> implemented the Nedelec elements correctly.

The positive shift is a good thing. Do you need a high-performance solver? What is the shift in that regime?
From: Paul T. Bauman <ptbauman@gm...> - 2012-09-14 21:12:03

On Fri, Sep 14, 2012 at 4:03 PM, Jed Brown <jed@...> wrote:
> The positive shift is a good thing. Do you need a high-performance solver?

For the test problem, no. For the coupled magnetostatics problem I'm working on that will use the Nedelec element capability, yes. But I'm not far enough along yet to start down that road (still debugging/characterizing the application). It's in terms of potentials anyway, so I'm dealing with Laplace coupled to curl curl = f (coupled to other things), so the analogy to the test problem is not really good anyway.

But maybe I'm misunderstanding where you're going?
From: Jed Brown <jed@59...> - 2012-09-14 21:16:47

On Fri, Sep 14, 2012 at 4:11 PM, Paul T. Bauman <ptbauman@...> wrote:
> For the test problem, no. For the coupled magnetostatics problem I'm
> working on that will use the Nedelec element capability, yes.

Fast/multilevel solvers for Maxwell are somewhat involved, either requiring you to use a very specific discretization or to assemble some auxiliary operators so that the MG cycle can run in a nicer auxiliary space. If it's going to be a bottleneck and you care about it a lot, you'll want to set up plumbing for these auxiliary operators, and we'll need to prioritize an interface to make that as clean as possible. If it's not in a particularly crucial place, don't worry about it yet.
From: Paul T. Bauman <ptbauman@gm...> - 2012-09-14 21:20:39

On Fri, Sep 14, 2012 at 4:16 PM, Jed Brown <jed@...> wrote:
> If it's going to be a bottleneck and you care about it a lot, you'll
> want to set up plumbing for these auxiliary operators [...] If it's not in a
> particularly crucial place, don't worry about it yet.

Good to know, thanks. When I start hitting that point, I'll ping the PETSc list as needed.

Best,
Paul
From: Roy Stogner <roystgnr@ic...> - 2012-09-17 22:35:36

On Thu, 13 Sep 2012, John Peterson wrote:
> You get the wrong solution with BoomerAMG on Stokes or Navier-Stokes
> because the preconditioner is singular.
>
> Try passing -ksp_monitor_true_residual when you run that way, and
> you'll see that it's not really converging.
>
> http://permalink.gmane.org/gmane.comp.mathematics.libmesh.user/2187

Thanks for the update. This doesn't mesh with what I thought I remembered (correct solutions on symmetric problems, failure with asymmetry), but indeed I'm seeing failures on plain Stokes now.

I don't understand the reason for the failures, though. In that thread Jed said that multigrid breaks down for indefinite systems, which sounded reasonable (inability to invert an indefinite matrix is basically the null hypothesis). But Stokes stops being indefinite if you handle the unconstrained pressure mode, yet both the pinned-pressure and the Lagrange-multiplier pressure formulations in our examples still fail with BoomerAMG.

-- Roy
From: Jed Brown <jed@59...> - 2012-09-17 23:02:53

On Mon, Sep 17, 2012 at 5:35 PM, Roy Stogner <roystgnr@...> wrote:
> I don't understand the reason for the failures, though. [...] But Stokes
> stops being indefinite if you handle the unconstrained pressure mode, yet
> both the pinned-pressure and the Lagrange-multiplier pressure formulations
> in our examples still fail with BoomerAMG.

Remember that pressure is basically a Lagrange multiplier. The issue is not the one-dimensional null space. Indeed, multigrid typically works fine for positive semidefinite systems such as all-Neumann Poisson with a consistent right-hand side (though you cannot use a direct solve on the coarsest level). The problem with Stokes is the zeros (or the stabilization matrix) in the (pressure, pressure) block. This makes the problem a saddle point, so the solve no longer equates to minimizing some functional. More importantly for multigrid, the strength-of-connection measures in algebraic multigrid are totally confused by these zeros, and smoothers don't know what to do with that block.

Geometric multigrid can be used directly for Stokes by (a) using interpolation operators that make sense for pressure and velocity independently and (b) using smoothers that respect the indefinite nature of the problem. Trottenberg's book has lots of details, but an example of a good smoother is due to Vanka. It is a multiplicative overlapping Schwarz method that uses domains obtained by taking a pressure node, adding all velocity dofs that touch it, then assembling that block problem (or extracting it from the matrix), solving, updating the state vector for all dofs under the block solve, and then moving to the next pressure node. Efficiently implementing these with assembled matrices or continuous finite elements is, unfortunately, somewhat problematic. DG is very nice for this purpose.
In any case, if you want to work with assembled matrices and not dive too far into multigridding, we recommend distinguishing the degrees of freedom (via your DM interface or directly/less composably via PCFieldSplitSetISs()) and using PCFieldSplit so that multigrid only operates on the (velocity, velocity) block. We can produce most of the methods from the literature (of which there are many, perhaps start with Elman's) on the command line in this way. 
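The zero (pressure, pressure) block makes the assembled matrix indefinite even when the velocity block is SPD, which can be seen on a tiny 2x2 toy saddle-point matrix (numbers invented for illustration):

```python
# Toy saddle-point matrix: SPD "velocity" block A = [2], constraint B = [1],
# zero (pressure, pressure) block:
#     K = [ A   B^T ]  =  [ 2  1 ]
#         [ B   0   ]     [ 1  0 ]
K = [[2.0, 1.0],
     [1.0, 0.0]]

def quad_form(M, v):
    """Evaluate v^T M v for a dense matrix M."""
    n = len(v)
    return sum(v[i] * M[i][j] * v[j] for i in range(n) for j in range(n))

pos = quad_form(K, [1.0, 0.0])     # positive along the velocity direction
neg = quad_form(K, [1.0, -3.0])    # negative along a mixed direction
# the quadratic form changes sign, so K is indefinite: smoothers and
# strength-of-connection heuristics built for (semi)definite operators
# have nothing sensible to do with the zero block
```

This is the structural reason to split the fields and let multigrid see only the (velocity, velocity) block, as Jed suggests.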