From: John Peterson <jwpeterson@gm...> - 2008-06-02 16:36:18

On Mon, Jun 2, 2008 at 11:17 AM, Roy Stogner <roy@...> wrote:
>
> On Mon, 2 Jun 2008, Derek Gaston wrote:
>
>> On Jun 2, 2008, at 9:45 AM, Roy Stogner wrote:
>>
>> What advantages does FEMSystem have over using Ben's
>> NonlinearImplicit system stuff using SNES?  I've used both, but I
>> don't really see the difference.
>
> For you, since the numerical jacobians aren't useful right now, one of
> the biggest advantages is gone.  But having the time integration
> factored out is nice for transient problems, and the interior_, side_,
> etc. helper methods are nice for nonlinear problems.  But the biggest

Just wanted to mention: not having to write for loops over elements and
element sides is a big benefit for me.  You also don't have an
opportunity to forget to constrain the element matrix and vector before
inserting it into the system matrix.

> remaining benefits are still "to be written", I think.  Threaded
> assembly (without having to write your own weird TBB code) should be a
> nice feature, even if we'll have to go through an API change to get
> there.  And there are some other niceties (bubble-function-based,
> physics-dependent error estimators, for example) that can really only
> be implemented if the library can request residuals of one element at
> a time.
>
>> For us, we're using the NonlinearImplicit system because we hope to
>> fill out the Trilinos interfaces at some point and use NOX instead...
>
> Is something wrong with the FEMSystem->SNES interface?  I admit that I
> basically gave up on PetscDiffSolver when it turned out that my own
> quasi-Newton code was faster on my apps, but there's at least a place
> there to start.  If you're doing matrix-free work, the fact that
> DiffSystem currently inherits from ImplicitSystem may be a stumbling
> block, but NonlinearImplicitSystem has the same problem.
>
> Roy
From: Derek Gaston <friedmud@gm...> - 2008-06-02 03:51:10

So... I've been working with the NonlinearImplicit system... solving
jacobian-free problems using Petsc SNES.  Everything is going pretty
well (Ben's interface is great!)... but I've hit a bit of a snag on the
boundary conditions.

I would really like to impose both Dirichlet and Neumann conditions
similarly... by integrating their residual contributions over the
boundary.  So for instance for a Dirichlet condition I have something
like this:

Intg_boundary( u - BC )

and similarly for Neumann:

Intg_boundary( grad_u - BC )

The trouble that I'm running into is that these residual contributions
seem to be too small... and are only very loosely satisfied by SNES.
For instance, solving u''=0 on [0,1]x[0,1] with u=0 at x=0 and u=1 at
x=1... I end up with u actually being 0.3333 at x=0 and 0.6666 at x=1.
Of course I can dial down the residual tolerances for snes... but this
only gets me so much closer.

The other thing I've tried is to put a multiplier in front of the
boundary integral (like 1e7 or so)... this ends up working well for
satisfying the boundary conditions... but then the interior residuals
are swamped out... meaning they aren't well resolved.

So, I'm wondering if anyone has any better ideas?  Is there a good way
to choose the correct multiplier?  Am I going about the whole thing
incorrectly?

Thanks for any help...

Derek
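The boundary values Derek reports (0.3333 and 0.6666) are exactly what an
unscaled penalty term produces.  A self-contained 1-D sketch of the same
problem (plain Python, not libMesh code; `solve_poisson_penalty` and its
setup are illustrative, not any library's API) shows the effect:

```python
# Reproduce the effect described above: enforcing u(0)=0, u(1)=1 on u''=0
# with a penalty term in the residual, and varying the penalty multiplier.
# With penalty=1 (no multiplier) the boundary values come out as exactly
# 1/3 and 2/3; with a large multiplier the BCs are enforced tightly.

def solve_poisson_penalty(n, penalty, g0=0.0, g1=1.0):
    """Linear-FE solve of u'' = 0 on [0,1] with penalty Dirichlet BCs."""
    h = 1.0 / n
    # Tridiagonal stiffness matrix (1/h)*[-1, 2, -1]; the boundary rows
    # get the one-sided entry 1/h plus the penalty, and the right-hand
    # side carries penalty * (boundary value).
    lo = [-1.0 / h] * n             # sub-diagonal
    di = [2.0 / h] * (n + 1)        # diagonal
    hi = [-1.0 / h] * n             # super-diagonal
    di[0] = 1.0 / h + penalty
    di[n] = 1.0 / h + penalty
    b = [0.0] * (n + 1)
    b[0] = penalty * g0
    b[n] = penalty * g1
    # Thomas algorithm (tridiagonal Gaussian elimination)
    for i in range(1, n + 1):
        m = lo[i - 1] / di[i - 1]
        di[i] -= m * hi[i - 1]
        b[i] -= m * b[i - 1]
    u = [0.0] * (n + 1)
    u[n] = b[n] / di[n]
    for i in range(n - 1, -1, -1):
        u[i] = (b[i] - hi[i] * u[i + 1]) / di[i]
    return u

u_weak = solve_poisson_penalty(n=10, penalty=1.0)
u_strong = solve_poisson_penalty(n=10, penalty=1e10)
print(u_weak[0], u_weak[-1])      # ~0.3333 and ~0.6666, as in the post
print(u_strong[0], u_strong[-1])  # ~0.0 and ~1.0
```

Independent of the mesh size, the penalty-1 solution is u = (1 + x)/3,
which is where the 1/3 and 2/3 boundary values come from.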
From: John Peterson <jwpeterson@gm...> - 2008-06-02 04:39:07

On Sun, Jun 1, 2008 at 10:51 PM, Derek Gaston <friedmud@...> wrote:
> So... I've been working with the NonlinearImplicit system... solving
> jacobian-free problems using Petsc SNES.  Everything is going pretty
> well (Ben's interface is great!)... but I've hit a bit of a snag on
> the boundary conditions.  I would really like to impose both Dirichlet
> and Neumann conditions similarly... by integrating their residual
> contributions over the boundary.  So for instance for a Dirichlet
> condition I have something like this:
>
> Intg_boundary( u - BC )

Hey Derek,

Are you doing a penalty formulation for the Dirichlet BC?  You'll need
to multiply by a large penalty for this condition to be enforced.

> and similarly for Neumann:
>
> Intg_boundary( grad_u - BC )

Here I don't think you subtract... you just replace du/dn with whatever
the BC is (it may depend on u, it may be nonlinear) and then add that
term to your weighted residual statement.

J
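The "replace, don't subtract" point follows from integration by parts.
A sketch for the model problem -u'' = f (equivalently -Laplacian(u) = f
in higher dimensions), using the standard sign conventions:

```latex
-\int_\Omega (\Delta u)\, v \, dx
  \;=\; \int_\Omega \nabla u \cdot \nabla v \, dx
  \;-\; \int_{\partial\Omega} \frac{\partial u}{\partial n}\, v \, ds
```

A Neumann condition \partial u / \partial n = g is imposed by
substituting g into the boundary integral that already appears, so the
flux BC enters the residual as -Intg_boundary( g * v ); there is no
separate "grad_u - BC" term to subtract.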
From: Derek Gaston <friedmud@gm...> - 2008-06-02 13:45:53

Oops... you're right about the Neumann condition... I forgot that I get
a boundary term out of the integration by parts that I can just use for
this purpose...

I still don't quite know about the Dirichlet condition though.  Yes, I
am enforcing it essentially as a penalty... but that penalty just goes
into the residual and is finite differenced like the rest of it.  If
the residual goes to zero... then it shouldn't matter if there is a
multiplier on the penalty or not.  The only question is of convergence.
Or do you think it's the rest of the normal stuff in the residual for a
boundary DOF that's screwing it up... and that's why I need the
multiplier?  Hmmmm.... I guess that makes sense.

So, any guidance on choosing a multiplier?  The problem is obviously
sensitive to it... because if I make it too large I get the wrong
answer just as easily as making it too small (and I do get good answers
when I get the multiplier in the right range, BTW).

Thanks,
Derek
From: John Peterson <jwpeterson@gm...> - 2008-06-02 14:15:38

Hey Derek,

I'm sure you already have it, but I just wanted to say: be sure to
multiply by the test function (in libMesh, "phi_face[i]" or
"phi_side[i]").  I just remembered we had forgotten to do this once
before in the Laplace-Young application :)

On Mon, Jun 2, 2008 at 8:45 AM, Derek Gaston <friedmud@...> wrote:
> Oops... you're right about the Neumann condition... I forgot that I
> get a boundary term out of the integration by parts that I can just
> use for this purpose...
>
> I still don't quite know about the Dirichlet condition though.  Yes,
> I am enforcing it essentially as a penalty... but that penalty just
> goes into the residual and is finite differenced like the rest of it.

Right.  Another way to think of it: the analytical jacobian
contribution from the standard Dirichlet penalty term is a mass matrix
"phi[i]*phi[j]" multiplied by the penalty parameter.  Likewise, when
you finite difference this term you should get something proportional
to the penalty.  If "u - BC" differs appreciably from zero, the
residual contributions from this term will be large.  We typically use
a value on the order of 10^10.  What value are you using?

> If the residual goes to zero... then it shouldn't matter if there is
> a multiplier on the penalty or not.  The only question is of
> convergence.  Or do you think it's the rest of the normal stuff in
> the residual for a boundary DOF that's screwing it up... and that's
> why I need the multiplier?

I don't *think* the penalty should affect the convergence rate
strongly.  But I don't have a lot of experience with the matrix-free
Newton-Krylov techniques so I can't say for sure.

> Hmmmm.... I guess that makes sense.  So, any guidance on choosing a
> multiplier?  The problem is obviously sensitive to it... because if I
> make it too large I get the wrong answer just as easily as making it
> too small (and I do get good answers when I get the multiplier in the
> right range, BTW).

AFAIK, the solution should not be sensitive to the size of the penalty,
provided it's large enough.  Could be something else wrong with your
residual assembly code?

J
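John's point that finite differencing the penalty term "should get
something proportional to the penalty" can be checked in isolation.  A
minimal sketch (plain Python; `penalty_residual` and `fd_jacobian` are
illustrative names, not libMesh API) finite-differences the penalty
contribution at a single boundary dof, where the analytic jacobian
entry is just the penalty itself:

```python
# Finite-difference the penalty residual term at a boundary dof and
# check that the resulting jacobian contribution scales with the
# penalty parameter, as discussed above.

def penalty_residual(u0, penalty, bc=0.0):
    # Residual contribution of the penalty Dirichlet term at one dof
    return penalty * (u0 - bc)

def fd_jacobian(u0, penalty, eps=1e-7):
    # One-sided finite difference d(residual)/d(u0)
    return (penalty_residual(u0 + eps, penalty)
            - penalty_residual(u0, penalty)) / eps

for p in (1.0, 1e5, 1e10):
    print(p, fd_jacobian(0.5, p))   # each is approximately p
```

So when the penalty dwarfs the physical residual entries, both the
residual and the finite-differenced jacobian at boundary dofs are
dominated by it, which is exactly the "interior residuals are swamped
out" symptom from the original post.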
From: Derek Gaston <friedmud@gm...> - 2008-06-02 15:31:24

On Jun 2, 2008, at 8:15 AM, John Peterson wrote:
> Hey Derek,
>
> I'm sure you already have it, but I just wanted to say: be sure to
> multiply by the test function (in libMesh, "phi_face[i]" or
> "phi_side[i]").  I just remembered we had forgotten to do this once
> before in the Laplace-Young application :)

Hehe... yes... thanks for the reminder.  I had already made sure that
was the case here!  Didn't want to make that mistake twice....

> If "u - BC" differs appreciably from zero, the residual contributions
> from this term will be large.  We typically use a value on the order
> of 10^10.  What value are you using?

Right now I've settled on 10^5... but I might revisit that.

> I don't *think* the penalty should affect the convergence rate
> strongly.  But I don't have a lot of experience with the matrix-free
> Newton-Krylov techniques so I can't say for sure.

It's weird... it's not really affecting the _rate_ as much as it's
actually affecting the convergence itself... i.e. the problem isn't
fully converged when SNES says it is.  Maybe I just need to play with
the convergence tolerances some more...

Thanks for your help!  As it is now... the code is working, I just have
an unsatisfactory constant in there (1e5)...

Derek
From: Roy Stogner <roy@st...> - 2008-06-02 15:46:13

On Mon, 2 Jun 2008, John Peterson wrote:
> I'm sure you already have it, but I just wanted to say: be sure to
> multiply by the test function (in libMesh, "phi_face[i]" or
> "phi_side[i]").  I just remembered we had forgotten to do this once
> before in the Laplace-Young application :)

It would be nice to set up some sort of sneaky strong typing to prevent
this sort of mistake in the long run.  Maybe: a Number times a SideReal
gives a SideNumber.  A Number times an InteriorReal gives an
InteriorNumber.  Trying to multiply an InteriorNumber (like phi[i][qp])
and a SideNumber (like JxW_face[i][qp]) without an explicit cast gives
a compiler error.

Although, when I change the FEMSystem interface to be threadable, it'll
fix this problem for those users as a side effect; you'll only have
access to an interior FE object when computing interior terms, and
you'll only have access to a side FE object when computing side terms.

> Right.  Another way to think of it: the analytical jacobian
> contribution from the standard Dirichlet penalty term is a mass
> matrix "phi[i]*phi[j]" multiplied by the penalty parameter.
> Likewise, when you finite difference this term you should get
> something proportional to the penalty.  If "u - BC" differs
> appreciably from zero, the residual contributions from this term will
> be large.  We typically use a value on the order of 10^10.  What
> value are you using?

I think the best way to think about it is as Robin boundary conditions:
you're basically weakly enforcing boundary terms similar to
"u = u_b + 1/penalty*du/dn" (where the exact details depend on your PDE
and how you add the penalty terms).  In other words, even a modest
penalty like 10^6 should be more than enough, until you solve the rest
of your system to a tolerance that looks like 10^-6.

>> If the residual goes to zero... then it shouldn't matter if there is
>> a multiplier on the penalty or not.  The only question is of
>> convergence.  Or do you think it's the rest of the normal stuff in
>> the residual for a boundary DOF that's screwing it up... and that's
>> why I need the multiplier?
>
> I don't *think* the penalty should affect the convergence rate
> strongly.  But I don't have a lot of experience with the matrix-free
> Newton-Krylov techniques so I can't say for sure.

It's been a while since I looked at this.  From what I remember, the
penalty *looks* like it should affect the convergence rate strongly,
since it can blow up the condition number of your matrix.  But what it
really ends up giving you is a nice set of clustered eigenvalues that
are just really far-removed from your normal eigenvalues, and in my
experience that gets nicely fixed up by even the most basic
preconditioning methods.

>> Hmmmm.... I guess that makes sense.  So, any guidance on choosing a
>> multiplier?  The problem is obviously sensitive to it... because if
>> I make it too large I get the wrong answer just as easily as making
>> it too small (and I do get good answers when I get the multiplier in
>> the right range, BTW).

I think the combination of a too-large penalty value and floating point
error can get you in trouble, but usually you have to get up near 10^15
for that to happen.  I usually just pull "10^10" out of a hat and get
good results.

> AFAIK, the solution should not be sensitive to the size of the
> penalty, provided it's large enough.  Could be something else wrong
> with your residual assembly code?

I think John's concerns are probably correct, but a couple other
options if we're wrong:

You can weakly enforce boundary conditions with terms based on
Nitsche's method or on DG methods; I've never tried it, but that
supposedly has benefits for conditioning and error estimation too.

You can extend our DofMap constraints code to allow Dirichlet boundary
conditions to be applied strongly.  Right now we only support
"constrained_dof_i = sum(coefficient_ij*otherdof_j)" sorts of
constraints, but it wouldn't be too hard to add a "plus constant_i"
term to that.

Roy
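Roy's Robin interpretation, u = u_b + (1/penalty)*du/dn, can be checked
numerically.  A minimal sketch (plain Python, one linear element on
[0,1] for u'' = 0 with penalty-enforced u(0)=0 and u(1)=1;
`solve_one_element` is an illustrative name, not any library's API):

```python
# Check the Robin-BC interpretation of the penalty method: the computed
# boundary value satisfies u(0) = u_b + (1/p) * u'(0), so the boundary
# error shrinks like 1/p as the penalty p grows.

def solve_one_element(p):
    # 2x2 penalized system (element size h = 1):
    #   [1+p  -1 ] [u0]   [p*0]
    #   [-1   1+p] [u1] = [p*1]
    a, b, c, d = 1.0 + p, -1.0, -1.0, 1.0 + p
    r0, r1 = p * 0.0, p * 1.0
    det = a * d - b * c          # Cramer's rule for the 2x2 solve
    u0 = (r0 * d - b * r1) / det
    u1 = (a * r1 - c * r0) / det
    return u0, u1

for p in (1e2, 1e4, 1e6):
    u0, u1 = solve_one_element(p)
    slope = u1 - u0              # = u'(0), since the solution is linear
    # Robin relation: u0 should match u_b + slope/p = 0 + slope/p
    print(p, u0, slope / p)
```

For this model problem the relation is exact: u0 = 1/(p+2) and
u'(0) = p/(p+2), so u0 = u'(0)/p, matching Roy's rule of thumb that a
penalty of 10^k enforces the BC to roughly 10^-k.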
From: Derek Gaston <friedmud@gm...> - 2008-06-02 15:59:27

Thanks for the reply Roy...

On Jun 2, 2008, at 9:45 AM, Roy Stogner wrote:
> It would be nice to set up some sort of sneaky strong typing to
> prevent this sort of mistake in the long run.  Maybe: a Number times
> a SideReal gives a SideNumber.  A Number times an InteriorReal gives
> an InteriorNumber.  Trying to multiply an InteriorNumber (like
> phi[i][qp]) and a SideNumber (like JxW_face[i][qp]) without an
> explicit cast gives a compiler error.

In this case the problem wasn't so much that I was multiplying by an
interior shape function... it was that I wasn't multiplying by _any_
shape function ;)  But this typing doesn't sound like too bad of an
idea anyway...

> Although, when I change the FEMSystem interface to be threadable,
> it'll fix this problem for those users as a side effect; you'll only
> have access to an interior FE object when computing interior terms,
> and you'll only have access to a side FE object when computing side
> terms.

What advantages does FEMSystem have over using Ben's NonlinearImplicit
system stuff using SNES?  I've used both, but I don't really see the
difference.  For us, we're using the NonlinearImplicit system because
we hope to fill out the Trilinos interfaces at some point and use NOX
instead...

> It's been a while since I looked at this.  From what I remember, the
> penalty *looks* like it should affect the convergence rate strongly,
> since it can blow up the condition number of your matrix.  But what
> it really ends up giving you is a nice set of clustered eigenvalues
> that are just really far-removed from your normal eigenvalues, and in
> my experience that gets nicely fixed up by even the most basic
> preconditioning methods.

Yeah -- see my recent reply to John.  I don't think it's necessarily
having convergence "problems"... but the convergence tolerances might
not be right.

> I think the combination of a too-large penalty value and floating
> point error can get you in trouble, but usually you have to get up
> near 10^15 for that to happen.  I usually just pull "10^10" out of a
> hat and get good results.

Yeah -- I don't think I'm hitting floating point problems.

> You can weakly enforce boundary conditions with terms based on
> Nitsche's method or on DG methods; I've never tried it, but that
> supposedly has benefits for conditioning and error estimation too.

If we stick to this path we might look at these... thanks for the info.

> You can extend our DofMap constraints code to allow Dirichlet
> boundary conditions to be applied strongly.  Right now we only
> support "constrained_dof_i = sum(coefficient_ij*otherdof_j)" sorts of
> constraints, but it wouldn't be too hard to add a "plus constant_i"
> term to that.

This might be where we go long term.  I was talking to my group earlier
about this and they were talking about doing just this.  I, myself,
would rather not.  I'd really rather see the BCs directly in the
residual calculation (to make them very plug-and-playable)... but we'll
see.  For now, I'm happy with how the code is working... just wanted
some input from you guys.

Thanks again for the great reply.

Derek
From: Roy Stogner <roy@st...> - 2008-06-02 16:17:50

On Mon, 2 Jun 2008, Derek Gaston wrote:
> On Jun 2, 2008, at 9:45 AM, Roy Stogner wrote:
>
> What advantages does FEMSystem have over using Ben's
> NonlinearImplicit system stuff using SNES?  I've used both, but I
> don't really see the difference.

For you, since the numerical jacobians aren't useful right now, one of
the biggest advantages is gone.  But having the time integration
factored out is nice for transient problems, and the interior_, side_,
etc. helper methods are nice for nonlinear problems.  But the biggest
remaining benefits are still "to be written", I think.  Threaded
assembly (without having to write your own weird TBB code) should be a
nice feature, even if we'll have to go through an API change to get
there.  And there are some other niceties (bubble-function-based,
physics-dependent error estimators, for example) that can really only
be implemented if the library can request residuals of one element at
a time.

> For us, we're using the NonlinearImplicit system because we hope to
> fill out the Trilinos interfaces at some point and use NOX instead...

Is something wrong with the FEMSystem->SNES interface?  I admit that I
basically gave up on PetscDiffSolver when it turned out that my own
quasi-Newton code was faster on my apps, but there's at least a place
there to start.  If you're doing matrix-free work, the fact that
DiffSystem currently inherits from ImplicitSystem may be a stumbling
block, but NonlinearImplicitSystem has the same problem.

Roy