From: Roy S. <ro...@st...> - 2008-06-02 15:46:13
On Mon, 2 Jun 2008, John Peterson wrote:

> I'm sure you already have it, but I just wanted to say: be sure to
> multiply by the test function (in Libmesh "phi_face[i]" or
> "phi_side[i]"). I just remembered we had forgotten to do this once
> before in the Laplace-Young application :-)

It would be nice to set up some sort of sneaky strong typing to prevent
this sort of mistake in the long run. Maybe:

A Number times a SideReal gives a SideNumber.
A Number times an InteriorReal gives an InteriorNumber.
Trying to multiply an InteriorNumber (like phi[i][qp]) and a SideNumber
(like JxW_face[qp]) without an explicit cast gives a compiler error.

Although, when I change the FEMSystem interface to be threadable, it'll
fix this problem for those users as a side effect; you'll only have
access to an interior FE object when computing interior terms, and
you'll only have access to a side FE object when computing side terms.

> Right. Another way to think of it: the analytical jacobian
> contribution from the standard Dirichlet penalty term is a mass matrix
> "phi[i]*phi[j]" multiplied by the penalty parameter. Likewise, when
> you finite difference this term you should get something proportional
> to the penalty. If "u-BC" differs appreciably from zero, the residual
> contributions from this term will be large. We typically use a value
> on the order of 10^10. What value are you using?

I think the best way to think about it is as Robin boundary conditions:
you're basically weakly enforcing boundary terms similar to
"u = u_b + 1/penalty*du/dn" (where the exact details depend on your PDE
and on how you add the penalty terms). In other words, even a modest
penalty like 10^6 should be more than enough, until you solve the rest
of your system to a tolerance that looks like 10^-6.

>> goes to zero... then it shouldn't matter if there is a multiplier on
>> the penalty or not. The only question is of convergence.
>> Or do you think it's the rest of the normal stuff in the residual for
>> a boundary DOF that's screwing it up... and that's why I need the
>> multiplier?
>
> I don't *think* the penalty should affect the convergence rate
> strongly. But I don't have a lot of experience with the matrix-free
> Newton-Krylov techniques so I can't say for sure.

It's been a while since I looked at this. From what I remember, the
penalty *looks* like it should affect the convergence rate strongly,
since it can blow up the condition number of your matrix. But what it
really ends up giving you is a nice set of clustered eigenvalues that
are just really far removed from your normal eigenvalues, and in my
experience that gets nicely fixed up by even the most basic
preconditioning methods.

>> Hmmmm.... I guess that makes sense. So, any guidance on choosing a
>> multiplier? The problem is obviously sensitive to it... because if I
>> make it too large I get the wrong answer just as easily as making it
>> too small (and I do get good answers when I get the multiplier in the
>> right range, BTW).

I think the combination of a too-large penalty value and floating point
error can get you in trouble, but usually you have to get up near 10^15
for that to happen. I usually just pull "10^10" out of a hat and get
good results.

> AFAIK, the solution should not be sensitive to the size of the
> penalty, provided it's large enough. Could be something else wrong
> with your residual assembly code?

I think John's concerns are probably correct, but a couple of other
options in case we're wrong:

You can weakly enforce boundary conditions with terms based on Nitsche's
method or on DG methods; I've never tried it, but that supposedly has
benefits for conditioning and error estimation too.

You can extend our DofMap constraints code to allow Dirichlet boundary
conditions to be applied strongly.
Right now we only support
"constrained_dof_i = sum(coefficient_ij*otherdof_j)" sorts of
constraints, but it wouldn't be too hard to add a "plus constant_i" term
to that.

---
Roy