Re: [Libmesh-users] boundary condition treatment
From: Kirk, Benjamin (JSC-EG) - 2006-10-06 18:27:02

```
Sorry to jump in on this late, but I have been out of town...

The DenseMatrix and DenseVector condense() function implements exactly
what John says, and can be used if you know the degree of freedom values
on the boundary nodes.

For a non-interpolatory basis you usually don't know the values a
priori, or at least they are not trivial to obtain. For that reason you
can add the penalty of the L2 projection of the boundary data, which
works in general.

Another reason for the penalty approach is that in general an element
may have nodes which reside on the boundary but no face there. Using
the penalty method allows you to consider only elements with faces on
the boundary when applying BCs. These large penalty entries then "trump"
any other contributions to those nodes in floating point arithmetic.

-Ben

-----Original Message-----
From: libmesh-users-bounces@... [mailto:libmesh-users-bounces@...]
On Behalf Of Hae-Won Choi
Sent: Friday, September 29, 2006 9:54 AM
To: John Peterson
Cc: libmesh-users@...; li pan
Subject: Re: [Libmesh-users] boundary condition treatment

Hi, actually John clarified exactly what I want. Thank you, John. For
PETSc, MatZeroRows sets the diagonal entries of all Dirichlet boundary
rows to 1 (the other entries in those rows are zeroed), as I mentioned
last time.

In fact you can find examples using this approach in PETSc, since I
learned it from PETSc. What you have to do is modify the RHS as John's
example shows.

I have used this method in my other codes using PETSc (but not in
libMesh yet). This approach gives the same results as the
reduced-matrix method.

Hae-Won

On Sep 29, 2006, at 5:45 AM, John Peterson wrote:

> Just as an addendum to Tim's note, you can maintain any symmetry
> originally present in the problem by "subtracting" the column entries
> multiplied by the Dirichlet value from the right hand side vector.
>
> If Au = b, where
>
> [a_11 a_12 a_13] [u1]   [b1]
> [a_21 a_22 a_23] [u2] = [b2]
> [a_31 a_32 a_33] [u3]   [b3]
>
> and u1 = g1, a non-homogeneous BC value, we can modify Au = b as:
>
> [  1    0    0  ] [u1]   [g1]
> [  0  a_22 a_23 ] [u2] = [b2 - g1*a_21]
> [  0  a_32 a_33 ] [u3]   [b3 - g1*a_31]
>
> This imposes u1 = g1, and maintains any original symmetry of A.
>
> -John
>
> Tim Kröger writes:
>> Dear all,
>>
>> On Fri, 29 Sep 2006, li pan wrote:
>>
>>> I'm also thinking about this. Are you sure that you only need to
>>> zero all the entries of the rows? I read somewhere that columns
>>> should also be zeroed. Could somebody confirm this?
>>
>> If you zero only the row entries, the matrix will no longer be
>> symmetric. This is often considered a drawback (provided that the
>> matrix was symmetric before) because certain solvers (e.g. CG) can
>> no longer be used.
>>
>> On the other hand, if the column entries are also zeroed, the
>> solution of the system will be wrong -- except in the case of
>> *homogeneous* Dirichlet conditions. For this reason, some people
>> transform their problems to homogeneous boundary conditions, i.e.
>> they do in practice the same thing that is usually done theoretically
>> anyway: they subtract a function that fulfills all Dirichlet
>> boundary conditions but not the PDE. Note that in the discrete case,
>> such a function is trivial to find.
>>
>> Because I find this all quite unsatisfactory, I was glad to see that
>> libMesh uses the penalty method (which I did not know before),
>> because it is easy to implement, works in all cases, and does not
>> destroy symmetry or positivity of the matrix.
>>
>> Best Regards,
>>
>> Tim

-------------------------------------------------------------------------
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share
your opinions on IT & business topics through brief surveys -- and earn
cash http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
_______________________________________________
Libmesh-users mailing list
Libmesh-users@...
https://lists.sourceforge.net/lists/listinfo/libmesh-users
```
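John's 3x3 example above can be put into code. The helper below is my own
illustration of the idea (not libMesh's condense(), though that routine
implements the same operation): zero the row and column of a constrained
degree of freedom, put 1 on the diagonal, and fold the column entries
times the Dirichlet value into the right hand side, so any symmetry of A
survives.

```python
def apply_dirichlet_symmetric(A, b, idx, g):
    """Modify A and b in place so that u[idx] = g, keeping A symmetric:
    subtract g times the column entries from b, then zero row and column."""
    n = len(b)
    for i in range(n):
        if i != idx:
            b[i] -= g * A[i][idx]   # move column contribution to the RHS
            A[i][idx] = 0.0
            A[idx][i] = 0.0
    A[idx][idx] = 1.0
    b[idx] = g

def solve_dense(A, b):
    """Gaussian elimination without pivoting (fine for this diagonally
    dominant demo); modifies A and b in place and returns the solution."""
    n = len(b)
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# A symmetric 3x3 system, as in John's sketch; impose u1 = 2.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
apply_dirichlet_symmetric(A, b, 0, 2.0)
u = solve_dense(A, b)   # u[0] is exactly the boundary value 2.0
```

After the modification the matrix is [[1,0,0],[0,4,1],[0,1,4]], still
symmetric, and the remaining unknowns solve the reduced 2x2 system.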
Re: [Libmesh-users] boundary condition treatment
From: Karl Tomlinson - 2006-10-09 03:44:41

```
On Fri, 6 Oct 2006 13:26:34 -0500, Benjamin Kirk wrote:

> The DenseMatrix and DenseVector condense() function implements
> exactly what John says, and can be used if you know the degree
> of freedom values on the boundary nodes.

Thanks for this - I'll check these out.

> For a non-interpolatory basis you usually don't know the values a
> priori, or at least they are not trivial to obtain. For that
> reason you can add the penalty of the L2 projection of the
> boundary data, which works in general.

I can see that the penalty method works in more cases, but there are
still some cases where I can't see how it can work well.

The penalty method works well if determining the values of the degrees
of freedom involved in satisfying the Dirichlet boundary conditions can
be considered separately from solving the domain equations for the
other degrees of freedom.

The situation seems different, however, if the degrees of freedom that
influence the values on the Dirichlet boundary are not completely
constrained by the boundary conditions, i.e. if the boundary conditions
remain satisfied provided the values of these degrees of freedom
satisfy a non-full-rank system of equations.

For problems with natural boundary conditions, the equations
corresponding to degrees of freedom that influence the values on the
Dirichlet boundary will usually be inconsistent (not satisfied by the
exact solution). This is not a problem if the penalty coefficient can
be made so large that the L2 projection of the boundary data "trumps"
the other contributions to the equations.

However, if the boundary data projections don't completely constrain
the associated degrees of freedom, then their values should be
determined by the domain equation contributions -- but those
contributions are inconsistent and are trumped by the boundary
projections.

Choosing too small a penalty coefficient results in errors from the
inconsistent equations, and it looks like choosing too large a
coefficient would result in numerical errors due to the trumping of the
domain terms.

The Nitsche method for Dirichlet boundary conditions looks like it
provides an attractive alternative. It is similar to the penalty method
but corrects the domain equations so that they are consistent. There is
still a coefficient to be selected for the Dirichlet terms that depends
on the mesh (for a positive definite system), but it does not need to
be so large as to swamp the domain equations, so the system is better
conditioned.

More details are in M. Juntunen and R. Stenberg's "A finite element
method for general boundary conditions" in the Proceedings of the 18th
Nordic Seminar on Computational Mechanics
(http://math.tkk.fi/~rstenber/Publications/nscm_general_boundary.pdf),
which also points out the inconsistency of the penalty method.

The Nitsche method can also be used on interfaces between portions of
the domain with non-matching meshes, as analysed in R. Becker,
P. Hansbo, and R. Stenberg's "A finite element method for domain
decomposition with non-matching grids", Mathematical Modelling and
Numerical Analysis 37 (2003) 209-225
(http://www.math.hut.fi/~rstenber/Publications/Becker-Hansbo-Stenberg.pdf).
```
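To make the penalty approach under discussion concrete, here is a minimal
sketch of my own construction (not code from the thread or from libMesh):
a 1D Poisson problem -u'' = 0 on [0,1] with u(0) = 0 and u(1) = 1,
discretized with linear elements, where each Dirichlet row gets a 1/eps
penalty added to the diagonal and g/eps added to the right hand side
instead of being eliminated. The penalty rows dominate the boundary
equations, and the solution lands close to the linear interpolant.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system given its sub-, main, and super-diagonals."""
    n = len(diag)
    diag = list(diag)
    rhs = list(rhs)
    for i in range(1, n):                     # forward elimination
        m = sub[i - 1] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in reversed(range(n - 1)):          # back substitution
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return x

n_elem = 4
eps = 1e-10                       # penalty parameter; the weight is 1/eps
h = 1.0 / n_elem
n = n_elem + 1

# Linear-element stiffness for -u'' on a uniform mesh: (1/h)*tridiag(-1,2,-1),
# with half-stencil rows at the two end nodes.
sub = [-1.0 / h] * (n - 1)
sup = [-1.0 / h] * (n - 1)
diag = [2.0 / h] * n
diag[0] = diag[-1] = 1.0 / h
rhs = [0.0] * n

# Penalty: add (1/eps)*(u - g) contributions so the boundary rows "trump"
# the domain contributions, rather than eliminating the boundary dofs.
for node, g in ((0, 0.0), (n - 1, 1.0)):
    diag[node] += 1.0 / eps
    rhs[node] += g / eps

u = thomas(sub, diag, sup, rhs)   # close to the interpolant u(x) = x
```

For this model problem the penalty solution differs from the exact nodal
values by roughly eps, which is what makes the choice of eps a trade-off
rather than a free parameter.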
Re: [Libmesh-users] boundary condition treatment
From: Roy Stogner - 2006-10-09 05:38:34

```
On Mon, 9 Oct 2006, Karl Tomlinson wrote:

> The situation seems different, however, if the degrees of freedom
> that influence the values on the Dirichlet boundary are not
> completely constrained by the boundary conditions.

This is actually the case for some of the problems I've run. When
using Clough-Tocher elements for second order problems, for example, in
general it's only weighted sums of nodal gradient degrees of freedom
that are constrained, but applying the penalty method on edge integrals
still works fine.

> For problems with natural boundary conditions, the equations
> corresponding to degrees of freedom that influence the values on
> the Dirichlet boundary will usually be inconsistent (not
> satisfied by the exact solution). This is not a problem if the
> penalty coefficient can be made so large that the L2 projection of
> boundary data "trumps" the other contributions to the equations.
>
> However, if the boundary data projections don't completely
> constrain the associated degrees of freedom,

Could you give a concrete example where this wouldn't occur? I don't
see even in theory how adding a heavily weighted ((u-g),v) integral on
the Dirichlet boundary edges wouldn't suffice, assuming that you're
happy with solving the problem with Robin boundary conditions rather
than Dirichlet.

> The Nitsche method for Dirichlet boundary conditions looks like it
> provides an attractive alternative. It is similar to the penalty
> method but corrects the domain equations so that they are
> consistent.

That certainly sounds preferable.

> There is still a coefficient to be selected for the Dirichlet
> terms that depends on the mesh (for a positive definite system),
> but it does not need to be so large as to swamp the domain
> equations and so the system is better conditioned.

As does that.

> More details are in M. Juntunen and R. Stenberg's "A finite element
> method for general boundary conditions" in the Proceedings of the
> 18th Nordic Seminar on Computational Mechanics
> (http://math.tkk.fi/~rstenber/Publications/nscm_general_boundary.pdf),
> which also points out the inconsistency of the penalty method.

Is there a typo in this paper? In the first term of equation 5, I
would expect there to be a 1 in the numerator rather than an epsilon.

How do you choose gamma in practice? Lemma 3 gives an upper bound, and
equations 14-15 suggest (perhaps misleadingly) that setting gamma too
low will increase the final error.

This looks interesting. I'll need to read it through again after I've
had some sleep, though.
---
Roy
```
Re: [Libmesh-users] boundary condition treatment
From: Karl Tomlinson - 2006-10-10 10:09:47

```
Roy Stogner writes:

> On Mon, 9 Oct 2006, Karl Tomlinson wrote:
>
>> The situation seems different, however, if the degrees of freedom
>> that influence the values on the Dirichlet boundary are not
>> completely constrained by the boundary conditions.
>
> This is actually the case for some of the problems I've run. When
> using Clough-Tocher elements for second order problems, for example,
> in general it's only weighted sums of nodal gradient degrees of
> freedom that are constrained, but applying the penalty method on edge
> integrals still works fine.

I'm pleased it still works fine. What size coefficient do you use for
the penalty term (and are the other terms of unit magnitude)? Or does
this not seem to matter much?

>> However, if the boundary data projections don't completely
>> constrain the associated degrees of freedom,
>
> Could you give a concrete example where this wouldn't occur?

I think the case you describe above is an example of what I am trying
to describe. If I understand correctly, the values of the solution on
the Dirichlet boundaries are affected by nodal gradient degrees of
freedom, but these degrees of freedom represent x and y derivatives.
If the boundary does not run exactly in either the x or y direction,
then a linear combination of these derivatives affects the value on the
boundary. The ((u-g),v) integral therefore depends on this linear
combination and contributes to the equations corresponding to both
derivatives, but the contributions are linearly dependent.

> I don't see even in theory how adding a heavily weighted
> ((u-g),v) integral on the Dirichlet boundary edges wouldn't
> suffice, assuming that you're happy with solving the problem
> with Robin boundary conditions rather than Dirichlet.

This integral certainly does suffice to ensure that the Robin boundary
condition is satisfied. And mathematically (with infinite precision
arithmetic) the problem is well constrained. What I'm concerned about
is that if the weight is so heavy that the Robin condition is almost a
Dirichlet condition, then does it introduce too much numerical error
into the conservation equations for the domain?

In the example above, the ((u-g),v) integral provides half of the
definition of the x and y derivatives (so that the boundary condition
is satisfied). In order to satisfy the domain conservation equation,
the (grad(u),grad(v)) integrals (or similar) provide the other half of
the definition, and some of this information must be obtained from
equations that may have been almost swamped by the ((u-g),v) term.

Perhaps if the penalty weight is chosen appropriately, there is still
enough precision. E.g. if the weight is 1e6 (epsilon is 1e-6), then
this might be good enough to ensure that the Robin condition is
effectively a Dirichlet condition (to about 6 significant figures). It
would remove about 6 figures of accuracy from the terms in the domain
equations, but there would still be about 10 significant figures. But
I feel this may be an oversimplification.

How should the penalty parameter depend on mesh size? It seems that
the ratio of magnitudes of the boundary integrals to the domain
integrals (of derivatives) is of order h to 1, so the boundary weight
should be proportional to 1/h (epsilon to h). I think this is good,
though, as the Robin boundary condition becomes more Dirichlet-like as
the mesh is refined.

> Is there a typo in this paper? In the first term of equation 5, I
> would expect there to be a 1 in the numerator rather than an epsilon.

I'm seeing a 1 in the numerator of the first term after the summation
sign, and I think this is right. The equation seems to behave
appropriately in the limits of Remarks 2-4.

> How do you choose gamma in practice?

I don't know. I only learned of this Nitsche method as I was trying to
understand the penalty method better. I'm actually used to applying
Dirichlet boundary conditions by removing degrees of freedom from the
system of equations, as discussed earlier in this thread. For Lagrange
and our Hermite elements we usually know which degrees of freedom are
involved.

> Lemma 3 gives an upper bound, and equations 14-15 suggest
> (perhaps misleadingly) that setting gamma too low will increase
> the final error.

Looking at Remarks 5-7 in this paper and comparing Equation 32 in
http://math.tkk.fi/~rstenber/Publications/Becker-Hansbo-Stenberg.pdf,
I'm guessing that the gamma in equation 15 is a typo and should not be
there. (Warning: the gammas in the two papers seem to be reciprocals.)

This second paper goes into more detail on the bound for linear
elements, but I haven't worked out why the bound seems to differ by a
factor of 4.
```
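The Nitsche method being discussed can be illustrated on the same kind of
1D model problem. The sketch below is my own construction from the
method's standard form, a(u,v) = (u',v') - <u'n,v> - <v'n,u> +
(gamma/h)<u,v> on the Dirichlet boundary; it is not code from the cited
papers or from libMesh, and gamma = 10 is an arbitrary choice above the
stability threshold, not a recommendation. Because the extra terms are
consistent, the linear-element solution of -u'' = 0 reproduces the
boundary values without a huge penalty weight.

```python
n_elem = 4
h = 1.0 / n_elem
n = n_elem + 1
gamma = 10.0                       # modest stabilization parameter

A = [[0.0] * n for _ in range(n)]
b = [0.0] * n

# Element stiffness contributions: [[1, -1], [-1, 1]] / h per element.
for e in range(n_elem):
    A[e][e] += 1.0 / h
    A[e][e + 1] -= 1.0 / h
    A[e + 1][e] -= 1.0 / h
    A[e + 1][e + 1] += 1.0 / h

# Nitsche terms at each boundary node k (element neighbor nb, value g):
# dphi[j] = n * phi_j'(x_b), the outward normal derivative of basis j.
for k, nb, g in ((0, 1, 0.0), (n - 1, n - 2, 1.0)):
    dphi = {k: 1.0 / h, nb: -1.0 / h}
    for j, d in dphi.items():
        A[k][j] -= d               # -<u'n, v>, tested with v = phi_k
        A[j][k] -= d               # -<v'n, u>, the symmetrizing term
        b[j] -= d * g              # -<v'n, g>, keeps the equations consistent
    A[k][k] += gamma / h           # (gamma/h) <u, v> stabilization
    b[k] += gamma / h * g

# Gaussian elimination with partial pivoting for the small dense system.
for col in range(n):
    p = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[p] = A[p], A[col]
    b[col], b[p] = b[p], b[col]
    for r in range(col + 1, n):
        m = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= m * A[col][c]
        b[r] -= m * b[col]
u = [0.0] * n
for r in reversed(range(n)):
    u[r] = (b[r] - sum(A[r][c] * u[c] for c in range(r + 1, n))) / A[r][r]
# u recovers the nodal values of the exact solution u(x) = x
```

The system stays symmetric, and unlike the penalty method no term needs
to dwarf the domain contributions in floating point.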
Re: [Libmesh-users] boundary condition treatment
From: Roy Stogner - 2006-10-10 15:31:47

```
On Tue, 10 Oct 2006, Karl Tomlinson wrote:

> Roy Stogner writes:
>
>> On Mon, 9 Oct 2006, Karl Tomlinson wrote:
>>
>>> The situation seems different, however, if the degrees of freedom
>>> that influence the values on the Dirichlet boundary are not
>>> completely constrained by the boundary conditions.
>>
>> This is actually the case for some of the problems I've run. When
>> using Clough-Tocher elements for second order problems, for example,
>> in general it's only weighted sums of nodal gradient degrees of
>> freedom that are constrained, but applying the penalty method on edge
>> integrals still works fine.
>
> I'm pleased it still works fine. What size coefficient do you
> use for the penalty term (and are the other terms of unit
> magnitude)? Or does this not seem to matter much?

It does matter, and I wish I had a better understanding of exactly how
it matters. Make epsilon too large (e.g. 1e-5), and your convergence
soon bottoms out as approximation error is swamped by boundary error.
Make epsilon too small (e.g. 1e-15), and your convergence soon bottoms
out as approximation error is swamped by floating point error. I
generally set epsilon to 1e-10 and cross my fingers.

> How should the penalty parameter depend on mesh size?

Good question. Obviously, to get anything like a consistent
formulation in exact arithmetic you probably need to decrease epsilon
with h, and to avoid eventually overwhelming your finite element error
you probably need some rate like h^{p+1}. But in practice it seems
like there's nothing wrong with using small epsilons on coarse meshes,
and if you make epsilon too small on fine meshes then floating point
error kills your solution.

>> Is there a typo in this paper? In the first term of equation 5, I
>> would expect there to be a 1 in the numerator rather than an epsilon.
>
> I'm seeing a 1 in the numerator of the first term after the
> summation sign, and I think this is right. The equation seems to
> behave appropriately in the limits of Remarks 2-4.

I'm sorry, I meant to say the first term on the second line of equation
5... but on second glance that doesn't look like an obvious mistake
either. I must have confused my q's and g's.

>> Lemma 3 gives an upper bound, and equations 14-15 suggest
>> (perhaps misleadingly) that setting gamma too low will increase
>> the final error.
>
> Looking at Remarks 5-7 in this paper and comparing Equation 32 in
> http://math.tkk.fi/~rstenber/Publications/Becker-Hansbo-Stenberg.pdf,
> I'm guessing that the gamma in equation 15 is a typo and should
> not be there. (Warning: the gammas in the two papers seem to be
> reciprocals.)
>
> This second paper goes into more detail on the bound for linear
> elements but I haven't worked out why the bound seems to differ by
> a factor of 4.

I haven't read through the second paper yet; I'll see if I can figure
it out today.
---
Roy
```
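The epsilon trade-off described in this exchange can be poked at
numerically. The experiment below is my own toy (not from the thread):
for -u'' = 0 on [0,1] with u(0) = 0 and u(1) = 1, where linear elements
would otherwise be nodally exact, the error in the imposed boundary value
tracks epsilon roughly linearly, so a large epsilon visibly degrades the
Dirichlet value while a small one leaves only roundoff.

```python
def penalty_boundary_error(n_elem, eps):
    """Solve the 1D model problem with penalty BCs and return |u(1) - 1|."""
    h = 1.0 / n_elem
    n = n_elem + 1
    sub = [-1.0 / h] * (n - 1)
    sup = [-1.0 / h] * (n - 1)
    diag = [2.0 / h] * n
    diag[0] = diag[-1] = 1.0 / h          # boundary rows of the stiffness
    rhs = [0.0] * n
    for node, g in ((0, 0.0), (n - 1, 1.0)):
        diag[node] += 1.0 / eps           # penalty on the diagonal
        rhs[node] += g / eps              # penalized boundary data
    # Thomas algorithm for the tridiagonal system.
    for i in range(1, n):
        m = sub[i - 1] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in reversed(range(n - 1)):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return abs(u[-1] - 1.0)

coarse = penalty_boundary_error(8, 1e-4)   # sloppy penalty: visible error
fine = penalty_boundary_error(8, 1e-10)    # tight penalty: tiny error
```

For this problem the boundary error is close to eps itself; on harder
problems the floating point loss on the other side of the trade-off
appears as well, which is the "cross my fingers" regime described above.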