From: Roy Stogner <roystgnr@ic...> - 2006-10-10 15:31:47

On Tue, 10 Oct 2006, Karl Tomlinson wrote:

> Roy Stogner writes:
>> On Mon, 9 Oct 2006, Karl Tomlinson wrote:
>>
>>> The situation seems different, however, if the degrees of freedom
>>> that influence the values on the Dirichlet boundary are not
>>> completely constrained by the boundary conditions.
>>
>> This is actually the case for some of the problems I've run. When
>> using Clough-Tocher elements for second order problems, for example,
>> in general it's only weighted sums of nodal gradient degrees of
>> freedom that are constrained, but applying the penalty method on edge
>> integrals still works fine.
>
> I'm pleased it still works fine. What size coefficient do you
> use for the penalty term (and are the other terms of unit
> magnitude)? Or does this not seem to matter much?

It does matter, and I wish I had a better understanding of exactly how
it matters. Make epsilon too large (e.g. 1e-5), and your convergence
soon bottoms out as approximation error is swamped by boundary error.
Make epsilon too small (e.g. 1e-15), and your convergence soon bottoms
out as approximation error is swamped by floating point error. I
generally set epsilon to 1e-10 and cross my fingers.

> How should the penalty parameter depend on mesh size?

Good question. Obviously to get anything like a consistent formulation
in exact arithmetic you probably need to decrease epsilon with h, and
to avoid eventually overwhelming your finite element error you probably
need some rate like h^{p+1}. But in practice it seems like there's
nothing wrong with using small epsilons on coarse meshes, and if you
make epsilon too small on fine meshes then floating point error kills
your solution.

>> Is there a typo in this paper? In the first term of equation 5, I
>> would expect there to be a 1 in the numerator rather than an epsilon.
>
> I'm seeing a 1 in the numerator of the first term after the
> summation sign and I think this is right. The equation seems to
> behave appropriately in the limits of Remarks 2-4.

I'm sorry, I meant to say the first term on the second line of
equation 5... but on second glance that doesn't look like an obvious
mistake either. I must have confused my q's and g's.

>> Lemma 3 gives an upper bound, and equations 14-15 suggest
>> (perhaps misleadingly) that setting gamma too low will increase
>> the final error.
>
> Looking at Remarks 5-7 in this paper and comparing Equation 32 in
> http://math.tkk.fi/~rstenber/Publications/BeckerHansboStenberg.pdf,
> I'm guessing that the gamma in equation 15 is a typo and should
> not be there. (Warning: the gammas in the two papers seem to be
> reciprocals.)
>
> This second paper goes into more detail on the bound for linear
> elements but I haven't worked out why the bound seems to differ by
> a factor of 4.

I haven't read through the second paper yet; I'll see if I can figure
it out today.
- Roy
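Roy's rule of thumb above (penalty coefficient 1/epsilon, with epsilon somewhere between "swamped by boundary error" and "swamped by floating point error") is easy to reproduce on a toy problem. Below is a minimal sketch, not libMesh code: a 1D Poisson solve with linear elements where the Dirichlet values are imposed by adding (1/eps)*(u - g, v) boundary terms; the helper name, mesh size, and solver are all invented for illustration.

```python
import numpy as np

def solve_poisson_penalty(n, eps, g0=0.0, g1=1.0):
    """Solve -u'' = 0 on [0,1] with u(0)=g0, u(1)=g1 imposed by the
    penalty method: add (1/eps)*(u - g) terms to the boundary rows
    instead of eliminating the boundary degrees of freedom."""
    h = 1.0 / n
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    # Assemble the standard 1D linear-element stiffness matrix.
    for e in range(n):
        A[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    # Penalty terms: heavily weighted boundary contributions "trump"
    # the domain equations in the boundary rows.
    A[0, 0] += 1.0 / eps
    b[0] += g0 / eps
    A[n, n] += 1.0 / eps
    b[n] += g1 / eps
    return np.linalg.solve(A, b)

u = solve_poisson_penalty(10, 1e-10)
# The exact solution here is u(x) = x; the computed boundary values
# match g0, g1 only up to an O(eps) consistency error, which is the
# trade-off discussed above.
```

Making eps much larger (say 1e-2) visibly shifts the boundary values away from g; making it much smaller pushes the matrix condition number up by the same factor, which is where the floating point error Roy mentions comes from.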
From: Karl Tomlinson <k.tomlinson@au...> - 2006-10-10 10:09:47

Roy Stogner writes:

> On Mon, 9 Oct 2006, Karl Tomlinson wrote:
>
>> The situation seems different, however, if the degrees of freedom
>> that influence the values on the Dirichlet boundary are not
>> completely constrained by the boundary conditions.
>
> This is actually the case for some of the problems I've run. When
> using Clough-Tocher elements for second order problems, for example,
> in general it's only weighted sums of nodal gradient degrees of
> freedom that are constrained, but applying the penalty method on edge
> integrals still works fine.

I'm pleased it still works fine. What size coefficient do you use for
the penalty term (and are the other terms of unit magnitude)? Or does
this not seem to matter much?

>> However, if the boundary data projections don't completely
>> constrain the associated degrees of freedom,
>
> Could you give a concrete example where this wouldn't occur?

I think the case you describe above is an example of what I am trying
to describe. If I understand correctly, the values of the solution on
the Dirichlet boundaries are affected by nodal gradient degrees of
freedom, but these degrees of freedom represent x and y derivatives.
If the boundary does not run exactly in either the x or y direction,
then a linear combination of these derivatives affects the value on
the boundary. The ((u-g),v) integral therefore depends on this linear
combination and contributes to equations corresponding to both
derivatives, but the contributions are linearly dependent.

> I don't see even in theory how adding a heavily weighted
> ((u-g),v) integral on the Dirichlet boundary edges wouldn't
> suffice, assuming that you're happy with solving the problem
> with Robin boundary conditions rather than Dirichlet.

This integral certainly does suffice to ensure that the Robin boundary
condition is satisfied. And mathematically (with infinite precision
arithmetic) the problem is well constrained.

What I'm concerned about is that if the weight is so heavy that the
Robin condition is almost a Dirichlet condition, then does it introduce
too much numerical error into the conservation equations for the
domain?

In the example above, the ((u-g),v) integral half defines the x and y
derivatives (so that the boundary condition is satisfied). In order to
satisfy the domain conservation equation, the (grad(u),grad(v))
integrals (or similar) provide the other half of the definition of the
x and y derivatives, and some of this information must be obtained from
equations that may have been almost swamped by the ((u-g),v) term.

Perhaps if the penalty weight is chosen appropriately, there is still
enough precision. e.g. If the weight is 1e6 (epsilon is 1e-6) then this
might be good enough to ensure that the Robin condition is effectively
a Dirichlet condition (to about 6 significant figures). It would remove
about 6 figures of accuracy from the terms in the domain equations but
there would still be about 10 significant figures. But I feel this may
be an oversimplification.

How should the penalty parameter depend on mesh size? It seems that the
ratio of magnitudes of the boundary integrals to the domain integrals
(of derivatives) is of order h to 1. So the boundary weight should be
proportional to 1/h (epsilon proportional to h). I think this is good
though, as the Robin boundary condition becomes more Dirichlet-like as
the mesh is refined.

> Is there a typo in this paper? In the first term of equation 5, I
> would expect there to be a 1 in the numerator rather than an epsilon.

I'm seeing a 1 in the numerator of the first term after the summation
sign and I think this is right. The equation seems to behave
appropriately in the limits of Remarks 2-4.

> How do you choose gamma in practice?

I don't know. I only learned of this Nitsche method as I was trying to
understand the penalty method better. I'm actually used to applying
Dirichlet boundary conditions by removing degrees of freedom from the
system of equations, as discussed earlier in this thread. For Lagrange
and our Hermite elements we usually know which degrees of freedom are
involved.

> Lemma 3 gives an upper bound, and equations 14-15 suggest
> (perhaps misleadingly) that setting gamma too low will increase
> the final error.

Looking at Remarks 5-7 in this paper and comparing Equation 32 in
http://math.tkk.fi/~rstenber/Publications/BeckerHansboStenberg.pdf,
I'm guessing that the gamma in equation 15 is a typo and should not be
there. (Warning: the gammas in the two papers seem to be reciprocals.)

This second paper goes into more detail on the bound for linear
elements but I haven't worked out why the bound seems to differ by a
factor of 4.
From: Mail Delivery System <Mailer-Daemon@li...> - 2006-10-10 01:16:07

This message was created automatically by mail delivery software.

A message that you sent has not yet been delivered to one or more of
its recipients after more than 24 hours on the queue on
externalmx1.sourceforge.net.

The message identifier is:     1GWhbW-0000e8-IU
The date of the message is:    Sun, 08 Oct 2006 14:06:00 0200
The subject of the message is: force the Far section

The address to which the message has not yet been delivered is:
  libmesh-users@...
Delay reason: SMTP error from remote mailer after RCPT
TO:<libmesh-users@...>: host mail.sourceforge.net [66.35.250.206]:
451-Could not complete sender verify callout
451-Could not complete sender verify callout for
451-<libmesh-users@...>.
451-The mail server(s) for the domain may be temporarily unreachable, or
451-they may be permanently unreachable from this server. In the latter case,
451-you need to change the address or create an MX record for its domain
451 if it is suppos

No action is required on your part. Delivery attempts will continue for
some time, and this warning may be repeated at intervals if the message
remains undelivered. Eventually the mail delivery software will give
up, and when that happens, the message will be returned to you.
From: Roy Stogner <roystgnr@ic...> - 2006-10-09 05:38:34

On Mon, 9 Oct 2006, Karl Tomlinson wrote:

> The situation seems different, however, if the degrees of freedom
> that influence the values on the Dirichlet boundary are not
> completely constrained by the boundary conditions.

This is actually the case for some of the problems I've run. When
using Clough-Tocher elements for second order problems, for example,
in general it's only weighted sums of nodal gradient degrees of
freedom that are constrained, but applying the penalty method on edge
integrals still works fine.

> For problems with natural boundary conditions, the equations
> corresponding to degrees of freedom that influence the values on
> the Dirichlet boundary condition will usually be inconsistent (not
> satisfied by the exact solution). This is not a problem if the
> penalty coefficient can be made so large that the L2 projection of
> boundary data "trumps" the other contributions to the equations.
>
> However, if the boundary data projections don't completely
> constrain the associated degrees of freedom,

Could you give a concrete example where this wouldn't occur? I don't
see even in theory how adding a heavily weighted ((u-g),v) integral on
the Dirichlet boundary edges wouldn't suffice, assuming that you're
happy with solving the problem with Robin boundary conditions rather
than Dirichlet.

> The Nitsche method for Dirichlet boundary conditions looks like it
> provides an attractive alternative. It is similar to the penalty
> method but corrects the domain equations so that they are
> consistent.

That certainly sounds preferable.

> There is still a coefficient to be selected for the Dirichlet
> terms that depends on the mesh (for a positive definite system),
> but it does not need to be so large as to swamp the domain
> equations and so the system is better conditioned.

As does that.

> More details are in M. Juntunen and R. Stenberg's "A finite element
> method for general boundary conditions" for the Proceedings of the
> 18th Nordic Seminar on Computational Mechanics
> (http://math.tkk.fi/~rstenber/Publications/nscm_general_boundary.pdf),
> which also points out the inconsistency of the penalty method.

Is there a typo in this paper? In the first term of equation 5, I
would expect there to be a 1 in the numerator rather than an epsilon.

How do you choose gamma in practice? Lemma 3 gives an upper bound, and
equations 14-15 suggest (perhaps misleadingly) that setting gamma too
low will increase the final error.

This looks interesting. I'll need to read it through again after I've
had some sleep, though.
- Roy
From: Karl Tomlinson <k.tomlinson@au...> - 2006-10-09 03:44:41

On Fri, 6 Oct 2006 13:26:34 -0500, Benjamin Kirk wrote:

> The DenseMatrix and DenseVector condense() function implements
> exactly what John says, and can be used if you know the degree
> of freedom values on the boundary nodes.

Thanks for this -- I'll check these out.

> For a non-interpolary basis you usually don't know the values a
> priori, or at least they are not trivial to obtain. For that
> reason you can add the penalty of the L2 projection of the
> boundary data, which works in general.

I can see that the penalty method can work in more cases, but there
are still some cases where I can't see how the penalty method can work
well.

I can see that the penalty method works well if determining the values
for the degrees of freedom involved in satisfying the Dirichlet
boundary conditions can be considered separately from solving domain
equations for the other degrees of freedom.

The situation seems different, however, if the degrees of freedom that
influence the values on the Dirichlet boundary are not completely
constrained by the boundary conditions, i.e. the boundary conditions
remain satisfied provided the values of these degrees of freedom
satisfy a non-full-rank system of equations.

For problems with natural boundary conditions, the equations
corresponding to degrees of freedom that influence the values on the
Dirichlet boundary condition will usually be inconsistent (not
satisfied by the exact solution). This is not a problem if the penalty
coefficient can be made so large that the L2 projection of boundary
data "trumps" the other contributions to the equations.

However, if the boundary data projections don't completely constrain
the associated degrees of freedom, then their values should be
determined by the domain equation contributions, which are
inconsistent and are trumped by the boundary projections. Choosing too
small a penalty coefficient results in errors from the inconsistent
equations, and it looks like choosing too large a coefficient would
result in numerical errors due to the trumping of the domain terms.

The Nitsche method for Dirichlet boundary conditions looks like it
provides an attractive alternative. It is similar to the penalty
method but corrects the domain equations so that they are consistent.
There is still a coefficient to be selected for the Dirichlet terms
that depends on the mesh (for a positive definite system), but it does
not need to be so large as to swamp the domain equations, and so the
system is better conditioned.

More details are in M. Juntunen and R. Stenberg's "A finite element
method for general boundary conditions" for the Proceedings of the
18th Nordic Seminar on Computational Mechanics
(http://math.tkk.fi/~rstenber/Publications/nscm_general_boundary.pdf),
which also points out the inconsistency of the penalty method.

The Nitsche method can also be used on interfaces between portions of
the domain with non-matching meshes, as analysed in R. Becker,
P. Hansbo, and R. Stenberg's "A finite element method for domain
decomposition with non-matching grids" in Mathematical Modelling and
Numerical Analysis 37 (2003) 209-225
(http://www.math.hut.fi/~rstenber/Publications/BeckerHansboStenberg.pdf).
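For reference, the symmetric Nitsche formulation for the Poisson model
problem (-Delta u = f in Omega, u = g on Gamma) that these papers
analyze can be sketched as below. This is the standard textbook form
written from memory, not copied from either paper, so the sign
convention and the placement of gamma should be checked against their
equation numbering:

```latex
a_h(u,v) = (\nabla u, \nabla v)_\Omega
         - \langle \partial_n u,\; v \rangle_\Gamma
         - \langle u,\; \partial_n v \rangle_\Gamma
         + \frac{\gamma}{h}\,\langle u,\; v \rangle_\Gamma,
\qquad
L_h(v) = (f, v)_\Omega
       - \langle g,\; \partial_n v \rangle_\Gamma
       + \frac{\gamma}{h}\,\langle g,\; v \rangle_\Gamma .
```

The two flux terms are what make the formulation consistent (the exact
solution satisfies a_h(u,v) = L_h(v) exactly), so gamma/h only needs to
be large enough for coercivity rather than large enough to swamp the
domain equations, which is the conditioning advantage described above.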
From: Roy Stogner <roystgnr@ic...> - 2006-10-06 20:39:46

What library code do we have that relies on a fuzzy TOLERANCE in
operator== and operator!= in TypeVector and TypeTensor?

I've got some singular benchmarks where the error bottoms out very
quickly using the default TOLERANCE setting in those functions, but I
don't want to make those equality tests exact (or even drop them down
to TOLERANCE*TOLERANCE) without knowing what I might break by doing so.
I assume there are a few real cases like identifying common hanging
nodes where you have to be prepared for a little FPU error in the
geometry arithmetic, but if possible I'd like to hunt those down and
replace the operator== tests with a call to an explicitly named "fuzzy
equality" method.
- Roy Stogner
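The refactoring Roy proposes, an explicitly named fuzzy comparison
instead of a tolerance hidden inside operator==, can be sketched as
follows. This is illustrative Python, not libMesh's TypeVector API, and
the tolerance value is an assumption:

```python
TOLERANCE = 1e-6   # assumed value; libMesh's actual TOLERANCE may differ

def fuzzy_equals(a, b, tol=TOLERANCE):
    """Explicitly named fuzzy comparison for point coordinates, so a
    caller opts in to a tolerance rather than getting one silently
    from an overloaded equality operator."""
    return all(abs(x - y) <= tol for x, y in zip(a, b))

p = (1.0, 2.0, 3.0)
q = (1.0, 2.0, 3.0 + 1e-9)   # differs only by accumulated FP round-off

# fuzzy_equals(p, q) is True: same geometric point up to FPU error.
# Plain p == q stays exact, so singular benchmarks that need exact
# comparisons are no longer at the mercy of a global tolerance.
```

The design point is that both comparisons remain available: hanging-node
identification calls the fuzzy version by name, while everything else
gets exact equality by default.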
From: Kirk, Benjamin (JSC-EG) <benjamin.kirk-1@na...> - 2006-10-06 18:27:02

Sorry to jump in on this late, but I have been out of town...

The DenseMatrix and DenseVector condense() function implements exactly
what John says, and can be used if you know the degree of freedom
values on the boundary nodes.

For a non-interpolary basis you usually don't know the values a priori,
or at least they are not trivial to obtain. For that reason you can add
the penalty of the L2 projection of the boundary data, which works in
general.

Another reason for the penalty approach is that in general an element
may have nodes which reside on the boundary but no face there. Using
the penalty method allows you to only consider elements with faces on
the boundary when applying BCs. These large penalty entries then
"trump" any other contributions to the nodes in floating point
arithmetic.

Ben

-----Original Message-----
From: libmesh-users-bounces@... [mailto:libmesh-users-bounces@...]
On Behalf Of HaeWon Choi
Sent: Friday, September 29, 2006 9:54 AM
To: John Peterson
Cc: libmesh-users@...; li pan
Subject: Re: [Libmesh-users] boundary condition treatment

Hi, actually John clarified what I exactly want. Thank you John. For
PETSc, MatZeroRows sets 1 for all diagonal entries of all Dirichlet
boundary rows (other entries will be zero) as I mentioned last time. In
fact you can find examples using this approach from PETSc since I have
learned this from PETSc. What you have to do is modify the RHS as
John's example shows. I have used this method for my other codes using
PETSc (but not for libMesh yet). This approach gives the same results
as the reduced matrix method.

HaeWon

On Sep 29, 2006, at 5:45 AM, John Peterson wrote:

> Just as an addendum to Tim's note, you can maintain any symmetry
> originally present in the problem by "subtracting" the column entries
> multiplied by the Dirichlet value from the right hand side vector.
>
> If Au=b, where
>
> [a_11 a_12 a_13] [u1] = [b1]
> [a_21 a_22 a_23] [u2] = [b2]
> [a_31 a_32 a_33] [u3] = [b3]
>
> and u1 = g1, a nonhomogeneous BC val, we can modify Au=b as:
>
> [ 1    0    0  ] [u1] = [g1]
> [ 0  a_22 a_23 ] [u2] = [b2 - g1*a_21]
> [ 0  a_32 a_33 ] [u3] = [b3 - g1*a_31]
>
> This imposes u1=g1, and maintains any original symmetry of A.
>
> John
>
> Tim Kröger writes:
>> Dear all,
>>
>> On Fri, 29 Sep 2006, li pan wrote:
>>
>>> I'm also thinking about this. Are you sure that you only need to
>>> zero all the entries of rows? I read somewhere that columns should
>>> also be zeroed. Could somebody confirm this?
>>
>> If you zero only the row entries, the matrix will no longer be
>> symmetric. This is often considered a drawback (provided that the
>> matrix was symmetric before) because certain solvers (e.g. CG)
>> cannot be used then any more.
>>
>> On the other hand, if the column entries are also zeroed, the
>> solution of the system will be wrong -- except for the case of
>> *homogeneous* Dirichlet conditions. For this reason, some people
>> transform their problems to homogeneous boundary conditions, i.e.
>> they do in practice the same thing that is usually done
>> theoretically anyway, i.e. they subtract a function that fulfills
>> all Dirichlet boundary conditions but not the PDE. Note that in the
>> discrete case, such a function is trivial to find.
>>
>> Because I find this all quite unsatisfactory, I was glad to see that
>> libMesh uses the penalty method (which I did not know before)
>> because it is easy to implement, works in all cases, and does not
>> destroy symmetry or positivity of the matrix.
>>
>> Best Regards,
>>
>> Tim

_______________________________________________
Libmesh-users mailing list
Libmesh-users@...
https://lists.sourceforge.net/lists/listinfo/libmesh-users
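John's 3x3 illustration generalizes directly: subtract g times the
constrained column from the right-hand side, then zero the row and
column and put a 1 on the diagonal. A small sketch in plain numpy (this
is not libMesh's condense(); the matrix values are invented for the
example):

```python
import numpy as np

def apply_dirichlet_symmetric(A, b, idx, val):
    """Impose u[idx] = val on A u = b while keeping A symmetric:
    move the constrained column's contribution to the rhs, then zero
    the row and column and set a unit diagonal."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    b -= val * A[:, idx]   # "subtract the column entries times the value"
    A[idx, :] = 0.0
    A[:, idx] = 0.0
    A[idx, idx] = 1.0
    b[idx] = val           # the trivial equation u[idx] = val
    return A, b

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])   # symmetric, like John's example
b = np.array([1.0, 2.0, 3.0])
Am, bm = apply_dirichlet_symmetric(A, b, 0, 2.0)
u = np.linalg.solve(Am, bm)
# u[0] == 2 exactly, and Am is still symmetric, so CG-type solvers
# remain applicable -- the point of Tim's and John's exchange.
```

This also shows why zeroing the column alone (without the rhs
correction) only works for homogeneous conditions: the `b -= val *
A[:, idx]` line is exactly the term that vanishes when val is zero.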
From: Roy Stogner <roystgnr@ic...> - 2006-10-05 15:54:09

On Wed, 4 Oct 2006, HaeWon Choi wrote:

> I have noticed example 18 have crashed for parallel run.
> It is okay with serial run

Really? It looks to me like I've left ex18.in set up with default
options (an adaptive scheme that starts from a too-coarse grid on an
unstabilized advection-heavy problem) that are likely to make Newton
diverge and die. Needless to say, ex18 is still a bit experimental.

In parallel, on the other hand, I'm not having any problems right now
(so long as the initial grid size is sufficient). Try a CVS update and
see if that fixes things. If not, the most obvious difference I see
between your system and mine is that I've still been using PETSc 2.3.0
and 2.3.1. If you can revert to one of those and test it might be
helpful.
- Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2006-10-05 15:40:10

On Wed, 4 Oct 2006, Shun Wang wrote:

> When libMesh refines the mesh, there will be nonconforming elements
> and hanging nodes. Is it handled by libMesh? or if not, how do we
> handle this problem to get a consistent solution? Thank you!

See the example programs. Calling
DofMap::constrain_element_matrix_and_vector() applies the hanging
degree of freedom constraints to each contribution to your linear
system.

If that is insufficient (e.g. you're using large linear solver
tolerances but need particularly small error in the inter-element
nonconformities) you can follow a linear solve with a call to
DofMap::enforce_constraints_exactly() to postprocess the constrained
DoF coefficients.
- Roy
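The constraint application described above can be pictured
algebraically: if C maps the unconstrained ("master") degrees of
freedom to the full set, the condensed system is C^T A C. A toy numpy
sketch, not libMesh code, with one hanging node constrained to the
average of its two neighbors (the matrix entries are invented):

```python
import numpy as np

# Toy 3-dof system where dof 2 is a hanging node constrained to the
# average of dofs 0 and 1:  u2 = 0.5*u0 + 0.5*u1.
A = np.array([[ 2.0,  0.0, -1.0],
              [ 0.0,  2.0, -1.0],
              [-1.0, -1.0,  4.0]])
b = np.array([1.0, 1.0, 0.0])

# Constraint matrix C maps the master dofs (u0, u1) to all three dofs.
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])

# Condensed system on the master dofs only; this is the algebra that
# per-element constraint application performs piece by piece.
Ac = C.T @ A @ C
bc = C.T @ b
um = np.linalg.solve(Ac, bc)
u = C @ um   # recovering all dofs satisfies the constraint exactly
```

In practice the library applies this element by element during
assembly instead of forming a global C, but the resulting linear
system is the same.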
From: Derek Gaston <friedmud@gm...> - 2006-10-05 15:36:23

The examples are all very helpful. They live in the "examples"
directory in your local version of libMesh... They are also here
(along with the rest of the documentation):

http://libmesh.sourceforge.net/examples.php

For refinement, Examples 10 and 14 are instructive. Specifically, look
for constrain_element_matrix_and_vector....

Derek

On 10/4/06, Shun Wang <shunwang@...> wrote:
> When libMesh refines the mesh, there will be nonconforming elements
> and hanging nodes. Is it handled by libMesh? or if not, how do we
> handle this problem to get a consistent solution? Thank you!
From: HaeWon Choi <haewon@uc...> - 2006-10-04 22:48:31

Hi, all:

I have noticed example 18 has crashed for a parallel run. It is okay
with a serial run, but it has given the following error message and
broken down:

[0]PETSC ERROR: --------------------- Error Message ---------------------
[0]PETSC ERROR: Object is in wrong state!
[0]PETSC ERROR: You cannot call this after you have called VecSetValues() but before you have called VecAssemblyBegin/End()!
[0]PETSC ERROR: ---------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 2.3.2, Patch 0, Fri Sep 1 20:38:37 CDT 2006 HG revision: bbe586a138ae98ac9cd678535d59d50fd0b3a9ee
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ---------------------------------------------------------
[0]PETSC ERROR: ./test on a bglibmb named fr0102ge by haewon Wed Oct 4 16:45:38 2006
[0]PETSC ERROR: Libraries linked from /home/haewon/devel/petsc-2.3.2-libmesh/lib/bglibmblas_lapackO3_440
[0]PETSC ERROR: Configure run at Fri Sep 15 10:05:31 2006
[0]PETSC ERR[2]PETSC ERROR: --------------------- Error Message ---------------------
[0]PETSC ERROR: ---------------------------------------------------------
[0]PETSC ERROR: VecSet() line 436 in src/vec/vec/interface/rvector.c
[0]PETSC ERROR: User provided function() line 605 in unknowndirectory//home/haewon/devel/libmesh/include/numerics/petsc_vector.h

Does anyone know how to fix this problem? Thank you, and I'll
appreciate your insight.

HaeWon
From: Shun Wang <shunwang@ui...> - 2006-10-04 16:55:29

When libMesh refines the mesh, there will be nonconforming elements and hanging nodes. Is it handled by libMesh? or if not, how do we handle this problem to get a consistent solution? Thank you! 
From: li pan <li76pan@ya...> - 2006-10-04 08:26:01

hi,

There are too many elements that need to be set. It's too expensive.
So what's the advantage of doing it this way in comparison with the
penalty method in libMesh?

regards

pan

--- HaeWon Choi <haewon@...> wrote:

> Hi, actually John clarified what I exactly want. Thank you John. For
> PETSc, MatZeroRows sets 1 for all diagonal entries of all Dirichlet
> boundary rows (other entries will be zero) as I mentioned last time.
> In fact you can find examples using this approach from PETSc since I
> have learned this from PETSc. What you have to do is modify the RHS
> as John's example shows. I have used this method for my other codes
> using PETSc (but not for libMesh yet). This approach gives the same
> results as the reduced matrix method.
>
> HaeWon
>
> [...]
From: David Knezevic <david.knezevic@ba...> - 2006-10-03 16:56:39

There have been a number of posts to the mailing lists about periodic
BCs. I would recommend going to the libMesh mailing list webpage:

http://sourceforge.net/mail/?group_id=71130

and searching for "periodic boundary conditions".

- David

On 03/10/2006, at 3:54 PM, HaeWon Choi wrote:

> Hi, anyone have an experience for implementing Periodic B.C.s
> for libmesh?
> I'll appreciate if anyone could help me to implement that.
>
> HaeWon
From: HaeWon Choi <haewon@uc...> - 2006-10-03 14:54:09

Hi, does anyone have experience implementing periodic B.C.s for
libMesh? I'll appreciate it if anyone could help me to implement that.

HaeWon