From: Roy Stogner <roystgnr@ic...> - 2005-01-25 22:47:31

On Tue, 25 Jan 2005, KIRK, BENJAMIN (JSC-EG) (NASA) wrote:

> You are absolutely correct. What is needed is another step...
> System::restrict_vector ?

Must it be another step? The project_vector() already loops over all
elements; I'd just add a special case to that function to handle
just-coarsened elements.

> This step has not been required previously since, as you point out, the
> restriction operator for Lagrange elements is the identity operator.

Exactly. I'm trying to root out these insidious Lagrangian assumptions
one by one. ;)

Prioritizing, though, this doesn't seem to be as big a problem as the
node/edge/face identity crisis (or my confusion regarding it, at least)
does for compute_constraints - I'd much rather have adaptive refinement
without coarsening than uniform refinement/coarsening without adaptivity.

> As for the local L2 projection, there should probably be a separate
> projection for each edge, face, and then element.

That would handle the uniqueness problem, yes. In fact, we wouldn't need
a projection on every edge and face: if every element sharing an edge or
face was just coarsened, then we'd need the projection, but if there's
even one element that hasn't just been coarsened, then that element
exactly specifies all its edge and face values.

--
Roy Stogner
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2005-01-25 21:30:10

I received an email a week or so ago, and for the life of me I can't
find it... It asked specifically about Cygwin support. So, I'm sending
this to the devel and users lists in hopes it finds the interested
parties.

I just installed cygwin on my windows machine (actually, cygwin is
running under windows which is running in vmware under linux, talk about
contrived!) here and was able to run the examples with no problem.
Note, however, that shared libraries are not supported under cygwin at
the current time. The following should work:

./configure --disable-shared
make run_examples

or at least it did for me with a fresh cygwin install using gcc 3.3.3.
Perhaps in the future I'll change configure so that it recognizes cygwin
and disables shared libraries automatically.

--
Benjamin S. Kirk
benjamin.kirk@...
NASA/JSC, Applied Aerosciences & CFD Branch
281-483-9491
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2005-01-25 20:47:51

You are absolutely correct. What is needed is another step...
System::restrict_vector ?

This step has not been required previously since, as you point out, the
restriction operator for Lagrange elements is the identity operator.

As for the local L2 projection, there should probably be a separate
projection for each edge, face, and then element. In general the
projection will be overconstrained (as you point out), but the system is
solvable in the least-squares sense by either a normal equations or QR
approach. By breaking it up into an edge-by-edge, face-by-face, etc...
approach we ensure that neighboring elements get the same values for
edge, face, etc... DOFs.

I'll look into the refinement code and see about adding a
System::restrict_vector method and how best to fit it in... Other than
that, is the outlined approach clear?

--
Benjamin S. Kirk
benjamin.kirk@...
NASA/JSC, Applied Aerosciences & CFD Branch
281-483-9491

-----Original Message-----
From: libmesh-devel-admin@... [mailto:libmesh-devel-admin@...] On Behalf Of Roy Stogner
Sent: Thursday, January 20, 2005 3:58 PM
To: libmesh-devel@...
Subject: [Libmesh-devel] System::project_vector bug

I just realized that the behavior of nodes "changing identities"
depending on whether they're edge, face, corner, or interior has another
consequence: System::project_vector only handles mesh coarsening
correctly for Lagrange elements: setting a coarse edge DoF to the value
of a fine corner DoF probably won't work for anything else.

I'm not sure how to fix that. From a mathematical perspective, the local
L2 projection that we're using to go from coarse to fine meshes is an
exact projection whenever the function space restricted to the fine
child elements contains the entire function space from the coarse parent
element, but if we were to use a projection to go from fine to coarse
elements, the results couldn't be exact, and different elements sharing
the same DoF might have different ideas about what its new value should
be. We could just go with the projection result from whatever the first
element we come across says (and since we're coarsening anyway, we have
to expect to lose some accuracy), but it seems like there should be a
better way.

From a practical perspective there's a more immediate problem. I'm not
even sure how to form a projection matrix. When refining, you have
access to both child and parent elements; when coarsening, though,
aren't the child elements already deleted by the time you project to the
new mesh? Could we refine just one element at a time (ignoring level
mismatch constraints), use the new children immediately for integration,
then coarsen again?

--
Roy Stogner

_______________________________________________
Libmesh-devel mailing list
Libmesh-devel@...
https://lists.sourceforge.net/lists/listinfo/libmesh-devel
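[Editor's note: the normal-equations / QR least-squares restriction described above can be sketched numerically. This is an illustrative example only, not libMesh code: the matrices A and B below are random stand-ins for the coarse/coarse and coarse/fine projection integrals, and the point is simply that the QR route and the normal-equations route agree on the restricted coefficients when the overdetermined system has full column rank.]

```python
import numpy as np

# Hypothetical sizes: 6 fine-level equations constraining 4 coarse DOFs,
# so the restriction system  A u_coarse = B u_fine  is overdetermined.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # stand-in for coarse test/coarse trial integrals
B = rng.standard_normal((6, 8))   # stand-in for coarse test/fine trial integrals
u_fine = rng.standard_normal(8)   # fine (child-level) DOF coefficients
rhs = B @ u_fine

# QR-based least squares (what np.linalg.lstsq does internally):
u_qr, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Normal-equations approach:  (A^T A) u_coarse = A^T rhs
u_ne = np.linalg.solve(A.T @ A, A.T @ rhs)

# Both routes give the same least-squares restriction (up to roundoff).
assert np.allclose(u_qr, u_ne)
```

Breaking the solve up edge-by-edge and face-by-face, as suggested above, would amount to assembling a small system like this per edge or face, so that neighboring elements see identical shared-DOF values.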
From: Roy Stogner <roystgnr@ic...> - 2005-01-20 21:58:10

I just realized that the behavior of nodes "changing identities"
depending on whether they're edge, face, corner, or interior has another
consequence: System::project_vector only handles mesh coarsening
correctly for Lagrange elements: setting a coarse edge DoF to the value
of a fine corner DoF probably won't work for anything else.

I'm not sure how to fix that. From a mathematical perspective, the local
L2 projection that we're using to go from coarse to fine meshes is an
exact projection whenever the function space restricted to the fine
child elements contains the entire function space from the coarse parent
element, but if we were to use a projection to go from fine to coarse
elements, the results couldn't be exact, and different elements sharing
the same DoF might have different ideas about what its new value should
be. We could just go with the projection result from whatever the first
element we come across says (and since we're coarsening anyway, we have
to expect to lose some accuracy), but it seems like there should be a
better way.

From a practical perspective there's a more immediate problem. I'm not
even sure how to form a projection matrix. When refining, you have
access to both child and parent elements; when coarsening, though,
aren't the child elements already deleted by the time you project to the
new mesh? Could we refine just one element at a time (ignoring level
mismatch constraints), use the new children immediately for integration,
then coarsen again?

--
Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2005-01-19 18:19:38

On Wed, 19 Jan 2005, KIRK, BENJAMIN (JSC-EG) (NASA) wrote:

> > What about lines that look like this?
> >
> > u_1 = 0.99999 * u_3
> >
> > That's not on the identity diagonal, yet it's the sort of
> > situation I would expect to happen when a nodal degree of freedom
> > on a Lagrangian quadratic child element corresponds to a mid-edge
> > or mid-face DoF on the parent: the coefficient is still 1, but the
> > local numbering doesn't match.
>
> Right, the local numbering does not match, but the global numbering
> should (which is all that really matters). I just added an
> assertion to, uh, assert this.

Should the global numbering match? That's what's got me worried with the
non-Lagrangian elements. Take the Clough-Tocher triangles for example:
they've got three DoFs at each node (function value and gradient
components) and one DoF at each side (normal flux, which generally won't
correspond exactly to any corner DoF). So what happens in the global
numbering when a node is both a side node for a less refined element and
a corner node for more refined elements? Does libMesh assume that the
edge DoF and the first of the node DoFs are equal valued? Does it give
me all four global DoFs there and leave it up to me to constrain away
three?

> I would continue with the implementation as you have it planned...
> I would like to keep the implementation in terms of parent-child
> relationships wherever possible to protect locality. We can handle
> the complications with recursive constraints later.

Will do.

> As for the FE template madness, I don't have a good answer... :)
> Just something that seemed like a good idea at the time. Like
> anything else, though, it could be changed. There are very few
> (none?) things in the library I consider set in stone...

I don't see any pressing reason to change it; I was just curious about
the decision. The tradeoffs between templates and OOP are usually
efficiency-related, but it looked to me like the templated FE code was
just trading virtual functions for switch statements, and I didn't think
either construct was particularly faster than the other.

--
Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2005-01-19 17:57:29

On Wed, 19 Jan 2005, KIRK, BENJAMIN (JSC-EG) (NASA) wrote:

> Excellent! Will this be the default behavior and then let specific
> finite elements (e.g. LAGRANGE) overload or reimplement the method?

At the moment I'm making this a Clough-specific behavior, but making it
the default after it's tested is my plan - reimplementation will just
require adding more cases to the switch statements in fe_interface.C
too. Remind me again why the finite element types are using template
specialization instead of inheritance? ;)

> The L2 projection method is perfect since it addresses the problem for
> arbitrary element types. In fact, it can be extended to use the normal
> equations to handle p mismatch on an edge as well.

I'd certainly like to see continuous hp-refinable function spaces in
libMesh, but I'm not sure how much of a help this will be. Shouldn't the
hierarchic basis functions be able to handle p mismatch for C0 elements?
If element 1 has a polynomial degree p and adjacent element 2 has higher
degree q, then isn't the constraint just "element 2 shape functions with
degree p+1 through q and support on edge 1-2 get dropped"?

No, on second thought it's more complicated than that - basis functions
can have higher polynomial degree perpendicular to a face than they do
in the face, and the case of simultaneous h refinement gets even uglier.
Should I be preparing for this case in the automatic code, then? Right
now I'm just doing the projection between parent and child degrees of
freedom; perhaps I should be doing neighbor and child instead.

> directly. Line 622 in fe_lagrange.C simply says keep constraints that
> look like this:
>
> u_1 = 0.5*u_2 + 0.5*u_3
>
> and ignore ones that look like this:
>
> u_1 = 0.9999 * u_1 + 1.e-6 * u_2 + 1.e-6 * u_3
>
> Which we can recognize as u_1 = u_1, which is not a constraint at all...

What about lines that look like this?

u_1 = 0.99999 * u_3

That's not on the identity diagonal, yet it's the sort of situation I
would expect to happen when a nodal degree of freedom on a Lagrangian
quadratic child element corresponds to a mid-edge or mid-face DoF on the
parent: the coefficient is still 1, but the local numbering doesn't
match.

--
Roy Stogner
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2005-01-19 17:53:33

Right, the local numbering does not match, but the global numbering
should (which is all that really matters). I just added an assertion
to, uh, assert this.

I would continue with the implementation as you have it planned... I
would like to keep the implementation in terms of parent-child
relationships wherever possible to protect locality. We can handle the
complications with recursive constraints later.

As for the FE template madness, I don't have a good answer... :) Just
something that seemed like a good idea at the time. Like anything else,
though, it could be changed. There are very few (none?) things in the
library I consider set in stone...

-Ben

-----Original Message-----
From: Roy Stogner [mailto:roystgnr@...]
Sent: Wednesday, January 19, 2005 11:11 AM
To: KIRK, BENJAMIN (JSC-EG) (NASA)
Cc: libmesh-devel@...
Subject: RE: [Libmesh-devel] compute_constraints

On Wed, 19 Jan 2005, KIRK, BENJAMIN (JSC-EG) (NASA) wrote:

> Excellent! Will this be the default behavior and then let specific
> finite elements (e.g. LAGRANGE) overload or reimplement the method?

At the moment I'm making this a Clough-specific behavior, but making it
the default after it's tested is my plan - reimplementation will just
require adding more cases to the switch statements in fe_interface.C
too. Remind me again why the finite element types are using template
specialization instead of inheritance? ;)

> The L2 projection method is perfect since it addresses the problem for
> arbitrary element types. In fact, it can be extended to use the
> normal equations to handle p mismatch on an edge as well.

I'd certainly like to see continuous hp-refinable function spaces in
libMesh, but I'm not sure how much of a help this will be. Shouldn't the
hierarchic basis functions be able to handle p mismatch for C0 elements?
If element 1 has a polynomial degree p and adjacent element 2 has higher
degree q, then isn't the constraint just "element 2 shape functions with
degree p+1 through q and support on edge 1-2 get dropped"?

No, on second thought it's more complicated than that - basis functions
can have higher polynomial degree perpendicular to a face than they do
in the face, and the case of simultaneous h refinement gets even uglier.
Should I be preparing for this case in the automatic code, then? Right
now I'm just doing the projection between parent and child degrees of
freedom; perhaps I should be doing neighbor and child instead.

> directly. Line 622 in fe_lagrange.C simply says keep constraints that
> look like this:
>
> u_1 = 0.5*u_2 + 0.5*u_3
>
> and ignore ones that look like this:
>
> u_1 = 0.9999 * u_1 + 1.e-6 * u_2 + 1.e-6 * u_3
>
> Which we can recognize as u_1 = u_1, which is not a constraint at
> all...

What about lines that look like this?

u_1 = 0.99999 * u_3

That's not on the identity diagonal, yet it's the sort of situation I
would expect to happen when a nodal degree of freedom on a Lagrangian
quadratic child element corresponds to a mid-edge or mid-face DoF on the
parent: the coefficient is still 1, but the local numbering doesn't
match.

--
Roy Stogner
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2005-01-19 15:25:22

Excellent! Will this be the default behavior and then let specific
finite elements (e.g. LAGRANGE) overload or reimplement the method?

The L2 projection method is perfect since it addresses the problem for
arbitrary element types. In fact, it can be extended to use the normal
equations to handle p mismatch on an edge as well. In general (see
~benkirk/Documents/lab_presentations/amr/slides.pdf in the CFDLab)

A u_c = B u_p
u_c = (A^-1 B) u_p
u_c = C u_p

There is no assumption about corner DOFs in libMesh, the implementation
in fe_lagrange.C just recognizes that for Lagrange-type elements A = I,
so u_c = C u_p directly. Line 622 in fe_lagrange.C simply says keep
constraints that look like this:

u_1 = 0.5*u_2 + 0.5*u_3

and ignore ones that look like this:

u_1 = 0.9999 * u_1 + 1.e-6 * u_2 + 1.e-6 * u_3

Which we can recognize as u_1 = u_1, which is not a constraint at all...

So, for the general case the constraint matrix C = (A^-1 B) will have
some rows that are like the identity matrix (within floating-point
error, of course) which should be ignored and other rows that are valid
DOF constraints.

Clear?

-Ben

-----Original Message-----
From: libmesh-devel-admin@... [mailto:libmesh-devel-admin@...] On Behalf Of Roy Stogner
Sent: Monday, January 17, 2005 9:30 PM
To: libmesh-devel@...
Subject: [Libmesh-devel] compute_constraints

I'm writing a "fallback" implementation of compute_constraints; without
knowing the details of every single degree of freedom it should be
possible to use local L2 projections (of face values for C0 elements
like the Hierarchic, face values and fluxes for C1 elements like the
Clough-Tocher) to generate correct constraint matrices.

One concern I have: is libMesh assuming that corner degrees of freedom
and edge/face degrees of freedom should be equal when one element's
corner node is a less refined element's edge or face node? If not, then
I must be misunderstanding how the Lagrange compute_constraints is
working, since it throws away identity values. If so, then that's going
to be a problem for non-Lagrange elements whose edge and corner degrees
of freedom are essentially unrelated.

--
Roy Stogner
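[Editor's note: the constraint-matrix recipe above can be worked through on the smallest possible case. This is an illustrative Python sketch, not the fe_lagrange.C implementation: for a 1D linear Lagrange element split in half, A = I so C = B, the row for the corner DOF shared with the parent comes out as an identity row and is discarded, and the hanging mid-node row is exactly the u_1 = 0.5*u_2 + 0.5*u_3 constraint quoted above. Note the filter below checks the local diagonal; as the thread discusses, real code must compare global DoF indices.]

```python
import numpy as np

# Parent 1D linear Lagrange shape functions on [0, 1].
def parent_shapes(x):
    return np.array([1.0 - x, x])

# Child element [0, 0.5] after refinement: its nodes sit at x = 0 and x = 0.5.
child_nodes = [0.0, 0.5]

# For Lagrange elements A = I, so C = A^-1 B = B,
# with B_ij = (j-th parent shape function) evaluated at the i-th child node.
C = np.array([parent_shapes(x) for x in child_nodes])
print(C)   # [[1.  0. ]
           #  [0.5 0.5]]

# Discard identity-like rows (within floating-point error); keep real constraints.
for i, row in enumerate(C):
    is_identity = np.isclose(row[i], 1.0) and np.isclose(np.abs(row).sum(), 1.0)
    if not is_identity:
        terms = " + ".join(f"{c:g}*u_{j+1}" for j, c in enumerate(row) if abs(c) > 1e-12)
        print(f"constraint: u_child_{i+1} = {terms}")
# prints: constraint: u_child_2 = 0.5*u_1 + 0.5*u_2
```

The shared corner node produces the row [1, 0], i.e. u = u, which is no constraint at all; only the genuinely hanging DOF row survives the filter.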
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2005-01-18 22:54:36

Looks like that fixed it. Thanks!

-----Original Message-----
From: libmesh-devel-admin@... [mailto:libmesh-devel-admin@...] On Behalf Of Roy Stogner
Sent: Sunday, January 16, 2005 8:39 PM
To: Benjamin S. Kirk
Cc: libmesh-devel@...
Subject: Re: [Libmesh-devel] C1 elements, second derivatives

On Fri, 14 Jan 2005, Benjamin S. Kirk wrote:

> Also, ex10 looks broken. The initial solution is fine, but all the
> subsequent time steps look like the interior solution is 0 and the
> boundary conditions are then imposed. My guess is that the solution
> projection is having an issue? The solution snapshots in ex9 and ex10
> should be the same, but it looks like the interior solution is all 0
> in the case of ex10.

I think I've found the problem - when revisiting a DoF from a new
element I didn't recalculate it, but I did reset it, to zero. Would you
take a look at the fix when you have time?

Thanks,
--
Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2005-01-18 03:29:33

I'm writing a "fallback" implementation of compute_constraints; without
knowing the details of every single degree of freedom it should be
possible to use local L2 projections (of face values for C0 elements
like the Hierarchic, face values and fluxes for C1 elements like the
Clough-Tocher) to generate correct constraint matrices.

One concern I have: is libMesh assuming that corner degrees of freedom
and edge/face degrees of freedom should be equal when one element's
corner node is a less refined element's edge or face node? If not, then
I must be misunderstanding how the Lagrange compute_constraints is
working, since it throws away identity values. If so, then that's going
to be a problem for non-Lagrange elements whose edge and corner degrees
of freedom are essentially unrelated.

--
Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2005-01-17 02:38:54

On Fri, 14 Jan 2005, Benjamin S. Kirk wrote:

> Also, ex10 looks broken. The initial solution is fine, but all the
> subsequent time steps look like the interior solution is 0 and the
> boundary conditions are then imposed. My guess is that the solution
> projection is having an issue? The solution snapshots in ex9 and ex10
> should be the same, but it looks like the interior solution is all 0
> in the case of ex10.

I think I've found the problem - when revisiting a DoF from a new
element I didn't recalculate it, but I did reset it, to zero. Would you
take a look at the fix when you have time?

Thanks,
--
Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2005-01-15 17:28:12

On Fri, 14 Jan 2005, Benjamin S. Kirk wrote:

> Thanks! I appreciate & welcome this significant addition to the code!

Oh, good. I was a little antsy about committing a few thousand new lines
on one day, but since (except for System::project_vector, which I'd just
forgotten to commit earlier) it was all infrastructure for the same use,
it's hard to break up into independently testable chunks. I think John
and you are right that we should be making the "what derivatives to
compute" choice at runtime, not configure time, too, but figured the
most incremental way for me to commit my code was to start it out
#ifdef'ed out of other people's binaries.

> I agree that reinit() should be a lot smarter. I think we could just
> put together a few bit flags here and avoid a lot of unnecessary
> computation. Thoughts?

A flags object is the way deal.ii does it, and that works just fine. We
could have the default state of the flags be "compute function values
and first derivatives" to keep old code working efficiently, and even
new (or updated) second order code might be sped up a little bit by not
calculating any derivatives on Dirichlet faces.

I've been trying to figure out some way to make this decision
automatically, based on what get_phi, get_dphi, etc. calls have been
made recently. Such calls could return an object whose constructor tells
the finite element "compute me" and whose destructor tells it "stop
computing me". We could even write a conversion operator that turns this
new object into the usual "vector of vector of
gradients/numbers/tensors" and tells the finite element "never stop
computing me", for source compatibility with older code. This is
starting to sound more complicated than it's worth, though.

> Also, ex10 looks broken. The initial solution is fine, but all the
> subsequent time steps look like the interior solution is 0 and the
> boundary conditions are then imposed. My guess is that the solution
> projection is having an issue? The solution snapshots in ex9 and ex10
> should be the same, but it looks like the interior solution is all 0
> in the case of ex10.

You're right - the Lagrange elements case of my new
System::project_vector is failing. I forgot a new_vector.close()
statement at the end, but fixing that wasn't sufficient. Making the
"if != LAGRANGE" test into an "if true" test and thus using the L2
projection for every element seems to work (although on ex10, the total
project_vector time then becomes 14 seconds instead of 6 on my machine).
Obviously, using the old interpolation code works. My attempt to merge
the two appears to be what's failed.

We could fix this by just restoring most of the old function and handing
off to it for the lagrange case, but there's a lot of redundancy between
the two and I'd like to keep them merged if it can be done correctly.
I'll try and fix it today, and revert the CVS head to the old version
tomorrow if I can't find the problem.

--
Roy Stogner
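[Editor's note: the "few bit flags" idea Ben floats, and the deal.ii-style flags object Roy endorses, can be sketched as follows. Every name here is hypothetical; libMesh's reinit() at the time had no such flags. The sketch only shows the shape of the design: a default flag set that preserves old behavior, plus opt-in second derivatives.]

```python
from enum import IntFlag

# Hypothetical update flags in the spirit of deal.ii's UpdateFlags;
# none of these names come from libMesh itself.
class Update(IntFlag):
    NOTHING = 0
    VALUES  = 1 << 0   # phi
    GRADS   = 1 << 1   # dphi
    SECONDS = 1 << 2   # d2phi (the --enable-second data)

# Default keeps old code working: function values + first derivatives.
DEFAULT = Update.VALUES | Update.GRADS

def reinit(flags=DEFAULT):
    """Pretend FE reinit: compute only what the caller requested."""
    computed = []
    if flags & Update.VALUES:
        computed.append("phi")
    if flags & Update.GRADS:
        computed.append("dphi")
    if flags & Update.SECONDS:
        computed.append("d2phi")
    return computed

print(reinit())                          # ['phi', 'dphi']
print(reinit(DEFAULT | Update.SECONDS))  # ['phi', 'dphi', 'd2phi']
print(reinit(Update.VALUES))             # ['phi'] - e.g. on a Dirichlet face
```

This gives the behavior discussed above: second derivatives cost nothing unless requested, and a Dirichlet-face assembly loop can ask for values only.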
From: Benjamin S. Kirk <benkirk@cf...> - 2005-01-15 04:03:27

Thanks! I appreciate & welcome this significant addition to the code!

I agree that reinit() should be a lot smarter. I think we could just put
together a few bit flags here and avoid a lot of unnecessary
computation. Thoughts?

Also, ex10 looks broken. The initial solution is fine, but all the
subsequent time steps look like the interior solution is 0 and the
boundary conditions are then imposed. My guess is that the solution
projection is having an issue? The solution snapshots in ex9 and ex10
should be the same, but it looks like the interior solution is all 0 in
the case of ex10.

-Ben

Roy Stogner wrote:
>
> I've just finished committing the last chunks of my support for
> fourth-order problems to libMesh.
>
> The features:
>
> With the "--enable-second" configure option turned on, you can now
> query finite element objects for second derivative information,
> expressed as rank 2 tensors. With this you should be able to
> formulate discontinuous Galerkin approximations to fourth-order
> problems.
>
> You can now create Clough-Tocher macrotriangles (and the quadrature
> rules to integrate them properly), which form C1-differentiable (and
> so H2 conforming) finite element spaces. With these, assuming
> --enable-second is on, you can formulate continuous Galerkin
> approximations to 2D fourth-order problems. Example 15 (remember to
> "cvs update -d" to get new directories) demonstrates, using the
> biharmonic equation.
>
> The non-features:
>
> Since only the C1 elements let you solve fourth-order problems easily,
> I've only added second derivative calculations to the other finite
> element classes where it was easy to do so - the rest are just stubs
> and warnings. Adding C1 infinite elements, in particular, looked like
> it would be ugly.
>
> The Clough-Tocher elements still work for second order problems with
> second derivative calculations off, but there's probably no point -
> they might be slightly more efficient than regular cubics in the
> solver but are certainly much less efficient in the assembly stage.
>
> Cubic triangles are the only C1 elements right now - I'm working on
> cubic quads and tets now, but it's not my top priority, and these
> things are a bear to code and debug.
>
> The problems:
>
> With "--enable-second" turned on, second derivatives are calculated
> and stored whether you need them or not - the memory hit is
> insignificant but I'll bet the speed hit is noticeable. In the long
> run we ought to have a method of making finite element reinit()s
> calculate precisely what each application code wants and no more; in
> the short run "--enable-second" ought to be left off unless you're
> using it.
>
> I've tested this code against my own applications and the libMesh
> examples, but since it makes low-level additions to the FEBase class,
> I'm worried that it might work on my computer and break yours.
> Anyone working with the main CVS tree might want to pay extra
> attention to the next post-update run you make; anyone who wants to
> help might also try testing their own code with "--enable-second" too.
> --
> Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2005-01-13 23:41:20

I've just finished committing the last chunks of my support for
fourth-order problems to libMesh.

The features:

With the "--enable-second" configure option turned on, you can now query
finite element objects for second derivative information, expressed as
rank 2 tensors. With this you should be able to formulate discontinuous
Galerkin approximations to fourth-order problems.

You can now create Clough-Tocher macrotriangles (and the quadrature
rules to integrate them properly), which form C1-differentiable (and so
H2 conforming) finite element spaces. With these, assuming
--enable-second is on, you can formulate continuous Galerkin
approximations to 2D fourth-order problems. Example 15 (remember to
"cvs update -d" to get new directories) demonstrates, using the
biharmonic equation.

The non-features:

Since only the C1 elements let you solve fourth-order problems easily,
I've only added second derivative calculations to the other finite
element classes where it was easy to do so - the rest are just stubs and
warnings. Adding C1 infinite elements, in particular, looked like it
would be ugly.

The Clough-Tocher elements still work for second order problems with
second derivative calculations off, but there's probably no point - they
might be slightly more efficient than regular cubics in the solver but
are certainly much less efficient in the assembly stage.

Cubic triangles are the only C1 elements right now - I'm working on
cubic quads and tets now, but it's not my top priority, and these things
are a bear to code and debug.

The problems:

With "--enable-second" turned on, second derivatives are calculated and
stored whether you need them or not - the memory hit is insignificant
but I'll bet the speed hit is noticeable. In the long run we ought to
have a method of making finite element reinit()s calculate precisely
what each application code wants and no more; in the short run
"--enable-second" ought to be left off unless you're using it.

I've tested this code against my own applications and the libMesh
examples, but since it makes low-level additions to the FEBase class,
I'm worried that it might work on my computer and break yours. Anyone
working with the main CVS tree might want to pay extra attention to the
next post-update run you make; anyone who wants to help might also try
testing their own code with "--enable-second" too.

--
Roy Stogner
From: <luthi@gi...> - 2005-01-05 18:50:33

KIRK, BENJAMIN (JSC-EG) (NASA) writes:

> Should work now. The interface was not updated for PETSc 2.2.0 &
> earlier 'cus I have 2.2.1 at home...

Yes, I was compiling with PETSc 2.2.0. However, I cannot seem to get the
patch from CVS. Is it checked in?

Thanks, Martin
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2005-01-05 18:02:13

Should work now. The interface was not updated for PETSc 2.2.0 & earlier
'cus I have 2.2.1 at home...

-----Original Message-----
From: libmesh-devel-admin@... [mailto:libmesh-devel-admin@...] On Behalf Of John Peterson
Sent: Wednesday, January 05, 2005 10:51 AM
To: luthi@...; libmesh-devel@...
Subject: [Libmesh-devel] Compilation fails

If you could, please also let me know what version of PETSc. I believe
you were running 2.2.x the last time you posted about PETSc.

John

Martin Lüthi writes:

> Hi John
>
> No, Linux, gcc
>
> gcc -v
> Reading specs from /usr/local/lib/gcc-lib/i686-pc-linux-gnu/3.3.3/specs
> Configured with: ./configure
> Thread model: posix
> gcc version 3.3.3
>
> Also a make clean did not help. I'll try an older version (but not
> today...).
>
> Thanks, Martin
From: John Peterson <peterson@cf...> - 2005-01-05 16:51:04

If you could, please also let me know what version of PETSc. I believe you were running 2.2.x the last time you posted about PETSc.

John

Martin Lüthi writes:
> Hi John
>
> No, Linux, gcc
>
> gcc -v
> Reading specs from /usr/local/lib/gcc-lib/i686-pc-linux-gnu/3.3.3/specs
> Configured with: ./configure
> Thread model: posix
> gcc version 3.3.3
>
> Also a make clean did not help. I'll try an older version (but not
> today...).
>
> Thanks, Martin
From: John Peterson <peterson@cf...> - 2005-01-05 01:15:35

Is this on xlC?

John

Martin Lüthi writes:
> Hi
>
> After a cvs update I get the following compilation problems:
>
> Compiling C++ (in debug mode) src/numerics/petsc_linear_solver.C...
> src/numerics/petsc_linear_solver.C: In member function `std::pair<unsigned int,
> Real> PetscLinearSolver<T>::solve(SparseMatrix<T>&, SparseMatrix<T>&,
> NumericVector<T>&, NumericVector<T>&, double, unsigned int) [with T = Number]':
> /home/tinu/numeric/libmesh/include/numerics/petsc_linear_solver.h:101: instantiated from `std::pair<unsigned int, Real> PetscLinearSolver<T>::solve(SparseMatrix<T>&, NumericVector<T>&, NumericVector<T>&, double, unsigned int) [with T = Number]'
> src/numerics/petsc_linear_solver.C:484: instantiated from here
> src/numerics/petsc_linear_solver.C:253: error: argument of type `_p_Vec*(PetscVector<Number>::)()' does not match `_p_Vec*'
> src/numerics/petsc_linear_solver.C:257: error: argument of type `_p_Vec*(PetscVector<Number>::)()' does not match `_p_Vec*'
> make: *** [src/numerics/petsc_linear_solver.i686-pc-linux-gnu.g.o] Error 1
>
> Compilation exited abnormally with code 2 at Tue Jan 4 14:11:50
From: <luthi@gi...> - 2005-01-04 23:22:42

Hi

After a cvs update I get the following compilation problems:

Compiling C++ (in debug mode) src/numerics/petsc_linear_solver.C...
src/numerics/petsc_linear_solver.C: In member function `std::pair<unsigned int, Real> PetscLinearSolver<T>::solve(SparseMatrix<T>&, SparseMatrix<T>&, NumericVector<T>&, NumericVector<T>&, double, unsigned int) [with T = Number]':
/home/tinu/numeric/libmesh/include/numerics/petsc_linear_solver.h:101: instantiated from `std::pair<unsigned int, Real> PetscLinearSolver<T>::solve(SparseMatrix<T>&, NumericVector<T>&, NumericVector<T>&, double, unsigned int) [with T = Number]'
src/numerics/petsc_linear_solver.C:484: instantiated from here
src/numerics/petsc_linear_solver.C:253: error: argument of type `_p_Vec*(PetscVector<Number>::)()' does not match `_p_Vec*'
src/numerics/petsc_linear_solver.C:257: error: argument of type `_p_Vec*(PetscVector<Number>::)()' does not match `_p_Vec*'
make: *** [src/numerics/petsc_linear_solver.i686-pc-linux-gnu.g.o] Error 1

Compilation exited abnormally with code 2 at Tue Jan 4 14:11:50
From: <luthi@gi...> - 2005-01-04 19:32:10

Hi

Roy Stogner writes:
> On Mon, 3 Jan 2005, KIRK, BENJAMIN (JSCEG) (NASA) wrote:
>
> > I'm not really against it, it just introduces something else to maintain...
> > (Not so much for the TypeVector<>, but especially for the SparseMatrix<>,
> > NumericVector<>, etc...).
> >
> > Historically libMesh data structures have used () for access since it makes
> > it more clear when you are working with a derived/3rd-party type as opposed
> > to a built-in type.
> >
> > Any other thoughts?
>
> I'd prefer the [] access members -- I think the STL has made everyone
> comfortable with the fact that [] may index into something more
> complicated than a built-in array, whereas the only prior use I'd seen
> of operator() was in functor-type objects.
>
> Since we've already got the operator() access, though, I don't think
> operator[] is worth adding -- even though I think [] would be less
> confusing than (), having both would be more confusing than either.

Another shortcut I tried was adding methods .x(), .y(), .z() to TypeVector<>. This makes sense because TypeVector<> lives in 3D space, and now I can write the following:

RealGradient grad_u, grad_v;
grad_u.y() + grad_v.x()

which is very readable. Nice syntactic sugar.

Martin
From: <luthi@gi...> - 2005-01-04 19:28:32

KIRK, BENJAMIN (JSCEG) (NASA) writes:
> With regard to computing the node positions based on already moved nodes,
> this corresponds to a Gauss-Seidel-type iteration of the smoother. The
> proposed patch is more like a Jacobi iteration. When the intent is to
> perform multiple steps until the nodes no longer move from one iteration to
> the next then the Gauss-Seidel iteration is more appropriate. Maybe we can
> support both options?

Right, if the only intent is to move some nodes, then computing based on already moved nodes might make sense. However, when the mesh is severely deformed (advected nodes), then a mesh relaxation is definitely more appropriate. The approach I implemented is equivalent to a network of springs that finds a new equilibrium position (well, after some steps), thus there is more overall smoothing, which I need. The Gauss-Seidel approach deforms the mesh even in areas where the mesh is perfect (try this on an undeformed square mesh to see what I mean). Does that make sense?

Martin
From: Roy Stogner <roystgnr@ic...> - 2005-01-03 17:47:48

On Mon, 3 Jan 2005, KIRK, BENJAMIN (JSCEG) (NASA) wrote:

> I'm not really against it, it just introduces something else to maintain...
> (Not so much for the TypeVector<>, but especially for the SparseMatrix<>,
> NumericVector<>, etc...).
>
> Historically libMesh data structures have used () for access since it makes
> it more clear when you are working with a derived/3rd-party type as opposed
> to a built-in type.
>
> Any other thoughts?

I'd prefer the [] access members -- I think the STL has made everyone comfortable with the fact that [] may index into something more complicated than a built-in array, whereas the only prior use I'd seen of operator() was in functor-type objects.

Since we've already got the operator() access, though, I don't think operator[] is worth adding -- even though I think [] would be less confusing than (), having both would be more confusing than either.

-- Roy
From: KIRK, BENJAMIN (JSCEG) (NASA) <benjamin.kirk1@na...> - 2005-01-03 16:10:48

Thanks!

With regard to computing the node positions based on already moved nodes, this corresponds to a Gauss-Seidel-type iteration of the smoother. The proposed patch is more like a Jacobi iteration. When the intent is to perform multiple steps until the nodes no longer move from one iteration to the next then the Gauss-Seidel iteration is more appropriate. Maybe we can support both options?

Ben

-----Original Message-----
From: libmesh-devel-admin@... [mailto:libmesh-devel-admin@...] On Behalf Of Martin Lüthi
Sent: Saturday, January 01, 2005 6:58 PM
To: libmesh-devel
Subject: [Libmesh-devel] Mesh smoother

Happy new year!

The Laplace mesh_smoother did not work correctly for quadratic elements. There was also a bug for linear elements (the new node positions were calculated based on already moved nodes). This patch does

o fix the bug (introducing a vector storing the new positions)
o smooth the mesh for 2nd-order elements (by placing the nodes in the weighted center of the adjacent vertices)
o fix a bug in MeshTools::Modification::distort -- sometimes extra nodes (not in any element) were not identified correctly due to a comparison with 1.e20 of different types (float and Real=double); comparison for <1.e20 helps, and the logic was altered
o everything works with Quad4, Quad8, Quad9, Hex8, Hex20, Hex27

Best, Martin
From: KIRK, BENJAMIN (JSCEG) (NASA) <benjamin.kirk1@na...> - 2005-01-03 16:02:37

I'm not really against it, it just introduces something else to maintain... (Not so much for the TypeVector<>, but especially for the SparseMatrix<>, NumericVector<>, etc...).

Historically libMesh data structures have used () for access since it makes it more clear when you are working with a derived/3rd-party type as opposed to a built-in type.

Any other thoughts?

Ben

-----Original Message-----
From: libmesh-devel-admin@... [mailto:libmesh-devel-admin@...] On Behalf Of Martin Lüthi
Sent: Monday, December 20, 2004 6:48 PM
To: libmesh-devel
Subject: [Libmesh-devel] TypeVector

Hi

Why does a TypeVector (and thus a Point) not use [] to access the elements? I find that confusing during programming. Is anybody against adding [] accessor methods (this would just be a copy of the () methods)?

Martin
From: <luthi@gi...> - 2005-01-02 00:57:52

Happy new year!

The Laplace mesh_smoother did not work correctly for quadratic elements. There was also a bug for linear elements (the new node positions were calculated based on already moved nodes). This patch does

o fix the bug (introducing a vector storing the new positions)
o smooth the mesh for 2nd-order elements (by placing the nodes in the weighted center of the adjacent vertices)
o fix a bug in MeshTools::Modification::distort -- sometimes extra nodes (not in any element) were not identified correctly due to a comparison with 1.e20 of different types (float and Real=double); comparison for <1.e20 helps, and the logic was altered
o everything works with Quad4, Quad8, Quad9, Hex8, Hex20, Hex27

Best, Martin