From: Roy Stogner <roystgnr@ic...>  2006-08-08 15:17:40

On Tue, 8 Aug 2006, Karl Tomlinson wrote:

> Our meshes do try to be as topologically similar as possible to a
> subset of a cartesian grid.  There are places where this is not the
> case, but we don't have C1 continuity there -- we don't require C1
> for our second order problems, but it makes geometries look nice and
> provides a dof-cheap high-order interpolation.

Okay.  If you don't need C1 geometry everywhere, then doing a
pseudo-Hermite geometry on unstructured meshes starts to make a little
more sense.  I'm still not sure what the right thing to do at
odd-valence nodes is, but now it's a "what's the best data structure"
problem rather than a "how is this even possible" problem.

Talking about a C1 geometry on an unstructured mesh was starting to
confuse me.  On an unstructured mesh you don't have one geometric
mapping; you have an entirely different mapping for each finite
element.  Mapping continuity between elements usually isn't important
at all; it's just that a couple of the exceptions (like combining
Hermite mappings with Hermite solution spaces to handle more
interesting geometries) are very interesting.

>> I'm not convinced it's possible to get C1 continuous mapping
>> functions with Hermite elements in this situation, regardless of
>> how you code them.
>
> Yes, I don't think it is possible in general.  It can be C1 at the
> node but only almost C1 away from the node.
>
> But the same issue applies to solution variables with Hermite
> elements, doesn't it?

Yes -- in fact, if you look at the fe_hermite_* code, you'll see that
we just give up and error() if we discover that an element isn't a
rectangle parallel to the coordinate axes.  In theory you could get C1
solutions on any mesh where the xi/eta/zeta directions are consistent
at element interfaces, but I only needed rectilinear domains and so I
left the code simple.
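The "rectangle parallel to the coordinate axes" precondition mentioned
above can be sketched like the following.  This is my own minimal
illustration, not the actual libMesh check -- the names and tolerance
here are assumptions:

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Point { double x, y; };

// Hypothetical helper (not the real fe_hermite_* code): returns true
// if the quad with counter-clockwise corners p[0..3] is a rectangle
// whose edges are parallel to the coordinate axes -- the precondition
// under which the Hermite elements can promise C1 continuity.
bool is_axis_aligned_rectangle(const std::array<Point,4>& p,
                               double tol = 1e-12)
{
  // Edges 0-1 and 2-3 must be horizontal; edges 1-2 and 3-0 vertical.
  return std::abs(p[0].y - p[1].y) < tol &&
         std::abs(p[2].y - p[3].y) < tol &&
         std::abs(p[1].x - p[2].x) < tol &&
         std::abs(p[3].x - p[0].x) < tol;
}
```

A real implementation would presumably call error() (as described
above) instead of returning false.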
>> The Hermite elements get away with fewer degrees of freedom than
>> you would expect a C1 quad or hex to need, because they take
>> advantage of the fact that you're using meshes where the mixed
>> derivatives (e.g. d^2/dxideta in 2D) are in the same "direction" on
>> all neighboring elements.  I don't see how that can happen unless
>> you have four quads (or 8 hexes) meeting at every node.
>
> The Hermite elements normally share their derivatives with adjacent
> elements, but this need not be the case.  The zeroth-order
> derivatives are always going to be consistent (and shared) between
> elements, but, in the 3-quads-at-a-point case, the first-order
> derivatives could be expressed as linear combinations of one pair of
> derivatives to ensure C1 continuity at the node (but not necessarily
> on the edges).  I don't know whether there is an appropriate way to
> tie mixed derivatives to each other or whether this would help with
> continuity at all (probably not).
>
> This can be difficult to implement.  The way it has been done here
> is by having a separate node for each of the three elements and then
> tying the appropriate dofs together, but we have only done this to
> provide the necessary C0 continuity.
>
> (It could also be done by having the derivatives as element
> parameters so only the zeroth derivatives are automatically shared,
> but this would mean that constraints would normally be involved even
> for the standard 4-quads-at-a-node case.)
>
> These are the issues that I was/am hoping Clough-Tocher elements
> might solve.

This is all going to take some thought.  If you want to maintain a C1
map between an unstructured 2D reference grid and a deformed grid,
though, you're right that using the Clough-Tochers should make that
pretty natural.

> I think I'm seeing the issues here:
>
> It is inefficient to refine a geometry that doesn't change.  By
> providing a 2-step mapping, only the simplest mapping need be
> refined.
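As an aside, the "linear combinations of one pair of derivatives" idea
described above comes straight from the chain rule: if a master pair
of directional derivatives (du/dt1, du/dt2) determines the gradient at
the node, then any element's local derivatives (du/dd1, du/dd2) are
fixed linear combinations of them.  A minimal sketch, with all names
my own:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec2 = std::array<double,2>;
using Mat2 = std::array<std::array<double,2>,2>;

// Hypothetical sketch: at a node where three quads meet, keep one
// "master" pair of derivative dofs taken along directions t1, t2, and
// constrain a third element's local derivatives (along d1, d2) to
// linear combinations of them.  If T (rows t1, t2) maps grad(u) to
// (u_t1, u_t2), then u_di = di . grad(u), so the constraint matrix is
// C = D * T^{-1} (rows of D are d1, d2).
Mat2 constraint_matrix(Vec2 t1, Vec2 t2, Vec2 d1, Vec2 d2)
{
  const double det = t1[0]*t2[1] - t1[1]*t2[0]; // t1, t2 assumed independent
  // Inverse of T with rows t1, t2:
  const Mat2 Tinv = {{ {  t2[1]/det, -t1[1]/det },
                       { -t2[0]/det,  t1[0]/det } }};
  Mat2 C{};
  const Vec2 d[2] = { d1, d2 };
  for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 2; ++j)
      C[i][j] = d[i][0]*Tinv[0][j] + d[i][1]*Tinv[1][j];
  return C;
}
```

As noted above, this only buys C1 continuity at the node itself, not
along the shared edges.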
> And there is no need for a high-order representation of the internal
> element boundaries in a homogeneous domain, when usually
> representing the boundary is the only reason for the high-order
> representation.

Exactly.

> Much of our legacy code was designed for Lagrangian
> finite-deformation (nonlinear) solid mechanics.  In this problem,
> the solution variables are the geometry variables.  Similarly for
> problems involving fitting meshes to boundary data points.

And that makes perfect sense too -- once you start perturbing the
geometry, it's very unlikely that you can still get away with a
patch-based rather than an element-based description.  I'd like to
have an implementation that can handle both sorts of situation, if
possible -- but as a first draft it may make sense to implement
mapping information on a per-element level while trying to keep the
API abstract enough to allow memory optimization in the future.

>> Also, keep in mind that there are no Clough Tet elements in
>> libMesh, either.  C1 tets are ugly; you need something like p=11 to
>> build them without macroelements, and the best macroelements
>> suitable for any tet mesh require p=5 polynomials and can't be
>> restricted below p=3.  My main motivation for writing the Hermite
>> class in the first place was that I wanted to get some C1 3D
>> results without spending a year or two bug hunting through a
>> hundred macroelement terms.
>
> Sounds scary.

It is, but only because I don't know enough about handling high
polynomial degrees intelligently without getting swamped by floating
point error.  I've got libMesh's adaptive p refinement capped at
around p=10 or p=11 for just that reason.

> But I don't know what "restricted below p=3" means.

Take a look at the quadratic "restricted Clough-Tocher" triangles for
a 2D example: you can start with the cubic CT triangle, then constrain
away the side flux degrees of freedom.
The result has fewer degrees of freedom and p=2 accuracy, like a
normal quadratic element would have, but it's still got some cubic
terms, so the quadrature rules don't get any cheaper.  The tet
situation is the same: you can start with a p=5 macroelement tet, then
constrain away a bunch of the degrees of freedom to get something that
has performance more like a cubic element.

>> That would work, but I'd be worried about what Clough-Tocher
>> mappings might do to your quadrature rules.  Granted, any kind of
>> non-affine map can mess up your nice exact Gaussian quadrature, but
>> the Clough-Tocher basis functions have internal subelement
>> boundaries which might be even worse.
>
> The quadrature rules should be applied over each of the subelements
> individually (or through a macro-quadrature rule).  Isn't this also
> an issue when integrating solution variables or their shape
> functions?

Yes; what we do now is just copy a Gaussian rule over each subelement.
I guess it's not a serious problem, but tripling the cost of FEM
calculations isn't something to be happy about.  Do you have any
references to macroelement-specific quadrature rules in the
literature?  I'd been thinking about deriving something more efficient
for the cubic CT, but I'd hate to reinvent the wheel.

> If the solution variables need to be C1 wrt xyz, then I think there
> are some requirements for the geometry representation.

It entirely depends on the finite element space -- Clough-Tochers will
build a C1 solution space on top of any geometry; Hermites need
consistent local coordinate directions at nodes.

- Roy
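P.S. "Copy a Gaussian rule over each subelement" can be made concrete
with a small sketch -- split the macrotriangle at its centroid the way
Clough-Tocher does, map a reference-triangle rule affinely into each
subtriangle, and scale the weights by the affine Jacobian.  All the
names below are mine, not libMesh's:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <functional>

using Pt = std::array<double,2>;

// A degree-2 Gauss rule on the reference triangle (0,0)-(1,0)-(0,1);
// the weights sum to the reference area, 1/2.
static const Pt ref_q[3] = {{1./6, 1./6}, {2./3, 1./6}, {1./6, 2./3}};
static const double ref_w[3] = {1./6, 1./6, 1./6};

// Integrate f over one physical triangle via the affine reference map.
double integrate_tri(Pt a, Pt b, Pt c,
                     const std::function<double(double,double)>& f)
{
  // Jacobian determinant of the affine map = 2 * (signed) triangle area.
  const double J = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]);
  double sum = 0;
  for (int q = 0; q < 3; ++q) {
    const double x = a[0] + (b[0]-a[0])*ref_q[q][0] + (c[0]-a[0])*ref_q[q][1];
    const double y = a[1] + (b[1]-a[1])*ref_q[q][0] + (c[1]-a[1])*ref_q[q][1];
    sum += ref_w[q] * f(x, y);
  }
  return J * sum;
}

// Composite "macro" rule: split at the centroid, as Clough-Tocher
// does, and sum the three subelement integrals -- triple the points
// of the underlying rule, hence the cost complaint above.
double integrate_ct(Pt v0, Pt v1, Pt v2,
                    const std::function<double(double,double)>& f)
{
  const Pt c = {(v0[0]+v1[0]+v2[0])/3, (v0[1]+v1[1]+v2[1])/3};
  return integrate_tri(v0, v1, c, f) + integrate_tri(v1, v2, c, f)
       + integrate_tri(v2, v0, c, f);
}
```

The composite rule is exact wherever the base rule is exact on each
subtriangle, which is what makes it safe for piecewise-cubic CT basis
functions; it just isn't cheap.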