From: Roy Stogner <roystgnr@ic...>  2008-11-11 21:26:47

On Tue, 11 Nov 2008, Benjamin Kirk wrote:

> So why not compute instead
>
> phi(Xc) = phi(Xc(X)) = phi(Xc(X(xi))) and
> (dphi/dXc)(Xc) = [(dphi/dxi)(dxi/dX)(dX/dXc)](Xc(X(xi)))
>
> Where Xc(X) is the C1 geometry representation provided by the
> "geometry system" described previously.

This is what I was thinking of when I said that arbitrary topologies
would be the tricky part.  X needs to be in the 2D plane for Xc(X) to
make sense with our current setup.  That's not even possible if your
desired manifold Xc is closed and unbounded, which I'm guessing would
be a pretty common case.

My best idea right now is: let X be the "faceted" C0 representation of
the manifold (which on any C1 element is already implicit as the first
order Lagrange interpolation of the vertex dofs), and at every node or
edge take some "average" of the surrounding elements to get a local
coordinate system on which you can take well-defined derivatives.
Easy to state, not quite as easy to picture, probably not easy at all
to code.

My second best idea: actually go back to the geometry literature and
find out how this has been done in the past.  I'd have done so
already, but the only useful articles I can recall off the top of my
head aren't online.

-- Roy
From: Benjamin Kirk <benjamin.kirk@na...>  2008-11-11 19:19:48

>> Declare a 'geometry system' which uses some C1 fe basis
>> (clough-tocher does come to mind...).
>
> Clough-Tocher may not be ideal.  Since they're not h-hierarchic, you
> can't refine without (very slightly) changing the result.  That's
> not a problem for my applications but it may be for this one.
>
> I'd suggest implementing Argyris or Powell-Sabin-Heindl elements
> instead.
>
> But (and this is embarrassing, since I know some of these elements
> started out precisely for use in geometric approximation) I'm not
> certain what your global degrees of freedom would look like this
> way.  Right now we assume that "x" and "y" are well defined globally
> by the Lagrange mapping, and we have C1 global dofs that are
> gradients or fluxes in xy space.  How does that work if "x" and "y"
> are only defined by the C1 mapping?  xi and eta aren't well defined
> globally, and I don't see how to define something similar without
> making limiting assumptions that wouldn't handle arbitrary manifold
> topologies.

Well, what we can provide now is

phi(X) = phi(X(xi)) and
(dphi/dX)(X) = [(dphi/dxi)(dxi/dX)](X(xi))

Where phi is whatever your finite element says it is and X(xi) is the
C0 map provided by the Lagrange basis.  (dxi/dX) is obtained directly
by inverting the (dX/dxi) transformation map.

My understanding of the issue is that this is no good because X(xi)
needs to be C1 so that the curvature is square integrable...

So why not compute instead

phi(Xc) = phi(Xc(X)) = phi(Xc(X(xi))) and
(dphi/dXc)(Xc) = [(dphi/dxi)(dxi/dX)(dX/dXc)](Xc(X(xi)))

Where Xc(X) is the C1 geometry representation provided by the
"geometry system" described previously.

The additional terms needed to compute (what I think is) the right map
are (dX/dXc) at the quadrature points, which can be constructed
analytically from (dXc/dX), which the user computes from the "geometry
system."  So in some sense there are two Jacobian transformations
which are required...

Since Roy's worked with a lot more C1 systems than me (read: 0), I
ultimately defer to him as to whether this is possible or just a bunch
of nonsense...

Ben
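[Editor's note: for concreteness, the two-Jacobian composition Ben describes can be sketched in standalone C++.  This is an illustrative sketch, not libMesh code; it shows the 2x2 case, with the C0 map Jacobian dX/dxi and the C1 geometry-system Jacobian dXc/dX supplied by the caller.]

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<std::array<double, 2>, 2>;

// Invert a 2x2 Jacobian, e.g. turning dX/dxi into dxi/dX.
Mat2 invert(const Mat2 &J) {
  const double det = J[0][0]*J[1][1] - J[0][1]*J[1][0];
  return {{{ J[1][1]/det, -J[0][1]/det},
           {-J[1][0]/det,  J[0][0]/det}}};
}

// Row-vector times matrix: (g*M)_j = sum_i g_i M_ij.
Vec2 apply(const Vec2 &g, const Mat2 &M) {
  return { g[0]*M[0][0] + g[1]*M[1][0],
           g[0]*M[0][1] + g[1]*M[1][1] };
}

// (dphi/dXc) = (dphi/dxi)(dxi/dX)(dX/dXc), with
// dxi/dX = (dX/dxi)^-1 and dX/dXc = (dXc/dX)^-1.
Vec2 physical_gradient(const Vec2 &dphi_dxi,
                       const Mat2 &dX_dxi,   // from the C0 Lagrange map
                       const Mat2 &dXc_dX) { // from the C1 "geometry system"
  const Vec2 dphi_dX = apply(dphi_dxi, invert(dX_dxi));
  return apply(dphi_dX, invert(dXc_dX));
}
```

With dXc/dX equal to the identity this degenerates to the existing C0 chain rule, which is one sanity check on the composition.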
From: Roy Stogner <roystgnr@ic...>  2008-11-11 17:56:59

On Tue, 11 Nov 2008, Kirk, Benjamin (JSC-EG) wrote:

> Pass in a standard 6-noded quadratic triangulation which discretizes
> the manifold.
>
> Declare a 'geometry system' which uses some C1 fe basis
> (clough-tocher does come to mind...).

Clough-Tocher may not be ideal.  Since they're not h-hierarchic, you
can't refine without (very slightly) changing the result.  That's not
a problem for my applications but it may be for this one.

I'd suggest implementing Argyris or Powell-Sabin-Heindl elements
instead.

But (and this is embarrassing, since I know some of these elements
started out precisely for use in geometric approximation) I'm not
certain what your global degrees of freedom would look like this way.
Right now we assume that "x" and "y" are well defined globally by the
Lagrange mapping, and we have C1 global dofs that are gradients or
fluxes in xy space.  How does that work if "x" and "y" are only
defined by the C1 mapping?  xi and eta aren't well defined globally,
and I don't see how to define something similar without making
limiting assumptions that wouldn't handle arbitrary manifold
topologies.

> Since he is solving on a manifold, the memory used is probably
> acceptable...

Certainly.

-- Roy
From: Kirk, Benjamin (JSC-EG) <Benjamin.Kirk1@na...>  2008-11-11 17:40:45

Tell me if this bizarre suggestion has merit...

Pass in a standard 6-noded quadratic triangulation which discretizes
the manifold.

Declare a 'geometry system' which uses some C1 fe basis (clough-tocher
does come to mind...).  The unknowns in the system would be
(xC1,yC1,zC1), the weights for the C1 surface representation.  Project
your geometry to this system.

Implement another fe.reinit() flavor which takes user-specified
mapping values.  In this case, the maps are precomputed for the
current element from the 'geometry system.'

Since he is solving on a manifold, the memory used is probably
acceptable...

Is there a showstopper I am overlooking?

Ben

-- Original Message --
From: Derek Gaston <friedmud@...>
To: Roy Stogner <roy@...>
Cc: Libmesh-users@...
Sent: Tue Nov 11 11:25:22 2008
Subject: Re: [Libmesh-users] Subdivision surface based FEM

On Nov 11, 2008, at 10:02 AM, Roy Stogner wrote:

> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>
>> To summarize, we abuse the traditional mesh as a control mesh.  We
>> define a parametrization of the limit surface which naturally gives
>> us the needed mathematical objects such as derivatives, surface
>> patches etc.  Since we *know* where each control point converges to
>> in the limit, we can assign back calculated nodal values to the
>> control points and assemble the system as usual.
>>
>> Hope this helps to clarify...

It seems like if you could use Clough-Tochers (or Hermites) as the map
then that would solve this problem... but as you mentioned earlier
that's probably not doable with our current architecture.

Derek
From: Derek Gaston <friedmud@gm...>  2008-11-11 17:27:04

On Nov 11, 2008, at 10:02 AM, Roy Stogner wrote:

> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>
>> To summarize, we abuse the traditional mesh as a control mesh.  We
>> define a parametrization of the limit surface which naturally gives
>> us the needed mathematical objects such as derivatives, surface
>> patches etc.  Since we *know* where each control point converges to
>> in the limit, we can assign back calculated nodal values to the
>> control points and assemble the system as usual.
>>
>> Hope this helps to clarify...

It seems like if you could use Clough-Tochers (or Hermites) as the map
then that would solve this problem... but as you mentioned earlier
that's probably not doable with our current architecture.

Derek
From: Roy Stogner <roystgnr@ic...>  2008-11-11 17:02:13

On Tue, 11 Nov 2008, Norbert Stoop wrote:

> To summarize, we abuse the traditional mesh as a control mesh.  We
> define a parametrization of the limit surface which naturally gives
> us the needed mathematical objects such as derivatives, surface
> patches etc.  Since we *know* where each control point converges to
> in the limit, we can assign back calculated nodal values to the
> control points and assemble the system as usual.
>
> Hope this helps to clarify...

Okay, I think so.  Yes, there's no getting around overriding the _map
data if you want to do this.

-- Roy
From: Norbert Stoop <norbert@st...>  2008-11-11 16:56:28

Roy Stogner wrote:

>>> Interesting.  I've heard of subdivision elements being used for
>>> surface mesh refinement, but in a context where the subdivision
>>> surface was only C1 in the limiting sense; each discrete mesh was
>>> still faceted.  We could do something like that relatively easily,
>>> but of course it wouldn't be as accurate unless your formulation
>>> is insensitive to domain boundary discontinuities.
>>
>> Hm, I'm not sure if I understand your last comment,
>
> That's because I didn't understand your context: I was picturing a
> volume domain for which you wanted a C1 boundary (which can be
> critical for some fourth order problems); you're talking about a 2D
> manifold domain in 3D.

Yeah, sorry...  Let me specify the physical problem: it is all about
the simulation of thin shells, where the shell is just a 2D manifold
embedded in 3D.  For the physical description the curvature must be
square integrable, requiring the manifold to be C1.

>> but yes, the surface is C1 only in the limit case.
>
> In that case you don't need an exotic mapping at all; all you need
> is a function to correctly "snap" points to their
> subdivision-defined positions when you refine the surface.  That's
> something we've wanted to do (for boundaries, if not manifolds) for
> a while, and it may be hard to come up with a satisfactory user API
> for it, but once the API's defined the implementation would be
> relatively simple.
>
>> These mappings are expressed in the phi_map, dphidxi_map etc.,
>> right?  So, as a dirty hack, *in principle* one could overwrite the
>> FE::init_shape_functions and others to do the mapping right for
>> this particular subdivision element.  Is my understanding correct?
>
> If you want a C1 surface even in the discretized case, then fixing
> the _map values would be necessary.

Ok, that's what I assumed.

> If you're happy with a C0 discretized surface that converges (in
> some norms) to a C1 surface, then all you need to do is snap
> mid-edge points (and mid-face points on quads) to their proper
> places after all the elements touching them have been refined.  That
> should properly be done by giving Elem::refine() access to some
> abstract base class that can return enough information about the
> manifold or domain boundary shape, but you could do it (as a dirty
> hack) just by looping over nodes on active elements after each
> refinement and adjusting them for consistency with the subdivision
> surface defined by their parent elements.

Ah, wait: the whole idea is to have an initial mesh which is *never*
refined.  It merely acts as a "control mesh" for the limiting surface.
Each node of the control mesh corresponds to an (analytically) known
point on the limit surface.  This does not have to be the same spatial
location, as some subdivision schemes change the position of existing
nodes.  As shown below, calculations will be made in the limit
surface.  But since this nodal correspondence is one-to-one, nodal
properties can as well be treated as if they belonged to the control
points.

In fact, this map can also be constructed for any point in a triangle
on the control mesh, needed for the quadrature points.  It is given by

  x(xi,eta) = sum_nb N_nb(xi,eta) * x_nb

where the sum is over the neighbor control nodes of the triangle in
question and x_nb is the position of these control points.  N_nb are
the shape functions of the limit surface, which are given analytically
depending on the chosen subdivision scheme.  E.g., x(0,0) gives the
limit surface position of the triangle's lower left control point,
x(0.5,0.5) of the barycentric middle etc.  In other words, we have a
parametrization of the limit surface in terms of only the position of
neighboring control point positions x_nb.  This parametrization is
like any isoparametric map from the standard triangle to the manifold.
C1 continuity is given since the parametrization is "nonlocal" in the
sense that neighboring control points are included in the sum above.

To summarize, we abuse the traditional mesh as a control mesh.  We
define a parametrization of the limit surface which naturally gives us
the needed mathematical objects such as derivatives, surface patches
etc.  Since we *know* where each control point converges to in the
limit, we can assign back calculated nodal values to the control
points and assemble the system as usual.

Hope this helps to clarify...

Norbert
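[Editor's note: the parametrization Norbert writes down is just a weighted sum over the 1-ring control points.  A minimal standalone sketch (not libMesh code; the shape-function values N_nb(xi,eta), which depend on the chosen subdivision scheme, are assumed to be precomputed by the caller):]

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y, z; };

// Evaluate x(xi,eta) = sum_nb N_nb(xi,eta) * x_nb: a limit-surface
// point as a weighted sum over the control points of the element's
// 1-ring.  For Loop's scheme the N_nb would be the quartic box-spline
// shape functions evaluated at (xi,eta); here they are passed in.
Point limit_surface_point(const std::vector<double> &N,     // N_nb(xi,eta)
                          const std::vector<Point> &x_nb) { // 1-ring positions
  assert(N.size() == x_nb.size());
  Point p{0.0, 0.0, 0.0};
  for (std::size_t nb = 0; nb < N.size(); ++nb) {
    p.x += N[nb] * x_nb[nb].x;
    p.y += N[nb] * x_nb[nb].y;
    p.z += N[nb] * x_nb[nb].z;
  }
  return p;
}
```

Note that the weighted sum is the same shape as any isoparametric map; the only structural difference from the Lagrange case is that the loop runs over the whole 1-ring rather than only the element's own nodes.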
From: Derek Gaston <friedmud@gm...>  2008-11-11 16:32:41

(Note... I started this email earlier this morning... but just now
finished it and there are like 10 new emails in this thread, so
forgive me if I'm off-topic now.)

On Nov 11, 2008, at 7:42 AM, Benjamin Kirk wrote:

> If we had a C1 mapping element I'd think we could project the C0
> subdivision surface into the C1 space...  In fact, I think we would
> almost always have to do this with any mapping element other than
> Lagrange.  (Does anyone know of a surface mesh format which
> specifies the coordinates in terms of e.g. Clough-Tocher basis
> weights?)

At Sandia when we were doing higher order geometry on surfaces so that
we could refine the mesh and "pop" to the higher order surface... we
had to invent our own "auxiliary" file that we read in with the mesh.
That file defined the extra degrees of freedom per edge / node that
described the higher order geometry (essentially tangents and
normals).

> It seems like using anything other than Lagrange would involve a
> 'startup' phase where we declare a system on top of Lagrange-mapped
> FEs in the usual way, solve an L2 projection or something to get the
> geometry on the desired mapping basis, and then redefine the mesh
> somehow, probably with an out-of-core write/restart file...  But
> that's just a detail.

This seems reasonable to me.

> To keep the code anything resembling efficient, we'd need to make
> sure multiple mapping types are supported at the same time...  I'd
> think we want to use the C1 map *only* on elements with a face or
> edge trace on the boundary of interest.

Yes... this is exactly what we did at Sandia... maintained higher
order geometry representation on the boundary only.

We had two different methods for getting that higher order geometry
into the code.  The simplest is that we could read a second order mesh
generated from Cubit... then essentially throw away all of the second
order information on the interior and only keep the exterior elements
as second order.

The second was to postprocess a mesh from Cubit using a third party
program (that came from Cubit) that would generate that auxiliary file
with the descriptions of normals and tangents on the nodes / edges on
the boundary... then we would use this information to "pop" new nodes
to the "higher order" boundary when doing refinement.

I hope any of that was relevant.

Derek
From: Kirk, Benjamin (JSC-EG) <Benjamin.Kirk1@na...>  2008-11-11 15:43:57

(Roy, obviously we were *both* thinking of the 3D volume case...)

So we should be able to proceed with element refinement as usual, but
then afterward process all the boundary faces with a defined geometry
and snap points to it...

The final step then would need to do something like
'enforce_constraints_exactly' on the hanging edges so as not to
introduce topological 'tears'...

Ben
From: Roy Stogner <roystgnr@ic...>  2008-11-11 15:22:12

On Tue, 11 Nov 2008, Kirk, Benjamin (JSC-EG) wrote:

> Well, I was just thinking about the various places we use the
> inverse map...  If they are affine the Newton iteration would be
> fast, but setting up the map would be (some unquantified amount)
> more expensive than in the Lagrange case.

That's a good point.

> Not to mention how much memory we'd be wasting to hold the
> higher-order (0 valued) coefficients for the affine elements in the
> volume.

Also true.

-- Roy
From: Roy Stogner <roystgnr@ic...>  2008-11-11 15:19:04

On Tue, 11 Nov 2008, Norbert Stoop wrote:

> Roy Stogner wrote:
>>
>> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>>
>>> Recently, subdivision surfaces were suggested as an alternative
>>> way to construct C1 (and higher) conforming surface meshes for
>>> finite element simulations.
>>
>> Interesting.  I've heard of subdivision elements being used for
>> surface mesh refinement, but in a context where the subdivision
>> surface was only C1 in the limiting sense; each discrete mesh was
>> still faceted.  We could do something like that relatively easily,
>> but of course it wouldn't be as accurate unless your formulation is
>> insensitive to domain boundary discontinuities.
>
> Hm, I'm not sure if I understand your last comment,

That's because I didn't understand your context: I was picturing a
volume domain for which you wanted a C1 boundary (which can be
critical for some fourth order problems); you're talking about a 2D
manifold domain in 3D.

> but yes, the surface is C1 only in the limit case.

In that case you don't need an exotic mapping at all; all you need is
a function to correctly "snap" points to their subdivision-defined
positions when you refine the surface.  That's something we've wanted
to do (for boundaries, if not manifolds) for a while, and it may be
hard to come up with a satisfactory user API for it, but once the
API's defined the implementation would be relatively simple.

> These mappings are expressed in the phi_map, dphidxi_map etc.,
> right?  So, as a dirty hack, *in principle* one could overwrite the
> FE::init_shape_functions and others to do the mapping right for this
> particular subdivision element.  Is my understanding correct?

If you want a C1 surface even in the discretized case, then fixing the
_map values would be necessary.

If you're happy with a C0 discretized surface that converges (in some
norms) to a C1 surface, then all you need to do is snap mid-edge
points (and mid-face points on quads) to their proper places after all
the elements touching them have been refined.  That should properly be
done by giving Elem::refine() access to some abstract base class that
can return enough information about the manifold or domain boundary
shape, but you could do it (as a dirty hack) just by looping over
nodes on active elements after each refinement and adjusting them for
consistency with the subdivision surface defined by their parent
elements.

-- Roy
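[Editor's note: the "dirty hack" version of the snapping Roy describes is just a post-refinement loop over nodes.  A sketch, assuming a hypothetical project() callback that stands in for whatever returns the subdivision-surface or boundary-shape position of a node; nothing here is actual libMesh API:]

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

struct Node { double x, y, z; };

// After each refinement, walk the nodes and snap each one to its
// position on the true surface.  `project` is the stand-in for the
// abstract "manifold or domain boundary shape" query; in the clean
// design it would live behind a base class handed to Elem::refine().
void snap_nodes(std::vector<Node> &nodes,
                const std::function<Node(const Node &)> &project) {
  for (Node &n : nodes)
    n = project(n);  // e.g. move a mid-edge point onto the C1 surface
}
```

For example, projecting onto a unit circle in the z = 0 plane just normalizes each node's (x, y) coordinates, which is the kind of closed-form snap a subdivision scheme would replace with its limit-position formula.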
From: Kirk, Benjamin (JSC-EG) <Benjamin.Kirk1@na...>  2008-11-11 15:16:46

Well, I was just thinking about the various places we use the inverse
map...  If they are affine the Newton iteration would be fast, but
setting up the map would be (some unquantified amount) more expensive
than in the Lagrange case.

Not to mention how much memory we'd be wasting to hold the
higher-order (0 valued) coefficients for the affine elements in the
volume.

Ben

-- Original Message --
From: Roy Stogner <roystgnr@...>
To: Kirk, Benjamin (JSC-EG)
Cc: Libmesh-users@...
Sent: Tue Nov 11 09:09:05 2008
Subject: Re: [Libmesh-users] Subdivision surface based FEM

On Tue, 11 Nov 2008, Benjamin Kirk wrote:

> To keep the code anything resembling efficient, we'd need to make
> sure multiple mapping types are supported at the same time...  I'd
> think we want to use the C1 map *only* on elements with a face or
> edge trace on the boundary of interest.

Are the Lagrange bases so much more efficient that that would be
worthwhile?  I think as long as the majority of our elements are
interior with affine maps we're fine.

-- Roy
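[Editor's note: the inverse-map Newton iteration Ben refers to, sketched in 1D for brevity.  This is an illustrative standalone version, not libMesh's implementation (which handles multiple dimensions and element bounds); for an affine map the residual is linear in xi and the loop terminates after a single update, which is why interior affine elements stay cheap:]

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Given a physical coordinate X_target, find the reference coordinate
// xi with X(xi) = X_target by Newton's method on r(xi) = X(xi) - X_target.
double inverse_map(const std::function<double(double)> &X,
                   const std::function<double(double)> &dX_dxi,
                   double X_target,
                   double xi0 = 0.0,
                   double tol = 1e-12,
                   int max_it = 25) {
  double xi = xi0;
  for (int it = 0; it < max_it; ++it) {
    const double r = X(xi) - X_target;  // residual in physical space
    if (std::fabs(r) < tol)
      break;
    xi -= r / dX_dxi(xi);               // Newton update
  }
  return xi;
}
```

A quadratic (non-affine) map such as X(xi) = xi^2 needs several iterations, which is the per-quadrature-point cost Ben is weighing against the Lagrange case.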
From: Norbert Stoop <norbert@st...>  2008-11-11 15:14:25

Kirk, Benjamin (JSC-EG) wrote:

> No, I'm not suggesting anything in particular, I was just speculating
> about how we would define a C1 surface...
>
> If anyone is interested I think I have some subdivision surface code
> I got from Bob Haimes at MIT.  I'll see if I can dig it up...

Yes, I am, definitely... ;)

-- Norbert
From: Roy Stogner <roystgnr@ic...>  2008-11-11 15:09:11

On Tue, 11 Nov 2008, Benjamin Kirk wrote:

> To keep the code anything resembling efficient, we'd need to make
> sure multiple mapping types are supported at the same time...  I'd
> think we want to use the C1 map *only* on elements with a face or
> edge trace on the boundary of interest.

Are the Lagrange bases so much more efficient that that would be
worthwhile?  I think as long as the majority of our elements are
interior with affine maps we're fine.

-- Roy
From: Norbert Stoop <norbert@st...>  2008-11-11 15:05:05

Kirk, Benjamin (JSC-EG) wrote:

> Phi_ etc... are the basis functions of the fe discretization, psi
> are the mapping functions.
>
> Btw, you are using shells and only solving on the manifold(?), or is
> there an interior volume problem too?

No, it's only solved on the manifold.  You would suggest using
Clough-Tocher elements instead?
From: Norbert Stoop <norbert@st...>  2008-11-11 14:43:54

Roy Stogner wrote:
>
> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>
>> Recently, subdivision surfaces were suggested as an alternative way
>> to construct C1 (and higher) conforming surface meshes for finite
>> element simulations.
>
> Interesting.  I've heard of subdivision elements being used for
> surface mesh refinement, but in a context where the subdivision
> surface was only C1 in the limiting sense; each discrete mesh was
> still faceted.  We could do something like that relatively easily,
> but of course it wouldn't be as accurate unless your formulation is
> insensitive to domain boundary discontinuities.

Hm, I'm not sure if I understand your last comment, but yes, the
surface is C1 only in the limit case.  Given an initial "control" mesh
and knowing the subdivision scheme, one can find a local
parametrization x(xi,eta) of the limit surface.  This is given by a
linear combination of the (limit surface) shape functions.
Derivatives of this parametrization are continuous since they're
formulated in the C1 limiting sense.  If Loop's subdivision scheme is
used, the shape functions are found to be quartic b-splines for the
limit surface.

My understanding is that the only inaccuracy is the quality of the
initial control mesh, but this is not an issue in my case (simulation
of spherical shells).  A reference is this paper here:

  "Subdivision surfaces: A new paradigm for thin-shell finite-element
  analysis", F. Cirak, M. Ortiz, P. Schröder (2000)

The scheme was recently extended for adaptive refinement support.

>> Assuming triangular elements, the subdivision surface approach
>> results in 12 b-spline shape functions per element.  The
>> isoparametric map to real space is a combination of them multiplied
>> with the real space position of the triangle's nodes *and* its next
>> neighbor nodes (the 1-ring of triangles around the element).
>
> This is a problem, but a minor one: the nodal neighbor elements
> always exist as at least ghost elements on even a distributed mesh,
> and I think we've already got some nice Patch API for accessing
> them; it probably wouldn't be hard to write something similar for
> collecting the set of only all nodal neighbors on a surface.

Good.

>> Looking at existing libmesh elements (lagrange, clough), I see that
>> only the ::shape, ::shape_deriv and ::shape_second_deriv are
>> typically overwritten, and the mapping to real space is done by
>> libmesh (fe_base.C, I think).
>
> This would be much harder, because to do the mapping right would
> require a major code refactoring.  Right now, we combine a geometric
> element Elem and a finite element space FE to produce results, and
> it's understood that the mapping is always done by a Lagrange FE.
> We'd probably want to add a mapping element (returned from a factory
> method in Mesh?) to that combination, and I suspect that because of
> the way they calculate shape functions, most of our FE objects would
> not be immediately suitable as mapping objects.

I see.  These mappings are expressed in the phi_map, dphidxi_map etc.,
right?  So, as a dirty hack, *in principle* one could overwrite the
FE::init_shape_functions and others to do the mapping right for this
particular subdivision element.  Is my understanding correct?

Thanks in advance,
Norbert
From: Benjamin Kirk <benjamin.kirk@na...>  2008-11-11 14:42:20

>> Recently, subdivision surfaces were suggested as an alternative way
>> to construct C1 (and higher) conforming surface meshes for finite
>> element simulations.
>
> Interesting.  I've heard of subdivision elements being used for
> surface mesh refinement, but in a context where the subdivision
> surface was only C1 in the limiting sense; each discrete mesh was
> still faceted.  We could do something like that relatively easily,
> but of course it wouldn't be as accurate unless your formulation is
> insensitive to domain boundary discontinuities.

If we had a C1 mapping element I'd think we could project the C0
subdivision surface into the C1 space...  In fact, I think we would
almost always have to do this with any mapping element other than
Lagrange.  (Does anyone know of a surface mesh format which specifies
the coordinates in terms of e.g. Clough-Tocher basis weights?)

It seems like using anything other than Lagrange would involve a
'startup' phase where we declare a system on top of Lagrange-mapped
FEs in the usual way, solve an L2 projection or something to get the
geometry on the desired mapping basis, and then redefine the mesh
somehow, probably with an out-of-core write/restart file...  But
that's just a detail.

>> Assuming triangular elements, the subdivision surface approach
>> results in 12 b-spline shape functions per element.  The
>> isoparametric map to real space is a combination of them multiplied
>> with the real space position of the triangle's nodes *and* its next
>> neighbor nodes (the 1-ring of triangles around the element).
>
> This is a problem, but a minor one: the nodal neighbor elements
> always exist as at least ghost elements on even a distributed mesh,
> and I think we've already got some nice Patch API for accessing
> them; it probably wouldn't be hard to write something similar for
> collecting the set of only all nodal neighbors on a surface.
>
>> Looking at existing libmesh elements (lagrange, clough), I see that
>> only the ::shape, ::shape_deriv and ::shape_second_deriv are
>> typically overwritten, and the mapping to real space is done by
>> libmesh (fe_base.C, I think).

That is correct...

> This would be much harder, because to do the mapping right would
> require a major code refactoring.  Right now, we combine a geometric
> element Elem and a finite element space FE to produce results, and
> it's understood that the mapping is always done by a Lagrange FE.
> We'd probably want to add a mapping element (returned from a factory
> method in Mesh?) to that combination, and I suspect that because of
> the way they calculate shape functions, most of our FE objects would
> not be immediately suitable as mapping objects.

To keep the code anything resembling efficient, we'd need to make sure
multiple mapping types are supported at the same time...  I'd think we
want to use the C1 map *only* on elements with a face or edge trace on
the boundary of interest.

>> As pointed out above, the subdivision elements have a special
>> mapping to real space, which needs to take the neighboring nodes'
>> position into account.  Is there a way to accomplish this in the
>> current code
>
> No.
>
>> or would it be possible to extend it in that way?
>
> Yes, but not easily.
From: Roy Stogner <roystgnr@ic...>  2008-11-11 13:36:51

On Tue, 11 Nov 2008, Norbert Stoop wrote:

> Recently, subdivision surfaces were suggested as an alternative way
> to construct C1 (and higher) conforming surface meshes for finite
> element simulations.

Interesting.  I've heard of subdivision elements being used for
surface mesh refinement, but in a context where the subdivision
surface was only C1 in the limiting sense; each discrete mesh was
still faceted.  We could do something like that relatively easily, but
of course it wouldn't be as accurate unless your formulation is
insensitive to domain boundary discontinuities.

> Assuming triangular elements, the subdivision surface approach
> results in 12 b-spline shape functions per element.  The
> isoparametric map to real space is a combination of them multiplied
> with the real space position of the triangle's nodes *and* its next
> neighbor nodes (the 1-ring of triangles around the element).

This is a problem, but a minor one: the nodal neighbor elements always
exist as at least ghost elements on even a distributed mesh, and I
think we've already got some nice Patch API for accessing them; it
probably wouldn't be hard to write something similar for collecting
the set of only all nodal neighbors on a surface.

> Looking at existing libmesh elements (lagrange, clough), I see that
> only the ::shape, ::shape_deriv and ::shape_second_deriv are
> typically overwritten, and the mapping to real space is done by
> libmesh (fe_base.C, I think).

This would be much harder, because to do the mapping right would
require a major code refactoring.  Right now, we combine a geometric
element Elem and a finite element space FE to produce results, and
it's understood that the mapping is always done by a Lagrange FE.
We'd probably want to add a mapping element (returned from a factory
method in Mesh?) to that combination, and I suspect that because of
the way they calculate shape functions, most of our FE objects would
not be immediately suitable as mapping objects.

> As pointed out above, the subdivision elements have a special
> mapping to real space, which needs to take the neighboring nodes'
> position into account.  Is there a way to accomplish this in the
> current code

No.

> or would it be possible to extend it in that way?

Yes, but not easily.

-- Roy
From: Norbert Stoop <norbert@st...>  2008-11-11 10:49:37

Hi,

Recently, subdivision surfaces were suggested as an alternative way to
construct C1 (and higher) conforming surface meshes for finite element
simulations.  Now, I'm wondering how hard it would be to implement
such elements in libmesh:

Assuming triangular elements, the subdivision surface approach results
in 12 b-spline shape functions per element.  The isoparametric map to
real space is a combination of them multiplied with the real space
position of the triangle's nodes *and* its next neighbor nodes (the
1-ring of triangles around the element).

Looking at existing libmesh elements (lagrange, clough), I see that
only the ::shape, ::shape_deriv and ::shape_second_deriv are typically
overwritten, and the mapping to real space is done by libmesh
(fe_base.C, I think).

As pointed out above, the subdivision elements have a special mapping
to real space, which needs to take the neighboring nodes' position
into account.  Is there a way to accomplish this in the current code
or would it be possible to extend it in that way?

Thanks in advance,
Norbert
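[Editor's note: once the isoparametric map above is in place, the quantities a shell formulation actually consumes (tangents and the surface normal at a quadrature point) follow from the same weighted-sum pattern, using shape-function derivatives instead of values.  A standalone sketch under that assumption; the derivative values dN_nb are taken as precomputed inputs, and nothing here is libMesh API:]

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3 &a, const Vec3 &b) {
  return { a.y*b.z - a.z*b.y,
           a.z*b.x - a.x*b.z,
           a.x*b.y - a.y*b.x };
}

// Tangent of the isoparametric map at (xi,eta):
//   t = sum_nb dN_nb * x_nb,
// where dN holds either the d/dxi or the d/deta derivatives of the
// 12 b-spline shape functions evaluated at the quadrature point, and
// x_nb the 1-ring control-point positions.
Vec3 tangent(const std::vector<double> &dN,
             const std::vector<Vec3> &x_nb) {
  assert(dN.size() == x_nb.size());
  Vec3 t{0.0, 0.0, 0.0};
  for (std::size_t nb = 0; nb < dN.size(); ++nb) {
    t.x += dN[nb] * x_nb[nb].x;
    t.y += dN[nb] * x_nb[nb].y;
    t.z += dN[nb] * x_nb[nb].z;
  }
  return t;
}
// The element normal (and the area Jacobian, via its length) is then
// n = cross(t_xi, t_eta).
```

Since the shape derivatives of the limit surface are themselves continuous, the normal computed this way is continuous across element boundaries, which is exactly what the square-integrable-curvature requirement needs.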