From: Norbert Stoop <norbert@st...> - 2008-11-11 10:49:37

Hi,

Recently, subdivision surfaces were suggested as an alternative way to construct C1 (and higher) conforming surface meshes for finite element simulations. Now, I'm wondering how hard it would be to implement such elements in libmesh: Assuming triangular elements, the subdivision surface approach results in 12 b-spline shape functions per element. The isoparametric map to real space is a combination of them multiplied with the real-space position of the triangle's nodes *and* its next-neighbor nodes (the 1-ring of triangles around the element).

Looking at existing libmesh elements (lagrange, clough), I see that only the ::shape, ::shape_deriv and ::shape_second_deriv are typically overwritten, and the mapping to real space is done by libmesh (fe_base.C, I think). As pointed out above, the subdivision elements have a special mapping to real space, which needs to take the neighboring nodes' position into account. Is there a way to accomplish this in the current code, or would it be possible to extend it in that way?

Thanks in advance,
Norbert
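[Editor's note: the isoparametric map described above can be sketched in standalone C++. This is an illustration only, not libmesh code: it assumes the 12 basis values N[i](xi,eta) of a regular (valence-6) Loop patch have already been evaluated elsewhere (e.g. via Stam's scheme), and only shows how they weight the element's nodes plus its 1-ring.]

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

struct Point { double x, y, z; };

// Sketch of the isoparametric limit-surface map for one regular Loop
// patch: 12 basis values N[i](xi,eta) -- assumed evaluated elsewhere --
// weight the positions of the element's 3 nodes plus the 9 nodes of
// its 1-ring:  x(xi,eta) = sum_i N_i(xi,eta) * x_i.
Point map_to_surface(const std::array<double, 12>& N,
                     const std::array<Point, 12>& control_points)
{
  Point p{0.0, 0.0, 0.0};
  for (std::size_t i = 0; i < 12; ++i) {
    p.x += N[i] * control_points[i].x;
    p.y += N[i] * control_points[i].y;
    p.z += N[i] * control_points[i].z;
  }
  return p;
}
```

Since the basis is a partition of unity, feeding in twelve copies of the same control point must return that point, which is a handy sanity check.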
From: Roy Stogner <roystgnr@ic...> - 2008-11-11 13:36:51

On Tue, 11 Nov 2008, Norbert Stoop wrote:

> Recently, subdivision surfaces were suggested as an alternative way to construct C1 (and higher) conforming surface meshes for finite element simulations.

Interesting. I've heard of subdivision elements being used for surface mesh refinement, but in a context where the subdivision surface was only C1 in the limiting sense; each discrete mesh was still faceted. We could do something like that relatively easily, but of course it wouldn't be as accurate unless your formulation is insensitive to domain boundary discontinuities.

> Assuming triangular elements, the subdivision surface approach results in 12 b-spline shape functions per element. The isoparametric map to real space is a combination of them multiplied with the real-space position of the triangle's nodes *and* its next-neighbor nodes (the 1-ring of triangles around the element).

This is a problem, but a minor one -- the nodal neighbor elements always exist as at least ghost elements on even a distributed mesh, and I think we've already got some nice Patch API for accessing them; it probably wouldn't be hard to write something similar for collecting the set of all nodal neighbors on a surface.

> Looking at existing libmesh elements (lagrange, clough), I see that only the ::shape, ::shape_deriv and ::shape_second_deriv are typically overwritten, and the mapping to real space is done by libmesh (fe_base.C, I think).

This would be much harder, because to do the mapping right would require a major code refactoring. Right now, we combine a geometric element Elem and a finite element space FE to produce results, and it's understood that the mapping is always done by a Lagrange FE. We'd probably want to add a mapping element (returned from a factory method in Mesh?) to that combination, and I suspect that because of the way they calculate shape functions, most of our FE objects would not be immediately suitable as mapping objects.

> As pointed out above, the subdivision elements have a special mapping to real space, which needs to take the neighboring nodes' position into account. Is there a way to accomplish this in the current code

No.

> or would it be possible to extend it in that way?

Yes, but not easily.

-- Roy
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-11-11 14:42:20

>> Recently, subdivision surfaces were suggested as an alternative way to construct C1 (and higher) conforming surface meshes for finite element simulations.
>
> Interesting. I've heard of subdivision elements being used for surface mesh refinement, but in a context where the subdivision surface was only C1 in the limiting sense; each discrete mesh was still faceted. We could do something like that relatively easily, but of course it wouldn't be as accurate unless your formulation is insensitive to domain boundary discontinuities.

If we had a C1 mapping element I'd think we could project the C0 subdivision surface into the C1 space... In fact, I think we would almost always have to do this with any mapping element other than Lagrange. (Does anyone know of a surface mesh format which specifies the coordinates in terms of e.g. Clough-Tocher basis weights?)

It seems like using anything other than Lagrange would involve a 'startup' phase where we declare a system on top of Lagrange-mapped FEs in the usual way, solve an L2 projection or something to get the geometry on the desired mapping basis, and then redefine the mesh somehow, probably with an out-of-core write/restart file... But that's just a detail.

>> Assuming triangular elements, the subdivision surface approach results in 12 b-spline shape functions per element. The isoparametric map to real space is a combination of them multiplied with the real-space position of the triangle's nodes *and* its next-neighbor nodes (the 1-ring of triangles around the element).
>
> This is a problem, but a minor one -- the nodal neighbor elements always exist as at least ghost elements on even a distributed mesh, and I think we've already got some nice Patch API for accessing them; it probably wouldn't be hard to write something similar for collecting the set of all nodal neighbors on a surface.

>> Looking at existing libmesh elements (lagrange, clough), I see that only the ::shape, ::shape_deriv and ::shape_second_deriv are typically overwritten, and the mapping to real space is done by libmesh (fe_base.C, I think).

That is correct...

> This would be much harder, because to do the mapping right would require a major code refactoring. Right now, we combine a geometric element Elem and a finite element space FE to produce results, and it's understood that the mapping is always done by a Lagrange FE. We'd probably want to add a mapping element (returned from a factory method in Mesh?) to that combination, and I suspect that because of the way they calculate shape functions, most of our FE objects would not be immediately suitable as mapping objects.

To keep the code anything resembling efficient, we'd need to make sure multiple mapping types are supported at the same time... I'd think we want to use the C1 map *only* on elements with a face or edge trace on the boundary of interest.

>> As pointed out above, the subdivision elements have a special mapping to real space, which needs to take the neighboring nodes' position into account. Is there a way to accomplish this in the current code
>
> No.
>
>> or would it be possible to extend it in that way?
>
> Yes, but not easily.
From: Roy Stogner <roystgnr@ic...> - 2008-11-11 15:09:11

On Tue, 11 Nov 2008, Benjamin Kirk wrote:

> To keep the code anything resembling efficient, we'd need to make sure multiple mapping types are supported at the same time... I'd think we want to use the C1 map *only* on elements with a face or edge trace on the boundary of interest.

Are the Lagrange bases so much more efficient that that would be worthwhile? I think as long as the majority of our elements are interior with affine maps we're fine.

-- Roy
From: Derek Gaston <friedmud@gm...> - 2008-11-11 16:32:41

(Note... I started this email earlier this morning... but just now finished it, and there are like 10 new emails in this thread, so forgive me if I'm off-topic now)

On Nov 11, 2008, at 7:42 AM, Benjamin Kirk wrote:

> If we had a C1 mapping element I'd think we could project the C0 subdivision surface into the C1 space... In fact, I think we would almost always have to do this with any mapping element other than Lagrange. (Does anyone know of a surface mesh format which specifies the coordinates in terms of e.g. Clough-Tocher basis weights?)

At Sandia, when we were doing higher order geometry on surfaces so that we could refine the mesh and "pop" to the higher order surface... we had to invent our own "auxiliary" file that we read in with the mesh. That file defined the extra degrees of freedom per edge / node that described the higher order geometry (essentially tangents and normals).

> It seems like using anything other than Lagrange would involve a 'startup' phase where we declare a system on top of Lagrange-mapped FEs in the usual way, solve an L2 projection or something to get the geometry on the desired mapping basis, and then redefine the mesh somehow, probably with an out-of-core write/restart file... But that's just a detail.

This seems reasonable to me.

> To keep the code anything resembling efficient, we'd need to make sure multiple mapping types are supported at the same time... I'd think we want to use the C1 map *only* on elements with a face or edge trace on the boundary of interest.

Yes... this is exactly what we did at Sandia... maintained a higher order geometry representation on the boundary only. We had two different methods for getting that higher order geometry into the code. The simplest is that we could read a second order mesh generated from Cubit... then essentially throw away all of the second order information on the interior and only keep the exterior elements as second order.

The second was to postprocess a mesh from Cubit using a third party program (that came from Cubit) that would generate that auxiliary file with the descriptions of normals and tangents on the nodes / edges on the boundary... then we would use this information to "pop" new nodes to the "higher order" boundary when doing refinement.

I hope any of that was relevant.

Derek
From: Norbert Stoop <norbert@st...> - 2008-11-11 14:43:54

Roy Stogner wrote:

> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>
>> Recently, subdivision surfaces were suggested as an alternative way to construct C1 (and higher) conforming surface meshes for finite element simulations.
>
> Interesting. I've heard of subdivision elements being used for surface mesh refinement, but in a context where the subdivision surface was only C1 in the limiting sense; each discrete mesh was still faceted. We could do something like that relatively easily, but of course it wouldn't be as accurate unless your formulation is insensitive to domain boundary discontinuities.

Hm, I'm not sure if I understand your last comment, but yes, the surface is C1 only in the limit case. Given an initial "control" mesh and knowing the subdivision scheme, one can find a local parametrization x(xi,eta) of the limit surface. This is given by a linear combination of the (limit surface) shape functions. Derivatives of this parametrization are continuous since they're formulated in the C1 limiting sense. If Loop's subdivision scheme is used, the shape functions are found to be quartic b-splines for the limit surface. My understanding is that the only inaccuracy is the quality of the initial control mesh, but this is not an issue in my case (simulation of spherical shells). A reference is this paper here:

Subdivision surfaces: A new paradigm for thin-shell finite-element analysis, F. Cirak, M. Ortiz, P. Schröder (2000)

The scheme was recently extended for adaptive refinement support.

>> Assuming triangular elements, the subdivision surface approach results in 12 b-spline shape functions per element. The isoparametric map to real space is a combination of them multiplied with the real-space position of the triangle's nodes *and* its next-neighbor nodes (the 1-ring of triangles around the element).
>
> This is a problem, but a minor one -- the nodal neighbor elements always exist as at least ghost elements on even a distributed mesh, and I think we've already got some nice Patch API for accessing them; it probably wouldn't be hard to write something similar for collecting the set of all nodal neighbors on a surface.

Good.

>> Looking at existing libmesh elements (lagrange, clough), I see that only the ::shape, ::shape_deriv and ::shape_second_deriv are typically overwritten, and the mapping to real space is done by libmesh (fe_base.C, I think).
>
> This would be much harder, because to do the mapping right would require a major code refactoring. Right now, we combine a geometric element Elem and a finite element space FE to produce results, and it's understood that the mapping is always done by a Lagrange FE. We'd probably want to add a mapping element (returned from a factory method in Mesh?) to that combination, and I suspect that because of the way they calculate shape functions, most of our FE objects would not be immediately suitable as mapping objects.

I see. These mappings are expressed in the phi_map, dphidxi_map etc., right? So, as a dirty hack, *in principle* one could overwrite the FE::init_shape_functions and others to do the mapping right for this particular subdivision element. Is my understanding correct?

Thanks in advance,
Norbert
From: Roy Stogner <roystgnr@ic...> - 2008-11-11 15:19:04

On Tue, 11 Nov 2008, Norbert Stoop wrote:

> Roy Stogner wrote:
>
>> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>>
>>> Recently, subdivision surfaces were suggested as an alternative way to construct C1 (and higher) conforming surface meshes for finite element simulations.
>>
>> Interesting. I've heard of subdivision elements being used for surface mesh refinement, but in a context where the subdivision surface was only C1 in the limiting sense; each discrete mesh was still faceted. We could do something like that relatively easily, but of course it wouldn't be as accurate unless your formulation is insensitive to domain boundary discontinuities.
>
> Hm, I'm not sure if I understand your last comment,

That's because I didn't understand your context -- I was picturing a volume domain for which you wanted a C1 boundary (which can be critical for some fourth order problems); you're talking about a 2D manifold domain in 3D.

> but yes, the surface is C1 only in the limit case.

In that case you don't need an exotic mapping at all; all you need is a function to correctly "snap" points to their subdivision-defined positions when you refine the surface. That's something we've wanted to do (for boundaries, if not manifolds) for a while, and it may be hard to come up with a satisfactory user API for it, but once the API's defined the implementation would be relatively simple.

> These mappings are expressed in the phi_map, dphidxi_map etc., right? So, as a dirty hack, *in principle* one could overwrite the FE::init_shape_functions and others to do the mapping right for this particular subdivision element. Is my understanding correct?

If you want a C1 surface even in the discretized case, then fixing the _map values would be necessary. If you're happy with a C0 discretized surface that converges (in some norms) to a C1 surface, then all you need to do is snap mid-edge points (and mid-face points on quads) to their proper places after all the elements touching them have been refined. That should properly be done by giving Elem::refine() access to some abstract base class that can return enough information about the manifold or domain boundary shape, but you could do it (as a dirty hack) just by looping over nodes on active elements after each refinement and adjusting them for consistency with the subdivision surface defined by their parent elements.

-- Roy
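[Editor's note: the "snap after refinement" fallback described above depends on the subdivision rule for new edge points. A minimal standalone sketch, assuming Loop's scheme with its standard edge mask of 3/8, 3/8, 1/8, 1/8 and a conforming interior edge -- this is not libmesh's Elem::refine():]

```cpp
#include <cassert>
#include <cmath>

struct Point { double x, y, z; };

// Loop's edge rule (stated here as an assumption about the scheme in
// use): the refined mid-edge node is not the plain midpoint but
//   3/8 (a + b) + 1/8 (c + d),
// with a, b the edge endpoints and c, d the opposite vertices of the
// two adjacent triangles.  A post-refinement "snap" pass would move
// each newly created mid-edge node to this position.
Point loop_edge_point(const Point& a, const Point& b,
                      const Point& c, const Point& d)
{
  auto blend = [](double pa, double pb, double pc, double pd)
  { return 0.375 * (pa + pb) + 0.125 * (pc + pd); };
  return { blend(a.x, b.x, c.x, d.x),
           blend(a.y, b.y, c.y, d.y),
           blend(a.z, b.z, c.z, d.z) };
}
```

For a symmetric planar configuration the rule reproduces the midpoint, so the snap only moves nodes where the mesh actually curves.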
From: Norbert Stoop <norbert@st...> - 2008-11-11 16:56:28

Roy Stogner wrote:

>>> Interesting. I've heard of subdivision elements being used for surface mesh refinement, but in a context where the subdivision surface was only C1 in the limiting sense; each discrete mesh was still faceted. We could do something like that relatively easily, but of course it wouldn't be as accurate unless your formulation is insensitive to domain boundary discontinuities.
>>
>> Hm, I'm not sure if I understand your last comment,
>
> That's because I didn't understand your context -- I was picturing a volume domain for which you wanted a C1 boundary (which can be critical for some fourth order problems); you're talking about a 2D manifold domain in 3D.

Yeah, sorry... Let me specify the physical problem: It is all about the simulation of thin shells, where the shell is just a 2D manifold embedded in 3D. For the physical description the curvature must be square integrable, requiring the manifold to be C1.

>> but yes, the surface is C1 only in the limit case.
>
> In that case you don't need an exotic mapping at all; all you need is a function to correctly "snap" points to their subdivision-defined positions when you refine the surface. That's something we've wanted to do (for boundaries, if not manifolds) for a while, and it may be hard to come up with a satisfactory user API for it, but once the API's defined the implementation would be relatively simple.
>
>> These mappings are expressed in the phi_map, dphidxi_map etc., right? So, as a dirty hack, *in principle* one could overwrite the FE::init_shape_functions and others to do the mapping right for this particular subdivision element. Is my understanding correct?
>
> If you want a C1 surface even in the discretized case, then fixing the _map values would be necessary. If you're happy with a C0 discretized surface that converges (in some norms) to a C1 surface, then all you need to do is snap mid-edge points (and mid-face points on quads) to their proper places after all the elements touching them have been refined. That should properly be done by giving Elem::refine() access to some abstract base class that can return enough information about the manifold or domain boundary shape, but you could do it (as a dirty hack) just by looping over nodes on active elements after each refinement and adjusting them for consistency with the subdivision surface defined by their parent elements.

Ok, that's what I assumed.

Ah, wait -- the whole idea is to have an initial mesh which is *never* refined. It merely acts as a "control mesh" for the limiting surface. Each node of the control mesh corresponds to an (analytically) known point on the limit surface. This does not have to be the same spatial location, as some subdivision schemes change the position of existing nodes. As shown below, calculations will be made in the limit surface. But since this nodal correspondence is one-to-one, nodal properties can as well be treated as if they belonged to the control points.

In fact, this map can also be constructed for any point in a triangle on the control mesh, needed for the quadrature points. It is given by

x(xi,eta) = sum_nb N_nb(xi,eta) * x_nb

where the sum is over the neighbor control nodes of the triangle in question and x_nb is the position of these control points. N_nb are the shape functions of the limit surface, which are given analytically depending on the chosen subdivision scheme. E.g., x(0,0) gives the limit surface position of the triangle's lower left control point, x(0.5,0.5) of the barycentric middle, etc. In other words, we have a parametrization of the limit surface in terms of only the positions of neighboring control points x_nb. This parametrization is like any isoparametric map from the standard triangle to the manifold. C1 continuity is given since the parametrization is "non-local" in the sense that neighboring control points are included in the sum above.

To summarize, we abuse the traditional mesh as a control mesh. We define a parametrization of the limit surface which naturally gives us the needed mathematical objects such as derivatives, surface patches etc. Since we *know* where each control point converges to in the limit, we can assign back calculated nodal values to the control points and assemble the system as usual.

Hope this helps to clarify...
Norbert
From: Roy Stogner <roystgnr@ic...> - 2008-11-11 17:02:13

On Tue, 11 Nov 2008, Norbert Stoop wrote:

> To summarize, we abuse the traditional mesh as a control mesh. We define a parametrization of the limit surface which naturally gives us the needed mathematical objects such as derivatives, surface patches etc. Since we *know* where each control point converges to in the limit, we can assign back calculated nodal values to the control points and assemble the system as usual.
>
> Hope this helps to clarify...

Okay, I think so. Yes, there's no getting around overriding the _map data if you want to do this.

-- Roy
From: Derek Gaston <friedmud@gm...> - 2008-11-11 17:27:04

On Nov 11, 2008, at 10:02 AM, Roy Stogner wrote:

> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>
>> To summarize, we abuse the traditional mesh as a control mesh. We define a parametrization of the limit surface which naturally gives us the needed mathematical objects such as derivatives, surface patches etc. Since we *know* where each control point converges to in the limit, we can assign back calculated nodal values to the control points and assemble the system as usual.
>>
>> Hope this helps to clarify...

It seems like if you could use Clough-Tochers (or Hermites) as the map then that would solve this problem... but as you mentioned earlier that's probably not doable with our current architecture.

Derek
From: Norbert Stoop <norbert@st...> - 2008-11-14 14:18:38

Derek Gaston wrote:

> On Nov 11, 2008, at 10:02 AM, Roy Stogner wrote:
>
>> On Tue, 11 Nov 2008, Norbert Stoop wrote:
>>
>>> To summarize, we abuse the traditional mesh as a control mesh. We define a parametrization of the limit surface which naturally gives us the needed mathematical objects such as derivatives, surface patches etc. Since we *know* where each control point converges to in the limit, we can assign back calculated nodal values to the control points and assemble the system as usual.
>>>
>>> Hope this helps to clarify...
>
> It seems like if you could use Clough-Tochers (or Hermites) as the map then that would solve this problem... but as you mentioned earlier that's probably not doable with our current architecture.

From an implementation point of view, the subdivision approach does not require this map, since you directly construct the element in the global system (as C1). The only map you have and need is the parametrization X(xi,eta), where X maps to the C1 surface in physical space. The Jacobian at some point P(xi,eta) on the (C1!) element is then calculated from two basis vectors a_xi and a_eta of the tangent space at P, according to differential geometry:

jac = |a_xi x a_eta|

Since you have X(xi,eta), the a_xi and a_eta follow by just differentiating X by xi or eta, respectively.

To put it in a sloppy way: Instead of defining everything on the master element and then finding a transformation into global space, we *define* the C1 element in global space and *there* construct the Jacobian and all the other properties. We sort of go the other way around from how one typically does FE.

Norbert
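[Editor's note: the Jacobian formula above is easy to state in code. A self-contained sketch using plain structs (not libmesh types), where a_xi and a_eta are the tangent vectors obtained by differentiating X(xi,eta):]

```cpp
#include <cassert>
#include <cmath>

struct Vec { double x, y, z; };

// Cross product of the two tangent (covariant basis) vectors; its
// length is the surface Jacobian  jac = |a_xi x a_eta|,  and
// normalizing it would give the surface normal.
Vec cross(const Vec& u, const Vec& v)
{
  return { u.y * v.z - u.z * v.y,
           u.z * v.x - u.x * v.z,
           u.x * v.y - u.y * v.x };
}

double surface_jacobian(const Vec& a_xi, const Vec& a_eta)
{
  const Vec n = cross(a_xi, a_eta);
  return std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
}
```

For orthogonal tangents the Jacobian is simply the product of their lengths, which is a quick way to sanity-check the sign conventions.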
From: Norbert Stoop <norbert@st...> - 2008-11-18 16:27:40

Hi again,

Okay, the basic functionality for subdivision surface FEM should be implemented now, but I have a question regarding proper integration into libmesh: These elements need some additional variables to store subdivision surface properties. For example, a vector of the 1-ring of neighbors is needed. Some of these properties also need to be accessible from the application code, for example to create a local coordinate system from the surface tangents.

Now, which class should I derive the elements from? Somehow, I feel that FE<2,SUBDIV> does not make too much sense since it has no containers for the extra variables. On the other hand, all other libmesh machinery relies on fe.h only and it's probably not the idea to change that.

Thanks in advance,
Norbert

Norbert Stoop wrote:
> Hi,
>
> Recently, subdivision surfaces were suggested as an alternative way to construct C1 (and higher) conforming surface meshes for finite element simulations. Now, I'm wondering how hard it would be to implement such elements in libmesh: Assuming triangular elements, the subdivision surface approach results in 12 b-spline shape functions per element. The isoparametric map to real space is a combination of them multiplied with the real-space position of the triangle's nodes *and* its next-neighbor nodes (the 1-ring of triangles around the element).
> Looking at existing libmesh elements (lagrange, clough), I see that only the ::shape, ::shape_deriv and ::shape_second_deriv are typically overwritten, and the mapping to real space is done by libmesh (fe_base.C, I think).
> As pointed out above, the subdivision elements have a special mapping to real space, which needs to take the neighboring nodes' position into account. Is there a way to accomplish this in the current code or would it be possible to extend it in that way?
>
> Thanks in advance,
> Norbert
From: Roy Stogner <roystgnr@ic...> - 2008-11-18 16:52:34

On Tue, 18 Nov 2008, Norbert Stoop wrote:

> Okay, the basic functionality for subdivision surface FEM should be implemented now, but I have a question regarding proper integration into libmesh: These elements need some additional variables to store subdivision surface properties. For example, a vector of the 1-ring of neighbors is needed.

Does this have to be stored as a vector? Or stored at all? Patch::add_point_neighbors() should do what you want on the fly. If it's worthwhile, a caching version of that could be created.

> Some of these properties also need to be accessible from the application code, for example to create a local coordinate system from the surface tangents.

Could you describe this in more detail? Do you just need the FE object to be able to fill in xyz/dxyz/d2xyz properly, or is it more than that?

> Now, which class should I derive the elements from? Somehow, I feel that FE<2,SUBDIV> does not make too much sense since it has no containers for the extra variables.

I agree, but for a different reason -- it would be nice to be able to mix and match subdivision surfaces with different FE types, right? If we can factor out the "all mappings are Lagrange" assumption properly, that would be possible.

> On the other hand, all other libmesh machinery relies on fe.h only and it's probably not the idea to change that.

I think we could make the change sufficiently "hidden" if we just knew how to best represent it.

-- Roy
From: Norbert Stoop <norbert@st...> - 2008-11-18 17:41:49

Roy Stogner wrote:

> On Tue, 18 Nov 2008, Norbert Stoop wrote:
>
>> These elements need some additional variables to store subdivision surface properties. For example, a vector of the 1-ring of neighbors is needed.
>
> Does this have to be stored as a vector? Or stored at all? Patch::add_point_neighbors() should do what you want on the fly. If it's worthwhile, a caching version of that could be created.

Hm, the patch needs a special numbering scheme, which should look like this:

    N+4 --- N+1 --- N+2
    /  \   /  \   /  \
   /    \ /    \ /    \
  N+5 -- N ----- 1 -- N+3
    \   / \  e  / \   /
     \ /   \   /   \ /
     N-1 -- 0 ----- 2
       \   / \     /
        \ /   \   /
         5 - 4 - 3

where e is the element we consider and N is the valence of vertex 0 (which is irregular in this example).

>> Some of these properties also need to be accessible from the application code, for example to create a local coordinate system from the surface tangents.
>
> Could you describe this in more detail? Do you just need the FE object to be able to fill in xyz/dxyz/d2xyz properly, or is it more than that?

What I need in particular, and probably all people solving stuff on a manifold, is the local surface metric at the quadrature points. To get this, one needs the C1 parametrization x(xi,eta). The derivatives wrt xi and eta give two tangents, from which the normal can be constructed. Together they basically define the metric.

The problem is, x(xi,eta) is given as a sum of phi(i)*nodal_xyz_position(i), where i goes over all vertices of the above patch. Now, either the finite element class exposes an interface to the parametrization x(xi,eta) (and its derivatives) and the metric, or the node patch is exposed and the user constructs what actually is needed (since the phi's are already at hand).

Norbert
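[Editor's note: the surface metric asked for above is the first fundamental form built from the two tangents. A minimal sketch under the same assumption that dx/dxi and dx/deta are already available (e.g. from the FE object's dxyz data); the struct names are illustrative, not a libmesh interface:]

```cpp
#include <cassert>
#include <cmath>

struct Vec { double x, y, z; };

double dot(const Vec& u, const Vec& v)
{ return u.x * v.x + u.y * v.y + u.z * v.z; }

// First fundamental form (the surface metric) at one quadrature point,
// built from the tangents a_xi = dx/dxi and a_eta = dx/deta:
//   g = [ g11 g12 ]
//       [ g12 g22 ]
struct Metric { double g11, g12, g22; };

Metric first_fundamental_form(const Vec& a_xi, const Vec& a_eta)
{
  return { dot(a_xi, a_xi), dot(a_xi, a_eta), dot(a_eta, a_eta) };
}
```

Orthonormal tangents give the identity metric, i.e. a locally flat, undistorted parametrization.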
From: Roy Stogner <roystgnr@ic...> - 2008-11-18 18:06:21

On Tue, 18 Nov 2008, Norbert Stoop wrote:

> Roy Stogner wrote:
>
>> Does this have to be stored as a vector? Or stored at all? Patch::add_point_neighbors() should do what you want on the fly. If it's worthwhile, a caching version of that could be created.
>
> Hm, the patch needs a special numbering scheme, which should look like this:
>
>     N+4 --- N+1 --- N+2
>     /  \   /  \   /  \
>    /    \ /    \ /    \
>   N+5 -- N ----- 1 -- N+3
>     \   / \  e  / \   /
>      \ /   \   /   \ /
>      N-1 -- 0 ----- 2
>        \   / \     /
>         \ /   \   /
>          5 - 4 - 3
>
> where e is the element we consider and N is the valence of vertex 0 (which is irregular in this example).

Ah -- then I was completely misunderstanding what you had said before about irregular vertices; I had hanging nodes in mind.

What do you do about nonconforming adaptivity? You can probably ignore more refined neighbors, but what about coarser neighbors?

You couldn't get this scheme out of a Patch set very easily. But because you've got more of a special case problem (all triangles at the same level) you could get this kind of patch more efficiently: find the neighbor at a side, find which side of his you are, decrement modulo 3 to get the new side, and repeat until you get back to the original element. Do that for all three original sides.

>> Could you describe this in more detail? Do you just need the FE object to be able to fill in xyz/dxyz/d2xyz properly, or is it more than that?
>
> What I need in particular, and probably all people solving stuff on a manifold, is the local surface metric at the quadrature points. To get this, one needs the C1 parametrization x(xi,eta). The derivatives wrt xi and eta give two tangents, from which the normal can be constructed. Together they basically define the metric.

That sounds like what xyz and dxyz are.

> The problem is, x(xi,eta) is given as a sum of phi(i)*nodal_xyz_position(i) where i goes over all vertices of the above patch.
>
> Now, either the finite element class exposes an interface to the parametrization x(xi,eta) (and its derivatives)

Already done at the quadrature points.

> and the metric,

This we'd probably want the user code to calculate from dxyz.

> or the node patch is exposed and the user constructs what actually is needed (since the phi's are already at hand).

That may be what you fall back on, but if it could be put into the library cleanly that would be nice.

-- Roy
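[Editor's note: the side-walking recipe above can be sketched with a hand-rolled triangle structure (hypothetical, not libmesh's Elem API). Each triangle stores its neighbor across side s, and `(t + 2) % 3` implements the "decrement modulo 3" step:]

```cpp
#include <cassert>
#include <vector>

// Hypothetical minimal mesh: v holds CCW vertex ids, neighbor[s] is
// the element across the side between v[s] and v[(s+1)%3], or -1.
struct Tri { int v[3]; int neighbor[3]; };

// Walk around the vertex shared by element `start` and its side
// `side`: cross to the neighbor, find the side we came through,
// decrement it modulo 3, and repeat until we return to `start`.
std::vector<int> one_ring(const std::vector<Tri>& tris, int start, int side)
{
  std::vector<int> ring;
  int c = start, s = side;
  do {
    const int next = tris[c].neighbor[s];
    assert(next >= 0);                        // sketch assumes a closed ring
    int t = 0;
    while (tris[next].neighbor[t] != c) ++t;  // side of `next` facing us
    s = (t + 2) % 3;                          // "decrement modulo 3"
    c = next;
    ring.push_back(c);
  } while (c != start);
  return ring;
}
```

On a fan of four triangles sharing an interior vertex, starting from any element the walk visits all four and terminates back at the start, so the loop needs no extra bookkeeping.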
From: Norbert Stoop <norbert@st...> - 2008-11-18 22:28:53

This is from my phone, sorry for any typos.  Norbert Stoop http://tsonny.ch/ On 18.11.2008, at 19:06, Roy Stogner <roystgnr@...> wrote: > > > On Tue, 18 Nov 2008, Norbert Stoop wrote: > >> Roy Stogner wrote: >> >>> Does this have to be stored as a vector? Or stored at all? >>> Patch::add_point_neighbors() should do what you want on the fly. If >>> it's worthwhile a caching version of that could be created. >> >> Hm, the patch needs a special numbering scheme, which should look >> like this >> >> N+4  N+1  N+2 >> / \ / \ / \ >> / \ / \ / \ >> N+5  N  1  N+3 >> \ / \ e / \ / >> \ / \ / \ / >> N1 0  2 >> \ /\ / >> \ /  \ / >> 5 4 3 >> >> where e is the element we consider and N is the valence of vertex 0 >> (which is irregular in this example). > > Ah  then I was completely misunderstanding what you had said before > about irregular vertices; I had hanging nodes in mind. > > What do you do about nonconforming adaptivity? You can probably > ignore more refined neighbors, but what about coarser neighbors? I recently found a Paper which deals with adaptive refinement. I can send you the ref when i'm back at my computer. Instead of the mesh the basis function space is somehow enlarged. They claim it is way easier than traditional amr.. In other words, i dont care about amr at the moment. > You couldn't get this scheme out of a Patch set very easily. But > because you've got more of a special case problem (all triangles at > the same level) you could get this kind of patch more efficiently: > find the neighbor at a side, find which side of his you are, decrement > modulo 3 to get the new side, and repeat until you get back to the > original element. Do that for all three original sides. Yep, thats what i do right now in reinit() >>> Could you describe this in more detail? Do you just need the FE >>> object to be able to fill in xyz/dxyz/d2xyz properly, or is it more >>> than that? 
>>
>> What I need in particular, and probably all people solving stuff on
>> a manifold, is the local surface metric at the quadrature points.
>> To get this, one needs the C1 parametrization x(xi,eta). The
>> derivatives wrt xi and eta give two tangents, from which the normal
>> can be constructed. Together they basically define the metric.
>
> That sounds like what xyz and dxyz are.

Ah, lacking a proper browser I cannot find the precise definition of
xyz, but if it is meant as the map from (xi,eta) to xyz then this is
definitely the right place to store it...

>> The problem is, x(xi,eta) is given as a sum of
>> phi(i)*nodal_xyz_position(i) where i goes over all vertices of the
>> above patch.
>>
>> Now, either the finite element class exposes an interface to the
>> parametrization x(xi,eta) (and its derivatives)
>
> Already done at the quadrature points.
>
>> and the metric,
>
> This we'd probably want the user code to calculate from dxyz.
>
>> or the node patch is exposed and the user constructs what actually
>> is needed (since the phi's are already at hand).
>
> That may be what you fall back on, but if it could be put into the
> library cleanly that would be nice.

Provided I can put the metric into xyz, I could derive from FE<> and
cache the current element patch, valences etc. in global scope, like
is done for the Clough element coefficients. Is my understanding
correct, and would this be acceptable?

Thanks for your help!
Norbert
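The side-walking patch collection Roy describes (and Norbert implements in reinit()) can be sketched on a toy mesh structure. Here `Tri` and `ring_around` are illustrative stand-ins, not libMesh API, and instead of tracking side indices modulo 3 the code simply searches for the corner of each neighbor that holds the center vertex:

```cpp
#include <array>
#include <cassert>
#include <vector>

// Toy mesh structure, not libMesh API: each triangle stores its three
// vertex ids (counter-clockwise) and the neighbor element across each
// side i (the edge v[i] -> v[(i+1)%3]), with -1 meaning boundary.
struct Tri {
    std::array<int, 3> v;
    std::array<int, 3> nbr;
};

// Collect the ring of triangles around vertex tris[e].v[corner] by
// repeatedly crossing the side that leaves the center vertex.
std::vector<int> ring_around(const std::vector<Tri>& tris, int e, int corner)
{
    std::vector<int> ring;
    int cur = e, c = corner;
    do {
        ring.push_back(cur);
        const int next = tris[cur].nbr[c];
        if (next < 0)
            break;  // hit the boundary; a full version would also walk the other way
        const int center = tris[cur].v[c];
        int nc = 0;
        while (tris[next].v[nc] != center)
            ++nc;   // which corner of the neighbor holds the center vertex?
        cur = next;
        c = nc;
    } while (cur != e);
    return ring;
}
```

Doing this for all three vertices of e, and merging the results into the figure's ordering, yields the N+6 patch vertices.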
From: Norbert Stoop <norbert@st...>  20081119 12:58:32

Okay, so dxyz seems to be appropriate for storing the metric -- that
problem is solved then.

Now, one last thing: All DOFs are at the vertices (and we have only
one DOF per vertex), i.e. the global vectors and matrices have
dimension M or MxM if M is the number of mesh vertices. However, each
DOF is not only shared with all the triangles incident to that vertex,
but also with all elements which have this vertex in their patch (i.e.
in their 1-neighborhood). E.g., for the element e shown below, the
element matrix contribution Ke(i,j) is a (N+6)x(N+6) matrix, where
only the local indices 0, 1 and N directly belong to element e, while
the other DOFs belong to the elements around e. I don't think that
dof_map can provide such a map out of the box, or am I wrong?

I was therefore thinking of the following: Create a custom
subdiv_dof_map which collects the dof indices of all vertices in the
patch of e and then (in the correct order) combines them into one
dof_indices vector. This one can then be used as before. I guess it
would be minimally invasive and only need small changes in the
application code. I just hope there's no hidden catch...

Norbert

>>   N+4---N+1---N+2
>>   / \   / \   / \
>>  /   \ /   \ /   \
>> N+5---N-----1---N+3
>>  \   / \ e / \   /
>>   \ /   \ /   \ /
>>   N-1----0-----2
>>     \   / \   / \
>>      \ /   \ /   \
>>       5-----4-----3
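The proposed subdiv_dof_map can be sketched as follows. With one scalar DOF per vertex, the combined dof_indices vector is simply one dof id per patch vertex in the figure's ordering, and because Ke is dense over the patch, the sparsity graph must couple every pair of patch DOFs. All names here are hypothetical, not existing libMesh code:

```cpp
#include <cassert>
#include <set>
#include <utility>
#include <vector>

// Hypothetical sketch of the proposed subdiv_dof_map: with one scalar
// DOF per vertex, the combined dof_indices vector is one dof id per
// patch vertex, in the patch ordering (local indices 0 .. N+5).
std::vector<unsigned int>
patch_dof_indices(const std::vector<unsigned int>& patch_vertices,
                  const std::vector<unsigned int>& vertex_to_dof)
{
    std::vector<unsigned int> dofs;
    dofs.reserve(patch_vertices.size());
    for (unsigned int v : patch_vertices)
        dofs.push_back(vertex_to_dof[v]);
    return dofs;
}

// Because Ke is dense over the patch, the sparsity graph must couple
// every pair of patch DOFs, not just the element's own three vertices.
using Graph = std::set<std::pair<unsigned int, unsigned int>>;

void add_patch_couplings(const std::vector<unsigned int>& patch_dofs,
                         Graph& graph)
{
    for (unsigned int i : patch_dofs)
        for (unsigned int j : patch_dofs)
            graph.insert({i, j});
}
```

Assembly would then use the returned vector exactly where the element's own dof_indices vector is used today.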
From: Benjamin Kirk <benjamin.kirk@na...>  20081119 14:10:35

(I'm copying this to the devel list; feel free to take users off any
reply to keep its traffic down.)

> Okay, so dxyz seems to be appropriate for storing the metric -- that
> problem is solved then. Now, one last thing: All DOFs are at the
> vertices (and we have only one DOF per vertex), i.e. the global
> vectors and matrices have dimension M or MxM if M is the number of
> mesh vertices. However, each DOF is not only shared with all the
> triangles incident to that vertex, but also with all elements which
> have this vertex in their patch (i.e. in their 1-neighborhood).
> E.g., for the element e shown below, the element matrix contribution
> Ke(i,j) is a (N+6)x(N+6) matrix, where only the local indices 0, 1
> and N directly belong to element e, while the other DOFs belong to
> the elements around e. I don't think that dof_map can provide such a
> map out of the box, or am I wrong?

Correct -- it won't do that by default...

> I was therefore thinking of the following: Create a custom
> subdiv_dof_map which collects the dof indices of all vertices in the
> patch of e and then (in the correct order) combines them into one
> dof_indices vector. This one can then be used as before. I guess it
> would be minimally invasive and only need small changes in the
> application code. I just hope there's no hidden catch...

You're right. There may be bizarre AMR constraint implications I
haven't thought through, but since you said you're presently not
worried about AMR that is a non-issue.

All that should be needed is a different method to compute the graph
of the sparse matrix. (I think even the current dof distribution stuff
will work just fine.)

For some time I've wanted 'specialized' dof maps to handle cases where
you either (i) have special needs (like this) or (ii) know something
special about your application that allows you to exploit an
efficiency. My compressible Navier-Stokes application is an example of
(ii), since there are a lot of variables in a system which are all the
same type.
The System base class holds an AutoPtr<DofMap> object, so I'm thinking
it would be *really* easy to modify the DofMap class definition a
little so that users can derive their own implementation and supply it
to the System for use. Then we just add a System::replace_dof_map()
member or something like that.

Anyone have a different idea?

Ben

>>>   N+4---N+1---N+2
>>>   / \   / \   / \
>>>  /   \ /   \ /   \
>>> N+5---N-----1---N+3
>>>  \   / \ e / \   /
>>>   \ /   \ /   \ /
>>>   N-1----0-----2
>>>     \   / \   / \
>>>      \ /   \ /   \
>>>       5-----4-----3
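Ben's idea amounts to making the dof map polymorphic and letting the System adopt a user-supplied instance. A bare-bones sketch of that ownership pattern, with std::unique_ptr standing in for libMesh's AutoPtr; every class and method name here is a hypothetical stand-in, not the real API:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Minimal stand-in for the real DofMap, illustrating the pattern only.
struct DofMap {
    std::vector<std::vector<unsigned int>> elem_vertices; // per-element vertex ids
    virtual ~DofMap() = default;
    // Default behaviour: an element couples only its own vertices' dofs.
    virtual std::vector<unsigned int> dof_indices(unsigned int e) const {
        return elem_vertices[e];
    }
};

// A user-derived map widens the coupling to the whole 1-ring patch.
struct SubdivDofMap : DofMap {
    std::vector<std::vector<unsigned int>> patch; // per-element ordered patch dofs
    std::vector<unsigned int> dof_indices(unsigned int e) const override {
        return patch[e];
    }
};

// Minimal stand-in for System, holding the dof map by smart pointer.
struct System {
    std::unique_ptr<DofMap> dof_map = std::make_unique<DofMap>();
    // The proposed hook: let the user supply a specialized dof map.
    void replace_dof_map(std::unique_ptr<DofMap> m) { dof_map = std::move(m); }
};
```

The virtual dof_indices() is the only point assembly code touches, so swapping the map changes the coupling without any other change to the application.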