(thought this should get moved to the list...)
with respect to the level thing: yeah, the 'level' concept becomes much
less meaningful. One potential solution to the problem you point out is
having a 'tree collapse' phase after element refinement. This would
essentially look at the element refinement hierarchy and collapse it
vertically based on equivalence. Of course, equivalence rules would
need to be defined, so this would make it hard to support arbitrary
refinement patterns, but... The idea would be as follows (in 2D):
would be replaced with
and similar (admittedly more complicated) in 3D
Although, I don't think this is necessary. Instead, we can rely more on
testing element sides rather than levels. I avoided this originally
because constructing an element side was expensive, but the Side<> proxy
class has decreased that cost. Basically, anywhere the level() is
currently used to determine if two elements share an edge/face we would
test side equality instead. That is,
    rh->level() == lh->level()
would become
    rh->side(rs) == lh->side(ls)
Element comparison is still more expensive than int comparison (duh...)
but this should be do-able. Note this should also support arbitrary
refinement patterns more readily. I should probably grep the code to
see just how much code this would change...
From: Roy Stogner [mailto:roystgnr@...
Sent: Friday, March 10, 2006 3:47 PM
To: Kirk, Benjamin (JSC-EG)
Cc: John Peterson; derek.gaston@...
Subject: RE: poor man's axisymmetric formulation
On Fri, 10 Mar 2006, Kirk, Benjamin (JSC-EG) wrote:
> With respect to the refinement, you are absolutely right... The
> current AMR rules would add unnecessary DOFs through the thickness.
> This, however, is a perfect case for the "RefinementPattern" I was
> playing around with last year. Essentially the user provides an input
> mesh (possibly multiple) which specifies *how* to refine a given
> element type. Thus, for this case the refinement pattern would
> subdivide a hex into 4 sub-hexes with no subdivision in the theta
> direction. Probably time to revive that code...
That would work.
> John may remember the initial motivation for the refinement pattern
> was the RBM thin-film flows which are essentially resolved through the
> depth but need AMR to capture planar features. However, it
> generalizes really nicely and is how I have thought we might support
I'm not sure how that would work.
> Essentially multiple refinement patterns are available; the burden is
> then on the error indicator to select between them.
This is definitely the right way to go about refinement; it can
generalize nicely to edge bisection and p refinement as well.
The problem I ran into when thinking about general anisotropic schemes
is this: there are quite a few places in the library where we compare
the "level" numbers between neighbor elements to detect and constrain a
non-conforming mesh side. How do we do that in a setting where "levels"
may make no sense? An element which has been x refined once and y
refined once may have grandchildren with a higher level number than its
conforming neighbor which has been uniformly refined once - but if the
uniformly refined element is the anisotropically refined element's
neighbor in the x direction, its children end up being the "fine"
elements whose DoF values need to be constrained. Even worse, we can't
just restrict those values to be in the parent element space, because
that would be overconstraining them in the y direction!
The only solutions I can think of are atrocious hacks which would only
work if the neighbor elements come from EDGE/QUAD/HEX cells whose
ancestors come from the same topologically cartesian grid - and what
good is that? Even the NURBS people can do conforming bases on multiple