MeshBase::renumber_nodes_and_elements() will handle the deletion of nodes.
It might be a little confusing because the renumbering is done in place, but
the approach is as follows:
- number all the nodes according to their location in the _nodes vector,
  whether they are used or not.
- loop over all the elements, packing them into contiguous storage.
  For each element, loop over its nodes:
  - renumber the node with the next available index if it hasn't been
    renumbered already;
  - put the node into the appropriate location in a contiguous array.
- trim any unnecessary nodes or elements.
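The steps above can be sketched in a few lines of standalone C++. This is an illustrative stand-in, not the actual MeshBase::renumber_nodes_and_elements() code: nodes are plain doubles, elements are index lists, and the names (renumber_nodes, invalid_id) are made up for the example.

```cpp
#include <cstddef>
#include <vector>

// Sketch of in-place compaction: hand out new contiguous node indices in
// element-traversal order, move each node into its packed slot, then trim
// the unused tail. (Hypothetical names; not the real libMesh routine.)
constexpr std::size_t invalid_id = static_cast<std::size_t>(-1);

void renumber_nodes(std::vector<double>& nodes,
                    std::vector<std::vector<std::size_t>>& elems)
{
  std::vector<std::size_t> new_id(nodes.size(), invalid_id);
  std::vector<double> packed;
  packed.reserve(nodes.size());

  for (auto& elem : elems)
    for (auto& n : elem)
    {
      if (new_id[n] == invalid_id)   // first time this node is seen:
      {
        new_id[n] = packed.size();   // next available index
        packed.push_back(nodes[n]);  // copy into contiguous storage
      }
      n = new_id[n];                 // element now refers to the new id
    }

  nodes.swap(packed);                // unused nodes fall off the end
}
```

Any node never touched by an element simply never gets a new id, so the final swap discards it.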
I've gotta go anniversary shopping right now (God help me if I don't...),
I'll reply in more detail tonight or tomorrow.
[mailto:libmesh-devel-admin@...] On Behalf Of Roy Stogner
Sent: Tuesday, March 08, 2005 3:42 PM
Subject: RE: [Libmesh-devel] libmesh-0.5.0
On Tue, 8 Mar 2005, KIRK, BENJAMIN (JSC-EG) (NASA) wrote:
> Things get a little complicated, however. The original intent was to
> perform coarsening first and then refinement for memory purposes.
> Eventually the coarsening will occur, any required parallel mesh
> redistribution will be performed, and then the refinement. This
> allows parallel communication with the smallest amount of data.
I think you're right, given that priority - the only way to minimize memory
use is to coarsen first, restrict_vector second, refine third, and
prolong_vector fourth.
We'd have to redistribute DoFs twice as often, though - once after the
coarsen step and once after the refinement step. How much extra
communication would that add?
> The other thing that will be a little complicated is the DOF indexing.
> Basically, the parent elements will need to have dof indices allocated
> while the children are still around.
Yup. The best solution I've come up with so far is to change the definition
of Elem::active() - instead of just testing for a NULL pointer to children,
we'd add two or three new possible states for the RefinementFlag. I believe
in that case we'd get away without any more changes to the DofMap, and if
the states were re-enumerated appropriately (e.g. odd states are active, or
states above 32 are active, etc.) it shouldn't be any slower.
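That re-enumeration trick can be sketched standalone. The flag names and values below are illustrative, not libMesh's actual RefinementFlag; the point is just that with "odd states are active" the test is a single bitwise AND, no slower than the old NULL-pointer check.

```cpp
// Hypothetical sketch: encode "active" in the flag values themselves so
// Elem::active() needs no extra state. Here, odd values mean active.
// (Illustrative names/values only; not the real libMesh enumeration.)
enum RefinementFlag
{
  INACTIVE         = 0,  // has active children
  DO_NOTHING       = 1,  // active, leave alone
  COARSEN_INACTIVE = 2,  // will become active once its children coarsen
  REFINE           = 3,  // active, flagged for refinement
  JUST_COARSENED   = 5   // active, children just deleted
};

inline bool is_active(RefinementFlag f)
{
  return f & 1;  // odd states are active: one AND, no pointer chase
}
```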
> The user code would translate to something like this:
> // Flag elements as COARSEN, REFINE, etc...
> // This will call System::restrict() for all our systems. Essentially
> // this employs the DofMap to give new DOF indices to the parent
> // elements that are about to become active, and then restricts the
> // solution vectors from the children onto the parents. Note that for
> // strictly Lagrange elements this could be omitted.
> // Actually create and delete elements
> // This is essentially the old EquationSystems::reinit() member.
Why change the user interface? We can have EquationSystems::reinit() do
everything after the flag_elements step (as well as all the necessary DofMap
stuff that reinit currently does).
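Put together, the control flow being proposed can be mocked up as a toy object that just records what reinit() would do, in order. Everything here is a hypothetical stand-in (the struct, the log, the step names), not the real libMesh API; it only demonstrates the coarsen/restrict/refine/prolong ordering discussed above.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: the user only flags elements, then calls reinit();
// reinit() performs the whole adaptivity sequence internally.
struct EquationSystemsSketch
{
  std::vector<std::string> log;  // records the steps, in order

  void reinit()
  {
    log.push_back("coarsen");    // delete children flagged for coarsening
    log.push_back("restrict");   // project child solutions onto parents
    log.push_back("refine");     // create newly flagged children
    log.push_back("prolong");    // project parent solutions onto children
    log.push_back("dof_reinit"); // redistribute DOF indices for the new mesh
  }
};
```

Memory use is minimized because refinement only allocates new children after coarsening has already freed the old ones.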
> This seems like the best approach to me. Otherwise, I actually might
> prefer your idea of leaving the children around, albeit as inactive.
> Of course, I would want a Mesh::compress() member or something like
> that to get rid of these children after the projection is handled.
I'll start trying to code it your way first - if I run into any big problems
I'll fall back on the alternative of leaving the children around.
> You could make a case, at least for implicit systems, that the memory
> overhead would not be bad in this approach. You just want to make
> sure the projection is completed *and then* the inactive children get
> deleted *before* the matrices get reallocated.
Actually, I don't think it would be too hard to code this in such a way that
both options are available. It might be worth letting users make the memory
vs. CPU tradeoff instead of committing to one or the other.
> In any case I'd rather not have the Systems delete the elements. From
> a philosophical point of view the Systems and Mesh classes should be
> as separate as possible.
Oh - I meant having the System call a Mesh::compress() function to delete
elements, of course. The level of modularization you've maintained so far
is what made adding my C1 elements possible at all, and I certainly don't
want to do anything that might screw up future additions like structured
mesh support.

Speaking of deletions, I'm having trouble understanding part of the current
code: when/where are unused nodes getting deleted?