On Wed, 18 May 2011, John Peterson wrote:
> On Wed, May 18, 2011 at 12:37 PM, Roy Stogner <roystgnr@...> wrote:
>> Currently when we sync a BoundaryMesh for the entire boundary, we give
>> the boundary elements subdomain_ids and processor_ids that correspond
>> (after mapping them to be sequential starting with 0) to the boundary
>> condition ids their corresponding interior elements' sides had...
>> unless we're sync'ing a limited set of requested boundary ids, in
>> which case we make the subdomain_ids and processor_ids match the
>> interior elements.
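The "map them to be sequential starting with 0" step can be sketched in isolation. This is a hypothetical illustration, not the actual BoundaryInfo::sync code: it assigns each distinct boundary id a sequential subdomain id in order of first appearance (the real code may well use sorted order instead).

```cpp
#include <cassert>
#include <map>
#include <vector>

// Illustrative only -- not libMesh's implementation. Given the boundary
// condition id of each boundary element's originating side (possibly
// sparse, e.g. {10, 42, 7}), produce subdomain ids that are sequential
// starting at 0, one per distinct boundary id.
std::vector<int> remap_sequential(const std::vector<int>& bc_ids)
{
  std::map<int, int> seq_id; // bc id -> sequential subdomain id
  std::vector<int> subdomain_ids;
  subdomain_ids.reserve(bc_ids.size());
  for (int bc : bc_ids)
    {
      auto it = seq_id.find(bc);
      if (it == seq_id.end())
        it = seq_id.emplace(bc, static_cast<int>(seq_id.size())).first;
      subdomain_ids.push_back(it->second);
    }
  return subdomain_ids;
}
```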
>> Would anyone mind if we just made the latter behavior the default for
>> *any* BoundaryInfo::sync? The former seems kind of useless, except
>> maybe for visualization-only purposes, and it would be nice to
>> refactor the two implementations, since adding ParallelMesh support is
>> making the sync() a bit more complicated.
> Sounds good to me. I think that code was mostly for viewing boundary
> IDs in GMV on Boundary meshes.
> Since GMV is dead, this feature is probably not of much current use.

That is a nice feature to be able to have, though. We ought to be
able to view subdomain ids with more than just GMV. And Vikram's
IMing me to tell me he still does this kind of viz (but could live ...).
From a code point of view it's no big deal either way - get the
interior parent of your element and you can get the subdomain id or
boundary ids from that. But for visualization, copying boundary ids
to subdomain ids would be much more convenient to use... *except* that
there's a catch: libMesh supports sets of boundary ids on each side,
but only supports a single boundary id on each element. I don't know
if anyone's actually using any code that has overlapping boundary ids,
but whenever I run across library code that interferes with such usage
I've been trying to fix it. I cringe at adding *new* code that makes a
"one boundary id per boundary side" assumption.
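The interior-parent lookup, and the mismatch described above, can be sketched with toy classes. These are stand-in structs, not libMesh's real Elem/BoundaryInfo API: the point is that a boundary element only needs a pointer back to its interior parent to recover everything, and that a side can carry a *set* of boundary ids while an element has exactly one subdomain id, so copying the former into the latter is inherently lossy when ids overlap.

```cpp
#include <cassert>
#include <set>

// Toy stand-ins for illustration -- not libMesh's actual classes.
struct InteriorElem
{
  int subdomain_id;                // exactly one per element
  std::set<int> side_boundary_ids; // possibly several on the extracted side
};

struct BoundaryElem
{
  const InteriorElem* interior_parent; // back-pointer set during sync()
};

// With the back-pointer, no id-copying is needed at all:
int subdomain_of(const BoundaryElem& b)
{
  return b.interior_parent->subdomain_id;
}

const std::set<int>& boundary_ids_of(const BoundaryElem& b)
{
  return b.interior_parent->side_boundary_ids;
}
```

If side_boundary_ids has more than one entry, there is no faithful way to stuff it into a single subdomain_id field, which is exactly the catch with the visualization-friendly copying scheme.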
We could always add support for overlapping subdomain ids... but Ben
just got finished trimming down mesh memory usage and consolidating
heap allocation; it would feel cruel to add another several bytes per
element for a brand-new indirection layer that most people would never
use.