Right - we don't do m->n restart so that works for us.

Also - recall that I don't like this reordering business because the un-reordering destroys the original ordering... so that's another reason not to reorder...

Unfortunately though... it turns out that something about the reordering might be necessary for restart with adaptivity.  I'm still investigating that...

If you could find some time to look into this issue I would greatly appreciate it... the parallel code in there would take me a while to grok...

Derek



On Fri, Jan 3, 2014 at 11:24 AM, Kirk, Benjamin (JSC-EG311) <benjamin.kirk@nasa.gov> wrote:
Short term yes, but this will break m->n restart, I think.

I recall something similar in the mesh a few years back, when two disjoint nodes created an identical index. The fix there was to handle identical keys for separate nodes. If I can extend that to this case, all will be good.
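For illustration, the tie-break can be as simple as falling back to the node id when two keys compare equal -- a minimal sketch, with hypothetical types (not the actual libMesh internals):

    #include <cstdint>

    // Hypothetical sketch of the tie-break: sort by (hilbert_key,
    // node_id), so coincident nodes with identical keys still land
    // in distinct, reproducible slots of the global ordering.
    struct KeyedNode
    {
      std::uint64_t hilbert_key; // computed from (x,y,z) alone
      std::uint64_t node_id;     // unique; used only to break ties
    };

    inline bool operator< (const KeyedNode & a, const KeyedNode & b)
    {
      if (a.hilbert_key != b.hilbert_key)
        return a.hilbert_key < b.hilbert_key;
      return a.node_id < b.node_id; // coincident: fall back to id
    }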



On Jan 3, 2014, at 11:54 AM, "Derek Gaston" <friedmud@gmail.com> wrote:

If I just comment out the renumbering in EquationSystems::write(), then everything looks fine.  Would anyone object to my adding a flag to EquationSystems::write() that lets you turn off the renumbering?
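A sketch of what the call might look like with such a flag (the third argument here is hypothetical -- it doesn't exist yet):

    #include "libmesh/equation_systems.h"

    // Hypothetical opt-out: a 'false' here would skip the Hilbert
    // renumbering so the file keeps the in-memory dof ordering.
    void write_without_renumbering (libMesh::EquationSystems & es)
    {
      es.write ("eq_sys.xda",
                libMesh::EquationSystems::WRITE_DATA,
                /*renumber=*/false);
    }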

Derek


On Fri, Jan 3, 2014 at 10:39 AM, Derek Gaston <friedmud@gmail.com> wrote:
Ok - MeshCommunication::assign_global_indices() is beyond my comprehension... BUT there are a couple of "#if 0" error checks in there that, when re-enabled, do trigger on my test case:

Error: nodes with duplicate Hilbert keys!
node 7, (x,y,z)=(     0.5,      0.5,        0) has HilbertIndices 3758096384_0_0
node 9, (x,y,z)=(     0.5,      0.5,        0) has HilbertIndices 3758096384_0_0

So I do suspect that to be the issue...

So what to do?  We basically shouldn't be relying on Hilbert indices for this part of libMesh.
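To see why the collision is unavoidable: whatever the real Hilbert computation does, its only input is the node's coordinates.  A toy stand-in (not the libMesh implementation) makes the point:

    #include <cassert>
    #include <cstdint>

    // Toy stand-in for a spatial (Hilbert-style) key: the essential
    // property is that the key depends ONLY on the coordinates, so
    // two coincident nodes can never receive distinct keys.
    std::uint64_t toy_spatial_key (double x, double y, double z)
    {
      const std::uint64_t xi = static_cast<std::uint64_t>(x * 1048576.);
      const std::uint64_t yi = static_cast<std::uint64_t>(y * 1048576.);
      const std::uint64_t zi = static_cast<std::uint64_t>(z * 1048576.);
      return (xi << 42) ^ (yi << 21) ^ zi;
    }

    int main ()
    {
      // Nodes 7 and 9 from the output above: same position, same key,
      // so any global ordering built from the key alone is ambiguous.
      assert (toy_spatial_key (0.5, 0.5, 0.) ==
              toy_spatial_key (0.5, 0.5, 0.));
      return 0;
    }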

Is it time for a new IO type for EquationSystems?  Should I do an EquationSystemsCheckpointIO class like I did for the mesh?  If you don't care about m->n restart then it should be unbelievably simple (each processor just writes its local data), and it should scale better in parallel than the current approach (which ultimately serializes through processor 0).
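To give a feel for how simple the non-m->n version could be, here's a hedged sketch of the write side (hypothetical class/function, loosely following the mesh CheckpointIO pattern) -- each rank dumps its locally owned dof values to its own file, with no reordering and no gather to rank 0:

    #include "libmesh/equation_systems.h"
    #include "libmesh/numeric_vector.h"
    #include "libmesh/system.h"
    #include <fstream>
    #include <sstream>
    #include <string>

    // Hypothetical core of an EquationSystemsCheckpointIO::write():
    // each rank writes only the dof values it owns, to its own file.
    // No Hilbert reordering, no serialization through processor 0.
    void checkpoint_write (const libMesh::EquationSystems & es,
                           const std::string & basename)
    {
      std::ostringstream fname;
      fname << basename << "." << es.processor_id();
      std::ofstream out (fname.str().c_str());

      for (unsigned int s = 0; s != es.n_systems(); ++s)
        {
          const libMesh::System & sys = es.get_system (s);
          const libMesh::NumericVector<libMesh::Number> & soln =
            *sys.solution;

          // The locally owned contiguous dof range on this processor.
          for (libMesh::numeric_index_type i = soln.first_local_index();
               i != soln.last_local_index(); ++i)
            out << i << " " << soln (i) << "\n";
        }
    }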

Thoughts?

Derek



On Fri, Jan 3, 2014 at 8:17 AM, Roy Stogner <roystgnr@ices.utexas.edu> wrote:

On Thu, 2 Jan 2014, Derek Gaston wrote:

One of my users brought an issue to my attention: libMesh is not properly handling multiple nodes occupying the same position when writing out xda/xdr files for the solution vector.

I am attaching a mesh and main.C that show the issue.  If you look at the resulting eq_sys.xda file you will see that it has two zeros at the end of the solution vector.  Those two zeros should be "4.0".

The mesh has 2 pairs of nodes that are "duplicated".  They are at (0, 0.5, 0) and (0.5, 0.5, 0).  What I mean by duplicated is that there are two nodes in the same physical position (so there is a "slit" in the mesh).  Think of it like a "crack" in the mesh...

Solving on a mesh like this works just fine... but something about the XDA/R output routines doesn't like it (I suspect it's the Hilbert space reorder... but that's just a guess).  Note that the Exodus output looks perfectly fine - as does the print of the solution vector.  It really is an issue just with XDA/R.

I would really appreciate help on this one!

IIRC there was a problem with find_neighbors() that was causing meshes
with slits to fail at AMR, and so I've never really investigated the
compatibility of the rest of the library with those cases.  I'd
definitely also suspect the Hilbert reordering as the culprit here.
---
Roy

