From: Benjamin K. <ben...@na...> - 2007-11-05 16:56:10
>>> Okay, it's fixed. In theory, ParallelMesh should now work in serial
>>> for all codes and in parallel (i.e. after deleting non-semilocal
>>> elements) for most non-adaptive codes. In practice, I've changed
>>> enough DofMap code that I may have even broken SerialMesh. Pay close
>>> attention to make sure nothing's gone wonky after
>>
>> What is required to properly do this? I've simply replaced Mesh with
>> ParallelMesh in ex4 (my favorite parallel whipping boy) and am getting a
>> segfault when the equation systems initialize. I assume I'm missing a key
>> renumber_nodes... / find_neighbors combination.
>
> Hmm... ex4 seems to be running fine for me, but I can't guarantee that
> it'll do the same for you; ex6 was triggering assert() failures for me
> in MeshCommunication::allgather(). If you run with -dbg=gdb as an
> mpirun argument, can you get a stack trace on the segfault?

--enable-parmesh fixed ex4 for me, sort of... The issue is that running
in parallel does not reduce the size of the elements/nodes containers
after equation_systems.init(). I added a mesh.print_info() after
equation_systems.init() and ran

$ mpiexec -n 2 ./ex4-devel -d 3 -n 10

elfboy(20)$ mpiexec -n 2 ./ex4-dbg -d 3 -n 10
Running ./ex4-dbg -d 3 -n 10

 Mesh Information:
  mesh_dimension()=3
  spatial_dimension()=3
  n_nodes()=9261
  n_local_nodes()=4889
  n_elem()=1000
  n_local_elem()=500
  n_active_elem()=1000
  n_subdomains()=1
  n_processors()=2
  processor_id()=0

 EquationSystems
  n_systems()=1
   System "Poisson"
    Type "LinearImplicit"
    Variables="u"
    Finite Element Types="LAGRANGE", "JACOBI_20_00"
    Infinite Element Mapping="CARTESIAN"
    Approximation Orders="SECOND", "THIRD"
    n_dofs()=9261
    n_local_dofs()=4889
    n_constrained_dofs()=0
    n_vectors()=1

 Mesh Information:
  mesh_dimension()=3
  spatial_dimension()=3
  n_nodes()=9261
  n_local_nodes()=4889
  n_elem()=1000
  n_local_elem()=500
  n_active_elem()=624
  n_subdomains()=1
  n_processors()=2
  processor_id()=0

Or is this the intended behavior?
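
For reference, a quick sketch of where the extra call sits in my copy of
ex4 (the stock setup and variable names are the usual ex4 ones; only the
last print_info() is new):

  // ... stock ex4 setup: build the cube mesh, add the "Poisson" system ...

  mesh.print_info();               // first "Mesh Information" block above

  equation_systems.init();
  equation_systems.print_info();   // the "EquationSystems" block

  // the call I added for this test; it prints the second "Mesh Information"
  // block, where n_active_elem() drops but n_elem()/n_nodes() do not
  mesh.print_info();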