From: David K. <dav...@ak...> - 2016-10-07 01:29:56
I'm using GhostingFunctor for a contact solve, in which I consider a 1/4 domain with partial Dirichlet boundary conditions that impose a symmetry condition (i.e. displacement normal to the symmetry boundary is clamped to zero, and tangential displacement is unconstrained). This means that I have Dirichlet constraints that affect the dofs on the contact surface.

What I find is that the solve works fine in serial, but in parallel the nonlinear convergence fails, presumably because of an incorrect Jacobian. I have actually run into this exact issue before (when I was augmenting the sparsity pattern "manually", prior to GhostingFunctor) and I found that the issue was that the dof constraints on the contact surface were not being communicated in parallel, which caused the incorrect Jacobian in parallel. I fixed it by adding artificial Edge2 elements into the mesh (like in systems_of_equations_ex8) to ensure that the dof constraints are communicated properly in parallel.

So, my question is, is there a way to achieve the necessary dof constraint communication with the new GhostingFunctor API? I had hoped that using "add_algebraic_ghosting_functor" would do the job, but it apparently doesn't.

Thanks,
David
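A minimal sketch of how a ghosting functor of this kind is attached (the ContactGhosting class and its find_contact_partner() search are illustrative stand-ins, not the actual contact code; add_coupling_functor() vs. add_algebraic_ghosting_functor() is the distinction discussed in the replies below):

    // Minimal sketch of a custom GhostingFunctor (names are illustrative).
    #include "libmesh/ghosting_functor.h"
    #include "libmesh/mesh_base.h"
    #include "libmesh/elem.h"
    #include "libmesh/dof_map.h"

    #include <utility>

    using namespace libMesh;

    class ContactGhosting : public GhostingFunctor
    {
    public:
      // Stand-in for the application's own search for the element on the
      // other side of the contact surface; a real implementation goes here.
      const Elem * find_contact_partner (const Elem *) const { return nullptr; }

      // For each element in the range, ask that its contact partner be
      // ghosted to (and, via add_coupling_functor, coupled on) processor p.
      // A nullptr CouplingMatrix requests coupling of all variable pairs.
      virtual void operator() (const MeshBase::const_element_iterator & range_begin,
                               const MeshBase::const_element_iterator & range_end,
                               processor_id_type p,
                               map_type & coupled_elements) override
      {
        for (MeshBase::const_element_iterator it = range_begin; it != range_end; ++it)
          {
            const Elem * partner = find_contact_partner(*it);
            if (partner && partner->processor_id() != p)
              coupled_elements.insert(std::make_pair(partner, nullptr));
          }
      }
    };

    // Attaching it to the DofMap (before the EquationSystems are initialized):
    //   ContactGhosting ghosting;
    //   system.get_dof_map().add_algebraic_ghosting_functor(ghosting); // ghost dof values only
    //   system.get_dof_map().add_coupling_functor(ghosting);           // also augment the sparsity pattern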
From: Roy S. <roy...@ic...> - 2016-10-07 02:34:42
On Thu, 6 Oct 2016, David Knezevic wrote:

> I'm using GhostingFunctor for a contact solve, in which I consider a 1/4 domain with partial Dirichlet boundary conditions that impose a symmetry condition (i.e. displacement normal to the symmetry boundary is clamped to zero, and tangential displacement is unconstrained). This means that I have Dirichlet constraints that affect the dofs on the contact surface.
>
> What I find is that the solve works fine in serial, but in parallel the nonlinear convergence fails, presumably because of an incorrect Jacobian. I have actually run into this exact issue before (when I was augmenting the sparsity pattern "manually", prior to GhostingFunctor) and I found that the issue was that the dof constraints on the contact surface were not being communicated in parallel, which caused the incorrect Jacobian in parallel. I fixed it by adding artificial Edge2 elements into the mesh (like in systems_of_equations_ex8) to ensure that the dof constraints are communicated properly in parallel.
>
> So, my question is, is there a way to achieve the necessary dof constraint communication with the new GhostingFunctor API? I had hoped that using "add_algebraic_ghosting_functor" would do the job, but it apparently doesn't.

Hmm... If you only needed algebraic ghosting, then add_algebraic_ghosting_functor should have been sufficient - processors won't know about all their ghosted dofs' constraints, but the ghosted dofs will get properly added to the send_list, and enforce_constraints_exactly() will ensure that the dofs, once constrained on the processors which own them, will have their values updated on all the processors which ghost them.

But you need to augment the sparsity pattern too, so you should be using add_coupling_functor instead... and in *that* case, we're broken, aren't we? You build a Jacobian connecting the remotely coupled dofs, and you try to hit it with constrain_element_foo() or heterogeneously_constrain_element_foo(), but the processor isn't aware of all the ghosted dofs' constraints, so the element constraint matrix is wrong and so is your final answer.

I think the proper fix is to call the coupling functors in scatter_constraints(), then query the processors who own the elements which are to be coupled for any constraints on those elements' dofs. I can take a crack at that tomorrow or Monday. Any chance you could set up a unit test (or even just stuff an assertion into the misc ex9 example?) that checks for the problem?
---
Roy
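For context on the constrain_element_* calls mentioned above, a generic assembly loop applies them roughly as sketched below (the add_element_contribution() wrapper is illustrative, not libMesh or application code). If the constraint rows for dofs ghosted via a coupling functor are missing on this processor, the element constraint matrix built inside these calls is wrong, which is exactly the failure being described.

    // Generic sketch of where the constrain_element_* calls enter an
    // assembly loop (the add_element_contribution() wrapper is illustrative).
    #include "libmesh/dof_map.h"
    #include "libmesh/dense_matrix.h"
    #include "libmesh/dense_vector.h"
    #include "libmesh/sparse_matrix.h"
    #include "libmesh/numeric_vector.h"

    #include <vector>

    using namespace libMesh;

    void add_element_contribution (const DofMap & dof_map,
                                   std::vector<dof_id_type> & dof_indices,
                                   DenseMatrix<Number> & Ke,
                                   DenseVector<Number> & Fe,
                                   SparseMatrix<Number> & jacobian,
                                   NumericVector<Number> & residual)
    {
      // Condense the element matrix/vector through the dof constraints
      // (Dirichlet, hanging nodes, ...).  For heterogeneous constraints one
      // would use heterogeneously_constrain_element_matrix_and_vector()
      // instead.  Either way, the constraint rows of every dof in
      // dof_indices must be known on this processor -- including dofs that
      // are only present here because a coupling functor ghosted them.
      dof_map.constrain_element_matrix_and_vector (Ke, Fe, dof_indices);

      jacobian.add_matrix (Ke, dof_indices);
      residual.add_vector (Fe, dof_indices);
    }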
From: David K. <dav...@ak...> - 2016-10-07 02:44:45
On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roy...@ic...> wrote:

> Hmm... If you only needed algebraic ghosting, then add_algebraic_ghosting_functor should have been sufficient - processors won't know about all their ghosted dofs' constraints, but the ghosted dofs will get properly added to the send_list, and enforce_constraints_exactly() will ensure that the dofs, once constrained on the processors which own them, will have their values updated on all the processors which ghost them.
>
> But you need to augment the sparsity pattern too, so you should be using add_coupling_functor instead... and in *that* case, we're broken, aren't we? You build a Jacobian connecting the remotely coupled dofs, and you try to hit it with constrain_element_foo() or heterogeneously_constrain_element_foo(), but the processor isn't aware of all the ghosted dofs' constraints, so the element constraint matrix is wrong and so is your final answer.

Yep, that's exactly my understanding of the issue.

> I think the proper fix is to call the coupling functors in scatter_constraints(), then query the processors who own the elements which are to be coupled for any constraints on those elements' dofs. I can take a crack at that tomorrow or Monday. Any chance you could set up a unit test (or even just stuff an assertion into the misc ex9 example?) that checks for the problem?

That'd be great, thanks! I'll be happy to try it out once it's ready. I'll also look into making a test case that can be added to libMesh.

Thanks,
David
From: David K. <dav...@ak...> - 2016-10-09 20:17:39
Attachments:
miscellaneous_ex9_with_constraint_check.C
On Thu, Oct 6, 2016 at 10:44 PM, David Knezevic <dav...@ak...> wrote:

> That'd be great, thanks! I'll be happy to try it out once it's ready. I'll also look into making a test case that can be added to libMesh.

I've attached a modified version of misc_ex9 that attaches constraints on one side of the "crack" and checks if the dof constraints are present during assembly. This passes in serial but fails in parallel because constraints are not communicated on the GhostingFunctor-coupled dofs.

The extra constraints I added mean that the problem doesn't make physical sense anymore unfortunately, but at least it tests for the constraint issue.

Roy: I'm not sure if this is appropriate for a unit test, but it should at least be helpful for when you want to check your implementation.

David
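A rough sketch of the kind of check described here, asserting during assembly that the constraint on each expected dof is actually known locally. This is an illustration of the idea rather than the attached file, and the bookkeeping of which "crack" dofs are expected to be constrained is assumed:

    // Sketch: during assembly, assert that constraints on the element's
    // "crack" dofs are known locally.  Which dofs are expected to be
    // constrained is problem-specific and assumed to have been collected
    // into expected_constrained beforehand.
    #include "libmesh/dof_map.h"

    #include <set>
    #include <vector>

    using namespace libMesh;

    void check_crack_constraints (const DofMap & dof_map,
                                  const std::vector<dof_id_type> & dof_indices,
                                  const std::set<dof_id_type> & expected_constrained)
    {
      for (const dof_id_type dof : dof_indices)
        if (expected_constrained.count(dof))
          // Passes in serial; without constraint scattering for coupled
          // ghost dofs it can fail in parallel.
          libmesh_assert (dof_map.is_constrained_dof(dof));
    }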
From: Roy S. <roy...@ic...> - 2016-10-11 20:11:04
On Sun, 9 Oct 2016, David Knezevic wrote:

> On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roy...@ic...> wrote:
>
> I think the proper fix is to call the coupling functors in scatter_constraints(), then query the processors who own the elements which are to be coupled for any constraints on those elements' dofs.

Just about done.

Except now I'm paranoid, because I ran across my years-old comment,

  // Next we need to push constraints to processors which don't own
  // the constrained dof, don't own the constraining dof, but own an
  // element supporting the constraining dof.
  ...
  // Getting distributed adaptive sparsity patterns right is hard.

And I don't recall *why* we had that need! When we're doing an enforce_constraints_exactly() it's only important to have all our own dofs' constraints and their dependencies. When we're constraining an element stiffness matrix we need to build a constraint matrix C, but that constraint matrix only requires us to know about constrained dofs on the local element, not about constraining dofs on the local element.

So why the heck did I think processors needed to know about everywhere a locally-supported dof constrain*ing*? If I can't figure out what the reason was then I can't decide whether or not it will be applicable to coupled-ghosted elements too.
---
Roy
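To make the point about C concrete, here is a conceptual sketch of how an element constraint matrix can be built (this is an illustration of the idea only, not libMesh's DofMap::build_constraint_matrix; constraint chains and heterogeneous constraints are ignored). The element matrix is then condensed roughly as Ke <- C^T Ke C, which only needs the constraint rows of dofs appearing on the element, not knowledge of where those dofs act as constraining dofs elsewhere.

    // Conceptual sketch only (not DofMap::build_constraint_matrix): build an
    // element constraint matrix C with one row per element dof.  An
    // unconstrained dof gets an identity row; a constrained dof gets its
    // constraint coefficients in the columns of its constraining dofs.
    #include "libmesh/dense_matrix.h"
    #include "libmesh/dof_map.h"   // dof_id_type, DofConstraintRow

    #include <algorithm>
    #include <cstddef>
    #include <map>
    #include <vector>

    using namespace libMesh;

    void build_constraint_matrix_sketch
      (const std::map<dof_id_type, DofConstraintRow> & constraints, // rows known locally
       const std::vector<dof_id_type> & elem_dofs,
       std::vector<dof_id_type> & expanded_dofs,   // out: elem dofs + their constraining dofs
       DenseMatrix<Number> & C)                    // out: |elem_dofs| x |expanded_dofs|
    {
      expanded_dofs = elem_dofs;

      // Constraining dofs not already on the element get appended.
      for (const dof_id_type dof : elem_dofs)
        {
          const auto it = constraints.find(dof);
          if (it != constraints.end())
            for (const auto & coeff : it->second)
              if (std::find(expanded_dofs.begin(), expanded_dofs.end(), coeff.first) == expanded_dofs.end())
                expanded_dofs.push_back(coeff.first);
        }

      C.resize(elem_dofs.size(), expanded_dofs.size());

      for (std::size_t i = 0; i != elem_dofs.size(); ++i)
        {
          const auto it = constraints.find(elem_dofs[i]);
          if (it == constraints.end())
            C(i, i) = 1;  // unconstrained element dof: identity row
          else
            for (const auto & coeff : it->second)
              {
                const std::size_t j =
                  std::distance(expanded_dofs.begin(),
                                std::find(expanded_dofs.begin(), expanded_dofs.end(), coeff.first));
                C(i, j) = coeff.second;
              }
        }
    }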
From: David K. <dav...@ak...> - 2016-10-11 20:55:56
On Tue, Oct 11, 2016 at 4:10 PM, Roy Stogner <roy...@ic...> wrote:

> So why the heck did I think processors needed to know about everywhere a locally-supported dof constrain*ing*? If I can't figure out what the reason was then I can't decide whether or not it will be applicable to coupled-ghosted elements too.

hmm, I agree with you, I can't see why the constraining dofs are required... maybe we can proceed on the assumption that they aren't required, and do some tests to see if we hit a case where that assumption is wrong?

David
From: Roy S. <roy...@ic...> - 2016-10-12 21:04:17
On Sun, 9 Oct 2016, David Knezevic wrote:

> I've attached a modified version of misc_ex9 that attaches constraints on one side of the "crack" and checks if the dof constraints are present during assembly. This passes in serial but fails in parallel because constraints are not communicated on the GhostingFunctor-coupled-dofs.

I had to make a couple fixes to the test: switching mesh_1 and mesh_2 to SerialMesh, and using

  dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_node_index).dof_number(0,0,0);

to handle the case where node id isn't node dof id.

> The extra constraints I added mean that the problem doesn't make physical sense anymore unfortunately, but at least it tests for the constraint issue. Roy: I'm not sure if this is appropriate for a unit test, but it should at least be helpful for when you want to check your implementation.

It was, thanks! Here's hoping #1120 fixes the real problem too.

The modified ex9 is too big for a unit test, and too contrived for an example, and I can't think of any easy way to fix that while maintaining the same level of test coverage. But if you can come up with anything that doesn't have both those problems I'd really love to get this case into continuous integration.

If you can't come up with anything either... I suppose I could combine an extra-ghost-layers test case with a Dirichlet boundary and test a wide stencil? That should hit the same code paths. Plus, it's about time libMesh expanded into the cutting edge world of finite difference methods.
---
Roy
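As an aside on the node-id-versus-dof-id fix above: dof_number() asks the Node for its dof index, which in general differs from its id() once the mesh is partitioned and renumbered in parallel. A small illustration, with the system/variable/component numbers (0, 0, 0) assumed as in the snippet:

    // Illustration only: a node's id() and its dof index are distinct
    // numberings, and in parallel they generally differ.  The (0, 0, 0)
    // arguments are the system number, variable number, and component.
    #include "libmesh/elem.h"
    #include "libmesh/node.h"

    using namespace libMesh;

    dof_id_type crack_node_dof (const Elem & neighbor, unsigned int neighbor_node_index)
    {
      const Node & node = neighbor.node_ref(neighbor_node_index);
      return node.dof_number(0, 0, 0);   // the dof id, not node.id()
    }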
From: David K. <dav...@ak...> - 2016-10-13 01:07:18
On Wed, Oct 12, 2016 at 5:04 PM, Roy Stogner <roy...@ic...> wrote:

> It was, thanks! Here's hoping #1120 fixes the real problem too.

I just tried my real problem with your PR and it's still not working, unfortunately. I'll have to look into that in more detail. I'll get back to you when I have more info.

David
From: David K. <dav...@ak...> - 2016-10-13 16:12:42
Attachments:
miscellaneous_ex9_with_constraint_check.C
On Wed, Oct 12, 2016 at 9:07 PM, David Knezevic <dav...@ak...> wrote:

> I just tried my real problem with your PR and it's still not working, unfortunately. I'll have to look into that in more detail. I'll get back to you when I have more info.

Roy, I've attached an updated version of the misc_ex9 test. The test now has a Dirichlet boundary on one side of the domain (boundary ids 1 and 11), and it prints out the dof IDs on the "crack" that have constraints on them.

With this I get the following output:

1 MPI process:

./example-opt

constrained upper dofs: 1025 1026
constrained lower dofs: 0 1

constrained upper dofs: 1026 1029
constrained lower dofs: 1 8

constrained upper dofs: 1029 1031
constrained lower dofs: 8 12

constrained upper dofs: 1031 1033
constrained lower dofs: 12 16

2 MPI processes:

mpirun -np 2 ./example-opt --keep-cout

constrained upper dofs: 500 501 502 503
constrained lower dofs: 525 526

constrained upper dofs: 501 504 505 502
constrained lower dofs: 526 533

constrained upper dofs: 504 506 507 505
constrained lower dofs: 533 537

constrained upper dofs: 506 508 509 507
constrained lower dofs: 537 541

constrained upper dofs: 503 502 510 511
constrained lower dofs:

constrained upper dofs: 502 505 512 510
constrained lower dofs:

constrained upper dofs: 505 507 513 512
constrained lower dofs:

constrained upper dofs: 507 509 514 513
constrained lower dofs:

constrained upper dofs: 511 510 515 516
constrained lower dofs:

constrained upper dofs: 510 512 517 515
constrained lower dofs:

constrained upper dofs: 512 513 518 517
constrained lower dofs:

constrained upper dofs: 513 514 519 518
constrained lower dofs:

constrained upper dofs: 516 515 520 521
constrained lower dofs:

constrained upper dofs: 515 517 522 520
constrained lower dofs:

constrained upper dofs: 517 518 523 522
constrained lower dofs:

constrained upper dofs: 518 519 524 523
constrained lower dofs:

The 1 process output makes sense to me: there should be only five nodes on top and bottom that have a constraint. I don't understand the 2 process output; there seem to be many extra constraints. Do you think this indicates a bug in the constraint scattering?

David
From: David K. <dav...@ak...> - 2016-10-13 16:19:56
On Thu, Oct 13, 2016 at 12:12 PM, David Knezevic <dav...@ak...> wrote:

> Do you think this indicates a bug in the constraint scattering?

P.S. I changed the code slightly to print out the node info for each constrained node on the "crack". The results are below. You can see that in the parallel case, we have extra constraints that should not be there.
David

-----------------------------------------------------------------------

1 MPI process:

lower constrained node info: Node id()=0, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/0)
lower constrained node info: Node id()=1, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1)
upper constrained node info: Node id()=1025, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/1025)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1026)

lower constrained node info: Node id()=1, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1)
lower constrained node info: Node id()=8, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/8)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1026)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/1029)

lower constrained node info: Node id()=8, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/8)
lower constrained node info: Node id()=12, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/12)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/1029)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/1031)

lower constrained node info: Node id()=12, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/12)
lower constrained node info: Node id()=16, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/16)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/1031)
upper constrained node info: Node id()=1033, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/1033)

2 MPI processes using --keep-cout:

lower constrained node info: Node id()=0, processor_id()=1, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/525)
lower constrained node info: Node id()=1, processor_id()=1, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/526)
upper constrained node info: Node id()=1025, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/500)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/501)
upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)
upper constrained node info: Node id()=1028, processor_id()=0, Point=(x,y,z)=( 0, 1, 20) DoFs=(0/0/503)

lower constrained node info: Node id()=1, processor_id()=1, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/526)
lower constrained node info: Node id()=8, processor_id()=1, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/533)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/501)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/504)
upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)
upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)

lower constrained node info: Node id()=8, processor_id()=1, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/533)
lower constrained node info: Node id()=12, processor_id()=1, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/537)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/504)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/506)
upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)
upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)

lower constrained node info: Node id()=12, processor_id()=1, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/537)
lower constrained node info: Node id()=16, processor_id()=1, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/541)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/506)
upper constrained node info: Node id()=1033, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/508)
upper constrained node info: Node id()=1034, processor_id()=0, Point=(x,y,z)=( 4, 1, 20) DoFs=(0/0/509)
upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)

upper constrained node info: Node id()=1028, processor_id()=0, Point=(x,y,z)=( 0, 1, 20) DoFs=(0/0/503)
upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)
upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)
upper constrained node info: Node id()=1036, processor_id()=0, Point=(x,y,z)=( 0, 2, 20) DoFs=(0/0/511)

upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)
upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)
upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)
upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)

upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)
upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)
upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)
upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)

upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)
upper constrained node info: Node id()=1034, processor_id()=0, Point=(x,y,z)=( 4, 1, 20) DoFs=(0/0/509)
upper constrained node info: Node id()=1039, processor_id()=0, Point=(x,y,z)=( 4, 2, 20) DoFs=(0/0/514)
upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)

upper constrained node info: Node id()=1036, processor_id()=0, Point=(x,y,z)=( 0, 2, 20) DoFs=(0/0/511)
upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)
upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)
upper constrained node info: Node id()=1041, processor_id()=0, Point=(x,y,z)=( 0, 3, 20) DoFs=(0/0/516)

upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)
upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)
upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)
upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)

upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)
upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)
upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)
upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)

upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)
upper constrained node info: Node id()=1039, processor_id()=0, Point=(x,y,z)=( 4, 2, 20) DoFs=(0/0/514)
upper constrained node info: Node id()=1044, processor_id()=0, Point=(x,y,z)=( 4, 3, 20) DoFs=(0/0/519)
upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)

upper constrained node info: Node id()=1041, processor_id()=0, Point=(x,y,z)=( 0, 3, 20) DoFs=(0/0/516)
upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)
upper constrained node info: Node id()=1045, processor_id()=0, Point=(x,y,z)=( 1, 4, 20) DoFs=(0/0/520)
upper constrained node info: Node id()=1046, processor_id()=0, Point=(x,y,z)=( 0, 4, 20) DoFs=(0/0/521)

upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)
upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)
upper constrained node info: Node id()=1047, processor_id()=0, Point=(x,y,z)=( 2, 4, 20) DoFs=(0/0/522)
upper constrained node info: Node id()=1045, processor_id()=0, Point=(x,y,z)=( 1, 4, 20) DoFs=(0/0/520)

upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)
upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)
upper constrained node info: Node id()=1048, processor_id()=0, Point=(x,y,z)=( 3, 4, 20) DoFs=(0/0/523)
upper constrained node info: Node id()=1047, processor_id()=0, Point=(x,y,z)=( 2, 4, 20) DoFs=(0/0/522)

upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)
upper constrained node info: Node id()=1044, processor_id()=0, Point=(x,y,z)=( 4, 3, 20) DoFs=(0/0/519)
upper constrained node info: Node id()=1049, processor_id()=0, Point=(x,y,z)=( 4, 4, 20) DoFs=(0/0/524)
upper constrained node info: Node id()=1048, processor_id()=0, Point=(x,y,z)=( 3, 4, 20) DoFs=(0/0/523)