From: Nestola M. G. C. <mar...@us...> - 2017-01-09 22:32:15
|
Dear all, we are trying to build parallel meshes. What I need is for each processor to hold just its own mesh. We are coupling an FEM code with an FD code, so we also need to guarantee the same partition. We have already implemented a routine to do this, but we have some questions. 1) If our setup of elements already includes elements across process boundaries, do we still need to call gather_neighboring_elements()? I added the following line to our code: MeshCommunication().gather_neighboring_elements(cast_ref<DistributedMesh &>(mesh)); but it seems to take forever to execute in dbg mode (even with very few grid points, i.e. ~100 on 8 procs). 2) The mesh looks ok if I output it even without the above line added, but I trip different asserts in dbg mode depending on the element type I use (I have both implemented). If I use Tri3 I trip: Assertion `oldvar == static_cast<Told>(static_cast<Tnew>(oldvar))' failed. oldvar = 18446744073709551615 static_cast<Told>(static_cast<Tnew>(oldvar)) = 4294967295 If I use Quad4 I trip: Assertion `i < _points.size()' failed. i = 0 _points.size() = 0 The messages are identical on each process. Best Barna & Maria |
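For concreteness, here is a minimal sketch of the construction being described: each rank inserts only its own nodes and elements into a DistributedMesh, then optionally gathers the ghost layer. It assumes a reasonably recent libMesh API (the add_elem()/set_node() signatures have shifted between releases), and the ids, coordinates and connectivity stand in for whatever the coupled FD code provides.

    #include "libmesh/libmesh.h"
    #include "libmesh/distributed_mesh.h"
    #include "libmesh/mesh_communication.h"
    #include "libmesh/elem.h"
    #include "libmesh/point.h"

    #include <array>
    #include <vector>

    using namespace libMesh;

    // Sketch: build a DistributedMesh in which each processor inserts only the
    // nodes and Tri3 elements of its own, externally computed partition.
    // my_points must include every node referenced by my_tris, including
    // interface nodes owned by a neighboring rank.
    void build_local_partition(DistributedMesh & mesh,
                               const std::vector<Point> & my_points,
                               const std::vector<dof_id_type> & my_point_ids,
                               const std::vector<processor_id_type> & my_point_owners,
                               const std::vector<std::array<dof_id_type, 3>> & my_tris)
    {
      const processor_id_type my_rank = mesh.processor_id();

      // Keep the externally assigned, globally consistent ids stable.
      mesh.allow_renumbering(false);

      // Add this rank's nodes with explicit ids and ownership.
      for (std::size_t i = 0; i < my_points.size(); ++i)
        mesh.add_point(my_points[i], my_point_ids[i], my_point_owners[i]);

      // Add this rank's elements, again assigning ownership explicitly.
      for (const auto & tri : my_tris)
        {
          Elem * elem = mesh.add_elem(Elem::build(TRI3));
          elem->processor_id() = my_rank;
          for (unsigned int n = 0; n < 3; ++n)
            elem->set_node(n) = mesh.node_ptr(tri[n]);
        }

      // Only needed if the local partitions do *not* already contain the layer
      // of off-processor elements touching processor boundaries.
      MeshCommunication().gather_neighboring_elements(mesh);

      mesh.prepare_for_use();
    }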
From: John P. <jwp...@gm...> - 2016-12-05 18:06:52
|
Dear all, LibMesh 1.1.0 has been tagged and the tarballs are available here: https://github.com/libMesh/libmesh/releases/tag/v1.1.0 The 1.1.0 release branch has been feature-frozen since October 24, 2016, so it does not include any new features merged to master since that time, only bug fixes. A more detailed summary of changes is available at the link above. Thanks, John |
From: Cody P. <cod...@gm...> - 2016-12-02 15:40:19
|
On Fri, Dec 2, 2016 at 8:35 AM Roy Stogner <roy...@ic...> wrote: We maintain node_to_element maps in MOOSE. We're already paying for the time to construct that data structure. If you have that (or as you suggest, something similar) determining whether the node is evaluable is easy. Perhaps we should entertain moving the optional construction and maintenance of that data structure to libMesh? Cody > On Thu, 1 Dec 2016, Cody Permann wrote: > > > I'm starting to use the new Elem::isEvaluable() method available in > libMesh. It seems like it would be handy to promote this method up to > > DofObject though so it could be used with Node. If I understand > correctly, a node "should" be evaluable if it's attached to any element > that's > > evaluable so this method could be implemented rather easily. Does this > sound reasonable? > > The bad news is that getting a list of all the elements attached to a > Node is surprisingly hard - there's basically no way to do it without > a PointLocator, and no way to do it in O(1) time. > The good news is that we don't need a list of all the elements sharing > the node, just a list of all the DoF indices hosted on the node. My > inclination would be to add an extra overload to DofMap::dof_indices() > to grab those, and then the new DofMap::is_evaluable() implementation > would be the exact same as the old DofMap::is_evaluable(); we could > actually just make them two instantiations of the same templated > function. > --- > Roy > |
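As a rough illustration of what Cody is describing, the pieces already exist in libMesh separately: MeshTools::build_nodes_to_elem_map() gives the node-to-element adjacency, and DofMap::is_evaluable() answers the question per element. The helper below (node_is_evaluable is a made-up name, not a proposed API) just strings the two together under the "evaluable if any attached element is evaluable" semantics discussed in this thread.

    #include "libmesh/mesh_tools.h"
    #include "libmesh/dof_map.h"
    #include "libmesh/elem.h"
    #include "libmesh/node.h"

    #include <vector>

    using namespace libMesh;

    // nodes_to_elem is assumed to have been filled once, up front, via
    //   MeshTools::build_nodes_to_elem_map(mesh, nodes_to_elem);
    bool node_is_evaluable(const Node & node,
                           const DofMap & dof_map,
                           const std::vector<std::vector<const Elem *>> & nodes_to_elem,
                           unsigned int var_num)
    {
      // A node "should" be evaluable if at least one element attached to it is.
      for (const Elem * elem : nodes_to_elem[node.id()])
        if (dof_map.is_evaluable(*elem, var_num))
          return true;

      return false;
    }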
From: Roy S. <roy...@ic...> - 2016-12-02 15:35:17
|
On Thu, 1 Dec 2016, Cody Permann wrote: > I'm starting to use the new Elem::isEvaluable() method available in libMesh. It seems like it would be handy to promote this method up to > DofObject though so it could be used with Node. If I understand correctly, a node "should" be evaluable if it's attached to any element that's > evaluable so this method could be implemented rather easily. Does this sound reasonable? The bad news is that getting a list of all the elements attached to a Node is surprisingly hard - there's basically no way to do it without a PointLocator, and no way to do it in O(1) time. The good news is that we don't need a list of all the elements sharing the node, just a list of all the DoF indices hosted on the node. My inclination would be to add an extra overload to DofMap::dof_indices() to grab those, and then the new DofMap::is_evaluable() implementation would be the exact same as the old DofMap::is_evaluable(); we could actually just make them two instantiations of the same templated function. --- Roy |
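Roy's alternative in sketch form: given the dof_indices() overload for a Node that he proposes above (assumed here; it did not exist at the time of writing), a Node version of is_evaluable() reduces to the same semilocality test the Elem version already uses. is_evaluable_on_node is a hypothetical free-function stand-in, not the eventual libMesh API.

    #include "libmesh/dof_map.h"
    #include "libmesh/node.h"

    #include <vector>

    using namespace libMesh;

    bool is_evaluable_on_node(const DofMap & dof_map,
                              const Node & node,
                              unsigned int var_num)
    {
      // Collect the dof indices hosted on this one node (proposed overload).
      std::vector<dof_id_type> di;
      dof_map.dof_indices(&node, di, var_num);

      // Evaluable == every such index is either local or in the send_list.
      return dof_map.all_semilocal_indices(di);
    }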
From: Derek G. <fri...@gm...> - 2016-12-02 05:05:57
|
Nodes don't know the elements they are attached to though... On Thu, Dec 1, 2016 at 6:48 PM Cody Permann <cod...@gm...> wrote: > I'm starting to use the new Elem::isEvaluable() method available in > libMesh. It seems like it would be handy to promote this method up to > DofObject though so it could be used with Node. If I understand correctly, > a node "should" be evaluable if it's attached to any element that's > evaluable so this method could be implemented rather easily. Does this > sound reasonable? > > Cody |
From: Cody P. <cod...@gm...> - 2016-12-01 23:48:41
|
I'm starting to use the new Elem::isEvaluable() method available in libMesh. It seems like it would be handy to promote this method up to DofObject though so it could be used with Node. If I understand correctly, a node "should" be evaluable if it's attached to any element that's evaluable so this method could be implemented rather easily. Does this sound reasonable? Cody |
From: John P. <jwp...@gm...> - 2016-12-01 15:59:01
|
Sorry for the delay on your email, it was stuck in the mailing list's filters for the last month... On Wed, Nov 9, 2016 at 6:42 AM, Maria Giuseppina Chiara Nestola < mgc...@gm...> wrote: > Hi all, > is there any way in libmesh to concatenate intersecting meshes? > Depends what you mean by "intersecting"... The ReplicatedMesh::stitch_meshes() function can connect two meshes whose nodes more or less perfectly match up on the boundary. -- John |
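For reference, a minimal sketch of the stitching John mentions. The boundary ids here are made up, and only the required arguments are shown; the optional ones (matching tolerance, whether to clear the stitched boundary ids, binary search, etc.) vary a little between releases, so check replicated_mesh.h for the version in use.

    #include "libmesh/replicated_mesh.h"
    #include "libmesh/exodusII_io.h"

    using namespace libMesh;

    // Sketch: join mesh_b onto mesh_a along boundaries whose nodes coincide.
    // Boundary id 2 of mesh_a is assumed to match boundary id 4 of mesh_b.
    void stitch_example(ReplicatedMesh & mesh_a, ReplicatedMesh & mesh_b)
    {
      const boundary_id_type a_boundary = 2;   // hypothetical ids
      const boundary_id_type b_boundary = 4;

      mesh_a.stitch_meshes(mesh_b, a_boundary, b_boundary);

      ExodusII_IO(mesh_a).write("stitched.e");
    }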
From: Tim A. <tra...@bu...> - 2016-11-26 15:58:26
|
Thanks for the reply Roy. I figured this wouldn't be a trivial thing to address. I have an AIAA paper and a thesis to get done, so I think for now I will just try hacking my way around this to get my GRINS code working. If/when you do get to looking at this, I will have a few test cases that might help you. Thanks again! ~Tim On Wed, Nov 23, 2016 at 5:45 PM, Roy Stogner <roy...@ic...> wrote: > [Roy's reply quoted in full; it appears as its own message of 2016-11-23 below.] -- *~Tim Adowski* |
From: Roy S. <roy...@ic...> - 2016-11-23 22:45:37
|
On Mon, 21 Nov 2016, Tim Adowski wrote: > Is there any way to alter EquationSystems::reinit() to not call > refine/coarsen_elements()? These calls prevent me from properly > reinitializing my new QoI class. Or, is there another way to use the > element refinement flags to identify elements that were just > refined/coarsened? > > @pbauman pointed me to a GitHub PR discussion where it seems that > these refine/coarsen calls in ES::reinit() are not really necessary > anymore. This requires some care. Let me think out loud. The current AMR in ES::reinit() is three steps: 1: for backwards compatibility, we assume that the user called for MeshRefinement themselves, and we run a prolong_vectors(). 2: coarsen_elements() (and a restrict_vectors(), if anything changed), to shrink the system as much as refinement flags allow. 3: refine_elements() (and a prolong_vectors(), if anything changed), to grow the system to its final requested size. In the long run, there's probably three classes of behavior we still want to support: A. Backwards compatibility with apps that call refine_and_coarsen_elements() themselves. B. Backwards compatibility with apps that flag elements themselves but expect libMesh to do the refinement and coarsening. We could do that two ways: B1. Two-step coarsen-then-refine, for apps that worry about maximum memory use. B2. One-step coarsen-with-refine, for apps that worry about total CPU use. And in all those cases, we want: α. The final refinement flags should properly indicate JUST_REFINED/JUST_COARSENED elements. You need us to support α. (α+A, if you're using the adaptivity code in GRINS, right?) We need to not break A or B. We can support either B1 or B2 in the short run without adding an API for both. When we're in coarsen_elements() or refine_elements(), we "clean up" JUST_REFINED and JUST_COARSENED flags. If we don't do that, and someone tries to use case B in the code, then they'll be in our B1 implementation, the projection code will be confused by the leftover flags, and we'll die. But if we want to get α+B1 to work at all, then we have to handle exactly this problem: when the refine step occurs, it *can't* clear the JUST_COARSENED flags. Perhaps we can give the project_vector code an option to ignore JUST_COARSENED or JUST_REFINED flags, depending on whether it's being called from restrict_vectors() or from prolong_vectors()? Then coarsen_elements will only clear old JUST_COARSENED flags (before it sets new up to date flags itself), refine_elements will only clear old JUST_REFINED flags (before setting new up to date flags itself), restrict_vectors() will ignore the JUST_REFINED case, prolong_vectors() will ignore the JUST_COARSENED case, and everyone will be happy. This isn't trivial, though, and it affects code paths which we need to support yet may have no test coverage of, and I'm behind on other things. If you're up to adding that option yourself, that would be fantastic, and I'll help with review and debugging. If you can wait a month and a half then that's okay, and I can get things working by myself. The best application-level workaround I can think of is: 1. Make sure you configured libMesh with unique_ids on. 2. Move MeshRefinement::_smooth_flags() to a public MeshRefinement::smooth_flags() function. 3. Call smoothing manually before refinement. 4. Save flags manually. I'd suggest a hash map from element unique_id to RefinementFlag. 
After the smoothing is done this should be easy: if an element is marked to coarsen then save JUST_COARSENED on its parent, if marked to refine then save JUST_REFINED on it (which isn't quite right, but the children don't exist yet so you don't know what their unique ids will be). Maybe there's an intermediate-difficulty option? It would be reasonable as a temporary workaround to add a loop over elements, in each of refine_elements() and coarsen_elements(), which tests for REFINE or COARSEN flags, and then *only* clears JUST_REFINED/JUST_COARSENED flags if there is new refinement or coarsening to be done. That would be a much easier library modification and it would fix the α+A use case, even though it wouldn't be a step in the right direction long-term. If you do this, though, you definitely need to add a unit test, so I don't accidentally break your code again when we ever do get around to long term refactoring. --- Roy |
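Step 4 of that workaround might look roughly like the sketch below. It assumes unique_ids are enabled (step 1) and that flag smoothing has already run (step 3); save_refinement_flags is a made-up helper name.

    #include "libmesh/mesh_base.h"
    #include "libmesh/elem.h"

    #include <unordered_map>

    using namespace libMesh;

    // Record, before refine/coarsen actually runs, which elements are about to
    // change -- keyed by unique_id so the record survives renumbering.
    std::unordered_map<unique_id_type, Elem::RefinementState>
    save_refinement_flags(const MeshBase & mesh)
    {
      std::unordered_map<unique_id_type, Elem::RefinementState> saved;

      for (const Elem * elem : mesh.active_element_ptr_range())
        {
          if (elem->refinement_flag() == Elem::COARSEN && elem->parent())
            // Coarsening deletes this element; remember its parent as
            // JUST_COARSENED instead.
            saved[elem->parent()->unique_id()] = Elem::JUST_COARSENED;
          else if (elem->refinement_flag() == Elem::REFINE)
            // The children don't exist yet, so (as noted above) mark the
            // element itself rather than its future children.
            saved[elem->unique_id()] = Elem::JUST_REFINED;
        }

      return saved;
    }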
From: Tim A. <tra...@bu...> - 2016-11-21 16:43:57
|
Hello: Sorry for the longish email that follows, I tried to keep it as succinct as possible. tl;dr Is there any way to alter EquationSystems::reinit() to not call refine/coarsen_elements()? These calls prevent me from properly reinitializing my new QoI class. Or, is there another way to use the element refinement flags to identify elements that were just refined/coarsened? @pbauman pointed me to a GitHub PR <https://github.com/libMesh/libmesh/pull/1080> discussion where it seems that these refine/coarsen calls in ES::reinit() are not really necessary anymore. ----------------------------- I am currently working on implementing a new QoI in GRINS, but I am having an issue with EquationSystems::reinit(). My new QoI is based on the GRINS::RayfireMesh class, which must be reinitialized after every refine/coarsening. To fully integrate this, I am adding hooks into GRINS::MultiphysicsSystem::reinit() that will call reinit() on my QoI class. I have a test case that follows this flow: assemble QoI, refine certain elements, reinit the EquationSystems (which would also reinit my QoI), assemble QoI again. The reinit() <https://github.com/grinsfem/grins/blob/master/src/qoi/src/rayfire_mesh.C#L160> for RayfireMesh looks for an INACTIVE element with JUST_REFINED children in order to refine its own internal 1D mesh. The problem arises in that MeshRefinement::refine_elements() ends up getting called *twice*, once in my test case and again in ES::reinit() ( https://github.com/libMesh/libmesh/blob/master/src/systems/equation_systems.C#L265 ) When refine_elements() is called the second time, it replaces all JUST_REFINED flags with DO_NOTHING, breaking my RayfireMesh reinit function. If I do not call refine_elements() in my test, and instead just flag the elements and let the ES::reinit() do the refinement, then this <https://github.com/libMesh/libmesh/blob/master/src/systems/equation_systems.C#L270> IF statement is skipped, and prolong_vectors() below it trips an assert. Perhaps I could alter how I am identifying the just refined elements? The RayfireMesh map stores the IDs of main mesh elements along the 1D rayfire. On reinit(), it looks for inactive elements, and then tries to determine if they were refined or coarsened by looking at the refinement flags of its children/parent. Would there be any other situation where an element would go from ACTIVE to INACTIVE with DO_NOTHING children? I'm not sure how robust of a check that would be. I would appreciate any thoughts/insights/suggestions for how to address this. Thank you, -- *~Tim Adowski* |
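For context, the check that RayfireMesh::reinit() depends on amounts to something like the sketch below (not the actual GRINS code; it assumes a libMesh recent enough to have Elem::child_ptr()). It is exactly the JUST_REFINED flags read here that the second refine_elements() call resets to DO_NOTHING.

    #include "libmesh/elem.h"

    using namespace libMesh;

    // Was this previously active element refined during the last
    // EquationSystems::reinit()?
    bool was_just_refined(const Elem & elem)
    {
      if (elem.active() || !elem.has_children())
        return false;

      for (unsigned int c = 0; c < elem.n_children(); ++c)
        if (elem.child_ptr(c)->refinement_flag() != Elem::JUST_REFINED)
          return false;

      return true;
    }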
From: David K. <dav...@ak...> - 2016-11-20 22:27:56
|
If I understand correctly, the "called_recursively" stuff in dof_map_constraints.C is not used anymore because we expand out all recursive constraints beforehand. If that is correct, then I'd be in favor of removing the "called_recursively" code (i.e. in build_constraint_matrix and build_constraint_matrix_and_vector) in order to simplify the code a bit. I'd be happy to make a PR to do this, if it's something that others would also want? David |
From: Maria G. C. N. <mgc...@gm...> - 2016-11-09 13:42:49
|
Hi all, is there any way in libmesh to concatenate intersecting meshes? Best Maria |
From: John P. <jwp...@gm...> - 2016-10-24 21:11:43
|
Hi, Roy and I have started the 1.1.0 release branch. This looks to be an incremental update (no major API incompatible changes that I am aware of) and has fewer changes overall than the 0.9.5->1.0.0 release. We will still merge/cherry-pick new features onto 1.1.0 for the next week or so if there is enough demand, but after that 1.1.0 will be bug fixes only until "v1.1.0" is tagged. -- John |
From: David K. <dav...@ak...> - 2016-10-22 02:46:30
|
On Fri, Oct 21, 2016 at 5:22 PM, John Peterson <jwp...@gm...> wrote: > > > On Fri, Oct 21, 2016 at 2:57 PM, David Knezevic < > dav...@ak...> wrote: > >> It seems that ExodusII_IO::write_discontinuous_exodusII doesn't handle >> NodeElems correctly. NodeElems get plotted as ExodusII SPHERE elements, and >> this works fine with continuous plotting, but seems to be wrong in the >> discontinuous plotting case. >> >> I've attached a modified version of systems_of_equations_ex6 that >> illustrates this (change write_discontinuous_exodusII to >> write_equation_systems in the example to see the correct locations). >> Paraview plots the SPHERE elements as dots, so that is useful for checking >> the result. >> >> John, I thought you may have an idea of where to look to fix this? My >> first guess is that it might be related to the "FIXME" comment >> in ExodusII_IO_Helper::write_elements? >> > > You're talking about the FIXME on line 1449? That should only be a > problem if the mapping between your Mesh's node numbering and Exodus's node > numbering, as stored in the libmesh_node_num_to_exodus, is something > other than the trivial off-by-one map. > > I don't know if the presence of NodeElems, which are treated as Elems (?), > would have an effect on this, but I suppose it's possible. What is the > error message you're getting? > I think I found the issue, see the PR I just made (#1128). Looks like the code previously assumed num_nodes_per_elem was the same for the entire mesh, so that didn't work properly for meshes (like my test case) with different element types. David |
From: John P. <jwp...@gm...> - 2016-10-21 21:23:04
|
On Fri, Oct 21, 2016 at 2:57 PM, David Knezevic <dav...@ak...> wrote: > It seems that ExodusII_IO::write_discontinuous_exodusII doesn't handle > NodeElems correctly. NodeElems get plotted as ExodusII SPHERE elements, and > this works fine with continuous plotting, but seems to be wrong in the > discontinuous plotting case. > > I've attached a modified version of systems_of_equations_ex6 that > illustrates this (change write_discontinuous_exodusII to > write_equation_systems in the example to see the correct locations). > Paraview plots the SPHERE elements as dots, so that is useful for checking > the result. > > John, I thought you may have an idea of where to look to fix this? My > first guess is that it might be related to the "FIXME" comment > in ExodusII_IO_Helper::write_elements? > You're talking about the FIXME on line 1449? That should only be a problem if the mapping between your Mesh's node numbering and Exodus's node numbering, as stored in the libmesh_node_num_to_exodus, is something other than the trivial off-by-one map. I don't know if the presence of NodeElems, which are treated as Elems (?), would have an effect on this, but I suppose it's possible. What is the error message you're getting? -- John |
From: David K. <dav...@ak...> - 2016-10-21 20:57:16
|
It seems that ExodusII_IO::write_discontinuous_exodusII doesn't handle NodeElems correctly. NodeElems get plotted as ExodusII SPHERE elements, and this works fine with continuous plotting, but seems to be wrong in the discontinuous plotting case. I've attached a modified version of systems_of_equations_ex6 that illustrates this (change write_discontinuous_exodusII to write_equation_systems in the example to see the correct locations). Paraview plots the SPHERE elements as dots, so that is useful for checking the result. John, I thought you may have an idea of where to look to fix this? My first guess is that it might be related to the "FIXME" comment in ExodusII_IO_Helper::write_elements? David P.S. The reason I'm using NodeElems is that they're helpful for enabling dof-coupling using GhostingFunctor (e.g. for node-to-node springs); Roy suggested this approach recently and it works well for me. |
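A minimal sketch of the NodeElem trick from the P.S., since it comes up again below: a NODEELEM is a genuine element wrapping a single node, so it participates in partitioning, ghosting functors, and Exodus output (as a SPHERE) like any other element. This assumes a recent libMesh add_elem() overload taking a unique_ptr; add_node_elem is a made-up helper.

    #include "libmesh/mesh_base.h"
    #include "libmesh/elem.h"
    #include "libmesh/enum_elem_type.h"

    using namespace libMesh;

    // Wrap an existing mesh node in a NodeElem.  Call mesh.prepare_for_use()
    // once all of them (e.g. both endpoints of each spring) have been added.
    Elem * add_node_elem(MeshBase & mesh, dof_id_type node_id)
    {
      Elem * ne = mesh.add_elem(Elem::build(NODEELEM));
      ne->set_node(0) = mesh.node_ptr(node_id);
      return ne;
    }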
From: David K. <dav...@ak...> - 2016-10-13 16:19:56
|
On Thu, Oct 13, 2016 at 12:12 PM, David Knezevic <dav...@ak... > wrote: > On Wed, Oct 12, 2016 at 9:07 PM, David Knezevic < > dav...@ak...> wrote: > >> On Wed, Oct 12, 2016 at 5:04 PM, Roy Stogner <roy...@ic...> >> wrote: >> >>> >>> On Sun, 9 Oct 2016, David Knezevic wrote: >>> >>> I've attached a modified version of misc_ex9 that attaches >>>> constraints on one side of the "crack" and checks if the dof >>>> constraints are present during assembly. This passes in serial but >>>> fails in parallel because constraints are not communicated on the >>>> GhostingFunctor-coupled-dofs. >>>> >>> >>> I had to make a couple fixes to the test: switching mesh_1 and mesh_2 >>> to SerialMesh, and using >>> >>> dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_no >>> de_index).dof_number(0,0,0); >>> >>> to handle the case where node id isn't node dof id. >>> >>> The extra constraints I added mean that the problem doesn't make >>>> physical sense anymore unfortunately, but at least it tests for the >>>> constraint issue. Roy: I'm not sure if this is appropriate for a >>>> unit test, but it should at least be helpful for when you want to >>>> check your implementation. >>>> >>> >>> It was, thanks! Here's hoping #1120 fixes the real problem too. >>> >>> The modified ex9 is too big for a unit test, and too contrived for an >>> example, and I can't think of any easy way to fix that while >>> maintaining the same level of test coverage. But if you can come up >>> with anything that doesn't have both those problems I'd really love to >>> get this case into continuous integration. >>> >>> If you can't come up with anything either... I suppose I could combine >>> an extra-ghost-layers test case with a Dirichlet boundary and test a >>> wide stencil? That should hit the same code paths. Plus, it's about >>> time libMesh expanded into the cutting edge world of finite difference >>> methods. >>> >> >> >> I just tried my real problem with your PR and it's still not working, >> unfortunately. I'll have to look into that in more detail. I'll get back to >> you when I have more info. >> > > > Roy, I've attached an updated version of the misc_ex9 test. The test now > prints out has a Dirichlet boundary on one side of the domain (boundary ids > 1 and 11), and it prints out the dof IDs on the "crack" that have > constraints on them. 
With this I get the following output: > > 1 MPI process: > > *./example-opt* > *constrained upper dofs: 1025 1026 * > *constrained lower dofs: 0 1 * > > *constrained upper dofs: 1026 1029 * > *constrained lower dofs: 1 8 * > > *constrained upper dofs: 1029 1031 * > *constrained lower dofs: 8 12 * > > *constrained upper dofs: 1031 1033 * > *constrained lower dofs: 12 16 * > > 2 MPI processes: > > > *mpirun -np 2 ./example-opt --keep-cout* > *constrained upper dofs: 500 501 502 503 * > *constrained lower dofs: 525 526 * > > *constrained upper dofs: 501 504 505 502 * > *constrained lower dofs: 526 533 * > > *constrained upper dofs: 504 506 507 505 * > *constrained lower dofs: 533 537 * > > *constrained upper dofs: 506 508 509 507 * > *constrained lower dofs: 537 541 * > > *constrained upper dofs: 503 502 510 511 * > *constrained lower dofs: * > > *constrained upper dofs: 502 505 512 510 * > *constrained lower dofs: * > > *constrained upper dofs: 505 507 513 512 * > *constrained lower dofs: * > > *constrained upper dofs: 507 509 514 513 * > *constrained lower dofs: * > > *constrained upper dofs: 511 510 515 516 * > *constrained lower dofs: * > > *constrained upper dofs: 510 512 517 515 * > *constrained lower dofs: * > > *constrained upper dofs: 512 513 518 517 * > *constrained lower dofs: * > > *constrained upper dofs: 513 514 519 518 * > *constrained lower dofs: * > > *constrained upper dofs: 516 515 520 521 * > *constrained lower dofs: * > > *constrained upper dofs: 515 517 522 520 * > *constrained lower dofs: * > > *constrained upper dofs: 517 518 523 522 * > *constrained lower dofs: * > > *constrained upper dofs: 518 519 524 523 * > *constrained lower dofs:* > > The 1 process output makes sense to me: there should be only five nodes on > top and bottom that have a constraint. I don't understand the 2 process > output, there seems to be many extra constraints. Do you think this > indicates a bug in the constraint scattering? > > David > P.S. I changed the code slightly to print out the node info for each constrained node on the "crack". The results are below. You can see that in the parallel case, we have extra constraints that should not be there. 
David ----------------------------------------------------------------------- 1 MPI process: lower constrained node info: Node id()=0, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/0) lower constrained node info: Node id()=1, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1) upper constrained node info: Node id()=1025, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/1025) upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1026) lower constrained node info: Node id()=1, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1) lower constrained node info: Node id()=8, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/8) upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1026) upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/1029) lower constrained node info: Node id()=8, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/8) lower constrained node info: Node id()=12, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/12) upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/1029) upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/1031) lower constrained node info: Node id()=12, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/12) lower constrained node info: Node id()=16, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/16) upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/1031) upper constrained node info: Node id()=1033, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/1033) *2 MPI processes using --keep-cout:* lower constrained node info: Node id()=0, processor_id()=1, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/525) lower constrained node info: Node id()=1, processor_id()=1, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/526) upper constrained node info: Node id()=1025, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/500) upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/501) upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502) upper constrained node info: Node id()=1028, processor_id()=0, Point=(x,y,z)=( 0, 1, 20) DoFs=(0/0/503) lower constrained node info: Node id()=1, processor_id()=1, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/526) lower constrained node info: Node id()=8, processor_id()=1, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/533) upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/501) upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/504) upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505) upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502) lower constrained node info: Node id()=8, processor_id()=1, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/533) lower constrained node info: Node id()=12, processor_id()=1, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/537) upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/504) upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/506) upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507) upper constrained node info: Node 
id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505) lower constrained node info: Node id()=12, processor_id()=1, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/537) lower constrained node info: Node id()=16, processor_id()=1, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/541) upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/506) upper constrained node info: Node id()=1033, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/508) upper constrained node info: Node id()=1034, processor_id()=0, Point=(x,y,z)=( 4, 1, 20) DoFs=(0/0/509) upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507) upper constrained node info: Node id()=1028, processor_id()=0, Point=(x,y,z)=( 0, 1, 20) DoFs=(0/0/503) upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502) upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510) upper constrained node info: Node id()=1036, processor_id()=0, Point=(x,y,z)=( 0, 2, 20) DoFs=(0/0/511) upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502) upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505) upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512) upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510) upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505) upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507) upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513) upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512) upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507) upper constrained node info: Node id()=1034, processor_id()=0, Point=(x,y,z)=( 4, 1, 20) DoFs=(0/0/509) upper constrained node info: Node id()=1039, processor_id()=0, Point=(x,y,z)=( 4, 2, 20) DoFs=(0/0/514) upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513) upper constrained node info: Node id()=1036, processor_id()=0, Point=(x,y,z)=( 0, 2, 20) DoFs=(0/0/511) upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510) upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515) upper constrained node info: Node id()=1041, processor_id()=0, Point=(x,y,z)=( 0, 3, 20) DoFs=(0/0/516) upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510) upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512) upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517) upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515) upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512) upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513) upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518) upper constrained node info: Node id()=1042, 
processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517) upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513) upper constrained node info: Node id()=1039, processor_id()=0, Point=(x,y,z)=( 4, 2, 20) DoFs=(0/0/514) upper constrained node info: Node id()=1044, processor_id()=0, Point=(x,y,z)=( 4, 3, 20) DoFs=(0/0/519) upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518) upper constrained node info: Node id()=1041, processor_id()=0, Point=(x,y,z)=( 0, 3, 20) DoFs=(0/0/516) upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515) upper constrained node info: Node id()=1045, processor_id()=0, Point=(x,y,z)=( 1, 4, 20) DoFs=(0/0/520) upper constrained node info: Node id()=1046, processor_id()=0, Point=(x,y,z)=( 0, 4, 20) DoFs=(0/0/521) upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515) upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517) upper constrained node info: Node id()=1047, processor_id()=0, Point=(x,y,z)=( 2, 4, 20) DoFs=(0/0/522) upper constrained node info: Node id()=1045, processor_id()=0, Point=(x,y,z)=( 1, 4, 20) DoFs=(0/0/520) upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517) upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518) upper constrained node info: Node id()=1048, processor_id()=0, Point=(x,y,z)=( 3, 4, 20) DoFs=(0/0/523) upper constrained node info: Node id()=1047, processor_id()=0, Point=(x,y,z)=( 2, 4, 20) DoFs=(0/0/522) upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518) upper constrained node info: Node id()=1044, processor_id()=0, Point=(x,y,z)=( 4, 3, 20) DoFs=(0/0/519) upper constrained node info: Node id()=1049, processor_id()=0, Point=(x,y,z)=( 4, 4, 20) DoFs=(0/0/524) upper constrained node info: Node id()=1048, processor_id()=0, Point=(x,y,z)=( 3, 4, 20) DoFs=(0/0/523) |
From: David K. <dav...@ak...> - 2016-10-13 16:12:42
|
On Wed, Oct 12, 2016 at 9:07 PM, David Knezevic <dav...@ak...> wrote: > On Wed, Oct 12, 2016 at 5:04 PM, Roy Stogner <roy...@ic...> > wrote: > >> >> On Sun, 9 Oct 2016, David Knezevic wrote: >> >> I've attached a modified version of misc_ex9 that attaches >>> constraints on one side of the "crack" and checks if the dof >>> constraints are present during assembly. This passes in serial but >>> fails in parallel because constraints are not communicated on the >>> GhostingFunctor-coupled-dofs. >>> >> >> I had to make a couple fixes to the test: switching mesh_1 and mesh_2 >> to SerialMesh, and using >> >> dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_no >> de_index).dof_number(0,0,0); >> >> to handle the case where node id isn't node dof id. >> >> The extra constraints I added mean that the problem doesn't make >>> physical sense anymore unfortunately, but at least it tests for the >>> constraint issue. Roy: I'm not sure if this is appropriate for a >>> unit test, but it should at least be helpful for when you want to >>> check your implementation. >>> >> >> It was, thanks! Here's hoping #1120 fixes the real problem too. >> >> The modified ex9 is too big for a unit test, and too contrived for an >> example, and I can't think of any easy way to fix that while >> maintaining the same level of test coverage. But if you can come up >> with anything that doesn't have both those problems I'd really love to >> get this case into continuous integration. >> >> If you can't come up with anything either... I suppose I could combine >> an extra-ghost-layers test case with a Dirichlet boundary and test a >> wide stencil? That should hit the same code paths. Plus, it's about >> time libMesh expanded into the cutting edge world of finite difference >> methods. >> > > > I just tried my real problem with your PR and it's still not working, > unfortunately. I'll have to look into that in more detail. I'll get back to > you when I have more info. > Roy, I've attached an updated version of the misc_ex9 test. The test now prints out has a Dirichlet boundary on one side of the domain (boundary ids 1 and 11), and it prints out the dof IDs on the "crack" that have constraints on them. 
With this I get the following output: 1 MPI process: *./example-opt* *constrained upper dofs: 1025 1026 * *constrained lower dofs: 0 1 * *constrained upper dofs: 1026 1029 * *constrained lower dofs: 1 8 * *constrained upper dofs: 1029 1031 * *constrained lower dofs: 8 12 * *constrained upper dofs: 1031 1033 * *constrained lower dofs: 12 16 * 2 MPI processes: *mpirun -np 2 ./example-opt --keep-cout* *constrained upper dofs: 500 501 502 503 * *constrained lower dofs: 525 526 * *constrained upper dofs: 501 504 505 502 * *constrained lower dofs: 526 533 * *constrained upper dofs: 504 506 507 505 * *constrained lower dofs: 533 537 * *constrained upper dofs: 506 508 509 507 * *constrained lower dofs: 537 541 * *constrained upper dofs: 503 502 510 511 * *constrained lower dofs: * *constrained upper dofs: 502 505 512 510 * *constrained lower dofs: * *constrained upper dofs: 505 507 513 512 * *constrained lower dofs: * *constrained upper dofs: 507 509 514 513 * *constrained lower dofs: * *constrained upper dofs: 511 510 515 516 * *constrained lower dofs: * *constrained upper dofs: 510 512 517 515 * *constrained lower dofs: * *constrained upper dofs: 512 513 518 517 * *constrained lower dofs: * *constrained upper dofs: 513 514 519 518 * *constrained lower dofs: * *constrained upper dofs: 516 515 520 521 * *constrained lower dofs: * *constrained upper dofs: 515 517 522 520 * *constrained lower dofs: * *constrained upper dofs: 517 518 523 522 * *constrained lower dofs: * *constrained upper dofs: 518 519 524 523 * *constrained lower dofs:* The 1 process output makes sense to me: there should be only five nodes on top and bottom that have a constraint. I don't understand the 2 process output, there seems to be many extra constraints. Do you think this indicates a bug in the constraint scattering? David |
From: David K. <dav...@ak...> - 2016-10-13 01:07:18
|
On Wed, Oct 12, 2016 at 5:04 PM, Roy Stogner <roy...@ic...> wrote: > > On Sun, 9 Oct 2016, David Knezevic wrote: > > I've attached a modified version of misc_ex9 that attaches >> constraints on one side of the "crack" and checks if the dof >> constraints are present during assembly. This passes in serial but >> fails in parallel because constraints are not communicated on the >> GhostingFunctor-coupled-dofs. >> > > I had to make a couple fixes to the test: switching mesh_1 and mesh_2 > to SerialMesh, and using > > dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_no > de_index).dof_number(0,0,0); > > to handle the case where node id isn't node dof id. > > The extra constraints I added mean that the problem doesn't make >> physical sense anymore unfortunately, but at least it tests for the >> constraint issue. Roy: I'm not sure if this is appropriate for a >> unit test, but it should at least be helpful for when you want to >> check your implementation. >> > > It was, thanks! Here's hoping #1120 fixes the real problem too. > > The modified ex9 is too big for a unit test, and too contrived for an > example, and I can't think of any easy way to fix that while > maintaining the same level of test coverage. But if you can come up > with anything that doesn't have both those problems I'd really love to > get this case into continuous integration. > > If you can't come up with anything either... I suppose I could combine > an extra-ghost-layers test case with a Dirichlet boundary and test a > wide stencil? That should hit the same code paths. Plus, it's about > time libMesh expanded into the cutting edge world of finite difference > methods. > I just tried my real problem with your PR and it's still not working, unfortunately. I'll have to look into that in more detail. I'll get back to you when I have more info. David |
From: Roy S. <roy...@ic...> - 2016-10-12 21:04:17
|
On Sun, 9 Oct 2016, David Knezevic wrote: > I've attached a modified version of misc_ex9 that attaches > constraints on one side of the "crack" and checks if the dof > constraints are present during assembly. This passes in serial but > fails in parallel because constraints are not communicated on the > GhostingFunctor-coupled-dofs. I had to make a couple fixes to the test: switching mesh_1 and mesh_2 to SerialMesh, and using dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_node_index).dof_number(0,0,0); to handle the case where node id isn't node dof id. > The extra constraints I added mean that the problem doesn't make > physical sense anymore unfortunately, but at least it tests for the > constraint issue. Roy: I'm not sure if this is appropriate for a > unit test, but it should at least be helpful for when you want to > check your implementation. It was, thanks! Here's hoping #1120 fixes the real problem too. The modified ex9 is too big for a unit test, and too contrived for an example, and I can't think of any easy way to fix that while maintaining the same level of test coverage. But if you can come up with anything that doesn't have both those problems I'd really love to get this case into continuous integration. If you can't come up with anything either... I suppose I could combine an extra-ghost-layers test case with a Dirichlet boundary and test a wide stencil? That should hit the same code paths. Plus, it's about time libMesh expanded into the cutting edge world of finite difference methods. --- Roy |
From: David K. <dav...@ak...> - 2016-10-11 20:55:56
|
On Tue, Oct 11, 2016 at 4:10 PM, Roy Stogner <roy...@ic...> wrote: > > > On Sun, 9 Oct 2016, David Knezevic wrote: > > On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roy...@ic...> >> wrote: >> >> I think the proper fix is to call the coupling functors in >> scatter_constraints(), then query the processors who own the >> elements >> which are to be coupled for any constraints on those elements' dofs. >> > > Just about done. > > Except now I'm paranoid, because I ran across my years-old comment, > > // Next we need to push constraints to processors which don't own > // the constrained dof, don't own the constraining dof, but own an > // element supporting the constraining dof. > ... > // Getting distributed adaptive sparsity patterns right is hard. > > And I don't recall *why* we had that need! When we're doing an > enforce_constraints_exactly() it's only important to have all our own > dofs' constraints and their dependencies. When we're constraining an > element stiffness matrix we need to build a constraint matrix C, but > that constraint matrix only requires us to know about constrained dofs > on the local element, not about constraining dofs on the local > element. > > So why the heck did I think processors needed to know about everywhere > a locally-supported dof constrain*ing*? If I can't figure out what > the reason was then I can't decide whether or not it will be > applicable to coupled-ghosted elements too. > hmm, I agree with you, I can't see why the constraining dofs are required... maybe we can proceed on the assumption that they aren't required, and do some tests to see if we hit a case where that assumption is wrong? David |
From: Roy S. <roy...@ic...> - 2016-10-11 20:11:04
|
On Sun, 9 Oct 2016, David Knezevic wrote: > On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roy...@ic...> wrote: > > I think the proper fix is to call the coupling functors in > scatter_constraints(), then query the processors who own the elements > which are to be coupled for any constraints on those elements' dofs. Just about done. Except now I'm paranoid, because I ran across my years-old comment, // Next we need to push constraints to processors which don't own // the constrained dof, don't own the constraining dof, but own an // element supporting the constraining dof. ... // Getting distributed adaptive sparsity patterns right is hard. And I don't recall *why* we had that need! When we're doing an enforce_constraints_exactly() it's only important to have all our own dofs' constraints and their dependencies. When we're constraining an element stiffness matrix we need to build a constraint matrix C, but that constraint matrix only requires us to know about constrained dofs on the local element, not about constraining dofs on the local element. So why the heck did I think processors needed to know about everywhere a locally-supported dof constrain*ing*? If I can't figure out what the reason was then I can't decide whether or not it will be applicable to coupled-ghosted elements too. --- Roy |
From: David K. <dav...@ak...> - 2016-10-09 20:17:39
|
On Thu, Oct 6, 2016 at 10:44 PM, David Knezevic <dav...@ak...> wrote: > On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roy...@ic...> > wrote: > >> >> >> On Thu, 6 Oct 2016, David Knezevic wrote: >> >> I'm using GhostingFunctor for a contact solve, in which I consider a 1/4 >>> domain with partial Dirichlet boundary conditions that impose a symmetry >>> condition (i.e. displacement normal to the symmetry boundary is clamped >>> to zero, and tangential displacement is unconstrained). This means that I >>> have Dirichlet constraints that affect the dofs on the contact surface. >>> What I find is that the solve works fine in serial, but in parallel the >>> nonlinear convergence fails, presumably because of an incorrect Jacobian. >>> I have actually run into this exact issue before (when I was augmenting >>> the sparsity pattern "manually", prior to GhostingFunctor) and I found >>> that the issue was that the dof constraints on the contact surface were >>> not being communicated in parallel, which caused the incorrect Jacobian >>> in parallel. I fixed it by adding artificial Edge2 elements into the >>> mesh (like in systems_of_equations_ex8) to ensure that the dof constraints >>> are communicated properly in parallel. >>> >>> So, my question is, is there a way to achieve the necessary dof >>> constraint communication with the new GhostingFunctor API? I had hoped that >>> using >>> "add_algebraic_ghosting_functor" would do the job, but it apparently >>> doesn't. >>> >> >> Hmm... If you only needed algebraic ghosting, then >> add_algebraic_ghosting_functor should have been sufficient - >> processors won't know about all their ghosted dofs' constraints, but >> the ghosted dofs will get properly added to the send_list, and >> enforce_constraints_exactly() will ensure that the dofs, once >> constrained on the processors which own them, will have their >> values updated on all the processors which ghost them. >> >> But you need to augment the sparsity pattern too, so you should be >> using add_coupling_functor instead... and in *that* case, we're >> broken, aren't we? You build a Jacobian connecting the remotely >> coupled dofs, and you try to hit it with constrain_element_foo() or >> heterogeneously_constrain_element_foo(), but the processor isn't aware >> of all the ghosted dofs' constraints, so the element constraint matrix >> is wrong and so is your final answer. >> > > > Yep, that's exactly my understanding of the issue. > > > >> I think the proper fix is to call the coupling functors in >> scatter_constraints(), then query the processors who own the elements >> which are to be coupled for any constraints on those elements' dofs. >> I can take a crack at that tomorrow or Monday. Any chance you could >> set up a unit test (or even just stuff an assertion into the misc ex9 >> example?) that checks for the problem? >> > > That'd be great, thanks! I'll be happy to try it out once it's ready. I'll > also look into making a test case that can be added to libMesh. > I've attached a modified version of misc_ex9 that attaches constraints on one side of the "crack" and checks if the dof constraints are present during assembly. This passes in serial but fails in parallel because constraints are not communicated on the GhostingFunctor-coupled-dofs. The extra constraints I added mean that the problem doesn't make physical sense anymore unfortunately, but at least it tests for the constraint issue. 
Roy: I'm not sure if this is appropriate for a unit test, but it should at least be helpful for when you want to check your implementation. David |
From: David K. <dav...@ak...> - 2016-10-07 02:44:45
|
On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roy...@ic...> wrote: > > > On Thu, 6 Oct 2016, David Knezevic wrote: > > I'm using GhostingFunctor for a contact solve, in which I consider a 1/4 >> domain with partial Dirichlet boundary conditions that impose a symmetry >> condition (i.e. displacement normal to the symmetry boundary is clamped >> to zero, and tangential displacement is unconstrained). This means that I >> have Dirichlet constraints that affect the dofs on the contact surface. >> What I find is that the solve works fine in serial, but in parallel the >> nonlinear convergence fails, presumably because of an incorrect Jacobian. >> I have actually run into this exact issue before (when I was augmenting >> the sparsity pattern "manually", prior to GhostingFunctor) and I found >> that the issue was that the dof constraints on the contact surface were >> not being communicated in parallel, which caused the incorrect Jacobian >> in parallel. I fixed it by adding artificial Edge2 elements into the mesh >> (like in systems_of_equations_ex8) to ensure that the dof constraints >> are communicated properly in parallel. >> >> So, my question is, is there a way to achieve the necessary dof >> constraint communication with the new GhostingFunctor API? I had hoped that >> using >> "add_algebraic_ghosting_functor" would do the job, but it apparently >> doesn't. >> > > Hmm... If you only needed algebraic ghosting, then > add_algebraic_ghosting_functor should have been sufficient - > processors won't know about all their ghosted dofs' constraints, but > the ghosted dofs will get properly added to the send_list, and > enforce_constraints_exactly() will ensure that the dofs, once > constrained on the processors which own them, will have their > values updated on all the processors which ghost them. > > But you need to augment the sparsity pattern too, so you should be > using add_coupling_functor instead... and in *that* case, we're > broken, aren't we? You build a Jacobian connecting the remotely > coupled dofs, and you try to hit it with constrain_element_foo() or > heterogeneously_constrain_element_foo(), but the processor isn't aware > of all the ghosted dofs' constraints, so the element constraint matrix > is wrong and so is your final answer. > Yep, that's exactly my understanding of the issue. > I think the proper fix is to call the coupling functors in > scatter_constraints(), then query the processors who own the elements > which are to be coupled for any constraints on those elements' dofs. > I can take a crack at that tomorrow or Monday. Any chance you could > set up a unit test (or even just stuff an assertion into the misc ex9 > example?) that checks for the problem? > That'd be great, thanks! I'll be happy to try it out once it's ready. I'll also look into making a test case that can be added to libMesh. Thanks, David |
From: Roy S. <roy...@ic...> - 2016-10-07 02:34:42
|
On Thu, 6 Oct 2016, David Knezevic wrote: > I'm using GhostingFunctor for a contact solve, in which I consider a 1/4 domain with partial Dirichlet boundary conditions that impose a symmetry > condition (i.e. displacement normal to the symmetry boundary is clamped to zero, and tangential displacement is unconstrained). This means that I > have Dirichlet constraints that affect the dofs on the contact surface. > What I find is that the solve works fine in serial, but in parallel the nonlinear convergence fails, presumably because of an incorrect Jacobian. > I have actually run into this exact issue before (when I was augmenting the sparsity pattern "manually", prior to GhostingFunctor) and I found > that the issue was that the dof constraints on the contact surface were not being communicated in parallel, which caused the incorrect Jacobian > in parallel. I fixed it by adding artificial Edge2 elements into the mesh (like in systems_of_equations_ex8) to ensure that the dof constraints > are communicated properly in parallel. > > So, my question is, is there a way to achieve the necessary dof constraint communication with the new GhostingFunctor API? I had hoped that using > "add_algebraic_ghosting_functor" would do the job, but it apparently doesn't. Hmm... If you only needed algebraic ghosting, then add_algebraic_ghosting_functor should have been sufficient - processors won't know about all their ghosted dofs' constraints, but the ghosted dofs will get properly added to the send_list, and enforce_constraints_exactly() will ensure that the dofs, once constrained on the processors which own them, will have their values updated on all the processors which ghost them. But you need to augment the sparsity pattern too, so you should be using add_coupling_functor instead... and in *that* case, we're broken, aren't we? You build a Jacobian connecting the remotely coupled dofs, and you try to hit it with constrain_element_foo() or heterogeneously_constrain_element_foo(), but the processor isn't aware of all the ghosted dofs' constraints, so the element constraint matrix is wrong and so is your final answer. I think the proper fix is to call the coupling functors in scatter_constraints(), then query the processors who own the elements which are to be coupled for any constraints on those elements' dofs. I can take a crack at that tomorrow or Monday. Any chance you could set up a unit test (or even just stuff an assertion into the misc ex9 example?) that checks for the problem? --- Roy |
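The distinction drawn here (algebraic ghosting vs. coupling) is easiest to see in code, so a sketch of a custom coupling functor of the kind being discussed follows. ContactCoupling and find_contact_partner() are made-up names, and the partner search is application-specific; only the GhostingFunctor plumbing is meant to be representative.

    #include "libmesh/ghosting_functor.h"
    #include "libmesh/mesh_base.h"
    #include "libmesh/elem.h"

    using namespace libMesh;

    class ContactCoupling : public GhostingFunctor
    {
    public:
      // For each element in the range, report any remote element whose dofs
      // couple to it, so libMesh can both ghost it and allocate sparsity.
      virtual void operator() (const MeshBase::const_element_iterator & range_begin,
                               const MeshBase::const_element_iterator & range_end,
                               processor_id_type /* p */,
                               map_type & coupled_elements) override
      {
        // A null CouplingMatrix means "couple every variable pair".
        const CouplingMatrix * null_mat = nullptr;

        for (auto it = range_begin; it != range_end; ++it)
          {
            const Elem * elem = *it;
            // Application-specific contact search, assumed for illustration.
            if (const Elem * partner = find_contact_partner(elem))
              coupled_elements.emplace(partner, null_mat);
          }
      }

    private:
      const Elem * find_contact_partner(const Elem *) const { return nullptr; }
    };

Such a functor would then be attached once per system with system.get_dof_map().add_coupling_functor(coupling), as opposed to add_algebraic_ghosting_functor(), which (per the above) only grows the send_list and does not augment the sparsity pattern.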