From: David Knezevic <david.knezevic@ak...> - 2016-10-22 02:46:30

On Fri, Oct 21, 2016 at 5:22 PM, John Peterson <jwpeterson@...> wrote:

> On Fri, Oct 21, 2016 at 2:57 PM, David Knezevic <david.knezevic@...> wrote:
>
>> It seems that ExodusII_IO::write_discontinuous_exodusII doesn't handle
>> NodeElems correctly. NodeElems get plotted as ExodusII SPHERE elements, and
>> this works fine with continuous plotting, but seems to be wrong in the
>> discontinuous plotting case.
>>
>> I've attached a modified version of systems_of_equations_ex6 that
>> illustrates this (change write_discontinuous_exodusII to
>> write_equation_systems in the example to see the correct locations).
>> Paraview plots the SPHERE elements as dots, so that is useful for checking
>> the result.
>>
>> John, I thought you may have an idea of where to look to fix this? My
>> first guess is that it might be related to the "FIXME" comment
>> in ExodusII_IO_Helper::write_elements?
>
> You're talking about the FIXME on line 1449? That should only be a
> problem if the mapping between your Mesh's node numbering and Exodus's node
> numbering, as stored in the libmesh_node_num_to_exodus, is something
> other than the trivial off-by-one map.
>
> I don't know if the presence of NodeElems, which are treated as Elems (?),
> would have an effect on this, but I suppose it's possible. What is the
> error message you're getting?

I think I found the issue, see the PR I just made (#1128). Looks like the code previously assumed num_nodes_per_elem was the same for the entire mesh, so that didn't work properly for meshes (like my test case) with different element types.

David
From: John Peterson <jwpeterson@gm...> - 2016-10-21 21:23:04

On Fri, Oct 21, 2016 at 2:57 PM, David Knezevic <david.knezevic@...> wrote:

> It seems that ExodusII_IO::write_discontinuous_exodusII doesn't handle
> NodeElems correctly. NodeElems get plotted as ExodusII SPHERE elements, and
> this works fine with continuous plotting, but seems to be wrong in the
> discontinuous plotting case.
>
> I've attached a modified version of systems_of_equations_ex6 that
> illustrates this (change write_discontinuous_exodusII to
> write_equation_systems in the example to see the correct locations).
> Paraview plots the SPHERE elements as dots, so that is useful for checking
> the result.
>
> John, I thought you may have an idea of where to look to fix this? My
> first guess is that it might be related to the "FIXME" comment
> in ExodusII_IO_Helper::write_elements?

You're talking about the FIXME on line 1449? That should only be a problem if the mapping between your Mesh's node numbering and Exodus's node numbering, as stored in the libmesh_node_num_to_exodus, is something other than the trivial off-by-one map.

I don't know if the presence of NodeElems, which are treated as Elems (?), would have an effect on this, but I suppose it's possible. What is the error message you're getting?

John
From: David Knezevic <david.knezevic@ak...> - 2016-10-21 20:57:16

From: David Knezevic <david.knezevic@ak...> - 2016-10-13 16:19:56

On Thu, Oct 13, 2016 at 12:12 PM, David Knezevic <david.knezevic@...> wrote:

> On Wed, Oct 12, 2016 at 9:07 PM, David Knezevic <david.knezevic@...> wrote:
>
>> On Wed, Oct 12, 2016 at 5:04 PM, Roy Stogner <roystgnr@...> wrote:
>>
>>> On Sun, 9 Oct 2016, David Knezevic wrote:
>>>
>>>> I've attached a modified version of misc_ex9 that attaches
>>>> constraints on one side of the "crack" and checks if the dof
>>>> constraints are present during assembly. This passes in serial but
>>>> fails in parallel because constraints are not communicated on the
>>>> GhostingFunctor-coupled dofs.
>>>
>>> I had to make a couple fixes to the test: switching mesh_1 and mesh_2
>>> to SerialMesh, and using
>>>
>>>   dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_node_index).dof_number(0,0,0);
>>>
>>> to handle the case where node id isn't node dof id.
>>>
>>>> The extra constraints I added mean that the problem doesn't make
>>>> physical sense anymore unfortunately, but at least it tests for the
>>>> constraint issue. Roy: I'm not sure if this is appropriate for a
>>>> unit test, but it should at least be helpful for when you want to
>>>> check your implementation.
>>>
>>> It was, thanks! Here's hoping #1120 fixes the real problem too.
>>>
>>> The modified ex9 is too big for a unit test, and too contrived for an
>>> example, and I can't think of any easy way to fix that while
>>> maintaining the same level of test coverage. But if you can come up
>>> with anything that doesn't have both those problems I'd really love to
>>> get this case into continuous integration.
>>>
>>> If you can't come up with anything either... I suppose I could combine
>>> an extra-ghost-layers test case with a Dirichlet boundary and test a
>>> wide stencil? That should hit the same code paths. Plus, it's about
>>> time libMesh expanded into the cutting edge world of finite difference
>>> methods.
>> I just tried my real problem with your PR and it's still not working,
>> unfortunately. I'll have to look into that in more detail. I'll get back to
>> you when I have more info.
>
> Roy, I've attached an updated version of the misc_ex9 test. The test now
> has a Dirichlet boundary on one side of the domain (boundary ids 1 and 11),
> and it prints out the dof IDs on the "crack" that have constraints on them.
> With this I get the following output:
>
> 1 MPI process:
>
> ./example-opt
> constrained upper dofs: 1025 1026
> constrained lower dofs: 0 1
>
> constrained upper dofs: 1026 1029
> constrained lower dofs: 1 8
>
> constrained upper dofs: 1029 1031
> constrained lower dofs: 8 12
>
> constrained upper dofs: 1031 1033
> constrained lower dofs: 12 16
>
> 2 MPI processes:
>
> mpirun -np 2 ./example-opt --keep-cout
> constrained upper dofs: 500 501 502 503
> constrained lower dofs: 525 526
>
> constrained upper dofs: 501 504 505 502
> constrained lower dofs: 526 533
>
> constrained upper dofs: 504 506 507 505
> constrained lower dofs: 533 537
>
> constrained upper dofs: 506 508 509 507
> constrained lower dofs: 537 541
>
> constrained upper dofs: 503 502 510 511
> constrained lower dofs:
>
> constrained upper dofs: 502 505 512 510
> constrained lower dofs:
>
> constrained upper dofs: 505 507 513 512
> constrained lower dofs:
>
> constrained upper dofs: 507 509 514 513
> constrained lower dofs:
>
> constrained upper dofs: 511 510 515 516
> constrained lower dofs:
>
> constrained upper dofs: 510 512 517 515
> constrained lower dofs:
>
> constrained upper dofs: 512 513 518 517
> constrained lower dofs:
>
> constrained upper dofs: 513 514 519 518
> constrained lower dofs:
>
> constrained upper dofs: 516 515 520 521
> constrained lower dofs:
>
> constrained upper dofs: 515 517 522 520
> constrained lower dofs:
>
> constrained upper dofs: 517 518 523 522
> constrained lower dofs:
>
> constrained upper dofs: 518 519 524 523
> constrained lower dofs:
>
> The 1 process output makes sense to me: there should be only five nodes on
> top and bottom that have a constraint. I don't understand the 2 process
> output, there seems to be many extra constraints. Do you think this
> indicates a bug in the constraint scattering?
>
> David

P.S. I changed the code slightly to print out the node info for each constrained node on the "crack". The results are below. You can see that in the parallel case, we have extra constraints that should not be there.

David

1 MPI process:

lower constrained node info: Node id()=0, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/0)
lower constrained node info: Node id()=1, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1)
upper constrained node info: Node id()=1025, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/1025)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1026)
lower constrained node info: Node id()=1, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1)
lower constrained node info: Node id()=8, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/8)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/1026)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/1029)
lower constrained node info: Node id()=8, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/8)
lower constrained node info: Node id()=12, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/12)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/1029)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/1031)
lower constrained node info: Node id()=12, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/12)
lower constrained node info: Node id()=16, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/16)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/1031)
upper constrained node info: Node id()=1033, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/1033)

2 MPI processes using --keep-cout:

lower constrained node info: Node id()=0, processor_id()=1, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/525)
lower constrained node info: Node id()=1, processor_id()=1, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/526)
upper constrained node info: Node id()=1025, processor_id()=0, Point=(x,y,z)=( 0, 0, 20) DoFs=(0/0/500)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/501)
upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)
upper constrained node info: Node id()=1028, processor_id()=0, Point=(x,y,z)=( 0, 1, 20) DoFs=(0/0/503)
lower constrained node info: Node id()=1, processor_id()=1, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/526)
lower constrained node info: Node id()=8, processor_id()=1, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/533)
upper constrained node info: Node id()=1026, processor_id()=0, Point=(x,y,z)=( 1, 0, 20) DoFs=(0/0/501)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/504)
upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)
upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)
lower constrained node info: Node id()=8, processor_id()=1, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/533)
lower constrained node info: Node id()=12, processor_id()=1, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/537)
upper constrained node info: Node id()=1029, processor_id()=0, Point=(x,y,z)=( 2, 0, 20) DoFs=(0/0/504)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/506)
upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)
upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)
lower constrained node info: Node id()=12, processor_id()=1, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/537)
lower constrained node info: Node id()=16, processor_id()=1, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/541)
upper constrained node info: Node id()=1031, processor_id()=0, Point=(x,y,z)=( 3, 0, 20) DoFs=(0/0/506)
upper constrained node info: Node id()=1033, processor_id()=0, Point=(x,y,z)=( 4, 0, 20) DoFs=(0/0/508)
upper constrained node info: Node id()=1034, processor_id()=0, Point=(x,y,z)=( 4, 1, 20) DoFs=(0/0/509)
upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)
upper constrained node info: Node id()=1028, processor_id()=0, Point=(x,y,z)=( 0, 1, 20) DoFs=(0/0/503)
upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)
upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)
upper constrained node info: Node id()=1036, processor_id()=0, Point=(x,y,z)=( 0, 2, 20) DoFs=(0/0/511)
upper constrained node info: Node id()=1027, processor_id()=0, Point=(x,y,z)=( 1, 1, 20) DoFs=(0/0/502)
upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)
upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)
upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)
upper constrained node info: Node id()=1030, processor_id()=0, Point=(x,y,z)=( 2, 1, 20) DoFs=(0/0/505)
upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)
upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)
upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)
upper constrained node info: Node id()=1032, processor_id()=0, Point=(x,y,z)=( 3, 1, 20) DoFs=(0/0/507)
upper constrained node info: Node id()=1034, processor_id()=0, Point=(x,y,z)=( 4, 1, 20) DoFs=(0/0/509)
upper constrained node info: Node id()=1039, processor_id()=0, Point=(x,y,z)=( 4, 2, 20) DoFs=(0/0/514)
upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)
upper constrained node info: Node id()=1036, processor_id()=0, Point=(x,y,z)=( 0, 2, 20) DoFs=(0/0/511)
upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)
upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)
upper constrained node info: Node id()=1041, processor_id()=0, Point=(x,y,z)=( 0, 3, 20) DoFs=(0/0/516)
upper constrained node info: Node id()=1035, processor_id()=0, Point=(x,y,z)=( 1, 2, 20) DoFs=(0/0/510)
upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)
upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)
upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)
upper constrained node info: Node id()=1037, processor_id()=0, Point=(x,y,z)=( 2, 2, 20) DoFs=(0/0/512)
upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)
upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)
upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)
upper constrained node info: Node id()=1038, processor_id()=0, Point=(x,y,z)=( 3, 2, 20) DoFs=(0/0/513)
upper constrained node info: Node id()=1039, processor_id()=0, Point=(x,y,z)=( 4, 2, 20) DoFs=(0/0/514)
upper constrained node info: Node id()=1044, processor_id()=0, Point=(x,y,z)=( 4, 3, 20) DoFs=(0/0/519)
upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)
upper constrained node info: Node id()=1041, processor_id()=0, Point=(x,y,z)=( 0, 3, 20) DoFs=(0/0/516)
upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)
upper constrained node info: Node id()=1045, processor_id()=0, Point=(x,y,z)=( 1, 4, 20) DoFs=(0/0/520)
upper constrained node info: Node id()=1046, processor_id()=0, Point=(x,y,z)=( 0, 4, 20) DoFs=(0/0/521)
upper constrained node info: Node id()=1040, processor_id()=0, Point=(x,y,z)=( 1, 3, 20) DoFs=(0/0/515)
upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)
upper constrained node info: Node id()=1047, processor_id()=0, Point=(x,y,z)=( 2, 4, 20) DoFs=(0/0/522)
upper constrained node info: Node id()=1045, processor_id()=0, Point=(x,y,z)=( 1, 4, 20) DoFs=(0/0/520)
upper constrained node info: Node id()=1042, processor_id()=0, Point=(x,y,z)=( 2, 3, 20) DoFs=(0/0/517)
upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)
upper constrained node info: Node id()=1048, processor_id()=0, Point=(x,y,z)=( 3, 4, 20) DoFs=(0/0/523)
upper constrained node info: Node id()=1047, processor_id()=0, Point=(x,y,z)=( 2, 4, 20) DoFs=(0/0/522)
upper constrained node info: Node id()=1043, processor_id()=0, Point=(x,y,z)=( 3, 3, 20) DoFs=(0/0/518)
upper constrained node info: Node id()=1044, processor_id()=0, Point=(x,y,z)=( 4, 3, 20) DoFs=(0/0/519)
upper constrained node info: Node id()=1049, processor_id()=0, Point=(x,y,z)=( 4, 4, 20) DoFs=(0/0/524)
upper constrained node info: Node id()=1048, processor_id()=0, Point=(x,y,z)=( 3, 4, 20) DoFs=(0/0/523)
From: David Knezevic <david.knezevic@ak...> - 2016-10-13 16:12:42

From: David Knezevic <david.knezevic@ak...> - 2016-10-13 01:07:18

On Wed, Oct 12, 2016 at 5:04 PM, Roy Stogner <roystgnr@...> wrote:

> On Sun, 9 Oct 2016, David Knezevic wrote:
>
>> I've attached a modified version of misc_ex9 that attaches
>> constraints on one side of the "crack" and checks if the dof
>> constraints are present during assembly. This passes in serial but
>> fails in parallel because constraints are not communicated on the
>> GhostingFunctor-coupled dofs.
>
> I had to make a couple fixes to the test: switching mesh_1 and mesh_2
> to SerialMesh, and using
>
>   dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_node_index).dof_number(0,0,0);
>
> to handle the case where node id isn't node dof id.
>
>> The extra constraints I added mean that the problem doesn't make
>> physical sense anymore unfortunately, but at least it tests for the
>> constraint issue. Roy: I'm not sure if this is appropriate for a
>> unit test, but it should at least be helpful for when you want to
>> check your implementation.
>
> It was, thanks! Here's hoping #1120 fixes the real problem too.
>
> The modified ex9 is too big for a unit test, and too contrived for an
> example, and I can't think of any easy way to fix that while
> maintaining the same level of test coverage. But if you can come up
> with anything that doesn't have both those problems I'd really love to
> get this case into continuous integration.
>
> If you can't come up with anything either... I suppose I could combine
> an extra-ghost-layers test case with a Dirichlet boundary and test a
> wide stencil? That should hit the same code paths. Plus, it's about
> time libMesh expanded into the cutting edge world of finite difference
> methods.

I just tried my real problem with your PR and it's still not working, unfortunately. I'll have to look into that in more detail. I'll get back to you when I have more info.

David
From: Roy Stogner <roystgnr@ic...> - 2016-10-12 21:04:17

On Sun, 9 Oct 2016, David Knezevic wrote:

> I've attached a modified version of misc_ex9 that attaches
> constraints on one side of the "crack" and checks if the dof
> constraints are present during assembly. This passes in serial but
> fails in parallel because constraints are not communicated on the
> GhostingFunctor-coupled dofs.

I had to make a couple fixes to the test: switching mesh_1 and mesh_2 to SerialMesh, and using

  dof_id_type neighbor_node_id = neighbor->node_ref(neighbor_node_index).dof_number(0,0,0);

to handle the case where node id isn't node dof id.

> The extra constraints I added mean that the problem doesn't make
> physical sense anymore unfortunately, but at least it tests for the
> constraint issue. Roy: I'm not sure if this is appropriate for a
> unit test, but it should at least be helpful for when you want to
> check your implementation.

It was, thanks! Here's hoping #1120 fixes the real problem too.

The modified ex9 is too big for a unit test, and too contrived for an example, and I can't think of any easy way to fix that while maintaining the same level of test coverage. But if you can come up with anything that doesn't have both those problems I'd really love to get this case into continuous integration.

If you can't come up with anything either... I suppose I could combine an extra-ghost-layers test case with a Dirichlet boundary and test a wide stencil? That should hit the same code paths. Plus, it's about time libMesh expanded into the cutting edge world of finite difference methods.

Roy
From: David Knezevic <david.knezevic@ak...> - 2016-10-11 20:55:56

On Tue, Oct 11, 2016 at 4:10 PM, Roy Stogner <roystgnr@...> wrote:

> On Sun, 9 Oct 2016, David Knezevic wrote:
>
>> On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roystgnr@...> wrote:
>>
>> I think the proper fix is to call the coupling functors in
>> scatter_constraints(), then query the processors who own the elements
>> which are to be coupled for any constraints on those elements' dofs.
>
> Just about done.
>
> Except now I'm paranoid, because I ran across my years-old comment,
>
>   // Next we need to push constraints to processors which don't own
>   // the constrained dof, don't own the constraining dof, but own an
>   // element supporting the constraining dof.
>   ...
>   // Getting distributed adaptive sparsity patterns right is hard.
>
> And I don't recall *why* we had that need! When we're doing an
> enforce_constraints_exactly() it's only important to have all our own
> dofs' constraints and their dependencies. When we're constraining an
> element stiffness matrix we need to build a constraint matrix C, but
> that constraint matrix only requires us to know about constrained dofs
> on the local element, not about constraining dofs on the local
> element.
>
> So why the heck did I think processors needed to know about everywhere
> a locally-supported dof constrain*ing*? If I can't figure out what
> the reason was then I can't decide whether or not it will be
> applicable to coupled-ghosted elements too.

Hmm, I agree with you, I can't see why the constraining dofs are required... maybe we can proceed on the assumption that they aren't required, and do some tests to see if we hit a case where that assumption is wrong?

David
From: Roy Stogner <roystgnr@ic...> - 2016-10-11 20:11:04

On Sun, 9 Oct 2016, David Knezevic wrote:

> On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roystgnr@...> wrote:
>
> I think the proper fix is to call the coupling functors in
> scatter_constraints(), then query the processors who own the elements
> which are to be coupled for any constraints on those elements' dofs.

Just about done.

Except now I'm paranoid, because I ran across my years-old comment,

  // Next we need to push constraints to processors which don't own
  // the constrained dof, don't own the constraining dof, but own an
  // element supporting the constraining dof.
  ...
  // Getting distributed adaptive sparsity patterns right is hard.

And I don't recall *why* we had that need! When we're doing an enforce_constraints_exactly() it's only important to have all our own dofs' constraints and their dependencies. When we're constraining an element stiffness matrix we need to build a constraint matrix C, but that constraint matrix only requires us to know about constrained dofs on the local element, not about constraining dofs on the local element.

So why the heck did I think processors needed to know about everywhere a locally-supported dof constrain*ing*? If I can't figure out what the reason was then I can't decide whether or not it will be applicable to coupled-ghosted elements too.

Roy
From: David Knezevic <david.knezevic@ak...> - 2016-10-09 20:17:39

From: David Knezevic <david.knezevic@ak...> - 2016-10-07 02:44:45

On Thu, Oct 6, 2016 at 10:34 PM, Roy Stogner <roystgnr@...> wrote:

> On Thu, 6 Oct 2016, David Knezevic wrote:
>
>> I'm using GhostingFunctor for a contact solve, in which I consider a 1/4
>> domain with partial Dirichlet boundary conditions that impose a symmetry
>> condition (i.e. displacement normal to the symmetry boundary is clamped
>> to zero, and tangential displacement is unconstrained). This means that I
>> have Dirichlet constraints that affect the dofs on the contact surface.
>> What I find is that the solve works fine in serial, but in parallel the
>> nonlinear convergence fails, presumably because of an incorrect Jacobian.
>> I have actually run into this exact issue before (when I was augmenting
>> the sparsity pattern "manually", prior to GhostingFunctor) and I found
>> that the issue was that the dof constraints on the contact surface were
>> not being communicated in parallel, which caused the incorrect Jacobian
>> in parallel. I fixed it by adding artificial Edge2 elements into the mesh
>> (like in systems_of_equations_ex8) to ensure that the dof constraints
>> are communicated properly in parallel.
>>
>> So, my question is, is there a way to achieve the necessary dof
>> constraint communication with the new GhostingFunctor API? I had hoped
>> that using "add_algebraic_ghosting_functor" would do the job, but it
>> apparently doesn't.
>
> Hmm... If you only needed algebraic ghosting, then
> add_algebraic_ghosting_functor should have been sufficient --
> processors won't know about all their ghosted dofs' constraints, but
> the ghosted dofs will get properly added to the send_list, and
> enforce_constraints_exactly() will ensure that the dofs, once
> constrained on the processors which own them, will have their
> values updated on all the processors which ghost them.
>
> But you need to augment the sparsity pattern too, so you should be
> using add_coupling_functor instead... and in *that* case, we're
> broken, aren't we? You build a Jacobian connecting the remotely
> coupled dofs, and you try to hit it with constrain_element_foo() or
> heterogeneously_constrain_element_foo(), but the processor isn't aware
> of all the ghosted dofs' constraints, so the element constraint matrix
> is wrong and so is your final answer.

Yep, that's exactly my understanding of the issue.

> I think the proper fix is to call the coupling functors in
> scatter_constraints(), then query the processors who own the elements
> which are to be coupled for any constraints on those elements' dofs.
> I can take a crack at that tomorrow or Monday. Any chance you could
> set up a unit test (or even just stuff an assertion into the misc ex9
> example?) that checks for the problem?

That'd be great, thanks! I'll be happy to try it out once it's ready. I'll also look into making a test case that can be added to libMesh.

Thanks,
David
From: Roy Stogner <roystgnr@ic...> - 2016-10-07 02:34:42

On Thu, 6 Oct 2016, David Knezevic wrote:

> I'm using GhostingFunctor for a contact solve, in which I consider a 1/4
> domain with partial Dirichlet boundary conditions that impose a symmetry
> condition (i.e. displacement normal to the symmetry boundary is clamped
> to zero, and tangential displacement is unconstrained). This means that I
> have Dirichlet constraints that affect the dofs on the contact surface.
> What I find is that the solve works fine in serial, but in parallel the
> nonlinear convergence fails, presumably because of an incorrect Jacobian.
> I have actually run into this exact issue before (when I was augmenting
> the sparsity pattern "manually", prior to GhostingFunctor) and I found
> that the issue was that the dof constraints on the contact surface were
> not being communicated in parallel, which caused the incorrect Jacobian
> in parallel. I fixed it by adding artificial Edge2 elements into the mesh
> (like in systems_of_equations_ex8) to ensure that the dof constraints
> are communicated properly in parallel.
>
> So, my question is, is there a way to achieve the necessary dof constraint
> communication with the new GhostingFunctor API? I had hoped that using
> "add_algebraic_ghosting_functor" would do the job, but it apparently
> doesn't.

Hmm... If you only needed algebraic ghosting, then add_algebraic_ghosting_functor should have been sufficient -- processors won't know about all their ghosted dofs' constraints, but the ghosted dofs will get properly added to the send_list, and enforce_constraints_exactly() will ensure that the dofs, once constrained on the processors which own them, will have their values updated on all the processors which ghost them.

But you need to augment the sparsity pattern too, so you should be using add_coupling_functor instead... and in *that* case, we're broken, aren't we? You build a Jacobian connecting the remotely coupled dofs, and you try to hit it with constrain_element_foo() or heterogeneously_constrain_element_foo(), but the processor isn't aware of all the ghosted dofs' constraints, so the element constraint matrix is wrong and so is your final answer.

I think the proper fix is to call the coupling functors in scatter_constraints(), then query the processors who own the elements which are to be coupled for any constraints on those elements' dofs. I can take a crack at that tomorrow or Monday. Any chance you could set up a unit test (or even just stuff an assertion into the misc ex9 example?) that checks for the problem?

Roy
From: David Knezevic <david.knezevic@ak...> - 2016-10-07 01:29:56

I'm using GhostingFunctor for a contact solve, in which I consider a 1/4 domain with partial Dirichlet boundary conditions that impose a symmetry condition (i.e. displacement normal to the symmetry boundary is clamped to zero, and tangential displacement is unconstrained). This means that I have Dirichlet constraints that affect the dofs on the contact surface.

What I find is that the solve works fine in serial, but in parallel the nonlinear convergence fails, presumably because of an incorrect Jacobian. I have actually run into this exact issue before (when I was augmenting the sparsity pattern "manually", prior to GhostingFunctor) and I found that the issue was that the dof constraints on the contact surface were not being communicated in parallel, which caused the incorrect Jacobian in parallel. I fixed it by adding artificial Edge2 elements into the mesh (like in systems_of_equations_ex8) to ensure that the dof constraints are communicated properly in parallel.

So, my question is, is there a way to achieve the necessary dof constraint communication with the new GhostingFunctor API? I had hoped that using "add_algebraic_ghosting_functor" would do the job, but it apparently doesn't.

Thanks,
David
From: John Peterson <jwpeterson@gm...>  20161005 19:25:28

On Wed, Oct 5, 2016 at 12:11 PM, Boris Boutkov <borisbou@...> wrote:

> Hello all,
>
> I was wondering what algorithm or method libMesh uses to renumber
> nodes and elements, particularly after uniform mesh refinements or
> after equation_system reinits.
>
> I understand that it's designed to help create contiguous blocks on
> processors, but was wondering if there are any more details you can
> provide as to the details of the implementation, such as the specific
> method used. For context, I'm investigating how much the node
> ordering can affect Multigrid convergence rates when comparing
> different node numbering algorithms.

I don't think it is necessarily designed to create contiguous blocks on processors, although it might do that for Cartesian grids? If you look at ReplicatedMesh::renumber_nodes_and_elements(), you can see the algorithm that's used for ReplicatedMesh.

--
John
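For intuition, a contiguous renumbering pass of the general kind being discussed can be sketched as follows. This is a simplified stand-in, not the actual ReplicatedMesh::renumber_nodes_and_elements code; the function name and int ids are invented for illustration:

```cpp
#include <map>
#include <vector>

// Simplified sketch (assumed behavior, not libMesh's implementation):
// visit objects in some traversal order and hand out new ids
// 0, 1, 2, ... in the order each object is first seen.
std::map<int, int> renumber_contiguously(const std::vector<int> & traversal)
{
  std::map<int, int> old_to_new;
  int next_id = 0;
  for (int old_id : traversal)
    if (old_to_new.find(old_id) == old_to_new.end())
      old_to_new[old_id] = next_id++; // next free contiguous id
  return old_to_new;
}
```

Different traversal orders (element-by-element versus node creation order, say) then yield different final numberings, which is exactly the knob Boris is asking about for multigrid convergence studies.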
From: Boris Boutkov <borisbou@bu...>  20161005 18:38:34

Hello all,

I was wondering what algorithm or method libMesh uses to renumber nodes and elements, particularly after uniform mesh refinements or after equation_system reinits.

I understand that it's designed to help create contiguous blocks on processors, but was wondering if there are any more details you can provide as to the details of the implementation, such as the specific method used. For context, I'm investigating how much the node ordering can affect Multigrid convergence rates when comparing different node numbering algorithms.

Thanks for your time,
Boris Boutkov
From: Boris Boutkov <borisbou@bu...>  20160926 18:45:27

Hello again all,

I've been digging at this problem some more and I have some follow-up questions / observations. I've noticed that the constraint storage is split up into a number of components, and would appreciate some basic info as to what these pieces represent.

The assert I am/was tripping was with the DofConstraintRow being empty, a row which is inserted into the _dof_constraints map, which pairs each row with a dof id. Now what I notice is that when I try to use print_dof_constraints(), the values actually being printed are obtained through the _primal_constraint_values map and are simply paired with the _dof_constraint ids. This begs the question: what is actually being stored (or should be stored) in an individual DofConstraintRow, or is this some kind of intermediate object that at some point gets moved to other constraint maps and then cleared?

I've also noticed a _node_constraints map, as well as a _stashed_constraints, both of which appear to be unused, though judging by their names could potentially apply to my situation. What are the conceptual differences between the _dof, _node, _primal, and _stashed maps?

Finally, in terms of the overall constraint implementation process, what does the R^T * K * C constraint matrix that's mentioned in the comments of constrain_element_matrix refer to? Is it related to the constraint formulation of Mark Shephard in his "Linear multipoint constraints applied via transformation as part of a direct stiffness assembly process" paper, or is there another resource which more closely mimics the overall philosophy libMesh is adhering to when building up this constraint matrix?

Thanks again for your time,
Boris

On Thu, Aug 25, 2016 at 10:09 PM, Roy Stogner <roystgnr@...> wrote:

> On Thu, 25 Aug 2016, Paul T. Bauman wrote:
>
> > On Thu, Aug 25, 2016 at 5:45 PM, John Peterson <jwpeterson@...> wrote:
> >
> > > On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <borisbou@...> wrote:
> > >
> > > > Hm -- uniform refinement, yes. Are the boundary nodes on the
> > > > outside edges of refined elements not considered to be hanging
> > > > nodes? Apologies if I'm misusing the term.
> > >
> > > Hanging nodes to me means "must be constrained to be compatible
> > > with a coarser neighbor", which isn't possible on the boundary.
> > >
> > > I'm not sure what's causing the assert for you, we'll probably
> > > need Roy to comment on the different ways of constraining element
> > > matrices, but again, for uniform refinement I would not expect
> > > there to be *any* constraints.
> >
> > For the boundary nodes, any DirichletBoundary constraints would be
> > handled through that code path, though, right?
>
> That's correct.
> ---
> Roy
>
> _______________________________________________
> Libmesh-devel mailing list
> Libmesh-devel@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-devel
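The R^T * K * C transformation asked about above can be illustrated with a tiny self-contained sketch. This is not libMesh's actual implementation; the function name and the 3-dof setup are invented for illustration. The idea: each constrained dof contributes one row of a square constraint matrix C (its DofConstraintRow of coefficients), unconstrained dofs contribute identity rows, and the constrained element matrix is the congruence transform of the raw one:

```cpp
#include <cstddef>

// Hedged sketch (not libMesh code): apply linear constraints to a 3x3
// element stiffness matrix via Kc = C^T * K * C. Rows of C for
// unconstrained dofs are identity rows; the row for a constrained dof
// holds its constraint coefficients, e.g. u2 = 0.5*u0 + 0.5*u1 gives
// the row {0.5, 0.5, 0}.
void constrain_element_matrix_sketch(const double K[3][3],
                                     const double C[3][3],
                                     double Kc[3][3])
{
  for (std::size_t i = 0; i < 3; ++i)
    for (std::size_t j = 0; j < 3; ++j)
      {
        Kc[i][j] = 0.0;
        for (std::size_t k = 0; k < 3; ++k)
          for (std::size_t l = 0; l < 3; ++l)
            Kc[i][j] += C[k][i] * K[k][l] * C[l][j]; // (C^T K C)_ij
      }
}
```

Note that the transform leaves the constrained dof's row and column zero; real assembly additionally re-inserts an equation for the constrained dof (which is where the symmetric versus asymmetric row treatment mentioned in this thread comes in).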
From: Kong (NonUS), Fande <fande.kong@in...>  20160914 21:23:43

Thanks a lot, John.

Fande,

On Wed, Sep 14, 2016 at 3:20 PM, John Peterson <jwpeterson@...> wrote:

> On Wed, Sep 14, 2016 at 2:39 PM, Kong (Non-US), Fande <fande.kong@...> wrote:
>
> > Hi Developers, Users,
> >
> > I am trying to use a "typedef" to wrap EigenSystem into TransientSystem:
> >
> >     typedef TransientSystem<EigenSystem> TransientEigenSystem;
> >
> > I will eventually use this in MOOSE. I went to
> > examples/eigenproblems/eigenproblems_ex1 to add one line to test this:
>
> I think we got this figured out, we were just missing an explicit
> instantiation in transient_system.C...
>
> --
> John
From: John Peterson <jwpeterson@gm...>  20160914 21:20:41

On Wed, Sep 14, 2016 at 2:39 PM, Kong (Non-US), Fande <fande.kong@...> wrote:

> Hi Developers, Users,
>
> I am trying to use a "typedef" to wrap EigenSystem into TransientSystem:
>
>     typedef TransientSystem<EigenSystem> TransientEigenSystem;
>
> I will eventually use this in MOOSE. I went to
> examples/eigenproblems/eigenproblems_ex1 to add one line to test this:

I think we got this figured out, we were just missing an explicit instantiation in transient_system.C...

--
John
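The "missing explicit instantiation" fix can be sketched in a self-contained way (the class names here are invented stand-ins, not the real libMesh types). When a template's member functions are defined out of line in a separate source file, as TransientSystem's are in transient_system.C, the library must explicitly instantiate the template for every base class users will request; otherwise callers hit exactly the undefined-symbol link error reported below:

```cpp
#include <string>

// Sketch of a class template whose members are defined out of line.
// In libMesh the definitions live in transient_system.C, so other
// translation units only see the declaration.
template <typename Base>
class TransientSketch : public Base
{
public:
  std::string kind() const; // defined out of line, below
};

template <typename Base>
std::string TransientSketch<Base>::kind() const
{
  return "transient " + Base::name();
}

// Stand-in for EigenSystem.
class EigenSketch
{
public:
  static std::string name() { return "eigen"; }
};

// The one-line fix: an explicit instantiation, which forces the
// out-of-line member definitions to be compiled for this Base. (In a
// single file the compiler would instantiate implicitly anyway; the
// explicit instantiation matters when the definitions are in their
// own .C file, as in this thread.)
template class TransientSketch<EigenSketch>;
```

This mirrors the existing explicit instantiations for TransientLinearImplicitSystem and friends at the bottom of transient_system.C.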
From: Kong (NonUS), Fande <fande.kong@in...>  20160914 21:02:24

Hi Developers, Users,

I am trying to use a "typedef" to wrap EigenSystem into TransientSystem:

    typedef TransientSystem<EigenSystem> TransientEigenSystem;

I will eventually use this in MOOSE. I went to examples/eigenproblems/eigenproblems_ex1 to add one line to test this:

    TransientEigenSystem & trst_eigen_system3 =
      equation_systems.add_system<TransientEigenSystem> ("Transient_Eigen_System");

There is a link error:

  CXX      example_opt-eigenproblems_ex1.o
  CXXLD    example-opt
Undefined symbols for architecture x86_64:
  "libMesh::TransientSystem<libMesh::EigenSystem>::TransientSystem(libMesh::EquationSystems&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int)", referenced from:
      libMesh::TransientSystem<libMesh::EigenSystem>& libMesh::EquationSystems::add_system<libMesh::TransientSystem<libMesh::EigenSystem> >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in example_opt-eigenproblems_ex1.o
ld: symbol(s) not found for architecture x86_64
clang-3.7: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [example-opt] Error 1

All files I have changed are:

diff --git a/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C b/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C
index 7b4a909..eb8a597 100644
--- a/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C
+++ b/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C
@@ -47,6 +47,7 @@
 #include "libmesh/sparse_matrix.h"
 #include "libmesh/numeric_vector.h"
 #include "libmesh/dof_map.h"
+#include "libmesh/transient_system.h"

 // Bring in everything from the libMesh namespace
 using namespace libMesh;
@@ -113,6 +114,10 @@ int main (int argc, char ** argv)
   EigenSystem & eigen_system =
     equation_systems.add_system<EigenSystem> ("Eigensystem");

+  // Fails
+  TransientEigenSystem & trst_eigen_system3 =
+    equation_systems.add_system<TransientEigenSystem> ("Transient_Eigen_System");
+
   // Declare the system variables.
   // Adds the variable "p" to "Eigensystem". "p"
   // will be approximated using second-order approximation.
diff --git a/include/systems/transient_system.h b/include/systems/transient_system.h
index 4bc529d..1ffe4c0 100644
--- a/include/systems/transient_system.h
+++ b/include/systems/transient_system.h
@@ -22,6 +22,7 @@
 // Local Includes
 #include "libmesh/system.h"
+#include "libmesh/libmesh_config.h"

 // C++ includes
@@ -33,6 +34,7 @@ namespace libMesh
 class LinearImplicitSystem;
 class NonlinearImplicitSystem;
 class ExplicitSystem;
+// class EigenSystem;

 /**
  * This class provides a specific system class.  It aims
@@ -146,7 +148,9 @@
 typedef TransientSystem<LinearImplicitSystem> TransientLinearImplicitSystem;
 typedef TransientSystem<NonlinearImplicitSystem> TransientNonlinearImplicitSystem;
 typedef TransientSystem<ExplicitSystem> TransientExplicitSystem;
 typedef TransientSystem<System> TransientBaseSystem;
+//#if LIBMESH_HAVE_SLEPC
+typedef TransientSystem<EigenSystem> TransientEigenSystem;
+//#endif
 //
diff --git a/src/systems/equation_systems.C b/src/systems/equation_systems.C
index ee925a6..1bbea21 100644
--- a/src/systems/equation_systems.C
+++ b/src/systems/equation_systems.C
@@ -408,6 +408,8 @@ System & EquationSystems::add_system (const std::string & sys_type,
   // build an eigen system
   else if (sys_type == "Eigen")
     this->add_system<EigenSystem> (name);
+  else if (sys_type == "TransientEigenSystem")
+    this->add_system<TransientEigenSystem> (name);
 #endif

 #if defined(LIBMESH_USE_COMPLEX_NUMBERS)

Thanks.

Fande Kong
From: Roy Stogner <roystgnr@ic...>  20160826 03:07:10

On Thu, 25 Aug 2016, Paul T. Bauman wrote:

> On Thu, Aug 25, 2016 at 5:45 PM, John Peterson <jwpeterson@...> wrote:
>
> > On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <borisbou@...> wrote:
> >
> > > Hm -- uniform refinement, yes. Are the boundary nodes on the
> > > outside edges of refined elements not considered to be hanging
> > > nodes? Apologies if I'm misusing the term.
> >
> > Hanging nodes to me means "must be constrained to be compatible
> > with a coarser neighbor", which isn't possible on the boundary.
> >
> > I'm not sure what's causing the assert for you, we'll probably
> > need Roy to comment on the different ways of constraining element
> > matrices, but again, for uniform refinement I would not expect
> > there to be *any* constraints.
>
> For the boundary nodes, any DirichletBoundary constraints would be
> handled through that code path, though, right?

That's correct.
---
Roy
From: Paul T. Bauman <ptbauman@gm...>  20160826 00:06:56

Crap, didn't reply-all.

On Thu, Aug 25, 2016 at 5:45 PM, John Peterson <jwpeterson@...> wrote:

> On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <borisbou@...> wrote:
>
> > Hm -- uniform refinement, yes. Are the boundary nodes on the outside
> > edges of refined elements not considered to be hanging nodes?
> > Apologies if I'm misusing the term.
>
> Hanging nodes to me means "must be constrained to be compatible with
> a coarser neighbor", which isn't possible on the boundary.
>
> I'm not sure what's causing the assert for you, we'll probably need
> Roy to comment on the different ways of constraining element
> matrices, but again, for uniform refinement I would not expect there
> to be *any* constraints.

For the boundary nodes, any DirichletBoundary constraints would be handled through that code path, though, right?
From: John Peterson <jwpeterson@gm...>  20160825 21:46:05

On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <borisbou@...> wrote:

> Hm -- uniform refinement, yes. Are the boundary nodes on the outside
> edges of refined elements not considered to be hanging nodes?
> Apologies if I'm misusing the term.

Hanging nodes to me means "must be constrained to be compatible with a coarser neighbor", which isn't possible on the boundary.

I'm not sure what's causing the assert for you, we'll probably need Roy to comment on the different ways of constraining element matrices, but again, for uniform refinement I would not expect there to be *any* constraints.

--
John
From: Boris Boutkov <borisbou@bu...>  20160825 21:39:06

Hm -- uniform refinement, yes. Are the boundary nodes on the outside edges of refined elements not considered to be hanging nodes? Apologies if I'm misusing the term.

On Thu, Aug 25, 2016 at 5:28 PM, John Peterson <jwpeterson@...> wrote:

> On Thu, Aug 25, 2016 at 11:57 AM, Boris Boutkov <borisbou@...> wrote:
>
> > Hello all, (apologies in advance for the double post, looks like I
> > wasn't subscribed properly earlier),
> >
> > I've been working on some libMesh/GRINS/PETSc Multigrid code and
> > have been getting some unexpected convergence results that don't
> > quite line up with theory.
> >
> > Currently I am uniformly refining a simple square grid with
> > Dirichlet BCs and manually element-wise constructing L2 projected
> > interpolation matrices between mesh levels while constraining each
> > projection using the four-argument version of
> > constrain_element_matrix. As a naive first attempt I turned
> > asymmetric_constrain_rows to false, as it helped me get past the
> > `!constraint_row.empty()' assertion which I didn't quite understand.
> >
> > Having now seen some of the convergence results I think maybe I was
> > too hasty in turning this asymmetric switch off as my iterative
> > solvers are having some difficulties on some mesh levels while
> > being OK on others; my hunch is this might be related to
> > projections of hanging boundary nodes I introduce while refining --
> > but this leaves me stuck on the above assert.
>
> You said you were doing uniform refinement? I'm confused by "hanging
> boundary nodes".
>
> --
> John
From: John Peterson <jwpeterson@gm...>  20160825 21:29:11

On Thu, Aug 25, 2016 at 11:57 AM, Boris Boutkov <borisbou@...> wrote:

> Hello all, (apologies in advance for the double post, looks like I
> wasn't subscribed properly earlier),
>
> I've been working on some libMesh/GRINS/PETSc Multigrid code and have
> been getting some unexpected convergence results that don't quite
> line up with theory.
>
> Currently I am uniformly refining a simple square grid with Dirichlet
> BCs and manually element-wise constructing L2 projected interpolation
> matrices between mesh levels while constraining each projection using
> the four-argument version of constrain_element_matrix. As a naive
> first attempt I turned asymmetric_constrain_rows to false, as it
> helped me get past the `!constraint_row.empty()' assertion which I
> didn't quite understand.
>
> Having now seen some of the convergence results I think maybe I was
> too hasty in turning this asymmetric switch off as my iterative
> solvers are having some difficulties on some mesh levels while being
> OK on others; my hunch is this might be related to projections of
> hanging boundary nodes I introduce while refining -- but this leaves
> me stuck on the above assert.

You said you were doing uniform refinement? I'm confused by "hanging boundary nodes".

--
John
From: Boris Boutkov <borisbou@bu...>  20160825 17:58:09

Hello all, (apologies in advance for the double post, looks like I wasn't subscribed properly earlier),

I've been working on some libMesh/GRINS/PETSc Multigrid code and have been getting some unexpected convergence results that don't quite line up with theory.

Currently I am uniformly refining a simple square grid with Dirichlet BCs and manually element-wise constructing L2 projected interpolation matrices between mesh levels, while constraining each projection using the four-argument version of constrain_element_matrix. As a naive first attempt I turned asymmetric_constrain_rows to false, as it helped me get past the `!constraint_row.empty()' assertion, which I didn't quite understand.

Having now seen some of the convergence results, I think maybe I was too hasty in turning this asymmetric switch off, as my iterative solvers are having some difficulties on some mesh levels while being OK on others; my hunch is this might be related to projections of hanging boundary nodes I introduce while refining -- but this leaves me stuck on the above assert.

Could someone please provide some kind of introductory reference on how this constraint process works as a whole, or on what might be causing me to trip this assert? I notice that this assert is disabled in the 3-argument version, and I guess I am just wondering what is the best way to understand why the DofConstraintRow might be empty in this situation.

Thanks for your time,
Boris Boutkov
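As a concrete (purely illustrative, 1D) instance of the inter-grid operators discussed above: under uniform refinement of a 1D linear-element grid, each coarse node is carried over and each new midpoint node interpolates its two coarse neighbors, which fixes the prolongation matrix completely. This is a hypothetical stand-in for the L2-projected interpolation matrices in the message, not GRINS or libMesh code:

```cpp
#include <vector>

// Hypothetical 1D sketch: prolongation matrix P (nf x nc rows/cols,
// nf = 2*nc - 1) for linear elements under one uniform refinement.
// Fine node 2i coincides with coarse node i; fine node 2i+1 is the new
// midpoint, averaging coarse nodes i and i+1.
std::vector<std::vector<double>> prolongation_1d(int nc)
{
  const int nf = 2 * nc - 1;
  std::vector<std::vector<double>> P(nf, std::vector<double>(nc, 0.0));
  for (int i = 0; i < nc; ++i)
    {
      P[2 * i][i] = 1.0;              // coarse node carried over
      if (i + 1 < nc)
        {
          P[2 * i + 1][i]     = 0.5;  // midpoint averages its
          P[2 * i + 1][i + 1] = 0.5;  // two coarse neighbors
        }
    }
  return P;
}
```

Because linear functions are reproduced exactly by this P, one quick sanity check on any hand-built interpolation matrix is to apply it to nodal values of a linear function and confirm the fine-grid values are exact on every level; a failure on only some levels would point at the constraint handling rather than the interpolation stencil.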