From: Andrew Davis <andrew.add@gm...>  2013-12-10 15:25:47

Hi all, I'm observing some strange behavior using the System::point_value function. I have a system of two variables (say "u" and "v"), each added using system.add_variable(name, libMeshEnums::Order). If they have the same libMeshEnums::Order, there is no problem. However, if they have different orders, I am noticing that system.point_value(varNum, point) does not always work properly.

If the first one is lower order:
  system.point_value(0, point) is correct
  system.point_value(1, point) is wrong

If the first one is higher order:
  system.point_value(0, point) is correct
  system.point_value(1, point) results in a seg fault

Does anyone know what I am doing wrong? Thanks!

Andy
From: Dmitry Karpeyev <karpeev@mc...>  2013-12-09 22:03:54

Please include -snes_view so that we can see exactly what solver options are being used.

Dmitry.

On Mon, Dec 9, 2013 at 12:48 PM, John Peterson <jwpeterson@...> wrote:
> On Mon, Dec 9, 2013 at 11:31 AM, Lorenzo Zanon <zanon@...> wrote:
>> I get faster convergence with e.g. -snes_linesearch_type basic -ksp_rtol 1e-4:
>>
>> NL step 0, residual_2 = 3.464102e-05
>> NL step 1, residual_2 = 2.417540e-04
>> NL step 2, residual_2 = 6.174706e-08
>> NL step 3, residual_2 = 3.577768e-12
>> NL step 4, residual_2 = 7.687278e-17
>> StVen system solved at nonlinear iteration 4, final nonlinear residual norm: 7.687278e-17
>
> Your -ksp_rtol might be hurting your convergence a bit near the root...
> Have you tried -snes_ksp_ew? There are several EW-specific options
> within you can play with as well:
>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetFromOptions.html
>
> --
> John
>
> _______________________________________________
> Libmesh-users mailing list
> Libmesh-users@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users

--
Dmitry Karpeev
Mathematics and Computer Science
Argonne National Laboratory
Argonne, Illinois, USA
and Computation Institute
University of Chicago
5735 S. Ellis Avenue
Chicago, IL 60637
Phone: 630-252-1229  Fax: 630-252-5986
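As Dmitry suggests, the quickest diagnostic is to rerun with the solver configuration echoed. A hypothetical invocation (the binary name example-opt is taken from Lorenzo's earlier output; the flags are standard PETSc options):

```sh
# -snes_view prints the exact SNES/KSP/PC configuration actually in use;
# -snes_monitor and -ksp_monitor print per-iteration nonlinear/linear residuals.
./example-opt -snes_view -snes_monitor -ksp_monitor \
              -snes_linesearch_type basic -ksp_rtol 1e-4
```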
From: John Peterson <jwpeterson@gm...>  2013-12-09 18:49:21

On Mon, Dec 9, 2013 at 11:31 AM, Lorenzo Zanon <zanon@...> wrote:
>> I get faster convergence with e.g. -snes_linesearch_type basic -ksp_rtol 1e-4:
>>
>> NL step 0, residual_2 = 3.464102e-05
>> NL step 1, residual_2 = 2.417540e-04
>> NL step 2, residual_2 = 6.174706e-08
>> NL step 3, residual_2 = 3.577768e-12
>> NL step 4, residual_2 = 7.687278e-17
>> StVen system solved at nonlinear iteration 4, final nonlinear residual norm: 7.687278e-17

Your -ksp_rtol might be hurting your convergence a bit near the root... Have you tried -snes_ksp_ew? There are several EW-specific options within you can play with as well:

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetFromOptions.html

--
John
From: Derek Gaston <friedmud@gm...>  2013-12-09 18:34:52

Good to hear Lorenzo! How many linear iterations are you taking? (use -ksp_monitor to see) You might be able to get better performance using a better preconditioner... I still highly recommend trying Hypre...

Derek

On Mon, Dec 9, 2013 at 11:31 AM, Lorenzo Zanon <zanon@...> wrote:
> Hello,
>
> I'm sorry it took so long.
>
> There were actually a couple of mistakes in my jacobian and residual.
>
> Now for the small problem without specifying any options I get:
>
> Running ./example-opt
>
> Mesh Information:
>   mesh_dimension()=3
>   spatial_dimension()=3
>   n_nodes()=66
>   n_local_nodes()=66
>   n_elem()=20
>   n_local_elem()=20
>   n_active_elem()=20
>   n_subdomains()=1
>   n_partitions()=1
>   n_processors()=1
>   n_threads()=1
>   processor_id()=0
>
> EquationSystems
>   n_systems()=1
>   System #0, "StVen"
>     Type "NonlinearImplicit"
>     Variables={ "u" "v" "z" }
>     Finite Element Types="LAGRANGE"
>     Approximation Orders="FIRST"
>     n_dofs()=198
>     n_local_dofs()=198
>     n_constrained_dofs()=18
>     n_local_constrained_dofs()=18
>     n_vectors()=1
>     n_matrices()=1
>     DofMap Sparsity
>       Average On-Processor Bandwidth <= 39.4545
>       Average Off-Processor Bandwidth <= 0
>       Maximum On-Processor Bandwidth <= 54
>       Maximum Off-Processor Bandwidth <= 0
>     DofMap Constraints
>       Number of DoF Constraints = 18
>       Average DoF Constraint Length = 0
>
> NL step  0, residual_2 = 3.464102e-05
> NL step  1, residual_2 = 3.126867e-05
> NL step  2, residual_2 = 2.835742e-05
> NL step  3, residual_2 = 2.579682e-05
> NL step  4, residual_2 = 2.351091e-05
> NL step  5, residual_2 = 2.144758e-05
> NL step  6, residual_2 = 1.957083e-05
> NL step  7, residual_2 = 1.785531e-05
> NL step  8, residual_2 = 1.628262e-05
> NL step  9, residual_2 = 1.463259e-05
> NL step 10, residual_2 = 1.280174e-05
> NL step 11, residual_2 = 1.073276e-05
> NL step 12, residual_2 = 8.355152e-06
> NL step 13, residual_2 = 7.606801e-06
> NL step 14, residual_2 = 9.503420e-11
> NL step 15, residual_2 = 2.354762e-15
> StVen system solved at nonlinear iteration 15, final nonlinear residual norm: 2.354762e-15
>
> I get faster convergence with e.g. -snes_linesearch_type basic -ksp_rtol 1e-4:
>
> NL step 0, residual_2 = 3.464102e-05
> NL step 1, residual_2 = 2.417540e-04
> NL step 2, residual_2 = 6.174706e-08
> NL step 3, residual_2 = 3.577768e-12
> NL step 4, residual_2 = 7.687278e-17
> StVen system solved at nonlinear iteration 4, final nonlinear residual norm: 7.687278e-17
>
> I think it would be fine to use these last options also on a finer mesh then. It finally looks like I don't need JFNK.
>
> Thanks!
> Lorenzo
>
> On Nov 26, 2013, at 8:42 PM, Derek Gaston wrote:
>> On Tue, Nov 26, 2013 at 11:59 AM, Dmitry Karpeyev <karpeev@...> wrote:
>>> LU, naturally, should do better, and we should at least see quicker linear convergence.
>>> If not, it's an indication that your problem is singular.
>>
>> LU should only take 1 (ish) linear iteration.
>>
>> However, I still suspect that the issue here is that his Jacobian is wrong... which accounts for the degraded nonlinear convergence.
>>
>> Lorenzo: Have you double checked your Jacobian statements? Have you checked them against your friend doing FEAP? Your statements should look very similar to theirs...
>>
>> Derek
From: Lorenzo Zanon <zanon@ai...>  2013-12-09 18:32:07

Hello,

I'm sorry it took so long.

There were actually a couple of mistakes in my jacobian and residual.

Now for the small problem without specifying any options I get:

> Running ./example-opt
>
> Mesh Information:
>   mesh_dimension()=3
>   spatial_dimension()=3
>   n_nodes()=66
>   n_local_nodes()=66
>   n_elem()=20
>   n_local_elem()=20
>   n_active_elem()=20
>   n_subdomains()=1
>   n_partitions()=1
>   n_processors()=1
>   n_threads()=1
>   processor_id()=0
>
> EquationSystems
>   n_systems()=1
>   System #0, "StVen"
>     Type "NonlinearImplicit"
>     Variables={ "u" "v" "z" }
>     Finite Element Types="LAGRANGE"
>     Approximation Orders="FIRST"
>     n_dofs()=198
>     n_local_dofs()=198
>     n_constrained_dofs()=18
>     n_local_constrained_dofs()=18
>     n_vectors()=1
>     n_matrices()=1
>     DofMap Sparsity
>       Average On-Processor Bandwidth <= 39.4545
>       Average Off-Processor Bandwidth <= 0
>       Maximum On-Processor Bandwidth <= 54
>       Maximum Off-Processor Bandwidth <= 0
>     DofMap Constraints
>       Number of DoF Constraints = 18
>       Average DoF Constraint Length = 0
>
> NL step  0, residual_2 = 3.464102e-05
> NL step  1, residual_2 = 3.126867e-05
> NL step  2, residual_2 = 2.835742e-05
> NL step  3, residual_2 = 2.579682e-05
> NL step  4, residual_2 = 2.351091e-05
> NL step  5, residual_2 = 2.144758e-05
> NL step  6, residual_2 = 1.957083e-05
> NL step  7, residual_2 = 1.785531e-05
> NL step  8, residual_2 = 1.628262e-05
> NL step  9, residual_2 = 1.463259e-05
> NL step 10, residual_2 = 1.280174e-05
> NL step 11, residual_2 = 1.073276e-05
> NL step 12, residual_2 = 8.355152e-06
> NL step 13, residual_2 = 7.606801e-06
> NL step 14, residual_2 = 9.503420e-11
> NL step 15, residual_2 = 2.354762e-15
> StVen system solved at nonlinear iteration 15, final nonlinear residual norm: 2.354762e-15

I get faster convergence with e.g. -snes_linesearch_type basic -ksp_rtol 1e-4:

> NL step 0, residual_2 = 3.464102e-05
> NL step 1, residual_2 = 2.417540e-04
> NL step 2, residual_2 = 6.174706e-08
> NL step 3, residual_2 = 3.577768e-12
> NL step 4, residual_2 = 7.687278e-17
> StVen system solved at nonlinear iteration 4, final nonlinear residual norm: 7.687278e-17

I think it would be fine to use these last options also on a finer mesh then. It finally looks like I don't need JFNK.

Thanks!
Lorenzo

On Nov 26, 2013, at 8:42 PM, Derek Gaston wrote:
> On Tue, Nov 26, 2013 at 11:59 AM, Dmitry Karpeyev <karpeev@...> wrote:
>> LU, naturally, should do better, and we should at least see quicker linear convergence.
>> If not, it's an indication that your problem is singular.
>
> LU should only take 1 (ish) linear iteration.
>
> However, I still suspect that the issue here is that his Jacobian is wrong... which accounts for the degraded nonlinear convergence.
>
> Lorenzo: Have you double checked your Jacobian statements? Have you checked them against your friend doing FEAP? Your statements should look very similar to theirs...
>
> Derek
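The convergence signature in this thread (slow, roughly linear residual reduction before the Jacobian fix, then convergence in 4 Newton steps after) can be illustrated on a scalar problem. A minimal sketch, not libMesh code: Newton's method on f(x) = x^3 - 2 with the exact derivative versus a deliberately mis-scaled one, which stands in for an inconsistent hand-coded Jacobian.

```cpp
#include <cmath>

// Newton iteration on f(x) = x^3 - 2.  jac_scale = 1.0 uses the exact
// derivative; any other value mimics a buggy Jacobian.  Returns the
// final iterate; *its receives the number of iterations taken.
double newton_cbrt2(double x0, double jac_scale, int max_it, int *its)
{
    double x = x0;
    int k = 0;
    for (; k < max_it; ++k)
    {
        double f = x * x * x - 2.0;
        if (std::fabs(f) < 1e-12)
            break;
        double fp = jac_scale * 3.0 * x * x;  // (possibly wrong) Jacobian
        x -= f / fp;
    }
    *its = k;
    return x;
}
```

With the exact derivative the error is roughly squared each step (quadratic convergence); with a 30% scaling error the iteration still reaches the same root, but the error only shrinks by a fixed factor per step, so far more iterations are needed — the same pattern Lorenzo saw before fixing his jacobian and residual.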
From: Subramanya Sadasiva <potaman@ou...>  2013-12-09 17:28:23

I had attached a picture. The system is not updating itself, so I get these crack-like discontinuities in the solution, with the number of sections equal to the number of processors.

On Dec 9, 2013, at 12:17 PM, Derek Gaston <friedmud@...> wrote:
> How is it failing in parallel? Is it not converging, or something else?
>
> What preconditioner are you using? How well are the linear solves going (use -ksp_monitor)?
>
> Derek
>
> On Mon, Dec 9, 2013 at 9:59 AM, subramanya sadasiva <potaman@...> wrote:
>> Hi,
>> I am trying to run a nonlinear solver (a multiphase Cahn-Hilliard) with 4 variables (3 phases, so two phase and two chemical potential variables). The code seems to be doing fine on one processor; however, it is failing miserably in parallel. I get the following divisions, always equal to the number of processors that I am running on. I only use current_local_solution and old_local_solution to construct the jacobians and residuals, so the vectors should be properly ghosted. These divisions remain even if I call update on the system again. Any ideas?
>> Thanks, Subramanya
From: Derek Gaston <friedmud@gm...>  2013-12-09 17:17:23

How is it failing in parallel? Is it not converging, or something else?

What preconditioner are you using? How well are the linear solves going (use -ksp_monitor)?

Derek

On Mon, Dec 9, 2013 at 9:59 AM, subramanya sadasiva <potaman@...> wrote:
> Hi,
> I am trying to run a nonlinear solver (a multiphase Cahn-Hilliard) with 4 variables (3 phases, so two phase and two chemical potential variables). The code seems to be doing fine on one processor; however, it is failing miserably in parallel. I get the following divisions, always equal to the number of processors that I am running on. I only use current_local_solution and old_local_solution to construct the jacobians and residuals, so the vectors should be properly ghosted. These divisions remain even if I call update on the system again. Any ideas?
> Thanks, Subramanya
From: subramanya sadasiva <potaman@ou...>  2013-12-09 17:00:03

Hi,

I am trying to run a nonlinear solver (a multiphase Cahn-Hilliard) with 4 variables (3 phases, so two phase and two chemical potential variables). The code seems to be doing fine on one processor; however, it is failing miserably in parallel. I get the following divisions, always equal to the number of processors that I am running on. I only use current_local_solution and old_local_solution to construct the jacobians and residuals, so the vectors should be properly ghosted. These divisions remain even if I call update on the system again. Any ideas?

Thanks, Subramanya
From: David Knezevic <dknezevic@se...>  2013-12-07 18:20:24

On 12/07/2013 01:15 PM, Roy Stogner wrote:
> On Fri, 6 Dec 2013, David Knezevic wrote:
>> I'm using a PointLocator to copy a solution onto a boundary mesh,
>
> In theory on modern libMesh you ought to be able to do this much more efficiently (O(N) instead of O(N log N)) with the Elem::interior_parent() pointers.

Oh, that's a great point! I forgot about interior_parent... thanks!

>> 3. loop over the nodes and elements of boundary_mesh and use point_locator from 2 to find out which element each node and element belongs to, and then copy the solution over to a system defined on the boundary mesh
>>
>> This worked well. But then I ran into some problems when I did the above several times in a row with different surface_ids. In this case the PointLocator seemed to grind to a halt.
>
> Could you attach a debugger and figure out more specifically what was going wrong? E.g. if it's infinite looping, where?

Yeah, I would need to debug this further. I should make a minimal test case and run it through the debugger. But this is probably all moot now anyway, due to your suggestion of interior_parent.

>> I then added "system.get_mesh().clear_point_locator()" in step 2 before the call to sub_point_locator, and that fixed the issue.
>
> In theory the only time we should ever need to get new point locators is after the mesh clears the old master locator itself, which should only happen when the mesh changes...

OK, good to know.

Thanks,
David
From: Roy Stogner <roystgnr@ic...>  2013-12-07 18:15:33

On Fri, 6 Dec 2013, David Knezevic wrote:

> I'm using a PointLocator to copy a solution onto a boundary mesh,

In theory on modern libMesh you ought to be able to do this much more efficiently (O(N) instead of O(N log N)) with the Elem::interior_parent() pointers.

> 3. loop over the nodes and elements of boundary_mesh and use point_locator from 2 to find out which element each node and element belongs to, and then copy the solution over to a system defined on the boundary mesh
>
> This worked well. But then I ran into some problems when I did the above several times in a row with different surface_ids. In this case the PointLocator seemed to grind to a halt.

Could you attach a debugger and figure out more specifically what was going wrong? E.g. if it's infinite looping, where?

> I then added "system.get_mesh().clear_point_locator()" in step 2 before the call to sub_point_locator, and that fixed the issue.

In theory the only time we should ever need to get new point locators is after the mesh clears the old master locator itself, which should only happen when the mesh changes...

--
Roy
From: David Knezevic <dknezevic@se...>  2013-12-06 21:28:40

I'm using a PointLocator to copy a solution onto a boundary mesh, and I was initially using the following steps:

1. extract the boundary mesh using mesh->boundary_info->sync(surface_ids, boundary_mesh);

2. get a PointLocator for the full mesh, using AutoPtr<PointLocatorBase> point_locator = full_mesh.sub_point_locator();

3. loop over the nodes and elements of boundary_mesh and use point_locator from 2 to find out which element each node and element belongs to, and then copy the solution over to a system defined on the boundary mesh

This worked well. But then I ran into some problems when I did the above several times in a row with different surface_ids. In this case the PointLocator seemed to grind to a halt.

I then added "system.get_mesh().clear_point_locator()" in step 2 before the call to sub_point_locator, and that fixed the issue.

So I was just wondering what might've been going wrong here, and why (and when) it's necessary to clear the point_locator?

Thanks, David
From: David Knezevic <dknezevic@se...>  2013-12-06 18:15:39

Oh, yeah, that looks good, thanks. This didn't cause any compilation problem for me, not sure why...

David

On 12/06/2013 01:08 PM, Manav Bhatia wrote:
> I ended up adding a call to the mesh constructor in the RBEIMEvaluation constructor with "comm" as the argument.
>
> Not sure if this would be the correct communicator to pass. Maybe David could comment.
>
> Manav
>
> RBEIMEvaluation::RBEIMEvaluation(const libMesh::Parallel::Communicator &comm)
>   : RBEvaluation(comm),
>     extra_interpolation_point_elem(NULL),
>     _previous_N(0),
>     _previous_error_bound(-1),
>     _interpolation_points_mesh(comm)
> {
>   // Indicate that we need to compute the RB
>   // inner product matrix in this case
>   compute_RB_inner_product = true;
>
>   // initialize to the empty RBThetaExpansion object
>   set_rb_theta_expansion(_empty_rb_theta_expansion);
>
>   // Let's not renumber the _interpolation_points_mesh
>   _interpolation_points_mesh.allow_renumbering(false);
> }
>
> On Dec 6, 2013, at 12:13 PM, John Peterson <jwpeterson@...> wrote:
>> On Fri, Dec 6, 2013 at 10:03 AM, Manav Bhatia <bhatiamanav@...> wrote:
>>> Hi,
>>>
>>> I got the latest libMesh from github, and compilation on Mac OS X with clang5 gives the following error:
>>>
>>> src/reduced_basis/rb_eim_evaluation.C:37:20: error: constructor for 'libMesh::RBEIMEvaluation' must explicitly initialize the member '_interpolation_points_mesh' which does not have a default constructor
>>> RBEIMEvaluation::RBEIMEvaluation(const libMesh::Parallel::Communicator &comm)
>>> ^
>>> ./include/libmesh/rb_eim_evaluation.h:251:14: note: member is declared here
>>> SerialMesh _interpolation_points_mesh;
>>> ^
>>> ./include/libmesh/serial_mesh.h:47:7: note: 'libMesh::SerialMesh' declared here
>>> class SerialMesh : public UnstructuredMesh
>>> ^
>> Looks like that code is relying on a deprecated constructor which is only available if LIBMESH_DISABLE_COMMWORLD is not defined.
>>
>> Try initializing it with libMesh::CommWorld?
>>
>> --
>> John
From: Roy Stogner <roystgnr@ic...>  2013-12-06 18:13:17

On Fri, 6 Dec 2013, Manav Bhatia wrote:

> K U = F solution based projection is performed for h-refined elements. For p-refined, however, the dof based assignment is performed.
>
> - Is there a reason why the same K U = F based projection is not performed for p-refined elements? I understand that this would be more expensive than what is currently being done for p-refined, but it will likely also take care of the non-hierarchic basis. (?)

IIRC it simply would have made the code more complicated, and I was more worried about bugs from overly-complicated code than about efficiency or generality.

> - Is the dof based assignment for p-refined elements the source of the error for hp-refinement that you had referred to in your initial email?

Not the "source" exactly, but it's the location of the error, yes.

> - I noticed that similar operations are performed in FEBase::coarsened_dof_values() and FEBase::compute_proj_constraints(). So, these too would need attention for hp studies?

I'm not sure if those are wrong, but they'd definitely need attention, yes.

--
Roy
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-06 18:09:03

I ended up adding a call to the mesh constructor in the RBEIMEvaluation constructor with "comm" as the argument.

Not sure if this would be the correct communicator to pass. Maybe David could comment.

Manav

RBEIMEvaluation::RBEIMEvaluation(const libMesh::Parallel::Communicator &comm)
  : RBEvaluation(comm),
    extra_interpolation_point_elem(NULL),
    _previous_N(0),
    _previous_error_bound(-1),
    _interpolation_points_mesh(comm)
{
  // Indicate that we need to compute the RB
  // inner product matrix in this case
  compute_RB_inner_product = true;

  // initialize to the empty RBThetaExpansion object
  set_rb_theta_expansion(_empty_rb_theta_expansion);

  // Let's not renumber the _interpolation_points_mesh
  _interpolation_points_mesh.allow_renumbering(false);
}

On Dec 6, 2013, at 12:13 PM, John Peterson <jwpeterson@...> wrote:
> On Fri, Dec 6, 2013 at 10:03 AM, Manav Bhatia <bhatiamanav@...> wrote:
>> Hi,
>>
>> I got the latest libMesh from github, and compilation on Mac OS X with clang5 gives the following error:
>>
>> src/reduced_basis/rb_eim_evaluation.C:37:20: error: constructor for 'libMesh::RBEIMEvaluation' must explicitly initialize the member '_interpolation_points_mesh' which does not have a default constructor
>> RBEIMEvaluation::RBEIMEvaluation(const libMesh::Parallel::Communicator &comm)
>> ^
>> ./include/libmesh/rb_eim_evaluation.h:251:14: note: member is declared here
>> SerialMesh _interpolation_points_mesh;
>> ^
>> ./include/libmesh/serial_mesh.h:47:7: note: 'libMesh::SerialMesh' declared here
>> class SerialMesh : public UnstructuredMesh
>> ^
>
> Looks like that code is relying on a deprecated constructor which is only available if LIBMESH_DISABLE_COMMWORLD is not defined.
>
> Try initializing it with libMesh::CommWorld?
>
> --
> John
From: John Peterson <jwpeterson@gm...>  2013-12-06 17:14:22

On Fri, Dec 6, 2013 at 10:03 AM, Manav Bhatia <bhatiamanav@...> wrote:
> Hi,
>
> I got the latest libMesh from github, and compilation on Mac OS X with clang5 gives the following error:
>
> src/reduced_basis/rb_eim_evaluation.C:37:20: error: constructor for 'libMesh::RBEIMEvaluation' must explicitly initialize the member '_interpolation_points_mesh' which does not have a default constructor
> RBEIMEvaluation::RBEIMEvaluation(const libMesh::Parallel::Communicator &comm)
> ^
> ./include/libmesh/rb_eim_evaluation.h:251:14: note: member is declared here
> SerialMesh _interpolation_points_mesh;
> ^
> ./include/libmesh/serial_mesh.h:47:7: note: 'libMesh::SerialMesh' declared here
> class SerialMesh : public UnstructuredMesh
> ^

Looks like that code is relying on a deprecated constructor which is only available if LIBMESH_DISABLE_COMMWORLD is not defined.

Try initializing it with libMesh::CommWorld?

--
John
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-06 17:03:58

Hi,

I got the latest libMesh from github, and compilation on Mac OS X with clang5 gives the following error:

src/reduced_basis/rb_eim_evaluation.C:37:20: error: constructor for 'libMesh::RBEIMEvaluation' must explicitly initialize the member '_interpolation_points_mesh' which does not have a default constructor
RBEIMEvaluation::RBEIMEvaluation(const libMesh::Parallel::Communicator &comm)
^
./include/libmesh/rb_eim_evaluation.h:251:14: note: member is declared here
SerialMesh _interpolation_points_mesh;
^
./include/libmesh/serial_mesh.h:47:7: note: 'libMesh::SerialMesh' declared here
class SerialMesh : public UnstructuredMesh
^

Manav
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 20:33:34

Hi Roy,

I spent some time looking through the ProjectVector::operator() code, and have some questions:

K U = F solution based projection is performed for h-refined elements. For p-refined, however, the dof based assignment is performed.

- Is there a reason why the same K U = F based projection is not performed for p-refined elements? I understand that this would be more expensive than what is currently being done for p-refined, but it will likely also take care of the non-hierarchic basis. (?)

- Is the dof based assignment for p-refined elements the source of the error for hp-refinement that you had referred to in your initial email?

- I noticed that similar operations are performed in FEBase::coarsened_dof_values() and FEBase::compute_proj_constraints(). So, these too would need attention for hp studies?

Thanks,
Manav
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 20:22:46

On Thu, Dec 5, 2013 at 3:22 PM, Roy Stogner <roystgnr@...> wrote:
>> My experience with HIERARCHIC has been unfavorable, with convergence issues of iterative solvers. I have not seen any such problems with SZABAB and BERNSTEIN, so am favoring those.
>
> Strange. From a brief glance at SZABAB it looked like they were practically just HIERARCHIC with different scaling factors, the kind of thing even Jacobi preconditioning would fix.

I remember trying all sorts of preconditioning, and nothing helped. My understanding is that the HIERARCHIC polynomials get highly ill-conditioned as the p-order increases. In fact Karniadakis, in his book on spectral hp elements, has a plot of condition number for HIERARCHIC, LAGRANGE, and LEGENDRE polynomials for increasing p, and shows that HIERARCHIC are the first to go through the roof, followed by LAGRANGE.

I am interested in implementing a C0-continuous version of LEGENDRE, which modifies the first two basis functions from a constant and a linear to a combination of two linears, and maintains the bubble functions. The orthogonality of these polynomials can be quite useful, especially in getting a diagonal mass matrix.

>> I managed to get the higher order SZABAB working with three modifications:
>>
>> - modified FEInterface::extra_hanging_dofs() to be true for SZABAB.
>>
>> - FEInterface::compute_constraints() does not do anything for SZABAB. Added a case SZABAB: for that.
>>
>> - fe_szabab.C needs FE<2, SZABAB>::compute_constraints.
>>
>> With this, h-refinement of higher order SZABAB works without problems.
>
> Great! Send us a PR?
>
>> I am not sure if this is the issue plaguing hp-refinement, but I will get to it soon.
>
> No, sadly you ran into another bug on the way to the hp bug.
> --
> Roy

aahhh... ok... I will continue pushing forward...

Manav
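The conditioning growth Manav describes can be demonstrated without any finite element machinery. A minimal sketch using the Hilbert matrix, the Gram matrix of the monomial basis on [0,1] (monomials are far worse-conditioned than HIERARCHIC bases, but they show the same qualitative trend): solve a system whose exact solution is all ones and watch the error explode as the basis grows.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Solve H x = H * ones with Gaussian elimination and partial pivoting,
// where H is the n-by-n Hilbert matrix H(i,j) = 1/(i+j+1).  Returns the
// max-norm error against the exact solution of all ones; its growth
// with n mirrors the growth of cond(H).
double hilbert_solve_error(int n)
{
    std::vector<std::vector<double> > A(n, std::vector<double>(n));
    std::vector<double> b(n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
        {
            A[i][j] = 1.0 / (i + j + 1);
            b[i] += A[i][j];  // right-hand side so that x_exact = ones
        }

    // Forward elimination with partial pivoting.
    for (int k = 0; k < n; ++k)
    {
        int p = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
        std::swap(A[k], A[p]);
        std::swap(b[k], b[p]);
        for (int i = k + 1; i < n; ++i)
        {
            double m = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }

    // Back substitution.
    std::vector<double> x(n);
    for (int i = n - 1; i >= 0; --i)
    {
        double s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }

    double err = 0.0;
    for (int i = 0; i < n; ++i)
        err = std::max(err, std::fabs(x[i] - 1.0));
    return err;
}
```

For n = 4 the recovered solution is accurate to roughly machine precision times cond(H); by n = 12 the condition number is near 1/epsilon and whole digits of the solution are lost, which is the same mechanism that makes iterative solvers struggle on poorly conditioned polynomial bases.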
From: tigermail <amesga1@ti...>  2013-12-05 20:22:46

Thanks Roy,

That did the trick. I was on the right course, i.e., I was doing dof_map.get_dirichlet_boundaries()->clear(), but I wasn't regenerating the constraints before putting in the new ones, and for some reason the constraints were not imposed correctly. I think the thread you're remembering is also me :).

Thanks a lot,
Ata

On Dec 5, 2013, at 1:38 PM, Roy Stogner <roystgnr@...> wrote:
> On Thu, 5 Dec 2013, Ataollah Mesgarnejad wrote:
>> How can I remove all previously set DirichletBoundaries for a certain system?
>
> You need to remove them from the collection in the DofMap, then tell the DofMap to regenerate the constraints.
>
> dof_map.get_dirichlet_boundaries().clear() would handle the removal in the "all previously set" case, for primal Dirichlet boundaries.
>
> IIRC we discussed efficient constraint regeneration on the list recently.
> --
> Roy
From: Roy Stogner <roystgnr@ic...>  2013-12-05 20:22:16

On Thu, 5 Dec 2013, Manav Bhatia wrote:

> From your definition of *hierarchic*, SZABAB would classify as one, correct? The polynomials are nested.

Bearing in mind that I'm really looking at those elements for the first time...

The 1D SzaBab elements definitely qualify as nested. The magic numbers in those coefficients annoy me... but it looks like they're all just scaling factors, so at least the double precision wouldn't kill their convergence rates when compiling with quad precision.

The 2D SzaBab quads appear to be nested. So are at least the cubic or so triangles, so I'd strongly bet the triangles are nested too.

It looks like 3D SzaBab hexes would be easy to define; wonder why nobody's gotten around to it yet.

> My experience with HIERARCHIC has been unfavorable, with convergence issues of iterative solvers. I have not seen any such problems with SZABAB and BERNSTEIN, so am favoring those.

Strange. From a brief glance at SZABAB it looked like they were practically just HIERARCHIC with different scaling factors, the kind of thing even Jacobi preconditioning would fix.

> I managed to get the higher order SZABAB working with three modifications:
>
> - modified FEInterface::extra_hanging_dofs() to be true for SZABAB.
>
> - FEInterface::compute_constraints() does not do anything for SZABAB. Added a case SZABAB: for that.
>
> - fe_szabab.C needs FE<2, SZABAB>::compute_constraints.
>
> With this, h-refinement of higher order SZABAB works without problems.

Great! Send us a PR?

> I am not sure if this is the issue plaguing hp-refinement, but I will get to it soon.

No, sadly you ran into another bug on the way to the hp bug.

--
Roy
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 20:13:47

On Thu, Dec 5, 2013 at 2:58 PM, Roy Stogner <roystgnr@...> wrote:
> What p level and element type were you using that you saw a failure with h-refinement?

This was with h-refinement for SZABAB with p=3.

> I don't know much about the SZABAB elements but it looks like perhaps they require extra_hanging_dofs==true but aren't getting that in fe_interface.C.

Looking at SZABAB, I see it implemented only for 1D and 2D elements. Maybe I could add a bit of extra code to enable 3D as well. Shouldn't be too difficult, given what I learnt about the code internals today!

Manav
From: Roy Stogner <roystgnr@ic...> - 2013-12-05 20:12:44

On Thu, 5 Dec 2013, Manav Bhatia wrote:

> If the problem described in the previous email is relevant to
> all higher-order elements with edge dofs, then wouldn't
> FEInterface::extra_hanging_dofs() be true for all such elements? If
> yes, then why would BERNSTEIN and SZABAB return false, while
> HIERARCHIC returns true for this function?

Bi/triquadratic LAGRANGE dofs are higher-order (for loose values of "high") with edge dofs, but they shouldn't need extra_hanging_dofs() == true. It looks like the BERNSTEIN and SZABAB cases may be bugs, though. Would you try changing your fe_interface.C and see if that fixes the problem you've encountered?

Thanks,
---
Roy
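The fe_interface.C change under discussion would presumably look something like the following standalone sketch. This is not the actual libMesh source; the enum is a trimmed stand-in for libMesh's FEFamily, and the bodies are assumptions based on the behavior described in this thread:

```cpp
#include <cassert>

// Trimmed stand-in for libMesh's FEFamily enum (illustration only).
enum FEFamily { LAGRANGE, HIERARCHIC, SZABAB, BERNSTEIN };

// Families whose vertex dofs and edge/face dofs lack a one-to-one
// correspondence need extra hanging dofs at constrained nodes. The
// suspected bug: SZABAB and BERNSTEIN were falling through to the
// default "false" despite being hierarchic-style bases.
bool extra_hanging_dofs(FEFamily family)
{
  switch (family)
  {
  case HIERARCHIC:
  case SZABAB:    // proposed fix: no longer falls through to false
  case BERNSTEIN: // likewise
    return true;
  default:        // e.g. LAGRANGE: corner dof values are reused directly
    return false;
  }
}
```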
From: Manav Bhatia <bhatiamanav@gm...> - 2013-12-05 20:06:43

Hi Roy,

From your definition of *hierarchic*, SZABAB would classify as one, correct? The polynomials are nested.

My experience with HIERARCHIC has been unfavorable, with convergence issues of iterative solvers. I have not seen any such problems with SZABAB and BERNSTEIN, so am favoring those.

I managed to get higher-order SZABAB working with three modifications:

- Modified FEInterface::extra_hanging_dofs() to return true for SZABAB.

- FEInterface::compute_constraints() does not do anything for SZABAB. Added a "case SZABAB:" for that.

- fe_szabab.C needs FE<2, SZABAB>::compute_constraints.

With this, h-refinement of higher-order SZABAB works without problems. I am not sure if this is the issue plaguing hp-refinement, but I will get to it soon.

Manav

On Thu, Dec 5, 2013 at 2:58 PM, Roy Stogner <roystgnr@...> wrote:

> On Wed, 4 Dec 2013, Manav Bhatia wrote:
>
>> - 2nd order LAGRANGE FE type, *with h-refinement*, works fine and
>> converges.
>
> This is as I'd expect: the current libMesh adaptive p-refinement code
> assumes a hierarchic basis and should fail in weird ways (or in devel
> mode, throw an error) without one.
>
>> - Changing FE type to SZABAB and *using h-refinement* leads to
>> unrealistic (huge) residual values immediately after the first
>> h-refinement, and the plotted solution has odd-looking spikes in each
>> element next to a hanging node.
>>
>> This seems to be expected behavior given the bug described in the
>> libmesh-devel thread you had sent to me. (?)
>
> No, actually. The bug I described should affect adaptive
> hp-refinement, but not pure adaptive h-refinement.
>
> What p level and element type were you using that you saw a failure
> with h-refinement? I don't know much about the SZABAB elements, but
> it looks like perhaps they require extra_hanging_dofs==true but aren't
> getting that in fe_interface.C.
>
>> Out of the higher-order polynomials, SZABAB is nested, but BERNSTEIN
>> is not, since the whole basis changes as soon as the p-order is
>> modified. Does this bug/discussion affect BERNSTEIN in the same way
>> as it does SZABAB?
>
> With non-nested (what I called "hierarchic" before, not referring only
> to the HIERARCHICs) polynomials we don't support p-refinement at all
> yet. That's not so much a bug as an unimplemented feature.
>
> ---
> Roy
From: Roy Stogner <roystgnr@ic...> - 2013-12-05 20:05:35

On Thu, 5 Dec 2013, Manav Bhatia wrote:

> I am working on this now, and need some help understanding the
> extra hanging dofs.

Basically they're for the case where there isn't a one-to-one correspondence between vertex dofs and side/face dofs.

With LAGRANGE elements, by contrast, the dof equation on a corner is the dof equation on an edge or side, and you just reuse the same dof value.

With HIERARCHIC elements, not so much. If a bubble function on an edge midpoint has value 0 and the neighboring vertices have values 1 and 0, then if that midpoint is a hanging node we want the corresponding vertex dof there to be 1/2. So what we do is we make those independent: in the global numbering, there are now two dofs on this node, one of which takes value 0 and the other of which takes value 1/2.

---
Roy
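The 1/2 in that example just falls out of evaluating the coarse edge at its midpoint. A minimal sketch (not libMesh code; a generic 1D hierarchic edge on [-1, 1], assumed for illustration):

```cpp
#include <cassert>
#include <cmath>

// A 1D hierarchic edge on [-1, 1]: linear vertex functions plus a
// quadratic bubble that vanishes at both vertices. With vertex
// coefficients u0 = 1, u1 = 0 and bubble coefficient b = 0, the field
// value at the midpoint x = 0 is 1/2 -- so a hanging vertex dof placed
// there must be constrained to 1/2 rather than reusing either
// neighboring vertex value, while the separate bubble dof stays 0.
double edge_value(double u0, double u1, double b, double x)
{
  const double phi0 = 0.5 * (1.0 - x);   // vertex function, 1 at x = -1
  const double phi1 = 0.5 * (1.0 + x);   // vertex function, 1 at x = +1
  const double bubble = 1.0 - x * x;     // vanishes at both vertices
  return u0 * phi0 + u1 * phi1 + b * bubble;
}
```

Hence the two independent global dofs on the hanging node: one carrying the bubble-like value (0 here) and one carrying the constrained vertex value (1/2 here).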
From: Roy Stogner <roystgnr@ic...> - 2013-12-05 19:58:26

On Wed, 4 Dec 2013, Manav Bhatia wrote:

> - 2nd order LAGRANGE FE type, *with h-refinement*, works fine and
> converges.

This is as I'd expect: the current libMesh adaptive p-refinement code assumes a hierarchic basis and should fail in weird ways (or in devel mode, throw an error) without one.

> - Changing FE type to SZABAB and *using h-refinement* leads to
> unrealistic (huge) residual values immediately after the first
> h-refinement, and the plotted solution has odd-looking spikes in each
> element next to a hanging node.
>
> This seems to be expected behavior given the bug described in the
> libmesh-devel thread you had sent to me. (?)

No, actually. The bug I described should affect adaptive hp-refinement, but not pure adaptive h-refinement.

What p level and element type were you using that you saw a failure with h-refinement? I don't know much about the SZABAB elements, but it looks like perhaps they require extra_hanging_dofs==true but aren't getting that in fe_interface.C.

> Out of the higher-order polynomials, SZABAB is nested, but BERNSTEIN
> is not, since the whole basis changes as soon as the p-order is
> modified. Does this bug/discussion affect BERNSTEIN in the same way
> as it does SZABAB?

With non-nested (what I called "hierarchic" before, not referring only to the HIERARCHICs) polynomials we don't support p-refinement at all yet. That's not so much a bug as an unimplemented feature.

---
Roy