From: Roy Stogner <roystgnr@ic...> - 2013-04-03 06:25:50

On Wed, 3 Apr 2013, Manav Bhatia wrote:

> As a related question, if my code is running on a multicore machine,
> then can I use nthreads to parallelize both the matrix assembly
> and the Petsc linear solvers? Or do I have to use mpi for Petsc?

PETSc isn't multithreaded, but I'm told it can be built to use third-party preconditioners which are multithreaded, so that you get decent scaling out of your solve. I haven't done this myself.

> I am running problems with over a million elements, and using mpi on
> my multicore machine makes each process consume over 1GB of RAM.

ParallelMesh was invented to get me out of a similar jam.

> On Apr 3, 2013, at 1:24 AM, Manav Bhatia <bhatiamanav@...> wrote:
>
>> I am curious if the parallel mesh is now suitable for general use.

Unfortunately ParallelMesh may never be suitable for "general" use, because the most general SerialMesh-using codes sometimes assume at the application level that every process can see every element. If your problem includes contact, integro-differential terms, or any such coupling beyond the layer of ghost elements that ParallelMesh exposes, then you have to do some very careful manual communications to make that work on a distributed mesh.

ParallelMesh is also still much less tested than SerialMesh -- it works with all the examples and all the compatible application codes I've tried, but I wouldn't be surprised if there are tricky AMR or other corner cases where it breaks in nasty ways. More testing would certainly be appreciated.

--
Roy
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-03 05:30:59

As a related question, if my code is running on a multicore machine, then can I use nthreads to parallelize both the matrix assembly and the Petsc linear solvers? Or do I have to use mpi for Petsc?

I am running problems with over a million elements, and using mpi on my multicore machine makes each process consume over 1GB of RAM.

Any suggestions would be helpful.

Thanks
Manav

On Apr 3, 2013, at 1:24 AM, Manav Bhatia <bhatiamanav@...> wrote:
> Hi,
>
> I am curious if the parallel mesh is now suitable for general use.
>
> Thanks
> Manav
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-03 05:24:40

Hi,

I am curious if the parallel mesh is now suitable for general use.

Thanks
Manav
From: Roy Stogner <roystgnr@ic...> - 2013-04-03 03:21:36

On Tue, 2 Apr 2013, Shiyuan Gu wrote:

> In the case of no hanging node constraints, where the boundary condition is
> only Dirichlet, should asymmetric_constraint_rows always be set to true
> in DofMap::heterogenously_constrain_element_matrix_and_vector? Thanks.
>
> I am looking at the libMesh example introduction_ex4. Changing
> asymmetric_constraint_rows from true to false gives a wrong solution.
>
>   dof_map.heterogenously_constrain_element_matrix_and_vector
>     (Ke, Fe, dof_indices, true);
>
>   void DofMap::heterogenously_constrain_element_matrix_and_vector
>     (DenseMatrix<Number>& matrix,
>      DenseVector<Number>& rhs,
>      std::vector<dof_id_type>& elem_dofs,
>      bool asymmetric_constraint_rows)

One goal of the symmetric constraint rows option is to preserve the symmetry of an otherwise symmetric Jacobian, but the cost of that is we don't get to indirectly enforce the constraints by handing rows for them to the linear solver. So when you want the constrained dof coefficients to take their correct values after a solve, you need to either use a solver which calls enforce_constraints_exactly (generally our nonlinear solvers do this; our linear solvers don't) or call enforce_constraints_exactly yourself after the solve.

If you add a call like:

  system.get_dof_map().enforce_constraints_exactly(system);

after your solve, does that fix things? If not, then we've got a bug that needs to be fixed. If so, then we've merely got some misleading documentation that needs to be fixed.

--
Roy
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-03 01:01:23

To get the boundary surface mesh, I ended up hacking into the vtk_io class. In the cells_to_vtk method, I added an IF block based on a flag passed during construction (true = write whole mesh, false = write mesh on boundary). So, when true, the method operates as usual, and when false, it iterates on the sides of each element, and if a side lies on the boundary, it is written to the output file. So far, this seems to be working alright, except that I have a lot of unwanted nodal data in the output file. I will be happy to share the patch if anyone is interested.

Manav

On Mon, Apr 1, 2013 at 9:05 AM, David Knezevic <dknezevic@...> wrote:

> On 04/01/2013 07:50 AM, Kirk, Benjamin (JSC-EG311) wrote:
>> On Mar 31, 2013, at 11:35 PM, "Manav Bhatia" <bhatiamanav@...> wrote:
>>
>>> Thanks, Derek.
>>>
>>> A related question about libMesh:
>>>
>>> If I need to output the solution values on the boundary nodes, can I
>>> use BoundaryMesh along with the IO classes for that?
>>>
>>> Meaning, if I do the following
>>>
>>>   BoundaryMesh b_mesh(mesh.mesh_dimension()-1);
>>>   mesh.boundary_info->sync(b_mesh);
>>>   VTKIO(b_mesh).write_equation_systems("boundary_output.pvtu", equation_systems);
>>>
>>> Would this write the solution values on the boundary nodes?
>>
>> Almost. Unfortunately it is a little more manual than this, but that's
>> the right idea. In your example the equation systems object can only be
>> associated with one mesh, so it won't work. The fix is to create a second
>> equation systems object that lives on the boundary and then extract the
>> trace values.
>>
>> This is not automated because in general the user will want
>> physics-dependent things like shear stress, heat flux, whatever...
>>
>> I suppose we could automate the values-only case though.
>>
>> Ben
>
> To get the "boundary only values" I first make a "boundary_dofmap", as
> in the code below. Then copying the solution values over from system to
> boundary_system is easy.
> void create_disp_boundary_dofmap(System& system,
>                                  System& boundary_system,
>                                  std::vector<unsigned int>& boundary_dofmap)
> {
>   boundary_dofmap.resize(boundary_system.n_dofs());
>
>   // make a point locator for system
>   AutoPtr<PointLocatorBase> point_locator =
>     system.get_mesh().sub_point_locator();
>
>   // loop over the boundary nodes and locate them,
>   // then fill in boundary_dofmap
>   MeshBase::node_iterator node_it =
>     boundary_system.get_mesh().nodes_begin();
>   const MeshBase::node_iterator node_end =
>     boundary_system.get_mesh().nodes_end();
>
>   for ( ; node_it != node_end; node_it++)
>   {
>     Node* node = *node_it;
>
>     // get an element in the full mesh that contains node
>     const Elem* element = point_locator->operator()(*node);
>
>     // loop over the nodes of element until we find the one that
>     // matches node
>     for (unsigned int node_id=0; node_id<element->n_nodes(); node_id++)
>     {
>       Node* new_node = element->get_node(node_id);
>
>       Real dist = std::sqrt(
>         pow(node->operator()(0) - new_node->operator()(0), 2.) +
>         pow(node->operator()(1) - new_node->operator()(1), 2.) +
>         pow(node->operator()(2) - new_node->operator()(2), 2.) );
>
>       if (dist < TOLERANCE)
>       {
>         for (unsigned int var=0; var<system.n_vars(); var++)
>         {
>           unsigned int index1 =
>             node->dof_number(boundary_system.number(), var, 0);
>           unsigned int index2 =
>             new_node->dof_number(system.number(), var, 0);
>           boundary_dofmap[index1] = index2;
>         }
>         break; // matching node found; move on to the next boundary node
>       }
>     }
>   }
> }
From: Shiyuan Gu <sgu@an...> - 2013-04-03 00:24:20

Hi all,

In the case of no hanging node constraints, where the boundary condition is only Dirichlet, should asymmetric_constraint_rows always be set to true in DofMap::heterogenously_constrain_element_matrix_and_vector? Thanks.

I am looking at the libMesh example introduction_ex4. Changing asymmetric_constraint_rows from true to false gives a wrong solution.

  dof_map.heterogenously_constrain_element_matrix_and_vector
    (Ke, Fe, dof_indices, true);

  void DofMap::heterogenously_constrain_element_matrix_and_vector
    (DenseMatrix<Number>& matrix,
     DenseVector<Number>& rhs,
     std::vector<dof_id_type>& elem_dofs,
     bool asymmetric_constraint_rows)