From: Benjamin S. Kirk <benjamin.kirk@na...>  2006-03-27 22:13:15

Yeah, sure, they should work. I'll try and track down some images, but I think you are looking in the wrong place. The compute_constraints() should work based on 1-level differences, and in the end the build_constraint_matrix() handles recursive constraints.

Ben

On Mon, 2006-03-27 at 15:41 -0600, Roy Stogner wrote:
> It looks to me like neither the Lagrange compute_constraints() nor the
> general compute_proj_constraints() functions are set up to handle more
> than a one-level mismatch between adjoining elements. Am I wrong? Is
> there anyone running computations on such meshes?
>
> Roy Stogner
From: Roy Stogner <roystgnr@ic...>  2006-03-27 21:41:26

It looks to me like neither the Lagrange compute_constraints() nor the general compute_proj_constraints() functions are set up to handle more than a one-level mismatch between adjoining elements. Am I wrong? Is there anyone running computations on such meshes?

Roy Stogner
From: Wout Ruijter <woutruijter@gm...>  2006-03-27 16:57:43

Hello Li,

I can't reproduce the error here on an Ubuntu system with petsc 2.3.0 (--with-shared --with-fortran=0) and libmesh from CVS:

export METHOD=opt
--enable-ifem=no --enable-petsc --enable-perflog \
    --enable-second --enable-amr --enable-tetgen

Cheers
W

On 3/27/06, li pan <li76pan@...> wrote:
> Dear all,
> I met a strange problem. Maybe you can also try it.
> [...]
> [0]PETSC ERROR: nnz cannot be greater than row length:
> local row 6 value 69 rowlength 54!
> [...]
> But if I use [...] 1,2,1 [...] or some other grid dimensions, the
> problem disappears. Do you know why?
> [full reproducer in li pan's original message below]
From: Roy Stogner <roystgnr@ic...>  2006-03-27 15:01:45

On Sun, 26 Mar 2006, li pan wrote:
> maybe it's a little stupid. But I have to say it. In
> the Newton iteration code of ex13, I can't find
> something like Xn+1 = Xn + deltaX.

Oh! I completely misunderstood what you meant by "accumulated solution". This isn't a stupid question at all, because different codes do it in different ways. In example 13, I believe what's going on is that the solution vector itself, *stokes_system.solution, is being used to hold Xn+1.

I think I've just about convinced John that for more tricky nonlinear problems it's better for the system solution to hold deltaX, the difference between the solution at the last Newton step and the solution at the current Newton step, though; the equations usually come out simpler, and it ends up being easier to program quasi-Newton schemes and convergence tests.

John, Ben, am I right about ex13? We probably ought to add comments to the nonlinear loop explaining as much.

Roy Stogner
From: li pan <li76pan@ya...>  2006-03-27 14:26:18

Dear all,

I met a strange problem. Maybe you can also try it. Just write a simple code of building a mesh and defining an equation_system, adding some variables and so on:

int main (int argc, char** argv)
{
  // Initialize Petsc, like in example 2.
  libMesh::init (argc, argv);

  const unsigned int dim = 3;

  // Create a dim-dimensional mesh.
  Mesh mesh (dim);
  std::cout << DIM;
  MeshTools::Generation::build_cube(mesh,
                                    1, 2, 2,
                                    0., 1.,
                                    0., 1.,
                                    0., 1.,
                                    HEX8);
  mesh.print_info();

  EquationSystems equation_systems (mesh);

  TransientLinearImplicitSystem & system =
    equation_systems.add_system<TransientLinearImplicitSystem>("non_linear");

  system.add_variable("u", FIRST);
  system.add_variable("v", FIRST);
  system.add_variable("w", FIRST);

  system.attach_assemble_function(assemble_nonlinear);
  equation_systems.init();
  equation_systems.parameters.set<unsigned int>("linear solver maximum iterations") = 1000;
  equation_systems.parameters.set<Real>("linear solver tolerance") = 1.e-10;
}

That's it. Compile and run the program. You'll get this error report:

[0]PETSC ERROR: MatSeqAIJSetPreallocation_SeqAIJ() line 2631 in src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: Argument out of range!
[0]PETSC ERROR: nnz cannot be greater than row length: local row 6 value 69 rowlength 54!
[0]PETSC ERROR: MatCreateSeqAIJ() line 2544 in src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: User provided function() line 138 in unknowndirectory/src/numerics/petsc_matrix.C
[unset]: aborting job:
application called MPI_Abort(comm=0x84000000, 63) - process 0

But if I use

MeshTools::Generation::build_cube(mesh,
                                  1, 2, 1,
                                  0., 1.,
                                  0., 1.,
                                  0., 1.,
                                  HEX8);

or some other grid dimensions, the problem disappears. Do you know why? I configured my petsc with --download-f-blas-lapack=1 --download-mpich=1 --with-shared=1

pan
From: Ondrej Certik <ondrej@ce...>  2006-03-27 14:09:19

Hello,

if anyone is also interested in computing the lowest eigenvalues, below is how. Libmesh already has support for slepc, but it's not really needed if the only thing I want is to construct global matrices, save them to disk and solve them (externally). I am solving a generalized hermitian problem Ax = kBx. So after assembling matrices A and B, I save them to a file using the PETSC_VIEWER_BINARY_DEFAULT format. Then I use example 7 from slepc (http://www.grycap.upv.es/slepc/handson/handson3.html) to solve them. The advantage is that now I can play with various options on the command line.

1) To find the largest eigenvalues:

./ex7 -f1 matrixA -f2 matrixB

works with any solver.

2) To find the smallest eigenvalues, either:

./ex7 -f1 matrixA -f2 matrixB -st_type sinvert

works with any solver, or:

./ex7 -f1 matrixA -f2 matrixB -eps_type arpack -eps_smallest_real

works only with the arpack solver. It's very slow, I have no idea why. This was around 20 times faster:

./ex7 -f1 matrixA -f2 matrixB -st_type sinvert -eps_type arpack

You can also change the tolerance or the number of requested eigenvalues using the options -eps_nev 2 -eps_tol 1e-3.

All of this can of course be built into libmesh; I think it already is. Another option is to use pysparse, the solver is also very good.

Ondrej
From: Shengli Xu <shengli.xu@gm...>  2006-03-27 10:44:59

Dear Libmesh developers and users,

I am solving the stokes equation with periodic boundary conditions. The velocity variables U and V on the four sides of the compute domain are given periodic boundary conditions; for example U(left) = U(right), V(left) = V(right), U(bottom) = U(top), V(bottom) = V(top). I feel uncertain about how to deal with the four corners' dofs. It is needed that U(corner1) = U(corner2) = U(corner3) = U(corner4) and V(corner1) = V(corner2) = V(corner3) = V(corner4). I can't get the correct periodic result at the four corners so far.

In my program, dof_map.add_constraint_row() is used to add the coupled dofs. During the loop assembling the element stiffness and right-hand side, dof_map.constrain_element_matrix_and_vector(Ke, Fe, dof_indices) is called before system.matrix->add_matrix(Ke, dof_indices) and system.rhs->add_vector(Fe, dof_indices).

The velocity periodic boundary condition on the left and right sides is very good. For example, in my test case, node 19 is on the left side (not a corner), node 95 is on the right side (not a corner), and nodes 19 and 95 are periodic couple nodes. The result is U(19) = U(95) = 0.00063330345, V(19) = V(95) = 6.5320225e-06. This is very good.

But the periodic result on the bottom and top sides has a small difference. For example, node 209 is on the bottom side and node 16 is on the top side. The result is U(16) = 0.00097276895, U(209) = 0.00097277163, V(16) = 1.3452936e-06, V(209) = 1.3444377e-06. They differ slightly. Is this correct? Why is the result not as good as that of the left and right sides?

The constraint on the four corners that U(corner1) = U(corner2) = U(corner3) = U(corner4), V(corner1) = V(corner2) = V(corner3) = V(corner4) does not give a reasonable result. The four corners are: corner1 is node 13, corner2 is node 90, corner3 is node 205, corner4 is node 281. The results are U(13) = 0.00014477273, U(90) = 2.8227261, U(205) = 9.6206806e-05, U(281) = 2.0338666e-05, V(13) = 8.11487e-05, V(90) = 7.3168002e-06, V(205) = 2.4994687e-05, V(281) = 0.00011346019. They are not right. The four corners are handled like this:

...
typedef std::vector<Node*> evec;
evec Ncorner;
MeshBase::const_node_iterator nod = mesh.active_nodes_begin();
const MeshBase::const_node_iterator end_nod = mesh.active_nodes_end();
for ( ; nod != end_nod; ++nod) {
  Node* node = (*nod);
  // the domain is 1x1; the four corners are (0.,0.),(1.,0.),(1.,1.),(0.,1.)
  if (((*node)(0)<0.01 && (*node)(1)<0.01) || ((*node)(0)>0.99 && (*node)(1)<0.01) ||
      ((*node)(0)<0.01 && (*node)(1)>0.99) || ((*node)(0)>0.99 && (*node)(1)>0.99)) {
    Ncorner.push_back(node);
  }
}
// four corner periodic b.c.
evec::iterator it = Ncorner.begin();
unsigned int u1 = (*it)->dof_number(0,0,0);
unsigned int v1 = (*it)->dof_number(0,1,0);
DofConstraintRow cu, cv;
for (it+=1; it!=Ncorner.end(); ++it) {
  unsigned int u2 = (*it)->dof_number(0,0,0);
  unsigned int v2 = (*it)->dof_number(0,1,0);
  cu[u2] = 1.0;
  cv[v2] = 1.0;
}
dof_map.add_constraint_row(u1, cu);
dof_map.add_constraint_row(v1, cv);
...

Another question is about the linear solver convergence information. I use system.n_linear_iterations() and system.final_linear_residual() to output it. Without the four-corner constraint the output is: Linear solver converged at step: 5000, final residual: 1.96297e-06. But the total number of degrees of freedom is only 1232; does it really need that many iterations? With the four-corner constraint the output is: Linear solver converged at step: 5000, final residual: 1.91226e-05. I think the residual is too big. How do I change it?