From: Benjamin S. Kirk <benjamin.kirk@na...> - 2006-03-28 18:59:38

On Tue, 2006-03-28 at 07:43 -0600, Roy Stogner wrote:
> I'm not sure that should be a function argument, though.  It looks to
> me as if the maintain_level_one code only works if the mesh is already
> level-one compatible, in which case you're not going to be turning it
> on and off from refinement step to refinement step -- perhaps it ought
> to be set in the MeshRefinement constructor instead.
> --
> Roy

Excellent point.  I have no problem making it an optional argument to the constructor which defaults to true.  You are correct: I never anticipated taking a non-level-one-conforming mesh and restoring level-one-ness (or whatever you call it).

Ben
From: David Xu <dxu@my...> - 2006-03-28 17:43:26

On 3/28/06, Ondrej Certik <ondrej@...> wrote:
> > I did load matlab sparse matrix to pysparse. It still requires some
> > conversion and if the matlab matrix file gets as large as gigabyte in
> > size, the simple conversion and loading will surely drag down the
> > speed.
>
> I see. That's why I use the petsc binary format. And as to pysparse -- you will
> have to construct the matrix in your libmesh code using the pysparse C code.
> Probably.
>
> > BLOPEX web site http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/
> > has a PETSc driver code for download. Maybe that will shed us some
> > light on how to make it work with libmesh.
>
> Yes, this driver is in src/contrib/blopex in the petsc directory. It calls the
> lobpcg_solve function to solve the eigenvalue problem, which is probably in the
> blopex package (I don't have it installed).

When I installed petsc-2.3.1, I added --download-blopex=1 --with-blopex=1 and it automatically installed and compiled blopex.
From: Roy Stogner <roystgnr@ic...> - 2006-03-28 13:43:56

On Tue, 28 Mar 2006, Wout Ruijter wrote:
> The mesh refinement code reads (after the comment "this is actually implemented"):
>
> // Case 2: The neighbor is one level lower than I am.
> //         The neighbor thus MUST be refined to satisfy
> //         the level-one rule, regardless of whether it
> //         was originally flagged for refinement.  If it
> //         wasn't flagged already we need to repeat
> //         this process.
>
> Doesn't this weed out the meshes you're describing?
> Or do you bypass the mesh refinement stage in some way?

That code is wrapped in "if (maintain_level_one)" -- if you make that argument false, you'll skip it.

I'm not sure that should be a function argument, though.  It looks to me as if the maintain_level_one code only works if the mesh is already level-one compatible, in which case you're not going to be turning it on and off from refinement step to refinement step -- perhaps it ought to be set in the MeshRefinement constructor instead.
--
Roy
From: Wout Ruijter <woutruijter@gm...> - 2006-03-28 11:00:44

Ben and Roy,

The mesh refinement code reads (after the comment "this is actually implemented"):

// Case 2: The neighbor is one level lower than I am.
//         The neighbor thus MUST be refined to satisfy
//         the level-one rule, regardless of whether it
//         was originally flagged for refinement.  If it
//         wasn't flagged already we need to repeat
//         this process.

Doesn't this weed out the meshes you're describing?
Or do you bypass the mesh refinement stage in some way?

Cheers,
W

On 3/27/06, Benjamin S. Kirk <benjamin.kirk@...> wrote:
> Yeah, sure, they should work.  I'll try and track down some images, but
> I think you are looking in the wrong place.  The compute_constraints()
> should work based on 1-level differences, and in the end the
> build_constraint_matrix() handles recursive constraints.
>
> Ben
>
> On Mon, 2006-03-27 at 15:41 -0600, Roy Stogner wrote:
> > It looks to me like neither the Lagrange compute_constraints() nor the
> > general compute_proj_constraints() functions are set up to handle more
> > than a one-level mismatch between adjoining elements.  Am I wrong -- is
> > there anyone running computations on such meshes?
> > --
> > Roy Stogner
From: Ondrej Certik <ondrej@ce...> - 2006-03-28 10:21:18

> I did load matlab sparse matrix to pysparse. It still requires some
> conversion and if the matlab matrix file gets as large as gigabyte in
> size, the simple conversion and loading will surely drag down the
> speed.

I see. That's why I use the petsc binary format. And as to pysparse -- you will have to construct the matrix in your libmesh code using the pysparse C code. Probably.

> BLOPEX web site http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/
> has a PETSc driver code for download. Maybe that will shed us some
> light on how to make it work with libmesh.

Yes, this driver is in src/contrib/blopex in the petsc directory. It calls the lobpcg_solve function to solve the eigenvalue problem, which is probably in the blopex package (I don't have it installed).

Ondrej
From: David Xu <dxu@my...> - 2006-03-28 09:52:04

On 3/28/06, Ondrej Certik <ondrej@...> wrote:
> I also wasn't able to find any other documentation besides sources.

"BLOPEX is available as an external block to the PETSc package from the Mathematics and Computer Science Division of ANL. The BLOPEX-PETSc archive blopex_petsc.tgz contains a primitive BLOPEX-PETSc test driver that can be used as an example to call BLOPEX eigensolvers from PETSc."

http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/blopex_petsc.tgz
From: David Xu <dxu@my...> - 2006-03-28 09:49:42

> Then load the sparse matrix to pysparse in a matlab format. Should be fast
> enough. The code I sent you was slow because I assemble from element matrices
> in python, which is obviously slow.

I did load matlab sparse matrix to pysparse. It still requires some conversion, and if the matlab matrix file gets as large as a gigabyte in size, the simple conversion and loading will surely drag down the speed.

> > Another great eigenvalue solver is BLOPEX
> > (http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/), the
> > performance is comparable to Pysparse/JDBSYM and it has become one of
> > PETSc 2.3.1's external packages. But I haven't figured out how to use it
> > in libmesh. Maybe it's worth a try.
>
> It is -- but why is the eigensolver in petsc? I thought that petsc doesn't have
> interfaces to any eigensolvers (slepc does). And slepc doesn't (yet) have a
> blopex interface. Do you use it through petsc, or directly? And if through
> petsc, how?

The BLOPEX web site http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/ has a PETSc driver code for download. Maybe that will shed us some light on how to make it work with libmesh.

David
From: Ondrej Certik <ondrej@ce...> - 2006-03-28 09:44:13

I also wasn't able to find any other documentation besides sources. But arpack, slepc arnoldi, and pysparse are enough working options for me.

Ondrej

On Tue, Mar 28, 2006 at 01:39:42AM -0800, David Xu wrote:
> > It is -- but why is the eigensolver in petsc? I thought that petsc doesn't have
> > interfaces to any eigensolvers (slepc does). And slepc doesn't (yet) have a
> > blopex interface. Do you use it through petsc, or directly? And if through
> > petsc, how?
>
> This is also my question: how to get it working with libmesh, since it
> comes with PETSc?
> See the bottom of this link:
> http://www-unix.mcs.anl.gov/petsc/petsc-2/documentation/changes/231.html
From: David Xu <dxu@my...> - 2006-03-28 09:39:44

> It is -- but why is the eigensolver in petsc? I thought that petsc doesn't have
> interfaces to any eigensolvers (slepc does). And slepc doesn't (yet) have a
> blopex interface. Do you use it through petsc, or directly? And if through
> petsc, how?

This is also my question: how to get it working with libmesh, since it comes with PETSc?
See the bottom of this link:
http://www-unix.mcs.anl.gov/petsc/petsc-2/documentation/changes/231.html
From: Ondrej Certik <ondrej@ce...> - 2006-03-28 09:34:50

On Mon, Mar 27, 2006 at 09:43:04PM -0800, David Xu wrote:
> Ondrej,
>
> Thanks for the tips. How's the matrix file i/o performance when using
> the PETSC_VIEWER_BINARY_DEFAULT format for large matrices (size >
> 100000)?

I was doing these tests -- it's something like 0.5 s. The same with loading it in my external petsc program. So it's ok. But I wasn't trying some very large matrices, like size > a million.

> What's your libmesh code for exporting PETSC_VIEWER_BINARY_DEFAULT
> format matrices?

void save_sparse_matrix(SparseMatrix<Number>& M, const char *fname)
{
  //M.print_matlab(fname);
  int ierr = 0;
  PetscViewer petsc_viewer;
  M.close();
  ierr = PetscViewerCreate (libMesh::COMM_WORLD, &petsc_viewer);
         CHKERRABORT(libMesh::COMM_WORLD, ierr);
  ierr = PetscViewerBinaryOpen (libMesh::COMM_WORLD, fname,
                                FILE_MODE_WRITE, &petsc_viewer);
         CHKERRABORT(libMesh::COMM_WORLD, ierr);
  ierr = PetscViewerSetFormat (petsc_viewer, PETSC_VIEWER_BINARY_DEFAULT);
         CHKERRABORT(libMesh::COMM_WORLD, ierr);
  Mat mat = ((PetscMatrix<Number>&)(M)).mat();
  ierr = MatView (mat, petsc_viewer);
         CHKERRABORT(libMesh::COMM_WORLD, ierr);
  ierr = PetscViewerDestroy (petsc_viewer);
         CHKERRABORT(libMesh::COMM_WORLD, ierr);
}

> As we discussed before, I tried pysparse with success, the bottleneck
> is the file i/o. I also compiled ARPACK with SLEPc and enabled ARPACK

Then load the sparse matrix to pysparse in a matlab format. Should be fast enough. The code I sent you was slow because I assemble from element matrices in python, which is obviously slow.

> in libmesh. It worked in finding smallest eigenvalues, but it wasn't
> very fast compared to pysparse + time spent on file i/o.

Arpack can also be very fast -- see my last email. It was as fast as pysparse. Even a little faster than the arnoldi/lanczos in slepc.

> It would be nice to implement the JDBSYM algorithm in libmesh. I've been
> trying to compile the JDBSYM library (from its C code implementation)
> with my libmesh code, haven't got it to work yet. The reason I need the
> eigensolver integrated with libmesh code is to be able to postprocess
> the eigenvectors, say, normalization. Do you know how to do that with
> pysparse?

Sure. The jdsym.jdsym(...) returns both eigenvalues and eigenvectors, so export the eigenvector to a file and do what you want with it. Better would be to integrate jdsym into slepc.

> Another great eigenvalue solver is BLOPEX
> (http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/), the
> performance is comparable to Pysparse/JDBSYM and it has become one of
> PETSc 2.3.1's external packages. But I haven't figured out how to use it
> in libmesh. Maybe it's worth a try.

It is -- but why is the eigensolver in petsc? I thought that petsc doesn't have interfaces to any eigensolvers (slepc does). And slepc doesn't (yet) have a blopex interface. Do you use it through petsc, or directly? And if through petsc, how?

Ondrej
From: David Xu <dxu@my...> - 2006-03-28 05:43:08

Ondrej,

Thanks for the tips. How's the matrix file i/o performance when using the PETSC_VIEWER_BINARY_DEFAULT format for large matrices (size > 100000)? What's your libmesh code for exporting PETSC_VIEWER_BINARY_DEFAULT format matrices?

As we discussed before, I tried pysparse with success; the bottleneck is the file i/o. I also compiled ARPACK with SLEPc and enabled ARPACK in libmesh. It worked in finding smallest eigenvalues, but it wasn't very fast compared to pysparse + time spent on file i/o.

It would be nice to implement the JDBSYM algorithm in libmesh. I've been trying to compile the JDBSYM library (from its C code implementation) with my libmesh code, haven't got it to work yet. The reason I need the eigensolver integrated with libmesh code is to be able to postprocess the eigenvectors, say, normalization. Do you know how to do that with pysparse?

Another great eigenvalue solver is BLOPEX (http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/), the performance is comparable to Pysparse/JDBSYM and it has become one of PETSc 2.3.1's external packages. But I haven't figured out how to use it in libmesh. Maybe it's worth a try.

Thanks again for sharing your experience!

David

On 3/27/06, Ondrej Certik <ondrej@...> wrote:
> Hello,
>
> if anyone is also interested in computing the lowest eigenvalues, below is how.
>
> Libmesh already has support for slepc, but it's not really needed if the only
> thing I want is to construct global matrices, save them to disk and solve them
> (externally).
>
> I am solving a generalized hermitian problem Ax = kBx. So after assembling
> matrices A and B, I save them to a file using the PETSC_VIEWER_BINARY_DEFAULT
> format. Then I use example 7 from slepc
>
> http://www.grycap.upv.es/slepc/handson/handson3.html
>
> to solve them. The advantage is that now I can play with various options on the
> command line.
>
> 1) To find the largest eigenvalues:
>
>   ./ex7 -f1 matrixA -f2 matrixB
>
> works with any solver.
>
> 2) To find the smallest eigenvalues, either:
>
>   ./ex7 -f1 matrixA -f2 matrixB -st_type sinvert
>
> which works with any solver, or:
>
>   ./ex7 -f1 matrixA -f2 matrixB -eps_type arpack -eps_smallest_real
>
> which works only with the arpack solver. It's very slow, I have no idea why.
> This was around 20 times faster:
>
>   ./ex7 -f1 matrixA -f2 matrixB -st_type sinvert -eps_type arpack
>
> You can also change the tolerance or the number of requested eigenvalues using
> the options:
>
>   -eps_nev 2 -eps_tol 1e-3
>
> All of this can of course be built into libmesh; I think it already is. Another
> option is to use pysparse, the solver is also very good.
>
> Ondrej
From: Shengli Xu <shengli.xu.xu@gm...> - 2006-03-28 05:04:23

Subject: [Libmesh-users] A strange phenomenon in applying periodic boundary condition
Date: Mon, 27 Mar 2006 18:44:56 +0800

Dear Libmesh developers and users,

I am solving the stokes equation with periodic boundary conditions. The velocity variables U and V on the four sides of the compute domain have periodic boundary conditions applied; for example U(left) == U(right), V(left) == V(right), U(bottom) == U(top), V(bottom) == V(top). I feel uncertain about how to deal with the four corners' dofs. It is needed that U(corner1) == U(corner2) == U(corner3) == U(corner4) and V(corner1) == V(corner2) == V(corner3) == V(corner4). I can't get the correct periodic result at the four corners till now.

In my program, dof_map.add_constraint_row() is used to add the coupled dofs. During the loop assembling the element stiffness and right-hand-side load, dof_map.constrain_element_matrix_and_vector(Ke, Fe, dof_indices) is called before system.matrix->add_matrix(Ke, dof_indices) and system.rhs->add_vector(Fe, dof_indices).

The velocity periodic boundary condition on the left and right sides is very good. For example, in my test example, node 19 is on the left side (not a corner), node 95 is on the right side (not a corner), and nodes 19 and 95 are a periodic couple. The result is U(19) == U(95) == 0.00063330345, V(19) == V(95) == 6.5320225e-06. This is very good.

But the periodic result on the bottom and top sides has a little difference. For example, node 209 is on the bottom side and node 16 is on the top side. The result is U(16) == 0.00097276895, U(209) == 0.00097277163, V(16) == 1.3452936e-06, V(209) == 1.3444377e-06. They have just a little difference. Is it correct? Why is the result not as good as that of the left and right sides?

The constraint on the four corners that U(corner1) == U(corner2) == U(corner3) == U(corner4), V(corner1) == V(corner2) == V(corner3) == V(corner4) does not give a reasonable result. The four corners are: corner1 is node 13, corner2 is node 90, corner3 is node 205, corner4 is node 281. U(13) == 0.00014477273, U(90) == 2.8227261, U(205) == 9.6206806e-05, U(281) == 2.0338666e-05, V(13) == 8.11487e-05, V(90) == 7.3168002e-06, V(205) == 2.4994687e-05, V(281) == 0.00011346019. They are not right.

The four corners are dealt with like this:

  ...
  typedef std::vector<Node*> evec;
  evec Ncorner;
  MeshBase::const_node_iterator nod = mesh.active_nodes_begin();
  const MeshBase::const_node_iterator end_nod = mesh.active_nodes_end();
  for ( ; nod != end_nod; ++nod) {
    Node* node = (*nod);
    // the domain is 1x1, the four corners are (0.,0.),(1.,0.),(1.,1.),(0.,1.)
    if (((*node)(0)<0.01 && (*node)(1)<0.01) || ((*node)(0)>0.99 && (*node)(1)<0.01) ||
        ((*node)(0)<0.01 && (*node)(1)>0.99) || ((*node)(0)>0.99 && (*node)(1)>0.99)) {
      Ncorner.push_back(node);
    }
  }
  // four corner periodic b.c.
  evec::iterator it = Ncorner.begin();
  unsigned int u1 = (*it)->dof_number(0,0,0);
  unsigned int v1 = (*it)->dof_number(0,1,0);

  DofConstraintRow cu, cv;
  for (it += 1; it != Ncorner.end(); ++it) {
    unsigned int u2 = (*it)->dof_number(0,0,0);
    unsigned int v2 = (*it)->dof_number(0,1,0);
    cu[u2] = 1.0;
    cv[v2] = 1.0;
  }

  dof_map.add_constraint_row(u1, cu);
  dof_map.add_constraint_row(v1, cv);
  ...

I think this is wrong, because in the function dof_map.add_constraint_row(u1, cu), u1 is the slave dof and must be eliminated. I changed the code like this:

  ...
  evec::iterator it = Ncorner.begin();
  unsigned int u1 = (*it)->dof_number(0,0,0); // u1 is the master dof, is it right?
  unsigned int v1 = (*it)->dof_number(0,1,0);

  DofConstraintRow cu, cv;
  cu[u1] = 1.0;
  cv[v1] = 1.0;
  for (it += 1; it != Ncorner.end(); ++it) {
    unsigned int u2 = (*it)->dof_number(0,0,0); // u2 is the slave dof, is it right?
    unsigned int v2 = (*it)->dof_number(0,1,0);
    dof_map.add_constraint_row(u2, cu);
    dof_map.add_constraint_row(v2, cv);
  }
  ...

Then the result at the four corners is:

  U(13)  = 0.00086189012,  V(13)  = 4.0574022e-06
  U(90)  = 0.00086189654,  V(90)  = 4.05748e-06
  U(205) = 0.00086189954,  V(205) = 4.0571357e-06
  U(281) = 0.00086189787,  V(281) = 4.0574268e-06

I think that the dof constraint is right. But why do they have a little difference? Maybe it is decided by the method used in libmesh. Which method is used for constraint equations in libmesh: the Lagrange multiplier method, the penalty method, or the transformation method?

Another question is about the information on linear solver convergence. I use system.n_linear_iterations() and system.final_linear_residual() to output the information. Without the four-corner constraint, the output is: Linear solver converged at step: 5000, final residual: 1.96297e-06. But the total number of degrees of freedom is only 1232; does it need so many iterations? With the four-corner constraint, the output is: Linear solver converged at step: 5000, final residual: 1.91226e-05. I think the residual is too big. How can I change it?