From: <luthi@gi...> - 2004-09-22 00:11:05

Hi,

PETSc 2.2.1 complains about the following call in NewmarkSystem (line 180):

  void NewmarkSystem::compute_matrix ()
  {
    // zero the global matrix
    this->matrix->zero();

with

  [0]PETSC ERROR: MatZeroEntries() line 3730 in src/mat/interface/matrix.c
  [0]PETSC ERROR: Object is in wrong state!
  [0]PETSC ERROR: Not for matrices where you have set values but not yet assembled!
  [0]PETSC ERROR: User provided function() line 170 in unknowndirectory/src/numerics/petsc_matrix.C

Any clues what to do with this?

Best, Martin

--
Martin Lüthi            University of Alaska, Fairbanks
Dr. sc. nat.            tel: +1 (907) 474 7443
                        fax: +1 (907) 474 7290
mel: luthi@...          http://gi.alaska.edu/~luthi/
From: <luthi@gi...> - 2004-09-21 23:50:16

Hi,

PETSc changed their solver interface a little, so here is a trivial patch for petsc_interface.C.

Best, Martin
From: Benjamin S. Kirk <benkirk@cf...> - 2004-09-11 03:37:46

You are right that the variable order and the element order are related, but sadly the relationship is a little more complicated than what you described...

In general, Elem::order() defines the "natural" order of the element. This is the order implied by a Lagrange nodal basis that employs all the nodes of the element, and it is the basis used for geometric operations: mapping the physical element to a reference element for shape function evaluation, computing the element Jacobian, etc. Right now Elem::order() is hard-coded for each element type to be what you would expect: Quad4::order() is FIRST, Quad8::order() is SECOND, Quad9::order() is SECOND, and so on. If we had a Quad16 bicubic element, its order would be THIRD. Since this is the order used in the geometric mapping, a 2D mesh composed of Quad9s and Tri6s can represent quadratic boundaries exactly, since the elements may have second-order curved edges. (Note that hard-coding Elem::order() in this case may be inefficient: if the sides are straight with equispaced nodes, there is no need to compute a second-order map. In the future, full-order mapping may only be applied to elements touching the boundary of the domain.)

On the other hand, the approximation order is exactly that: the polynomial degree used in approximating an unknown. This is where the element order and the approximation order get a bit confusing...

- For Lagrange finite elements (the default) you have *at most* one DOF per node, so here your approximation order must be <= the element order. For example:
  o You *could* do first-order approximation on a Quad9; there will just not be any DOFs at the edge or center nodes.
  o You *cannot* do second-order approximation on a Quad4.

- For hierarchic finite elements there may be multiple DOFs per node, so in this case your approximation order may be higher than the element order. Another example:
  o You *can* do third-order approximation on a Quad9. In this case there will be 1 DOF per corner node, 2 DOFs per edge node, and 4 DOFs at the center node.

- For DG finite elements you can have any approximation order on any element order.

As for the other questions: a mesh should contain elements of the same order. A mix of Quad8s, Quad9s, and Tri6s is valid, but a mix of Tri3s and Quad9s is not. (In the former case all edges have a middle node; in the latter only some would.) You can add variables of whatever order you want, provided you observe the previous "rules." Sometimes, however, the equations you are solving actually limit the choice of variable approximation. See, for example, example 11. There we solve the Stokes equations for a low-speed incompressible fluid. We use second-order approximation for the Cartesian velocity components, but only first-order for the pressure. In the case of the Stokes equations, using equal-order interpolation for the velocity and pressure can actually produce a mathematically unstable approximation. If you are curious about similar restrictions for the class of problems you are interested in, I recommend checking the finite element literature related to such applications... If there is no endless babble about approximation space compatibility, LBB, inf-sup, etc., then equal-order approximation is probably safe.

Let me know if you have any more questions,
Ben

PS: A long-term goal is to add so-called p-refinement support, in which case the approximation order can actually vary from element to element.

Manav Bhatia wrote:
> Hi,
>   I needed some help in understanding the order of the variable that we
> add to a system and that of the elements. As I would guess, the former
> has to be less than or equal to the latter. But I cannot understand the
> following:
> - what would happen if the mesh has elements of mixed order, say first
>   and second, and then we add a variable of order first?
> - what would happen if we have two variables in the same system of the
>   same order?
> - are there any specific guidelines about choosing the variable's order
>   in such mixed cases?
>
> Thanks in advance.
>
> Regards,
> Manav
>
> _______________________________________________
> Libmesh-users mailing list
> Libmesh-users@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: Rafa Rodriguez Galvan <rafael.rodriguez@uc...> - 2004-08-16 09:16:05

Hello,

I am testing, for my Ph.D. thesis, a finite element defined as the tensor product of a 1D monomial constant element times a 1D Lagrange first-order element. The resulting element is what we may call a "QUAD2" element, which (I think) is not implemented in libmesh:

        1
   +----O----+
   |         |
   |         |
   +----O----+
        2

I have been looking at the code of libmesh, trying to implement it, but it is difficult (of course, I suppose libmesh has not been developed to make creating new elements easy): I have to change the core of a lot of functions, and it is not easy to be sure I am doing it the right way.

The question is: is there any easy way to do it? Is there any prebuilt element like this "QUAD2" that I have not found in libmesh?

Thanks.

J. Rafael Rodríguez Galván.
Mathematics Dept., Cádiz University, Spain.
From: Rafa Rodriguez Galvan <rafael.rodriguez@uc...> - 2004-08-16 09:00:22

Hello,

I am testing, for my Ph.D. thesis, a finite element that we define as the tensor product of a 1D monomial constant element times a 1D Lagrange first-order element:

   +----O----+
   |         |
   |         |

which is not implemented in libmesh (I think):

J. Rafael Rodríguez Galván.
Mathematics Dept., University of Cádiz.
email: rafael.rodriguez@...
Tel: 956 01 5478. Fax: 956 01 5378.
From: John Peterson <peterson@cf...> - 2004-08-09 17:45:23

Hi all, I just checked in a few changes, and the current CVS compiles under AIX 5.2.0.0 (xlC version 6). John 
From: Steffen Petersen <steffen.petersen@tu...> - 2004-08-05 22:18:46

John Peterson wrote:
> Can anyone comment on whether or not the library
> still compiles on the HP compiler?

It used to do so a few months ago. I have not tried recently, but I can check whether it still does next week.

Steffen

> Is it the only compiler which requires the OStringStream
> replacement for std::ostringstream?
>
> Thanks,
> John
>
> _______________________________________________
> Libmesh-devel mailing list
> Libmesh-devel@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-devel
From: John Peterson <peterson@cf...> - 2004-08-05 17:28:58

Can anyone comment on whether or not the library still compiles on the HP compiler? Is it the only compiler which requires the OStringStream replacement for std::ostringstream? Thanks, John 
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2004-08-03 16:07:58

The current CVS source has been tagged as libmesh-0_4_3-rc1 and a new .tar.gz file is available. Please report any usability or portability issues you may encounter.

We will be moving to a more frequent release cycle by omitting the tedious multi-platform, multi-configuration test cycle. Any portability issues discovered in the "rc*" releases will be addressed in the next major "official" version, 0.4.3. The idea here is to provide more up-to-date .tar.gz files for people who don't use CVS...

--
Benjamin S. Kirk
benjamin.kirk@...
NASA/JSC, Applied Aerosciences & CFD Branch
281-483-9491
From: John Peterson <peterson@cf...> - 2004-07-08 16:07:53

Hi,

In general, the weights should add up to the area of the element. In this case the area is 1/2, for both the TRI3 and TRI6 elements. In the reference you mention, Eq. 6.3.7 on page 10 has the area A_e = 1/2 pulled out of the summation, while we have it multiplied through. That should account for the difference.

John

Ahmed ELSheikh writes:
> In the file
> http://libmesh.sourceforge.net/doxygen/quadrature__gauss__2D_8C-source.html
> the weights of the integration points are specified as follows:
>
>   case THIRD:
>     {
>       // Exact for cubics
>       _points.resize(4);
>       _weights.resize(4);
>
>       _points[0](0) = .33333333333333333333333333333333;
>       _points[0](1) = .33333333333333333333333333333333;
>       _points[1](0) = .2;
>       _points[1](1) = .6;
>       _points[2](0) = .2;
>       _points[2](1) = .2;
>       _points[3](0) = .6;
>       _points[3](1) = .2;
>
>       _weights[0] = -27./96.;
>       _weights[1] = 25./96.;
>       _weights[2] = 25./96.;
>       _weights[3] = 25./96.;
>
> In the Zienkiewicz book and in the notes by Flaherty
> (www.rpi.edu/~gilade/fem6.ps, page 11) the weights are double what is
> in LibMesh. On the other hand, the resulting stiffness matrices are
> correct.
>
> I do remember there is something about the calculation of the Jacobian
> for triangles (2 * Area), but I couldn't figure out what happens when
> the Lagrangian basis is used for the Jacobian calculation.
> It has been a while since I took the FE course :)
>
> Thanks in advance.
> Ahmed
From: Ahmed ELSheikh <elsheiam@mc...> - 2004-07-08 15:50:22

In the file

  http://libmesh.sourceforge.net/doxygen/quadrature__gauss__2D_8C-source.html

the weights of the integration points are specified as follows:

  case THIRD:
    {
      // Exact for cubics
      _points.resize(4);
      _weights.resize(4);

      _points[0](0) = .33333333333333333333333333333333;
      _points[0](1) = .33333333333333333333333333333333;
      _points[1](0) = .2;
      _points[1](1) = .6;
      _points[2](0) = .2;
      _points[2](1) = .2;
      _points[3](0) = .6;
      _points[3](1) = .2;

      _weights[0] = -27./96.;
      _weights[1] = 25./96.;
      _weights[2] = 25./96.;
      _weights[3] = 25./96.;

In the Zienkiewicz book and in the notes by Flaherty (www.rpi.edu/~gilade/fem6.ps, page 11) the weights are double what is in LibMesh. On the other hand, the resulting stiffness matrices are correct.

I do remember there is something about the calculation of the Jacobian for triangles (2 * Area), but I couldn't figure out what happens when the Lagrangian basis is used for the Jacobian calculation. It has been a while since I took the FE course :)

Thanks in advance.
Ahmed
From: John Peterson <peterson@cf...> - 2004-07-07 14:04:25

Alright, I'll work on the second one (GMSH support): http://www.geuz.org/gmsh
Looks like it requires both the GSL (http://sources.redhat.com/gsl/) and FLTK to run.

John

KIRK, BENJAMIN (JSC-EG) (NASA) writes:
> Apparently people have begun using the feature requests available
> through Sourceforge. Problem is, I don't think any of us have been
> monitoring these requests! I think the OpenDX and GMSH requests should
> be easy to accommodate.
>
> http://sourceforge.net/tracker/?atid=530257&group_id=71130&func=browse
>
> I'll look into the OpenDX output.
>
> --
> Benjamin S. Kirk
> benjamin.kirk@...
> NASA/JSC, Applied Aerosciences & CFD Branch
> 281-483-9491
From: KIRK, BENJAMIN (JSC-EG) (NASA) <benjamin.kirk1@na...> - 2004-07-06 17:59:17

Apparently people have begun using the feature requests available through Sourceforge. Problem is, I don't think any of us have been monitoring these requests! I think the OpenDX and GMSH requests should be easy to accommodate.

http://sourceforge.net/tracker/?atid=530257&group_id=71130&func=browse

I'll look into the OpenDX output.

--
Benjamin S. Kirk
benjamin.kirk@...
NASA/JSC, Applied Aerosciences & CFD Branch
281-483-9491
From: Steffen Petersen <steffen.petersen@tu...> - 2004-06-22 14:28:07

Hello Alain,

find attached a paper on acoustic infinite elements (it will be presented at ISMA 2004 in Leuven). Although the acoustic problems investigated may not be too interesting for you, it covers the scaling aspects as well as large problems with up to ~1e6 degrees of freedom.

Steffen

> Another question: do you have some results (articles, ...) on
> scalability or large scale // applications with libMesh? Maybe it
> could help me.
>
> Any ideas will be appreciated.
> Sincerely,
> Alain
From: Benjamin S. Kirk <benkirk@cf...> - 2004-06-08 02:45:43

Yes, the element node numbering corresponds to the local DOF numbering. DOFs are first assigned to the nodes of the element, and then to the element itself.

Strictly speaking, ex3 is more correct: it forms the proper L2 projection matrix for each node on the boundary. A "shortcut" is made in ex10, in which we "lump" the mass (projection) matrix. Also, in that example linear elements are used, so each node has a DOF and the local DOF numbering corresponds to the local node numbering. Note that ex10 should break if you use linear approximation on a QUAD8 element, for instance, since the mid-face nodes have no DOFs. In that case the code in ex3 is correct. I should probably change it to be consistent...

In the case that a node lives on n sides (e.g. n=2, a corner in 2D), either approach will have a similar effect: it adds an identical row to the matrix n times and adds n (possibly) different RHS values. The net effect is to assign the node the average of all the Dirichlet values it touches.

In general, the element node (and hence local DOF) numbering is designed to be nested... That way only the new nodes introduced in a higher-order element need to be numbered. This has strengths & weaknesses... Clearly a HEX64 numbered like this would be confusing! Anyway, the local DOFs are assigned to nodes first, then to the element itself.

Finally, there is a more "conventional" way to assign Dirichlet BCs in the case of Lagrange elements, but it requires more "if" tests. Basically, the problem is that it is possible for an element to have a node with a BC even if it has no sides on the boundary (especially triangles and tets). In this case you must test every node in the element, not just the sides on the boundary. We have found the penalty approach to work quite well, but if you want more info on the more "standard" treatment let me know.

Note that the L2 projection brought in by the penalty approach is particularly appealing when you do not have a nodal basis, as in the case of the hierarchics...

Ben

Ahmed ELSheikh wrote:
> I am wondering if this is correct...
>
> There is an implicit assumption about the DOF numbering for an element,
> where the local node numbering corresponds to the element-local degree
> of freedom. (I checked DofMap::dof_indices.)
>
> So this is why the Dirichlet boundary conditions applied in ex10 work,
> where n is the node number and Ke(n,n) and Fe(n) are the corresponding
> contributions for the penalty function.
>
> Following up on the same subject of Dirichlet BCs: example 3 has a
> different treatment, where we find
>
>   for (unsigned int i=0; i<phi_face.size(); i++)
>     for (unsigned int j=0; j<phi_face.size(); j++)
>       Ke(i,j) += JxW_face[qp]*penalty*phi_face[i][qp]*phi_face[j][qp];
>
>   // Right-hand-side contribution of the L2 projection.
>   for (unsigned int i=0; i<phi_face.size(); i++)
>     Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
>
> Is there any possible case when i is not equal to j and Ke(i,j) has a
> value? Or are we re-adding Fe(0) on each face, say if the element has
> two NULL neighbors?
>
> Ahmed
From: Ahmed ELSheikh <elsheiam@mc...> - 2004-06-07 23:21:36

I am wondering if this is correct...

There is an implicit assumption about the DOF numbering for an element, where the local node numbering corresponds to the element-local degree of freedom. (I checked DofMap::dof_indices.)

So this is why the Dirichlet boundary conditions applied in ex10 work, where n is the node number and Ke(n,n) and Fe(n) are the corresponding contributions for the penalty function.

Following up on the same subject of Dirichlet BCs: example 3 has a different treatment, where we find

  for (unsigned int i=0; i<phi_face.size(); i++)
    for (unsigned int j=0; j<phi_face.size(); j++)
      Ke(i,j) += JxW_face[qp]*penalty*phi_face[i][qp]*phi_face[j][qp];

  // Right-hand-side contribution of the L2 projection.
  for (unsigned int i=0; i<phi_face.size(); i++)
    Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];

Is there any possible case when i is not equal to j and Ke(i,j) has a value? Or are we re-adding Fe(0) on each face, say if the element has two NULL neighbors?

Ahmed
From: John Peterson <peterson@cf...> - 2004-06-04 14:44:17

Ahmed ELSheikh writes:
> The problem with the number of unknowns appears for elements on the
> boundary; a linear chain of quad elements is a simple example. As you
> said, for linear and quadratic reconstructions it is somehow OK, but
> if the initial mesh is bad there might be a problem.
>
>> Eventually I'd like the library to look for LAPACK in ./configure and
>> then use it internal to the DenseMatrix<> and DenseVector<> class. In
>> the meantime, if you'd like to "share" an SVD implementation we'd be
>> happy to include it!
>
> I would recommend adopting an SVD implementation like the LU one, so
> if you run it in parallel it works too. I will attach the code I have;
> it is the TNT implementation of the SVD. I modified it to work with
> DenseMatrix<> and DenseVector<>. I call it from the error estimator like

FYI: here is the copyright notice from the page, if anyone is interested. It's public-domain software, so I guess there should be no problem copying/distributing it under the LGPL.

  Copyright Notice

  This software is a cooperative product of The MathWorks and the
  National Institute of Standards and Technology (NIST) which has been
  released to the public domain. Neither The MathWorks nor NIST assumes
  any responsibility whatsoever for its use by other parties, and makes
  no guarantees, expressed or implied, about its quality, reliability,
  or any other characteristic.

John
From: Ahmed ELSheikh <elsheiam@mc...> - 2004-06-04 04:08:12

The problem with the number of unknowns appears for elements on the boundary; a linear chain of quad elements is a simple example. As you said, for linear and quadratic reconstructions it is somehow OK, but if the initial mesh is bad there might be a problem.

> Eventually I'd like the library to look for LAPACK in ./configure and
> then use it internal to the DenseMatrix<> and DenseVector<> class. In
> the meantime, if you'd like to "share" an SVD implementation we'd be
> happy to include it!

I would recommend adopting an SVD implementation like the LU one, so if you run it in parallel it works too. I will attach the code I have; it is the TNT implementation of the SVD. I modified it to work with DenseMatrix<> and DenseVector<>. I call it from the error estimator like

  SVD<double> M_svd(M);
  M_svd.solve(bx, solx);
  M_svd.solve(by, soly);

BUT this code may loop forever. I am checking how to put in a stopping criterion in case there is a convergence problem.

Ahmed

> Ben
>
> Ahmed ELSheikh wrote:
> > That is great... I have already developed similar code :(
> > I started with the same code but I added some modifications.
> >
> > The first is the need to use a local coordinate system for the patch,
> > to avoid the different weighting of x and y. So I did something like
> > this:
> >
> >   // determine x_min, x_max, y_min, y_max, z_min, z_max for all the
> >   // points on the patch, then scale x, y, and z by
> >   // x = (x - x_min)/(x_max - x_min);
> >
> > The other thing is using singular value decomposition to solve.
> > I adopted a code from TNT for that.
> >
> > Do you have any plans for different error estimators?
> >
> > John Peterson wrote:
> > > Note that your technique will only add *face* neighbors of the
> > > current element to the patch, i.e. you will not in general include
> > > the full support of each of the nodal basis functions in your patch.
> >
> > I knew that... It is a bit of a problem here. What I want is a patch
> > with the same system of equations as well as the current solution.
> > This patch may use the same assemble function but on the residuals,
> > or attach its own assembling function.
> >
> > One choice: using these faces I build a map from the patch mesh to
> > the original mesh, then initialize a different equation system for
> > the patch, obtaining what is needed from the old mesh when needed.
> > Attach another assembling function for the error estimation with
> > different boundary conditions. Then instead of solving it using the
> > method solve, use SVD.
> >
> > Does that make sense?
> >
> > Ahmed
From: Benjamin S. Kirk <benkirk@cf...> - 2004-06-04 01:46:09

What a relevant thread! Funny, as John mentioned, we started implementing this yesterday. What approximation order are you using for the reconstructed gradient? Certainly, when the patch size is close to the number of unknowns in the reconstruction (or less!) you can get a horribly conditioned (or singular) system that needs SVD. We have some numerical experiments that suggest LU is OK for linear and quadratic reconstructions when the patches are of "reasonable" size...

Eventually I'd like the library to look for LAPACK in ./configure and then use it internal to the DenseMatrix<> and DenseVector<> classes. In the meantime, if you'd like to "share" an SVD implementation we'd be happy to include it!

Ben

Ahmed ELSheikh wrote:
> That is great... I have already developed similar code :(
> I started with the same code but I added some modifications.
>
> The first is the need to use a local coordinate system for the patch,
> to avoid the different weighting of x and y. So I did something like
> this:
>
>   // determine x_min, x_max, y_min, y_max, z_min, z_max for all the
>   // points on the patch, then scale x, y, and z by
>   // x = (x - x_min)/(x_max - x_min);
>
> The other thing is using singular value decomposition to solve.
> I adopted a code from TNT for that.
>
> Do you have any plans for different error estimators?
>
> John Peterson wrote:
> > Note that your technique will only add *face* neighbors of the
> > current element to the patch, i.e. you will not in general include
> > the full support of each of the nodal basis functions in your patch.
>
> I knew that... It is a bit of a problem here. What I want is a patch
> with the same system of equations as well as the current solution.
> This patch may use the same assemble function but on the residuals,
> or attach its own assembling function.
>
> One choice: using these faces I build a map from the patch mesh to the
> original mesh, then initialize a different equation system for the
> patch, obtaining what is needed from the old mesh when needed. Attach
> another assembling function for the error estimation with different
> boundary conditions. Then instead of solving it using the method
> solve, use SVD.
>
> Does that make sense?
>
> Ahmed
From: John Peterson <peterson@cf...> - 2004-06-03 19:52:21

Ahmed ELSheikh writes:
> That is great... I have already developed similar code :(
> I started with the same code but I added some modifications.
>
> The first is the need to use a local coordinate system for the patch,
> to avoid the different weighting of x and y. So I did something like
> this:
>
>   // determine x_min, x_max, y_min, y_max, z_min, z_max for all the
>   // points on the patch, then scale x, y, and z by
>   // x = (x - x_min)/(x_max - x_min);
>
> The other thing is using singular value decomposition to solve.
> I adopted a code from TNT for that.
>
> Do you have any plans for different error estimators?

I have been thinking briefly about residual-based error indicators, but by definition they are problem-specific and not generally appropriate for inclusion in the library. The best approach (as you mention below) is to reuse the same assemble function already provided by the user to the EquationSystems, but not assemble any global matrix; just compute the residual on each element. That might take some modification to the way we currently code the assembly routines, but it could be done.

A problem with the above approach is that the edge residuals, i.e. the jumps in the normal derivatives across inter-element boundaries, are sometimes the largest contribution to an element residual, and they are not computed during most matrix assembly routines (for continuous Galerkin). So providing a generic

  class ResidualBasedErrorEstimator {};

with the method

  attach_residual_function(...);

would probably be what would eventually make it into the library.

> One choice: using these faces I build a map from the patch mesh to the
> original mesh, then initialize a different equation system for the
> patch, obtaining what is needed from the old mesh when needed. Attach
> another assembling function for the error estimation with different
> boundary conditions. Then instead of solving it using the method
> solve, use SVD.

Hmm... are the systems of equations that you generate singular or nearly singular? We assumed that it would be possible to compute an LU decomposition to solve for the coefficients, but maybe that won't work...?

John
From: Ahmed ELSheikh <elsheiam@mc...> - 2004-06-03 18:35:53

That is great... I have already developed similar code :(
I started with the same code but I added some modifications.

The first is the need to use a local coordinate system for the patch, to avoid the different weighting of x and y. So I did something like this:

  // determine x_min, x_max, y_min, y_max, z_min, z_max for all the
  // points on the patch, then scale x, y, and z by
  // x = (x - x_min)/(x_max - x_min);

The other thing is using singular value decomposition to solve. I adopted a code from TNT for that.

Do you have any plans for different error estimators?

John Peterson wrote:
> Note that your technique will only add *face* neighbors of the
> current element to the patch, i.e. you will not in general include
> the full support of each of the nodal basis functions in your patch.

I knew that... It is a bit of a problem here. What I want is a patch with the same system of equations as well as the current solution. This patch may use the same assemble function but on the residuals, or attach its own assembling function.

One choice: using these faces I build a map from the patch mesh to the original mesh, then initialize a different equation system for the patch, obtaining what is needed from the old mesh when needed. Attach another assembling function for the error estimation with different boundary conditions. Then instead of solving it using the method solve, use SVD.

Does that make sense?

Ahmed
From: John Peterson <peterson@cf...> - 2004-06-03 18:08:01

Ahmed ELSheikh writes:
> Thanks John so much for this fast reply. It works :)
>
> Actually, I am investigating another error estimation technique based
> on solving local patches. Is anybody interested, or did anyone do
> similar work before?

Actually, yes, we have just been discussing a generalization of the Zienkiewicz-Zhu superconvergent patch recovery error indicator, which computes an improved approximation to the flux grad(u) on patches of user-definable size. Note that your technique will only add *face* neighbors of the current element to the patch, i.e. you will not in general include the full support of each of the nodal basis functions in your patch. The latest CVS should have the start of the patch recovery error code in it, if you want to take a look.

>> Neat idea. Does your submesh have a hole in it, or do you insert the
>> current element as well?
>
> I have already inserted the current element before iterating on the
> neighbors. Anyway, I will check that again.

John
From: Ahmed ELSheikh <elsheiam@mc...>  2004-06-03 17:57:09

Thanks John so much for this fast reply. It works :)

Actually, I am investigating another error estimation technique based
on solving local patches. Is anybody interested or did similar work
before?

> Neat idea. Does your submesh have a hole in it, or do you
> insert the current element as well?

I have already inserted the current element before iterating on the
neighbors. Anyway, I will check that again.

Regards,
Ahmed

--
"No great thing is created suddenly." Epictetus
**************************************************
Ahmed ElSheikh <elsheiam_AT_mcmaster.ca>
PhD Candidate, Department of Civil Engineering
McMaster University
1280 Main Street West
Hamilton, Ontario, L8S 4L7, Canada.
**************************************************
From: John Peterson <peterson@cf...>  2004-06-03 17:17:51

Hi,

Neat idea. Does your submesh have a hole in it, or do you insert the
current element as well?

Try the following lines:

const_active_local_elem_iterator pelem_it (patch_mesh.const_elements_begin());
const const_active_local_elem_iterator pelem_end (patch_mesh.const_elements_end());

Since patch_mesh is not a const object, but you want const_iterators
out of it, you have to use the differently-named functions
const_elements_begin() / const_elements_end(). I hope that helps you.

John

Ahmed ELSheikh writes:
> So far everything works fine and patch_mesh.print_info() shows a
> correct size of the patch.
>
> When I try to extract information from the patch_mesh I used
>
> const_active_local_elem_iterator pelem_it (patch_mesh.elements_begin());
> const const_active_local_elem_iterator pelem_end (patch_mesh.elements_end());
>
> for (; pelem_it != pelem_end; ++pelem_it)
> {
>   const Elem* e = *pelem_it;
>   e->write_tecplot_connectivity(std::cout);
>   for (unsigned int n=0; n<e->n_nodes(); n++)
>     e->point(n).print();
> }
>
> But I got an error related to the iterators. I'll attach it at the
> end of this email.
>
> Any advice on what I am missing.
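[Editor's note: the "differently-named function" pattern John describes can be shown with a toy container. `Bag` and its members are illustrative names, not the libMesh Mesh API: a const-qualified member alone is not enough, because calling elements_begin() on a non-const object selects the non-const overload, so a separate const_elements_begin() is provided.]

```cpp
#include <vector>

// Toy container mirroring the Mesh interface in the reply above.
struct Bag
{
  std::vector<int> data;

  // Overloaded on const-ness: a non-const Bag yields the mutable iterator.
  std::vector<int>::iterator       elements_begin()       { return data.begin(); }
  std::vector<int>::const_iterator elements_begin() const { return data.begin(); }

  // The differently-named accessor: always a const_iterator,
  // even when called on a non-const Bag.
  std::vector<int>::const_iterator const_elements_begin() const
  { return data.begin(); }
};
```

With a non-const `Bag b`, `b.elements_begin()` returns the mutable iterator; `b.const_elements_begin()` is how you ask for read-only iteration, analogous to `patch_mesh.const_elements_begin()`.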
> Ahmed
>
> [quoted compiler errors snipped; the full listing appears in the original message below]
From: Ahmed ELSheikh <elsheiam@mc...>  2004-06-03 15:25:52

Hi,

I want to get a patch of all elements and initialize a new submesh with
these elements. Say all the neighbors of one element (I didn't use the
neighbor iterator because later I will have different types of patches).
So this is what I did:

////
const_active_local_elem_iterator elem_it (mesh.elements_begin());
const const_active_local_elem_iterator elem_end(mesh.elements_end());

for (; elem_it != elem_end; ++elem_it)
{
  const Elem* input_el = *elem_it;

  Mesh patch_mesh (2);

  // a vector to store the patch elements
  std::vector<const Elem *> elements_patch;
  elements_patch.reserve(input_el->n_neighbors()+1);

  // insert the element itself
  elements_patch.push_back(input_el);

  for (unsigned int n_e=0; n_e<input_el->n_neighbors(); n_e++)
    if (input_el->neighbor(n_e) != NULL)
      {
        Elem* ef = input_el->neighbor(n_e);
        elements_patch.push_back(ef);
      } // end neighbors

  // create iterators to pass to create_submesh
  const_elem_iterator it(std::make_pair(elements_patch.begin(),
                                        elements_patch.end()));
  const const_elem_iterator it_end(std::make_pair(
      ((std::vector<const Elem*>*) &elements_patch)->end(),
      ((std::vector<const Elem*>*) &elements_patch)->end()));

  mesh.create_submesh (patch_mesh, it, it_end);
  patch_mesh.print_info();
} // end of elem_it
////

So far everything works fine and patch_mesh.print_info() shows a
correct size of the patch.

When I try to extract information from the patch_mesh I used

const_active_local_elem_iterator pelem_it (patch_mesh.elements_begin());
const const_active_local_elem_iterator pelem_end(patch_mesh.elements_end());

for (; pelem_it != pelem_end; ++pelem_it)
{
  const Elem* e = *pelem_it;
  e->write_tecplot_connectivity(std::cout);
  for (unsigned int n=0; n<e->n_nodes(); n++)
    e->point(n).print();
}

But I got an error related to the iterators. I'll attach it at the end
of this email.

Any advice on what I am missing.
Ahmed

///// -- compilation errors
/usr/local/include/c++/3.3.3/bits/stl_pair.h: In constructor
    `std::pair<_T1, _T2>::pair(const std::pair<_U1, _U2>&)
    [with _U1 = __gnu_cxx::__normal_iterator<Elem**, std::vector<Elem*, std::allocator<Elem*> > >,
     _U2 = __gnu_cxx::__normal_iterator<Elem**, std::vector<Elem*, std::allocator<Elem*> > >,
     _T1 = __gnu_cxx::__normal_iterator<const Elem* const*, std::vector<const Elem*, std::allocator<const Elem*> > >,
     _T2 = __gnu_cxx::__normal_iterator<const Elem* const*, std::vector<const Elem*, std::allocator<const Elem*> > >]':
/work/elsheiam/libmesh/include/mesh/elem_iterators.h:851: instantiated from here
/usr/local/include/c++/3.3.3/bits/stl_pair.h:88: error: no matching function for call to
    `__gnu_cxx::__normal_iterator<const Elem* const*, std::vector<const Elem*, std::allocator<const Elem*> > >::__normal_iterator(
     const __gnu_cxx::__normal_iterator<Elem**, std::vector<Elem*, std::allocator<Elem*> > >&)'
/usr/local/include/c++/3.3.3/bits/stl_iterator.h:580: error: candidates are:
    __gnu_cxx::__normal_iterator<const Elem* const*, std::vector<const Elem*, std::allocator<const Elem*> > >::__normal_iterator(
     const __gnu_cxx::__normal_iterator<const Elem* const*, std::vector<const Elem*, std::allocator<const Elem*> > >&)
/usr/local/include/c++/3.3.3/bits/stl_iterator.h:593: error:
    __gnu_cxx::__normal_iterator<_Iterator, _Container>::__normal_iterator(const _Iterator&)
    [with _Iterator = const Elem* const*, _Container = std::vector<const Elem*, std::allocator<const Elem*> >]
/usr/local/include/c++/3.3.3/bits/stl_iterator.h:590: error:
    __gnu_cxx::__normal_iterator<_Iterator, _Container>::__normal_iterator()
    [with _Iterator = const Elem* const*, _Container = std::vector<const Elem*, std::allocator<const Elem*> >]

[the same "no matching function" error and candidate list are repeated for the second member of the pair]
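[Editor's note: the diagnostic above boils down to an iterator-type mismatch: `std::vector<const Elem*>::iterator` cannot be constructed from `std::vector<Elem*>::iterator`, because the two vectors have different element types and libstdc++'s `__normal_iterator` only converts between the iterator and const_iterator of the same container. A minimal sketch of the workaround the posted code already uses (copying the pointers into a vector of `const Elem*`); `to_const_view` is a hypothetical helper name.]

```cpp
#include <vector>

struct Elem { int id; };

// Copy Elem* pointers into a vector of const Elem*.  The pointer
// conversion Elem* -> const Elem* is legal element-by-element, even
// though no conversion exists between the two vectors' iterator types,
// which is exactly what the gcc 3.3 "no matching function" error reports.
std::vector<const Elem*> to_const_view(const std::vector<Elem*>& v)
{
  return std::vector<const Elem*>(v.begin(), v.end());
}
```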