From: Roy Stogner <roystgnr@ic...> - 2008-02-14 01:23:23

On Thu, 14 Feb 2008, Lorenzo Botti wrote:

> I have tried to use DofMap::_dof_coupling but it seems that an issue
> with amr arises... maybe I miss something obvious...
> For example if I solve an advection-diffusion equation in 3D with a
> semi-implicit scheme I want to save the memory required to store the
> Kuv, Kuw, Kvw (and also Kvu, Kwu, Kwv) blocks of each of my element
> matrices. With _dof_coupling I can obtain the right sparsity pattern,
> but then at assembly time I'll need a sparse element matrix. I can
> instead decide to assemble only Kuu, Kvv, Kww as three different
> dense matrices. This works without amr, and it saves a lot of memory.

If this saves a lot of memory, you may be doing something wrong. A
tri-quadratic element has 27 degrees of freedom per component. If
you're using the default 8-byte floating point numbers, then using
three 27x27 matrices would take up about 17 kB, whereas using a
(27*3)*(27*3) matrix would take up about 51 kB. In any significant 3D
problem, 51 kB should be lost in the noise.

But you may be right that the constrain_element* functions are
ignoring _dof_coupling; I'll leave it to Ben to check on that.
---
Roy
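[Editorial note: Roy's storage figures above are easy to verify with
back-of-the-envelope arithmetic. The sketch below is plain Python and
assumes nothing beyond what the message states: 27 dofs per component
and 8-byte doubles.]

```python
# Check the element-matrix storage argument: a tri-quadratic (27-node)
# hex with 3 coupled variables and 8-byte floating point entries.
DOFS_PER_VAR = 27      # degrees of freedom per component
N_VARS = 3             # u, v, w
BYTES_PER_DOUBLE = 8

# Three uncoupled diagonal blocks (Kuu, Kvv, Kww)...
diagonal_blocks = N_VARS * DOFS_PER_VAR**2 * BYTES_PER_DOUBLE
# ...versus one fully coupled (27*3) x (27*3) element matrix.
full_matrix = (N_VARS * DOFS_PER_VAR)**2 * BYTES_PER_DOUBLE

print(diagonal_blocks)  # 17496 bytes, i.e. about 17 kB
print(full_matrix)      # 52488 bytes, i.e. about 51 kB
```

The saving per element is about 34 kB, which is why Roy calls it "lost
in the noise" next to the global sparse matrix of a significant 3D
problem.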
From: Lorenzo Botti <bottilorenzo@gm...> - 2008-02-14 00:28:06

On 13 Feb 2008, at 03:03, Roy Stogner wrote:

>> I output the matrix assembled in libmesh using a PETSc function, and
>> I find lots of zero values in the matrix. Because the matrix in
>> PETSc is stored using a compressed sparse storage format, zero
>> values should not appear.
>
> That is incorrect. The sparsity pattern we tell PETSc to use comes
> from the connectivity of your mesh, which is much more efficient than
> building it on the fly as you assemble the matrix with your
> particular equation. Some multi-variable equations (Navier-Stokes, in
> particular) and/or some choices of basis functions can then leave you
> with zero entries. If your equation is of the former type, you can
> use DofObject::_dof_coupling to tell libMesh not to bother allocating
> matrix entries that you know will always be zero.

Hi Roy,
thanks for the explanations, but I have a doubt. I have tried to use
DofMap::_dof_coupling but it seems that an issue with amr arises...
maybe I miss something obvious...

For example, if I solve an advection-diffusion equation in 3D with a
semi-implicit scheme, I want to save the memory required to store the
Kuv, Kuw, Kvw (and also Kvu, Kwu, Kwv) blocks of each of my element
matrices. With _dof_coupling I can obtain the right sparsity pattern,
but then at assembly time I'll need a sparse element matrix. I can
instead decide to assemble only Kuu, Kvv, Kww as three different dense
matrices. This works without amr, and it saves a lot of memory.

If I want to use amr, the problem is that the method
DofMap::constrain_element_matrix_and_vector wants to constrain all my
system variables at the same time, preventing me from assembling Kuu,
Kvv, Kww as different matrices. Is there a simple solution?

Lorenzo
From: Lorenzo Botti <bottilorenzo@gm...> - 2008-02-13 21:55:47

Hi David and John,
I have a dirty DG ex14. It works with h and p refinement, with a
little change in DofMap::compute_sparsity(...) after the condition
if (implicit_neighbor_dofs) {...}
From: John Peterson <peterson@cf...> - 2008-02-13 14:39:06

Short answer: yes. Thread here:
http://sourceforge.net/mailarchive/message.php?msg_id=41E893F9.9000104%40cfdlab.ae.utexas.edu

No examples yet, but this would probably be a good idea. Ben, do you
have any toy DG codes we could make into an example?

J

David Knezevic writes:
> Hi all, I was just wondering if anyone has done any discontinuous
> Galerkin computations using libMesh? I'm just asking this out of
> curiosity, I'm not looking to do any DG computations at the moment...
>
> Thanks,
> Dave
From: David Knezevic <david.knezevic@ba...> - 2008-02-13 11:54:13

Hi all,
I was just wondering if anyone has done any discontinuous Galerkin
computations using libMesh? I'm just asking this out of curiosity; I'm
not looking to do any DG computations at the moment...

Thanks,
Dave
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-02-13 02:14:08

> Now, I am wondering how the algebraic method constrains hanging
> nodes. Does it set the values at hanging nodes to zero? I output the
> matrix assembled in libmesh using a PETSc function, and I find lots
> of zero values in the matrix. Because the matrix in PETSc is stored
> using a compressed sparse storage format, zero values should not
> appear. Could you give me some hints?

The constraint

  u_f = 1/2 u_c1 + 1/2 u_c2

is a typical constraint: it says the fine value is the average of the
coarse values on an edge. It can be imposed through the linear system
by zeroing the entire row for u_f and the right-hand-side entry and
replacing it with

                                [u_c1]
  [ 0 ... -1/2  1  -1/2 ... 0 ] [u_f ] = [0]
                                [u_c2]

Strictly speaking, you could eliminate the u_f row & column from the
sparse matrix and the resulting linear system would be 1 dof smaller.
In practice, though, that is more trouble than it is worth. It would
take a pathological case (like every other element in a checkerboard
pattern being refined) for the savings gained by reducing the size of
the linear system to be worth the effort.

Now, the reason there are a number of zeros in that row is simple...
In 2D, for linear quadrilateral elements a typical row will have 9
nonzero entries, and this is what gets allocated for that row. A
typical linear constraint only involves 3 dofs, though, so you are
left with some zeros. A quadratic Lagrange quadrilateral typically has
25 nonzeros on a row, but a constraint involves 4 dofs. So, there are
21 stored zeros in that row.

This functionality is implemented in the
DofMap::constrain_element_matrix_and_vector() function; see
http://libmesh.sourceforge.net/doxygen/classDofMap.php
It takes the element matrix and vector written in terms of constrained
dofs and alters them so that they are written entirely in terms of
unconstrained dofs.

Finally, is PETSc outputting a compressed matrix, or is it giving you
the square matrix and padding with zeros itself?

Ben
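[Editorial note: the row-replacement Ben describes can be sketched on
a toy dense system. This is an illustrative stand-in in plain Python,
not libMesh's actual implementation; the helper name and the 3-dof
system are invented for the example.]

```python
# Impose the hanging-node constraint u_f = 1/2 u_c1 + 1/2 u_c2 on a
# dense system A x = b by overwriting the row for the fine dof u_f,
# as described in the message above.

def constrain_hanging_node(A, b, f, c1, c2):
    """Replace row f of A x = b with x[f] - 0.5*x[c1] - 0.5*x[c2] = 0."""
    n = len(A)
    A[f] = [0.0] * n      # zero the entire row for u_f
    A[f][f] = 1.0         # unit diagonal entry for u_f
    A[f][c1] = -0.5       # coarse neighbors on the refined edge
    A[f][c2] = -0.5
    b[f] = 0.0            # matching right-hand-side entry

# Tiny 3-dof example: dof 1 hangs between coarse dofs 0 and 2.
A = [[ 2.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  2.0]]
b = [1.0, 1.0, 1.0]
constrain_hanging_node(A, b, f=1, c1=0, c2=2)
print(A[1])  # [-0.5, 1.0, -0.5]
print(b[1])  # 0.0
```

Solving the modified system forces x[1] to the average of x[0] and
x[2], which is exactly the constraint row shown in the message.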
From: Roy Stogner <roystgnr@ic...> - 2008-02-13 02:04:05

On Tue, 12 Feb 2008, Yujie wrote:

> Now, I am wondering how the algebraic method constrains hanging
> nodes.

By default, the matrix line corresponding to each hanging node is the
constraint equation which sets its value. This constrains each hanging
node to whatever precision your linear solver gives; the hanging node
degree of freedom coefficient is then not zero, it's whatever value is
correct for the degree of freedom equation on the fine elements.

> Does it set the values at hanging nodes to zero?

With non-default options set, the matrix line for each hanging node is
(like the matrix column) just a "1" on the diagonal, with a
corresponding 0 on the residual vector, so that the node coefficient
is zero immediately after the linear solve. In this case, user code
(or higher level library code like NewtonSolver::solve()) must then
use the constraint equations to set the hanging node coefficients to
their correct values.

> I output the matrix assembled in libmesh using a PETSc function, and
> I find lots of zero values in the matrix. Because the matrix in
> PETSc is stored using a compressed sparse storage format, zero
> values should not appear.

That is incorrect. The sparsity pattern we tell PETSc to use comes
from the connectivity of your mesh, which is much more efficient than
building it on the fly as you assemble the matrix with your particular
equation. Some multi-variable equations (Navier-Stokes, in particular)
and/or some choices of basis functions can then leave you with zero
entries. If your equation is of the former type, you can use
DofObject::_dof_coupling to tell libMesh not to bother allocating
matrix entries that you know will always be zero.

However, it's also possible that we're allocating too many entries in
the sparsity pattern. Can you come up with a specific example of two
DoFs that you know shouldn't be coupled?
---
Roy
From: Yujie <recrusader@gm...> - 2008-02-13 01:47:46

Hi, everyone

Now, I am wondering how the algebraic method constrains hanging nodes.
Does it set the values at hanging nodes to zero? I output the matrix
assembled in libmesh using a PETSc function, and I find lots of zero
values in the matrix. Because the matrix in PETSc is stored using a
compressed sparse storage format, zero values should not appear. Could
you give me some hints? Thanks a lot.

Regards,
Yujie
From: Roy Stogner <roystgnr@ic...> - 2008-02-08 20:14:05

On Fri, 8 Feb 2008, Yujie wrote:

> "get" the global matrix - compute it: such as, I output the matrices
> at individual computers and directly rearrange them into a whole
> matrix, which is the same as the global matrix assembled on a single
> processor? I think it is not the same, because the matrices at
> individual computers should be processed regarding the boundary
> (some discretized points in the mesh) between computers.

Okay. The answer is "yes" and "no".

The global matrix is the same whether assembled in parallel or in
serial, in the sense that the entry corresponding to any two specified
degrees of freedom will be the same. In other words, if you've got
node A at (0.5,0.5) and node B at (0.5,0.625), then the matrix entry
Mij, at row i corresponding to variable V on node A and column j
corresponding to variable V on node B, will be the same whether you
compute it in serial or in parallel (except for some minor differences
in floating point error from computations done in different orders).

However, the matrix will be different in parallel, because "i" and "j"
will be different. libMesh assigns degree of freedom numberings
differently depending on how your mesh is partitioned, so the matrix
you get in parallel is actually a permutation of the matrix you get in
serial (or of the matrix you get in parallel on a different number of
processors).

> In Ben's thesis (page 38), he mentioned that there are three steps
> for assembling the matrix in parallel:
> 1. Synchronize data with remote processors. This is required so that
> any remote data needed in the element matrix assembly (required in
> nonlinear applications, for example) is available on the local
> processor.
> 2. Perform a loop over the active elements on the local processor.
> Compute the element matrix and distribute it into the global matrix.
> 3. Communicate local element contributions to degrees of freedom
> owned by remote processors.
> The second step is done by user code. I can't find how to realize
> the first and third steps in libmesh. Is the third step done by
> PetscMatrix::add_matrix()?

The first step is done by System::update(), which uses
NumericVector::localize() to put the relevant parts of the remote
processors' solution data into the local processor's
current_local_solution vector.

The third step is started by SparseMatrix::add_matrix() (called by
user code) and finished by SparseMatrix::close() (called by the
library).

> However, how do you deal with elements or points in the mesh on the
> boundary between processors if this function is used?

The current_local_solution vector keeps copies of the "ghost" degrees
of freedom which are owned by remote processors but which have support
on elements touching the boundary of the local processor.
---
Roy
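[Editorial note: Roy's point that the parallel matrix is a permutation
of the serial one can be illustrated with a toy renumbering. The
matrices and the permutation below are made up for this sketch; they
are not taken from libMesh.]

```python
# Renumbering the degrees of freedom (as a different mesh partitioning
# would) permutes the assembled matrix: A_perm[p[i]][p[j]] equals
# A_serial[i][j] for the renumbering p, so the same physical dof pair
# always maps to the same value.

def permute(A, p):
    """Return the matrix obtained by renumbering dof i as p[i]."""
    n = len(A)
    B = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            B[p[i]][p[j]] = A[i][j]
    return B

# A small "serial" matrix and a hypothetical parallel renumbering.
A_serial = [[ 4.0, -1.0,  0.0],
            [-1.0,  4.0, -1.0],
            [ 0.0, -1.0,  4.0]]
p = [2, 0, 1]  # serial dof 0 becomes parallel dof 2, and so on
A_parallel = permute(A_serial, p)

# The entry for the same pair of physical dofs is unchanged:
assert A_parallel[p[0]][p[1]] == A_serial[0][1]
```

The rows and columns move, but no entry changes value, which is
exactly the "yes and no" answer above.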
From: Yujie <recrusader@gm...> - 2008-02-08 17:53:29

Hi, Roy

I am always wondering something about parallel matrix assembly and
unknown variable distribution. What is the relationship between the
global matrix obtained without parallelism and the matrices at the
nodes of the cluster with parallelism? If possible, are there some
examples or papers demonstrating this relationship?

Can we directly get the global matrix without parallelism using the
matrices at all the nodes? I think we can't; the matrices at the
nodes of the cluster are processed to handle the boundary problem
between the nodes of the cluster. Is this processing done by
PetscMatrix::add_matrix()? If it is, my understanding is that this
processing is algebraic and doesn't need the mesh information;
PetscMatrix::add_matrix() can only know which points are on the
boundary between nodes. Is that right? Thanks a lot.

Regards,
Yujie

On 2/4/08, Roy Stogner <roystgnr@...> wrote:
>
> On Mon, 4 Feb 2008, Yujie wrote:
>
> > thank you, Roy. To my current understanding, Ax=b should be
> > generally partitioned in libmesh like:
> >
> >   A11 A12 A13   x1     b1
> >   A21 A22 A23   x2  =  b2
> >   A31 A32 A33   x3     b3
> >
> > if 9 processors are used. That is, there is an overlapping of x
> > between different processors. If one wants to get all the
> > solution, some operations are done to "combine" x in different
> > processors, is it right? thanks a lot.
>
> Right.
> ---
> Roy
From: li pan <li76pan@ya...> - 2008-02-06 09:24:45

Dear all,
I want to ask whether it's possible to use libmesh to do spherical
integration. I thought before, if I have a ball of tetrahedron mesh,
then I can do surface integration. Can libmesh generate such a ball
mesh? For instance, through n refinements of an icosahedron.

thanx
pan
From: li pan <li76pan@ya...> - 2008-02-05 11:51:31

hi Yujie,
I think you want to know how libmesh assembles the system matrix. In
the assemble function, libmesh has a loop over all the elements. Each
element computes its own contribution and then sends it to the sparse
matrix. The system matrix is a PETSc sparse matrix. There are two
flags for sending values in PETSc, ADD_VALUES and INSERT_VALUES; here
ADD_VALUES is used. In the parallel scheme, the elements are separated
onto different processes. So, back to your question: it doesn't matter
whether an unknown variable is on the local process or somewhere else.
At the end you'll get the same sparse matrix as in the serial scheme.
PETSc can add values to unknown variables which do not exist on the
local process. Only my understanding ;) Please correct me.

pan

--- Yujie <recrusader@...> wrote:
> Hi, everyone
>
> I am wondering about a problem with unknown variable (x)
> distribution in libmesh. Practically, you use the finite element
> method to solve a PDE, and you will finally get the linear equation
> Ax=b. In libmesh, if you solve it in parallel, you need to first
> partition the domain, that is, A is partitioned. However, does each
> node in the cluster have all the variables x or only part of x?
> Could you give me some advice? thanks a lot.
>
> Regards,
> Yujie
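[Editorial note: the element loop pan describes can be sketched in a
few lines. The toy mesh, dof maps, and element matrix below are
stand-ins invented for illustration, not libMesh or PETSc objects; the
additive update plays the role of ADD_VALUES.]

```python
# Serial sketch of finite element assembly: loop over elements, and
# add each dense element matrix into the global matrix through a
# local-to-global dof map.

def assemble(n_dofs, elements):
    """elements: list of (dof_map, Ke) pairs, where dof_map[a] is the
    global dof number for local dof a, and Ke is the element matrix."""
    A = [[0.0] * n_dofs for _ in range(n_dofs)]
    for dof_map, Ke in elements:
        for a, gi in enumerate(dof_map):
            for c, gj in enumerate(dof_map):
                A[gi][gj] += Ke[a][c]   # additive, like ADD_VALUES
    return A

# Two 1D linear elements sharing dof 1: global dofs (0,1) and (1,2).
Ke = [[1.0, -1.0],
      [-1.0, 1.0]]                      # 1D Laplacian element matrix
A = assemble(3, [((0, 1), Ke), ((1, 2), Ke)])
print(A[1][1])  # 2.0: contributions from both elements add up
```

The shared dof accumulates contributions from both elements, which is
why the order of the element loop (or which process owns an element)
does not change the final matrix.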
From: Roy Stogner <roystgnr@ic...> - 2008-02-04 19:04:19

On Mon, 4 Feb 2008, Yujie wrote:

> I am wondering about a problem with unknown variable (x)
> distribution in libmesh. Practically, you use the finite element
> method to solve a PDE, and you will finally get the linear equation
> Ax=b. In libmesh, if you solve it in parallel, you need to first
> partition the domain, that is, A is partitioned. However, does each
> node in the cluster have all the variables x or only part of x?
> Could you give me some advice? thanks a lot.

In parallel, on each processor the solution (x) vector which PETSc
uses only has the part of x whose degrees of freedom are "owned" by
that processor. However, the library regularly localizes that solution
vector onto current_local_solution, which has the coefficients both
for degrees of freedom owned by the processor and for degrees of
freedom which have support on any element touching an element owned by
the processor.

If what you really need is a vector that has every single degree of
freedom on every processor, check the NumericVector API; you should be
able to create a serial vector and then localize onto it to get every
single DoF coefficient.
---
Roy
From: Yujie <recrusader@gm...> - 2008-02-04 18:54:24

Hi, everyone

I am wondering about a problem with unknown variable (x) distribution
in libmesh. Practically, you use the finite element method to solve a
PDE, and you will finally get the linear equation Ax=b. In libmesh, if
you solve it in parallel, you need to first partition the domain, that
is, A is partitioned. However, does each node in the cluster have all
the variables x or only part of x? Could you give me some advice?
thanks a lot.

Regards,
Yujie
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-02-01 16:42:19

> This could be useful to solve the Euler equations with a DG method
> that employs a Riemann solver to compute the numerical fluxes
> (discontinuous at element faces/edges) without dealing with Godunov
> fluxes to compute the jacobian... Am I wrong?

Absolutely. And a common trick from the finite volume community would
be to precondition with a first-order implicit system operator while
you evaluate the residual to higher order. In DG this equates to
piecewise constants. For a perfect gas in 3D this would give you

  5*n_cells*n_faces_per_cell

(assuming all your elements have the same number of faces) storage for
the preconditioner, instead of

  5*n_cells*n_faces_per_cell*n_dofs_per_var

Ben
From: Lorenzo Botti <bottilorenzo@gm...> - 2008-02-01 16:13:41

This could be useful to solve the Euler equations with a DG method
that employs a Riemann solver to compute the numerical fluxes
(discontinuous at element faces/edges) without dealing with Godunov
fluxes to compute the jacobian... Am I wrong?

Lorenzo

2008/2/1, Benjamin Kirk <benjamin.kirk@...>:
>
> > Let me rewrite the expression you wrote as
> >
> > J*v = (F(u+epsilon*v) - F(u)) / epsilon
> >
> > where epsilon is a small perturbation, F(u) is the nonlinear
> > residual function, and J is the jacobian matrix of the nonlinear
> > system. The above formula computes the action of a Jacobian on a
> > given vector, or more specifically the Krylov vector if you are
> > using say GMRES or CG to solve your system.
>
> That's absolutely true. To expound a little more, by definition
>
>   F = F(U)
>   J = dF/dU
>
>   J*v = [dF/dU]*v                            (1)
>
> which is simply the directional derivative of F (in the direction of
> v). As specified above, you can approximate this matrix-vector
> product as
>
>   J*v ~ (F(u+epsilon*v) - F(u)) / epsilon    (2)
>
> In Krylov subspace methods the operation (1) occurs repeatedly at
> each iteration. So, if you are willing to approximate it, you can
> use (2) instead.
>
> There are a couple of reasons why you might want to do this:
> - obviously, this finite difference of vectors eliminates the need
>   to store J, hence "matrix free"
> - not to whine, but sometimes computing an accurate J is *hard*
>   and/or computationally intensive. It can be error-prone. Using (2)
>   you get around that.
>
> Now of course any hard problem will need preconditioning in the
> Krylov solver. You can accomplish this in several ways... One would
> be to build an approximate Jacobian and use ILU or the like. Now you
> have one matrix instead of two. Also, this could be a much simpler
> matrix (block-diagonal for certain cases), thus limiting the
> requirements.
>
> But there are other options too, allowing for a matrix-free
> preconditioner as well. These include Gauss-Seidel, geometric
> multigrid, another Krylov technique, etc...
>
> Ben
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-02-01 13:20:56

> Let me rewrite the expression you wrote as
>
> J*v = (F(u+epsilon*v) - F(u)) / epsilon
>
> where epsilon is a small perturbation, F(u) is the nonlinear residual
> function, and J is the jacobian matrix of the nonlinear system. The
> above formula computes the action of a Jacobian on a given vector, or
> more specifically the Krylov vector if you are using say GMRES or CG
> to solve your system.

That's absolutely true. To expound a little more, by definition

  F = F(U)
  J = dF/dU

  J*v = [dF/dU]*v                            (1)

which is simply the directional derivative of F (in the direction of
v). As specified above, you can approximate this matrix-vector product
as

  J*v ~ (F(u+epsilon*v) - F(u)) / epsilon    (2)

In Krylov subspace methods the operation (1) occurs repeatedly at each
iteration. So, if you are willing to approximate it, you can use (2)
instead.

There are a couple of reasons why you might want to do this:
- obviously, this finite difference of vectors eliminates the need to
  store J, hence "matrix free"
- not to whine, but sometimes computing an accurate J is *hard* and/or
  computationally intensive. It can be error-prone. Using (2) you get
  around that.

Now of course any hard problem will need preconditioning in the Krylov
solver. You can accomplish this in several ways... One would be to
build an approximate Jacobian and use ILU or the like. Now you have
one matrix instead of two. Also, this could be a much simpler matrix
(block-diagonal for certain cases), thus limiting the requirements.

But there are other options too, allowing for a matrix-free
preconditioner as well. These include Gauss-Seidel, geometric
multigrid, another Krylov technique, etc...

Ben
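[Editorial note: approximation (2) above can be sketched in a few
lines of plain Python. The toy residual F below is invented for
illustration; the point is only that J*v is recovered from two
residual evaluations without ever forming J.]

```python
# Matrix-free Jacobian-vector product: approximate J(u)*v by the
# finite difference (F(u + eps*v) - F(u)) / eps, as in formula (2).

def F(u):
    """Toy nonlinear residual: F(u) = (u0^2 - u1, u1^2 - 1)."""
    return [u[0] ** 2 - u[1], u[1] ** 2 - 1.0]

def jfnk_matvec(F, u, v, eps=1e-7):
    """Approximate J(u)*v without forming the Jacobian J."""
    u_pert = [ui + eps * vi for ui, vi in zip(u, v)]
    Fu, Fp = F(u), F(u_pert)
    return [(fp - f0) / eps for fp, f0 in zip(Fp, Fu)]

u = [2.0, 3.0]
v = [1.0, 0.0]
# The exact Jacobian at u is [[2*u0, -1], [0, 2*u1]], so J*v = [4, 0].
Jv = jfnk_matvec(F, u, v)
print(Jv)  # approximately [4.0, 0.0]
```

A Krylov solver such as GMRES only ever needs products J*v, so this
two-residual evaluation can stand in for the operation (1) at every
iteration.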
From: Vijay M <unknownreference@gm...> - 2008-02-01 01:19:28

>> Can you write me the exact expression of the problem you're solving?

The example I forwarded you earlier by Benjamin Kirk solves the
Laplace-Young problem. I have linked a paper by C.F. Scott et al on
"Computation of capillary surfaces for the Laplace-Young equation",
which should help whoever is interested in knowing the exact system
that the code solves for.
(http://eprints.maths.ox.ac.uk/304/01/QJMAM0416.pdf as of 01/31/08)

Matrix free technique: yes, Knoll's paper is a fantastic introduction
for getting a good grasp of the JFNK technique. I would recommend it
to anyone who wants to solve nonlinear problems in a matrix free way.

Let me rewrite the expression you wrote as

  J*v = (F(u+epsilon*v) - F(u)) / epsilon

where epsilon is a small perturbation, F(u) is the nonlinear residual
function, and J is the jacobian matrix of the nonlinear system. The
above formula computes the action of the Jacobian on a given vector,
or more specifically the Krylov vector if you are using say GMRES or
CG to solve your system.

F(u) appears on the right hand side because the Newton linearization
yields the system

  J*delu = -F(u)

But since the whole point is not to form the matrix J, you can just
find the action of the matrix on a vector to build the Krylov subspace
and solve the system. Hope that helps.

-----Original Message-----
From: li pan [mailto:li76pan@...]
Sent: Thursday, January 31, 2008 8:05 AM
To: Vijay S. Mahadevan
Cc: libmesh-users@...
Subject: Re: [Libmesh-users] matrix free scheme

thanx a lot! Can you write me the exact expression of the problem
you're solving? Recently, I read the paper of Knoll about the JFNK
method (Jacobian-free Newton-Krylov methods: a survey of approaches
and applications). I'm not sure whether I understood it well. Please
correct me. If r_0 is the first residual and J is the jacobian matrix,
in the approximation

  J*r_0 = (F(u + epsilon*r_0) - F(u)) / epsilon

F(u) is the residual, right? But in Knoll's paper F(u) seems to be the
right hand side. Or am I wrong?

pan

--- "Vijay S. Mahadevan" <vijay.m@...> wrote:

> I've attached the message from Ben Kirk. Also attached is the
> example program that he sent out. See below for his original mail.
>
> On a side note, I have written a working code with the matrix-free
> technique to solve a nonlinear, transient diffusion-reaction
> problem. It still needs a few refinements and I will send it here or
> maybe upload it somewhere soon for all those interested.
>
> -----Original Message-----
> From: Benjamin Kirk [mailto:benjamin.kirk@...]
> Sent: Wednesday, January 23, 2008 9:40 AM
> To: Vijay M; libmesh-devel@...
> Subject: Re: [Libmesh-users] Support for Matrix-free algorithms
>
> Here is a really, really raw example; the comments are not clear
> right now, but I wanted to keep you informed. This requires the
> latest svn branch to work.
>
> Unpack it in the ./examples directory and run make.
>
> Run it as
>
> $ ./ex19-dbg -snes_view -r 4
>
> for a successive approximation which will converge linearly, and
>
> $ ./ex19-dbg -snes_view -r 4 -snes_mf_operator
>
> for a matrix-free approach in the iterative solver which will
> converge quadratically.
>
> > Anyway, I do have a question regarding using PETSc objects with
> > LibMesh. I have been trying to use the PETSc objects Mats, Vecs
> > and the SNES solver with Libmesh, but the one thing I can't seem
> > to figure out is how to set the nonlinear_solver public attribute
> > of, say, a NonlinearImplicitSystem object to a PETSc SNES object
> > which I have created and initialized separately. Since the SNES
> > object used in the wrapper PetscNonlinearSolver is private, I
> > don't understand how this can be done.
> >
> > Have I missed something and taken a completely wrong path on this?
> > I would very much appreciate any comments that you can provide to
> > help me out here.
>
> The user interface is totally up for discussion since I am the only
> one who has exercised it to date. (I am sure Roy will have some
> comments!) It seems to me the right approach will be to add a method
> which gives the user access to the SNES object? From there the KSP,
> Mat, Vec, PC, etc... can be accessed. This would be similar to the
> approach used in the PetscLinearSolver.
>
> Ben
>
> -----Original Message-----
> From: li pan [mailto:li76pan@...]
> Sent: Wednesday, January 30, 2008 3:25 AM
> To: libmesh-users@...
> Cc: vijay.m@...
> Subject: matrix free scheme
>
> Dear all,
> I remember that there was a discussion about a matrix free scheme
> with libmesh before X'mas. I'd like to ask if somebody has got an
> example code for this.
>
> thanx
>
> pan