From: Yujie <recrusader@gm...>  20080204 18:54:24

Hi, everyone

I am wondering about a problem concerning the distribution of the unknown variables (x) in libMesh. In practice, when you use the finite element method to solve a PDE, you finally get the linear system Ax=b. In libMesh, if you solve it in parallel, you need to first partition the domain, so A is partitioned. However, does each node in the cluster hold all of the variables x, or only part of x? Could you give me some advice? Thanks a lot.

Regards,
Yujie
From: Roy Stogner <roystgnr@ic...>  20080204 19:04:19

On Mon, 4 Feb 2008, Yujie wrote:

> I am wondering about a problem concerning the distribution of the unknown
> variables (x) in libMesh. In practice, when you use the finite element
> method to solve a PDE, you finally get the linear system Ax=b. In libMesh,
> if you solve it in parallel, you need to first partition the domain, so A
> is partitioned. However, does each node in the cluster hold all of the
> variables x, or only part of x? Could you give me some advice? Thanks a lot.

In parallel, on each processor the solution vector (x) which PETSc uses has only the part of x whose degrees of freedom are "owned" by that processor. However, the library regularly localizes that solution vector onto current_local_solution, which holds the coefficients both for degrees of freedom owned by the processor and for degrees of freedom which have support on any element touching an element owned by the processor.

If what you really need is a vector that has every single degree of freedom on every processor, check the NumericVector API; you should be able to create a serial vector and then localize onto it to get every single DoF coefficient.

Roy
From: li pan <li76pan@ya...>  20080205 11:51:31

hi Yujie,

I think you want to know how libMesh assembles the system matrix. In the assemble function, libMesh has a loop over all the elements. Each element computes its own contribution and then sends it to the sparse matrix. The system matrix is a PETSc sparse matrix. There are two flags for sending values in PETSc, ADD_VALUES and INSERT_VALUES; here, ADD_VALUES is used. In the parallel scheme, the elements are separated across different processes. So, back to your question: it doesn't matter whether an unknown variable lives on the local process or somewhere else; in the end you get the same sparse matrix as in the serial scheme. PETSc can add values for unknown variables which do not exist on the local process.

Only my understanding ;) Please correct me.

pan

Yujie <recrusader@...> wrote:
> Hi, everyone
>
> I am wondering about a problem concerning the distribution of the unknown
> variables (x) in libMesh. In practice, when you use the finite element
> method to solve a PDE, you finally get the linear system Ax=b. In libMesh,
> if you solve it in parallel, you need to first partition the domain, so A
> is partitioned. However, does each node in the cluster hold all of the
> variables x, or only part of x? Could you give me some advice? Thanks a lot.
>
> Regards,
> Yujie
From: Yujie <recrusader@gm...>  20080208 17:53:29

Hi, Roy

I am still wondering about parallel matrix assembly and the distribution of the unknown variables. What is the relationship between the global matrix obtained without parallelism and the matrices on the nodes of the cluster with parallelism? If possible, are there any examples or papers that demonstrate this relationship? Can we directly recover the global serial matrix from the matrices on all the nodes? I think we can't: the matrices on the nodes of the cluster must be processed to handle the boundaries between the nodes. Is this processing done by PetscMatrix::add_matrix()? If so, my understanding is that this processing is algebraic and doesn't need the mesh information; PetscMatrix::add_matrix() can only know which points are on the boundary between nodes. Is that right? Thanks a lot.

Regards,
Yujie

On 2/4/08, Roy Stogner <roystgnr@...> wrote:
>
> On Mon, 4 Feb 2008, Yujie wrote:
>
> > thank you, Roy. To my current understanding, Ax=b should generally be
> > partitioned in libMesh like:
> >
> >   A11 A12 A13   x1     b1
> >   A21 A22 A23   x2  =  b2
> >   A31 A32 A33   x3     b3
> >
> > if 9 processors are used. That is, there is overlap of x between
> > different processors. If one wants to get the whole solution, some
> > operations are done to "combine" the x from different processors,
> > is that right? thanks a lot.
>
> Right.
> Roy
From: Roy Stogner <roystgnr@ic...>  20080208 20:14:05

On Fri, 8 Feb 2008, Yujie wrote:

> By "get" the global matrix I mean compute it: for example, I output the
> matrices on the individual computers and directly rearrange them into a
> whole matrix which is the same as the global matrix assembled on a single
> processor. I think it is not the same, because the matrices on the
> individual computers should be processed with respect to the boundary
> (some discretized points in the mesh) between computers.

Okay. The answer is "yes" and "no".

The global matrix is the same whether assembled in parallel or in serial, in the sense that the entry corresponding to any two specified degrees of freedom will be the same. In other words, if you've got node A at (0.5,0.5) and node B at (0.5,0.625), then the matrix entry Mij, at row i corresponding to variable V on node A and column j corresponding to variable V on node B, will be the same whether you compute it in serial or in parallel (except for some minor differences in floating point error from computations done in different orders).

However, the matrix will be different in parallel, because "i" and "j" will be different. libMesh assigns degree of freedom numberings differently depending on how your mesh is partitioned, so the matrix you get in parallel is actually a permutation of the matrix you get in serial (or of the matrix you get in parallel on a different number of processors).

> In Ben's thesis (page 38), he mentions that there are three steps in
> assembling the matrix in parallel:
> 1. Synchronize data with remote processors. This is required so that any
>    remote data needed in the element matrix assembly (required in
>    nonlinear applications, for example) is available on the local
>    processor.
> 2. Perform a loop over the active elements on the local processor.
>    Compute the element matrix and distribute it into the global matrix.
> 3. Communicate local element contributions to degrees of freedom owned
>    by remote processors.
> The second step is done by user code, but I can't find how the first and
> third steps are realized in libMesh. Is the third step done by
> PetscMatrix::add_matrix()?

The first step is done by System::update(), which uses NumericVector::localize() to put the relevant parts of the remote processors' solution data into the local processor's current_local_solution vector. The third step is started by SparseMatrix::add_matrix() (called by user code) and finished by SparseMatrix::close() (called by the library).

> However, how do we deal with elements or points in the mesh that lie on
> the boundary between processors if this function is used?

The current_local_solution vector keeps copies of the "ghost" degrees of freedom which are owned by remote processors but which have support on elements touching the boundary of the local processor.

Roy