From: <la...@im...> - 2005-03-24 14:40:03
Dear libmeshers,

I've got a question about how to organize my code. As a first step, I'm implementing a pressure-correction solver for the generalized Stokes problem. The code now works well and produces the correct solution, but it could run much faster if it didn't have to reassemble the matrices and rhs's from scratch at each time step. Since I ultimately plan to implement a semi-implicit Navier-Stokes solver, which will require a large number of time steps, this issue is crucial.

The system could be reassembled cheaply at each time step by storing the individual matrices (mass, diffusion) and either adding them to form the system matrix or multiplying them by a vector to form rhs terms. This would solve the performance issue, but there's a catch. The matrices in the Stokes system are M, the mass matrix; S, the diffusion (Laplacian) matrix; and L, the divergence matrix (whose transpose is the gradient matrix acting on the pressure). The problem lies in L: how can I store it? It is a rectangular matrix, since it couples the velocity dofs with the pressure dofs. Note that I need it even in a pressure-correction scheme, in order to assemble the rhs without looping over the elements to construct the pressure gradients by hand.

Is there a way to store the L matrix with the right sparsity pattern, or is there an easy extension to libmesh that I could contribute? (Note that this problem is potentially common to many splitting schemes.)

I hope I've been clear enough, and thanks in advance for your help!

Luca

--
Luca Antiga, PhD
----------------------------------------------
Biomedical Technologies Laboratory
Bioengineering Department, Mario Negri Institute
Villa Camozzi, 24020, Ranica (BG), Italy
----------------------------------------------
phone: +39 035 4535-381
email: an...@ma...
web: http://villacamozzi.marionegri.it/~luca
----------------------------------------------
From: Benjamin S. K. <be...@cf...> - 2005-03-25 12:33:42
A fairly arbitrary sparsity pattern can be computed if you use the DofMap::dof_coupling array. By default, all DOFs are considered coupled to each other, but it sounds like for your case (in 2D) you probably want something like this:

     u v P
    |0 0 1|
    |0 0 1|
    |1 1 0|

to store L & L^T in the same matrix, or just the last row for L^T. You should be able to modify the dof_coupling array, call compute_sparsity, and build an appropriate matrix.

Note that there are more options if you know you are working with PETSc behind the scenes. For example,

    PetscMatrix<Number>* pmat = dynamic_cast<PetscMatrix<Number>*>(&mat);
    pmat->get_mat();

allows you to access the underlying PETSc Mat object directly. You can then use PETSc's MatGetSubMatrix routines. In fact, this might be the most straightforward thing to do: you could build a full matrix for the coupled system and then extract whatever components you need. You might then even be able to re-assemble the full system if you want a fully coupled solver some time.

Anyway, these are my thoughts. I know John has done some work on special preconditioners for the Stokes system and has had to extract the matrices you are interested in, so he could probably offer some more suggestions. You might need to add some code to make your job easier; if so, I'll be happy to help any way possible.

-Ben
From: Michael S. <m-s...@us...> - 2005-03-25 13:07:45
On 25.03.05, Benjamin S. Kirk wrote:
> A fairly arbitrary sparsity pattern can be computed if you use the
> DofMap::dof_coupling array. By default, all DOFs are considered coupled
> to each other, but it sounds like for your case (in 2D) you probably
> want something like this:
>
>      u v P
>     |0 0 1|
>     |0 0 1|
>     |1 1 0|

I needed something like this for testing purposes some time ago. Because I was not interested in a very efficient program, I let PETSc compute the sparsity pattern for me, which is sub-optimal. I think a framework for this is of broader interest.

Best greetings,

Michael

--
"A mathematician is a device for turning coffee into theorems" -- Paul Erdős
From: John P. <pet...@cf...> - 2005-03-25 14:46:04
Benjamin S. Kirk writes:
> For example,
>
>     PetscMatrix<Number>* pmat = dynamic_cast<PetscMatrix<Number>*>(&mat);
>     pmat->get_mat();
>
> allows you to access the underlying PETSc Mat object directly. You can
> then use PETSc's MatGetSubMatrix routines.

Actually, you can now use the SparseMatrix::create_submatrix() routine, which does that cast for you. (It still only works with PETSc, though.)

-John
From: <an...@ma...> - 2005-03-25 14:36:23
Thanks for the hint, this is exactly what I was looking for. I'll let you know!

Luca

On Fri, Mar 25, 2005 at 06:33:30AM -0600, Benjamin S. Kirk wrote:
> A fairly arbitrary sparsity pattern can be computed if you use the
> DofMap::dof_coupling array. [...]

--
Luca Antiga, PhD
----------------------------------------------
Biomedical Technologies Laboratory
Bioengineering Department, Mario Negri Institute
Villa Camozzi, 24020, Ranica (BG), Italy
----------------------------------------------
phone: +39 035 4535-381
email: an...@ma...
web: http://villacamozzi.marionegri.it/~luca
----------------------------------------------