From: Roy S. <roy...@ic...> - 2004-11-18 23:30:58
On Thu, 18 Nov 2004, KIRK, BENJAMIN (JSC-EG) (NASA) wrote:

> If you do need to perform such an inversion I would suggest doing it
> through the DenseMatrix class. Right now it implements (by hand) lu
> factorization without pivoting...

Hmmm... I need pivoting; there's no fixed ordering of the matrix rows that can guarantee you won't have a zero on the diagonal, and zero diagonal entries are pretty much guaranteed to happen for elements whose master->physical transformation is just a 90 degree rotation. I suppose I only need it in a very limited sense; I could probably do it by hand as the matrix is built without making the code too ugly.

> In reality, however, this method is really just a placeholder for a
> call to LAPACK (which is guaranteed to be present if PETSc is there)
> or replaced altogether with the Boost matrix implementation...

Have you seen any recent benchmarks of ublas? It (like all the C++ linear algebra codes, unfortunately) occasionally had some trouble keeping up with C/Fortran/Assembly code on a benchmark I ran across recently:

http://www.linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF03/Mello_U.pdf

But this paper is nearly 2 years old now; the version of ublas they tested apparently even predates its addition to Boost.

> In either case, I assume this would need to be done once per element & you
> can cache either the lu factorization or the solution?

Right; the coefficients don't change until the geometry changes. I don't think caching would even be needed, if only there were some way to get a solver to take the sparsity pattern into account ahead of time. The 2D matrix, for instance, only has 36 (out of 144) non-zero entries, and only 3 of those are above the main diagonal. 3D will have many more above-diagonal entries, but the sparsity ratio should be even better. What would be ideal is some sort of trick that would optimize away the multiply-by-zero and add-zero operations.
I'd love to be able to write the code in matrix form, write the sparsity pattern down as a constant, and make something like that "C++ templates are Turing complete" magic do the rest.

> Perhaps the data could then be written to file so at least restarts
> are quick??

I don't think that's necessary, as long as they can be cached in RAM elegantly. Doing these inversions once per mesh refinement shouldn't be a problem; it's just doing them once per linear solve that could get ugly.

---
Roy