From: Jed Brown <jed@59...> - 2009-05-31 17:52:33

I'm using libmesh to provide a reference implementation of Q_2 elements for Poisson, and Q_2-Q_1 for Stokes. If possible, I would also like to compare with third-order elements, but HIERARCHIC appears to produce insanely ill-conditioned matrices. For example,

  ./ex4-opt -d 3 -n 4 -f HIERARCHIC -o THIRD -ksp_monitor -ksp_converged_reason -pc_type lu -pc_factor_mat_solver_package superlu

does not converge. All of petsc, umfpack, mumps, and spooles are unsuccessful as well. Apparently there is no third-order Lagrange implementation, so perhaps I'm out of luck here.

For the Stokes solver, I modified ex11 for 3D, but no Schur complement preconditioners are available (I could plug it into the new factorization version of PCFieldSplit, but don't have time at the moment). Since BoomerAMG [1] and ML fail when naively given indefinite matrices, the libmesh Stokes implementation isn't really getting a fair shake (I probably just won't include it). I know some people on this list solve (Navier-)Stokes, so I'm curious whether anyone has an algorithmically scalable implementation they'd like to compare.

[1] BoomerAMG does a great job of handling the penalty boundary conditions, so on a nontrivial problem size the residual initially drops by a large factor and then stagnates (watch with -ksp_monitor_true_residual).

Heads up on a change in petsc-dev: the arguments to MatGetSubMatrix used to reflect implementation details of MPIAIJ/MPIBAIJ and were fundamentally unscalable (though fine for the sizes we've seen so far). We decided to drop the 'csize' argument and take a parallel index set for 'iscol' (instead of the serial gathered one used through 3.0.0). The current implementations still do the gather, so there is still a scalability issue when taking submatrices with a number of columns comparable to single-node memory, but at least the interface is now scalable (I take submatrices of a MatShell, so this is good).
Speak up if you run into this issue; it can be made scalable, but it's not a high priority at the moment. It looks like PetscMatrix::_get_submatrix() will work correctly in parallel if you drop the csize argument (PETSC_DECIDE in your current source). The arguments become the rows and columns desired in the *local* part of the new submatrix. (Yes, the local part can contain rows and columns that were formerly remote.)

Jed
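For illustration, here is a minimal sketch of what the post-change calling sequence looks like. This is not from the original mail: the helper name `extract_submatrix` is hypothetical, and it assumes the petsc-dev signatures of that era (four-argument MatGetSubMatrix with no csize, and the pre-3.2 by-value ISDestroy/three-index-argument ISCreateGeneral forms); check your PETSc version before copying.

```c
#include <petscmat.h>

/* Hypothetical helper (not from the mail above): take a parallel
 * submatrix with the petsc-dev interface described in the text.
 * 'isrow' and 'iscol' are both parallel index sets: each rank lists
 * the rows and columns it wants in its *local* part of B, which may
 * include rows/columns that were remote in A.  There is no 'csize'. */
PetscErrorCode extract_submatrix(Mat A, PetscInt n, const PetscInt rows[],
                                 const PetscInt cols[], Mat *B)
{
  IS             isrow, iscol;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Parallel index sets; no serial gather on the caller's side. */
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, n, rows, &isrow);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, n, cols, &iscol);CHKERRQ(ierr);
  /* The csize argument is gone; reuse flag and result pointer follow. */
  ierr = MatGetSubMatrix(A, isrow, iscol, MAT_INITIAL_MATRIX, B);CHKERRQ(ierr);
  ierr = ISDestroy(isrow);CHKERRQ(ierr);
  ierr = ISDestroy(iscol);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

This mirrors the PetscMatrix::_get_submatrix() situation described above: the caller simply stops passing csize (PETSC_DECIDE) and interprets the index sets as the desired local rows/columns of the result.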