From: Jed B. <je...@59...> - 2010-09-15 15:45:22
On Wed, 15 Sep 2010 17:34:14 +0200 (CEST), Tim Kroeger <tim...@ce...> wrote:

> But now, let's assume
>
>    all = [ 0 .. 9 | 10 .. 19 | 20 .. 29 ]
>
> and
>
>    sub = [ 8 9 | 10 .. 19 | 20 21 22 ]
>
> Assume further that the matrix is tridiagonal.  If I then do what you
> suggested, four matrix entries are lost (that is, (9,10), (10,9),
> (19,20), (20,19)), aren't they?  That's why I thought the IS for the
> columns should be larger than that for the rows.  If the user-provided
> dof indices are only those already owned by the processor, it seems to
> be necessary to communicate those across the processors in order to do
> the correct column selection.  Is this right?

No, the local part of the column IS denotes the column indices that will
belong to that "diagonal block".  To put it differently, it defines the
ownership of a vector that the matrix can be multiplied by.  Suppose we
want the equation y = A x to make sense.  The "row IS" specifies the
distribution of y, and the "column IS" specifies the distribution of x.
The diagonal block of A is the tensor product of the local parts of the
row and column ISs.

> I don't find this weird.  Well, we could let the user decide between
> several modes here, that is KEEP_SOLUTION, CLEAR, TAKE_RHS (or
> whatever names we agree to).  Roy, what do you think?

Usually a (tightly converged) solve is a linear operation from the
right-hand side to the solution; the values in the solution vector only
specify the starting point for the iteration.  If tight tolerances are
used, then the starting vector only affects the time to solution, not
the actual value.  Solving on a subdomain without modifying the solution
vector for "inactive" dofs has deeply different semantics.

Jed
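[Editor's note: the following is a plain-Python sketch, not PETSc, added to illustrate the point about the four "lost" entries.  The index sets and the 30x30 tridiagonal matrix follow Tim's example; the rank ownership and helper names are illustrative assumptions.  The couplings (9,10), (10,9), (19,20), (20,19) survive in the submatrix; they simply land in its off-diagonal blocks rather than in any rank's diagonal block.]

```python
# Local parts of the sub IS per rank (global indices into the full matrix),
# exactly as in the example above.
sub_is = [[8, 9], list(range(10, 20)), [20, 21, 22]]
sub = [i for part in sub_is for i in part]   # concatenated global IS

# Tridiagonal 30x30 matrix stored as a {(i, j): value} dict.
n = 30
A = {}
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            A[(i, j)] = 1.0

# Extract the submatrix: row r / column c of the submatrix correspond to
# global row sub[r] / global column sub[c].
B = {}
for r, gi in enumerate(sub):
    for c, gj in enumerate(sub):
        if (gi, gj) in A:
            B[(r, c)] = A[(gi, gj)]

# The coupling (9,10) of A appears at submatrix position (1, 2), since
# sub[1] == 9 and sub[2] == 10.  It sits in the off-diagonal block between
# rank 0's rows and rank 1's columns -- it is not lost.
assert B[(1, 2)] == 1.0 and B[(2, 1)] == 1.0

# Classify each nonzero of B as diagonal-block (row and column owned by
# the same rank) or off-diagonal-block.
offsets = [0, 2, 12, 15]   # running sizes of the local IS parts

def owner(k):
    return max(r for r in range(3) if offsets[r] <= k)

n_diag = sum(1 for (r, c) in B if owner(r) == owner(c))
n_off = sum(1 for (r, c) in B if owner(r) != owner(c))
print("diagonal-block entries:", n_diag, " off-diagonal entries:", n_off)
# -> diagonal-block entries: 39  off-diagonal entries: 4
```

The four off-diagonal-block entries are exactly the couplings in question; the column IS only decides which columns belong to each rank's diagonal block, not which columns exist in the submatrix.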
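[Editor's note: a second illustrative sketch (plain Python, not PETSc) of the point about initial guesses: with tight tolerances an iterative solve is effectively a function of the right-hand side alone, and the starting vector only changes how many iterations are needed.  Jacobi iteration on a small, made-up diagonally dominant system; all names are illustrative.]

```python
def jacobi(A, b, x0, tol=1e-12, maxit=10000):
    """Jacobi iteration; stop when successive iterates agree to tol."""
    n = len(b)
    x = list(x0)
    for it in range(maxit):
        xn = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
              / A[i][i] for i in range(n)]
        if max(abs(xn[i] - x[i]) for i in range(n)) < tol:
            return xn, it
        x = xn
    return x, maxit

# Small diagonally dominant system, so Jacobi converges.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]

# Same right-hand side, very different starting vectors.
x_a, it_a = jacobi(A, b, [0.0, 0.0, 0.0])
x_b, it_b = jacobi(A, b, [100.0, -50.0, 7.0])

# Both runs reach the same solution to well within the tolerance; only
# the iteration counts differ.
assert max(abs(x_a[i] - x_b[i]) for i in range(3)) < 1e-10
print("iterations:", it_a, it_b)
```

A subdomain solve that is expected to *preserve* the incoming values at inactive dofs is a different operation: its output genuinely depends on the input vector, not just on the right-hand side.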