From: Tim K. <tim...@ce...> - 2010-09-15 10:44:18
On Wed, 15 Sep 2010, Jed Brown wrote:

> On Tue, 14 Sep 2010 09:00:39 +0200 (CEST), Tim Kroeger <tim...@ce...> wrote:
>> Well, I could make it a vector instead, but then, on the other hand, I
>> guess that in most situations the user does not have any idea of which
>> order is best to use.
>
> That's what ordering packages are for, but defining the interface to use
> std::set prohibits a smart user from providing an ordering.

Yes, you're right.

>> Well, what I would like to do, but don't know how, is to let subrhs be
>> a vector with the same layout as rhs and with any dof being owned by
>> that processor which owns the corresponding dof in rhs. Or, perhaps
>> better, do an automatic load balancing for subrhs. (Likewise for
>> subsolution of course.)
>
> Each process provides a list of global indices that it wants to own in
> the sub-problem. The length of this list determines the local size.
> You probably want
>
>    VecCreate(comm,X);
>    VecSetSizes(X,local_size,PETSC_DETERMINE);
>    VecSetFromOptions(X);
>
> which will create a serial vector in serial and a parallel vector in
> parallel. The user can use a partitioning algorithm if they want to
> redistribute dofs (or you could add a runtime option to use an arbitrary
> algorithm; PETSc offers MatPartitioning for this).

OK, I understand. However, using VecSetSizes(X,PETSC_DECIDE,global_size) seems even more useful to me. Or am I wrong somehow?

>> Also, I am unsure whether or not I have to take care about the matrix
>> rows being owned by the same processors that own the corresponding
>> vector dofs.
>
> It's all determined by the layout of the IS. If you don't want rows to
> "move", then the IS should only consist of global indices that are
> presently owned.

Ah, I understand. That means that the current implementation of my new method will make a matrix where each entry is stored on all processors. This is certainly not efficient. What does PETSc do if the matrix rows are owned by the wrong processors?
Does it crash, or will it just compensate by communicating all the information around as necessary? Anyway, if I do VecSetSizes(X,PETSC_DECIDE,global_size) as described above, the best thing would then perhaps be to ask the vector which dofs are local to the current processor and then tell the matrix to own the rows that correspond to these dofs (but still all columns, since otherwise some components are lost). I guess VecGetOwnershipRange() is the thing to use then, right? However, if the dofs are *not* consecutive, what will VecGetOwnershipRange() do then?

>>> If dofs == NULL, this function is a no-op?
>>
>> Yes. You see, this function is for all solver packages except PETSc,
>> and they should just give an error message when somebody attempts to
>> use solve_only_on(). But, of course, if a NULL pointer is used, which
>> means solve on the entire domain, there is no need to give an error
>> message, because this is what is done anyway, isn't it? (-:
>
> I wrote that comment first, before realizing that solve_only_on was just
> restricting the domain and didn't solve anything. Perhaps something
> like set_active_domain would be a better name?

Yes, Roy also seemed to be confused by this. I'll wait to see whether he suggests some name, and if not, I'll use your suggestion.

Best Regards,

Tim

-- 
Dr. Tim Kroeger
CeVis -- Center of Complex Systems and Visualization
University of Bremen
tim...@ce...             Universitaetsallee 29
tim...@me...             D-28359 Bremen
Phone +49-421-218-7710   Germany
Fax +49-421-218-4236