From: Tim Kroeger <tim.kroeger@ce...>  2010-09-09 13:52:57

Dear all,

Let X be a computational domain, covered by a Mesh on which an EquationSystems object is based. Let X2 be a subset of X, given by the union of selected grid elements (which are, say, marked by certain values of subdomain_id).

Assume I want (at some point in my application) to solve a system only on X2 (say, using Dirichlet boundary conditions on all boundaries of X2 that are not boundaries of X). I can easily achieve this by assembling Dirichlet conditions everywhere outside X2 and then solving as usual. However, I then cannot benefit from the performance gain that I should theoretically get if X2 contains many fewer elements than X. This is in particular true if I am using a direct solver (such as SuperLU_DIST, wrapped via PETSc).

What is the easiest way to do this more efficiently, that is,

(1) let SuperLU_DIST only see the necessary part of the matrix,

(2) if possible, guarantee sensible load balancing, and

(3) not run into problems due to dofs that appear free when viewed from inside X2 but are actually constrained in X?

Thank you in advance for your ideas.

Best Regards,

Tim

--
Dr. Tim Kroeger
CeVis -- Center of Complex Systems and Visualization
University of Bremen, Universitaetsallee 29, D-28359 Bremen, Germany
tim.kroeger@...
Phone +49-421-218-7710, Fax +49-421-218-4236
From: David Knezevic <dknez@MIT.EDU>  2010-09-09 13:58:36

Hi Tim,

You can specify variables that are defined only on X2 by specifying the appropriate subdomain id(s) when you add the variable to the system. See add_variable in system.h.

David

On 09/09/2010 09:27 AM, Tim Kroeger wrote:
> Let X be a computational domain, covered by a Mesh on which an
> EquationSystems object is based. Let X2 be a subset of X [...]
> Assume I want (at some point in my application) to solve a system
> only on X2 [...] What is the easiest way to do this more
> efficiently?
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-09 14:06:47

Dear David,

On Thu, 9 Sep 2010, David Knezevic wrote:
> You can specify variables that are defined only on X2 by specifying the
> appropriate subdomain id(s) when you add the variable to the system. See
> add_variable in system.h

Thank you for your reply. How does this solution behave if the subdomain_id values change during runtime? I guess it won't behave the way I need it to. That is, the set X2 (in my notation) may change quite frequently within the application, so that (if my assumption is correct) I can't use your solution.

Anybody having a different idea?

Best Regards,

Tim
From: Jed Brown <jed@59...>  2010-09-09 14:06:31

On Thu, 9 Sep 2010 15:27:42 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
> Assume I want (at some point in my application) to solve a system only
> on X2 [...] However, then I cannot benefit from the performance gain
> that I should theoretically have if X2 contains many fewer elements
> than X. This is in particular true if I am using a direct solver (such
> as SuperLU_DIST, wrapped via PETSc).

Do you want to assemble X, or are you really only working with X2? If the former, you can MatGetSubMatrix (you just need an index set defining the subdomain) the part you want and solve with that.

Maybe more interesting, you can provide the index sets to PCFieldSplit, which gives you an additive or multiplicative method with a KSP created for each split (which you can control under the fieldsplit_{0,1,...}_{ksp,pc}_ prefixes).

Jed
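[Editor's note: Jed's MatGetSubMatrix suggestion can be illustrated without PETSc. Conceptually, given an index set of the dofs belonging to X2, one extracts the corresponding rows/columns of the matrix and the matching right-hand-side entries, and solves only that block. The sketch below is a plain-C++ stand-in with hypothetical helper names (`extract_submatrix`, `extract_subvector`); real code would keep the matrix sparse and let PETSc's MatGetSubMatrix do the extraction from an IS.]

```cpp
#include <cstddef>
#include <vector>

// Illustration only: restrict a dense matrix A and a vector b to the
// rows/columns listed in the index set `is`.  This mirrors what
// MatGetSubMatrix does for a parallel sparse matrix.
std::vector<std::vector<double>>
extract_submatrix(const std::vector<std::vector<double>>& A,
                  const std::vector<std::size_t>& is)
{
  std::vector<std::vector<double>> sub(is.size(),
                                       std::vector<double>(is.size()));
  for (std::size_t i = 0; i < is.size(); ++i)
    for (std::size_t j = 0; j < is.size(); ++j)
      sub[i][j] = A[is[i]][is[j]];  // keep only dofs inside X2
  return sub;
}

std::vector<double>
extract_subvector(const std::vector<double>& b,
                  const std::vector<std::size_t>& is)
{
  std::vector<double> sub(is.size());
  for (std::size_t i = 0; i < is.size(); ++i)
    sub[i] = b[is[i]];  // restrict the rhs to the same index set
  return sub;
}
```

The direct solver then only ever sees the small system, which is exactly property (1) that Tim asked for.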
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-09 14:14:52

Dear Jed,

On Thu, 9 Sep 2010, Jed Brown wrote:
> Do you want to assemble X, or are you really only working with X2?

Most probably, my assemble method will loop over X but skip every element that is not contained in X2. This seems to be the easiest assembly method at the moment.

> If the former, you can MatGetSubMatrix (you just need an index set
> defining the subdomain) the part you want and solve with that.

That sounds like a good idea. I will think about this. It should certainly be worth implementing a general method for this in libMesh somehow (with an exception thrown if a solver other than PETSc is used...). The question is what the API should look like.

> Maybe more interesting, you can provide the index sets to
> PCFieldSplit which gives you an additive or multiplicative method
> with a KSP created for each split (which you can control under the
> fieldsplit_{0,1,...}_{ksp,pc}_ prefixes).

I don't understand this part at the moment.

Best Regards,

Tim
From: David Knezevic <dknez@MIT.EDU>  2010-09-09 14:20:23

On 09/09/2010 10:06 AM, Tim Kroeger wrote:
> Thank you for your reply. How does this solution behave if the
> subdomain_id values change during runtime? [...] The set X2 (in my
> notation) may change within the application quite frequently.

I haven't tried subdomain-only variables with changing subdomains, but it seems possible that equations_systems.reinit(); might do the appropriate resizing and reindexing for you (or could be modified to do so)... I don't know though.

David
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-09 14:50:22

On Thu, 9 Sep 2010, David Knezevic wrote:
> I haven't tried subdomain-only variables with changing subdomains, but
> it seems possible that equations_systems.reinit(); might do the
> appropriate resizing and reindexing for you (or could be modified to
> do so)... I don't know though.

Thank you again. Even if this works, I think that Jed's solution is better suited to my situation, in particular because I want to keep the values of the variable outside X2.

Best Regards,

Tim
From: Jed Brown <jed@59...>  2010-09-09 14:23:06

On Thu, 9 Sep 2010 16:14:39 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
> Most probably, my assemble method will loop over X but skip every
> element that is not contained in X2. This seems to be the easiest
> assembly method at the moment.

Okay, do you mind storing explicit zeros for all of X, even though you may not be touching them (or solving with them)? The alternative is to reallocate when the sparsity pattern changes.

> That sounds like a good idea. I will think about this. [...] The
> question is what the API should look like.
>
> [regarding PCFieldSplit:] I don't understand this part at the moment.

Suppose you have a system which after some permutations can be written

  [A B; C D]

Then fieldsplit can give you solvers that solve these blocks together (like block Jacobi, but partitioned by user-defined "fields") or multiplicatively (like a multiplicative Schwarz method, or block Gauss-Seidel with user-defined, and possibly overlapping, splits), or Schur-complement methods. There is no restriction to 2x2. The solvers for all the pieces can be controlled at the command line (or through the API, but it's more work that way).

Jed
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-09 14:47:57

On Thu, 9 Sep 2010, Jed Brown wrote:
> Okay, do you mind storing explicit zeros for all of X, even though you
> may not be touching them (or solving with them)? The alternative is to
> reallocate when the sparsity pattern changes.

I don't mind storing all the zeros. The only condition is that SuperLU_DIST does not get to see anything outside X2. I guess your recommendation is the right thing to do then.

> Suppose you have a system which after some permutations can be written
>
>   [A B; C D]
>
> Then fieldsplit can give you solvers that solve these blocks together
> [...] The solvers for all the pieces can be controlled at the command
> line (or through the API, but it's more work that way).

As far as I understand, this allows me to specify a partition of X2 into some subsets and then select solvers/preconditioners for these subsets as well as a global method to combine them. However, I assume that this will *not* move dofs to different processors, will it? So I can't use this to improve the load balancing, can I?

I guess that this might be a particularly useful thing to do if X2 naturally consists of more than one connected component. This might in fact be the case in my application, but not canonically, and figuring this out might be complicated. I also assume that SuperLU_DIST identifies the connected components anyway, so for the moment I don't think I will need this.

Best Regards,

Tim
From: Jed Brown <jed@59...>  2010-09-09 15:02:39

On Thu, 9 Sep 2010 16:47:38 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
> As far as I understand, this allows me to specify a partition of X2
> into some subsets and then select solvers/preconditioners for these
> subsets as well as a global method to combine them. However, I assume
> that this will *not* move dofs to different processors, will it? So I
> can't use this to improve the load balancing, can I?

Yes, you can. The index sets have a communicator; their distribution specifies the partition to use (so you can move dofs around at will).

> I guess that this might be a particularly useful thing to do if X2
> naturally consists of more than one connected component. [...] For the
> moment, I don't think I will need this.

Yup, depends on the problem and the method you are using.

Jed
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-09 15:27:20

On Thu, 9 Sep 2010, Tim Kroeger wrote:
> On Thu, 9 Sep 2010, Jed Brown wrote:
>> If the former, you can MatGetSubMatrix (you just need an index set
>> defining the subdomain) the part you want and solve with that.
>
> That sounds like a good idea. I will think about this. It should
> certainly be worth implementing a general method for this in libMesh
> somehow [...] The question is what the API should look like.

My suggestion for an API in libMesh is as follows:

LinearSolver gets a method

  LinearSolver::solve_only_on(const std::set<unsigned int>* const)

after a call to which each call to LinearSolver::solve() will remove all dofs not contained in the set from the matrix before actually solving. The default implementation of this will be libmesh_not_implemented(), and only for PetscLinearSolver will I implement the solution using MatGetSubMatrix() as Jed pointed out. (Optionally, efficient load balancing using PCFieldSplit() can be added later.)

Likewise, System gets a method

  System::solve_only_on(const std::set<subdomain_id_type>* const)

which makes System::solve() call LinearSolver::solve_only_on() with the correct index set, that is, such that it solves on all dofs that are associated with at least one elem having a subdomain_id value that is contained in the set.

Calling either solve_only_on(NULL) should in both cases restore the default behaviour.

libMesh developers, what do you think?

Best Regards,

Tim
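[Editor's note: Tim's proposed interface can be sketched with a self-contained mock. `MockLinearSolver` below is a hypothetical stand-in, not libMesh code: it solves a diagonal system so the restriction semantics are visible. After solve_only_on(&dofs), solve() touches only the listed dofs and leaves the rest alone (the "Dirichlet" behaviour discussed in the thread); solve_only_on(NULL) restores the default.]

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Hypothetical mock-up of the proposed solve_only_on() semantics.
// Real libMesh code would forward the dof set to PETSc via an IS and
// MatGetSubMatrix; here we solve diag(a) x = b for illustration.
class MockLinearSolver
{
public:
  MockLinearSolver() : _restrict_to(NULL) {}

  // NULL pointer disables the restriction, as in Tim's proposal.
  void solve_only_on(const std::set<unsigned int>* const dofs)
  { _restrict_to = dofs; }

  // Solve only the active dofs; inactive dofs keep their old values.
  void solve(const std::vector<double>& a,
             const std::vector<double>& b,
             std::vector<double>& x) const
  {
    for (std::size_t i = 0; i < x.size(); ++i)
      if (_restrict_to == NULL || _restrict_to->count(i))
        x[i] = b[i] / a[i];
  }

private:
  const std::set<unsigned int>* _restrict_to;  // not owned, per proposal
};
```

Note that, as Jed later points out in the review, not owning the caller's set is a debatable design choice.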
From: Roy Stogner <roystgnr@ic...>  2010-09-09 15:41:44

On Thu, 9 Sep 2010, Tim Kroeger wrote:
> My suggestion for an API in libMesh is as follows:
>
> LinearSolver gets a method
>
>   LinearSolver::solve_only_on(const std::set<unsigned int>* const)
> [...]
> Likewise, System gets a method
>
>   System::solve_only_on(const std::set<subdomain_id_type>* const)
> [...]
> libMesh developers, what do you think?

I like it...

I think I'd prefer to have the std::set arguments limited to dof ids and make it a std::vector for subdomain ids. That way people could do a System::solve_only_on for subsets like patches that aren't predefined as subdomains.

Also, we'd like to have the same functionality for nonlinear solves too, but it's fine to start with just LinearSolver.

One question, since I haven't followed the thread closely enough: what are the effective boundary conditions on subset boundaries which don't correspond to domain boundaries? The dofs outside the subset would be treated as fixed, but neighboring dofs would still make their contributions to the subsystem?

--
Roy
From: Jed Brown <jed@59...>  2010-09-09 15:48:15

On Thu, 9 Sep 2010 10:41:29 -0500 (CDT), Roy Stogner <roystgnr@...> wrote:
> On Thu, 9 Sep 2010, Tim Kroeger wrote:
>
>> System::solve_only_on(const std::set<subdomain_id_type>* const)

You might want this to contain a communicator (i.e. be equivalent to a PETSc IS) and an ordering (as Roy says) so that the same API can provide redistribution and a good ordering for the subdomain problems.

> One question, since I haven't followed the thread closely enough: what
> are the effective boundary conditions on subset boundaries which don't
> correspond to domain boundaries? The dofs outside the subset would be
> treated as fixed, but neighboring dofs would still make their
> contributions to the subsystem?

The boundary conditions are effectively Dirichlet. For a nonlinear problem, you would have to provide function values for this region outside the "active" domain.

Jed
From: John Peterson <peterson@ta...>  2010-09-09 15:53:33

On Thu, Sep 9, 2010 at 10:27 AM, Tim Kroeger <tim.kroeger@...> wrote:
> My suggestion for an API in libMesh is as follows:
>
> LinearSolver gets a method
>
>   LinearSolver::solve_only_on(const std::set<unsigned int>* const)
>
> Likewise, System gets a method
>
>   System::solve_only_on(const std::set<subdomain_id_type>* const)

Are you sure you don't want to add one more overloaded solve() member to LinearSolver? ;)

--
John
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-10 06:44:56

On Thu, 9 Sep 2010, John Peterson wrote:
>> LinearSolver::solve_only_on(const std::set<unsigned int>* const)
>
> Are you sure you don't want to add one more overloaded solve() member
> to LinearSolver? ;)

Good idea, and I would also suggest renaming LinearSolver::print_convergence_reason() to LinearSolver::solve(void), since there is no solve() version with a void argument yet. (;

(To be serious: besides not making things clearer, doing it that way would require *six* new overloaded solve() members, since each of the existing ones should be able to be performed on a subset. But I guess you already knew this...)

Best Regards,

Tim
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-10 06:34:48

On Thu, 9 Sep 2010, Roy Stogner wrote:
> I think I'd prefer to have the std::set arguments limited to dof ids
> and make it a std::vector for subdomain ids. That way people could do
> a System::solve_only_on for subsets like patches that aren't
> predefined as subdomains.

I don't quite understand what you mean. That is, the two parts of your first sentence seem to contradict each other. I understand that you want

  LinearSolver::solve_only_on(const std::set<unsigned int>* const);
  System::solve_only_on(const std::set<unsigned int>* const);

where both methods get a std::set of dof ids. But then again, you say you want a std::vector for subdomain ids, and I don't know how this fits together.

Ah, perhaps you mean that there should be *two* versions of System::solve_only_on(), one taking a std::set of dof ids, and the other like this:

  System::solve_only_on(const std::vector<subdomain_id_type>* const);

where using "vector" rather than "set" avoids problems in the case that someone defines subdomain_id_type as unsigned int. Is this what you mean?

> Also, we'd like to have the same functionality for nonlinear solves
> too, but it's fine to start with just LinearSolver.

Yes, I don't object if you do this. (:

> One question, since I haven't followed the thread closely enough: what
> are the effective boundary conditions on subset boundaries which don't
> correspond to domain boundaries?

Well, as always with boundary conditions, you get what you assemble. That is (if using the subdomain_id version), the dofs on the boundary of your active domain will be included in the solve, and you can assemble whatever you like. However, for most applications I assume that Dirichlet boundary conditions are the only ones that make sense.

Best Regards,

Tim

BTW: I won't start implementing this before next week anyway, because there are different things to do today.
From: Roy Stogner <roystgnr@ic...>  2010-09-10 15:53:18

On Fri, 10 Sep 2010, Tim Kroeger wrote:
> Ah, perhaps you mean that there should be *two* versions of
> System::solve_only_on(), one taking a std::set of dof ids, and the
> other like this:
>
>   System::solve_only_on(const std::vector<subdomain_id_type>* const);
>
> where using "vector" rather than "set" avoids problems in the case
> that someone defines subdomain_id_type as unsigned int. Is this what
> you mean?

Exactly. Either that, or we have some sort of SystemSubset class to abstract everything out. Our functions exclusively take SystemSubset& args, SystemSubset::get_dof_ids() gives them the dof ids to restrict to, and then at our leisure we can give users member functions which allow applications to specify a set of dof ids, a set of subdomain ids, a set of elements, whatever.

> Well, as always with boundary conditions, you get what you assemble.

"Assemble whatever you like" is a good point; I was imagining us being restricted to subsets of a global formulation, and then needing multiple options to handle different types of local boundaries, but there'd be nothing stopping a user from evaluating a boundary-aware local formulation instead.

--
Roy
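[Editor's note: Roy's SystemSubset idea can be sketched in a few lines. Everything below is a hypothetical mock-up (class and member names invented for illustration, not libMesh code): solvers only ever call get_dof_ids(), while concrete subclasses decide whether the subset was given as explicit dofs or as subdomain ids resolved through a toy subdomain-to-dofs map.]

```cpp
#include <map>
#include <set>
#include <vector>

// Hypothetical abstraction: solvers see only get_dof_ids().
struct SystemSubset
{
  virtual ~SystemSubset() {}
  virtual std::vector<unsigned int> get_dof_ids() const = 0;
};

// Subset given directly as dof ids (e.g. a patch not predefined as a
// subdomain, the case Roy mentions).
struct DofSubset : SystemSubset
{
  std::vector<unsigned int> dofs;
  explicit DofSubset(const std::vector<unsigned int>& d) : dofs(d) {}
  std::vector<unsigned int> get_dof_ids() const { return dofs; }
};

// Subset given as subdomain ids, resolved through a toy map from
// subdomain id to the dofs living on that subdomain's elements.
struct SubdomainSubset : SystemSubset
{
  std::set<unsigned int> subdomains;
  const std::map<unsigned int, std::vector<unsigned int>>& subdomain_dofs;

  SubdomainSubset(const std::set<unsigned int>& s,
                  const std::map<unsigned int, std::vector<unsigned int>>& m)
    : subdomains(s), subdomain_dofs(m) {}

  std::vector<unsigned int> get_dof_ids() const
  {
    std::set<unsigned int> out;  // dedupe dofs shared between subdomains
    for (unsigned int s : subdomains)
      {
        std::map<unsigned int, std::vector<unsigned int>>::const_iterator
          it = subdomain_dofs.find(s);
        if (it != subdomain_dofs.end())
          out.insert(it->second.begin(), it->second.end());
      }
    return std::vector<unsigned int>(out.begin(), out.end());
  }
};
```

The point of the design is that new ways of specifying a subset (element lists, patches, ...) can be added later without touching the solver-facing interface.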
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-13 09:35:28

On Thu, 9 Sep 2010, Jed Brown wrote:
> Do you want to assemble X, or are you really only working with X2? If
> the former, you can MatGetSubMatrix (you just need an index set defining
> the subdomain) the part you want and solve with that.

What will I have to do with the right hand side and the solution vectors then? I assumed there would be something like VecGetSubVector(), but that doesn't seem to exist. Or is there no need to do anything at all?

Best Regards,

Tim
From: Jed Brown <jed@59...>  2010-09-13 09:39:52

On Mon, 13 Sep 2010 11:35:17 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
> What will I have to do with the right hand side and the solution
> vectors then? I assumed there would be something like
> VecGetSubVector(), but that doesn't seem to exist. Or is there no
> need to do anything at all?

PCFieldSplit does this for you. If you are extracting submatrices yourself, then you would create a VecScatter to extract the part of the vector you want (with parallel redistribution if desired).

Jed
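[Editor's note: the VecScatter pair Jed describes amounts to a gather/scatter by index set: pull the X2 entries of the global rhs into a small vector, solve there, and push the solution back, leaving everything outside X2 untouched. The plain-C++ sketch below illustrates only this data movement (hypothetical helper names); real code would use VecScatterCreate/VecScatterBegin/VecScatterEnd with the same IS handed to MatGetSubMatrix, which also handles parallel redistribution.]

```cpp
#include <cstddef>
#include <vector>

// Gather the entries of `global` listed in the index set `is` into
// `local` (what a forward VecScatter does, serially).
void gather(const std::vector<double>& global,
            const std::vector<std::size_t>& is,
            std::vector<double>& local)
{
  local.resize(is.size());
  for (std::size_t i = 0; i < is.size(); ++i)
    local[i] = global[is[i]];
}

// Scatter `local` back into the same positions of `global`; entries
// outside the index set keep their previous values.
void scatter(const std::vector<double>& local,
             const std::vector<std::size_t>& is,
             std::vector<double>& global)
{
  for (std::size_t i = 0; i < is.size(); ++i)
    global[is[i]] = local[i];
}
```

This is also why Tim's requirement of keeping the variable's values outside X2 is satisfied automatically: the scatter only writes the X2 entries.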
From: Tim Kroeger <tim.kroeger@ce...>  2010-09-13 13:18:41
Attachments:
patch

On Mon, 13 Sep 2010, Jed Brown wrote:
> PCFieldSplit does this for you. If you are extracting submatrices
> yourself, then you would create a VecScatter to extract the part of the
> vector you want (with parallel redistribution if desired).

I have now implemented it for one of the solve() methods of the PetscLinearSolver class. Since my experience with PETSc is still limited, I'm somewhat unsure whether the things I did are the correct ones. Jed, can you have a look at it and let me know whether this looks right? If it does, I will continue doing the same thing for the remaining solve() methods and then work on the System class, following Roy's idea with the subset class. I will not check in anything before I have extended my application to successfully use this, but this may take some time.

Best Regards,

Tim
From: Jed Brown <jed@59...>  2010-09-13 15:48:34

Comments inline.

On Mon, 13 Sep 2010 15:18:30 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
> Index: include/solvers/petsc_linear_solver.h
> ===================================================================
> --- include/solvers/petsc_linear_solver.h (Revision 3957)
> +++ include/solvers/petsc_linear_solver.h (Arbeitskopie)
> @@ -122,7 +122,15 @@
>     * Initialize data structures if not done so already plus much more
>     */
>    void init (PetscMatrix<T>* matrix);
> +
>    /**
> +   * After calling this method, all successive solves will be limited
> +   * to the given set of dofs. This mode can be disabled by calling
> +   * this method with \p dofs being a \p NULL pointer.
> +   */
> +  virtual void solve_only_on (const std::set<unsigned int>* const dofs);
> +
> +  /**
>     * Call the Petsc solver. It calls the method below, using the
>     * same matrix for the system and preconditioner matrices.
>     */
> @@ -253,13 +261,36 @@
>     * Krylov subspace context
>     */
>    KSP _ksp;
> +
> +  /**
> +   * Set of dofs to which any solve is limited (\p NULL means solve on
> +   * all dofs).
> +   */
> +  const std::set<unsigned int>* _solve_only_on_dofs;

I don't think a set is the correct data structure, because the order actually matters (it affects memory performance and the strength of preconditioners). I also don't see why it needs to be kept separate from the IS; you're duplicating information, it uses more memory, and it will have to be kept consistent.

> +
> +  /**
> +   * PETSc index set containing the dofs on which to solve (\p NULL
> +   * means solve on all dofs).
> +   */
> +  IS _solve_only_on_is;
> +
> +  /**
> +   * PETSc index set containing dofs from 0 to n-1 where n is the
> +   * number of dofs on which to solve (\p NULL means solve on all
> +   * dofs).
> +   */
> +  IS _solve_only_on_is_seq;

You don't need this IS, see below.
> +
> };
>
>
> /*----------------------- functions ----------------------------------*/
> template <typename T>
> inline
> -PetscLinearSolver<T>::PetscLinearSolver ()
> +PetscLinearSolver<T>::PetscLinearSolver () :
> +  _solve_only_on_dofs(NULL),
> +  _solve_only_on_is(NULL),
> +  _solve_only_on_is_seq(NULL)
> {
>   if (libMesh::n_processors() == 1)
>     this->_preconditioner_type = ILU_PRECOND;
> Index: include/solvers/linear_solver.h
> ===================================================================
> --- include/solvers/linear_solver.h (Revision 3957)
> +++ include/solvers/linear_solver.h (Arbeitskopie)
> @@ -24,6 +24,7 @@
>
>
> // C++ includes
> +#include <set>
>
> // Local includes
> #include "libmesh_common.h"
> @@ -117,6 +118,13 @@
>   void attach_preconditioner(Preconditioner<T> * preconditioner);
>
>   /**
> +   * After calling this method, all successive solves will be limited
> +   * to the given set of dofs. This mode can be disabled by calling
> +   * this method with \p dofs being a \p NULL pointer.
> +   */
> +  virtual void solve_only_on (const std::set<unsigned int>* const dofs);
> +
>   /**
>    * This function calls the solver
>    * "_solver_type" preconditioned with the
>    * "_preconditioner_type" preconditioner.
>    * Note that this method
> Index: src/solvers/petsc_linear_solver.C
> ===================================================================
> --- src/solvers/petsc_linear_solver.C (Revision 3957)
> +++ src/solvers/petsc_linear_solver.C (Arbeitskopie)
> @@ -98,6 +98,8 @@
> {
>   if (this->initialized())
>     {
> +     this->solve_only_on(NULL);
> +
>       this->_is_initialized = false;
>
>       int ierr=0;
> @@ -360,9 +362,59 @@
>
>
>
> +template <typename T>
> +void
> +PetscLinearSolver<T>::solve_only_on (const std::set<unsigned int>* const dofs)
> +{
> +  libmesh_assert(this->initialized());
>
> +  int ierr=0;
>
> +  if(_solve_only_on_is!=NULL)
> +    {
> +      ierr = ISDestroy(_solve_only_on_is);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +      _solve_only_on_is = NULL;
> +    }
>
> +  if(_solve_only_on_is_seq!=NULL)
> +    {
> +      ierr = ISDestroy(_solve_only_on_is_seq);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +      _solve_only_on_is_seq = NULL;
> +    }
> +
> +  _solve_only_on_dofs = dofs;

So the LinearSolver doesn't take ownership of this set? It seems horrible to make the user responsible for that.
> +
> +  if(dofs!=NULL)
> +    {
> +      PetscInt* petsc_dofs;
> +      PetscInt* petsc_dofs_seq;
> +      ierr = PetscMalloc(dofs->size()*sizeof(PetscInt),&petsc_dofs);
> +      ierr = PetscMalloc(dofs->size()*sizeof(PetscInt),&petsc_dofs_seq);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +      std::set<unsigned int>::const_iterator it = dofs->begin();
> +      std::set<unsigned int>::const_iterator itEnd = dofs->end();
> +      size_t count = 0;
> +      while(it!=itEnd)
> +        {
> +          petsc_dofs[count] = *it;
> +          petsc_dofs_seq[count] = count;
> +          count++;
> +          it++;
> +        }
> +      libmesh_assert(count=dofs->size());
> +
> +      ierr = ISCreateGeneralNC(libMesh::COMM_WORLD,count,petsc_dofs,&_solve_only_on_is);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = ISCreateGeneralNC(libMesh::COMM_WORLD,count,petsc_dofs_seq,&_solve_only_on_is_seq);

Could build this IS with

  PetscErrorCode ISCreateStride(MPI_Comm comm,PetscInt n,PetscInt first,PetscInt step,IS *is);

but you don't need the IS at all, see below.

> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +    }
> +}
> +
> +
> +
> template <typename T>
> std::pair<unsigned int, Real>
> PetscLinearSolver<T>::solve (SparseMatrix<T>& matrix_in,
> @@ -406,6 +458,11 @@
>
>   // 2.1.x & earlier style
> #if PETSC_VERSION_LESS_THAN(2,2,0)
> +
> +  if(_solve_only_on_is!=NULL)
> +    {
> +      libmesh_not_implemented();
> +    }
>
>   // Set operators. The input matrix works as the preconditioning matrix
>   ierr = SLESSetOperators(_sles, matrix->mat(), precond->mat(),
> @@ -431,6 +488,11 @@
>   // 2.2.0
> #elif PETSC_VERSION_LESS_THAN(2,2,1)
>
> +  if(_solve_only_on_is!=NULL)
> +    {
> +      libmesh_not_implemented();
> +    }
> +
>   // Set operators. The input matrix works as the preconditioning matrix
>   //ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(),
>   //                       SAME_NONZERO_PATTERN);
> @@ -475,19 +537,34 @@
>   // 2.2.1 & newer style
> #else
>
> -  // Set operators.
The input matrix works as the preconditioning matrix
> -  if(!this->same_preconditioner)
> -    {
> -      ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(),
> -                             SAME_NONZERO_PATTERN);
> -      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> -    }
> +  Mat submat = NULL;
> +  Mat subprecond = NULL;
> +
> +  // Set operators.
> +  if(_solve_only_on_is!=NULL)
> +    {
> +      libmesh_assert(_solve_only_on_is_seq!=NULL);
> +
> +      ierr = MatGetSubMatrix(matrix->mat(),
> +                             _solve_only_on_is,_solve_only_on_is,
> +                             PETSC_DECIDE,MAT_INITIAL_MATRIX,&submat);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = MatGetSubMatrix(precond->mat(),
> +                             _solve_only_on_is,_solve_only_on_is,
> +                             PETSC_DECIDE,MAT_INITIAL_MATRIX,&subprecond);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = KSPSetOperators(_ksp, submat, subprecond,
> +                             this->same_preconditioner ? SAME_PRECONDITIONER : SAME_NONZERO_PATTERN);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +    }
>   else
> -    {
> -      ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(),
> -                             SAME_PRECONDITIONER);
> -      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> -    }
> +    {
> +      ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(),
> +                             this->same_preconditioner ? SAME_PRECONDITIONER : SAME_NONZERO_PATTERN);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +    }

Use of this->same_preconditioner is an unrelated change, correct? Note that there are more than two values in that enum. Also, you really have a DIFFERENT_NONZERO_PATTERN if you just switched matrices to solve on a different index set.

>
>   // Set the tolerances for the iterative solver.  Use the user-supplied
>   // tolerance for the relative residual & leave the others at default values.
> @@ -496,8 +573,56 @@
>   CHKERRABORT(libMesh::COMM_WORLD,ierr);
>
>   // Solve the linear system
> -  ierr = KSPSolve (_ksp, rhs->vec(), solution->vec());
> -  CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +  if(_solve_only_on_is!=NULL)
> +    {
> +      libmesh_assert(_solve_only_on_is_seq!=NULL);
> +      libmesh_assert(_solve_only_on_dofs!=NULL);
> +      Vec subrhs;
> +      Vec subsolution;
> +
> +      ierr = VecCreateSeq(libMesh::COMM_WORLD,_solve_only_on_dofs->size(),&subrhs);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = VecCreateSeq(libMesh::COMM_WORLD,_solve_only_on_dofs->size(),&subsolution);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);

Why are you specializing this to only work in serial?

> +
> +      VecScatter scatter;
> +      ierr = VecScatterCreate(rhs->vec(),_solve_only_on_is,
> +                              subrhs,_solve_only_on_is_seq,
> +                              &scatter);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);

Can use NULL for the second IS, it means "scatter to the whole vector".

> +
> +      ierr = VecScatterBegin(scatter,rhs->vec(),subrhs,INSERT_VALUES,SCATTER_FORWARD);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +      ierr = VecScatterEnd(scatter,rhs->vec(),subrhs,INSERT_VALUES,SCATTER_FORWARD);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = VecScatterBegin(scatter,solution->vec(),subsolution,INSERT_VALUES,SCATTER_FORWARD);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +      ierr = VecScatterEnd(scatter,solution->vec(),subsolution,INSERT_VALUES,SCATTER_FORWARD);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = KSPSolve (_ksp, subrhs, subsolution);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = VecScatterBegin(scatter,subsolution,solution->vec(),INSERT_VALUES,SCATTER_REVERSE);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +      ierr = VecScatterEnd(scatter,subsolution,solution->vec(),INSERT_VALUES,SCATTER_REVERSE);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = VecScatterDestroy(scatter);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +
> +      ierr = VecDestroy(subsolution);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
>
> +      ierr = VecDestroy(subrhs);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +    }
> +  else
> +    {
> +      ierr = KSPSolve (_ksp, rhs->vec(), solution->vec());
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +    }
>
>   // Get the number of iterations required for convergence
>   ierr = KSPGetIterationNumber (_ksp, &its);
> @@ -507,6 +632,14 @@
>   ierr = KSPGetResidualNorm (_ksp, &final_resid);
>   CHKERRABORT(libMesh::COMM_WORLD,ierr);
>
> +  if(_solve_only_on_is!=NULL)
> +    {
> +      ierr = MatDestroy(submat);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +      ierr = MatDestroy(subprecond);
> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
> +    }
> +
> #endif
>
>   STOP_LOG("solve()", "PetscLinearSolver");
> Index: src/solvers/linear_solver.C
> ===================================================================
> --- src/solvers/linear_solver.C (Revision 3957)
> +++ src/solvers/linear_solver.C (Arbeitskopie)
> @@ -115,7 +115,18 @@
>   _preconditioner = preconditioner;
> }
>
> +template <typename T>
> +void
> +LinearSolver<T>::solve_only_on(const std::set<unsigned int>* const dofs)
> +{
> +  if(dofs!=NULL)
> +    {
> +      libmesh_not_implemented();
> +    }
> +}

If dofs == NULL, this function is a no-op?

>
> +
> +
> //
> // Explicit instantiations
> template class LinearSolver<Number>;
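The restricted solve in the patch above is essentially a gather into packed sub-vectors, a solve on the packed system, and a scatter of the result back. As a serial illustration of just that data movement (plain C++ with std::vector standing in for PETSc Vec objects; gather_subset and scatter_back are hypothetical helper names, not PETSc calls):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pack the entries of `full` listed in `dofs` into a contiguous vector,
// mirroring the forward VecScatter in the patch.
std::vector<double> gather_subset(const std::vector<double>& full,
                                  const std::vector<std::size_t>& dofs)
{
  std::vector<double> packed(dofs.size());
  for (std::size_t i = 0; i < dofs.size(); ++i)
    packed[i] = full[dofs[i]];          // SCATTER_FORWARD, INSERT_VALUES
  return packed;
}

// Write packed values back to their original positions, mirroring the
// reverse VecScatter; entries outside `dofs` are left untouched.
void scatter_back(const std::vector<double>& packed,
                  const std::vector<std::size_t>& dofs,
                  std::vector<double>& full)
{
  for (std::size_t i = 0; i < dofs.size(); ++i)
    full[dofs[i]] = packed[i];          // SCATTER_REVERSE, INSERT_VALUES
}
```

Between the gather and the scatter-back, the restricted KSPSolve operates only on the packed vectors, which is what keeps the direct solver from ever seeing the rest of the system.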
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-14 07:17:40

Dear Jed,

Thank you for your help so far. I still have some questions:

On Mon, 13 Sep 2010, Jed Brown wrote:

> On Mon, 13 Sep 2010 15:18:30 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
>
>> +
>> + /**
>> +  * Set of dofs to which any solve is limited (\p NULL means solve on
>> +  * all dofs).
>> +  */
>> + const std::set<unsigned int>* _solve_only_on_dofs;
>
> I don't think a set is the correct data structure because the order
> actually matters (it affects memory performance and the strength of
> preconditioners).

Well, I could make it a vector instead, but then, on the other hand, I guess that in most situations the user does not have any idea of which order is best to use. I suggest making two methods, one of which takes a vector and the other a set. The set-taking method will then by default just convert the set to a vector and call the vector-taking method. Roy, what do you think?

> I also don't see why it needs to be kept separate from the IS,
> you're duplicating information, it uses more memory and will have to
> be kept consistent.

Yes, you're right.

>> +  _solve_only_on_dofs = dofs;
>
> So the LinearSolver doesn't take ownership of this set?  It seems
> horrible to make the user responsible for that.

Well, users are supposed to do things like

  std::set<unsigned int> a;
  // Add values here
  linear_solver.solve_only_on(&a);

Then, calling something like "delete &a;" inside the PetscLinearSolver class will certainly not be a good idea. Using a pointer as the argument is just to allow people to explicitly pass a NULL pointer.

>> +    {
>> +      ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(),
>> +                             this->same_preconditioner ? SAME_PRECONDITIONER : SAME_NONZERO_PATTERN);
>> +      CHKERRABORT(libMesh::COMM_WORLD,ierr);
>> +    }
>
> Use of this->same_preconditioner is an unrelated change, correct?

No, I just changed an if-else construction into a "?:" construction to make it less confusing (in connection with the additional if-else construction).
The original code looked like this:

00477   if(!this->same_preconditioner)
00478     {
00479       ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(),
00480                              SAME_NONZERO_PATTERN);
00481       CHKERRABORT(libMesh::COMM_WORLD,ierr);
00482     }
00483   else
00484     {
00485       ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(),
00486                              SAME_PRECONDITIONER);
00487       CHKERRABORT(libMesh::COMM_WORLD,ierr);
00488     }

The diff algorithm matched the "else" with a new, completely different "else"; that's why it's difficult to see that I actually didn't change the functionality here.

> Note that there are more than two values in that enum.  Also, you
> really have a DIFFERENT_NONZERO_PATTERN if you just switched
> matrices to solve on a different index set.

Yes, that's a good point. Also, if the same solver object is used to solve different systems, there seems to be no reason to use SAME_NONZERO_PATTERN here. And furthermore, if some grid refinement/coarsening has taken place in the meantime, the nonzero pattern is definitely not the same as before. I really don't know why SAME_NONZERO_PATTERN was used here originally. Roy, can you comment on this?

>> +  ierr = VecCreateSeq(libMesh::COMM_WORLD,_solve_only_on_dofs->size(),&subrhs);
>> +  CHKERRABORT(libMesh::COMM_WORLD,ierr);
>> +
>> +  ierr = VecCreateSeq(libMesh::COMM_WORLD,_solve_only_on_dofs->size(),&subsolution);
>> +  CHKERRABORT(libMesh::COMM_WORLD,ierr);
>
> Why are you specializing this to only work in serial?

I hoped that you would comment on this. (: Well, what I would like to do, but don't know how, is to let subrhs be a vector with the same layout as rhs, with any dof owned by the processor that owns the corresponding dof in rhs. Or, perhaps better, do an automatic load balancing for subrhs. (Likewise for subsolution, of course.) I know that libMesh uses some external libraries for load balancing in general, but I have never dealt directly with that part of the code, so I don't know whether and, if yes, how to involve it here.
Also, I am unsure whether or not I have to take care that the matrix rows are owned by the same processors that own the corresponding vector dofs.

>> +  ierr = VecScatterCreate(rhs->vec(),_solve_only_on_is,
>> +                          subrhs,_solve_only_on_is_seq,
>> +                          &scatter);
>> +  CHKERRABORT(libMesh::COMM_WORLD,ierr);
>
> Can use NULL for the second IS, it means "scatter to the whole vector".

Thank you for that hint!

>> +template <typename T>
>> +void
>> +LinearSolver<T>::solve_only_on(const std::set<unsigned int>* const dofs)
>> +{
>> +  if(dofs!=NULL)
>> +    {
>> +      libmesh_not_implemented();
>> +    }
>> +}
>
> If dofs == NULL, this function is a no-op?

Yes. You see, this function is for all solver packages except PETSc, and they should just give an error message when somebody attempts to use solve_only_on(). But, of course, if a NULL pointer is used, which means solve on the entire domain, there is no need to give an error message, because this is what is done anyway, isn't it? (:

Best Regards,

Tim

Dr. Tim Kroeger CeVis  Center of Complex Systems and Visualization University of Bremen tim.kroeger@... Universitaetsallee 29 tim.kroeger@... D28359 Bremen Phone +494212187710 Germany Fax +494212184236
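Tim's proposed pair of methods could look roughly like the following sketch (the class and member names here are hypothetical; the real libMesh interface was still under discussion in this thread). The set-taking overload merely fixes an ordering, the set's ascending iteration order, and delegates to the vector-taking overload that solvers would actually implement:

```cpp
#include <set>
#include <vector>

// Hypothetical sketch of the two-overload solve_only_on() API:
// the vector overload is authoritative, the set overload converts
// and delegates.
class LinearSolverSketch
{
public:
  // The vector-taking method: the caller controls the dof ordering.
  void solve_only_on (const std::vector<unsigned int>& dofs)
  { _restricted_dofs = dofs; }

  // The set-taking convenience method: uses the set's ascending order.
  void solve_only_on (const std::set<unsigned int>& dofs)
  { solve_only_on (std::vector<unsigned int>(dofs.begin(), dofs.end())); }

  const std::vector<unsigned int>& restricted_dofs () const
  { return _restricted_dofs; }

private:
  std::vector<unsigned int> _restricted_dofs;
};
```

Taking the containers by reference rather than by pointer would also sidestep the ownership question Jed raised, at the cost of needing a separate way to reset the restriction.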
From: Roy Stogner <roystgnr@ic...> - 2010-09-15 03:41:46

On Tue, 14 Sep 2010, Tim Kroeger wrote:

> I suggest to make two methods, one of which takes a vector and the other
> takes a set.  Then, the set-taking method will by default just convert the
> set to a vector and call the vector-taking method.  Roy, what do you think?

I think this is one of those cases like our send_list accumulation where the most efficient way to build the (conceptual) set is to build a vector of non-unique indices and then sort-uniq it after we're done. If the user wants to play with the indexing order for efficiency, then having a way to convey the order is a bonus, but in any case there'd be no reason to pass a set at all. Even my initial idea of using one container to signify dof ids and a different container to signify subdomain ids was a lousy one; if we're not going to make some modular Subdomain class, then it would be much less confusing to just use two different method names, e.g. solve_on_dofs(vector&) and solve_on_subdomains(vector&).

> Yes, that's a good point.  Also, if the same solver object is used to solve
> different systems, there seems to be no reason to use SAME_NONZERO_PATTERN
> here.  And furthermore, if some grid refinement/coarsening has taken place in
> the meantime, the nonzero pattern is definitely not the same as before.  I
> really don't know why SAME_NONZERO_PATTERN has been used here originally.
> Roy, can you comment on this?

Hmm... according to svn blame, David is using SAME_NONZERO_PATTERN there because it's what I did nearby, and I'm using it because it's what Ben did first...

And if I had to guess, I'd say that Ben did it because the way he'd construct a separate preconditioning matrix is to make a copy of the system matrix with the same sparsity pattern but different data.
We'd have to add a bit of cached information if we wanted to quickly test whether two arbitrary PetscMatrix objects corresponded to the same sparsity pattern, so the best fix is probably just to switch those calls to DIFFERENT_NONZERO_PATTERN for now.

> I know that libMesh uses some external libraries for load balancing in
> general, but I have never dealt directly with that part of code, so I don't
> know whether and, if yes, how to involve it here.

Even though you'd need a new partition of the subdomain for optimal scaling, in most cases you'd probably do better using the existing partitioning and accepting a significant imbalance than running in serial and being completely imbalanced.

> But, of course, if a NULL pointer is used, which means
> solve on the entire domain

Do we need that semantic? I'd just use a reference instead of a pointer; we've already got a function for solving on the entire domain.

Roy
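The send_list-style accumulation Roy describes might be sketched as follows (a minimal illustration with a plain std::vector of dof indices; the function name is mine, not libMesh's):

```cpp
#include <algorithm>
#include <vector>

// Accumulate dof indices without worrying about duplicates, then sort
// and deduplicate once at the end.  For large index counts this is
// typically cheaper than inserting into a std::set, and it leaves the
// result in a contiguous, cache-friendly container.
std::vector<unsigned int> sort_uniq(std::vector<unsigned int> dofs)
{
  std::sort(dofs.begin(), dofs.end());
  dofs.erase(std::unique(dofs.begin(), dofs.end()), dofs.end());
  return dofs;
}
```

A user who cares about the solver-facing ordering could instead pass an already-ordered, duplicate-free vector and skip this step entirely, which is exactly why passing a vector subsumes passing a set.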
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-15 06:42:30

On Tue, 14 Sep 2010, Roy Stogner wrote:

> On Tue, 14 Sep 2010, Tim Kroeger wrote:
>
>> I suggest to make two methods, one of which takes a vector and the other
>> takes a set.  Then, the set-taking method will by default just convert the
>> set to a vector and call the vector-taking method.  Roy, what do you think?
>
> I think this is one of those cases like our send_list accumulation
> where the most efficient way to build the (conceptual) set is to build
> a vector of non-unique indices and then sort-uniq it after we're done.
> If the user wants to play with the indexing order for efficiency, then
> having a way to convey the order is a bonus, but in any case there'd
> be no reason to pass a set at all.  Even my initial idea of using one
> container to signify dof ids and a different container to signify
> subdomain ids was a lousy one; if we're not going to make some modular
> Subdomain class, then it would be much less confusing to just use two
> different method names, e.g. solve_on_dofs(vector&) and
> solve_on_subdomains(vector&).

Okay, so I will switch this to std::vector. Just to keep this clear: NumericVector::solve_only_on() will only accept a list of dofs, nothing else, because in particular a NumericVector does not know anything about subdomain ids. On the other hand, System::solve_only_on() will only accept (a reference to) an object of the (new) SystemSubset class. A derived class SystemSubsetBySubdomain will exist, and users can write their own derived classes if they need to.

>> Yes, that's a good point.  Also, if the same solver object is used to solve
>> different systems, there seems to be no reason to use SAME_NONZERO_PATTERN
>> here.  And furthermore, if some grid refinement/coarsening has taken place
>> in the meantime, the nonzero pattern is definitely not the same as before.
>> I really don't know why SAME_NONZERO_PATTERN has been used here originally.
>> Roy, can you comment on this?
>
> Hmm...
according to svn blame, David is using SAME_NONZERO_PATTERN
> there because it's what I did nearby, and I'm using it because it's
> what Ben did first...
>
> And if I had to guess, I'd say that Ben did it because the way he'd
> construct a separate preconditioning matrix is to make a copy
> of the system matrix with the same sparsity pattern but different
> data.  We'd have to add a bit of cached information if we wanted to
> quickly test whether two arbitrary PetscMatrix objects corresponded to
> the same sparsity pattern, so the best fix is probably just to switch
> those calls to DIFFERENT_NONZERO_PATTERN for now.

Okay, I'll do this along the way. (One conjecture why SAME_NONZERO_PATTERN was used here before is that somebody thought it means that the system matrix and the preconditioner matrix have the same nonzero pattern, but as far as I understand now, this is just not true.)

>> I know that libMesh uses some external libraries for load balancing in
>> general, but I have never dealt directly with that part of code, so I don't
>> know whether and, if yes, how to involve it here.
>
> Even though you'd need a new partition of the subdomain for
> optimal scaling, in most cases you'd probably do better using the
> existing partitioning and accepting a significant imbalance than
> running in serial and being completely imbalanced.

Yes, but I still don't know how to do this. Another idea is to use VecCreateMPI(), giving only the global length and using PETSC_DECIDE for the local length. This way, approximately equal numbers of dofs are assigned to each processor, but will they correspond to patches of the computational domain, or will they be scattered all around? Well, since we are using a vector, this is of course the responsibility of the user. Also, I still don't know whether I will have to do something with the matrix to make sure that the corresponding rows are owned by the corresponding processors.
>> But, of course, if a NULL pointer is used, which means solve on the entire
>> domain
>
> Do we need that semantic?  I'd just use a reference instead of a
> pointer; we've already got a function for solving on the entire
> domain.

No, we don't have such a function. That is, at least in the way I thought of this, solve_only_on() will make *all* successive calls to *any* of the solve() methods restrict to the given subdomain. Hence, there must be a possibility to reset this functionality. That's why I used a pointer. We can, if you like, also use a reference and instead use a different method solve_everywhere(void) to reset this, but I find that less intuitive.

Best Regards,

Tim

Dr. Tim Kroeger CeVis  Center of Complex Systems and Visualization University of Bremen tim.kroeger@... Universitaetsallee 29 tim.kroeger@... D28359 Bremen Phone +494212187710 Germany Fax +494212184236
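On the PETSC_DECIDE question: as I understand PETSc's documented ownership-splitting rule (PetscSplitOwnership), each rank gets a contiguous block of nearly equal size, N/size entries plus one extra for the first N%size ranks. Whether those blocks correspond to patches of the domain therefore depends entirely on how the dof indices were numbered. A sketch of the local-size formula:

```cpp
// PETSC_DECIDE-style split: rank r owns a contiguous block of
// N/size entries, plus one extra entry if r < N % size.
// (This mirrors the rule documented for PetscSplitOwnership;
// treat it as an illustration, not a substitute for the PETSc call.)
int local_size(int N, int size, int rank)
{
  return N / size + ((N % size) > rank ? 1 : 0);
}
```

For example, 10 dofs over 4 ranks split as 3, 3, 2, 2. Contiguous index blocks only match mesh patches when the dof numbering itself is patch-contiguous, which is exactly the "will they be scattered all around?" concern above.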
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-15 06:50:33

On Wed, 15 Sep 2010, Tim Kroeger wrote: > Just to keep this clear: NumericVector::solve_only_on() will only > accept a list of dofs, nothing else, because in particular a > NumericVector does not know anything about subdomain ids. On the > other hand, System::solve_only_on() will only accept (a reference to) > an object of the (new) SystemSubset class. A derived class > SystemSubsetBySubdomain will exist, and users can write their own > derived classes if they need. /NumericVector/LinearSolver/  Dr. Tim Kroeger CeVis  Center of Complex Systems and Visualization University of Bremen tim.kroeger@... Universitaetsallee 29 tim.kroeger@... D28359 Bremen Phone +494212187710 Germany Fax +494212184236 