From: John Peterson <peterson@ta...> - 2010-09-09 15:53:33

On Thu, Sep 9, 2010 at 10:27 AM, Tim Kroeger <tim.kroeger@...> wrote:

> My suggestion for an API in libMesh is as follows:
>
> LinearSolver gets a method
>
>    LinearSolver::solve_only_on(const std::set<unsigned int>* const)
>
> Likewise, System gets a method
>
>    System::solve_only_on(const std::set<subdomain_id_type>* const)

Are you sure you don't want to add one more overloaded solve() member
to LinearSolver? ;)

John
From: Jed Brown <jed@59...> - 2010-09-09 15:48:15

On Thu, 9 Sep 2010 10:41:29 -0500 (CDT), Roy Stogner <roystgnr@...> wrote:

> On Thu, 9 Sep 2010, Tim Kroeger wrote:
>
>> System::solve_only_on(const std::set<subdomain_id_type>* const)

You might want this to contain a communicator (i.e. be equivalent to a
PETSc IS) and an ordering (as Roy says) so that the same API can
provide redistribution and choosing a good ordering for the subdomain
problems.

> One question, since I haven't followed the thread closely enough:
> what are the effective boundary conditions on subset boundaries which
> don't correspond to domain boundaries? The dofs outside the subset
> would be treated as fixed, but neighboring dofs would still make
> their contributions to the subsystem?

The boundary conditions are effectively Dirichlet. For a nonlinear
problem, you would have to provide function values for this region
outside the "active" domain.

Jed
From: Roy Stogner <roystgnr@ic...> - 2010-09-09 15:41:44

On Thu, 9 Sep 2010, Tim Kroeger wrote:

> My suggestion for an API in libMesh is as follows:
>
> LinearSolver gets a method
>
>    LinearSolver::solve_only_on(const std::set<unsigned int>* const)
>
> after a call to which each call to LinearSolver::solve() will remove
> all dofs not contained in the list from the matrix before actually
> solving. The default implementation of this will be
> libmesh_not_implemented(), and only for PetscLinearSolver, I will
> implement the solution using MatGetSubMatrix() as Jed pointed out.
> (Optionally, an efficient load balancing using PCFieldSplit() can be
> added later.)
>
> Likewise, System gets a method
>
>    System::solve_only_on(const std::set<subdomain_id_type>* const)
>
> which makes System::solve() call LinearSolver::solve_only_on() with
> the correct index set, that is, such that it solves on all dofs that
> are associated with at least one elem having a subdomain_id value
> that is contained in the list.
>
> Calling either solve_only_on(NULL) should in both cases restore the
> default behaviour.
>
> libMesh developers, what do you think?

I like it... I think I'd prefer to have the std::set arguments limited
to dof ids and make it a std::vector for subdomain ids. That way
people could do a System::solve_only_on for subsets like patches that
aren't predefined as subdomains.

Also, we'd like to have the same functionality for nonlinear solves
too, but it's fine to start with just LinearSolver.

One question, since I haven't followed the thread closely enough: what
are the effective boundary conditions on subset boundaries which don't
correspond to domain boundaries? The dofs outside the subset would be
treated as fixed, but neighboring dofs would still make their
contributions to the subsystem?

Roy
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-09 15:27:20

On Thu, 9 Sep 2010, Tim Kroeger wrote:

> On Thu, 9 Sep 2010, Jed Brown wrote:
>
>> On Thu, 9 Sep 2010 15:27:42 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
>>
>>> Dear all,
>>>
>>> Let X be a computational domain, covered by a Mesh on which an
>>> EquationSystems object is based. Let X2 be a subset of X, given by
>>> the union of selected grid elements (which are, say, marked by
>>> certain values of subdomain_id).
>>>
>>> Assume I want (at some point in my application) to solve a system
>>> only on X2 (say, using Dirichlet boundary conditions on all
>>> boundaries of X2 that are not boundaries of X).
>>>
>>> I can easily achieve this by assembling Dirichlet conditions
>>> everywhere outside X2 and then solving as usual. However, then I
>>> cannot benefit from the performance gain that I should
>>> theoretically have if X2 contains many fewer elements than X. This
>>> is in particular true if I am using a direct solver (such as
>>> SuperLU_DIST, wrapped via PETSc).
>>
>> Do you want to assemble X, or are you really only working with X2?
>
> Most probably, my assemble method will loop over X but skip every
> element that is not contained in X2. This seems to be the easiest
> assemble method at the moment.
>
>> If the former, you can MatGetSubMatrix (you just need an index set
>> defining the subdomain) the part you want and solve with that.
>
> That sounds like a good idea. I will think about this. It should
> certainly be worth implementing a general method for this in libMesh
> somehow (with an exception thrown if a solver other than PETSc is
> used...). The question is what the API should look like.

My suggestion for an API in libMesh is as follows: LinearSolver gets a
method

   LinearSolver::solve_only_on(const std::set<unsigned int>* const)

after a call to which each call to LinearSolver::solve() will remove
all dofs not contained in the list from the matrix before actually
solving. The default implementation of this will be
libmesh_not_implemented(), and only for PetscLinearSolver, I will
implement the solution using MatGetSubMatrix() as Jed pointed out.
(Optionally, an efficient load balancing using PCFieldSplit() can be
added later.)

Likewise, System gets a method

   System::solve_only_on(const std::set<subdomain_id_type>* const)

which makes System::solve() call LinearSolver::solve_only_on() with
the correct index set, that is, such that it solves on all dofs that
are associated with at least one elem having a subdomain_id value that
is contained in the list.

Calling either solve_only_on(NULL) should in both cases restore the
default behaviour.

libMesh developers, what do you think?

Best Regards,

Tim

--
Dr. Tim Kroeger
CeVis - Center of Complex Systems and Visualization
University of Bremen, Universitaetsallee 29, D-28359 Bremen, Germany
tim.kroeger@...
Phone +49-421-218-7710, Fax +49-421-218-4236
From: Jed Brown <jed@59...> - 2010-09-09 15:02:39

On Thu, 9 Sep 2010 16:47:38 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:

> As far as I understand, this allows me to specify a partition of X2
> into some subsets and then select solvers/preconditioners for these
> subsets as well as a global method to combine them. However, I assume
> that this will *not* move dofs to different processors, will it? So I
> can't use this to improve the load balancing, can I?

Yes, you can. The index sets have a communicator, and their
distribution specifies the partition to use (so you can move dofs
around at will).

> I guess that this might be in particular a useful thing to do if X2
> naturally consists of more than one connected component. This might
> (in my application) in fact be the case, but not canonically, and
> figuring this out might be complicated, and I also assume that
> SuperLU_DIST anyway identifies the connected components, so that, for
> the moment, I don't think I will need this.

Yup, depends on the problem and the method you are using.

Jed
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-09 14:50:22

On Thu, 9 Sep 2010, David Knezevic wrote:

> On 09/09/2010 10:06 AM, Tim Kroeger wrote:
>
>> On Thu, 9 Sep 2010, David Knezevic wrote:
>>
>>> You can specify variables that are defined only on X2 by specifying
>>> the appropriate subdomain id(s) when you add the variable to the
>>> system. See add_variable in system.h
>>
>> Thank you for your reply. How does this solution behave if the
>> subdomain_id values change during runtime? I guess it won't behave
>> the way I need it to. That is, the set X2 (in my notation) may
>> change within the application quite frequently, so that (if my
>> assumption is correct) I can't use your solution.
>
> I haven't tried subdomain-only variables with changing subdomains,
> but it seems possible that equations_systems.reinit() might do the
> appropriate resizing and reindexing for you (or could be modified to
> do so)... I don't know though.

Thank you again. Even if this works, I think that Jed's solution is
better suited to my situation, in particular because I want to keep
the values of the variable outside X2.

Best Regards,

Tim
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-09 14:47:57

On Thu, 9 Sep 2010, Jed Brown wrote:

> On Thu, 9 Sep 2010 16:14:39 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
>
>>> Do you want to assemble X, or are you really only working with X2?
>>
>> Most probably, my assemble method will loop over X but skip every
>> element that is not contained in X2. This seems to be the easiest
>> assemble method at the moment.
>
> Okay, do you mind storing explicit zeros for all of X, even though
> you may not be touching them (or solving with them)? The alternative
> is to reallocate when the sparsity pattern changes.

I don't mind storing all the zeros. The only condition is that
SuperLU_DIST does not get to see anything outside X2. I guess your
recommendation is the right thing to do then.

>>> Maybe more interesting, you can provide the index sets to
>>> PCFieldSplit which gives you an additive or multiplicative method
>>> with a KSP created for each split (which you can control under the
>>> fieldsplit_{0,1,...}_{ksp,pc}_ prefixes).
>>
>> I don't understand this part at the moment.
>
> Suppose you have a system which after some permutations can be
> written
>
>    [A B; C D]
>
> Then fieldsplit can give you solvers that solve these blocks together
> (like block Jacobi, but partitioned by user-defined "fields") or
> multiplicatively (like a multiplicative Schwarz method, or block
> Gauss-Seidel with user-defined, and possibly overlapping, splits), or
> Schur-complement methods. There is no restriction to 2x2. The solvers
> for all the pieces can be controlled at the command line (or through
> the API, but it's more work that way).

As far as I understand, this allows me to specify a partition of X2
into some subsets and then select solvers/preconditioners for these
subsets as well as a global method to combine them. However, I assume
that this will *not* move dofs to different processors, will it? So I
can't use this to improve the load balancing, can I?

I guess that this might be in particular a useful thing to do if X2
naturally consists of more than one connected component. This might
(in my application) in fact be the case, but not canonically, and
figuring this out might be complicated, and I also assume that
SuperLU_DIST anyway identifies the connected components, so that, for
the moment, I don't think I will need this.

Best Regards,

Tim
From: Jed Brown <jed@59...> - 2010-09-09 14:23:06

On Thu, 9 Sep 2010 16:14:39 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:

>> Do you want to assemble X, or are you really only working with X2?
>
> Most probably, my assemble method will loop over X but skip every
> element that is not contained in X2. This seems to be the easiest
> assemble method at the moment.

Okay, do you mind storing explicit zeros for all of X, even though you
may not be touching them (or solving with them)? The alternative is to
reallocate when the sparsity pattern changes.

>> If the former, you can MatGetSubMatrix (you just need an index set
>> defining the subdomain) the part you want and solve with that.
>
> That sounds like a good idea. I will think about this. It should
> certainly be worth implementing a general method for this in libMesh
> somehow (with an exception thrown if a solver other than PETSc is
> used...). The question is what the API should look like.

>> Maybe more interesting, you can provide the index sets to
>> PCFieldSplit which gives you an additive or multiplicative method
>> with a KSP created for each split (which you can control under the
>> fieldsplit_{0,1,...}_{ksp,pc}_ prefixes).
>
> I don't understand this part at the moment.

Suppose you have a system which after some permutations can be written

   [A B; C D]

Then fieldsplit can give you solvers that solve these blocks together
(like block Jacobi, but partitioned by user-defined "fields") or
multiplicatively (like a multiplicative Schwarz method, or block
Gauss-Seidel with user-defined, and possibly overlapping, splits), or
Schur-complement methods. There is no restriction to 2x2. The solvers
for all the pieces can be controlled at the command line (or through
the API, but it's more work that way).

Jed
From: David Knezevic <dknez@MIT.EDU> - 2010-09-09 14:20:23

On 09/09/2010 10:06 AM, Tim Kroeger wrote:

> Dear David,
>
> On Thu, 9 Sep 2010, David Knezevic wrote:
>
>> You can specify variables that are defined only on X2 by specifying
>> the appropriate subdomain id(s) when you add the variable to the
>> system. See add_variable in system.h
>
> Thank you for your reply. How does this solution behave if the
> subdomain_id values change during runtime? I guess it won't behave
> the way I need it to. That is, the set X2 (in my notation) may
> change within the application quite frequently, so that (if my
> assumption is correct) I can't use your solution.

I haven't tried subdomain-only variables with changing subdomains, but
it seems possible that equations_systems.reinit() might do the
appropriate resizing and reindexing for you (or could be modified to
do so)... I don't know though.

David
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-09 14:14:52

Dear Jed,

On Thu, 9 Sep 2010, Jed Brown wrote:

> On Thu, 9 Sep 2010 15:27:42 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:
>
>> Dear all,
>>
>> Let X be a computational domain, covered by a Mesh on which an
>> EquationSystems object is based. Let X2 be a subset of X, given by
>> the union of selected grid elements (which are, say, marked by
>> certain values of subdomain_id).
>>
>> Assume I want (at some point in my application) to solve a system
>> only on X2 (say, using Dirichlet boundary conditions on all
>> boundaries of X2 that are not boundaries of X).
>>
>> I can easily achieve this by assembling Dirichlet conditions
>> everywhere outside X2 and then solving as usual. However, then I
>> cannot benefit from the performance gain that I should
>> theoretically have if X2 contains many fewer elements than X. This
>> is in particular true if I am using a direct solver (such as
>> SuperLU_DIST, wrapped via PETSc).
>
> Do you want to assemble X, or are you really only working with X2?

Most probably, my assemble method will loop over X but skip every
element that is not contained in X2. This seems to be the easiest
assemble method at the moment.

> If the former, you can MatGetSubMatrix (you just need an index set
> defining the subdomain) the part you want and solve with that.

That sounds like a good idea. I will think about this. It should
certainly be worth implementing a general method for this in libMesh
somehow (with an exception thrown if a solver other than PETSc is
used...). The question is what the API should look like.

> Maybe more interesting, you can provide the index sets to
> PCFieldSplit which gives you an additive or multiplicative method
> with a KSP created for each split (which you can control under the
> fieldsplit_{0,1,...}_{ksp,pc}_ prefixes).

I don't understand this part at the moment.

Best Regards,

Tim
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-09 14:06:47

Dear David,

On Thu, 9 Sep 2010, David Knezevic wrote:

> You can specify variables that are defined only on X2 by specifying
> the appropriate subdomain id(s) when you add the variable to the
> system. See add_variable in system.h

Thank you for your reply. How does this solution behave if the
subdomain_id values change during runtime? I guess it won't behave the
way I need it to. That is, the set X2 (in my notation) may change
within the application quite frequently, so that (if my assumption is
correct) I can't use your solution.

Anybody having a different idea?

Best Regards,

Tim
From: Jed Brown <jed@59...> - 2010-09-09 14:06:31

On Thu, 9 Sep 2010 15:27:42 +0200 (CEST), Tim Kroeger <tim.kroeger@...> wrote:

> Dear all,
>
> Let X be a computational domain, covered by a Mesh on which an
> EquationSystems object is based. Let X2 be a subset of X, given by
> the union of selected grid elements (which are, say, marked by
> certain values of subdomain_id).
>
> Assume I want (at some point in my application) to solve a system
> only on X2 (say, using Dirichlet boundary conditions on all
> boundaries of X2 that are not boundaries of X).
>
> I can easily achieve this by assembling Dirichlet conditions
> everywhere outside X2 and then solving as usual. However, then I
> cannot benefit from the performance gain that I should theoretically
> have if X2 contains many fewer elements than X. This is in
> particular true if I am using a direct solver (such as SuperLU_DIST,
> wrapped via PETSc).

Do you want to assemble X, or are you really only working with X2?

If the former, you can MatGetSubMatrix (you just need an index set
defining the subdomain) the part you want and solve with that.

Maybe more interesting, you can provide the index sets to PCFieldSplit
which gives you an additive or multiplicative method with a KSP
created for each split (which you can control under the
fieldsplit_{0,1,...}_{ksp,pc}_ prefixes).

Jed
From: David Knezevic <dknez@MIT.EDU> - 2010-09-09 13:58:36

Hi Tim,

You can specify variables that are defined only on X2 by specifying
the appropriate subdomain id(s) when you add the variable to the
system. See add_variable in system.h

David

On 09/09/2010 09:27 AM, Tim Kroeger wrote:

> Dear all,
>
> Let X be a computational domain, covered by a Mesh on which an
> EquationSystems object is based. Let X2 be a subset of X, given by
> the union of selected grid elements (which are, say, marked by
> certain values of subdomain_id).
>
> Assume I want (at some point in my application) to solve a system
> only on X2 (say, using Dirichlet boundary conditions on all
> boundaries of X2 that are not boundaries of X).
>
> I can easily achieve this by assembling Dirichlet conditions
> everywhere outside X2 and then solving as usual. However, then I
> cannot benefit from the performance gain that I should theoretically
> have if X2 contains many fewer elements than X. This is in
> particular true if I am using a direct solver (such as SuperLU_DIST,
> wrapped via PETSc).
>
> What is the easiest way to do this more efficiently, that is,
>
> (1) let SuperLU_DIST only see the necessary part of the matrix,
>
> (2) if possible, guarantee a sensible load balancing, and
>
> (3) not run into problems due to dofs that appear free if viewed
>     from inside X2 but are actually constrained in X?
>
> Thank you in advance for your ideas.
>
> Best Regards,
>
> Tim
From: Tim Kroeger <tim.kroeger@ce...> - 2010-09-09 13:52:57

Dear all,

Let X be a computational domain, covered by a Mesh on which an
EquationSystems object is based. Let X2 be a subset of X, given by the
union of selected grid elements (which are, say, marked by certain
values of subdomain_id).

Assume I want (at some point in my application) to solve a system only
on X2 (say, using Dirichlet boundary conditions on all boundaries of
X2 that are not boundaries of X).

I can easily achieve this by assembling Dirichlet conditions
everywhere outside X2 and then solving as usual. However, then I
cannot benefit from the performance gain that I should theoretically
have if X2 contains many fewer elements than X. This is in particular
true if I am using a direct solver (such as SuperLU_DIST, wrapped via
PETSc).

What is the easiest way to do this more efficiently, that is,

(1) let SuperLU_DIST only see the necessary part of the matrix,

(2) if possible, guarantee a sensible load balancing, and

(3) not run into problems due to dofs that appear free if viewed from
    inside X2 but are actually constrained in X?

Thank you in advance for your ideas.

Best Regards,

Tim