From: <tim@ce...>  2007-07-10 08:37:38

Dear all,

What is the difference between System::solution and System::current_local_solution? My guess would be that current_local_solution only holds those values that live on the local processor, whereas solution holds all values. But then, I wonder why it is necessary to have both. Another guess would be that solution is only valid in certain situations. But then, I don't know how to find out whether it is currently valid or not.

Best Regards,

Tim

--
Dr. Tim Kroeger                        Phone +49-421-218-7710
tim@..., tim@...                       Fax   +49-421-218-4236
MeVis Research GmbH, Universitaetsallee 29, D-28359 Bremen, Germany
Amtsgericht Bremen HRB 16222
Geschaeftsfuehrer: Prof. Dr. H.-O. Peitgen
From: Andrea Hawkins <andjhawkins@gm...>  2010-01-12 16:36:47

Hello,

I'm wondering if someone could give me a quick tutorial on how the storage works for current_local_solution. I understand that current_local_solution is supposed to carry only the information necessary for the computations on a specific processor, and in the documentation there is this comment regarding current_local_solution:

"All the values I need to compute my contribution to the simulation at hand. Think of this as the current solution with any ghost values needed from other processors. This vector is necessarily larger than the solution vector in the case of a parallel simulation"

But in the update() function there is the line

    libmesh_assert (current_local_solution->size() == solution->size());

which implies that they are the same size... =) So I thought perhaps this is just the global size, and maybe it is the local size that should be different. But I checked that (at least on a 2-processor run) and they had the same local size as well.

So, what exactly is the current_local_solution? It appears to have the same parallel structure as solution. Does it just fill in zeros wherever there is an unneeded value?

Thanks!
Andrea
From: Roy Stogner <roystgnr@ic...>  2010-01-12 17:16:41

On Tue, 12 Jan 2010, Andrea Hawkins wrote:

> "All the values I need to compute my contribution to the simulation at
> hand. Think of this as the current solution with any ghost values needed
> from other processors. This vector is necessarily larger than the
> solution vector in the case of a parallel simulation"

Right. solution is of type PARALLEL, which should store just the local values plus some header info (that header info includes the total NumericVector::size()). current_local_solution is (by default) now of type GHOSTED.

> which implies that they are the same size... =) So, I thought perhaps
> this is just the global size, maybe it is the local size that should be
> different. But, I checked that (at least on a 2 processor run) and they
> had the same local size as well.

Yes, but local_size() just refers to the number of locally-owned dofs; it doesn't include the "ghost dofs" copied from other processors.

> So, what exactly is the current_local_solution? It appears to have the
> same parallel structure as solution. Does it just fill in zeros wherever
> there is an unneeded value?

In our current PETSc implementation, GHOSTED vectors effectively include a vector of local dof coefficients, a vector of ghost coefficients, and a mapping between the ghost coefficients' dense indices in that vector and their sparse indices in the real vector. For Trilinos, they're still just serial vectors with zeros in the unneeded entries.

Roy
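Roy's description of the GHOSTED layout (a dense block of locally-owned coefficients, a small appended block of ghost coefficients, and a map from sparse global indices to the dense ghost slots) can be sketched in plain C++. This is a toy model for illustration only; the struct and member names are invented, not libMesh's or PETSc's actual types:

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Toy model of a GHOSTED vector: owned coefficients are stored densely
// at the front, ghost coefficients copied from other processors sit at
// the end, and ghost_slot maps each ghost's sparse (global) index to
// its dense slot.  Illustrative only -- not the real libMesh layout.
struct GhostedVector {
  std::size_t first_local;     // first globally-owned index on this rank
  std::vector<double> values;  // [owned..., ghosts...]
  std::unordered_map<std::size_t, std::size_t> ghost_slot;  // global -> dense

  GhostedVector(std::size_t first, std::size_t n_owned,
                const std::vector<std::size_t>& ghost_ids)
      : first_local(first), values(n_owned + ghost_ids.size(), 0.0) {
    for (std::size_t g = 0; g < ghost_ids.size(); ++g)
      ghost_slot[ghost_ids[g]] = n_owned + g;
  }

  double& operator()(std::size_t global_i) {
    const std::size_t n_owned = values.size() - ghost_slot.size();
    if (global_i >= first_local && global_i < first_local + n_owned)
      return values[global_i - first_local];  // locally-owned entry
    return values[ghost_slot.at(global_i)];   // ghost entry
  }
};
```

For example, a rank owning global dofs 4..7 with ghost dofs {2, 9} stores six doubles, and both owned and ghost entries are addressable by their global index; this is why local_size() (here, 4) says nothing about the ghost block.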
From: John Peterson <peterson@cf...>  2007-07-10 12:32:28

Hi,

I think current_local_solution contains all the information you need to do a matrix assembly (all on-processor and some neighbor-processor dofs). current_local_solution is updated from solution via the call to system.update().

I believe the reason to have both is that it somehow reduces communication overhead... all the solution values needed for a matrix assembly step are local to your processor after the call to update(). You don't have to worry about pulling values from other CPUs during the middle of assembly.

John

Tim Kröger writes:
> Dear all,
>
> what is the difference between System::solution and
> System::current_local_solution? My guess would be that
> current_local_solution only holds those values that live on the local
> processor, whereas solution holds all values. But then, I wonder why
> it is necessary to have both. Another guess would be that solution is
> only valid in certain situations. But then, I don't know how to find
> out whether it is currently valid or not.
>
> Best Regards,
>
> Tim
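John's point, that update() pulls every value an assembly loop will touch (owned dofs plus neighbor-processor ghosts) into current_local_solution before assembly starts, can be modeled without MPI in a few lines. The Rank struct, its ownership range, and its ghost list are all hypothetical stand-ins; the distributed solution is faked as a single array:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// Toy model of system.update(): copy this rank's owned dof values and
// its ghost dof values out of the (faked) distributed solution vector
// into current_local_solution, so that element assembly never has to
// communicate mid-loop.  Names are illustrative, not libMesh's API.
struct Rank {
  std::size_t first, last;          // owned dof range [first, last)
  std::vector<std::size_t> ghosts;  // dofs owned by neighboring ranks
  std::map<std::size_t, double> current_local_solution;

  void update(const std::vector<double>& solution) {
    current_local_solution.clear();
    for (std::size_t i = first; i < last; ++i)
      current_local_solution[i] = solution[i];  // locally-owned values
    for (std::size_t g : ghosts)
      current_local_solution[g] = solution[g];  // ghost values
  }
};
```

After update(), an assembly loop on this rank reads only current_local_solution, which is exactly why no values need to be pulled from other CPUs during the middle of assembly.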
From: <tim@ce...>  2007-07-10 13:09:51

Dear John,

On Tue, 10 Jul 2007, John Peterson wrote:

> I think current_local_solution contains all the information you need
> to do a matrix assembly (all on-processor and some neighbor-processor
> dofs). current_local_solution is updated from solution via the call
> to system.update().
>
> I believe the reason to have both is that it somehow reduces
> communication overhead... all the solution values needed for a matrix
> assembly step are local to your processor after the call to update().
> You don't have to worry about pulling values from other CPUs during
> the middle of assembly.

Okay, thank you.

I have to admit that this has confused me since I started using libMesh, and I would appreciate understanding it completely, because I have always worked by trial and error here.

Mainly, I usually face one of the following three situations:

1.) I want to read a value at a node of an element that lives on the local processor. If I understand you correctly, it should not matter in this case whether I read from solution or from current_local_solution.

2.) I want to read a value at a node without knowing on which processor it lives. If I understand you correctly, I can still get this value by reading from solution.

3.) I want to *write* a value at a node of an element that lives on the local processor. If I understand you correctly, I should write this value to solution (not current_local_solution) and call update() afterwards. Also, I have to ensure that nodes that live on more than one processor (because they are located at the boundary) get the same value written on each processor.

Am I right with my conjectures?

Best Regards,

Tim
From: John Peterson <peterson@cf...>  2007-07-10 13:38:35

Tim Kröger writes:

> 1.) I want to read a value at a node of an element that lives on the
> local processor. If I understand you correctly, it should not matter
> in this case whether I read from solution or from
> current_local_solution.

Correct.

> 2.) I want to read a value at a node without knowing on which
> processor it lives. If I understand you correctly, I can still get
> this value by reading from solution.

Yes. In this case you would check if the DOF index was between first_local_index and last_local_index on each processor, and if so get the value. Since you need to know the value on all processors, you could use an MPI call to send this value to the other processors. This is a bit cumbersome and inefficient, and if you expect to do it frequently, i.e. read almost every node's value (without changing the entries in the global vector, obviously), your best bet is probably to localize the NumericVector using one of the available methods first.

> 3.) I want to *write* a value at a node of an element that lives on
> the local processor. If I understand you correctly, I should write
> this value to solution (not current_local_solution) and call update()
> afterwards.

Correct; you never write to current_local_solution in any case that I can think of.

> Also, I have to ensure that nodes that live on more than
> one processor (because they are located at the boundary) get the same
> value written on each processor.

Nodes on the boundary between two or more processors should technically be "owned" by the minimum processor ID of the set of potential "owners" (this should also be the value returned by node->processor_id()). If you look at the implementation of e.g. PetscVector::set(), it appears that it works from any CPU: PETSc caches the set values locally until you call close() on the vector.

J
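The set()-from-any-CPU behavior John describes, where off-processor writes are cached locally and only applied when close() is called, can be sketched with a toy class. Everything here (the class, its members, the single-array stand-in for distributed storage) is an invented illustration, not PETSc's actual implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Toy model of PetscVector::set()/close(): a set() on a locally-owned
// index is applied immediately, while a set() on an off-processor index
// is cached and only "communicated" (here, drained into the array) when
// close() is called.  Illustrative only -- not PETSc's real machinery.
struct CachingVector {
  std::vector<double> global;                           // faked distributed data
  std::pair<std::size_t, std::size_t> owned;            // [first, last) on "this rank"
  std::vector<std::pair<std::size_t, double>> pending;  // cached off-processor sets

  CachingVector(std::size_t n, std::size_t first, std::size_t last)
      : global(n, 0.0), owned{first, last} {}

  void set(std::size_t i, double v) {
    if (i >= owned.first && i < owned.second)
      global[i] = v;               // local: apply immediately
    else
      pending.emplace_back(i, v);  // remote: cache until close()
  }

  void close() {  // flush the cache, as PETSc does at assembly time
    for (const auto& p : pending) global[p.first] = p.second;
    pending.clear();
  }
};
```

The design consequence is the one John implies: writes to a non-owned entry are invisible until close(), so any code that sets values and then reads them back must close() the vector (and, in libMesh, call update()) in between.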