From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...> - 2009-03-25 16:30:04

system.add_variable(foo,SCALAR); as the interface?

We just then add the number of such variables into the nonzero count for each row, and add a number of rows of full width. ??

----- Original Message -----
From: Derek Gaston <friedmud@...>
To: libmesh-users@... <libmesh-users@...>
Sent: Wed Mar 25 11:14:35 2009
Subject: [Libmesh-users] Adding one more equation...

So... I got a question from one of my colleagues this morning about adding just one more equation (one more row and column to the matrix) to an existing NonlinearImplicitSystem. Essentially, this is adding one global scalar equation. In this case both the row and column are actually going to be dense...

Any ideas on this? It's almost like I need to add a node that's connected to every other node... or something.

Thanks,
Derek

_______________________________________________
Libmesh-users mailing list
Libmesh-users@...
https://lists.sourceforge.net/lists/listinfo/libmesh-users
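[The structure Derek describes, one extra row and column that are both dense, is a bordered (arrowhead) matrix. As a self-contained illustration of Ben's nonzero-count remark, the sketch below counts nonzeros per row for a hypothetical 1D tridiagonal system before appending one globally coupled scalar; the function name and stencil are illustrative only, not libMesh code:]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Nonzeros per row of an n x n tridiagonal system after appending one
// globally coupled scalar equation (a bordered/arrowhead matrix):
// every existing row gains one entry, and one full-width row is added.
std::vector<std::size_t> bordered_row_nnz(std::size_t n)
{
    std::vector<std::size_t> nnz(n + 1);
    for (std::size_t i = 0; i < n; ++i)
    {
        std::size_t band = 3;                 // interior tridiagonal row
        if (i == 0 || i == n - 1) band = 2;   // boundary rows
        nnz[i] = band + 1;                    // +1: coupling to the scalar
    }
    nnz[n] = n + 1;                           // the scalar's row is full
    return nnz;
}
```

[For n = 5 this gives row counts {3, 4, 4, 4, 3, 6}: the last row is dense, exactly the "row of full width" in Ben's proposal.]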
From: Roy Stogner <roystgnr@ic...> - 2009-03-25 17:13:37

On Wed, 25 Mar 2009, Kirk, Benjamin (JSC-EG311) wrote:

> system.add_variable(foo,SCALAR); as the interface?

Interesting. That fits perfectly into the per-subdomain interface, too.

I don't know if this is what Derek's colleague needs, but it would certainly be perfect for some problems I'd looked at. And it's a much better API idea than the "GlobalElem" nonsense I had vaguely floating around in my head.

> We just then add the number of such variables into the nonzero count
> for each row, and add a number of rows of full width. ??

And make sure that dof_indices includes them, so that everyone's element residuals and jacobians end up sized appropriately.

---
Roy
From: Derek Gaston <friedmud@gm...> - 2009-03-25 17:36:17

On Mar 25, 2009, at 11:13 AM, Roy Stogner wrote:

> On Wed, 25 Mar 2009, Kirk, Benjamin (JSC-EG311) wrote:
>
>> system.add_variable(foo,SCALAR); as the interface?
>
> Interesting. That fits perfectly into the per-subdomain interface,
> too.
>
> I don't know if this is what Derek's colleague needs, but it would
> certainly be perfect for some problems I'd looked at. And it's a much
> better API idea than the "GlobalElem" nonsense I had vaguely floating
> around in my head.

This sounds like exactly what he's looking for. How difficult would it be to pull off?

>> We just then add the number of such variables into the nonzero count
>> for each row, and add a number of rows of full width. ??
>
> And make sure that dof_indices includes them, so that everyone's
> element residuals and jacobians end up sized appropriately.

Definitely... in fact, to answer John's question... we will be doing this matrix-free (with some crazy preconditioning... so we do need to be able to build matrices)... so the residuals definitely need to get sized correctly.

One question... will one processor end up doing a lot more work in parallel? If that row is dense and is owned by one processor, it feels like there might be a parallel inefficiency there. Is there something we can do in the partitioning to try to even that out... like weighting that DOF heavily in the partitioning scheme so that that processor has less other work to do?

Obviously this isn't a big deal initially... the capability is more important than the optimization... just something to think about.

Derek
From: Roy Stogner <roystgnr@ic...> - 2009-03-25 17:46:45

On Wed, 25 Mar 2009, Derek Gaston wrote:

> One question... will one processor end up doing a lot more work in parallel?

Each row has to go somewhere. We could "round robin" the processor assignments if more than one SCALAR variable is attached, but there's no way to get around *some* imbalance other than by handing fewer elements to those processors that also have SCALAR rows.

I wouldn't worry about it too much, though -- in fact, we could probably assign SCALAR dofs in reverse order (to processor N if there's one of them, N and N-1 if there's two, etc.) and cancel out some of our existing imbalance. Right now lower-numbered processors are a little overburdened with dofs on partition boundaries.

---
Roy
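[The reverse-order assignment Roy suggests can be sketched in a few lines; the function name, signature, and the wrap-around behavior for more scalars than processors are illustrative assumptions, not libMesh code:]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Roy's policy, sketched: hand the k-th SCALAR dof to the
// highest-numbered processors first (N-1, then N-2, ...), wrapping
// around if there are more scalars than processors. This offsets the
// extra partition-boundary work carried by low-numbered processors.
std::vector<std::size_t> assign_scalar_procs(std::size_t n_scalars,
                                             std::size_t n_procs)
{
    std::vector<std::size_t> owner(n_scalars);
    for (std::size_t k = 0; k < n_scalars; ++k)
        owner[k] = n_procs - 1 - (k % n_procs);
    return owner;
}
```

[With 4 processors and 5 scalars this yields owners {3, 2, 1, 0, 3}.]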
From: David Knezevic <dknez@MIT.EDU> - 2009-04-01 18:01:44

Regarding Ben's system.add_variable(foo,SCALAR); idea: I actually need the exact same functionality to implement a pure-Neumann problem, with a constraint \int_\Omega u \, dx = 0 that is imposed via a scalar Lagrange multiplier.

Derek, has anyone in your group had a chance to look at implementing this yet? If not, I'll try to follow up on it.

-- Dave

Roy Stogner wrote:
>
> On Wed, 25 Mar 2009, Derek Gaston wrote:
>
>> One question... will one processor end up doing a lot more work in parallel?
>
> Each row has to go somewhere. We could "round robin" the processor
> assignments if more than one SCALAR variable is attached, but there's
> no way to get around *some* imbalance other than by handing fewer
> elements to those processors that also have SCALAR rows.
>
> I wouldn't worry about it too much, though -- in fact, we could
> probably assign SCALAR dofs in reverse order (to processor N if
> there's one of them, N and N-1 if there's two, etc.) and cancel out
> some of our existing imbalance. Right now lower-numbered processors
> are a little overburdened with dofs on partition boundaries.
> ---
> Roy
>
> _______________________________________________
> Libmesh-users mailing list
> Libmesh-users@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
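[For reference, the constrained problem David describes leads to a saddle-point weak form. Written here for a Poisson operator with homogeneous Neumann data (an illustrative assumption), it reads:]

```latex
\text{Find } (u, \lambda) \in H^1(\Omega) \times \mathbb{R} \text{ such that}
\begin{aligned}
\int_\Omega \nabla u \cdot \nabla v \, dx
  + \lambda \int_\Omega v \, dx &= \int_\Omega f v \, dx
  &&\forall\, v \in H^1(\Omega), \\
\int_\Omega u \, dx &= 0.
\end{aligned}
```

[The multiplier \lambda contributes exactly one dense row (the constraint equation) and one dense column (its coupling to every v), which is why a SCALAR variable fits this problem.]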
From: Derek Gaston <friedmud@gm...> - 2009-04-01 19:30:48

Nope... it's still hanging out there, though. It is definitely a problem we need to solve... we just haven't made it around to it yet.

Derek

On Apr 1, 2009, at 11:51 AM, David Knezevic wrote:

> Regarding Ben's system.add_variable(foo,SCALAR); idea: I actually
> need the exact same functionality to implement a pure-Neumann
> problem, with a constraint \int_\Omega u \, dx = 0 that is imposed via
> a scalar Lagrange multiplier.
>
> Derek, has anyone in your group had a chance to look at implementing
> this yet? If not, I'll try to follow up on it.
>
> -- Dave
>
> Roy Stogner wrote:
>> On Wed, 25 Mar 2009, Derek Gaston wrote:
>>> One question... will one processor end up doing a lot more work in
>>> parallel?
>> Each row has to go somewhere. We could "round robin" the processor
>> assignments if more than one SCALAR variable is attached, but there's
>> no way to get around *some* imbalance other than by handing fewer
>> elements to those processors that also have SCALAR rows.
>> I wouldn't worry about it too much, though -- in fact, we could
>> probably assign SCALAR dofs in reverse order (to processor N if
>> there's one of them, N and N-1 if there's two, etc.) and cancel out
>> some of our existing imbalance. Right now lower-numbered processors
>> are a little overburdened with dofs on partition boundaries.
>> ---
>> Roy
>> _______________________________________________
>> Libmesh-users mailing list
>> Libmesh-users@...
>> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: David Knezevic <dknez@MIT.EDU> - 2009-04-18 21:31:03

So, I'd really like to add this system.add_variable(foo,SCALAR) functionality to libMesh. I've been poking around the library, but I'm not really sure how to get started, so I could definitely use some pointers...

It seems to me that the important details for this are in DofMap, for storing and retrieving the dof index of the scalar variable, as well as for setting the sparsity pattern/number of nonzeros...?

Some thoughts I've had are:

- Add a new enum SCALAR to FEFamily (or alternatively, a new enum SCALAR to Order)?
- Short-circuit all the loops over elements for DOF counting in DofMap for SCALAR variables, and instead store the dof index of each SCALAR variable in a vector in DofMap?
- However, I can't see where in the code one should compute the dof index of a SCALAR variable in the first place?
- Ben, regarding setting the nonzero count and the number of rows that you mentioned; where is this controlled? In SparsityPattern?

As you can probably tell, I'm not very familiar with the guts of the library, so any help would be appreciated!

Regards,
Dave

Kirk, Benjamin (JSC-EG311) wrote:
> system.add_variable(foo,SCALAR); as the interface?
>
> We just then add the number of such variables into the nonzero count for each row, and add a number of rows of full width.
> ??
>
> ----- Original Message -----
> From: Derek Gaston <friedmud@...>
> To: libmesh-users@... <libmesh-users@...>
> Sent: Wed Mar 25 11:14:35 2009
> Subject: [Libmesh-users] Adding one more equation...
>
> So... I got a question from one of my colleagues this morning about adding
> just one more equation (one more row and column to the matrix) to an
> existing NonlinearImplicitSystem. Essentially, this is adding one global
> scalar equation. In this case both the row and column are actually going to
> be dense...
> Any ideas on this? It's almost like I need to add a node that's connected
> to every other node... or something.
>
> Thanks,
> Derek
> _______________________________________________
> Libmesh-users mailing list
> Libmesh-users@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...> - 2009-04-18 21:48:40

> - Add a new enum SCALAR to FEFamily (or alternatively, a new enum SCALAR
> to Order)?

Something like that sounds good... This is a single scalar value coupled to *all* other DOFs?

> - Short-circuit all the loops over elements for DOF counting in DofMap
> for SCALAR variables, and instead store the dof index of each SCALAR
> variable in a vector in DofMap?

I would think the DofMap needs to store a vector of scalar indices, yeah...

> - However, I can't see where in the code one should compute the dof
> index of a SCALAR variable in the first place?

Well, after the typical dof indexing each processor knows the local number of Dofs and their global Dof indices, [0, n_global_dofs), so don't the scalars get numbered sequentially in [n_global_dofs, n_global_dofs + n_scalars)?

> - Ben, regarding setting the nonzero count and the number of rows that
> you mentioned; where is this controlled? In SparsityPattern?

Yeah, the sparsity pattern is constructed, potentially handed off to initialize matrices, then the nonzeros are counted, and it is removed to save storage.

Everything should fall through provided:

(1) DofMap::dof_indices() tacks on the scalar indices for an element when it is called, since you are expecting them to be coupled to all element dofs, and
(2) n_scalar additional, *full* rows are added at the end of the sparsity pattern.

I think it would be pretty easy to code up and try in serial; not sure at what point in parallel we will need to get smarter with the load balancing. Probably round-robin'ing the scalars across processors is a better idea than putting them contiguously at the end, and thus in the last processor's memory...

Ben
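[Ben's two conditions can be sketched without any libMesh internals; the function name and signature below are hypothetical stand-ins for what DofMap::dof_indices() would do, namely number the scalars after all mesh dofs and append them to every element's index list:]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of Ben's numbering convention: mesh dofs occupy
// [0, n_mesh_dofs); the n_scalars SCALAR dofs follow sequentially in
// [n_mesh_dofs, n_mesh_dofs + n_scalars). Appending every scalar index
// to each element's dof list couples the scalars to all element dofs,
// which is what makes their rows/columns dense.
std::vector<std::size_t> element_dof_indices(
    const std::vector<std::size_t>& mesh_dofs,  // this element's mesh dofs
    std::size_t n_mesh_dofs,                    // global mesh dof count
    std::size_t n_scalars)                      // number of SCALAR variables
{
    std::vector<std::size_t> indices(mesh_dofs);
    for (std::size_t s = 0; s < n_scalars; ++s)
        indices.push_back(n_mesh_dofs + s);     // scalars numbered last
    return indices;
}
```

[E.g. an element owning mesh dofs {2, 5, 7} in a 100-dof system with two scalars would get {2, 5, 7, 100, 101}, so its residual and jacobian are automatically sized to include the scalar coupling.]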
From: David Knezevic <dknez@MIT.EDU> - 2009-04-18 22:05:53

Thanks Ben and Roy, I'll have a go at it tomorrow.

-- Dave

Kirk, Benjamin (JSC-EG311) wrote:
>> - Add a new enum SCALAR to FEFamily (or alternatively, a new enum SCALAR
>> to Order)?
>
> Something like that sounds good... This is a single scalar value coupled to
> *all* other DOFs?
>
>> - Short-circuit all the loops over elements for DOF counting in DofMap
>> for SCALAR variables, and instead store the dof index of each SCALAR
>> variable in a vector in DofMap?
>
> I would think the DofMap needs to store a vector of scalar indices, yeah...
>
>> - However, I can't see where in the code one should compute the dof
>> index of a SCALAR variable in the first place?
>
> Well, after the typical dof indexing each processor knows the local number
> of Dofs and their global Dof indices, [0, n_global_dofs), so don't the
> scalars get numbered sequentially in
> [n_global_dofs, n_global_dofs + n_scalars)?
>
>> - Ben, regarding setting the nonzero count and the number of rows that
>> you mentioned; where is this controlled? In SparsityPattern?
>
> Yeah, the sparsity pattern is constructed, potentially handed off to
> initialize matrices, then the nonzeros are counted, and it is removed to
> save storage.
>
> Everything should fall through provided:
>
> (1) DofMap::dof_indices() tacks on the scalar indices for an element when it
> is called, since you are expecting them to be coupled to all element dofs,
> and
> (2) n_scalar additional, *full* rows are added at the end of the sparsity
> pattern.
>
> I think it would be pretty easy to code up and try in serial; not sure at
> what point in parallel we will need to get smarter with the load balancing.
> Probably round-robin'ing the scalars across processors is a better idea
> than putting them contiguously at the end, and thus in the last processor's
> memory...
>
> Ben
From: Roy Stogner <roystgnr@ic...> - 2009-04-18 21:48:52

On Sat, 18 Apr 2009, David Knezevic wrote:

> So, I'd really like to add this system.add_variable(foo,SCALAR)
> functionality to libMesh. I've been poking around the library, but I'm
> not really sure how to get started, so I could definitely use some
> pointers...
>
> It seems to me that the important details for this are in DofMap, for
> storing and retrieving the dof index of the scalar variable, as well as
> for setting the sparsity pattern/number of nonzeros...?

That's right.

> Some thoughts I've had are:
>
> - Add a new enum SCALAR to FEFamily (or alternatively, a new enum SCALAR
> to Order)?

FEFamily. The Order would then specify how many scalars to add.

Maybe next write an FE<SCALAR> specialization? Something that acted sort of like a discontinuous monomial element? That might help avoid the need to add special cases to a lot of loops in the library later.

> - Short-circuit all the loops over elements for DOF counting in DofMap
> for SCALAR variables,

If the FE<SCALAR> claimed to have zero dofs per element, DofMap would basically skip right over it... but I'm not sure that's an intuitive behavior; this might be a loop you really do want a special case on.

> and instead store the dof index of each SCALAR variable in a vector
> in DofMap?

Right.

> - However, I can't see where in the code one should compute the dof
> index of a SCALAR variable in the first place?

That's all in dof_map.C.

> - Ben, regarding setting the nonzero count and the number of rows that
> you mentioned; where is this controlled? In SparsityPattern?

In SparseMatrix, I believe, although the code to build the sparsity pattern is in DofMap.

---
Roy
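[Roy's "zero dofs per element" remark can be illustrated with a tiny, purely hypothetical dof-count dispatch (the enum values and per-family formulas below are illustrative for 1D elements, not libMesh's actual FE machinery):]

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of the idea: a SCALAR family reports zero dofs
// per element, so element-wise dof-counting loops naturally skip it,
// and its global indices are handled by a separate special case.
enum FEFamily { LAGRANGE, MONOMIAL, SCALAR };

std::size_t n_dofs_per_elem(FEFamily family, std::size_t order)
{
    switch (family)
    {
    case LAGRANGE: return order + 1;  // 1D edge element, illustrative
    case MONOMIAL: return order + 1;  // 1D discontinuous monomial
    case SCALAR:   return 0;          // no per-element dofs at all
    }
    return 0;
}
```

[As Roy notes, the trade-off is that an element loop silently skipping SCALAR may be less readable than an explicit special case in the loop itself.]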