From: John P. <jwp...@gm...> - 2018-06-27 14:17:09
On Wed, Jun 27, 2018 at 7:42 AM, Bailey Curzadd <bcu...@gm...> wrote:
> Hi there,
>
> I'm using libMesh to perform topology optimization on a mesh with a
> spherical surface. I only have a single LinearImplicitSystem. The normal
> displacement of nodes on the spherical surface needs to be constrained to
> force the surface to remain spherical. I am using Lagrange multipliers to
> do this, but have found that adding a large number of scalar variables
> results in EquationSystems::init() taking an extraordinary amount of time
> to build the system. For a mesh with 9585 nodes, 45728 elements, and
> first-order interpolation, EquationSystems::init() takes practically no
> time at all without scalar variables for the MPCs. However, when I add 1332
> scalar variables (one for each node on the spherical surface),
> EquationSystems::init() runs for over an hour. Is this to be expected with
> libMesh? Is there a more efficient way to apply MPCs to numerous nodes?

We do have support for per-subdomain SCALAR variable coupling, but AFAIK,
SCALAR variables couple to *every* other dof (including other SCALAR
variables) in the System, and so can result in not-very-sparse sparsity
patterns for System matrices. This can lead to a lot of memory being
allocated (which can slow things down if your machine goes into swap), but
it might also be exposing an inefficiency in the DofMap or other code...

Would it be possible to discretize your Lagrange multiplier variable using
a (linear Lagrange, constant monomial, etc.) field variable instead of a
bunch of SCALARs?

--
John
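
For what it's worth, a minimal sketch of what I mean (not taken from your
code; the system name, variable names, subdomain id, and mesh file are all
made up, and you'd restrict the multiplier to whatever subdomain marks the
spherical surface in your mesh):

  // Sketch: one field-discretized Lagrange multiplier restricted to a
  // subdomain, instead of one SCALAR variable per constrained node.
  #include "libmesh/libmesh.h"
  #include "libmesh/mesh.h"
  #include "libmesh/equation_systems.h"
  #include "libmesh/linear_implicit_system.h"

  using namespace libMesh;

  int main (int argc, char ** argv)
  {
    LibMeshInit init (argc, argv);

    Mesh mesh (init.comm());
    mesh.read ("sphere.e");   // hypothetical mesh file

    EquationSystems es (mesh);
    LinearImplicitSystem & sys =
      es.add_system<LinearImplicitSystem> ("Elasticity");

    // Displacement field variables, as in the original problem.
    sys.add_variable ("u", FIRST, LAGRANGE);
    sys.add_variable ("v", FIRST, LAGRANGE);
    sys.add_variable ("w", FIRST, LAGRANGE);

    // Instead of ~1332 SCALAR variables (one per surface node), add a
    // single multiplier field restricted to the elements touching the
    // spherical surface.  Here subdomain id 1 is assumed to mark those
    // elements; CONSTANT MONOMIAL would be another reasonable choice.
    std::set<subdomain_id_type> surface_subdomain = {1};
    sys.add_variable ("lambda", FIRST, LAGRANGE, &surface_subdomain);

    es.init ();        // multiplier dofs now couple only locally
    es.print_info ();

    return 0;
  }

Since "lambda" is an ordinary field variable, its dofs only couple to dofs
on neighboring elements, so the sparsity pattern stays sparse and init()
should not blow up the way it does with hundreds of SCALARs.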