On Apr 30, 2013 9:52 PM, "Derek Gaston" <friedmud@...> wrote:
>
> I've got a problem with 1000+ variables in it (meaning 1000+ DoFs per
> node with Lagrange). If I do normal domain decomposition I end up with
> only about 50 nodes per processor to get down to ~50000 DoFs per processor.
> Unfortunately, that means that for any given variable there are only 50
> DoFs for that variable on a processor... which apparently is causing
> _extremely_ poor preconditioning (even using AMG like Hypre).
I would recommend trying GAMG, where you have a wider choice of smoothers.
Admittedly, it may still be somewhat rough around the edges, but we can
work on resolving the specific kinks you might encounter. By default GAMG
uses a Chebyshev smoother, which should be relatively robust and circumvent
some of the problems with block Jacobi. It can be tricky to fine-tune,
though. A simpler choice may be ASM with a modest overlap.
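For concreteness, here is an untested sketch of how one might select
either option through the PETSc API; it assumes you already have a KSP
with its operators set, and nothing here is tuned for your particular
problem:

    #include <petscksp.h>

    /* Untested sketch: select GAMG (Chebyshev smoothing is its default)
       on an existing KSP; the commented lines show the ASM-with-overlap
       alternative. Runtime equivalents: -pc_type gamg, or
       -pc_type asm -pc_asm_overlap 2. */
    PetscErrorCode choose_pc(KSP ksp)
    {
      PetscErrorCode ierr;
      PC             pc;

      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);
      /* Alternative: additive Schwarz with modest overlap:
         ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
         ierr = PCASMSetOverlap(pc, 2);CHKERRQ(ierr); */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* let -pc_type etc. override */
      return 0;
    }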
>
> What this case really needs is to do decomposition _by variable_. So if
> you had 1000 processors, each one would take the part of the problem
> corresponding to one variable. This would allow you to form great
> block-diagonal preconditioners (which is what this problem needs... all of
> the variables are coupled... but the block diagonals dominate).
The problem with this, it seems to me, is that you would end up with
all-to-all coupling between the ranks of your comm: since every variable
couples to every other, each rank would need ghost values from every other
rank just to assemble its residual. That implies the communication volume
of a dense problem without the local arithmetic intensity to hide the
communication cost.
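That said, the block-diagonal preconditioner itself does not require
redistributing by variable: PCFieldSplit with additive composition applies
exactly the per-variable block Jacobi you describe on top of the usual
domain decomposition. A rough sketch (the index sets is_var[i] holding each
variable's DoFs are assumed to come from your application; the names here
are illustrative, not libMesh API):

    #include <petscksp.h>

    /* Rough sketch: block-diagonal (additive) field split, one block
       per variable. `is_var[i]` is assumed to hold the global DoF
       index set of variable i -- illustrative, not a libMesh API. */
    PetscErrorCode setup_fieldsplit(KSP ksp, PetscInt nvars, IS *is_var)
    {
      PetscErrorCode ierr;
      PC             pc;
      PetscInt       i;
      char           name[16];

      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
      /* ADDITIVE composition == block Jacobi over the splits, i.e.
         the block diagonal of the coupled operator. */
      ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_ADDITIVE);CHKERRQ(ierr);
      for (i = 0; i < nvars; i++) {
        ierr = PetscSNPrintf(name, sizeof(name), "%D", i);CHKERRQ(ierr);
        ierr = PCFieldSplitSetIS(pc, name, is_var[i]);CHKERRQ(ierr);
      }
      return 0;
    }

Each diagonal block can then get its own inner solver via the usual
-fieldsplit_<name>_ option prefix, e.g. -fieldsplit_0_pc_type hypre.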
Dmitry.
>
> I'm pretty sure we're far off from being able to do that... but I thought
> I would ping you guys to see what you thought about the idea. Any thoughts
> on how doable that would be?
>
> Derek