From: Kirk, Benjamin (JSCEG) <Benjamin.Kirk1@na...> - 2008-11-12 23:48:29

I was just thinking about the dof indexing too... What about adding a system for each MG level? That solves the indexing... But we need a mechanism for telling the associated dof maps to operate on a specified 'view' of the mesh. Detecting hanging nodes on the sublevels will take some work...

Ben

----- Original Message -----
From: Roy Stogner <roystgnr@...>
To: Vijay M <vijay.m@...>
Cc: libmesh-users@... <libmesh-users@...>
Sent: Wed Nov 12 17:43:07 2008
Subject: Re: [Libmesh-users] Multigrid techniques with libmesh

On Wed, 12 Nov 2008, Vijay M wrote:

> Of course, I do not have any multigrid support in my framework yet and so
> was wondering if there was a cleaner and more elegant way to perform
> geometric multigrid. My main problem is that since the mesh is associated
> with the EquationSystems, when I coarsen the mesh, all my systems are
> reinited and reallocated. And so it just seems too expensive to perform a
> couple of cycles this way.

It definitely would be. We wouldn't be averse to making library changes to enable CPU-efficient geometric multigrid, but the amount of work you'd have to do might not be programmer-efficient.

> In the meanwhile, if you have suggestions to perform geometric
> multigrid, do let me know.

I've never done it on anything but toy code, so I'm not sure what the most efficient way to do things would be. For doing multiple multigrid cycles per Newton step I think you'd want to store all the projection & restriction operators and the coarse grid Jacobians as sparse matrices themselves, to avoid having to do redundant FEM integrations. But that sounds like it would roughly double your memory use.

And then there's the question of what to do with the Mesh and DoFMap. We'd finally use the "subactive" element capabilities to provide different active-level views of the same mesh, but DoF indexing (especially with non-Lagrange elements!) would be tricky; we'd need to keep track of per-level global dof indices, not just the (maximum of) 2 that we use now.
---
Roy

-------------------------------------------------------------------------
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge. Build the coolest Linux based applications with Moblin SDK & win great prizes. Grand prize is a trip for two to an Open Source event anywhere in the world. http://moblin-contest.org/redirect.php?banner_id=100&url=/
_______________________________________________
Libmesh-users mailing list
Libmesh-users@...
https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: Roy Stogner <roystgnr@ic...> - 2008-11-12 23:43:13

On Wed, 12 Nov 2008, Vijay M wrote:

> Of course, I do not have any multigrid support in my framework yet and so
> was wondering if there was a cleaner and more elegant way to perform
> geometric multigrid. My main problem is that since the mesh is associated
> with the EquationSystems, when I coarsen the mesh, all my systems are
> reinited and reallocated. And so it just seems too expensive to perform a
> couple of cycles this way.

It definitely would be. We wouldn't be averse to making library changes to enable CPU-efficient geometric multigrid, but the amount of work you'd have to do might not be programmer-efficient.

> In the meanwhile, if you have suggestions to perform geometric
> multigrid, do let me know.

I've never done it on anything but toy code, so I'm not sure what the most efficient way to do things would be. For doing multiple multigrid cycles per Newton step I think you'd want to store all the projection & restriction operators and the coarse grid Jacobians as sparse matrices themselves, to avoid having to do redundant FEM integrations. But that sounds like it would roughly double your memory use.

And then there's the question of what to do with the Mesh and DoFMap. We'd finally use the "subactive" element capabilities to provide different active-level views of the same mesh, but DoF indexing (especially with non-Lagrange elements!) would be tricky; we'd need to keep track of per-level global dof indices, not just the (maximum of) 2 that we use now.
---
Roy
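[Editorial sketch: Roy's suggestion above — store the restriction/prolongation operators and the Galerkin coarse-grid "Jacobian" once, then reuse them across cycles — can be seen in miniature. The code below is not libMesh code; it is a minimal pure-Python two-grid solver for the 1D Poisson problem, with full-weighting restriction, linear interpolation, a weighted-Jacobi smoother, and a direct coarse solve. All names and sizes are illustrative.]

```python
import math

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def gauss_solve(A, b):
    # dense Gaussian elimination with partial pivoting (small coarse system only)
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

nf = 15                      # fine-grid interior points (odd, so coarse grid nests)
nc = (nf - 1) // 2           # coarse-grid interior points
h = 1.0 / (nf + 1)

# fine-grid operator: -u'' with homogeneous Dirichlet BCs, 2nd-order FD
A = [[0.0] * nf for _ in range(nf)]
for i in range(nf):
    A[i][i] = 2.0 / h**2
    if i > 0: A[i][i - 1] = -1.0 / h**2
    if i < nf - 1: A[i][i + 1] = -1.0 / h**2

# stored transfer operators: full-weighting restriction R, interpolation P = 2 R^T
R = [[0.0] * nf for _ in range(nc)]
for j in range(nc):
    f = 2 * j + 1            # fine node under coarse node j
    R[j][f - 1], R[j][f], R[j][f + 1] = 0.25, 0.5, 0.25
P = [[2.0 * R[j][i] for j in range(nc)] for i in range(nf)]

# stored Galerkin coarse operator A_c = R A P (the reusable "coarse Jacobian")
AP = [matvec(A, [P[i][j] for i in range(nf)]) for j in range(nc)]  # columns of A P
Ac = [[sum(R[j][i] * AP[k][i] for i in range(nf)) for k in range(nc)]
      for j in range(nc)]

b = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(nf)]

def jacobi(x, rhs, sweeps=3, omega=2.0 / 3.0):
    # weighted-Jacobi smoother on the fine grid
    for _ in range(sweeps):
        r = [ri - axi for ri, axi in zip(rhs, matvec(A, x))]
        x = [xi + omega * ri / A[i][i] for i, (xi, ri) in enumerate(zip(x, r))]
    return x

x = [0.0] * nf
for _ in range(30):          # two-grid cycles, reusing R, P, Ac every time
    x = jacobi(x, b)                                   # pre-smooth
    r = [ri - axi for ri, axi in zip(b, matvec(A, x))]
    rc = matvec(R, r)                                  # restrict residual
    ec = gauss_solve(Ac, rc)                           # coarse-grid solve
    x = [xi + ei for xi, ei in zip(x, matvec(P, ec))]  # prolong and correct
    x = jacobi(x, b)                                   # post-smooth

res = max(abs(ri - axi) for ri, axi in zip(b, matvec(A, x)))
```

Building R, P, and Ac once and reusing them each cycle is exactly the memory-for-CPU trade Roy describes: nothing is re-integrated, at the cost of holding the extra operators.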
From: Vijay M <vijay.m@gm...> - 2008-11-12 23:28:22

Derek,

Thanks for the reply. I would be very interested to read the paper when it is published, so do keep me updated!

I already have options to run my code with -snes_mf and -snes_mf_operator so that a user-specified preconditioner (the approximate Jacobian) like you mentioned is used. But from an efficiency perspective, for nonlinear problems, I was thinking maybe a geometric multigrid algorithm would be competitive compared to building a linearized version of the Jacobian and applying ILU or algebraic multigrid to solve it. From my initial studies, the ILU route does not seem to scale as I increase the number of processors, and so the option to use either geometric or algebraic multigrid for elliptic-elliptic coupled systems seems attractive. I can use block-Jacobi preconditioning, in which case each physics block is symmetric, and here I can possibly use a multigrid strategy. Also, for incompressible flows, I can reduce the coupled equations to a pressure Poisson equation and use multigrid again to precondition the physics block.

Of course, I do not have any multigrid support in my framework yet and so was wondering if there was a cleaner and more elegant way to perform geometric multigrid. My main problem is that since the mesh is associated with the EquationSystems, when I coarsen the mesh, all my systems are reinited and reallocated. And so it just seems too expensive to perform a couple of cycles this way. If this is the only route to perform geometric multigrid, then I will settle for algebraic multigrid, in which case I'll build the linearized Jacobian matrix for each physics and call hypre to precondition that. Now what do you suggest would be optimal based on the data structures provided by libMesh and PETSc?

Ben, it is interesting that you mention p-multigrid for fluid flow, because I was also thinking of that as an option. But I have tried something like using a lower-order upwinded flux to precondition a higher-order DG discretization with a Rusanov flux. Of course, I've never compared the efficiency of this scheme to any alternate methods, so it is still up for debate how good this idea is.

Anyway, thanks for the ideas so far, guys. I am still working on installing hypre with PETSc and configuring that with libMesh. Once I am done with that, I'll try algebraic multigrid and then compare to traditional preconditioning methods. In the meanwhile, if you have suggestions to perform geometric multigrid, do let me know.

Thanks,
Vijay

_____
From: Derek Gaston [mailto:friedmud@...]
Sent: Wednesday, November 12, 2008 4:49 PM
To: Vijay M
Cc: Roy Stogner; libmesh-users@...
Subject: Re: [Libmesh-users] Multigrid techniques with libmesh

We're currently working on some papers in this area... but I don't have anything to share yet. Here's what I will say. If you are using a matrix-free or Jacobian-free method and you specify your residual correctly... then (in theory) you will get the right answer (eventually....). In reality, you're using a Krylov method... and we all know that those aren't going anywhere without some preconditioning.

With libMesh and PETSc (and Trilinos... eventually) you can fill in the Jacobian matrix (by specifying a compute_jacobian() function), and even if you are solving matrix-free... you can precondition what's in the Jacobian matrix and use the result to precondition your matrix-free solve. Look at example 19. Note that there are both compute_residual() and compute_jacobian() functions that are attached to the nonlinear solver. By default example 19 will solve in a pure matrix-free manner (the same as specifying -snes_mf on the command line)... but if you pass "pre" on the command line, it will cause the Jacobian matrix to get filled using compute_jacobian, that matrix will be preconditioned... and the result will be used to precondition the matrix-free solve. This is equivalent to passing -snes_mf_operator on the command line when using PETSc (if you look at the code you will see that indeed, when using PETSc, that option is set when you pass "pre").

Now.... like I mentioned earlier, if your residual is correct.... you will get the right solution.... regardless of what you put into the Jacobian.... right?? Well... the truth is tricky. It turns out that as long as you don't put something _wrong_ into the Jacobian, you will be good to go. But.... you don't necessarily have to put the exact _right_ thing either. There is some grey area here... and what works for one system of equations won't work for another. In general, if you put something resembling the true Jacobian in there... it will greatly help your linear solves.

So... back to multigrid. It doesn't like nonsymmetric operators, right? So just don't put any into your Jacobian! Essentially, you'll just be preconditioning the symmetric part of your problem... but this might be sufficient to get good convergence behavior. Note... this is just a suggestion.... you can dream up all kinds of things to put into the Jacobian............

I hope that helps,

Derek

On Sat, Nov 8, 2008 at 12:01 AM, Vijay M <vijay.m@...> wrote:

Derek,

My Jacobian is very much unsymmetric, and so I am curious based on what Roy suggests. If BoomerAMG does not work with unsymmetric systems, this could be a problem.

When you get back, please do detail your findings; I would be very much interested to know about your experiences.

Thanks.
Vijay

> -----Original Message-----
> From: Roy Stogner [mailto:roystgnr@...]
> Sent: Friday, November 07, 2008 9:48 PM
> To: Derek Gaston
> Cc: libmesh-users@...
> Subject: Re: [Libmesh-users] Multigrid techniques with libmesh
>
> On Fri, 7 Nov 2008, Derek Gaston wrote:
>
> > The answer to #2 is YES... We use Hypre with BoomerAMG to precondition
> > our matrix-free solves all the time. Just build PETSc with Hypre
> > support and pass the following on the command line for your app:
> >
> > -snes_mf_operator -pc_type hypre -pc_hypre_type boomeramg
> >
> > This will use AMG on whatever you put into the Jacobian matrix and use
> > the result to precondition your matrix-free solve.
>
> Wait -- run that by me again?
>
> "matrix free solves" ... "use AMG on whatever you put into the
> jacobian matrix" ...
>
> Wouldn't that make it an "AMG free solve"?
>
> But seriously, how is AMG possible without a matrix to work with?
>
> Also: do you have BoomerAMG working on asymmetric problems? It was
> giving us real trouble (like, converging-to-the-wrong-result trouble)
> a couple years back when we tried to use it on a problem with an
> asymmetric Jacobian.
> ---
> Roy
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-11-12 22:56:59

> But.... you don't necessarily have to put the exact _right_ thing either.
> There is some grey area here... and what works for one system of equations
> won't work for another. In general, if you put something resembling the
> true jacobian in there... it will greatly help your linear solves.

From the finite volume world, it is often common that a 1st-order accurate discretization is used to precondition.... I'd be interested to see what work has been done with DG taking a similar approach -- preconditioning with lower P. It is a bit harder here, though, 'cus you need an additional restriction from your larger residual to your preconditioner. In FV the number of unknowns is the same for P=1 or P=2; it is just that the implicit operator stencil grows...

Ben
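[Editorial sketch: Ben's point — precondition a higher-order discretization with a cheaper lower-order one — can be illustrated in a toy 1D setting. This is plain Python, not FV or DG code, and all names are illustrative: a preconditioned Richardson iteration on a 4th-order finite-difference Laplacian A, using the standard 2nd-order tridiagonal Laplacian M as the "lower-order" preconditioner. For the shared interior Fourier modes the symbol ratio works out to (7 - cos θ)/6, which stays between 1 and 4/3, so the iteration should contract quickly.]

```python
import math

def thomas(lo, di, up, rhs):
    # tridiagonal solve: this is the cheap preconditioner application M^{-1} r
    n = len(di)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = up[0] / di[0], rhs[0] / di[0]
    for i in range(1, n):
        m = di[i] - lo[i] * cp[i - 1]
        cp[i] = up[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - lo[i] * dp[i - 1]) / m
    x = dp[:]
    for i in range(n - 2, -1, -1):
        x[i] -= cp[i] * x[i + 1]
    return x

n = 31
h = 1.0 / (n + 1)
h2 = h * h

# "higher-order" operator A: 4th-order FD approximation of -u'' with
# homogeneous Dirichlet BCs, falling back to the 2nd-order stencil in the
# first and last rows where the wide stencil would leave the domain
A = [[0.0] * n for _ in range(n)]
for j in range(n):
    if j == 0 or j == n - 1:
        A[j][j] = 2.0 / h2
        if j > 0: A[j][j - 1] = -1.0 / h2
        if j < n - 1: A[j][j + 1] = -1.0 / h2
    else:
        for off, coef in ((-2, 1.0), (-1, -16.0), (0, 30.0), (1, -16.0), (2, 1.0)):
            k = j + off
            if 0 <= k < n:          # out-of-range entries hit the zero BCs
                A[j][k] = coef / (12.0 * h2)

# "lower-order" preconditioner M: 2nd-order tridiagonal Laplacian
lo = [-1.0 / h2] * n
di = [2.0 / h2] * n
up = [-1.0 / h2] * n

b = [math.pi**2 * math.sin(math.pi * (j + 1) * h) for j in range(n)]
bnorm = max(abs(v) for v in b)

x = [0.0] * n
for _ in range(200):   # preconditioned Richardson: x <- x + M^{-1}(b - A x)
    Ax = [sum(a * v for a, v in zip(row, x)) for row in A]
    r = [bi - ai for bi, ai in zip(b, Ax)]
    z = thomas(lo, di, up, r)
    x = [xi + zi for xi, zi in zip(x, z)]

Ax = [sum(a * v for a, v in zip(row, x)) for row in A]
rel_res = max(abs(bi - ai) for bi, ai in zip(b, Ax)) / bnorm
```

The same structure carries Ben's FV observation: the unknowns are identical for both operators, only the implicit stencil used in the (cheap) solve shrinks.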
From: Derek Gaston <friedmud@gm...> - 2008-11-12 22:49:29

We're currently working on some papers in this area... but I don't have anything to share yet. Here's what I will say. If you are using a matrix-free or Jacobian-free method and you specify your residual correctly... then (in theory) you will get the right answer (eventually....). In reality, you're using a Krylov method... and we all know that those aren't going anywhere without some preconditioning.

With libMesh and PETSc (and Trilinos... eventually) you can fill in the Jacobian matrix (by specifying a compute_jacobian() function), and even if you are solving matrix-free... you can precondition what's in the Jacobian matrix and use the result to precondition your matrix-free solve. Look at example 19. Note that there are both compute_residual() and compute_jacobian() functions that are attached to the nonlinear solver. By default example 19 will solve in a pure matrix-free manner (the same as specifying -snes_mf on the command line)... but if you pass "pre" on the command line, it will cause the Jacobian matrix to get filled using compute_jacobian, that matrix will be preconditioned... and the result will be used to precondition the matrix-free solve. This is equivalent to passing -snes_mf_operator on the command line when using PETSc (if you look at the code you will see that indeed, when using PETSc, that option is set when you pass "pre").

Now.... like I mentioned earlier, if your residual is correct.... you will get the right solution.... regardless of what you put into the Jacobian.... right?? Well... the truth is tricky. It turns out that as long as you don't put something _wrong_ into the Jacobian, you will be good to go. But.... you don't necessarily have to put the exact _right_ thing either. There is some grey area here... and what works for one system of equations won't work for another. In general, if you put something resembling the true Jacobian in there... it will greatly help your linear solves.

So... back to multigrid. It doesn't like nonsymmetric operators, right? So just don't put any into your Jacobian! Essentially, you'll just be preconditioning the symmetric part of your problem... but this might be sufficient to get good convergence behavior. Note... this is just a suggestion.... you can dream up all kinds of things to put into the Jacobian............

I hope that helps,

Derek

On Sat, Nov 8, 2008 at 12:01 AM, Vijay M <vijay.m@...> wrote:

> Derek,
>
> My Jacobian is very much unsymmetric, and so I am curious based on what Roy
> suggests. If BoomerAMG does not work with unsymmetric systems, this
> could be a problem.
>
> When you get back, please do detail your findings; I would be very
> much interested to know about your experiences.
>
> Thanks.
> Vijay
>
> > -----Original Message-----
> > From: Roy Stogner [mailto:roystgnr@...]
> > Sent: Friday, November 07, 2008 9:48 PM
> > To: Derek Gaston
> > Cc: libmesh-users@...
> > Subject: Re: [Libmesh-users] Multigrid techniques with libmesh
> >
> > On Fri, 7 Nov 2008, Derek Gaston wrote:
> >
> > > The answer to #2 is YES... We use Hypre with BoomerAMG to precondition
> > > our matrix-free solves all the time. Just build PETSc with Hypre
> > > support and pass the following on the command line for your app:
> > >
> > > -snes_mf_operator -pc_type hypre -pc_hypre_type boomeramg
> > >
> > > This will use AMG on whatever you put into the Jacobian matrix and use
> > > the result to precondition your matrix-free solve.
> >
> > Wait -- run that by me again?
> >
> > "matrix free solves" ... "use AMG on whatever you put into the
> > jacobian matrix" ...
> >
> > Wouldn't that make it an "AMG free solve"?
> >
> > But seriously, how is AMG possible without a matrix to work with?
> >
> > Also: do you have BoomerAMG working on asymmetric problems? It was
> > giving us real trouble (like, converging-to-the-wrong-result trouble)
> > a couple years back when we tried to use it on a problem with an
> > asymmetric Jacobian.
> > ---
> > Roy
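[Editorial sketch: Derek's point above — the residual determines *which* answer you converge to, while the "Jacobian" you supply only has to resemble the truth well enough for the iteration to converge — can be seen on a toy problem. This is plain Python and has nothing to do with PETSc's actual SNES internals; the 2x2 system and all names are made up. The iteration deliberately uses only the diagonal of the true Jacobian, yet still lands on the exact root.]

```python
# residual of the toy nonlinear system; f(u) = 0 has the root u = (1, 2)
def residual(u):
    return [u[0]**2 + u[1] - 3.0,
            u[0] + u[1]**2 - 5.0]

# deliberately *inexact* "Jacobian": keep only the diagonal 2*u_i terms and
# drop the off-diagonal coupling -- analogous to feeding an approximate
# operator to the preconditioner in a matrix-free Newton-Krylov solve
def approx_jacobian_diag(u):
    return [2.0 * u[0], 2.0 * u[1]]

u = [1.5, 1.5]
for _ in range(100):
    f = residual(u)
    d = approx_jacobian_diag(u)
    # Newton-like update with the approximate (diagonal) Jacobian
    u = [ui - fi / di for ui, fi, di in zip(u, f, d)]

final_res = max(abs(v) for v in residual(u))
```

Because the update is driven by the true residual, the fixed point is the true root; the wrong-but-not-too-wrong Jacobian only slows convergence from quadratic to linear, which is exactly the grey area Derek describes.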
From: Derek Gaston <friedmud@gm...> - 2008-11-12 22:34:03

First: my apologies for not getting back to you sooner...

Second: a quick warning on the variational smoother code... it was originally a C code developed by a PhD student that I picked up, wrapped in C++, and stuck into libMesh. I did various things to make it interface better with libMesh... but the code itself is still somewhat rough. Just keep that in mind when working with/in it.

Now... on to the meat: internal boundaries are not currently preserved.... but it wouldn't take much to hack something together that would preserve them. If you look at the readgr() method you'll see that in 3D there is a mask bit that is set to 1 if a node is on a boundary. This will keep the variational smoother from moving that node. It should be trivial to extend that functionality to allow you to specify a list of nodes that you don't want to move and set those mask bits to 1 as well... this would allow you to preserve an internal boundary.

I did a bit of work to make AMR compatible with the VariationalSmoother.... it really should work ok.

Note that the vsmoother is inherently a _serial_ process. You can use it in parallel... but you will literally be redistributing the mesh separately on each processor. This is ok as long as the input to the smoother is exactly the same on every processor... it's just redundant calculation. This also means that you can't use it with ParallelMesh...

Let me know if you have more questions.

Derek

On Thu, Oct 30, 2008 at 3:57 PM, Francis Moore <fmoore@...> wrote:

> Hi,
>
> I'm trying to solve the compressible Euler equations in Lagrangian
> coordinates with two fluids separated by a free surface. There is no mixing
> between the two fluids. The mesh moves with the fluids, and the quality
> of the mesh degrades rapidly with the deformation. So I need tools to
> correct the mesh, just like your AMR tools and your VariationalSmoother
> functions. But I have to know whether the AMR and the VariationalSmoother
> support the presence of an internal boundary (a free surface) and how they
> deal with it.
>
> So I have two main questions:
> 1- Is it possible to add a free surface (or an internal boundary) in a
> mesh? If yes, how can I add a free surface in libMesh?
> 2- If I suppose that I have an internal boundary in the mesh and I use the
> VariationalSmoother and the AMR, what happens to my internal boundary? Is
> it preserved, or will the algorithms destroy it or move it?
>
> Thank you,
>
> Francis Moore
> INRS-EMT, Varennes (Québec)
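[Editorial sketch: the mask-bit mechanism Derek describes — pin the boundary nodes (and, with a small hack, internal-boundary nodes) so the smoother never moves them — is easy to see in miniature. The code below is *not* the variational smoother; it uses simple Laplacian smoothing on a toy 5x5 node grid, but the masking idea is the same: masked nodes never move, so an internal interface (here the i == 2 column, standing in for a free surface) survives smoothing. All names are illustrative.]

```python
# 5x5 grid of node coordinates; mask == True means "do not move this node"
nx, ny = 5, 5
pts = [[(i / (nx - 1), j / (ny - 1)) for i in range(nx)] for j in range(ny)]

# mask the outer boundary plus the i == 2 column, treated as an internal
# interface between two non-mixing materials
mask = [[i in (0, nx - 1) or j in (0, ny - 1) or i == 2
         for i in range(nx)] for j in range(ny)]

# perturb the unmasked nodes so the smoother has something to fix
for j in range(ny):
    for i in range(nx):
        if not mask[j][i]:
            x, y = pts[j][i]
            pts[j][i] = (x + 0.07 * ((-1) ** (i + j)), y + 0.05 * ((-1) ** i))

pinned = {(i, j): pts[j][i] for j in range(ny) for i in range(nx) if mask[j][i]}

# Jacobi-style Laplacian smoothing: each unmasked node moves to the average
# of its four neighbors; masked nodes are simply never reassigned
for _ in range(200):
    new = [row[:] for row in pts]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            if not mask[j][i]:
                nb = [pts[j][i - 1], pts[j][i + 1], pts[j - 1][i], pts[j + 1][i]]
                new[j][i] = (sum(p[0] for p in nb) / 4.0,
                             sum(p[1] for p in nb) / 4.0)
    pts = new

# masked nodes (outer boundary and internal interface) must be untouched
moved = max(abs(pts[j][i][0] - pinned[(i, j)][0]) +
            abs(pts[j][i][1] - pinned[(i, j)][1])
            for (i, j) in pinned)
```

With the interface pinned, smoothing relaxes each subdomain independently back toward a regular grid, which is the behavior you would want on either side of a free surface.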