From: Vijay S. M. <vi...@gm...> - 2008-11-07 15:51:10
Hi there,

Before I even start, I know this is probably going to be a loaded question, but nevertheless, any help would be appreciated.

Background: I'm working on the solution of large-scale PDE systems, using libMesh for discretization and PETSc for the solution procedures. My physics objects are derived from the libMesh EquationSystems data structure and can contain one or more ImplicitSystems. I use a Jacobian-free Newton-Krylov scheme for my nonlinear solve, and hence the efficiency of the method comes from good preconditioning. I've tried using ILU to precondition my preconditioner (a linearized version of the original nonlinear system), and this works well for serial runs. But it does not scale when I run in parallel.

So my questions are:

1) Is it possible to use libMesh objects to perform geometric multigrid preconditioning and avoid matrix creation entirely except on the coarsest level?! I've seen some of Ben's presentations that mention using multigrid for Stokes problems etc., and so I am curious what this would entail... For example, would I have to create multiple EquationSystems objects, since the mesh is not associated with just one system? Then for every multigrid level I would have a new instance of the physics system, but on a coarser mesh level. Is this how you would implement this?!

2) If I can afford to create the matrix on the finest level, can we use algebraic multigrid procedures to precondition this system in parallel? Does anyone know how this scales?

3) Also, I was reading about Prometheus (http://www.columbia.edu/~ma2325/prom_intro.html), and it looks promising both as a black-box solver and preconditioner when the matrix is available, and as geometric multigrid in 2D (not sure about 3D). Has anyone on this list used this package with PETSc and/or libMesh?!

Again, like I said before, there are a lot of things here you can comment on. Feel free to write your thoughts, because I want to get as much input as possible before choosing anything specific.

Thanks,
Vijay
From: Derek G. <fri...@gm...> - 2008-11-08 03:37:23
Just a quick reply until I get back to a computer on Monday...

The answer to #2 is YES... We use Hypre with BoomerAMG to precondition our matrix-free solves all the time. Just build PETSc with Hypre support and pass the following on the command line for your app:

-snes_mf_operator -pc_type hypre -pc_hypre_type boomeramg

This will use AMG on whatever you put into the jacobian matrix and use the result to precondition your matrix-free solve.

Derek

Sent from my iPhone
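[For reference, a full invocation of the kind described above might look like the following. `-snes_mf_operator` and the hypre options are the ones from the message; the application name is a placeholder, and the monitor flags and `--download-hypre` configure option are illustrative additions.]

```shell
# Build PETSc with Hypre support first, e.g. (one of several ways):
#   ./configure --download-hypre [other configure options...]
#
# Then run the application (./my_app is a placeholder name):
./my_app -snes_mf_operator \
         -pc_type hypre -pc_hypre_type boomeramg \
         -snes_monitor -ksp_monitor
# -snes_monitor / -ksp_monitor are optional and just print the nonlinear
# and linear convergence histories, which helps when comparing preconditioners.
```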
From: Roy S. <roy...@ic...> - 2008-11-08 03:48:03
On Fri, 7 Nov 2008, Derek Gaston wrote:

> The answer to #2 is YES... We use Hypre with BoomerAMG to precondition
> our matrix free solves all the time. Just build petsc with Hypre
> support and pass the following on the command-line for your app:
>
> -snes_mf_operator -pc_type hypre -pc_hypre_type boomeramg
>
> This will use AMG on whatever you put into the jacobian matrix and use
> the result to precondition your matrix free solve.

Wait - run that by me again?

"matrix free solves" ... "use AMG on whatever you put into the jacobian matrix" ...

Wouldn't that make it an "AMG free solve"?

But seriously, how is AMG possible without a matrix to work with?

Also: do you have BoomerAMG working on asymmetric problems? It was giving us real trouble (like, converging to the wrong result trouble) a couple years back when we tried to use it on a problem with an asymmetric jacobian.
---
Roy
From: Vijay M <vi...@gm...> - 2008-11-08 07:01:45
Derek,

My Jacobian is very much unsymmetric, so I am curious about what Roy suggests. If BoomerAMG does not work with unsymmetric systems, this could be a problem.

When you get back, please do give details on your findings; I would be very interested to hear about your experiences.

Thanks.
Vijay
From: Derek G. <fri...@gm...> - 2008-11-12 22:49:29
We're currently working on some papers in this area... but I don't have anything to share yet. Here's what I will say.

If you are using a matrix-free or Jacobian-free method and you specify your residual correctly... then (in theory) you will get the right answer (eventually....). In reality, you're using a Krylov method... and we all know that those aren't going anywhere without some preconditioning.

With libMesh and PETSc (and Trilinos... eventually) you can fill in the Jacobian matrix (by specifying a compute_jacobian() function), and even if you are solving matrix-free... you can precondition what's in the Jacobian matrix and use the result to precondition your matrix-free solve.

Look at example19. Note that there are both compute_residual() and compute_jacobian() functions that are attached to the nonlinear solver. By default example19 will solve in a pure matrix-free manner (the same as specifying -snes_mf on the command line)... but if you pass "-pre" on the command line it will cause the jacobian matrix to get filled using compute_jacobian(), that matrix will be preconditioned... and the result will be used to precondition the matrix-free solve. This is equivalent to passing "-snes_mf_operator" on the command line when using PETSc (if you look at the code you will see that indeed, when using PETSc, that option is set when you pass "-pre").

Now... like I mentioned earlier, if your residual is correct... you will get the right solution... regardless of what you put into the jacobian... right?? Well... the truth is tricky. It turns out that as long as you don't put something _wrong_ into the jacobian you will be good to go. But... you don't necessarily have to put the exact _right_ thing either. There is some grey area here... and what works for one system of equations won't work for another. In general, if you put something resembling the true jacobian in there... it will greatly help your linear solves.

So... back to multigrid. It doesn't like non-symmetric operators, right? So just don't put any into your jacobian! Essentially, you'll just be preconditioning the symmetric part of your problem... but this might be sufficient to get good convergence behavior. Note... this is just a suggestion... you can dream up all kinds of things to put into the Jacobian...

I hope that helps,
Derek
From: Benjamin K. <ben...@na...> - 2008-11-12 22:56:59
> But.... you don't necessarily have to put the exact _right_ thing either.
> There is some grey area here... and what works for one system of equations
> won't work for another. In general, if you put something resembling the
> true jacobian in there... it will greatly help your linear solves.

From the finite volume world, it is quite common for a 1st-order accurate discretization to be used as the preconditioner.... I'd be interested to see what work has been done with DG taking a similar approach - preconditioning with lower P. It is a bit harder here, though, 'cus you need an additional restriction from your larger residual to your preconditioner. In FV the number of unknowns is the same for P=1 or P=2; it is just that the implicit operator stencil grows...

-Ben
From: Vijay M <vi...@gm...> - 2008-11-12 23:28:22
Derek,

Thanks for the reply. I would be very interested to read the paper when it is published, so do keep me updated!

I already have options to run my code with -snes_mf and -snes_mf_operator so that a user-specified preconditioner (the approximate Jacobian), like you mentioned, is used. But from an efficiency perspective, for nonlinear problems, I was thinking that a geometric multigrid algorithm might be competitive compared to building a linearized version of the Jacobian and applying ILU or algebraic multigrid to solve it. From my initial studies, the ILU route does not seem to scale as I increase the number of processors, and so the option to use either geometric or algebraic multigrid for elliptic-elliptic coupled systems seems attractive. I can use block-Jacobi preconditioning, in which case each physics block is symmetric, and there I can possibly use a multigrid strategy. Also, for incompressible flows, I can reduce the coupled equations to a pressure Poisson equation and use multigrid again to precondition that physics block.

Of course, I do not have any multigrid support in my framework yet, and so I was wondering if there was a cleaner and more elegant way to perform geometric multigrid. My main problem is that since the mesh is associated with the EquationSystems, when I coarsen the mesh, all my systems are reinited and reallocated. And so it just seems too expensive to perform a couple of cycles this way. If this is the only route to geometric multigrid, then I will settle for algebraic multigrid, in which case I'll build the linearized Jacobian matrix for each physics block and call hypre to precondition it. Now, what do you suggest would be optimal based on the data structures provided by libMesh and PETSc?

Ben, it is interesting that you mention p-multigrid for fluid flow, because I was also thinking of that as an option. In fact, I have tried something like using a lower-order upwinded flux to precondition a higher-order DG discretization with a Rusanov flux. Of course, I've never compared the efficiency of this scheme to any alternate methods, and so it is still up for debate how good this idea is.

Anyway, thanks for the ideas so far, guys. I am still working on installing hypre with PETSc and configuring that with libMesh. Once I am done with that, I'll try algebraic multigrid and compare it to traditional preconditioning methods. In the meanwhile, if you have suggestions on how to perform geometric multigrid, do let me know.

Thanks,
Vijay
From: Roy S. <roy...@ic...> - 2008-11-12 23:43:13
On Wed, 12 Nov 2008, Vijay M wrote:

> Of course, I do not have any multi-grid support in my framework yet and so
> was wondering if there was a cleaner and elegant way to perform geometric
> multigrid. My main problem is that since mesh is associated with the
> EquationSystem, when I coarsen the mesh, all my systems are reinited and
> reallocated. And so it just seems too expensive to perform couple of cycles
> this way.

It definitely would be. We wouldn't be averse to making library changes to enable CPU-efficient geometric multigrid, but the amount of work you'd have to do might not be programmer-efficient.

> In the meanwhile, if you have suggestions to perform geometric
> multigrid, do let me know.

I've never done it on any but toy code, so I'm not sure what the most efficient way to do things would be. For doing multiple multigrid cycles per Newton step I think you'd want to store all the projection & restriction operators and the coarse grid jacobians as sparse matrices themselves to avoid having to do redundant FEM integrations. But that sounds like it would roughly double your memory use.

And then there's the question of what to do with the Mesh and DoFMap. We'd finally use the "subactive" element capabilities to provide different active level views of the same mesh, but DoF indexing (especially with non-Lagrange elements!) would be tricky; we'd need to keep track of per-level global dof indices, not just the (maximum of) 2 that we use now.
---
Roy
From: Vijay S. M. <vi...@gm...> - 2008-11-13 00:49:27
> We wouldn't be averse to making library
> changes to enable CPU-efficient geometric multigrid

That is good to know. I'll try my best to suggest ideas to make the implementation of geometric multigrid as easy as possible, as I go through the process of using the libMesh data structures.

Like I mentioned, I use a matrix-free methodology and provide an option to build the matrix when necessary. So really, I could build the matrix only for the coarsest grid the user requires, and that would hopefully be a lot smaller than on the finer mesh. This smaller system can be solved very accurately, either by direct or iterative methods, and can be used inside the Newton iteration to precondition the linear iteration.

I am curious about the concept of the 'view' of a mesh. For example, when I have an unstructured grid, and I coarsen uniformly twice and then refine uniformly twice, would I get the exact same mesh?! I ask this because, if you use external meshers to create the initial mesh, can the libMesh mesh handlers perform this operation in a pseudo-inverse fashion? If the dof_map and mesh can provide views that are consistent with the refinement or coarsening based on an initial mesh, then I do think this should be possible without too much programming effort for an external user. Of course, you would also have to propagate the mesh_data (or again, am I the only one using this?), which stores material information per element, or create a view of it also.

Roy and Ben: if you can provide a subset view of the total mesh and dof_map, it would be tremendously helpful for creating a nice geometric multigrid framework with very few changes to my existing design. Also, I think it will be useful for anyone who wants to pursue the same idea for their problem of interest. Feel free to let me know if you need any help from my side.
From: Roy S. <roy...@ic...> - 2008-11-13 01:02:13
On Wed, 12 Nov 2008, Vijay S. Mahadevan wrote:

> I am curious on the concept of the 'view' of a mesh.

Basically, we store hierarchic meshes like this:

*-----------*-----------*
*-----*=====*=====*=====*
*==*==*

Where the mesh is 1D, there are 5 active elements and 3 ancestor elements.

We can also store meshes like this:

*-----------*===========*
*=====*=====*.....*.....*
*..*..*

Where there are 3 active elements, 1 ancestor element, and 4 "subactive" elements. Looping over active elements just gets you the ones in the active view, but the subactive elements and their associated Dof objects still exist. We use this for coarsening non-Lagrange elements properly, but it could make geometric multigrid possible too.

> For example when I have an unstructured grid, and when I coarsen
> uniformly twice and refine uniformly twice, would I get the exact
> same mesh?!

No. For instance, the second mesh above is a uniform coarsening (so far as such a thing exists) of the first mesh, but uniformly refining it will produce two new elements that weren't in the first mesh.

We could, however, create a "rerefine" method that jumps you straight back to the finest mesh by making any active element with children into an ancestor and making any subactive element without children active.

> If the dof_map and mesh can provide views that are consistent with
> the refinement or coarsening based on an initial mesh, then I do think
> this should be possible without too much programming effort for an
> external user.

Ideally the external user shouldn't need to do anything that he isn't already other than set some solver options or choose a different solver object. But that implies the existence of an internal developer with the time and expertise to do the library work, and such a person may not exist right now.

> Of course, you would also have to propagate the mesh_data (Or again,
> am I the only one using this ?) which stores material
> information/element or create a view of it also.

You may be the only one using this. ;-) But yes, we'd want to handle its restriction as well.
---
Roy
From: Benjamin K. <ben...@na...> - 2008-11-13 01:43:48
>> Of course, you would also have to propagate the mesh_data (Or again,
>> am I the only one using this ?) which stores material
>> information/element or create a view of it also.
>
> You may be the only one using this. ;-) But yes, we'd want to handle
> its restriction as well.

If your primary use of the mesh_data is to define material information, I'd wonder if anything special needs to be done in the case of refinement.

As Roy mentions, when an element is refined, it is set to 'inactive' (or 'subactive') because it has children, but it is not removed from the mesh. And specifically, the level-0 elements (the ones you read in before adaptivity) are still there too.

You can get the 'top parent' (the coarsest element, read from file, before any refinement) of any element by calling 'elem->top_parent()' - and these are the elements that your mesh_data is probably defined in terms of? I'm guessing your initial mesh resolved material interfaces by necessity, right? So when you refine an element, all the children will be the same material? If so, this is a perfect use for 'top_parent()'.

We actually also use this notion with boundary conditions associated with element faces. If a parent has a bc on face 2, then all children touching that face inherit the bc. We use this in the library by actually only storing BCs for the level-0 elements.

Not to be a MeshData hater, but I wonder if your purposes would be better served by the element subdomain flag? It is an unsigned char in each element that you can use to mark elements for whatever purpose you wish... You could then have a function or table lookup that provides material property data based on element subdomain id. What's more, you can exploit this in one of our mesh 'views' - just as the 'active_local_element' iterators provide just that, it would be easy for us to add 'subdomain_active_local_element' iterators, where you could loop over all the active copper elements, then the steel, whatever... on the current processor.

-Ben
From: Vijay S. M. <vi...@gm...> - 2008-11-13 03:28:59
> Not to be a MeshData hater, but I wonder if your purposes would be better
> served by the element subdomain flag? [...] it would be easy for us to add
> 'subdomain_active_local_element' iterators, where you could loop over all
> the active copper elements, then the steel, whatever... on the current
> processor.

That is very interesting. I did not realize that there was an iterator over specific elements on the current processor based on a flag. I would very much use this if the idea were extended to, say, an array of unsigned ints rather than just the one subdomain flag. I want to store the physical material id, the original surface or volume id (which helps me in visualization), and any other helpful parameters that I do not foresee right now. This is the reason why I like mesh_data: it allows you to add a std::vector to each element and hence is a lot more flexible. But if this concept can be implemented in the mesh itself, that would be a lot more helpful, since I would not have to worry about propagating it whenever there is a change in the mesh.
From: Vijay M <vi...@gm...> - 2008-11-13 02:15:03
|
Roy, thanks for the very helpful demonstration of the coarsen and refine flagging in libMesh, and for clarifying my doubts on the view of a mesh. I have another question that stems from your explanation. If I start from a mesh, refine it once, and coarsen it once, do you still store the old refined dofs in the mesh and just make them inactive? When exactly would you release this allocation? Also, this calls EquationSystems::reinit() in order to associate all system variables with the new dofs. Is this correct? I can see this getting very messy quickly, since there is way too much bookkeeping needed to get the V-cycle right.

> Ideally the external user shouldn't need to do anything that he isn't
> already other than set some solver options or choose a different
> solver object.

I do not use libMesh or PETSc as a black box, but have inherited from the structures exposed by these libraries and have my own wrappers on top to give me flexibility. So I do not mind getting my hands dirty a little, if need be, in order to get a geometric multigrid framework working. The question, though, is how much of this can be done without changing too many interfaces in existing libMesh objects?!

Again, I appreciate all the help Roy.

> -----Original Message-----
> From: Roy Stogner [mailto:roy...@ic...]
> Sent: Wednesday, November 12, 2008 7:02 PM
> To: Vijay S. Mahadevan
> Cc: lib...@li...
> Subject: Re: [Libmesh-users] Multigrid techniques with libmesh
>
> On Wed, 12 Nov 2008, Vijay S. Mahadevan wrote:
>
> > I am curious on the concept of the 'view' of a mesh.
>
> Basically, we store hierarchic meshes like this:
>
> *-----------*-----------*
> *-----*=====*=====*=====*
> *==*==*
>
> Where the mesh is 1D, there are 5 active elements and 3 ancestor
> elements.
>
> We can also store meshes like this:
>
> *-----------*===========*
> *=====*=====*.....*.....*
> *..*..*
>
> Where there are 3 active elements, 1 ancestor element, and 4
> "subactive" elements. Looping over active elements just gets you the
> ones in the active view, but the subactive elements and their
> associated Dof objects still exist. We use this for coarsening
> non-Lagrange elements properly, but it could make geometric multigrid
> possible too.
>
> > For example when I have an unstructured grid, and when I coarsen
> > uniformly twice and refine uniformly twice, would I get the exact
> > same mesh?!
>
> No. For instance, the second mesh above is a uniform coarsening (so
> far as such a thing exists) of the first mesh, but uniformly refining
> it will produce two new elements that weren't in the first mesh.
>
> We could, however, create a "rerefine" method that jumps you straight
> back to the finest mesh by making any active element with children
> into an ancestor and making any subactive element without children
> active.
>
> > If the dof_map and mesh can provide views that are consistent with
> > the refinement or coarsening based on an initial mesh, then I do think
> > this should be possible without too much programming effort for an
> > external user.
>
> Ideally the external user shouldn't need to do anything that he isn't
> already other than set some solver options or choose a different
> solver object.
>
> But that implies the existence of an internal developer with the time
> and expertise to do the library work, and such a person may not exist
> right now.
>
> > Of course, you would also have to propagate the mesh_data (Or again,
> > am I the only one using this ?) which stores material
> > information/element or create a view of it also.
>
> You may be the only one using this. ;-) But yes, we'd want to handle
> its restriction as well.
> ---
> Roy |
From: Vijay M <vi...@gm...> - 2008-11-13 03:24:07
|
> > For example when I have an unstructured grid, and when I coarsen
> > uniformly twice and refine uniformly twice, would I get the exact
> > same mesh?!
>
> No. For instance, the second mesh above is a uniform coarsening (so
> far as such a thing exists) of the first mesh, but uniformly refining
> it will produce two new elements that weren't in the first mesh.

I do not understand this. I thought you maintained the level number for each element, and so once I specify an initial mesh, all elements in it are treated as level_0. Now if I refine it once, all new elements get the level_1 flag and become active, while the original elements are still in memory but inactive. And now if I coarsen the mesh uniformly, then the level_1 elements become inactive and the level_0 elements become active.

Isn't this how the refine-coarsen methodology is implemented? Or am I way off in my understanding? |
From: Roy S. <roy...@ic...> - 2008-11-13 03:53:32
|
On Wed, 12 Nov 2008, Vijay M wrote: >>> For example when I have an unstructured grid, and when I coarsen >>> uniformly twice and refine uniformly twice, would I get the exact >>> same mesh?! >> >> No. For instance, the second mesh above is a uniform coarsening (so >> far as such a thing exists) of the first mesh, but uniformly refining >> it will produce two new elements that weren't in the first mesh. > > I do not understand this. I thought you maintained the level number for each > element and so once I specify an initial mesh, all elements in it are > treated as level_0. Now if I refine it once, all new elements get the > level_1 flag and become active while the original elements still are in > memory but inactive. And now if I coarsen the mesh uniformly, then the > level_1 elements become inactive and level_0 elements become active. > > Isn't this how the refine-coarsen methodology implemented ? Or am I way off > in my understanding ? That's exactly how refine-coarsen is implemented, and refine*N followed by coarsen*N will always give you the same mesh. But coarsen*N followed by refine*N will not. Again, take a look at mesh 2 in my last email to see why. --- Roy |
From: Vijay M <vi...@gm...> - 2008-11-13 04:04:20
|
Ahh. Now I see what you mean. Yes, in order to avoid it, I'm going to make the conscious decision to start from a coarse mesh that resolves all the boundaries and material interfaces correctly and then use that as the coarsest mesh. Now, if I refine this uniformly twice and keep that as my true mesh, I can coarsen it twice and get back to the original mesh I started with. I hope that is correct.

If this makes sense, I think I have a nice place to start working on this. And as long as coarsen and refine are pseudo-inverse operations, I should be able to propagate material data and boundary data correctly without any problems and perform the whole operation matrix-free except for the coarsest mesh solve.

Again, coming back to the allocation, if the mesh and dof_map are not reallocated essentially, are the solution and local_solution vectors the only ones I need to worry about based on the mesh distribution? I also assume that when you perform the refine and coarsen operations, it would still work in parallel, since every proc only has to take care of its own elements (hmm, I'm not sure about this though). What I am getting at here is that when I call update() on the system, would the new dof mapping be used correctly to get the scatter list? If this is the case, that is fantastic. |
From: Roy S. <roy...@ic...> - 2008-11-13 04:21:53
|
On Wed, 12 Nov 2008, Vijay M wrote: > Yes, in order to avoid it, I'm going to make the conscious decision to start > from a coarse mesh that resolves all the boundaries and material interfaces > correctly and then use that as the coarsest mesh. Now, if I refine this > uniformly twice and keep that as my true mesh, I can coarsen it twice and > get back to the original mesh I started with. I hope that is correct. Yes, definitely. > If this makes sense, I think I have a nice place to start working on this. > And as long as coarsen and refine are pseudo-inverse operations, Hmm... they still may not be. If you do a System::reinit() after coarsen/refine, the Mesh will also be repartitioned, and I don't know that our partitioners (the default, METIS/ParMETIS, in particular) are guaranteed to give you the same partitioning every time you give them the same mesh - I suspect their results will depend on the previous partitioning/ordering, which will be scrambled by the coarsen/refine. The different partitioning will then give you different DoF indexing. Really, this needs internal library development - we need to "coarsen without *really* coarsening", so to speak, and that'll require at least some DofMap and EquationSystems modification. > Again, coming back to the allocation, if the mesh and dof_map are not > reallocated essentially, Not sure what you mean by that. > is the solution and local_solution vectors the only > ones I need to worry about based on the mesh distribution? A System can have any number of vectors added, in some cases (like transient systems' old solution vectors) "behind the user's back". But they'll all get projected correctly (albeit perhaps too slowly) by an EquationSystems::reinit(). > I also assume that when you perform the refine and coarsen > operations, it would still work in parallel since every proc only > has to take care of its own elements (hmm, I'm not sure about this > though). Parallel adaptivity requires some synchronization. 
We don't do that as efficiently as we should yet, but the results should be correct. > What I am getting at here is that when I call update() on system, > would the new dof mapping be used correctly to get the scatter list Yes, but only if you've done an EquationSystems::reinit() first - which as I said might also trash your partitioning. --- Roy |
From: Benjamin K. <ben...@na...> - 2008-11-13 16:17:41
|
>> If this makes sense, I think I have a nice place to start working on this.
>> And as long as coarsen and refine are pseudo-inverse operations,
>
> Hmm... they still may not be. If you do a System::reinit() after
> coarsen/refine, the Mesh will also be repartitioned, and I don't know
> that our partitioners (the default, METIS/ParMETIS, in particular) are
> guaranteed to give you the same partitioning every time you give them
> the same mesh - I suspect their results will depend on the previous
> partitioning/ordering, which will be scrambled by the coarsen/refine.
> The different partitioning will then give you different DoF indexing.

So this all rang a bell...

http://sourceforge.net/mailarchive/forum.php?thread_name=C3E2413C.496F%25benjamin.kirk%40nasa.gov&forum_name=libmesh-devel

ParMETIS can either compute the partition of a graph independent of its distribution, or repartition with some heuristic to minimize the amount of redistribution required while still creating a quality partition. Obviously the latter will create a distribution which is dependent on the initial partitioning, but what about the former?

The problem we ran into is that while the partitioning depends only on the input graph, in our case the input graph was dependent on the mesh distribution, because the element global indices were tied to the partitioning.

See the thread; there was some debate as to whether repeated calls to the partitioner on the same mesh with the same number of processors should return the same partitioning... Apparently I thought it was important enough to implement a fix in the ParMETIS partitioner.

What we do now is compute a unique, global ordering of the elements based on their Hilbert key ordering. This ordering is independent of the actual element parallel distribution. The graph is then built in terms of this ordering, and as far as I can tell the same partitioning of the graph will always result, independent of the graph distribution.

-Ben |
From: Vijay S. M. <vi...@gm...> - 2008-11-13 22:19:10
|
Sorry about the late reply. I just wrote a small toy code to see if the coarsening and refining produce the same distribution using ParMETIS, and they do! I've attached the code to this mail; feel free to test it with some of your own mesh files to confirm.

Ben, thanks for the update on the graph distribution, and all I can say is that is just good forward thinking! I guess this hopefully brings me one step closer to doing geometric multigrid now... |
From: Vijay S. M. <vi...@gm...> - 2008-11-13 05:07:34
|
> Hmm... they still may not be. If you do a System::reinit() after
> coarsen/refine, the Mesh will also be repartitioned, and I don't know
> that our partitioners (the default, METIS/ParMETIS, in particular) are
> guaranteed to give you the same partitioning every time you give them
> the same mesh - I suspect their results will depend on the previous
> partitioning/ordering, which will be scrambled by the coarsen/refine.
> The different partitioning will then give you different DoF indexing.

But really, do you think there is a need for redistribution of all the dofs? I was thinking that each processor would handle coarsen/refine on its own, and only the boundary elements need to update the data as to which other processor's elements it is still connected to or needs data from. You could simply provide an option to handle this case if it was a multigrid context. Or am I trying to blow over a lot of things without knowing what is happening internally?

If that is a code-breaking change, I will verify with the developers of ParMETIS whether there is a way to obtain the initial distribution of the mesh exactly back based on some kind of constraints. I hope this is possible.

>> Again, coming back to the allocation, if the mesh and dof_map are not
>> reallocated essentially,
>
> Not sure what you mean by that.

What I meant was that essentially you add or remove points from the mesh and dof_map connectivity, but this is not, in a sense, reallocating new memory as in the case of a solution vector, since I need all the coarse and fine grid residuals in order to perform the prolongation and restriction. It is just something that came to mind, and I thought it is another change that would be required.

Well, thanks for all the help so far, guys. I've got a very good idea of where I would need to get my hands dirty and what I might have to do in case I start going this route. Have a good night, and I'll get back on this topic tomorrow, with maybe fresh ideas.
~Vijay |
From: Roy S. <roy...@ic...> - 2008-11-13 13:18:18
|
On Wed, 12 Nov 2008, Vijay S. Mahadevan wrote: > But really, do you think there is a need for redistribution of all the > dofs ? I was thinking that each processor would handle coarsen/refine > on its own and only the boundary elements need to update the data as > to which other processor's elements it is still connected or needs > data from. You could simply provide an option to handle this case if > it was a multigrid context. Or am I trying to blow over a lot of > things without knowing what is happening internally ? I think so. I'd be happy to be proven wrong, though. > If that is a code-breaking change, I will verify with the developers > of ParMetis if there is a way to obtain the initial distribution of > the mesh exactly back based on some kind of constraints. I hope this > is possible. This isn't the right thing to do - for multigrid we shouldn't be repartitioning at all, and we can do that by hacking it as an option on to MeshBase. Don't bother them about it. --- Roy |
From: Benjamin K. <ben...@na...> - 2008-11-13 03:59:08
|
>>> For example when I have an unstructured grid, and when I coarsen
>>> uniformly twice and refine uniformly twice, would I get the exact
>>> same mesh?!

One issue is we cannot coarsen below the initial, level-0 mesh.

So if you start with a mesh, uniformly refine twice, and then uniformly coarsen two times, you will get back to the initial mesh. I think there was a little confusion - you first asked 'coarsen twice then refine twice' but then mentioned

> I refine it once, all new elements get the
> level_1 flag and become active while the original elements still are in
> memory but inactive. And now if I coarsen the mesh uniformly, then the
> level_1 elements become inactive and level_0 elements become active.

the latter of which is certainly true.

What's more, if you have a mesh and uniformly refine it N times, you will have N+1 total levels in the mesh, and you can access each level directly with a 'level element iterator.' We just usually use active local ones, especially in the examples.

-Ben |
From: Vijay M <vi...@gm...> - 2008-11-13 04:10:30
|
Ben, Thanks for the clarification. Now I get what Roy was suggesting. I think the libMesh 'Mesh' and 'Elem' data structures already have enough base routines to get me started on creating a geometric multigrid algorithm.

Another question that I have in mind is whether I would need to create multiple EquationSystems objects in order to maintain different mesh levels. I understand from your answers that this might not be entirely necessary. And so would I then create a new System with new dofs attached to the current level? In that case, I would have an active System for every active level of the mesh. Is this what you had in mind? |