From: Roy S. <roy...@ic...> - 2006-03-27 21:41:26
|
It looks to me like neither the Lagrange compute_constraints() nor the
general compute_proj_constraints() functions are set up to handle more
than a one-level mismatch between adjoining elements. Am I wrong - is
anyone running computations on such meshes?
---
Roy Stogner |
From: Benjamin S. K. <ben...@na...> - 2006-03-27 22:13:15
|
Yeah, sure, they should work. I'll try to track down some images, but
I think you are looking in the wrong place. compute_constraints()
should work based on one-level differences, and in the end
build_constraint_matrix() handles recursive constraints.

-Ben

On Mon, 2006-03-27 at 15:41 -0600, Roy Stogner wrote:
> It looks to me like neither the Lagrange compute_constraints() nor the
> general compute_proj_constraints() functions are set up to handle more
> than a one level mismatch between adjoining elements. Am I wrong - is
> there anyone running computations on such meshes?
> ---
> Roy Stogner
>
> _______________________________________________
> Libmesh-users mailing list
> Lib...@li...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users |
From: Wout R. <wou...@gm...> - 2006-03-28 11:00:44
|
Ben and Roy,

The mesh refinement code reads (after the comment saying this is
actually implemented):

  // Case 2: The neighbor is one level lower than I am.
  //         The neighbor thus MUST be refined to satisfy
  //         the level-one rule, regardless of whether it
  //         was originally flagged for refinement.  If it
  //         wasn't flagged already we need to repeat
  //         this process.

Doesn't this weed out the meshes you're describing? Or do you bypass
the mesh refinement stage in some way?

Cheers
W

On 3/27/06, Benjamin S. Kirk <ben...@na...> wrote:
> Yeah, sure, they should work. I'll try and track down some images, but
> I think you are looking in the wrong place. The compute_constraints()
> should work based on 1-level differences, and in the end the
> build_constraint_matrix() handles recursive constraints.
>
> -Ben |
From: Roy S. <roy...@ic...> - 2006-03-28 13:43:56
|
On Tue, 28 Mar 2006, Wout Ruijter wrote:

> The mesh refinement code reads (after the comment this is actually
> implemented):
>
> // Case 2: The neighbor is one level lower than I am.
> //         The neighbor thus MUST be refined to satisfy
> //         the level-one rule, regardless of whether it
> //         was originally flagged for refinement.  If it
> //         wasn't flagged already we need to repeat
> //         this process.
>
> Doesn't this weed out the meshes you're describing?
> Or do you bypass the mesh refinement stage in some way?

That code is wrapped in "if (maintain_level_one)" - if you make that
argument false, you'll skip it.

I'm not sure that should be a function argument, though. It looks to
me as if the maintain_level_one code only works if the mesh is already
level one compatible, in which case you're not going to be turning it
on and off from refinement step to refinement step - perhaps it ought
to be set in the MeshRefinement constructor instead.
---
Roy |
From: Benjamin S. K. <ben...@na...> - 2006-03-28 18:59:38
|
On Tue, 2006-03-28 at 07:43 -0600, Roy Stogner wrote:
> I'm not sure that should be a function argument, though. It looks to
> me as if the maintain_level_one code only works if the mesh is already
> level one compatible, in which case you're not going to be turning it
> on and off from refinement step to refinement step - perhaps it ought
> to be set in the MeshRefinement constructor instead.
> ---
> Roy

Excellent point. I have no problem making it an optional argument to
the constructor which defaults to true.

You are correct, I never anticipated taking a non-level-1 conforming
mesh and restoring level-one-ness (or whatever you call it).

-Ben |
From: Roy S. <roy...@ic...> - 2006-03-29 05:58:29
|
On Tue, 28 Mar 2006, Benjamin S. Kirk wrote:

> On Tue, 2006-03-28 at 07:43 -0600, Roy Stogner wrote:
>> I'm not sure that should be a function argument, though. It looks to
>> me as if the maintain_level_one code only works if the mesh is already
>> level one compatible, in which case you're not going to be turning it
>> on and off from refinement step to refinement step - perhaps it ought
>> to be set in the MeshRefinement constructor instead.
>
> Excellent point. I have no problem making it an optional argument to
> the constructor which defaults to true.

Sounds good. I don't think I'm going to make any more small changes
while I'm in the middle of this p refinement stuff, though; you've
already had to fix one of my poor attempts to commit a changed file
that wasn't in sync with my local sandbox.

I think I'm ready to sync my current code to CVS now, actually. I'm
having trouble thinking of a quick way to test adaptive p refinement
(is there such a thing as a simple element-by-element way to decide
between h and p refinement?), but our existing examples and my uniform
p refinement tests are all working now.

> You are correct, I never anticipated taking a non-level-1 conforming
> mesh and restoring level-one-ness (or whatever you call it).

It's always the unanticipated stuff that gets you. That's why I like
this collaborative software development stuff; there's always someone
else to catch your bugs if you don't! ;-)

On that note, we probably ought to put a global tag on the CVS head
before I do my next big commit. I don't think there's anything that
should break any code which doesn't do p refinement, but it's a big
change that touches a lot of files, and just in case people do start
running into bugs it might be prudent to make reverting the whole
changeset easy. Ben, would you mind doing that (and letting me know
when it's safe to commit)? It's been a while since I've done anything
with CVS tags; I'd hate to find out I'd accidentally started a new
branch or something.
---
Roy |
From: Roy S. <roy...@ic...> - 2006-03-29 18:48:24
|
On Tue, 28 Mar 2006, Roy Stogner wrote:

> On that note, we probably ought to put a global tag on the CVS head
> before I do my next big commit. I don't think there's anything that
> should break any code which doesn't do p refinement, but it's a big
> change that touches a lot of files, and just in case people do start
> running into bugs it might be prudent to make reverting the whole
> changeset easy.

This morning's CVS head has been tagged, and my p refinement code is
committed. If anyone's feeling adventurous enough to cvs update, let
me know (preferably with a stack trace) if you encounter any bugs.

The commit log:

Initial p refinement support - uniform p refinement is working;
adaptive p and hp refinement have yet to be well tested.

Major changes include:

DofMap now examines element p_level when assigning and indexing
degrees of freedom.

MeshRefinement now tests for level-one p_level conformity when asked,
and can do uniform p refinement and coarsening.

Quadrature rules and FE objects now examine Elem::p_level() when they
are able to.

Library internals have been changed to pass total polynomial order
(= base polynomial order + element p_level) to functions that cannot
examine Elem::p_level() themselves.

Major limitations include:

System::project_vector does not yet work with p refinement. The
option to turn off projection of solution vectors has been added as a
temporary workaround.

Non-hierarchical element types cannot support adaptive p refinement,
and will be limited to uniform p refinement for the foreseeable
future.
---
Roy |
From: Benjamin S. K. <ben...@na...> - 2006-03-29 20:01:13
|
Sorry, I was going to tag it, but it looks like you already got there.

Does

> Initial p refinement support - uniform p refinement is working;
> adaptive p and hp refinement has yet to be well tested.

mean that hp refinement has been tested, but just not rigorously? So,
libMesh now supports hp refinement? You get a *GOLD STAR.*

As for

> Non-hierarchical element types cannot support adaptive p refinement,
> and will be limited to uniform p refinement for the foreseeable future.

I assume this means finite element types, and that you cannot
adaptively refine Lagrange elements, for example?

Finally, *soon* might be a good time for some regression tests. Maybe
Derek is interested in scripting some stuff up & learning the
capabilities of the library?

-Ben |
From: Roy S. <roy...@ic...> - 2006-03-29 20:26:29
|
On Wed, 29 Mar 2006, Benjamin S. Kirk wrote:

> Does
>
>> Initial p refinement support - uniform p refinement is working;
>> adaptive p and hp refinement has yet to be well tested.
>
> mean that hp refinement has been tested, but just not rigorously?

Not even unrigorously. I've done some fiddling to run through parts
of the code, but I'd hoped to do the real tests on a real benchmark
problem... but I'm not sure what to do about error indicators! I
don't know how you can choose between h and p refinement with a
simple-to-implement code, much less how you can make that choice
without access to an element assembly function.

> So, libMesh now supports hp refinement?
> You get a *GOLD STAR.*

Well, no. But we're getting closer! If all the code I put in is
bug-free (it won't be), and if our simple error indicators can fairly
compare elements of different degree (here's hoping), and if you have
a magic fairy guiding your h vs. p refinement choices (you don't), and
if your code doesn't accidentally assume constant p (our examples
don't, but more complicated code might), and if you don't want p>6 in
1D, p>5 in 2D, or p>3 or non-hexes in 3D (you will), then libMesh now
supports hp refinement.

> as for
>
>> Non-hierarchical element types cannot support adaptive p refinement,
>> and will be limited to uniform p refinement for the foreseeable future.
>
> I assume this means finite element types, and that you cannot
> adaptively refine lagrange elements, for example?

Exactly. Right now it's just HIERARCHIC, although I plan to add
higher-degree CLOUGH triangles in the next year. I think my
projection-based constraint matrix code might be able to handle
arbitrary non-hierarchic elements as long as the side function spaces
are backwards-compatible from p level to p level, but I don't think I
can get the degree of freedom handler to cooperate.

The DofMap required some odd tricks just to get to where it is now, in
fact. Face and edge degrees of freedom are now globally numbered in
reverse order, for example; that way, if you might be on a hanging
node, it's still safe to get your vertex DoFs by counting up from the
lowest global DoF number on that node, or to get your edge/face DoFs
by counting down from the highest global DoF.

> Finally, *soon* might be a good time for some regression tests.

I'd certainly feel a lot better if I had more than the current "make
run_examples" to do regression testing with. I don't have time to sit
down and write a hundred unit tests, but even if we just put more
option combinations into the examples' Makefiles it would be an
improvement.

> Maybe Derek is interested in scripting some stuff up & learning the
> capabilities of the library?

I'll forward this to him just in case he's not reading the mailing
lists yet. Somehow volunteering other people for work never works out
for me as well as I hope, though. ;-)
---
Roy |
From: Roy S. <roy...@ic...> - 2006-04-05 16:52:10
|
This email subject would now be more accurate as "Initial hp
refinement support committed to CVS". The only thing I haven't
committed is the p and hp options I added to example 14; there's some
3D code mixed in with my changes that would just be confusing until
the 3D hierarchics are working. If anyone really wants the new ex14
soon, let me know and I'll get it cleaned up to add.

On Wednesday 29 March 2006 2:26, I wrote:

> Not even unrigorously. I've done some fiddling to run through parts
> of the code, but I'd hoped to do the real tests on a real benchmark
> problem... but I'm not sure what to do about error indicators! I
> don't know how you can choose between h and p refinement with a simple
> to implement code, much less how you can make that choice without
> access to an element assembly function.

Okay, for now I'm ignoring the h-vs-p problem. That will have to wait
for me or another libMesh user to become more interested in hp as a
research topic, not just hp as something I've figured out how to
implement easily. For debugging purposes, I've just been doing both h
and p refinement together, so technically hp refinement has been
tested, but in practice code with singular solution derivatives will
want to stick with h for now.

> If all the code I put in is bugfree (it won't be),

I'm sure it still isn't, but it's giving me solutions on hp meshes
now. I'm going to take a break from it here, but if others want to do
any testing I'd appreciate it.

> and if our simple error indicators can fairly compare elements of
> different degree (here's hoping),

From what I've been reading, the answer here is that Kelly won't give
fair comparisons of different p elements, but the unfairness will
probably be a small constant that won't break the adaptive loop.

> and if you have a magic fairy guiding your h vs. p refinement
> choices (you don't),

Ben's right that the patch recovery estimators may be the way for us
to go here. Anything that gives a sufficiently better estimate for
the solution local to an element will also give a good hp refinement
scheme, just by comparing the local error reduction vs. DoF increases
for h vs. p refinement.

> and if your code doesn't accidentally assume constant p (our
> examples don't, but more complicated code might),

I'm a little more optimistic here now that I've converted ex14;
basically anything that works with HIERARCHIC elements at all is
likely to work with p or hp refined hierarchics - with the exception
of infinite element code; I don't understand the InfFE classes well
enough to feel safe upgrading them to hp.

> and if you don't want p>6 in 1D, p>5 in 2D, p>3 or non-hexes in 3D
> (you will), then libMesh now supports hp refinement.

I've bumped p up to 11 in 1D and for 2D quads for testing purposes;
the code could go higher, but I'm not sure it could go higher
accurately without higher-precision arithmetic.

> > Maybe Derek is interested in scripting some stuff up & learning the
> > capabilities of the library?
>
> I'll forward this to him just in case he's not reading the mailing
> lists yet. Somehow volunteering other people for work never works out
> for me as well as I hope, though. ;-)

Yeah, no luck there. I'll forward this to Derek too so maybe he'll
feel guilty. ;-)
--
Roy Stogner |