From: Manav Bhatia <bhatiamanav@gm...>  2013-11-30 14:34:21

Hi,

I am trying to understand the hp-selection (coarsening) strategy currently in libMesh. My current reference is Demkowicz's book, Computing with hp-Adaptive Finite Elements, which uses projections from a fine-mesh solution onto a coarse mesh to identify the rate of error reduction under h- or p-refinement.

Is the strategy in the code, through the coarsen test, similar to that? While looking through the code, it seemed like the current mesh solution is treated as the fine solution and projected onto an h- or p-coarsened space to identify the rate of error reduction, and this rate is used to select between h- and p-refinement. So it seems similar in principle, but I would appreciate it if someone with more knowledge could comment.

Also, is there a reference that describes the strategy currently implemented?

Thanks,
Manav
From: Roy Stogner <roystgnr@ic...>  2013-12-01 20:02:14

On Sat, 30 Nov 2013, Manav Bhatia wrote:

> I am trying to understand the hp-selection (coarsening) strategy currently in libMesh.
>
> My current reference is Demkowicz's book, Computing with hp-Adaptive Finite Elements, which uses projections from a fine-mesh solution onto a coarse mesh to identify the rate of error reduction under h- or p-refinement.

Good choice.

> Is the strategy in the code, through the coarsen test, similar to that? While looking through the code, it seemed like the current mesh solution is treated as the fine solution and projected onto an h- or p-coarsened space to identify the rate of error reduction, and this rate is used to select between h- and p-refinement.

That's correct.

> So it seems similar in principle, but I would appreciate it if someone with more knowledge could comment.
>
> Also, is there a reference that describes the strategy currently implemented?

No; sadly, the hp support in libMesh was something I had just been playing with in my free time. I wouldn't guarantee correctness of the constraint-equation generation in every corner case, much less optimality of the refinement heuristic. If you find yourself with enough time and motivation to work with it, though, I'd love to help fix any problems you can uncover.

--
Roy
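[Editor's note: a toy 1D sketch of the coarsening-based hp-selection idea being discussed. All of this is illustrative only -- the least-squares machinery and the decision step are stand-ins, not libMesh's actual implementation. The current solution is treated as "fine"; we project it onto an h-coarsened candidate (one parent-sized element, same degree) and a p-coarsened candidate (same elements, degree lowered) and compare how much each projection loses.]

```python
import math

def polyfit_ls(xs, ys, deg):
    # Least-squares polynomial fit via the normal equations --
    # a tiny stand-in for an L2 projection onto a candidate space.
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for c in range(n):                      # Gaussian elimination (matrix is SPD)
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][k] * coef[k] for k in range(r + 1, n))) / A[r][r]
    return coef

def peval(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

# Dense samples standing in for the current ("fine") solution on two
# child elements of the parent [0, 1].
f = lambda x: math.sin(3.0 * x)
xs = [i / 200.0 for i in range(201)]
ys = [f(x) for x in xs]

def rms(pred):
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys))

# h-coarsened candidate: one degree-2 polynomial over the whole parent.
err_h = rms([peval(polyfit_ls(xs, ys, 2), x) for x in xs])

# p-coarsened candidate: degree lowered to 1 on each child separately.
L = [(x, y) for x, y in zip(xs, ys) if x <= 0.5]
R = [(x, y) for x, y in zip(xs, ys) if x > 0.5]
cl = polyfit_ls([x for x, _ in L], [y for _, y in L], 1)
cr = polyfit_ls([x for x, _ in R], [y for _, y in R], 1)
err_p = rms([peval(cl, x) if x <= 0.5 else peval(cr, x) for x in xs])

# A selection heuristic then compares err_h and err_p (really, the error
# reduction rates they imply) to decide whether h- or p-refinement is the
# better investment for this element.
```

This only illustrates the projection-and-compare structure; turning the two errors into a defensible h-vs-p decision rule is exactly the part Roy cautions about below.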
From: Roy Stogner <roystgnr@ic...>  2013-12-01 21:30:35

On Sun, 1 Dec 2013, Manav Bhatia wrote:

> I will be working on some hp-adaptation studies in the coming days. I will keep you posted about my experience.

Thanks!

FYI, the only case where I recall definitely getting exponential convergence was on 1D problems with an "h-refine only for elements next to predefined singularity points" heuristic. This suggests that we might have both a bug in the constraints generation and a bug or a poor design in the coarsening-based hp-selection heuristic.

IIRC, a while ago I identified a possible bug in the constraints generation but didn't have time to fix it. Let me see...

http://www.mailarchive.com/libmeshdevel@.../msg01928.html

And yes, that definitely looks like a bug. When we see high-p edge/face dofs on hanging nodes we're not constraining the right indices; we need to be testing FEInterface::extra_hanging_dofs and hitting the nodal dof indices in reverse order in that case. I can probably cook up a patch for that next week, but I don't have much time; would you be interested in helping test it?

> I also owe you another PR on the user sensitivity routines for sensitivity analysis. The code is complete at my end and I have been using it. I will soon create a PR for the same.

Thanks!

Didn't you already have a PR open for that? I could swear I remember starting to look that over, but going back I can't find it in either the open or the closed pulls list on GitHub.

--
Roy
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 04:36:34

Hi Roy,

I have started to run some test cases in 2D (subsonic flow over a sine bump) using QUAD9, and have the following observations:

- All FE types tested -- LAGRANGE (up to 2nd order), SZABAB, and BERNSTEIN -- work fine without AMR, and the solution converges.

- 2nd-order LAGRANGE FE type, *with h-refinement*, works fine and converges.

- Changing the FE type to SZABAB and *using h-refinement* leads to unrealistic (huge) residual values immediately after the first h-refinement, and the plotted solution has odd-looking spikes in each element next to a hanging node.

This seems to be expected behavior given the bug described in the libmesh-devel thread you had sent to me. (?)

Of the higher-order polynomials, SZABAB is nested, but BERNSTEIN is not, since the whole basis changes as soon as the p-order is modified. Does this bug/discussion affect BERNSTEIN in the same way as it does SZABAB? I mean, if project_solution assumes that the change in p-order can be handled by ignoring/zeroing the highest-order dof, then that should not be the correct approach for BERNSTEIN polynomials?

Manav
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 17:59:59

Hi,

I am working on this now, and need some help understanding the extra hanging dofs.

The higher-order elements, say SZABAB on a QUAD9, have 4 dofs for each corner node, then edge bubble functions for each mid-side node (depending on the order of the element), and finally all bubble functions on the element interior are attached to the middle node.

With h-refinement, a side node (now of the neighbor) becomes a corner node of the new elements, and new mid-side nodes are created for the new elements.

Given this, would the intent be to constrain the *new* edge dofs (of the child elements) to the *old* corner and edge dofs (of the edge shared by the parent/neighbor)?

What about the new dofs that show up at the old edge node, which is now a corner node? Which of these get characterized as the extra_hanging_dofs?

Any comments would be helpful.

Manav
From: Roy Stogner <roystgnr@ic...>  2013-12-05 20:05:35

On Thu, 5 Dec 2013, Manav Bhatia wrote:

> I am working on this now, and need some help understanding the extra hanging dofs.

Basically, they're for the case where there isn't a one-to-one correspondence between vertex dofs and side/face dofs.

With LAGRANGE elements, by contrast, the dof equation on a corner is the dof equation on an edge or side, and you just reuse the same dof value.

With HIERARCHIC elements, not so much. If a bubble function on an edge midpoint has coefficient 0 and the neighboring vertices have values 1 and 0, then if that midpoint is a hanging node we want the corresponding vertex dof there to be 1/2. So what we do is we make those independent: in the global numbering, there are now two dofs on this node, one of which takes value 0 and the other of which takes value 1/2.

--
Roy
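[Editor's note: Roy's 1/2 example can be sketched concretely on a 1D parent edge. The bubble's scaling below is illustrative, not libMesh's exact normalization; the point is only that the hanging vertex dof is constrained to reproduce the parent's trace.]

```python
# 1D parent edge on [-1, 1] with a hierarchic-style basis: two vertex
# functions plus one edge bubble (scaling of the bubble is illustrative).
def phi_left(x):   return 0.5 * (1.0 - x)   # value 1 at x = -1, 0 at x = +1
def phi_right(x):  return 0.5 * (1.0 + x)   # value 0 at x = -1, 1 at x = +1
def phi_bubble(x): return 1.0 - x * x        # vanishes at both vertices

def constrained_vertex_value(u_left, u_right, u_bubble, x_hang=0.0):
    # The child's vertex dof at the hanging node is not an independent
    # unknown: it must equal the parent solution's trace at that point.
    return (u_left * phi_left(x_hang)
            + u_right * phi_right(x_hang)
            + u_bubble * phi_bubble(x_hang))

# Roy's example: bubble coefficient 0, vertex values 1 and 0 -> 1/2.
value = constrained_vertex_value(1.0, 0.0, 0.0)   # 0.5
```

In the global numbering the bubble dof (here 0) and the constrained vertex dof (here 1/2) coexist on the same node, which is the "extra hanging dof" situation.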
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 18:06:26

Another follow-up question: if the problem described in the previous email is relevant to all higher-order elements with edge dofs, then wouldn't FEInterface::extra_hanging_dofs() be true for all such elements? If yes, then why would BERNSTEIN and SZABAB return false, while HIERARCHIC returns true, for this function?

Manav
From: Roy Stogner <roystgnr@ic...>  2013-12-05 20:12:44

On Thu, 5 Dec 2013, Manav Bhatia wrote:

> If the problem described in the previous email is relevant to all higher-order elements with edge dofs, then wouldn't FEInterface::extra_hanging_dofs() be true for all such elements? If yes, then why would BERNSTEIN and SZABAB return false, while HIERARCHIC returns true, for this function?

Bi/triquadratic LAGRANGE elements are higher-order (for loose values of "high") with edge dofs, and they shouldn't need extra_hanging_dofs() == true. It looks like the BERNSTEIN and SZABAB settings may be bugs, though. Would you try changing your fe_interface.C and see if that fixes the problem you've encountered?

Thanks,
--
Roy
From: Roy Stogner <roystgnr@ic...>  2013-12-05 19:58:26

On Wed, 4 Dec 2013, Manav Bhatia wrote:

> - 2nd-order LAGRANGE FE type, *with h-refinement*, works fine and converges.

This is as I'd expect; the current libMesh adaptive p-refinement code assumes a hierarchic basis and should fail in weird ways (or, in devel mode, throw an error) without one.

> - Changing the FE type to SZABAB and *using h-refinement* leads to unrealistic (huge) residual values immediately after the first h-refinement, and the plotted solution has odd-looking spikes in each element next to a hanging node.
>
> This seems to be expected behavior given the bug described in the libmesh-devel thread you had sent to me. (?)

No, actually. The bug I described should affect adaptive hp-refinement, but not pure adaptive h-refinement.

What p level and element type were you using when you saw a failure with h-refinement? I don't know much about the SZABAB elements, but it looks like perhaps they require extra_hanging_dofs == true and aren't getting that in fe_interface.C.

> Of the higher-order polynomials, SZABAB is nested, but BERNSTEIN is not, since the whole basis changes as soon as the p-order is modified. Does this bug/discussion affect BERNSTEIN in the same way as it does SZABAB?

With non-nested polynomials (what I called "hierarchic" before refers to the nested property, not only to the HIERARCHIC family) we don't support p-refinement at all yet. That's not so much a bug as an unimplemented feature.

--
Roy
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 20:13:47

On Thu, Dec 5, 2013 at 2:58 PM, Roy Stogner <roystgnr@...> wrote:

> What p level and element type were you using when you saw a failure with h-refinement?

This was with h-refinement for SZABAB with p = 3.

> I don't know much about the SZABAB elements, but it looks like perhaps they require extra_hanging_dofs == true and aren't getting that in fe_interface.C.

Looking at SZABAB, I see it implemented only for 1D and 2D elements. Maybe I could add a bit of extra code to enable 3D as well. Shouldn't be too difficult, given what I learnt about the code internals today!

Manav
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 20:06:43

Hi Roy,

From your definition of *hierarchic*, SZABAB would qualify as one, correct? The polynomials are nested.

My experience with HIERARCHIC has been unfavorable, with convergence issues in iterative solvers. I have not seen any such problems with SZABAB and BERNSTEIN, so I am favoring those.

I managed to get the higher-order SZABAB working with three modifications:

- modified FEInterface::extra_hanging_dofs() to be true for SZABAB;

- FEInterface::compute_constraints() does not do anything for SZABAB, so I added a case SZABAB: for that;

- fe_szabab.C needs FE<2, SZABAB>::compute_constraints.

With this, h-refinement of higher-order SZABAB works without problems. I am not sure if this is the issue plaguing hp-refinement, but I will get to it soon.

Manav
From: Roy Stogner <roystgnr@ic...>  2013-12-05 20:22:16

On Thu, 5 Dec 2013, Manav Bhatia wrote:

> From your definition of *hierarchic*, SZABAB would qualify as one, correct? The polynomials are nested.

Bearing in mind that I'm really looking at those elements for the first time...

The 1D SzaBab elements definitely qualify as nested. The magic numbers in those coefficients annoy me... but it looks like they're all just scaling factors, so at least the double-precision constants wouldn't kill their convergence rates when compiling with quad precision.

The 2D SzaBab quads appear to be nested. So are at least the cubic or so triangles, so I'd strongly bet the triangles are nested too.

It looks like 3D SzaBab hexes would be easy to define; I wonder why nobody's gotten around to it yet.

> My experience with HIERARCHIC has been unfavorable, with convergence issues in iterative solvers. I have not seen any such problems with SZABAB and BERNSTEIN, so I am favoring those.

Strange. From a brief glance at SZABAB it looked like they were practically just HIERARCHIC with different scaling factors, the kind of thing even Jacobi preconditioning would fix.

> I managed to get the higher-order SZABAB working with three modifications:
>
> - modified FEInterface::extra_hanging_dofs() to be true for SZABAB;
>
> - FEInterface::compute_constraints() does not do anything for SZABAB, so I added a case SZABAB: for that;
>
> - fe_szabab.C needs FE<2, SZABAB>::compute_constraints.
>
> With this, h-refinement of higher-order SZABAB works without problems.

Great! Send us a PR?

> I am not sure if this is the issue plaguing hp-refinement, but I will get to it soon.

No, sadly you ran into another bug on the way to the hp bug.

--
Roy
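[Editor's note: the nestedness distinction in this exchange can be made concrete with Bernstein polynomials. The degree-1 *space* sits inside the degree-2 space, but no degree-1 basis function survives unchanged as a degree-2 basis function -- the whole basis changes with p, which is why zeroing the highest-order coefficient is not a valid p-coarsening for BERNSTEIN. Standard Bernstein definitions, nothing libMesh-specific:]

```python
# Bernstein basis functions B_{i,p}(x) = C(p, i) x^i (1 - x)^(p - i) on [0, 1].
from math import comb

def bernstein(i, p, x):
    return comb(p, i) * x ** i * (1 - x) ** (p - i)

# Degree elevation: B_{0,1}(x) = 1 - x is exactly B_{0,2}(x) + (1/2) B_{1,2}(x),
# i.e. the degree-1 function is in the degree-2 space, but only as a
# *combination* of degree-2 basis functions, not as one of them.
def elevated(x):
    return bernstein(0, 2, x) + 0.5 * bernstein(1, 2, x)
```

A hierarchic (nested-basis) family like SZABAB or HIERARCHIC instead keeps the lower-order basis functions and adds new ones, so raising or lowering p only adds or drops coefficients.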
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-05 20:33:34

On Thu, Dec 5, 2013 at 3:22 PM, Roy Stogner <roystgnr@...> wrote:

> Strange. From a brief glance at SZABAB it looked like they were practically just HIERARCHIC with different scaling factors, the kind of thing even Jacobi preconditioning would fix.

I remember trying all sorts of preconditioning, and nothing helped. My understanding is that the HIERARCHIC polynomials get highly ill-conditioned as the p-order increases. In fact, Karniadakis, in his book on spectral hp-elements, has a plot of the condition number for HIERARCHIC, LAGRANGE, and LEGENDRE polynomials with increasing p, and shows that HIERARCHIC is the first to go through the roof, followed by LAGRANGE.

I am interested in implementing a C0-continuous version of LEGENDRE, which modifies the first two basis functions from a constant and a linear to a combination of two linears, and keeps the bubble functions. The orthogonality of these polynomials can be quite useful, especially in getting a diagonal mass matrix.

> No, sadly you ran into another bug on the way to the hp bug.

aahhh... ok... I will continue pushing forward...

Manav
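[Editor's note: the C0 Legendre-type basis Manav describes is commonly built from integrated Legendre ("Lobatto") functions; the sketch below assumes that construction, and its scalings are a choice, not a prescribed standard. Using int P_{k-1} = (P_k - P_{k-2}) / (2k - 1), each bubble vanishes at both endpoints, and since phi_k' = P_{k-1}, Legendre orthogonality makes the bubble block of the 1D stiffness matrix diagonal.]

```python
# C0 "Legendre" (Lobatto) shape functions on [-1, 1]: two linear vertex
# functions plus integrated-Legendre interior bubbles.
def legendre(k, x):
    # Three-term recurrence for the Legendre polynomial P_k on [-1, 1].
    if k == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, k):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def lobatto_bubble(k, x):
    # Interior (bubble) shape function for k >= 2, via the closed form of
    # the integral of P_{k-1}; vanishes at x = -1 and x = +1.
    return (legendre(k, x) - legendre(k - 2, x)) / (2 * k - 1)

def vertex_left(x):  return 0.5 * (1.0 - x)
def vertex_right(x): return 0.5 * (1.0 + x)
```

Because the bubbles vanish at the vertices, C0 continuity across elements is carried entirely by the two linear vertex functions, exactly the "combination of two linears" replacement described above.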
From: Manav Bhatia <bhatiamanav@gm...>  2013-12-06 16:19:45

Hi Roy,

I spent some time looking through the ProjectVector::operator() code, and have some questions.

A K U = F solution-based projection is performed for h-refined elements. For p-refined elements, however, a dof-based assignment is performed.

- Is there a reason why the same K U = F projection is not performed for p-refined elements? I understand that this would be more expensive than what is currently being done for p-refinement, but it would likely also take care of non-hierarchic bases. (?)

- Is the dof-based assignment for p-refined elements the source of the error for hp-refinement that you had referred to in your initial email?

- I noticed that similar operations are performed in FEBase::coarsened_dof_values() and FEBase::compute_proj_constraints(). So these, too, would need attention for hp studies?

Thanks,
Manav
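[Editor's note: a minimal 1D illustration of the K U = F style projection discussed here, with K taken to be the element mass matrix (an L2 projection). The element, basis, and "old" solution f are hypothetical stand-ins; this is not libMesh's actual ProjectVector code.]

```python
# L2-project the old solution f onto a P1 element [0, 1] by solving
# M u = b, with M_ij = int phi_i phi_j dx and b_i = int f phi_i dx,
# instead of copying dof values.
import math

# 2-point Gauss quadrature on [0, 1]: exact for polynomials up to degree 3.
gp = [0.5 - 0.5 / math.sqrt(3.0), 0.5 + 0.5 / math.sqrt(3.0)]
gw = [0.5, 0.5]

phis = [lambda x: 1.0 - x, lambda x: x]    # P1 basis on [0, 1]
f = lambda x: x * x                        # stand-in "old" (parent) solution

M = [[sum(w * phis[i](x) * phis[j](x) for x, w in zip(gp, gw))
      for j in range(2)] for i in range(2)]
b = [sum(w * f(x) * phis[i](x) for x, w in zip(gp, gw)) for i in range(2)]

# Solve the 2x2 system M u = b by Cramer's rule.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
u = [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
     (b[1] * M[0][0] - b[0] * M[1][0]) / det]
# u == [-1/6, 5/6]: the L2-best linear approximation of x^2 on [0, 1].
```

Unlike zeroing/copying coefficients, this projection is well-defined for any basis, nested or not, which is why Manav suggests it could also handle the non-hierarchic case.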
From: Roy Stogner <roystgnr@ic...>  2013-12-06 18:13:17

On Fri, 6 Dec 2013, Manav Bhatia wrote:

> A K U = F solution-based projection is performed for h-refined elements. For p-refined elements, however, a dof-based assignment is performed.
>
> - Is there a reason why the same K U = F projection is not performed for p-refined elements? I understand that this would be more expensive than what is currently being done for p-refinement, but it would likely also take care of non-hierarchic bases. (?)

IIRC, it simply would have made the code more complicated, and I was more worried about bugs from overly complicated code than about efficiency or generality.

> - Is the dof-based assignment for p-refined elements the source of the error for hp-refinement that you had referred to in your initial email?

Not the "source" exactly, but it's the location of the error, yes.

> - I noticed that similar operations are performed in FEBase::coarsened_dof_values() and FEBase::compute_proj_constraints(). So these, too, would need attention for hp studies?

I'm not sure if those are wrong, but they'd definitely need attention, yes.

--
Roy