From: Tim Kroeger <tim.kroeger@ce...>  2008-04-16 13:07:59

Dear libMesh team,

Next question from me: Is there some easy way in libMesh to compare computation results on different grids? I.e., I have two EquationSystems instances that consist of equal systems but live on different grids, where both grids are refinements of the same initial grid. For each variable in each system, I want the difference of the computed solutions on the two grids, in various norms (say L1, L2, Linfty).

I conjecture that there is no easy way to do this in libMesh, but I thought it might be sensible to ask.

Best Regards,

Tim

--
Dr. Tim Kroeger                  Phone +494212187710
tim.kroeger@..., tim.kroeger@... Fax +494212184236
MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany
Amtsgericht Bremen HRB 16222
Geschaeftsfuehrer: Prof. Dr. H.O. Peitgen
From: Roy Stogner <roy@st...>  2008-04-16 13:15:50

On Wed, 16 Apr 2008, Tim Kroeger wrote:

> Next question from me: Is there some easy way in libMesh to compare
> computation results on different grids? [...] For each variable in each
> system, I want the difference of the computed solutions on the two
> grids, in various norms (say L1, L2, Linfty).

If by "different" you mean that one mesh is everywhere as fine as or finer than the other, then you can construct a MeshFunction from the coarser solution and use it with an ExactSolution object on the fine system to get the right answer.

If one grid is finer in some places and the other grid is finer in others, then there's currently no way to do this precisely. You might use the ExactSolution trick on a "mostly finer" grid with an overkill quadrature rule to get something close enough, but the right thing to do here would require new code. It's a common problem, though; if you write the aforementioned new code, please consider contributing it to the library. ;)

Roy
From: Tim Kroeger <tim.kroeger@ce...>  2008-04-17 14:10:18

Dear Roy and all,

On Wed, 16 Apr 2008, Roy Stogner wrote:

> If by "different" you mean that one mesh is everywhere as fine as or
> finer than the other, then you can construct a MeshFunction from the
> coarser solution and use it with an ExactSolution object on the fine
> system to get the right answer.

Sounds easy at first sight, but as I looked closer at it, it seems quite difficult on more than one processor, because I can't assume that the vectors on both meshes are distributed in the same way, right? It would be necessary to either (a) redistribute one solution (preferably the coarser one) so that it is distributed equivalently to the other one, or (b) perform a lot of interprocess communication, or (c) write everything to disk and perform the comparison in serial.

Since I am not familiar with the internals of libMesh's vector distribution mechanism, I don't feel able to implement (a). Version (b) seems to be a lot of work and also not very efficient, so I will use (c) as a workaround for now, although I don't really like it, and it might run out of memory. Please let me know if you have any thoughts about this.

Best Regards,

Tim
From: Tim Kroeger <tim.kroeger@ce...>  2008-08-25 10:56:07

Dear all,

Some time has passed since this discussion (I suddenly had to work on different things). Since time may have solved some things, I ask again: Is there currently a method to perform a comparison of two computation results on different grids (one of which is finer than the other) *in parallel*? I know the trick using MeshFunction, but as far as I understand, it will fail in parallel, since it is not guaranteed that corresponding elements are assigned to the same processor. Serializing the results will be problematic, because I assume they will not fit onto one processor.

Best Regards,

Tim
From: Tim Kroeger <tim.kroeger@ce...>  2008-08-25 13:29:04
Attachments:
patch

Dear all,

Meanwhile, I found out that the ExactSolution class seems to offer exactly what I need. However, I have attached a patch that extends that class in two ways:

1. It is satisfied with const references to the EquationSystems objects.
2. It can also compute the L1 and L_INF norms of the errors (I had to extend the FEMNormType enum for that, too).

Would you please be so kind as to check these changes in (provided you like them)?

Best Regards,

Tim
From: John Peterson <jwpeterson@gm...>  2008-08-25 14:27:09

Hi Tim,

On Mon, Aug 25, 2008 at 8:28 AM, Tim Kroeger <tim.kroeger@...> wrote:
> Would you please be so kind and check that changes in (provided you like
> them)?

I'll take a look at this patch today. Thanks!

John
From: Tim Kroeger <tim.kroeger@ce...>  2008-08-26 10:09:41

Dear John,

On Mon, 25 Aug 2008, John Peterson wrote:
> I'll take a look at this patch today. Thanks!

So, what was the result of looking at the patch?

Best Regards,

Tim
From: John Peterson <jwpeterson@gm...>  2008-08-26 14:10:50

On Tue, Aug 26, 2008 at 5:02 AM, Tim Kroeger <tim.kroeger@...> wrote:
> So, what was the result of looking at the patch?

I'm not sure about your implementation of L_INF. You're taking

e_{\infty} = max_q e(x_q)

where x_q are the quadrature points. In fact, isn't the solution sometimes superconvergent at the quadrature points, and therefore this approximation could drastically underpredict the Linfty norm? I don't know of an efficient, simple algorithm for estimating Linfty error ... perhaps there is something out there?

My preference would be to leave out the L_INF calculation rather than return a potentially wrong value, but I would like the other developers (esp. Roy, who is on holiday now) to comment as well.

John
From: Tim Kroeger <tim.kroeger@ce...>  2008-08-26 14:20:14

Dear John,

On Tue, 26 Aug 2008, John Peterson wrote:
> In fact, isn't the solution
> sometimes superconvergent at the quadrature points, and therefore this
> approximation could drastically underpredict the Linfty norm?

Oh, I see, I (again) forgot that people are using different ansatz functions than piecewise linear (for which this is obviously correct). What about returning this value as the DISCRETE_L_INF norm instead? In particular since the FEMNormType enum offers this norm anyway.

Best Regards,

Tim
From: David Knezevic <dave.knez@gm...>  2008-08-26 15:27:25

Hi all,

>> What about returning this value as the DISCRETE_L_INF norm instead? In
>> particular since the FEMNormType enum offers this norm anyway.
>
> I think this might be confusing ... the DISCRETE_ versions are meant
> to be for R^n vectors, and in this case of course you can get the
> "exact" L_INF. I'd prefer adding a new enum called APPROXIMATE_L_INF
> (or something similar).

But the L2, H1 etc. errors in ExactSolution are computed using quadrature rules, so they are just approximations as well. As a result, it seems to me that the L_INF norm based on sampling at quadrature points is the natural counterpart for the Sobolev norms currently available in ExactSolution.

Also, regarding the superconvergence issue: if we have superconvergence in the L_INF norm at the quadrature points, and we use that quadrature rule to compute the L2 error, then won't we just get the same superconvergence in the quadrature-based L2 error as well?

Dave
From: Derek Gaston <friedmud@gm...>  2008-08-26 17:23:48

On Aug 26, 2008, at 9:27 AM, David Knezevic wrote:
> But the L2, H1 etc errors in ExactSolution are computed using quadrature
> rules, so they are just approximations as well.

They are not approximations if both your FE solution and the analytic solution you are comparing against can be integrated exactly by the quadrature rule you choose. Otherwise, yep, you're right... it's an approximation.

Definitely in the case of comparing two different FE solutions together it's almost always an approximation, unless both solutions have the exact same mesh... because otherwise you are trying to integrate merely piecewise-smooth functions with Gaussian quadrature rules... which we all know isn't quite right... but if you squint a little it looks right ;)

Derek
From: John Peterson <jwpeterson@gm...>  2008-04-16 14:12:14

On Wed, Apr 16, 2008 at 8:15 AM, Roy Stogner <roy@...> wrote:
> If one grid is finer in some places and the other grid is finer in
> others then there's currently no way to do this precisely. You might
> use the ExactSolution trick on a "mostly finer" grid with an overkill
> quadrature rule to get something close enough, but the right thing to
> do here would require new code.

Just out of curiosity, what would be the "right thing" to do in order to compare two meshes with different (still nested) refinement patterns, or two solutions on completely non-nested grids? It's not immediately obvious to me what such a comparison would actually tell you in any case...

J
From: Derek Gaston <friedmud@gm...>  2008-04-16 14:28:43

The tool I work on here at work can compare any two arbitrary solutions to each other... even with completely non-nested grids. The user gets to decide what kind of crime they want to commit, though. Often (if the meshes are really dissimilar) we'll use an overkill rendezvous mesh to transfer both of the original solutions to first... then compare them on that grid. (Note: To do this in libMesh use two MeshFunctions and 3 meshes.)

My favorite way, though... is just to lay down a super fine quadrature rule and sample both grids at those quadrature points. Sure... on one of the meshes you'll be integrating non-smooth functions with Gauss quadrature... but the results are usually pretty good.

I do think that some of the capabilities in this code at my current job are going to be put into libMesh over the next year. libMesh already has a lot of the infrastructure for it... it wouldn't be too hard to make it a little more generic.

Derek

On Wed, Apr 16, 2008 at 8:12 AM, John Peterson <jwpeterson@...> wrote:
> Just out of curiosity, what would be the "right thing" to do in order
> to compare two meshes with different (still nested) refinement
> patterns or two solutions on completely nonnested grids?
From: Benjamin Kirk <benjamin.kirk@na...>  2008-04-16 14:55:30

> My favorite way though... is just to lay down a super fine quadrature
> rule and sample both grids at those quadrature points.

Take a look at src/apps/grid2grid.cc. It is a little dated and tuned for a specific code, but it is a 70% solution that could guide the implementation.

This was implemented specifically for Bill to do mesh refinement studies as part of his thesis. Specifically, he wanted to demonstrate u - u_fine where the meshes spanned the same space and were not nested. We essentially followed Derek's suggested approach... we used an overkill quadrature rule on the fine mesh.

Ben
From: Roy Stogner <roy@st...>  2008-04-16 14:27:51

On Wed, 16 Apr 2008, John Peterson wrote:
> Just out of curiosity, what would be the "right thing" to do in order
> to compare two meshes with different (still nested) refinement
> patterns or two solutions on completely nonnested grids?

Loop over one grid. On each element, if the other grid is coarser, you integrate here. If the other grid is finer, you get the corresponding ancestor element there and integrate on all its active descendants. That way you get the numerical integration right up to floating point error, without worrying about quadrature error.

> It's not immediately obvious to me what such a comparison would
> actually tell you in any case...

Oh, well I just meant that *what* to do to integrate between two such grids was obvious; *why* to do such a comparison isn't obvious to me either. If you're contrasting two different refinement strategies then you generally want to compare them to a completely finer reference solution, not to each other. I bet you'd usually still get a good error indicator for grid A in the places where grid B was more refined, and vice versa; running into pollution effects would be a nightmare, though.

It's also debatably a good idea to use that sort of "integrate on whichever mesh is locally finer" code when you're integrating to assemble a system rather than to take a norm. Back when I was playing with deal.II, some of my experiments were to solve multiphysics problems on independently refined nested grids. You can save a lot of degrees of freedom that way if you've got a system that's loosely coupled enough to handle it.

Roy
From: Tim Kroeger <tim.kroeger@ce...>  2008-04-16 14:36:37

Dear Roy and all,

On Wed, 16 Apr 2008, Roy Stogner wrote:
> If you're contrasting two different refinement strategies
> then you generally want to compare them to a completely finer
> reference solution, not to each other.

Just to reassure you: In my case, one of the grids will be finer than the other, so I am happy with the solution you suggested.

Best Regards,

Tim
From: John Peterson <jwpeterson@gm...>  2008-04-16 14:45:59

On Wed, Apr 16, 2008 at 9:27 AM, Roy Stogner <roy@...> wrote:
> Loop over one grid. On each element, if the other grid is coarser,
> you integrate here. If the other grid is finer, you get the
> corresponding ancestor element there and integrate on all its active
> descendants. That way you get the numerical integration right up to
> floating point error, without worrying about quadrature error.

That should work great for the nested case. For the arbitrarily mismatched grids I think you must have to do something like Derek suggested and introduce a "transfer" mesh which is somehow the "union" of both meshes you are comparing.

In the nested case, differing p-levels would also be interesting. Should I then perform my integration over the mesh which has a single high-p element, or over the one that chose to split it into multiple low-degree elements having small h? The answer probably depends on the smoothness of the true solution.

> I bet you'd usually still get a good error indicator for grid A in the
> places where grid B was more refined, and vice versa; running into
> pollution effects would be a nightmare, though.

That's where my line of thought was heading. In problems with a singularity, the simple "is h finer here or there?" test can fail due to nonlocal effects.

J
From: Roy Stogner <roy@st...>  2008-04-16 14:52:52

On Wed, 16 Apr 2008, John Peterson wrote:
> In the nested case, differing p-levels would also be interesting.
> Should I then perform my integration over the mesh which has a single
> high-p element or over the one that chose to split it into multiple
> low-degree elements having small h?

You integrate over the elements with small h/low p, but you use a quadrature rule sized for the element with high p.

Roy
From: Derek Gaston <friedmud@gm...>  2008-08-26 14:17:03

In Encore at Sandia you get the choice to either compute L_Inf at the quadrature points or at the nodes.

There really isn't a good way to give L_Inf for a finite element calculation... because our solutions are continuous functions. The finite difference guys would just take the difference at all the nodes and find the largest one... but that doesn't quite work for us (especially with higher-order elements).

Personally, I prefer finding the L_Inf error at quadrature points... one nice thing about this is that if you want a better calculation of your error... you just up your number of quadrature points. This is essentially the same thing as comparing to a non-polynomial exact solution (one you can't integrate exactly)... you do _something_ that will give you a good answer... but if you want a better answer you crank up the quadrature rule.

Derek

On Aug 26, 2008, at 8:10 AM, John Peterson wrote:
> I'm not sure about your implementation of L_INF. You're taking
>
> e_{\infty} = max_q e(x_q)
>
> where x_q are the quadrature points.
From: John Peterson <jwpeterson@gm...>  2008-08-26 14:40:54

On Tue, Aug 26, 2008 at 9:15 AM, Derek Gaston <friedmud@...> wrote:
> There really isn't a good way to give L_Inf for a finite element
> calculation... because our solutions are continuous functions.

Given the exact solution and gradient, one could presumably come up with a little function optimization scheme which finds (a local) max on each element. The trouble would still be knowing whether the max found was actually the global max for that element...

> Personally, I prefer finding the L_Inf error at quadrature points... one
> nice thing about this is that if you want a better calculation of your
> error... you just up your number of quadrature points.

Good point about increasing the number of quadrature points to get a better Linfty approximation. And as long as you are using a different quadrature rule for estimating the error than was used when computing the FE solution, I don't think the error can possibly be superconvergent at the quadrature points any more.

John
From: John Peterson <jwpeterson@gm...>  2008-08-26 15:05:37

On Tue, Aug 26, 2008 at 9:20 AM, Tim Kroeger <tim.kroeger@...> wrote:
> Oh, I see, I (again) forgot that people are using different ansatz functions
> than piecewise linear (for which this is obviously correct).

Sorry, I'm a little slow. The formula above is correct for piecewise linears? I can see this for linear elements in 1D, with a 1-point quadrature rule. But this implies it's not true for a 2-point rule... etc.

> What about returning this value as the DISCRETE_L_INF norm instead? In
> particular since the FEMNormType enum offers this norm anyway.

I think this might be confusing ... the DISCRETE_ versions are meant to be for R^n vectors, and in this case of course you can get the "exact" L_INF. I'd prefer adding a new enum called APPROXIMATE_L_INF (or something similar). The user would know immediately that he was getting an approximation to the true Linfty norm, and in the documentation we could mention (as Derek said) that one can improve the approximation by increasing the number of quadrature points.

John
From: Tim Kroeger <tim.kroeger@ce...>  2008-08-26 15:36:44

Dear John,

On Tue, 26 Aug 2008, John Peterson wrote:
> Sorry, I'm a little slow. The formula above is correct for piecewise
> linears? I can see this for linear elements in 1D, with a 1-point
> quadrature rule. But this implies it's not true for a 2-point rule...
> etc.

Oops, I'm very sorry. I mixed up quadrature points and nodes. What I meant was that for a linear function on a tetrahedron, its maximal value can be obtained by evaluating it at the corners of the tetrahedron only (and taking the max of these values).

> I'd prefer adding a new enum called APPROXIMATE_L_INF
> (or something similar).

Yes, I agree with that.

Also, there is a different error in my patch: In parallel, I sum up the Linfty norms of all the processors, instead of taking their max value. I will send you a corrected patch tomorrow. Sorry again.

Best Regards,

Tim
From: John Peterson <jwpeterson@gm...>  2008-08-26 15:36:29

On Tue, Aug 26, 2008 at 10:22 AM, Tim Kroeger <tim.kroeger@...> wrote:
> Also, there is a different error in my patch: In parallel, I sum up the
> Linfty norms of all the processors, instead of taking their max value.

Ah, very true, there is a Parallel::sum for the error_vals at the end of _compute_error().

> I will send you a corrected patch tomorrow.

Sounds good.

> Sorry again.

Not a problem! We are having a (what I find to be) very useful discussion.

John
From: John Peterson <jwpeterson@gm...>  2008-08-26 15:39:24

On Tue, Aug 26, 2008 at 10:22 AM, Tim Kroeger <tim.kroeger@...> wrote:
> Oops, I'm very sorry. I mixed up quadrature points and nodes. What I meant
> was that for a linear function on a tetrahedron, its maximal value can be
> obtained by evaluating it at the corners of the tetrahedron only (and taking
> the max of these values).

Unfortunately the error is not a linear function in general, even though the approximate solution may be.

John
From: John Peterson <jwpeterson@gm...>  2008-08-26 15:54:25

On Tue, Aug 26, 2008 at 10:27 AM, David Knezevic <dave.knez@...> wrote:
> But the L2, H1 etc errors in ExactSolution are computed using quadrature
> rules, so they are just approximations as well. As a result, it seems to me
> that the L_INF norm based on sampling at quadrature points is the natural
> counterpart for the Sobolev norms currently available in ExactSolution.

Well... yeah, but it still feels like a different class of approximation deserving a different enum. Errors in computing the L2 and H1 errors are due to quadrature error, which can be bounded in terms of higher-order derivatives of the exact solution. The approximate L_INF norm calculation (as we have defined it here) may not have an error representation which is quite so well-defined ... then again, maybe it does? Seems to me it would depend strongly on the number of sampling points as well.

> Also, regarding the superconvergence issue, if we have superconvergence in
> the L_INF norm at the quadrature points, and we use that quadrature rule to
> compute the L2 error, then won't we just get the same superconvergence in
> the quadrature-based L2 error as well?

I think you are right, so in general one should always use a different quadrature rule, unless I am mistaken about that superconvergence property. For the life of me, I can't remember where I heard that and I'm starting to wonder if I may have made it up :)

John