From: Nasser Mohieddin Abukhdeir <nasser.abukhdeir@mc...> - 2008-08-31 23:58:18

Hi: I'm sorry for all of the questions, but it's been hours at this and I really do not understand what the issue is. I am implementing the FEMSystem::element_postprocess virtual function and need to access/modify another system's solution: PostProc & postproc_system = (this->get_equation_systems()).get_system<PostProc>("PostProc"); where the PostProc system inherits ExplicitSystem. When I try to access PostProc::solution (or do anything with the data structure for that matter): NumericVector<Number> & soln = *postproc_system.solution; soln.set(0,0); I get this compilation error: nemfem.C:776: error: invalid use of incomplete type 'struct NumericVector<double>' /home/nasser/libmesh/include/solvers/system.h:45: error: declaration of 'struct NumericVector<double>' which makes about as much sense to me as a Spamburger. -- Nasser Mohieddin Abukhdeir Graduate Student (Materials Modeling Research Group) McGill University - Department of Chemical Engineering http://webpages.mcgill.ca/students/nabukh/web/ http://mmrg.chemeng.mcgill.ca/ 
From: Nasser Mohieddin Abukhdeir <nasser.abukhdeir@mc...> - 2008-08-31 22:48:27

So, here is what I am implementing to concurrently postprocess my data: 1) added an ExplicitSystem that contains my postprocessing variables to the EquationSystems instance 2) implemented the FEMSystem::element_postprocess() function in my primary system (tensor variables) and then added a FEMSystem::postprocess() call after every FEMSystem::solve() in my main function 3) What I am working on now is to implement something like the very convenient FEMSystem::elem_subsolutions variable, but for the postprocessing ExplicitSystem class. Does this make sense? It seems like the cleanest way to get access to the postprocessing variables that correspond to the primary variables on the element of interest. -- Nasser Mohieddin Abukhdeir Graduate Student (Materials Modeling Research Group) McGill University - Department of Chemical Engineering http://webpages.mcgill.ca/students/nabukh/web/ http://mmrg.chemeng.mcgill.ca/ 
From: Kirk, Benjamin (JSC-EG) <Benjamin.Kirk-1@na...> - 2008-08-31 19:56:08

(I am forwarding this along. Perhaps SourceForge's ongoing re-hosting effort is delaying the addition of new subscribers...) Hello! I am new on the libmesh user list. I would like to solve some kind of Cahn-Hilliard phase field equation. On the wiki I saw the images of the phase field example, but I could not find it in the examples. It would be really kind if somebody had some code with which I could start. Otherwise, which examples would you suggest starting with (bilaplacian + Navier-Stokes)? Thank you very much hansjoerg 
From: Nasser Mohieddin Abukhdeir <nasser.abukhdeir@mc...> - 2008-08-29 16:01:12

Hi: ExodusII does work well with Paraview, but for some reason: a) the simulation time is not available in the variables list b) the animation function isn't working. I have the output file names in the right format and Paraview groups them together in the file open window, but I cannot navigate through them or use the animate functionality. Does this have to do with a)? Nasser Mohieddin Abukhdeir Graduate Student (Materials Modeling Research Group) McGill University - Department of Chemical Engineering http://webpages.mcgill.ca/students/nabukh/web/ http://mmrg.chemeng.mcgill.ca/ Derek Gaston wrote: > As a workaround you could try the ExodusII_IO writer to write an > Exodus file.... which Paraview reads nicely. > > Derek > > On Aug 28, 2008, at 10:49 PM, Nasser Mohieddin Abukhdeir wrote: > >> Hello: >> I've been trying to use Paraview and VTKIO to visualize some results >> and have not been all that successful. Using >> VTKIO::write_equation_systems, two files are output: one is the name >> that I provided (let's say data001.pvtu) to the function and the other >> has VTU at the end of it (data001_0.vtu). When I try to open >> data001.pvtu in Paraview, the file gets listed but no data is >> displayed. >> >> -- >> Nasser Mohieddin Abukhdeir >> Graduate Student (Materials Modeling Research Group) >> McGill University - Department of Chemical Engineering >> http://webpages.mcgill.ca/students/nabukh/web/ >> http://mmrg.chemeng.mcgill.ca/ >> >> This SF.Net email is sponsored by the Moblin Your Move Developer's >> challenge >> Build the coolest Linux based applications with Moblin SDK & win >> great prizes >> Grand prize is a trip for two to an Open Source event anywhere in >> the world >> http://moblincontest.org/redirect.php?banner_id=100&url=/ >> _______________________________________________ >> Libmesh-users mailing list >> Libmesh-users@... >> https://lists.sourceforge.net/lists/listinfo/libmesh-users 
From: Nasser Mohieddin Abukhdeir <nasser.abukhdeir@mc...> - 2008-08-29 15:59:21

Here is Roy's email that didn't get posted to the list. Thanks Roy and Ben; I'll probably need some more help before I can solve this problem, but I will add this to a DiffSystem section of the Wiki once I get it (and all my other issues :) worked out. On Fri, 29 Aug 2008, Roy Stogner wrote: > > I understand that from a basic point of view, but the devil is in the > > details with respect to implementation: > Yes, my last message was pretty detail-free, wasn't it? ;) I've had a busy week, and in fact I need to get some sleep before a ride to the airport in 7 hours, so this is going to be kind of brief as well. At the moment I'm also using an email account that's not registered with libmesh-users, so if you want everyone else's 2 cents feel free to copy your replies to the list. > > - would this be of the class System? > Yes, or ExplicitSystem if having a RHS vector is helpful for the postprocessing. > > - in order to use parallelization would I implement the System::solve() > > function and work on the public member System::current_local_solution? > The parallelization is done for you whether you use solve() or not; but I think you'll want to work on System::solution directly, then use System::update() to synchronize ghost node values (i.e. current_local_solution) between processors. > > - how would this system have access to the other system's solution, would > > that be FEMSystem::current_local_solution? > Yes; they'll share the same mesh partitioning. > > I'm really sorry, my understanding is at a basic level, but if I can work > > this out I'll add something to the wiki. > That would be very helpful; thanks! - Roy 
From: Derek Gaston <friedmud@gm...> - 2008-08-29 13:30:36

As a workaround you could try the ExodusII_IO writer to write an Exodus file.... which Paraview reads nicely. Derek On Aug 28, 2008, at 10:49 PM, Nasser Mohieddin Abukhdeir wrote: > Hello: > I've been trying to use Paraview and VTKIO to visualize some results > and have not been all that successful. Using > VTKIO::write_equation_systems, two files are output: one is the name > that I provided (let's say data001.pvtu) to the function and the other > has VTU at the end of it (data001_0.vtu). When I try to open > data001.pvtu in Paraview, the file gets listed but no data is > displayed. > > -- > Nasser Mohieddin Abukhdeir > Graduate Student (Materials Modeling Research Group) > McGill University - Department of Chemical Engineering > http://webpages.mcgill.ca/students/nabukh/web/ > http://mmrg.chemeng.mcgill.ca/ 
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-08-29 13:16:15

> Is there a way to tell Libmesh that certain variables are neither > time-evolving nor constraints, so that they are not accounted for in the > factorization? So I would start with a FEMSystem with all of the tensor > components, eigenvalues, and eigenvector components in the same system. Since they are computed as a postprocessing step from the tensor components, you definitely do not want them coupled in the linear system, so that calls for another system. I have a somewhat similar issue in my compressible NS stuff. I solve a large implicit system for (rho, rho*u, rho*v, rho*w, rho*E) - the so-called conserved variables. But other places in the code, and for visualization, I really want access to (P, u, v, T, M) - primitive variables + Mach number. And I don't want them making my linear system any bigger. > Would I then just implement the postprocessing functions and use the GNU > Scientific Library to compute the eigenvalues/vectors based upon the > tensor components computed from the current solution. > > Please give me a few pointers... What I do is add an ExplicitSystem to my EquationSystems. I then fill it with these "auxiliary variables." The benefit of this approach is (i) there is no matrix allocation for these explicit variables, and (ii) they can be output to visualization formats in the usual way. You can either provide your own assemble function for the auxiliary system which does the processing you want, or do it at the beginning/end of the assemble of the primary system. Ben 
From: Wout Ruijter <woutruijter@gm...> - 2008-08-29 09:24:30

Is the data not present in the file? If you have a bit of code that replicates the problem I can have a look next week. W On Fri, Aug 29, 2008 at 6:49 AM, Nasser Mohieddin Abukhdeir <nasser.abukhdeir@...> wrote: > Hello: > I've been trying to use Paraview and VTKIO to visualize some results > and have not been all that successful. Using > VTKIO::write_equation_systems, two files are output: one is the name > that I provided (let's say data001.pvtu) to the function and the other > has VTU at the end of it (data001_0.vtu). When I try to open > data001.pvtu in Paraview, the file gets listed but no data is displayed. > > -- > Nasser Mohieddin Abukhdeir > Graduate Student (Materials Modeling Research Group) > McGill University - Department of Chemical Engineering > http://webpages.mcgill.ca/students/nabukh/web/ > http://mmrg.chemeng.mcgill.ca/ 
From: Nasser Mohieddin Abukhdeir <nasser.abukhdeir@mc...> - 2008-08-29 05:50:28

Hello: I have a nonlinear time-dependent PDE system where the individual variables compose a second-order tensor. This form is convenient for simulation, but not for interpreting the data, for which I need the eigenvalues and eigenvectors of the tensor. I don't really have the choice of using Octave to do this post-simulation, in that I am using adaptive meshing and a huge dataset that requires parallelization. I'm trying to figure out a sensible way to modify my Libmesh implementation (using FEMSystem) to do this, but I'm not sure how to proceed. Is there a way to tell Libmesh that certain variables are neither time-evolving nor constraints, so that they are not accounted for in the factorization? So I would start with a FEMSystem with all of the tensor components, eigenvalues, and eigenvector components in the same system. Would I then just implement the postprocessing functions and use the GNU Scientific Library to compute the eigenvalues/vectors based upon the tensor components computed from the current solution? Please give me a few pointers... -- Nasser Mohieddin Abukhdeir Graduate Student (Materials Modeling Research Group) McGill University - Department of Chemical Engineering http://webpages.mcgill.ca/students/nabukh/web/ http://mmrg.chemeng.mcgill.ca/ 
From: Nasser Mohieddin Abukhdeir <nasser.abukhdeir@mc...> - 2008-08-29 04:50:01

Hello: I've been trying to use Paraview and VTKIO to visualize some results and have not been all that successful. Using VTKIO::write_equation_systems, two files are output: one is the name that I provided (let's say data001.pvtu) to the function and the other has VTU at the end of it (data001_0.vtu). When I try to open data001.pvtu in Paraview, the file gets listed but no data is displayed. -- Nasser Mohieddin Abukhdeir Graduate Student (Materials Modeling Research Group) McGill University - Department of Chemical Engineering http://webpages.mcgill.ca/students/nabukh/web/ http://mmrg.chemeng.mcgill.ca/ 
From: John Peterson <jwpeterson@gm...> - 2008-08-27 14:54:12

Hi Tim, On Wed, Aug 27, 2008 at 3:53 AM, Tim Kroeger <tim.kroeger@...> wrote: > > Please find attached the new patch that corrects the error I made in > parallel for the Linfty norm. After all, I did not change the name of the > L_INF norm since I think the result of the discussion is that this is the > most sensible approximation and that it naturally corresponds to the > approximation that is done for the other norms, too. Additionally, I added > some comments that clarify this to the user. The patch has been checked in. - John 
From: Tim Kroeger <tim.kroeger@ce...> - 2008-08-27 10:21:43

Dear Roy, On Tue, 26 Aug 2008, Roy Stogner wrote: >> (In the current state, my code does not show any speedup on 20 processors >> compared with 1 processor, because it spends apparently 95% of the time in >> EquationSystems::reinit().) > > Could you insert some more fine-grained performance logs and verify > that it's really project_vector causing most of the delay? That's an > astonishingly lousy result. Well, the '95%' was just the perceived percentage. To find out how much it really is, I now used the PerfLog class. I have to admit that I had not used that class ever before, so I am not completely sure whether I understood the concept correctly. I did the following: In EquationSystems::reinit(), I added some PerfLog stuff (see attached modified copy of equation_systems.C). Also, in my main program, I added a PerfLog instance that measures the complete computation time. Attached are the results for 1 CPU and for 20 CPUs. Note that EquationSystems::reinit() has been called 6 times, hence there are 6 sections about this. (This is where I'm not sure whether I understood the concept correctly.) Note that the total time the computation spends in this function is about 220 seconds for 1 CPU and about 230 seconds for 20 CPUs, whereas the overall total computation time is about 660 seconds for 1 CPU and about 345 seconds for 20 CPUs. Hence, the speedup is less than a factor of 2. Also, for 20 CPUs, 67% of the time is spent in EquationSystems::reinit(). I hope this information is sufficient for you. If not, please let me know what else you need. By the way: After the last refinement, the mesh consists of 98897 elements and 22675 nodes for 1 CPU, 98841 elements and 22662 nodes for 20 CPUs. Best Regards, Tim -- Dr. Tim Kroeger Phone +49-421-218-7710 tim.kroeger@..., tim.kroeger@... Fax +49-421-218-4236 MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany Amtsgericht Bremen HRB 16222 Geschaeftsfuehrer: Prof. Dr. H.-O. Peitgen 
From: Tim Kroeger <tim.kroeger@ce...> - 2008-08-27 08:53:25

Dear John, On Tue, 26 Aug 2008, John Peterson wrote: > Unfortunately the error is not a linear function in general, even > though the approximate solution may be. Yes. I was thinking about the case where the 'exact' solution is just a solution on a finer grid, in which case (if I understand the code correctly) the computation is performed on the fine grid and hence the 'error' is actually contained in the ansatz space. Of course, you could not know what I was thinking about. Please find attached the new patch that corrects the error I made in parallel for the Linfty norm. After all, I did not change the name of the L_INF norm, since I think the result of the discussion is that this is the most sensible approximation and that it naturally corresponds to the approximation that is done for the other norms, too. Additionally, I added some comments that clarify this to the user. Best Regards, Tim -- Dr. Tim Kroeger Phone +49-421-218-7710 tim.kroeger@..., tim.kroeger@... Fax +49-421-218-4236 MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany Amtsgericht Bremen HRB 16222 Geschaeftsfuehrer: Prof. Dr. H.-O. Peitgen 
From: John Peterson <peterson@cf...> - 2008-08-27 03:38:05

Another one from Roy: ---------- Forwarded message ---------- From: Roy Stogner <roystgnr@...> To: Tim Kroeger <tim.kroeger@...> Date: Tue, 26 Aug 2008 18:34:11 -0500 (CDT) Subject: Re: [Libmesh-users] Performance of EquationSystems::reinit() with ParallelMesh On Tue, 26 Aug 2008, Tim Kroeger wrote: > Could someone please tell me what the current state about the item mentioned below is? Untouched; we're still using a serial vector in project_vector. > (In the current state, my code does not show any speedup on 20 processors compared with 1 processor, because it spends apparently 95% of the time in EquationSystems::reinit().) Could you insert some more fine-grained performance logs and verify that it's really project_vector causing most of the delay? That's an astonishingly lousy result. - Roy 
From: John Peterson <peterson@cf...> - 2008-08-27 03:36:11

Forwarded message from Roy... ---------- Forwarded message ---------- From: Roy Stogner <roystgnr@...> Date: Tue, Aug 26, 2008 at 6:35 PM Subject: Re: [Libmesh-users] Comparison of solutions on different grids (fwd) To: John Peterson <peterson@...> post the attachment to the list for me? I need to reconfigure my From: address at the cfdlab. ---------- Forwarded message ---------- Date: Tue, 26 Aug 2008 16:30:36 -0700 From: libmesh-users-owner@... To: roystgnr@... Subject: Re: [Libmesh-users] Comparison of solutions on different grids You are not allowed to post to this mailing list, and your message has been automatically rejected. If you think that your messages are being rejected in error, contact the mailing list owner at libmesh-users-owner@... ---------- Forwarded message ---------- From: Roy Stogner <roystgnr@...> To: David Knezevic <dave.knez@...> Date: Tue, 26 Aug 2008 18:30:23 -0500 (CDT) Subject: Re: [Libmesh-users] Comparison of solutions on different grids On Tue, 26 Aug 2008, David Knezevic wrote: > Yeah, I see what you mean. I suppose the ideal thing (I'm not saying > this should be done in practice) would be to compute the interpolant of > the error based on values at the quadrature points, and take the L_INFTY > norm of the interpolant. Given a regularity assumption on the error, I'm > sure there are bounds for the L_INFTY error of the interpolant. That's ideal if you're taking the norm of something piecewise polynomial, and probably if you're just sufficiently smooth, but in some places where we want to approximate L_infty we might be taking the norm of even a discontinuous function, where trying to get a polynomial interpolant would be disastrous. For consistency, even when comparing two grids' solutions for C^0 elements I think the "max value at a quadrature point" is the way to go, so long as we give the user the ability to override the quadrature rule selection. - Roy - John 
From: Derek Gaston <friedmud@gm...> - 2008-08-26 17:23:48

On Aug 26, 2008, at 9:27 AM, David Knezevic wrote: > But the L2, H1 etc errors in ExactSolution are computed using > quadrature > rules, so they are just approximations as well. They are not approximations if both your FE solution and the analytic solution you are comparing against can be integrated exactly by the quadrature rule you choose. Otherwise, yep, you're right... it's an approximation. Definitely in the case of comparing two different FE solutions together it's almost always an approximation, unless both solutions have the exact same mesh.... because otherwise you are trying to integrate piecewise continuous functions with Gaussian quadrature rules.... which we all know isn't quite right... but if you squint a little it looks right ;) Derek 
From: David Knezevic <dave.knez@gm...> - 2008-08-26 16:57:49

> Well... yeah but it still feels it's a different class of > approximation deserving a different enum. Errors in computing the L2 > and H1 errors are due to quadrature error, which can be bounded in > terms of higher-order derivatives of the exact solution. The > approximate L_INF norm calculation (as we have defined it here) may > not have an error representation which is quite so well-defined ... > then again maybe it does? Seems to me it would depend strongly on the > number of sampling points as well. Yeah, I see what you mean. I suppose the ideal thing (I'm not saying this should be done in practice) would be to compute the interpolant of the error based on values at the quadrature points, and take the L_INFTY norm of the interpolant. Given a regularity assumption on the error, I'm sure there are bounds for the L_INFTY error of the interpolant. However, I think in some cases the maximum of the values at the interpolation points would be a good approximation to the supremum of the interpolant of the error. For example, if the interpolation points are Gauss quadrature points in 1D (or any points that are clustered like Chebyshev points), then I believe that the supremum of the polynomial interpolant will (asymptotically) be very close to the maximum of the sampled values, and both of these would converge "spectrally" to the exact L_INFTY error. On the other hand, if we're using bad interpolation points, e.g. equally spaced points in 1D, then the supremum of the interpolant grows exponentially fast compared to the values at the interpolation points, so in that case the heuristic would fail horribly. Anyway, I guess what I'm saying is that I think you're right John, the quadrature point samples need not be a good approximation to the continuous L_INFTY norm, but perhaps it's OK as a heuristic...? 
>> Also, regarding the superconvergence issue, if we have superconvergence in >> the L_INF norm at the quadrature points, and we use that quadrature rule to >> compute the L2 error, then won't we just get the same superconvergence in >> the quadrature-based L2 error as well? > > I think you are right, so in general one should always use a different > quadrature rule, unless I am mistaken about that superconvergence > property. For the life of me, I can't remember where I heard that and > I'm starting to wonder if I may have made it up :) It seems plausible to me. Or, at a minimum, I've definitely heard about superconvergence at the nodes of the mesh, and the user could use the nodes as quadrature points... - Dave 
From: John Peterson <jwpeterson@gm...> - 2008-08-26 15:54:25

On Tue, Aug 26, 2008 at 10:27 AM, David Knezevic <dave.knez@...> wrote: > Hi all, > >>> What about returning this value as the DISCRETE_L_INF norm instead? In >>> particular since the FEMNormType enum offers this norm anyway. >> >> I think this might be confusing ... the DISCRETE_ versions are meant >> to be for R^n vectors, and in this case of course you can get the >> "exact" L_INF. I'd prefer adding a new enum called APPROXIMATE_L_INF >> (or something similar). The user would know immediately that he was >> getting an approximation to the true Linfty norm, and in the >> documentation we could mention (as Derek said) that one can improve >> the approximation by increasing the number of quadrature points. > > > But the L2, H1 etc errors in ExactSolution are computed using quadrature > rules, so they are just approximations as well. As a result, it seems to me > that the L_INF norm based on sampling at quadrature points is the natural > counterpart for the Sobolev norms currently available in ExactSolution. Well... yeah but it still feels it's a different class of approximation deserving a different enum. Errors in computing the L2 and H1 errors are due to quadrature error, which can be bounded in terms of higher-order derivatives of the exact solution. The approximate L_INF norm calculation (as we have defined it here) may not have an error representation which is quite so well-defined ... then again maybe it does? Seems to me it would depend strongly on the number of sampling points as well. > Also, regarding the superconvergence issue, if we have superconvergence in > the L_INF norm at the quadrature points, and we use that quadrature rule to > compute the L2 error, then won't we just get the same superconvergence in > the quadrature-based L2 error as well? I think you are right, so in general one should always use a different quadrature rule, unless I am mistaken about that superconvergence property. 
For the life of me, I can't remember where I heard that and I'm starting to wonder if I may have made it up :) - John 
From: John Peterson <jwpeterson@gm...> - 2008-08-26 15:39:24

On Tue, Aug 26, 2008 at 10:22 AM, Tim Kroeger <tim.kroeger@...> wrote: > Dear John, > > On Tue, 26 Aug 2008, John Peterson wrote: > >> On Tue, Aug 26, 2008 at 9:20 AM, Tim Kroeger >>> >>> On Tue, 26 Aug 2008, John Peterson wrote: >>> >>>> I'm not sure about your implementation of L_INF. You're taking >>>> >>>> e_{\infty} = max_q e(x_q) >>>> >>>> where x_q are the quadrature points. In fact, isn't the solution >>>> sometimes superconvergent at the quadrature points, and therefore this >>>> approximation could drastically underpredict the Linfty norm? >>> >>> Oh, I see, I (again) forgot that people are using different ansatz >>> functions >>> than piecewise linear (for which this is obviously correct). >> >> Sorry, I'm a little slow. The formula above is correct for piecewise >> linears? I can see this for linear elements in 1D, with a 1-point >> quadrature rule. But this implies it's not true for a 2-point rule... >> etc. > > Oops, I'm very sorry. I mixed up quadrature points and nodes. What I meant > was that for a linear function on a tetrahedron, its maximal value can be > obtained by evaluating it at the corners of the tetrahedron only (and taking > the max of these values). Unfortunately the error is not a linear function in general, even though the approximate solution may be. - John 
From: Tim Kroeger <tim.kroeger@ce...> - 2008-08-26 15:36:44

Dear John, On Tue, 26 Aug 2008, John Peterson wrote: > On Tue, Aug 26, 2008 at 9:20 AM, Tim Kroeger >> >> On Tue, 26 Aug 2008, John Peterson wrote: >> >>> I'm not sure about your implementation of L_INF. You're taking >>> >>> e_{\infty} = max_q e(x_q) >>> >>> where x_q are the quadrature points. In fact, isn't the solution >>> sometimes superconvergent at the quadrature points, and therefore this >>> approximation could drastically underpredict the Linfty norm? >> >> Oh, I see, I (again) forgot that people are using different ansatz functions >> than piecewise linear (for which this is obviously correct). > > Sorry, I'm a little slow. The formula above is correct for piecewise > linears? I can see this for linear elements in 1D, with a 1-point > quadrature rule. But this implies it's not true for a 2-point > rule... etc. Oops, I'm very sorry. I mixed up quadrature points and nodes. What I meant was that for a linear function on a tetrahedron, its maximal value can be obtained by evaluating it at the corners of the tetrahedron only (and taking the max of these values). >> What about returning this value as the DISCRETE_L_INF norm instead? In >> particular since the FEMNormType enum offers this norm anyway. > > I think this might be confusing ... the DISCRETE_ versions are meant > to be for R^n vectors, and in this case of course you can get the > "exact" L_INF. I'd prefer adding a new enum called APPROXIMATE_L_INF > (or something similar). The user would know immediately that he was > getting an approximation to the true Linfty norm, and in the > documentation we could mention (as Derek said) that one can improve > the approximation by increasing the number of quadrature points. Yes, I agree with that. Also, there is a different error in my patch: In parallel, I sum up the Linfty norms of all the processors, instead of taking their max value. I will send you a corrected patch tomorrow. Sorry again. Best Regards, Tim -- Dr. Tim Kroeger Phone +49-421-218-7710 tim.kroeger@..., tim.kroeger@... Fax +49-421-218-4236 MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany Amtsgericht Bremen HRB 16222 Geschaeftsfuehrer: Prof. Dr. H.-O. Peitgen 
From: John Peterson <jwpeterson@gm...> - 2008-08-26 15:36:29

On Tue, Aug 26, 2008 at 10:22 AM, Tim Kroeger <tim.kroeger@...> wrote: > > Also, there is a different error in my patch: In parallel, I sum up the > Linfty norms of all the processors, instead of taking their max value. Ah very true, there is a Parallel::sum for the error_vals at the end of _compute_error(). > I will send you a corrected patch tomorrow. Sounds good. > Sorry again. Not a problem! We are having a (what I find to be) very useful discussion. - John 
From: David Knezevic <dave.knez@gm...> - 2008-08-26 15:27:25

Hi all, >> What about returning this value as the DISCRETE_L_INF norm instead? In >> particular since the FEMNormType enum offers this norm anyway. > > I think this might be confusing ... the DISCRETE_ versions are meant > to be for R^n vectors, and in this case of course you can get the > "exact" L_INF. I'd prefer adding a new enum called APPROXIMATE_L_INF > (or something similar). The user would know immediately that he was > getting an approximation to the true Linfty norm, and in the > documentation we could mention (as Derek said) that one can improve > the approximation by increasing the number of quadrature points. But the L2, H1 etc errors in ExactSolution are computed using quadrature rules, so they are just approximations as well. As a result, it seems to me that the L_INF norm based on sampling at quadrature points is the natural counterpart for the Sobolev norms currently available in ExactSolution. Also, regarding the superconvergence issue, if we have superconvergence in the L_INF norm at the quadrature points, and we use that quadrature rule to compute the L2 error, then won't we just get the same superconvergence in the quadrature-based L2 error as well? - Dave 
From: John Peterson <jwpeterson@gm...> - 2008-08-26 15:05:37

On Tue, Aug 26, 2008 at 9:20 AM, Tim Kroeger <tim.kroeger@...> wrote:

> Dear John,
>
> On Tue, 26 Aug 2008, John Peterson wrote:
>
>> I'm not sure about your implementation of L_INF. You're taking
>>
>> e_{\infty} = max_q e(x_q)
>>
>> where x_q are the quadrature points. In fact, isn't the solution
>> sometimes superconvergent at the quadrature points, and therefore this
>> approximation could drastically underpredict the Linfty norm?
>
> Oh, I see, I (again) forgot that people are using different ansatz
> functions than piecewise linear (for which this is obviously correct).

Sorry, I'm a little slow. The formula above is correct for piecewise
linears? I can see this for linear elements in 1D, with a 1-point
quadrature rule. But this implies it's not true for a 2-point rule... etc.

> What about returning this value as the DISCRETE_L_INF norm instead? In
> particular since the FEMNormType enum offers this norm anyway.

I think this might be confusing ... the DISCRETE_ versions are meant to be
for R^n vectors, and in this case of course you can get the "exact" L_INF.
I'd prefer adding a new enum called APPROXIMATE_L_INF (or something
similar). The user would know immediately that he was getting an
approximation to the true Linfty norm, and in the documentation we could
mention (as Derek said) that one can improve the approximation by
increasing the number of quadrature points.

John
From: John Peterson <jwpeterson@gm...> - 2008-08-26 14:40:54

On Tue, Aug 26, 2008 at 9:15 AM, Derek Gaston <friedmud@...> wrote:

> In Encore at Sandia you get the choice to either compute L_Inf at the
> quadrature points or at the nodes.
>
> There really isn't a good way to give L_Inf for a finite element
> calculation... because our solutions are continuous functions. The finite
> difference guys would just take the difference at all the nodes and find
> the largest one... but that doesn't quite work for us (especially with
> higher order elements).

Given the exact solution and gradient, one could presumably come up with a
little function optimization scheme which finds (a local) max on each
element. The trouble would still be knowing whether the max found was
actually the global max for that element...

> Personally, I prefer finding the L_Inf error at quadrature points... one
> nice thing about this is that if you want a better calculation of your
> error... you just up your number of quadrature points. This is essentially
> the same thing as comparing to a non-polynomial exact solution (one you
> can't integrate exactly)... you do _something_ that will give you a good
> answer... but if you want a better answer you crank up the quadrature rule.

Good point about increasing the number of quadrature points to get a better
Linfty approximation. And as long as you are using a different quadrature
rule for estimating the error than was used when computing the FE solution,
I don't think the error can possibly be superconvergent at the quadrature
points any more.

John
From: Tim Kroeger <tim.kroeger@ce...> - 2008-08-26 14:20:14

Dear John,

On Tue, 26 Aug 2008, John Peterson wrote:

> I'm not sure about your implementation of L_INF. You're taking
>
> e_{\infty} = max_q e(x_q)
>
> where x_q are the quadrature points. In fact, isn't the solution
> sometimes superconvergent at the quadrature points, and therefore this
> approximation could drastically underpredict the Linfty norm?

Oh, I see, I (again) forgot that people are using different ansatz
functions than piecewise linear (for which this is obviously correct).

What about returning this value as the DISCRETE_L_INF norm instead? In
particular since the FEMNormType enum offers this norm anyway.

Best Regards,

Tim

--
Dr. Tim Kroeger                                    Phone +494212187710
tim.kroeger@..., tim.kroeger@...                   Fax   +494212184236
MeVis Research GmbH, Universitaetsallee 29, 28359 Bremen, Germany

Amtsgericht Bremen HRB 16222
Geschaeftsfuehrer: Prof. Dr. H.O. Peitgen