From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@na...> - 2013-04-18 23:41:23

So I thought I should share my general thoughts based on experience with both continuous FEM & FV codes for these types of problems. And since this is archived it'll probably come back to bite me, so keep in mind this is *my opinion based on my personal research and application.*

1.) For high-order discretizations of elliptic equations, continuous FEM is the go-to method.

2.) For high-order discretizations of hyperbolic equations, DG methods are about the best you can do, particularly unstructured. Combined with an explicit, multigrid-accelerated scheme, it is a powerful technique.

3.) The problems come in when you have both:
Continuous FEM: diffusive terms "easy," convective terms "hard & researchy"
Discontinuous FEM: convective terms easy, diffusive terms "hard & researchy"

4.) Shock capturing is a challenge for every scheme. In the F.V. world we rely on good alignment of the mesh to shock waves, or else the results really suffer. *Everything* does something O(h) at the shock, so you either live with it, or control h directly via refinement, or indirectly. It is this latter category that I would contend has been the recent trend in the DG community with 'subcell shock capturing'. In my mind, taking a p=5 element and replacing it with 256 p=1 elements for shock-capturing purposes is just h-refinement by another name.

5.) For 2nd-order accurate discretizations it is hard to beat classic F.V. High-order F.V. with its broad stencils is a nightmare.

6.) For things like the compressible NS equations, the underlying characteristics-based behavior of the system is important, and the most effective numerical methods deal with the characteristic form of the equations at some level. Historically SUPG/GLS methods have only been concerned with the "flow direction" and, again in my opinion, been at a disadvantage because they did not treat this crucial aspect. That has been the motivation for the recent tau stuff I alluded to earlier.
For a derivation of that tau from a characteristics argument, see https://github.com/libMesh/libmesh/wiki/documents/presentations/2013/fins_FEF2013.pdf

The attached paper shows some good results: basically, with work you can make either scheme give high-order results, but there is no panacea.
From: Jed Brown <jed@59...> - 2013-04-18 23:29:50

"Kirk, Benjamin (JSC-EG311)" <benjamin.kirk1@...> writes:

> Well you probably should clarify that - you are certainly "upwinding"
> at the cell interfaces to get an upwind bias in the scheme, right? So
> that could be alternatively looked at as a central + diffusive
> discretization... So I would contend the artificial viscosity is (i)
> less direct and (ii) physically based, but could be thought of as
> viscosity nonetheless.

1. "Upwinding" here means the solution of a Riemann problem. If you use a Godunov flux, then the "upwinding" is introducing no numerical viscosity. I think of Riemann problems as being very fundamental when solving problems that do not have continuous solutions.

2. Numerical dissipation introduced by an approximate Riemann solver is decoupled from the convergence rate of the method. The Riemann solve has no tunable parameters, does not depend on the grid, and can attain any order of accuracy purely by raising the order of reconstruction (in FV) or the basis order (in DG). Compare this to SUPG, for example, which has an O(h) term. Viscous fluxes are messier with DG: they have tunable parameters and the trade-offs are never satisfying.

> Certainly if you computed the interface cell flux as the average of
> the neighbors things would go to hell in a hurry?

Yes, that's unstable.
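The "central + diffusive" reading of upwinding discussed above is easy to make concrete for the simplest case. The sketch below is my own illustration, not code from the thread: for linear advection with constant speed `a`, the first-order upwind (Godunov) interface flux is algebraically identical to a central flux plus an `|a|`-scaled jump penalty.

```cpp
#include <cassert>
#include <cmath>

// First-order upwind (Godunov) flux for linear advection u_t + a u_x = 0:
// take the flux from the side the wind blows from.
double upwind_flux(double a, double uL, double uR)
{
  return (a > 0.0) ? a * uL : a * uR;
}

// The same flux rewritten as a central average of the physical flux
// minus a jump-penalty "dissipation" term scaled by |a|.
double central_plus_dissipation(double a, double uL, double uR)
{
  return 0.5 * (a * uL + a * uR) - 0.5 * std::fabs(a) * (uR - uL);
}
```

The two forms agree for any pair of states, so the "viscosity" here is not a tunable parameter; it falls out of the exact Riemann solution, which is Jed's point.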
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@na...> - 2013-04-18 23:03:07

On Apr 18, 2013, at 5:29 PM, Jed Brown <jed@...> wrote:

> lorenzo alessio botti <lorenzoalessiobotti@...> writes:
>
>> In my experience dG works without stabilization and additional artificial
>> viscosity

Well you probably should clarify that - you are certainly "upwinding" at the cell interfaces to get an upwind bias in the scheme, right? So that could be alternatively looked at as a central + diffusive discretization... So I would contend the artificial viscosity is (i) less direct and (ii) physically based, but could be thought of as viscosity nonetheless.

Certainly if you computed the interface cell flux as the average of the neighbors things would go to hell in a hurry?
From: Jed Brown <jed@59...> - 2013-04-18 22:58:37

Manav Bhatia <bhatiamanav@...> writes:

> Well, the question in my mind is: what happens with DG when the
> element is increased to the size of the complete flow domain (so we
> have one element) and the interpolation order is increased gradually.
>
> Wouldn't DG in this case reduce to the basic Galerkin method, which is
> known to be unstable? Or is DG still guaranteed to give stable
> solutions?

You would be giving up local conservation and a local entropy inequality. You're welcome to think of continuous Galerkin as a single huge DG element that happens to use modest-order compact basis functions. DG only gives you precise control of whole-cell behavior, and the main problem with high order is dealing with any instabilities that arise inside the cell.
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 22:52:28

On Apr 18, 2013, at 6:29 PM, Jed Brown <jed@...> wrote:

> lorenzo alessio botti <lorenzoalessiobotti@...> writes:
>
>> In my experience dG works without stabilization and additional artificial
>> viscosity
>
> This is generally attributed to the cell entropy inequality.
>
> http://www.ams.org/journals/mcom/199462206/S00255718199412232327/
>
> This is the best stability result I'm aware of for any high-order linear
> spatial discretization.

Well, the question in my mind is: what happens with DG when the element is increased to the size of the complete flow domain (so we have one element) and the interpolation order is increased gradually. Wouldn't DG in this case reduce to the basic Galerkin method, which is known to be unstable? Or is DG still guaranteed to give stable solutions?

Manav
From: Jed Brown <jed@59...> - 2013-04-18 22:29:35

lorenzo alessio botti <lorenzoalessiobotti@...> writes:

> In my experience dG works without stabilization and additional artificial
> viscosity

This is generally attributed to the cell entropy inequality.

http://www.ams.org/journals/mcom/199462206/S00255718199412232327/

This is the best stability result I'm aware of for any high-order linear spatial discretization.
From: lorenzo alessio botti <lorenzoalessiobotti@gm...> - 2013-04-18 21:54:49

Interesting... In my experience dG works without stabilization and additional artificial viscosity in convection-dominated incompressible flows (let's consider incompressible flows to avoid issues with discontinuities and monotonicity). One interesting point is that some formulations that work very well in practice are not conservative, since the convective term is not written in divergence form. E.g. the skew-symmetric form introduced by Temam in combination with centered fluxes works perfectly. With this choice the energy is conserved (even if the velocity field is only weakly divergence-free), but not the momentum. In general I think that discretizations based on numerical fluxes are more robust, in particular when the fluxes are based on the physics. Relaxing the continuity requirements might help in case of insufficient spatial resolution, and weak imposition of boundary conditions also helps. I admit that my experience with SUPG is limited; I'm not able to "compare" with dG.

Lorenzo

On Apr 18, 2013 7:09 PM, "Jed Brown" <jed@...> wrote:

> Manav Bhatia <bhatiamanav@...> writes:
>
>> - How critical is this lack of conservation properties for general
>> application of CFD for transonic and supersonic aerodynamic analyses?
>
> Ah, a question for the ages. A lot of CFD engineers will say that local
> conservation is essential and they won't even consider a method that
> isn't. Most commercial CFD packages use finite volume methods for this
> reason.
>
> At the other end of the spectrum are the FOSLS advocates that say the
> engineers just don't understand what they want and insist that
> conservation always sacrifices something else. Other least squares
> advocates (e.g. Pavel Bochev) insist that conservation is essential, but
> that you should get it by using compatible spaces in the LS formulation
> (erasing some notable benefits of LS in the process).
>
> In the end, it depends on the problem.
>
>> - Is this apparent non-ideal conservation behavior a primary reason
>> for interest in DG vs continuous FEM?
>
> Yes, conservation and stability. Linear DG satisfies a cell-wise
> entropy inequality so that it can often be used without limiting.
> Strict monotonicity is not practically available for continuous FEM.
>
>> - The typical DG methods that I have seen do not add a stabilization
>> within the domain, but handle the boundary terms to ensure continuity of
>> flux across the discontinuous elements. However, a straightforward
>> application of the Galerkin method is known to be unstable. So, I am
>> assuming that the stabilization and non-oscillatory behavior comes from
>> forgoing the requirement of continuity and the treatment of boundary
>> terms. However, would this also not imply that if one continued to
>> increase the element size while increasing the polynomial order, the
>> solution within the element will at some point become unstable/oscillatory?
>
> Yes, DG stability is only cell-wise. Normal flux limiting reduces the
> method to second order even in smooth regions. So if you go to
> high order, you need more exotic limiting to prevent internal
> oscillations (see Zhang and Shu 2010 and 2011).
>
> ------------------------------------------------------------------------------
> Precog is a next-generation analytics platform capable of advanced
> analytics on semi-structured data. The platform includes APIs for building
> apps and a phenomenal toolset for data science. Developers can use
> our toolset for easy data analysis & visualization. Get a free account!
> http://www2.precog.com/precogplatform/slashdotnewsletter
> _______________________________________________
> Libmesh-users mailing list
> Libmesh-users@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: Jed Brown <jed@59...> - 2013-04-18 17:31:50

Manav Bhatia <bhatiamanav@...> writes:

> Hi Jed,
>
> All of this is really intriguing! Would you be able to suggest a book
> (or some paper) that discusses these topics in more detail?
>
> I am eager to read more on this, and think/talk more intelligently about
> it.

I think this survey is a decent place to start reading: http://zjw.public.iastate.edu/papers/2007jpashighorder.pdf
From: Cody Permann <codypermann@gm...> - 2013-04-18 17:21:40

On Thu, Apr 18, 2013 at 11:11 AM, Roy Stogner <roystgnr@...> wrote:

> On Thu, 18 Apr 2013, Derek Gaston wrote:
>
>> And don't forget my favorite debugging tool:
>>
>> sleep(libMesh::processor_id());
>> std::cerr << "Stuff!" << std::endl;

Nothing funnier than hearing Derek say that the scaling is terrible on some new piece of code. "Yeah, it runs 10 times longer when I make the job 10x bigger." Only to find one of these guys lurking around ;)

Cody

> I do this exact same horrible horrible thing too, but in these
> conditions it should often be better to use "--redirect-stdout"
> as an alternative, which spits each processor's messages to its own
> file. Time stamp everything or stick in a few barrier() calls so you
> can infer message ordering, and that way you don't end up waiting
> forever when you're trying to debug something deep in a loop.
> ---
> Roy
From: Dmitry Karpeev <karpeev@mc...> - 2013-04-18 17:19:47

If you are using PETSc underneath, one option is http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscSynchronizedPrintf.html

It uses C-style formatted output, but will take care of the message ordering.

Dmitry.

On Thu, Apr 18, 2013 at 12:11 PM, Roy Stogner <roystgnr@...> wrote:

> On Thu, 18 Apr 2013, Derek Gaston wrote:
>
>> And don't forget my favorite debugging tool:
>>
>> sleep(libMesh::processor_id());
>> std::cerr << "Stuff!" << std::endl;
>
> I do this exact same horrible horrible thing too, but in these
> conditions it should often be better to use "--redirect-stdout"
> as an alternative, which spits each processor's messages to its own
> file. Time stamp everything or stick in a few barrier() calls so you
> can infer message ordering, and that way you don't end up waiting
> forever when you're trying to debug something deep in a loop.
> ---
> Roy
From: Roy Stogner <roystgnr@ic...> - 2013-04-18 17:11:39

On Thu, 18 Apr 2013, Derek Gaston wrote:

> And don't forget my favorite debugging tool:
>
> sleep(libMesh::processor_id());
> std::cerr << "Stuff!" << std::endl;

I do this exact same horrible horrible thing too, but in these conditions it should often be better to use "--redirect-stdout" as an alternative, which spits each processor's messages to its own file. Time stamp everything or stick in a few barrier() calls so you can infer message ordering, and that way you don't end up waiting forever when you're trying to debug something deep in a loop.
---
Roy
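For anyone without the redirect option handy, the per-processor-file idea is easy to hand-roll. This is a minimal sketch of the pattern (my own, not libMesh's implementation; the `debug.<rank>.log` naming is an assumption), with the rank passed in explicitly so the example stays self-contained without MPI:

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Build a per-rank log filename, e.g. "debug.3.log" for rank 3,
// so that output from different processors never interleaves.
std::string log_filename(int rank)
{
  std::ostringstream name;
  name << "debug." << rank << ".log";
  return name.str();
}

// Write one tagged message to the rank's own file.  In a real MPI code
// 'rank' would come from processor_id()/MPI_Comm_rank, and you would keep
// the stream open instead of reopening per message.
void write_rank_log(int rank, const std::string & message)
{
  std::ofstream out(log_filename(rank).c_str());
  out << "[rank " << rank << "] " << message << '\n';
}
```

As Roy notes, timestamping each line (or inserting barrier() calls) is what lets you reconstruct a global ordering afterwards from the separate files.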
From: Jed Brown <jed@59...> - 2013-04-18 17:09:34

Manav Bhatia <bhatiamanav@...> writes:

> - How critical is this lack of conservation properties for general
> application of CFD for transonic and supersonic aerodynamic analyses?

Ah, a question for the ages. A lot of CFD engineers will say that local conservation is essential and they won't even consider a method that isn't. Most commercial CFD packages use finite volume methods for this reason.

At the other end of the spectrum are the FOSLS advocates that say the engineers just don't understand what they want and insist that conservation always sacrifices something else. Other least squares advocates (e.g. Pavel Bochev) insist that conservation is essential, but that you should get it by using compatible spaces in the LS formulation (erasing some notable benefits of LS in the process).

In the end, it depends on the problem.

> - Is this apparent non-ideal conservation behavior a primary reason
> for interest in DG vs continuous FEM?

Yes, conservation and stability. Linear DG satisfies a cell-wise entropy inequality so that it can often be used without limiting. Strict monotonicity is not practically available for continuous FEM.

> - The typical DG methods that I have seen do not add a stabilization
> within the domain, but handle the boundary terms to ensure continuity of
> flux across the discontinuous elements. However, a straightforward
> application of the Galerkin method is known to be unstable. So, I am
> assuming that the stabilization and non-oscillatory behavior comes from
> forgoing the requirement of continuity and the treatment of boundary
> terms. However, would this also not imply that if one continued to
> increase the element size while increasing the polynomial order, the
> solution within the element will at some point become unstable/oscillatory?

Yes, DG stability is only cell-wise. Normal flux limiting reduces the method to second order even in smooth regions. So if you go to high order, you need more exotic limiting to prevent internal oscillations (see Zhang and Shu 2010 and 2011).
From: Derek Gaston <friedmud@gm...> - 2013-04-18 16:59:42

And don't forget my favorite debugging tool:

sleep(libMesh::processor_id());
std::cerr << "Stuff!" << std::endl;

Yep, it's old school, but it really helps to order output when you're trying to track down some problem... JUST DON'T FORGET TO TAKE THOSE SLEEP STATEMENTS OUT BEFORE COMMITTING YOUR CODE BACK TO THE REPO! Yes, I speak from experience ;)

Derek

On Thu, Apr 18, 2013 at 8:12 AM, Cody Permann <codypermann@...> wrote:

> On Thu, Apr 18, 2013 at 8:00 AM, Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@...> wrote:
>
>> On Apr 18, 2013, at 8:33 AM, Manav Bhatia <bhatiamanav@...> wrote:
>>
>>> I am curious if there are any recommended practices and/or open source
>>> debugging tools for MPI codes. What are the tools used by the libMesh
>>> developers?
>>
>> Open source? Sadly, the best I've come up with is
>>
>> $ mpirun -np # ... --start_in_debugger
>>
>> then type 'c' in each window...
>>
>> Totalview is supposedly very capable, but not open source, and I've
>> never used it.
>>
>> Ben
>
> To answer this question, it's also useful to know what kinds of problems
> you are experiencing and at what scale. If you can reproduce issues with
> small numbers of processors (2-4), then Ben's method does indeed work
> fairly well and is what I use too. If you get to the point where you only
> see issues when you run on larger numbers of processors (64 - thousands),
> then you have to be more clever. I have a python script that logs into
> each node of a scheduled job and runs "pstack" or even just "gdb" with
> batch commands to get back stack traces of running processes. The script
> saves all this data to a file which can be reread several times to
> intelligently merge the stacks into unique sets after filtering out
> memory addresses and other extraneous information. This helps find bugs
> where certain processes fail to participate in global communication
> operations. We have Totalview, but it's really not all that great. The
> codebase is ancient, and they are focusing more on debugging accelerators
> these days than improving traditional CPU debugging. Licensing is also
> very expensive.
>
> Cody
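The merging step Cody describes (his actual tool is a Python script; the following is only a sketch of the idea, written in C++ to match the rest of the thread) boils down to normalizing away hex memory addresses and then grouping ranks whose filtered stack traces are identical:

```cpp
#include <cctype>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Replace every hex address like "0x7f3a12" with the placeholder "0x?",
// so that otherwise-identical stack traces compare equal.
std::string strip_addresses(const std::string & trace)
{
  std::string out;
  for (std::size_t i = 0; i < trace.size(); )
    {
      if (i + 1 < trace.size() && trace[i] == '0' && trace[i+1] == 'x')
        {
          i += 2;
          while (i < trace.size() &&
                 std::isxdigit(static_cast<unsigned char>(trace[i])))
            ++i;
          out += "0x?";
        }
      else
        out += trace[i++];
    }
  return out;
}

// Group ranks by their filtered trace: each unique stack maps to the
// list of ranks (here: indices into 'traces') that produced it.
std::map<std::string, std::vector<int> >
merge_stacks(const std::vector<std::string> & traces)
{
  std::map<std::string, std::vector<int> > groups;
  for (std::size_t rank = 0; rank < traces.size(); ++rank)
    groups[strip_addresses(traces[rank])].push_back(static_cast<int>(rank));
  return groups;
}
```

With the unique sets in hand, a rank stuck outside a collective operation shows up immediately as a singleton group.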
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@na...> - 2013-04-18 16:54:59

On Apr 18, 2013, at 10:37 AM, Manav Bhatia <bhatiamanav@...> wrote:

> That brings me to another question. I am looking at using higher order
> elements, and using this tau and delta leads to spurious oscillations in
> problems with shocks.

Delta for higher-order elements is definitely ongoing work. I can recommend a much superior tau, though, which I've switched to almost exclusively. See AIAA-2011-3411; it is directly applicable to higher-order elements. I'd be happy to talk about this some more offline if you are interested.

Ben
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 16:25:48

Hi Jed,

I greatly appreciate your comments. I would have missed these nuances that you pointed out, as I come from an engineering background and am trying to slowly bring myself up to speed with the mathematical aspects of FEM and CFD. So, all of this discussion is highly valuable and educational to me. I do have a few questions about what all of this might imply from a practical standpoint, that I hope you and the others may weigh in on.

- How critical is this lack of conservation properties for general application of CFD for transonic and supersonic aerodynamic analyses?

- Is this apparent non-ideal conservation behavior a primary reason for interest in DG vs continuous FEM?

- The typical DG methods that I have seen do not add a stabilization within the domain, but handle the boundary terms to ensure continuity of flux across the discontinuous elements. However, a straightforward application of the Galerkin method is known to be unstable. So, I am assuming that the stabilization and non-oscillatory behavior comes from forgoing the requirement of continuity and the treatment of boundary terms. However, would this also not imply that if one continued to increase the element size while increasing the polynomial order, the solution within the element will at some point become unstable/oscillatory?

- My previous question is motivated by an interest in moving to higher-order interpolation functions (read: isogeometric methods) for applications in design optimization of aerodynamic vehicles. There, I am interested in really large element sizes (if possible). From what I understand, the continuous FEM methods would be better suited than DG. But I don't know if the conservation properties that you pointed out might become an issue. I would appreciate your comments.

Thanks,
Manav

On Thu, Apr 18, 2013 at 10:30 AM, Jed Brown <jed@...> wrote:

> Manav Bhatia <bhatiamanav@...> writes:
>
>> Jed: I am curious about your comment on lack of conservation of the
>> GLS schemes. I did a bit of search and came across the following two
>> papers. They make a case for conservation properties of the methods. I
>> am curious what you think.
>
> Sure, I'm familiar with these papers.
>
>> Hughes, T. J. R., Engel, G., Mazzei, L., & Larson, M. G. (2000). The
>> Continuous Galerkin Method Is Locally Conservative. Journal of
>> Computational Physics, 163(2), 467-488. doi:10.1006/jcph.2000.6577
>>
>> Abstract: We examine the conservation law structure of the continuous
>> Galerkin method. We employ the scalar advection-diffusion equation as
>> a model problem for this purpose, but our results are quite general
>> and apply to time-dependent, nonlinear systems as well. In addition to
>> global conservation laws, we establish local conservation laws which
>> pertain to subdomains consisting of a union of elements as well as
>> individual elements. These results are somewhat surprising and
>> contradict the widely held opinion that the continuous Galerkin method
>> is not locally conservative.
>
> This paper changes the definition of local conservation. I wouldn't say
> it's "surprising" at all because it is exactly the conservation
> statement induced by the choice of test space. In essence, the
> continuous Galerkin conservation statement is smeared out over the width
> of one cell, whereas the DG or finite volume conservation statement has
> no such smearing. On coarse grids, one cell can be mighty big.
>
>> Hughes, T. J. R., & Wells, G. N. (2005). Conservation properties for
>> the Galerkin and stabilised forms of the advection-diffusion and
>> incompressible Navier-Stokes equations. Computer Methods in Applied
>> Mechanics and Engineering, 194(9-11), 1141-1159.
>> doi:10.1016/j.cma.2004.06.034
>>
>> Abstract: A common criticism of continuous Galerkin finite element
>> methods is their perceived lack of conservation. This may in fact be
>> true for incompressible flows when advective, rather than
>> conservative, weak forms are employed. However, advective forms are
>> often preferred on grounds of accuracy despite violation of
>> conservation. It is shown here that this deficiency can be easily
>> remedied, and conservative procedures for advective forms can be
>> developed from multiscale concepts. As a result, conservative
>> stabilised finite element procedures are presented for the
>> advection-diffusion and incompressible Navier-Stokes equations.
>
> This paper is specific to incompressible flow, but it's mostly
> investigating the "advective" form
>
> v \cdot \nabla v
>
> as compared to the divergence form
>
> \nabla \cdot (v \otimes v)
>
> With stabilization, they are able to make a weak conservation statement
> ("smeared" as in the other paper) using the advective form. Note that
> when using the identity
>
> \nabla \cdot (u \otimes a) = a \cdot \nabla u + u (\nabla \cdot a)
>
> where 'a' is a discrete velocity field, we rarely have that 'a' is
> exactly divergence-free. Indeed, it is generally only weakly
> divergence-free unless we use a stable element pair with a discontinuous
> pressure. Their analysis assumes that 'a' is exactly divergence-free
> and still only makes a weak conservation statement.
>
> Again, the difference between strong and weak conservation is more
> significant on coarse grids. With agglomeration-based multigrid (FV or
> DG), a coarse-grid cell satisfies exactly the same conservation
> statement as the corresponding agglomerated fine-grid cells.
>
> As an aside, we can see mass conservation problems already for Stokes.
> We need only choose a discontinuous body force (as in Rayleigh-Taylor
> initiation) or discontinuous viscosity to find a velocity field that has
> serious non-conservative artifacts on coarse grids when using stabilized
> finite elements. This is why finite element methods for problems like
> subduction must use stable finite element pairs with discontinuous
> pressure. (Some use almost-stable Q1-P0, but these still have
> problems.)
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 15:37:23

On Thu, Apr 18, 2013 at 11:06 AM, Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@...> wrote:

> Well you did say inviscid, correct? Then I would think the
> discretizations would be equivalent, modulo your choice of tau. Which is
> what exactly?

I am using the same tau definition as in your dissertation:

tau_mat = diag(tau_c, tau_m, tau_m, tau_m, tau_e)
tau_c = tau_m = tau_e = tau
tau = ( (2/dt)^2 + ( (2|u| + a) / h_u )^2 )^(-0.5)

The discontinuity-capturing operator, delta, is also the same as in your work.

That brings me to another question. I am looking at using higher-order elements, and using this tau and delta leads to spurious oscillations in problems with shocks. Do you have a recommendation on a better tau and delta definition for higher-order elements? I was considering deriving these matrices based on the residual-free bubbles approach, but could also experiment with definitions.

Manav
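Set in display form, the tau definition above reads as follows. This is a reconstruction: the archive flattened the formula (and appears to have dropped a minus sign from the exponent, since the usual Shakib-style definition is an inverse square root), so take the exact exponent with that caveat.

```latex
\tau_{\text{mat}} = \operatorname{diag}(\tau_c, \tau_m, \tau_m, \tau_m, \tau_e),
\qquad
\tau_c = \tau_m = \tau_e = \tau,
\qquad
\tau = \left[\left(\frac{2}{\Delta t}\right)^{2}
     + \left(\frac{2\,\lvert u\rvert + a}{h_u}\right)^{2}\right]^{-1/2}
```

Here, following the message's notation, |u| is the local velocity magnitude, a the sound speed, h_u the element length scale, and dt the time step.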
From: Roy Stogner <roystgnr@ic...> - 2013-04-18 15:25:11

On Thu, 18 Apr 2013, Kirk, Benjamin (JSC-EG311) wrote:

> On Apr 18, 2013, at 9:49 AM, Roy Stogner <roystgnr@...> wrote:
>
>> I don't know how well Ben's and my experiences will carry over to your
>> code - we're using SUPG rather than GLS, so we ought to be getting a
>> better conditioned matrix at the cost of more finicky stability.
>> Don't GLS matrix condition numbers grow like h^4 instead of h^2?
>
> Well you did say inviscid, correct? Then I would think the
> discretizations would be equivalent, modulo your choice of tau.

He did, and I didn't consider the implications of that. Pay no attention to my above paragraph; I've been spending too much time addling my brain staring at tiny viscous boundary layers.
---
Roy
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@na...> - 2013-04-18 15:07:18

On Apr 18, 2013, at 9:49 AM, Roy Stogner <roystgnr@...> wrote:

> On Thu, 18 Apr 2013, Manav Bhatia wrote:
>
>> I use a constant time step of 0.05 sec with a backwards Euler
>> implicit time scheme. Of course, the local CFL increases to
>> large numbers as the mesh is refined, since I do not change the
>> time step.
>>
>> Do you typically change the time step post refinement? Or use
>> local time stepping?
>
> I don't know how well Ben's and my experiences will carry over to your
> code - we're using SUPG rather than GLS, so we ought to be getting a
> better conditioned matrix at the cost of more finicky stability.
> Don't GLS matrix condition numbers grow like h^4 instead of h^2?

Well you did say inviscid, correct? Then I would think the discretizations would be equivalent, modulo your choice of tau. Which is what exactly?

> But with that in mind: I definitely reduce the time step
> post-refinement, and not just by a factor of 2. Otherwise it gets too
> hard for our scheme to handle sudden sharpening of shocks in
> hypersonic real gas flows. Usually our adaptive time stepper ramps up
> the time step afterwards pretty aggressively, though, successfully.
> Also, problems with no shock or with no sensitive chemistry are more
> robust.

If you have shocks, this is very important. The problem is that for inviscid flows shocks are self-similar under refinement: if you smear the shock over three cells on the coarse grid, you expect it will be smeared over three cells on the fine grid too. When you interpolate coarse-to-fine, at the first time step your shock will be spread across 3 -> 6 cells, and a large disturbance will develop there that has to be dealt with. This is why we often reduce the time step immediately after refinement.

Ben
From: Roy Stogner <roystgnr@ic...> - 2013-04-18 14:49:56

On Thu, 18 Apr 2013, Manav Bhatia wrote:

> I use a constant time step of 0.05 sec with a backwards Euler
> implicit time scheme. Of course, the local CFL increases to
> large numbers as the mesh is refined, since I do not change the
> time step.
>
> Do you typically change the time step post refinement? Or use
> local time stepping?

I don't know how well Ben's and my experiences will carry over to your
code -- we're using SUPG rather than GLS, so we ought to be getting a
better conditioned matrix at the cost of more finicky stability.
Don't GLS matrix condition numbers grow like h^4 instead of h^2?

But with that in mind: I definitely reduce the time step
post-refinement, and not just by a factor of 2. Otherwise it gets too
hard for our scheme to handle sudden sharpening of shocks in
hypersonic real gas flows. Usually our adaptive time stepper ramps the
time step back up afterwards pretty aggressively, though, successfully.
Also, problems with no shock or with no sensitive chemistry are more
robust.

I'd be a little surprised if the time step size was the root of your
linear solver convergence problem, but that's such an easy thing to
check that I would definitely give a reduced time step a try.
--
Roy
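Roy's recipe -- cut the step hard after each refinement, then let the
adaptive stepper ramp it back up -- could be sketched roughly as below.
The function names, the cut factor of 10, and the dt cap are my own
hypothetical illustrations, not libMesh's actual time stepper:

```python
def reduce_dt_after_refinement(dt, factor=10.0):
    """Cut dt well beyond the nominal factor of 2 per h-halving,
    so a suddenly-sharpened shock stays tractable (factor is a guess)."""
    return dt / factor

def ramp_dt(dt, growth=2.0, dt_max=0.05):
    """Aggressively grow the step back toward its cap between refinements."""
    return min(dt * growth, dt_max)

# Hypothetical usage: after an AMR step, drop dt hard, then recover it
# over the following time steps.
dt = 0.05
dt = reduce_dt_after_refinement(dt)   # dt drops by 10x
for _ in range(4):
    dt = ramp_dt(dt)                  # doubles each step until capped
```

The key point from the thread is only the shape of the controller: the
post-refinement cut is much larger than the ramp-up increments.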
From: Jed Brown <jed@59...> - 2013-04-18 14:30:16

Manav Bhatia <bhatiamanav@...> writes:

> Jed: I am curious about your comment on lack of conservation of the
> GLS schemes. I did a bit of search and came across the following two
> papers. They make a case for conservation properties of the methods. I
> am curious what you think.

Sure, I'm familiar with these papers.

> Hughes, T. J. R., Engel, G., Mazzei, L., & Larson, M. G. (2000). The
> Continuous Galerkin Method Is Locally Conservative. Journal of
> Computational Physics, 163(2), 467–488. doi:10.1006/jcph.2000.6577
>
> Abstract: We examine the conservation law structure of the continuous
> Galerkin method. We employ the scalar, advection–diffusion equation as
> a model problem for this purpose, but our results are quite general
> and apply to time-dependent, nonlinear systems as well. In addition to
> global conservation laws, we establish local conservation laws which
> pertain to subdomains consisting of a union of elements as well as
> individual elements. These results are somewhat surprising and
> contradict the widely held opinion that the continuous Galerkin method
> is not locally conservative.

This paper changes the definition of local conservation. I wouldn't say
it's "surprising" at all, because it is exactly the conservation
statement induced by the choice of test space. In essence, the
continuous Galerkin conservation statement is smeared out over the
width of one cell, whereas the DG or finite volume conservation
statement has no such smearing. On coarse grids, one cell can be mighty
big.

> Hughes, T. J. R., & Wells, G. N. (2005). Conservation properties for
> the Galerkin and stabilised forms of the advection–diffusion and
> incompressible Navier–Stokes equations. Computer Methods in Applied
> Mechanics and Engineering, 194(9–11), 1141–1159.
> doi:10.1016/j.cma.2004.06.034
>
> Abstract: A common criticism of continuous Galerkin finite element
> methods is their perceived lack of conservation. This may in fact be
> true for incompressible flows when advective, rather than
> conservative, weak forms are employed. However, advective forms are
> often preferred on grounds of accuracy despite violation of
> conservation. It is shown here that this deficiency can be easily
> remedied, and conservative procedures for advective forms can be
> developed from multiscale concepts. As a result, conservative
> stabilised finite element procedures are presented for the
> advection–diffusion and incompressible Navier–Stokes equations.

This paper is specific to incompressible flow, but it's mostly
investigating the "advective" form

  v \cdot \nabla v

as compared to the divergence form

  \nabla \cdot (v \otimes v)

With stabilization, they are able to make a weak conservation statement
("smeared", as in the other paper) using the advective form. Note that
when using the identity

  \nabla \cdot (u \otimes a) = a \cdot \nabla u + u (\nabla \cdot a)

where 'a' is a discrete velocity field, we rarely have that 'a' is
exactly divergence-free. Indeed, it is generally only weakly
divergence-free unless we use a stable element pair with a
discontinuous pressure. Their analysis assumes that 'a' is exactly
divergence-free and still only makes a weak conservation statement.

Again, the difference between strong and weak conservation is more
significant on coarse grids. With agglomeration-based multigrid (FV or
DG), a coarse-grid cell satisfies exactly the same conservation
statement as the corresponding agglomerated fine-grid cells.

As an aside, we can see mass conservation problems already for Stokes.
We need only choose a discontinuous body force (as in Rayleigh-Taylor
initiation) or a discontinuous viscosity to find a velocity field that
has serious non-conservative artifacts on coarse grids when using
stabilized finite elements. This is why finite element methods for
problems like subduction must use stable finite element pairs with
discontinuous pressure. (Some use almost-stable Q1-P0, but these still
have problems.)
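The product-rule identity Jed invokes is easy to sanity-check
numerically. This quick finite-difference check (my own illustration,
with arbitrary smooth fields) confirms, for a scalar u and a
non-divergence-free velocity a, that div(u a) = a . grad(u) + u div(a):

```python
import math

# Smooth test fields: scalar u and a deliberately non-divergence-free
# velocity a = (a1, a2).
u  = lambda x, y: math.sin(x) * math.cos(y)
a1 = lambda x, y: x * y
a2 = lambda x, y: math.exp(x) + y ** 2

h = 1e-5  # central-difference step

def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)

def lhs(x, y):
    # div(u * a)
    return (dx(lambda s, t: u(s, t) * a1(s, t), x, y)
            + dy(lambda s, t: u(s, t) * a2(s, t), x, y))

def rhs(x, y):
    # a . grad(u) + u * div(a)
    return (a1(x, y) * dx(u, x, y) + a2(x, y) * dy(u, x, y)
            + u(x, y) * (dx(a1, x, y) + dy(a2, x, y)))
```

The two sides agree to finite-difference accuracy at any test point;
Jed's point is that the *discrete* 'a' is only weakly divergence-free,
so the u*div(a) term does not vanish element-by-element.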
From: Cody Permann <codypermann@gm...> - 2013-04-18 14:12:42

On Thu, Apr 18, 2013 at 8:00 AM, Kirk, Benjamin (JSC-EG311)
<benjamin.kirk1@...> wrote:

> On Apr 18, 2013, at 8:33 AM, Manav Bhatia <bhatiamanav@...> wrote:
>
>> I am curious if there are any recommended practices and/or open
>> source debugging tools for MPI codes. What are the tools used by the
>> libMesh developers?
>
> Open source? Sadly, the best I've come up with is
>
> $ mpirun -np # … --start_in_debugger
>
> then type 'c' in each window…
>
> Totalview is supposedly very capable, but not open source -- and I've
> never used it.
>
> Ben

To answer this question, it's also useful to know what kinds of
problems you are experiencing and at what scale. If you can reproduce
issues with small numbers of processors (2-4), then Ben's method does
indeed work fairly well and is what I use too.

If you get to the point where you only see issues when you run on a
larger number of processors (64 to thousands), then you have to be more
clever. I have a Python script that logs into each node of a scheduled
job and runs "pstack", or even just "gdb" with batch commands, to get
back stack traces of the running processes. The script saves all this
data to a file, which can be re-read several times to intelligently
merge the stacks into unique sets after filtering out memory addresses
and other extraneous information. This helps find bugs where certain
processes fail to participate in global communication operations.

We have Totalview, but it's really not all that great. The codebase is
ancient, and they are focusing more on debugging accelerators these
days than on improving traditional CPU debugging. Licensing is also
very expensive.

Cody
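The merge step Cody describes -- strip out per-process noise like hex
addresses, then group identical stacks -- might look roughly like the
sketch below. The helper names are hypothetical; his actual script also
handles logging into nodes and collecting the pstack/gdb output:

```python
import re
from collections import defaultdict

def normalize(stack):
    """Replace hex addresses with a placeholder so identical call
    paths from different processes compare equal."""
    return re.sub(r'0x[0-9a-fA-F]+', '0xXXXX', stack).strip()

def merge_stacks(traces):
    """Group ranks by their normalized stack trace.
    `traces` maps MPI rank -> raw stack trace text."""
    groups = defaultdict(list)
    for rank, raw in sorted(traces.items()):
        groups[normalize(raw)].append(rank)
    return dict(groups)

# Made-up traces: ranks 0 and 2 are blocked in the same collective while
# rank 1 is elsewhere -- the classic signature of a process failing to
# participate in a global communication operation.
traces = {
    0: "#0 0x7f01 MPI_Allreduce\n#1 0x7f02 solve",
    1: "#0 0x7a99 compute_residual",
    2: "#0 0x7b01 MPI_Allreduce\n#1 0x7b02 solve",
}
groups = merge_stacks(traces)
```

With thousands of ranks, the unique-set view collapses the output to a
handful of distinct stacks, which is what makes the mismatch visible.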
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@na...> - 2013-04-18 14:01:00

On Apr 18, 2013, at 8:33 AM, Manav Bhatia <bhatiamanav@...> wrote:

> I am curious if there are any recommended practices and/or open source
> debugging tools for MPI codes. What are the tools used by the libMesh
> developers?

Open source? Sadly, the best I've come up with is

$ mpirun -np # … --start_in_debugger

then type 'c' in each window…

Totalview is supposedly very capable, but not open source -- and I've
never used it.

Ben
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 13:33:55

Hi,

I am curious if there are any recommended practices and/or open source
debugging tools for MPI codes. What are the tools used by the libMesh
developers?

Thanks,
Manav
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 13:21:38

Hi Ben,

I use a constant time step of 0.05 sec with a backwards Euler implicit
time scheme. Of course, the local CFL increases to large numbers as the
mesh is refined, since I do not change the time step.

Do you typically change the time step post refinement? Or use local
time stepping?

The far-field boundary conditions I use are based on the Riemann
invariants, so I account for the components of flux coming in versus
leaving the flow domain via splitting the Jacobian. The solid wall
boundary conditions are enforced implicitly by modifying the flux at
the wall through v \cdot n = 0 (I am working with inviscid flow), and
then I create a Jacobian for the modified flux term. So, I don't think
this could be an issue.

Jed: I am curious about your comment on lack of conservation of the GLS
schemes. I did a bit of search and came across the following two
papers. They make a case for conservation properties of the methods. I
am curious what you think.

Hughes, T. J. R., Engel, G., Mazzei, L., & Larson, M. G. (2000). The
Continuous Galerkin Method Is Locally Conservative. Journal of
Computational Physics, 163(2), 467–488. doi:10.1006/jcph.2000.6577

Abstract: We examine the conservation law structure of the continuous
Galerkin method. We employ the scalar, advection–diffusion equation as
a model problem for this purpose, but our results are quite general and
apply to time-dependent, nonlinear systems as well. In addition to
global conservation laws, we establish local conservation laws which
pertain to subdomains consisting of a union of elements as well as
individual elements. These results are somewhat surprising and
contradict the widely held opinion that the continuous Galerkin method
is not locally conservative.

Hughes, T. J. R., & Wells, G. N. (2005). Conservation properties for
the Galerkin and stabilised forms of the advection–diffusion and
incompressible Navier–Stokes equations. Computer Methods in Applied
Mechanics and Engineering, 194(9–11), 1141–1159.
doi:10.1016/j.cma.2004.06.034

Abstract: A common criticism of continuous Galerkin finite element
methods is their perceived lack of conservation. This may in fact be
true for incompressible flows when advective, rather than conservative,
weak forms are employed. However, advective forms are often preferred
on grounds of accuracy despite violation of conservation. It is shown
here that this deficiency can be easily remedied, and conservative
procedures for advective forms can be developed from multiscale
concepts. As a result, conservative stabilised finite element
procedures are presented for the advection–diffusion and incompressible
Navier–Stokes equations.

Manav

On Apr 18, 2013, at 7:25 AM, "Kirk, Benjamin (JSC-EG311)"
<benjamin.kirk1@...> wrote:

> On Apr 17, 2013, at 11:28 PM, "Manav Bhatia" <bhatiamanav@...> wrote:
>
>>> It's usually preferable to order your unknowns so that the fields
>>> are interlaced, with all values at a node contiguous.
>>
>> I will certainly give these a shot tomorrow. Do you know if these
>> require any other modification in my code/libMesh, or providing the
>> command line options would be enough?
>
> Jed's command line option should work for older versions of libMesh,
> but if you are using 0.9.0 or newer the interlaced variables should be
> automatic if the DofMap properly discovers one "VariableGroup".
>
> What CFL numbers are you running at?
>
> My experience with compressible Navier-Stokes (without multigrid) is
> that convergence is usually very good provided the numerical time step
> is less than the characteristic time of the flow (l_ref/U_ref). Going
> much higher than that for supersonic flows can give fits unless the
> boundary conditions are linearized perfectly. Given that the subsonic
> BCs are more complex, I thought I'd mention this.
>
> Ben
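For a scalar model problem, the Jacobian splitting Manav describes
reduces to upwinding on the characteristic speed: A = A^+ + A^-, with
outgoing information taken from the interior and incoming information
from the far field. A minimal sketch (my own illustration; his Euler
implementation would apply the split to the eigendecomposition of the
flux Jacobian, not a scalar speed):

```python
def farfield_flux(a, u_interior, u_farfield):
    """Characteristic far-field flux for the scalar law u_t + (a u)_x = 0
    at a right boundary: outgoing waves (a > 0) use the interior state,
    incoming waves (a < 0) use the prescribed far-field state."""
    a_plus = max(a, 0.0)    # outgoing part of the "Jacobian" split
    a_minus = min(a, 0.0)   # incoming part
    return a_plus * u_interior + a_minus * u_farfield

# Supersonic-outflow analogue: everything leaves, the far-field value
# is never consulted.
flux_out = farfield_flux(2.0, 1.5, 9.9)
# Inflow analogue: the boundary state comes entirely from the far field.
flux_in = farfield_flux(-2.0, 9.9, 1.5)
```

Because the split flux is a smooth function of the interior state, it
can be linearized consistently for the implicit Jacobian, which is the
point of doing the BC this way.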
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@na...> - 2013-04-18 11:25:34

On Apr 17, 2013, at 11:28 PM, "Manav Bhatia" <bhatiamanav@...> wrote:

>> It's usually preferable to order your unknowns so that the fields are
>> interlaced, with all values at a node contiguous.
>
> I will certainly give these a shot tomorrow. Do you know if these
> require any other modification in my code/libMesh, or providing the
> command line options would be enough?

Jed's command line option should work for older versions of libMesh,
but if you are using 0.9.0 or newer the interlaced variables should be
automatic if the DofMap properly discovers one "VariableGroup".

What CFL numbers are you running at?

My experience with compressible Navier-Stokes (without multigrid) is
that convergence is usually very good provided the numerical time step
is less than the characteristic time of the flow (l_ref/U_ref). Going
much higher than that for supersonic flows can give fits unless the
boundary conditions are linearized perfectly. Given that the subsonic
BCs are more complex, I thought I'd mention this.

Ben
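Ben's rule of thumb -- keep the time step below the flow-through time
l_ref/U_ref -- is a one-line check. The numbers below are placeholders
I picked for illustration, not anything from his cases:

```python
def characteristic_time(l_ref, u_ref):
    """Flow-through time of the reference length, l_ref / U_ref."""
    return l_ref / u_ref

def dt_ok(dt, l_ref, u_ref):
    """Ben's heuristic: implicit solver convergence is usually good
    while the numerical time step stays below l_ref / U_ref."""
    return dt < characteristic_time(l_ref, u_ref)

# Hypothetical numbers: 1 m reference length, ~680 m/s (roughly Mach 2
# in sea-level air), so the characteristic time is about 1.5 ms.
print(dt_ok(1e-3, 1.0, 680.0))   # dt = 1 ms: below the threshold
print(dt_ok(5e-3, 1.0, 680.0))   # dt = 5 ms: too large by this rule
```

Note this compares against the *global* flow-through time; the thread's
earlier point is that the local cell CFL grows under refinement even
when this global check still passes.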