From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 03:58:47

Hi,

I am writing to seek some advice about use of iterative solvers with h-refinement.

My application is a GLS solver for compressible Euler equations, and I am running 2D and 3D simulations at transonic Mach numbers. The initial mesh in 3D analysis has ~2 million tetrahedra, and the h-refinement process increases this to ~8 million tetrahedra. I am using the standard PETSc GMRES solver with block Jacobi preconditioner (the default).

The challenge that I am facing is that with mesh refinement the iterative linear solver takes a large number of iterations to converge, and even with 20,000 iterations the residual is, at times, of order 10^-4. I increased the Krylov subspace dimension from the default value of 30 to ~250, and this took care of most of my 2D simulations with refinement. However, this continues to be a problem in 3D. I can certainly increase this subspace to higher numbers (~1000), but I want to make sure that I am not going out of the norm here. I suspect that this might be too high a number, but I don't have enough experience to say for sure. I also highly doubt that there is a problem with the code.

So, a few questions:
- I am curious about the experience that others have with such applications.
- What is the typical Krylov subspace dimension that people use for such problems?
- Are there other built-in preconditioners in PETSc that could do a better job?
- Are other custom preconditioners expected to perform better? If so, which ones?

I would appreciate any comments on this.

Thanks,
Manav
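Since the question above turns on how large the GMRES restart length (the Krylov subspace dimension) should be, here is a minimal numpy sketch of restarted GMRES(m) - not PETSc's implementation, just an illustration of the algorithm, with hypothetical names - that can be used to see how the restart length m trades memory for robustness on a small test matrix:

```python
import numpy as np

def gmres_restarted(A, b, m, max_restarts=50, tol=1e-8):
    """Restarted GMRES(m): build an m-dimensional Krylov basis via
    Arnoldi, solve a small dense least-squares problem, and restart
    until the residual norm drops below tol."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1))        # Arnoldi basis
        H = np.zeros((m + 1, m))        # upper Hessenberg projection
        Q[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):      # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w = w - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:     # happy breakdown: exact subspace
                k = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        # minimize ||beta*e1 - H y|| and expand back to the full space
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x
```

Each restart throws away the accumulated Krylov basis, which is what makes small m cheap in memory but slow (or stagnant) on difficult spectra - the effect described above when going from 30 to ~250.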
From: Jed Brown <jed@59...> - 2013-04-18 04:14:38

Manav Bhatia <bhatiamanav@...> writes:
> Hi,
>
> I am writing to seek some advice about use of iterative solvers
> with h-refinement.
>
> My application is a GLS solver for compressible Euler equations,
> and I am running 2D and 3D simulations at transonic Mach
> numbers. The initial mesh in 3D analysis has ~2 million tetrahedra,
> and the h-refinement process increases this to ~8 million
> tetrahedra. I am using the standard PETSc GMRES solver with block
> Jacobi preconditioner (the default).

Are these steady-state computations or finite time step?

> So, a few questions:
> - I am curious about the experience that others have with such applications?
> - What is the typical Krylov subspace dimension that people use for such problems?
> - Are there other built-in preconditioners in PETSc that could do a better job?

Try increasing the overlap slightly '-pc_type asm -pc_asm_overlap 1' (and 2). Perhaps also combine with '-sub_pc_factor_levels 1' (and 2). Let us know if these make a useful difference.

It's usually preferable to order your unknowns so that the fields are interlaced, with all values at a node contiguous.

> - Are other custom preconditioners expected to perform better? If so, which ones?

It's hard to beat nonlinear multigrid (FAS) for steady-state problems of this variety, but those depend on having an accurate coarse grid discretization. The favorite for that has always been to use cell-centered (finite volume) discretizations and use agglomeration to produce a coarse operator. A nonlinear relaxation process is of less value for finite element methods because it's relatively more expensive than with FV/FD discretizations.

Fieldsplit preconditioners can also be useful, but most will only work properly for low Mach.
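For readers unfamiliar with what the overlap option changes: here is a small numpy sketch (hypothetical helper, not PETSc code) of one additive Schwarz application. Each block of unknowns is extended by `overlap` indices on either side, the local problem is solved exactly, and overlapping corrections are summed; overlap 0 with exact local solves recovers block Jacobi.

```python
import numpy as np

def asm_apply(A, r, blocks, overlap):
    """One additive Schwarz application z ~= A^{-1} r.
    `blocks` are half-open index ranges [lo, hi); each is extended by
    `overlap` on both sides, the local submatrix is solved exactly,
    and contributions are summed where subdomains overlap."""
    n = len(r)
    z = np.zeros_like(r, dtype=float)
    for lo, hi in blocks:
        lo_e, hi_e = max(0, lo - overlap), min(n, hi + overlap)
        idx = np.arange(lo_e, hi_e)
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z
```

With a single block covering all unknowns this is an exact solve; increasing `overlap` lets information cross subdomain boundaries within one application, which is why a small overlap often helps convergence noticeably.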
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 04:28:39

On Apr 18, 2013, at 12:14 AM, Jed Brown <jed@...> wrote:

>> My application is a GLS solver for compressible Euler equations,
>> and I am running 2D and 3D simulations at transonic Mach
>> numbers. The initial mesh in 3D analysis has ~2 million tetrahedra,
>> and the h-refinement process increases this to ~8 million
>> tetrahedra. I am using the standard PETSc GMRES solver with block
>> Jacobi preconditioner (the default).
>
> Are these steady-state computations or finite time step?

I am time marching to steady state.

>> So, a few questions:
>> - I am curious about the experience that others have with such applications?
>> - What is the typical Krylov subspace dimension that people use for such problems?
>> - Are there other built-in preconditioners in PETSc that could do a better job?
>
> Try increasing the overlap slightly '-pc_type asm -pc_asm_overlap 1'
> (and 2). Perhaps also combine with '-sub_pc_factor_levels 1' (and 2).
> Let us know if these make a useful difference.
>
> It's usually preferable to order your unknowns so that the fields are
> interlaced, with all values at a node contiguous.

I will certainly give these a shot tomorrow. Do you know if these require any other modification in my code/libMesh, or would providing the command line options be enough?

>> - Are other custom preconditioners expected to perform better? If so, which ones?
>
> It's hard to beat nonlinear multigrid (FAS) for steady-state problems of
> this variety, but those depend on having an accurate coarse grid
> discretization. The favorite for that has always been to use
> cell-centered (finite volume) discretizations and use agglomeration to
> produce a coarse operator. A nonlinear relaxation process is of less
> value for finite element methods because it's relatively more expensive
> than with FV/FD discretizations.
>
> Fieldsplit preconditioners can also be useful, but most will only work
> properly for low Mach.

I have been thinking of doing something with multigrid, but am not sure if something along those lines is currently feasible with the libMesh<->PETSc interface. Currently I don't have much experience with multigrid, so I am curious to see what is possible. Do you think that somehow going between the original mesh and the refined mesh would provide some reasonable form of multigrid?

Manav
From: Jed Brown <jed@59...> - 2013-04-18 04:39:51

Manav Bhatia <bhatiamanav@...> writes:
> I will certainly give these a shot tomorrow. Do you know if these
> require any other modification in my code/libMesh, or providing the
> command line options would be enough?

Try --node_major_dofs

> I have been thinking of doing something with multigrid, but am not
> sure if something along those lines is currently feasible with the
> libMesh<->PETSc interface. Currently I don't have much experience
> with multigrid, so I am curious to see what is possible. Do you think
> that somehow going between the original mesh and the refined mesh
> would provide some reasonable form of multigrid?

Multigrid for transonic flow is technical to implement. I don't have much experience with GLS, but from what I've seen, its accuracy is not very good on coarse grids (including the usual lack of conservation). That is a big problem for multigrid, so unless you can use the fine grid to numerically homogenize, producing accurate coarse grid equations (I don't know how to do this for transonic flow), MG convergence will not be very good. OTOH, even a crummy MG algorithm may be better than one-level methods.
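Multigrid for transonic flow is hard, but the mechanics referred to above - smoothing on the fine grid plus a coarse-grid correction - can be seen in a toy setting. Here is a sketch of a two-grid cycle for the 1D Poisson problem, using the textbook choices (weighted Jacobi smoother, linear interpolation, full-weighting restriction); nothing here is specific to libMesh, PETSc, or GLS:

```python
import numpy as np

def two_grid(b, n, nu=3, cycles=10):
    """Two-grid correction scheme for -u'' = f on (0,1) with homogeneous
    Dirichlet BCs: weighted-Jacobi pre/post smoothing plus an exact
    coarse-grid correction built from a Galerkin coarse operator."""
    h = 1.0 / (n + 1)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    nc = (n - 1) // 2                       # coarse interior points (n odd)
    P = np.zeros((n, nc))                   # linear interpolation
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j] = 0.5; P[i, j] = 1.0; P[i + 1, j] = 0.5
    R = 0.5 * P.T                           # full-weighting restriction
    Ac = R @ A @ P                          # Galerkin coarse operator
    x = np.zeros(n)
    for _ in range(cycles):
        for _ in range(nu):                 # pre-smooth: weighted Jacobi
            x += (2.0 / 3.0) * (h**2 / 2.0) * (b - A @ x)
        x += P @ np.linalg.solve(Ac, R @ (b - A @ x))   # coarse correction
        for _ in range(nu):                 # post-smooth
            x += (2.0 / 3.0) * (h**2 / 2.0) * (b - A @ x)
    return x, A
```

The smoother kills oscillatory error components, the coarse solve kills smooth ones; the convergence rate is then independent of n. The difficulty flagged above is that for transonic flow the coarse operator `Ac` is not an accurate discretization, which is exactly what breaks this clean picture.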
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...> - 2013-04-18 11:25:34

On Apr 17, 2013, at 11:28 PM, "Manav Bhatia" <bhatiamanav@...> wrote:

>> It's usually preferable to order your unknowns so that the fields are
>> interlaced, with all values at a node contiguous.
>
> I will certainly give these a shot tomorrow. Do you know if these require any other modification in my code/libMesh, or providing the command line options would be enough?

Jed's command line option should work for older versions of libMesh, but if you are using 0.9.0 or newer the interlaced variables should be automatic if the DofMap properly discovers one "VariableGroup".

What types of CFL numbers are you running at?

My experience with compressible Navier-Stokes (without multigrid) is that convergence is usually very good provided the numerical time step is less than the characteristic time of the flow (l_ref/U_ref). Going much higher than that for supersonic flows can give fits unless the boundary conditions are linearized perfectly. Given that the subsonic BCs are more complex, I thought I'd mention this.

Ben
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 13:21:38

Hi Ben,

I use a constant time step of 0.05 sec with a backward Euler implicit time scheme. Of course, the local CFL increases to large numbers as the mesh is refined, since I do not change the time step. Do you typically change the time step post refinement? Or use local time stepping?

The far-field boundary conditions I use are based on the Riemann invariants, so I account for the components of flux coming in versus leaving the flow domain via splitting the Jacobian. The solid wall boundary conditions are enforced implicitly by modifying the flux at the wall through v \cdot n = 0 (I am working with inviscid flow), and then I create a Jacobian for the modified flux term. So, I don't think this could be an issue.

Jed: I am curious about your comment on lack of conservation of the GLS schemes. I did a bit of search and came across the following two papers. They make a case for conservation properties of the methods. I am curious what you think.

Hughes, T. J. R., Engel, G., Mazzei, L., & Larson, M. G. (2000). The Continuous Galerkin Method Is Locally Conservative. Journal of Computational Physics, 163(2), 467–488. doi:10.1006/jcph.2000.6577

Abstract: We examine the conservation law structure of the continuous Galerkin method. We employ the scalar, advection-diffusion equation as a model problem for this purpose, but our results are quite general and apply to time-dependent, nonlinear systems as well. In addition to global conservation laws, we establish local conservation laws which pertain to subdomains consisting of a union of elements as well as individual elements. These results are somewhat surprising and contradict the widely held opinion that the continuous Galerkin method is not locally conservative.

Hughes, T. J. R., & Wells, G. N. (2005). Conservation properties for the Galerkin and stabilised forms of the advection-diffusion and incompressible Navier-Stokes equations. Computer Methods in Applied Mechanics and Engineering, 194(9–11), 1141–1159. doi:10.1016/j.cma.2004.06.034

Abstract: A common criticism of continuous Galerkin finite element methods is their perceived lack of conservation. This may in fact be true for incompressible flows when advective, rather than conservative, weak forms are employed. However, advective forms are often preferred on grounds of accuracy despite violation of conservation. It is shown here that this deficiency can be easily remedied, and conservative procedures for advective forms can be developed from multiscale concepts. As a result, conservative stabilised finite element procedures are presented for the advection-diffusion and incompressible Navier-Stokes equations.

Manav
From: Jed Brown <jed@59...> - 2013-04-18 14:30:16

Manav Bhatia <bhatiamanav@...> writes:
> Jed: I am curious about your comment on lack of conservation of the
> GLS schemes. I did a bit of search and came across the following two
> papers. They make a case for conservation properties of the methods. I
> am curious what you think.

Sure, I'm familiar with these papers.

> Hughes, T. J. R., Engel, G., Mazzei, L., & Larson, M. G. (2000). The
> Continuous Galerkin Method Is Locally Conservative. Journal of
> Computational Physics, 163(2), 467–488. doi:10.1006/jcph.2000.6577

This paper changes the definition of local conservation. I wouldn't say it's "surprising" at all because it is exactly the conservation statement induced by the choice of test space. In essence, the continuous Galerkin conservation statement is smeared out over the width of one cell whereas the DG or finite volume conservation statement has no such smearing. On coarse grids, one cell can be mighty big.

> Hughes, T. J. R., & Wells, G. N. (2005). Conservation properties for
> the Galerkin and stabilised forms of the advection-diffusion and
> incompressible Navier-Stokes equations. Computer Methods in Applied
> Mechanics and Engineering, 194(9–11), 1141–1159. doi:10.1016/j.cma.2004.06.034

This paper is specific to incompressible flow, but it's mostly investigating the "advective" form

  v \cdot \nabla v

as compared to the divergence form

  \nabla \cdot (v \otimes v)

With stabilization, they are able to make a weak conservation statement ("smeared" as in the other paper) using the advective form. Note that when using the identity

  \nabla \cdot (u \otimes a) = a \cdot \nabla u + u (\nabla \cdot a)

where 'a' is a discrete velocity field, we rarely have that 'a' is exactly divergence free. Indeed, it is generally only weakly divergence-free unless we use a stable element pair with a discontinuous pressure. Their analysis assumes that 'a' is exactly divergence-free and still only makes a weak conservation statement.

Again, the difference between strong and weak conservation is more significant on coarse grids. With agglomeration-based multigrid (FV or DG), a coarse-grid cell satisfies exactly the same conservation statement as the corresponding agglomerated fine-grid cells.

As an aside, we can see mass conservation problems already for Stokes. We need only choose a discontinuous body force (as in Rayleigh-Taylor initiation) or discontinuous viscosity to find a velocity field that has serious non-conservative artifacts on coarse grids when using stabilized finite elements. This is why finite element methods for problems like subduction must use stable finite element pairs with discontinuous pressure. (Some use almost-stable Q1-P0, but these still have problems.)
From: Roy Stogner <roystgnr@ic...> - 2013-04-18 14:49:56

On Thu, 18 Apr 2013, Manav Bhatia wrote:

> I use a constant time step of 0.05 sec with a backward Euler
> implicit time scheme. Of course, the local CFL increases to
> large numbers as the mesh is refined, since I do not change the
> time step.
>
> Do you typically change the time step post refinement? Or use
> local time stepping?

I don't know how well Ben's and my experiences will carry over to your code - we're using SUPG rather than GLS, so we ought to be getting a better conditioned matrix at the cost of more finicky stability. Don't GLS matrix condition numbers grow like h^-4 instead of h^-2?

But with that in mind: I definitely reduce the time step post-refinement, and not just by a factor of 2. Otherwise it gets too hard for our scheme to handle sudden sharpening of shocks in hypersonic real gas flows. Usually our adaptive time stepper ramps the time step back up pretty aggressively afterwards, though, successfully. Also, problems with no shock or with no sensitive chemistry are more robust.

I'd be a little surprised if the time step size was the root of your linear solver convergence problem, but that's such an easy thing to check that I would definitely give a reduced time step a try.
---
Roy
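The reduce-then-ramp strategy described above can be captured by a very simple controller. This is a hypothetical sketch of the policy (cut the step hard after refinement, shrink it on failed steps, otherwise grow it geometrically back toward a ceiling), not code from any of the solvers in this thread; all parameter values are illustrative:

```python
def next_dt(dt, refined_this_step, converged,
            cut=0.1, growth=1.5, shrink=0.5, dt_max=0.05):
    """Toy time-step controller: cut hard right after mesh refinement
    (much more than a factor of 2), shrink on a failed nonlinear solve,
    and otherwise ramp back up aggressively, capped at dt_max."""
    if refined_this_step:
        return cut * dt
    if not converged:
        return shrink * dt
    return min(growth * dt, dt_max)
```

A few successful steps after a refinement-triggered cut, the controller has already recovered most of the original step size, which matches the "ramps up pretty aggressively afterwards" behavior described.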
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...> - 2013-04-18 15:07:18

On Apr 18, 2013, at 9:49 AM, Roy Stogner <roystgnr@...> wrote:

> I don't know how well Ben's and my experiences will carry over to your
> code - we're using SUPG rather than GLS, so we ought to be getting a
> better conditioned matrix at the cost of more finicky stability.
> Don't GLS matrix condition numbers grow like h^-4 instead of h^-2?

Well, you did say inviscid, correct? Then I would think the discretizations would be equivalent - modulo your choice of tau. Which is what, exactly?

> But with that in mind: I definitely reduce the time step
> post-refinement, and not just by a factor of 2. Otherwise it gets too
> hard for our scheme to handle sudden sharpening of shocks in
> hypersonic real gas flows. Usually our adaptive time stepper ramps
> the time step back up pretty aggressively afterwards, though, successfully.
> Also, problems with no shock or with no sensitive chemistry are more
> robust.

If you have shocks, this is very important. The problem is that for inviscid flows shocks are self-similar under refinement - if you smear the shock over three cells on the coarse grid, you expect it will be smeared over three cells on the fine grid too. When you interpolate coarse->fine, at the first time step your shock will be spread across 3->6 cells, and a large disturbance will develop there that has to be dealt with. This is why we often reduce the time step immediately after refinement.

Ben
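The point above about captured shocks being self-similar in cell count can be seen numerically. A small sketch (hypothetical tanh profile standing in for a shock smeared over about three coarse cells, linearly interpolated onto a grid with half the spacing):

```python
import numpy as np

def transition_cells(u, lo=0.05, hi=0.95):
    """Count grid points strictly inside the shock transition band."""
    return int(((u > lo) & (u < hi)).sum())

xc = np.linspace(0.0, 1.0, 41)                 # coarse grid, h = 0.025
uc = 0.5 * (1.0 + np.tanh((xc - 0.5) / 0.03))  # shock over ~3 coarse cells
xf = np.linspace(0.0, 1.0, 81)                 # fine grid, h = 0.0125
uf = np.interp(xf, xc, uc)                     # coarse -> fine interpolation
# The interpolated profile occupies roughly twice as many fine cells as
# the scheme would produce natively; resharpening this over-smeared layer
# is the post-refinement transient that motivates cutting the time step.
```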
From: Roy Stogner <roystgnr@ic...> - 2013-04-18 15:25:11

On Thu, 18 Apr 2013, Kirk, Benjamin (JSC-EG311) wrote:

> On Apr 18, 2013, at 9:49 AM, Roy Stogner <roystgnr@...> wrote:
>
>> I don't know how well Ben's and my experiences will carry over to your
>> code - we're using SUPG rather than GLS, so we ought to be getting a
>> better conditioned matrix at the cost of more finicky stability.
>> Don't GLS matrix condition numbers grow like h^-4 instead of h^-2?
>
> Well, you did say inviscid, correct? Then I would think the
> discretizations would be equivalent - modulo your choice of tau.

He did, and I didn't consider the implications of that. Pay no attention to my above paragraph; I've been spending too much time addling my brain staring at tiny viscous boundary layers.
---
Roy
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 15:37:23

On Thu, Apr 18, 2013 at 11:06 AM, Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@...> wrote:

> Well, you did say inviscid, correct? Then I would think the
> discretizations would be equivalent - modulo your choice of tau.
> Which is what, exactly?

I am using the same tau definition as in your dissertation:

  tau_mat = diag(tau_c, tau_m, tau_m, tau_m, tau_e)
  tau_c = tau_m = tau_e = tau
  tau = ( (2/dt)^2 + ( (2|u| + a) / h_u )^2 )^(-1/2)

The discontinuity capturing operator, delta, is also the same as in your work.

That brings me to another question. I am looking at using higher order elements, and using this tau and delta leads to spurious oscillations in problems with shocks. Do you have a recommendation on a better tau and delta definition for higher order elements? I was considering deriving these matrices based on the residual-free bubbles approach, but could also experiment with other definitions.

Manav
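For concreteness, the stabilization parameter written above can be evaluated directly (reading the exponent as an inverse square root, which is the usual convention for this form of tau; the helper name is hypothetical):

```python
import numpy as np

def tau_gls(dt, u_mag, a, h):
    """Stabilization parameter tau = ((2/dt)^2 + ((2|u| + a)/h)^2)^(-1/2):
    the harmonic-style combination of a transient scale 2/dt and an
    advective-acoustic scale (2|u| + a)/h, where a is the sound speed."""
    return 1.0 / np.sqrt((2.0 / dt) ** 2 + ((2.0 * u_mag + a) / h) ** 2)
```

Note the connection to the thread: with dt held fixed at 0.05 while h is halved by refinement, the advective term dominates and tau scales like h / (2|u| + a), so the stabilization automatically weakens on the refined cells.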
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...> - 2013-04-18 16:54:59

On Apr 18, 2013, at 10:37 AM, Manav Bhatia <bhatiamanav@...> wrote:

> That brings me to another question. I am looking at using higher order elements, and using this tau and delta leads to spurious oscillations in problems with shocks.

Delta for higher order elements is definitely ongoing work. I can recommend a much superior tau, though, which I've switched to almost exclusively. See AIAA-2011-3411; it is directly applicable to higher order elements. I'd be happy to talk about this some more offline if you are interested.

Ben
From: Manav Bhatia <bhatiamanav@gm...> - 2013-04-18 16:25:48

Hi Jed,

I greatly appreciate your comments. I would have missed these nuances that you pointed out, as I come from an engineering background and am trying to slowly bring myself up to speed with the mathematical aspects of FEM and CFD. So, all of this discussion is highly valuable and educational to me.

I do have a few questions about what all of this might imply from a practical standpoint, which I hope you and the others may weigh in on.

- How critical is this lack of conservation properties for general application of CFD to transonic and supersonic aerodynamic analyses?

- Is this apparent non-ideal conservation behavior a primary reason for interest in DG vs continuous FEM?

- The typical DG methods that I have seen do not add a stabilization within the domain, but handle the boundary terms to ensure continuity of flux across the discontinuous elements. However, a straightforward application of the Galerkin method is known to be unstable. So, I am assuming that the stabilization and non-oscillatory behavior comes from forgoing the requirement of continuity and the treatment of boundary terms. However, would this also not imply that if one continued to increase the element size while increasing the polynomial order, the solution within the element will at some point become unstable/oscillatory?

- My previous question is motivated by an interest in moving to higher order interpolation functions (read: isogeometric methods) for applications in design optimization of aerodynamic vehicles. There, I am interested in really large element sizes (if possible). From what I understand, the continuous FEM methods would be better suited than DG. But I don't know if the conservation properties that you pointed out might become an issue.

I would appreciate your comments.

Thanks,
Manav
From: Jed Brown <jed@59...> - 2013-04-18 17:09:34

Manav Bhatia <bhatiamanav@...> writes:
> - How critical is this lack of conservation properties for general
> application of CFD to transonic and supersonic aerodynamic analyses?

Ah, a question for the ages. A lot of CFD engineers will say that local conservation is essential and they won't even consider a method that isn't. Most commercial CFD packages use finite volume methods for this reason.

At the other end of the spectrum are the FOSLS advocates who say the engineers just don't understand what they want and insist that conservation always sacrifices something else. Other least squares advocates (e.g. Pavel Bochev) insist that conservation is essential, but that you should get it by using compatible spaces in the LS formulation (erasing some notable benefits of LS in the process).

In the end, it depends on the problem.

> - Is this apparent non-ideal conservation behavior a primary reason
> for interest in DG vs continuous FEM?

Yes, conservation and stability. Linear DG satisfies a cell-wise entropy inequality so that it can often be used without limiting. Strict monotonicity is not practically available for continuous FEM.

> - The typical DG methods that I have seen do not add a stabilization
> within the domain, but handle the boundary terms to ensure continuity of
> flux across the discontinuous elements. However, a straightforward
> application of the Galerkin method is known to be unstable. So, I am
> assuming that the stabilization and non-oscillatory behavior comes from
> forgoing the requirement of continuity and the treatment of boundary
> terms. However, would this also not imply that if one continued to
> increase the element size while increasing the polynomial order, the
> solution within the element will at some point become unstable/oscillatory?

Yes, DG stability is only cell-wise. Normal flux limiting reduces the method to second order even in smooth regions. So if you go to high order, you need more exotic limiting to prevent internal oscillations (see Zhang and Shu 2010 and 2011).
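As an illustration of what limiting does - here the classical minmod slope limiter, far simpler than the Zhang-Shu limiters cited above, but showing the same trade-off:

```python
import numpy as np

def minmod(a, b):
    """Return the smaller-magnitude argument when signs agree, else 0."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Minmod-limited slopes for interior cells from one-sided differences,
    as used in piecewise-linear (MUSCL / P1 DG) reconstructions."""
    d = np.diff(u)
    return minmod(d[:-1], d[1:])
```

At a discontinuity the one-sided differences disagree in size or sign, so the slope is flattened and no new extrema appear; in smooth monotone regions both differences agree and the slope is kept. But the same mechanism clips smooth extrema to zero slope, which is exactly why naive limiting caps the accuracy at second order even where the solution is smooth.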
From: Jed Brown <jed@59...> - 2013-04-18 17:31:50

Manav Bhatia <bhatiamanav@...> writes:
> Hi Jed,
>
> All of this is really intriguing! Would you be able to suggest a book
> (or some paper) that discusses these topics in more detail?
>
> I am eager to read more on this, and to think/talk more intelligently about
> it.

I think this survey is a decent place to start reading:

http://zjw.public.iastate.edu/papers/2007jpashighorder.pdf
From: lorenzo alessio botti <lorenzoalessiobotti@gm...> - 2013-04-18 21:54:49

Interesting...

In my experience dG works without stabilization and additional artificial viscosity in convection-dominated incompressible flows (let's consider incompressible flows to avoid issues with discontinuities and monotonicity). One interesting point is that some formulations that work very well in practice are not conservative, since the convective term is not written in divergence form. E.g. the skew-symmetric form introduced by Temam in combination with centered fluxes works perfectly. With this choice the energy is conserved (even if the velocity field is only weakly divergence free), but not the momentum.

In general I think that discretizations based on numerical fluxes are more robust, in particular when the fluxes are based on the physics. Relaxing the continuity requirements might help in case of insufficient spatial resolution, and weak imposition of boundary conditions also helps.

I admit that my experience with SUPG is limited, so I'm not able to "compare" with dG.

Lorenzo
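The remark above - that the skew-symmetric convective form conserves energy but not momentum - has a clean linear-algebra analogue: if the discrete convection operator S is skew-symmetric, then u' = S u preserves ||u||^2, and the implicit midpoint rule preserves it exactly in discrete time as well. A toy sketch (generic random skew matrix, not an actual Navier-Stokes discretization):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
S = B - B.T                        # skew-symmetric: S^T = -S, so u.(S u) = 0

def midpoint_step(u, S, dt):
    """Implicit midpoint rule: (I - dt/2 S) u_new = (I + dt/2 S) u.
    For skew-symmetric S the update map is a Cayley transform of S,
    which is orthogonal, so ||u||^2 (the energy) is conserved exactly."""
    n = len(u)
    return np.linalg.solve(np.eye(n) - 0.5 * dt * S,
                           (np.eye(n) + 0.5 * dt * S) @ u)

u = rng.standard_normal(6)
e0 = u @ u
for _ in range(100):
    u = midpoint_step(u, S, 0.1)
# the energy u.u is unchanged to machine precision; the componentwise
# sums (the "momentum" analogue) are not, mirroring Temam's form
```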
From: Jed Brown <jed@59...>  20130418 22:29:35

lorenzo alessio botti <lorenzoalessiobotti@...> writes:

> In my experience dG works without stabilization and additional artificial
> viscosity

This is generally attributed to the cell entropy inequality.

http://www.ams.org/journals/mcom/199462206/S00255718199412232327/

This is the best stability result I'm aware of for any high-order linear spatial discretization.
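For reference, my paraphrase of the result (for a 1D scalar conservation law $u_t + f(u)_x = 0$ with the square entropy $U(u) = u^2/2$): the semi-discrete DG scheme with a monotone numerical flux satisfies, on every cell $I_j$,

```latex
% Cell entropy inequality (Jiang & Shu 1994, paraphrased):
\frac{d}{dt} \int_{I_j} \frac{u_h^2}{2} \, dx
  + \hat{F}_{j+1/2} - \hat{F}_{j-1/2} \le 0
% where \hat{F} is a consistent numerical entropy flux. Summing over
% all cells, the interface terms telescope, giving L^2 stability:
\frac{d}{dt} \int_\Omega \frac{u_h^2}{2} \, dx \le 0
% for any polynomial degree, with no limiter and no tunable parameter.
```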
From: Manav Bhatia <bhatiamanav@gm...>  20130418 22:52:28

On Apr 18, 2013, at 6:29 PM, Jed Brown <jed@...> wrote:

> lorenzo alessio botti <lorenzoalessiobotti@...> writes:
>
>> In my experience dG works without stabilization and additional artificial
>> viscosity
>
> This is generally attributed to the cell entropy inequality.
>
> http://www.ams.org/journals/mcom/199462206/S00255718199412232327/
>
> This is the best stability result I'm aware of for any high-order linear
> spatial discretization.

Well, the question in my mind is: what happens with DG when the element is increased to the size of the complete flow domain (so we have one element) and the interpolation order is increased gradually?

Wouldn't DG in this case reduce to the basic Galerkin method, which is known to be unstable? Or is DG still guaranteed to give stable solutions?

Manav
From: Jed Brown <jed@59...>  20130418 22:58:37

Manav Bhatia <bhatiamanav@...> writes:

> Well, the question in my mind is: what happens with DG when the
> element is increased to the size of the complete flow domain (so we
> have one element) and the interpolation order is increased gradually.
>
> Wouldn't DG in this case reduce to the basic Galerkin method, which is
> known to be unstable? Or is DG still guaranteed to give stable
> solutions?

You would be giving up local conservation and a local entropy inequality. You're welcome to think of continuous Galerkin as a single huge DG element that happens to use modest-order compact basis functions. DG only gives you precise control of whole-cell behavior, and the main problem with high order is dealing with any instabilities that arise inside the cell.
From: lorenzo alessio botti <lorenzoalessiobotti@gm...>  20130419 08:16:53

> Well, the question in my mind is: what happens with DG when the element
> is increased to the size of the complete flow domain (so we have one
> element) and the interpolation order is increased gradually.
>
> Wouldn't DG in this case reduce to the basic Galerkin method, which is
> known to be unstable? Or is DG still guaranteed to give stable solutions?

A method with global test and trial functions is kind of a spectral method. Anyway, depending on the formulation, there are still significant differences. The way you impose boundary conditions might influence the accuracy and the stability of the method. If you weakly impose boundary conditions based on a dG formulation, you will notice that Dirichlet boundary conditions might not be exactly satisfied. This reflects insufficient spatial resolution and the method's ability to distribute discretization errors on the boundary of the domain (here, one element). I mean that the error on the boundary is consistent with the discretization error inside the domain and, in my experience, this is not bad.

Lorenzo
From: Kirk, Benjamin (JSCEG311) <benjamin.kirk1@na...>  20130418 23:03:07

On Apr 18, 2013, at 5:29 PM, Jed Brown <jed@...> wrote:

> lorenzo alessio botti <lorenzoalessiobotti@...> writes:
>
>> In my experience dG works without stabilization and additional artificial
>> viscosity

Well, you probably should clarify that: you are certainly "upwinding" at the cell interfaces to get an upwind bias in the scheme, right? So that could alternatively be looked at as a central + diffusive discretization… So I would contend the artificial viscosity is (i) less direct and (ii) physically based, but could be thought of as viscosity nonetheless.

Certainly if you computed the interface cell flux as the average of the neighbors, things would go to hell in a hurry?
From: Kirk, Benjamin (JSCEG311) <benjamin.kirk1@na...>  20130418 23:41:23

So I thought I should share my general thoughts based on experience with both continuous FEM & FV codes for these types of problems. And since this is archived it'll probably come back to bite me, so keep in mind this is *my opinion based on my personal research and application.*

1.) For high-order discretizations of elliptic equations, continuous FEM is the go-to method.

2.) For high-order discretizations of hyperbolic equations, DG methods are about the best you can do, particularly on unstructured grids. Combined with an explicit, multigrid-accelerated scheme, it is a powerful technique.

3.) The problems come in when you have both:

Continuous FEM: diffusive terms "easy," convective terms "hard & researchy"
Discontinuous FEM: convective terms easy, diffusive terms "hard & researchy"

4.) Shock capturing is a challenge for every scheme. In the F.V. world we rely on good alignment of the mesh to shock waves or else the results really suffer. *Everything* does something O(h) at the shock, so you either live with it, or control h directly via refinement, or indirectly. It is this latter category that I would contend has been the recent trend in the DG community with 'subcell shock capturing'. In my mind, taking a p=5 element and replacing it with 256 p=1 elements for shock-capturing purposes is just h-refinement by another name.

5.) For 2nd-order accurate discretizations it is hard to beat classic F.V. High-order F.V., with its broad stencils, is a nightmare.

6.) For things like the compressible NS equations, the underlying characteristics-based behavior of the system is important, and the most effective numerical methods deal with the characteristic form of the equations at some level. Historically SUPG/GLS has only been concerned with the "flow direction" and, again in my opinion, been at a disadvantage because it did not treat this crucial aspect. That has been the motivation for the recent tau stuff I alluded to earlier.
For a derivation of that tau from a characteristics argument see https://github.com/libMesh/libmesh/wiki/documents/presentations/2013/fins_FEF2013.pdf

The attached paper shows some good results: basically, with work you can make either scheme give high-order results, but there is no panacea.
From: lorenzo alessio botti <lorenzoalessiobotti@gm...>  20130419 08:45:27

> Well, you probably should clarify that: you are certainly "upwinding" at
> the cell interfaces to get an upwind bias in the scheme, right? So that
> could alternatively be looked at as a central + diffusive discretization…
> So I would contend the artificial viscosity is (i) less direct and (ii)
> physically based, but could be thought of as viscosity nonetheless.
>
> Certainly if you computed the interface cell flux as the average of the
> neighbors, things would go to hell in a hurry?

The funny thing is that I do not upwind. The formulation I was talking about is based on centered fluxes, that is, they are computed as the average of the neighbors. Not all centered fluxes work. The formulation based on the skew-symmetric form of the convective term and centered fluxes, introduced by Di Pietro and Ern http://www.ams.org/journals/mcom/201079271/S0025571810023331/ , actually works in convection-dominated flows. With convection-dominated I mean the following http://youtu.be/G45eHThnUQ, not Re=10^6 RANS. By the way, that was done with Gnuid, which is based on libMesh ;).

I also use Riemann-solver-based fluxes for incompressible flows. Artificial compressibility is introduced only at the flux level, and there is no \frac{\partial pressure}{\partial time} term. What I can say is that the difference is appreciable at low order, when the jumps at interfaces are significant. Riemann-solver-based fluxes help stability, but when the order is increased the differences tend to disappear, as expected. For compressible flows and RANS we use exact Riemann solvers as a first guess. Inexact Riemann solvers introducing additional dissipation might help in some (hard to desperate) cases.
Lorenzo
From: Jed Brown <jed@59...>  20130418 23:29:50

"Kirk, Benjamin (JSCEG311)" <benjamin.kirk1@...> writes: > well you probably should clarify that  you are certainly "unwinding" > at the cell interfaces to get an upwind bias in the scheme, right? So > that could be alternatively looked at as a central + diffusive > discretrization… So I would contend the artificial viscosity is (i) > less direct and (ii) physically based, but could be thought of as > viscosity nonetheless. 1. "upwinding" here means the solution of a Riemann problem. If you use a Godunov flux, then the "upwinding" is introducing no numerical viscosity. I think of Riemann problems as being very fundamental when solving problems that do not have continuous solutions. 2. Numerical dissipation introduced by an approximate Riemann solver is decoupled from the convergence rate of the method. The Riemann solve has no tunable parameters, does not depend on the grid, and can attain any order of accuracy purely by raising the order of reconstruction (in FV) or the basis order (in DG). Compare this to SUPG, for example, which has an O(h) term. Viscous fluxes are messier with DG: they have tunable parameters and the tradeoffs are never satisfying. > Certainly if you computed the interface cell flux as the average of > the neighbors things would go to hell in a hurry? Yes, that's unstable. 
From: Kirk, Benjamin (JSCEG311) <benjamin.kirk1@na...>  20130418 23:53:21

On Apr 18, 2013, at 6:29 PM, Jed Brown <jed@...> wrote:

> 2. Numerical dissipation introduced by an approximate Riemann solver is
> decoupled from the convergence rate of the method. The Riemann solve
> has no tunable parameters, does not depend on the grid, and can attain
> any order of accuracy purely by raising the order of reconstruction (in
> FV) or the basis order (in DG). Compare this to SUPG, for example,
> which has an O(h) term.

No tunable parameters? What approximate Riemann solver are you referring to? The plethora of choices seems like a "tunable parameter" to me, let alone eigenvalue limiting, entropy fixes, etc. Granted, as the difference between left and right states decreases (as is the case for high-order DG with smooth solutions) the importance of these choices is lessened, but still…
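To illustrate the sort of knob I mean (my own sketch of Harten's entropy fix as commonly written, with hypothetical variable names):

```python
def harten_entropy_fix(lam, delta):
    """Harten-style entropy fix for a Roe-type solver: replace |lambda|
    by a smooth parabola near lambda = 0, so nearly vanishing wave
    speeds (e.g. at sonic points) still carry some dissipation and
    expansion shocks are avoided. The width delta is a user-chosen,
    i.e. tunable, parameter."""
    a = abs(lam)
    if a >= delta:
        return a
    # quadratic blend on |lambda| < delta, continuous at |lambda| = delta
    return (lam * lam + delta * delta) / (2.0 * delta)
```

Every choice of delta (and of approximate solver: Roe, HLL, HLLC, Rusanov, ...) changes the dissipation, which is the "plethora of choices" point.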