From: Vijay S. Mahadevan <vijay.m@gm...>  2010-01-26 18:20:56

Hi all,

I have a highly nonlinear diffusion problem whose diffusion coefficient can vary as T^alpha, where alpha can range from 1 to 4. Such problems occur often in radiative transfer (alpha = 3.5). I modified example 10 and took parts of the Navier-Stokes example to create a new one to solve this problem. The boundary conditions are of the mixed (Robin) type and hence act as the source for the energy wave that propagates into the domain as time proceeds.

The issue is that I get negative temperatures at the first element during assembly, regardless of the refinement or the resolution of the domain. I do not understand this behavior and was wondering if there is a requirement with continuous FEM (Lagrange basis) that is not satisfied in these problems with nonsmooth solutions. I am certain that there should be a fix to make sure that negative solutions are avoided and that energy is conserved, but all my literature searching has not led to any decent solutions yet.

I've attached the example and the makefile for the libmesh program that demonstrates this issue. Any suggestions for obtaining valid solutions will be much appreciated. The command line for my run looks like

./nonlinearopt -init_timestep 0 -n_timesteps 2000 -dt 0.0005 -n 100 -n_refinements 0

Also, if you find any bugs in the implementation, please do let me know. And since it is a time-dependent, nonlinear problem, once resolved it could make a good example program. Just a thought!

Thanks,
Vijay
From: Roy Stogner <roystgnr@ic...>  2010-01-26 18:33:55

On Tue, 26 Jan 2010, Vijay S. Mahadevan wrote:

> The issue is that I get negative temperatures at the first element
> during assembly, regardless of the refinement or the resolution of the
> domain. I do not understand this behavior and was wondering if there
> was a requirement with continuous FEM (Lagrange basis) that was not
> satisfied in these problems with nonsmooth solutions. I am certain
> that there should be a fix to make sure that negative solutions are
> avoided and that energy was conserved. But all my literature search
> has not led to any decent solutions yet.

Not sure about this particular problem (the example wasn't attached, and I wouldn't have time to read it if it were), but I've managed to avoid similar "out-of-bounds solution" failures just by being careful about the time discretization. For example, the Galerkin H^2 formulation of Cahn-Hilliard can blow up with Crank-Nicolson integration (where each timestep can effectively extrapolate the solution out of the domain of the equations) but works with trapezoidal integration (where the residual terms at the end of the timestep smoothly prevent that class of error).

Not sure what the best way to go is when you don't have that same level of feedback in the residual, though. Oscillations in convection-dominated transport can lead to negative concentrations, which can then crash later physics despite being "reasonable" solutions to the transport equations.

- Roy
From: Vijay S. Mahadevan <vijay.m@gm...>  2010-01-26 18:43:04

Roy,

I got an email saying that the attachments are awaiting moderator approval. That might explain why you did not get the attachment.

I'm solving a pure diffusion problem and there is no convection here. But I do understand that time integration makes a big difference, and even making delt=1e-10 does not seem to help. The negativity occurs on the first step, in the first call to the nonlinear residual. When you say trapezoidal rule, are you talking about implicit midpoint here? Because CN is based on the trapezoidal rule and is not L-stable (spurious oscillations are not damped).

One other interesting thing that I just noticed is that if I make my initial solution bigger (originally it was uniformly 1e-5, and now I made it 1e0), the nonlinear iteration and timestepping with CN proceed correctly. So now I'm wondering whether nondimensionalization in some form could be the answer, but with a diffusion coefficient of T^4, I'm not sure this will help much either.

Thanks for the parallel comparisons though.

Vijay

On Tue, Jan 26, 2010 at 12:33 PM, Roy Stogner <roystgnr@...> wrote:
>
> On Tue, 26 Jan 2010, Vijay S. Mahadevan wrote:
>
>> The issue is that I get negative temperatures at the first element
>> during assembly, regardless of the refinement or the resolution of the
>> domain. I do not understand this behavior and was wondering if there
>> was a requirement with continuous FEM (Lagrange basis) that was not
>> satisfied in these problems with nonsmooth solutions. I am certain
>> that there should be a fix to make sure that negative solutions are
>> avoided and that energy was conserved. But all my literature search
>> has not led to any decent solutions yet.
>
> Not sure about this particular problem (the example wasn't attached, and I
> wouldn't have time to read it if it were), but I've managed to avoid
> similar "out-of-bounds solution" failures just by being careful about
> the time discretization. For example, the Galerkin H^2 formulation of
> Cahn-Hilliard can blow up with Crank-Nicolson integration (where each
> timestep can effectively extrapolate the solution out of the domain of
> the equations) but works with trapezoidal integration (where the
> residual terms at the end of the timestep smoothly prevent that class of
> error).
>
> Not sure what the best way to go is when you don't have that same
> level of feedback in the residual, though. Oscillations in
> convection-dominated transport can lead to negative concentrations,
> which can then crash later physics despite being "reasonable"
> solutions to the transport equations.
>
> - Roy
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...>  2010-01-26 18:52:33

> I'm solving a pure diffusion problem and there is no convection here.
> But I do understand that time integration makes a big difference, and
> even making delt=1e-10 does not seem to help. The negativity occurs on
> the first step, in the first call to the nonlinear residual. When you say
> trapezoidal rule, are you talking about implicit midpoint here? Because
> CN is based on the trapezoidal rule and is not L-stable (spurious
> oscillations are not damped).

You can test implicit Euler easily enough by changing the .5 at line 703 to 0 and the .5 at line 714 to 1.

The only other thing that could be an issue going forward is that the BCs appear to be evaluated at the current time, which I think is inconsistent with CN. I believe the correct approach would be to evaluate them at both the current and the previous time and apply the average, as CN is built around a discretization at t_(n+1/2)... (This is not an issue for your current BCs, which have no time dependence.)
From: Vijay S. Mahadevan <vijay.m@gm...>  2010-01-26 19:05:55

Ben,

I'll try Backward Euler and see if that eliminates the problems. I'll let you know how this goes. Also, I do a basic form of inexact Newton iteration (look at the linear tolerance being set at the end of the nonlinear loop). Maybe I'm getting the wrong residual here and setting the tolerance incorrectly. I'll check this again.

Vijay

On Tue, Jan 26, 2010 at 12:52 PM, Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@...> wrote:
>> I'm solving a pure diffusion problem and there is no convection here.
>> But I do understand that time integration makes a big difference, and
>> even making delt=1e-10 does not seem to help. The negativity occurs on
>> the first step, in the first call to the nonlinear residual. When you say
>> trapezoidal rule, are you talking about implicit midpoint here? Because
>> CN is based on the trapezoidal rule and is not L-stable (spurious
>> oscillations are not damped).
>
> You can test implicit Euler easily enough by changing the .5 at line 703 to
> 0 and the .5 at line 714 to 1.
>
> The only other thing that could be an issue going forward is that the BCs
> appear to be evaluated at the current time, which I think is inconsistent
> with CN. I believe the correct approach would be to evaluate them at both
> the current and the previous time and apply the average, as CN is built
> around a discretization at t_(n+1/2)... (This is not an issue for your
> current BCs, which have no time dependence.)
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...>  2010-01-26 19:01:29

> One other interesting thing that I just noticed is that if I make my
> initial solution bigger (originally it was uniformly 1e-5, and now I
> made it 1e0), the nonlinear iteration and timestepping with CN
> proceed correctly. So now I'm wondering whether nondimensionalization
> in some form could be the answer, but with a diffusion coefficient of
> T^4, I'm not sure this will help much either.

You might also simply try increasing the tolerance on your linear solver; it looks like it is TOLERANCE, which by default is 1.e-6.
From: Jed Brown <jed@59...>  2010-01-26 19:06:28

On Tue, 26 Jan 2010 12:42:58 -0600, "Vijay S. Mahadevan" <vijay.m@...> wrote:
> I'm solving a pure diffusion problem and there is no convection here.
> But I do understand that time integration makes a big difference, and
> even making delt=1e-10 does not seem to help. The negativity occurs on
> the first step, in the first call to the nonlinear residual. When you say
> trapezoidal rule, are you talking about implicit midpoint here? Because
> CN is based on the trapezoidal rule and is not L-stable (spurious
> oscillations are not damped).

Implicit midpoint is also not L-stable; it actually has exactly the same stability function as trapezoid:

  (1 + z/2) / (1 - z/2)

Implicit Euler, BDF2, and various implicit Runge-Kutta schemes are L-stable. Also, if you're feeling adventurous, I'd love to hear how your system works with TSGL (in PETSc-dev). These are A- and L-stable methods of order and stage order 1 through 5, with adaptive controllers (though the adaptive controllers may not be robust; they haven't had much tuning).

For oscillations, as in hyperbolic systems (and I think not the issue here), you may need a strong stability preserving integrator. There do not exist SSP integrators without a CFL constraint and order greater than 1 (implicit Euler is the only SSP method without a CFL constraint).

Jed
From: Vijay S. Mahadevan <vijay.m@gm...>  2010-01-26 19:23:28

Jed,

I know that implicit midpoint (IM2) is not L-stable either, but it is my understanding that it has a lower truncation error because the coefficient in front of the error term is smaller compared to that of CN. So I usually prefer IM2 to CN.

I'm curious about the TSGL methods and will try out those integrators soon. It might be a stretch to do that in the current problem implementation, but I have other coupled problems that could use adaptive, higher-order temporal integration. If there are references explaining the properties of these methods, their Butcher tableaux and such, please do let me know. Also, are these implemented only in petsc-dev currently? I do not see them in the latest release of petsc.

Vijay

On Tue, Jan 26, 2010 at 1:08 PM, Jed Brown <jed@...> wrote:
> Implicit midpoint is also not L-stable; it actually has exactly the same
> stability function as trapezoid:
>
>   (1 + z/2) / (1 - z/2)
>
> Implicit Euler, BDF2, and various implicit Runge-Kutta schemes are
> L-stable. Also, if you're feeling adventurous, I'd love to hear how
> your system works with TSGL (in PETSc-dev). These are A- and L-stable
> methods of order and stage order 1 through 5, with adaptive controllers
> (though the adaptive controllers may not be robust; they haven't had much
> tuning).
>
> For oscillations, as in hyperbolic systems (and I think not the issue
> here), you may need a strong stability preserving integrator. There do
> not exist SSP integrators without a CFL constraint and order greater
> than 1 (implicit Euler is the only SSP method without a CFL constraint).
>
> Jed
From: Jed Brown <jed@59...>  2010-01-26 19:49:31

On Tue, 26 Jan 2010 13:23:19 -0600, "Vijay S. Mahadevan" <vijay.m@...> wrote:
> I know that implicit midpoint (IM2) is not L-stable either, but it is
> my understanding that it has a lower truncation error because the
> coefficient in front of the error term is smaller compared to that of CN.
> So I usually prefer IM2 to CN.

Hmm, it's easier to handle boundary conditions in IM2, but I thought the truncation error was the same.

> I'm curious about the TSGL methods and will try out those integrators
> soon. It might be a stretch to do that in the current problem
> implementation, but I have other coupled problems that could use
> adaptive, higher-order temporal integration. If there are references
> explaining the properties of these methods, their Butcher tableaux and
> such, please do let me know. Also, are these implemented only in
> petsc-dev currently? I do not see them in the latest release of petsc.

These are only in petsc-dev; I added them in the fall.

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/TS/TSGL.html

The primary reference is

  John Butcher, Z. Jackiewicz, and W. Wright, "On error propagation in
  general linear methods for ordinary differential equations", Journal
  of Complexity, Vol 23 (4-6), 2007.

Tables for many of the methods are here:

http://www.math.auckland.ac.nz/~hpod/atlas/

IIRC, somehow those tables have errors in the error estimators; in TSGL the estimators are computed automatically according to the algorithms in the 2007 paper. The implementation only uses the implicit form, and is also suitable for DAEs (probably only index 1). This implicit form is also better for finite element methods because assembling the mass matrix separately is a bad idea; see TSSetIFunction and TSSetIJacobian.

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/TS/TSSetIFunction.html
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/TS/TSSetIJacobian.html

Let me know if you have questions. Suggestions about controllers would also be appreciated; my controllers just try to maximise the effective step size over the available schemes subject to a local truncation error tolerance, with a very naive smoothing procedure to (hopefully) keep them from bouncing between schemes too often. But there is a plugin architecture, so hopefully we'll accumulate a library of controllers. Starting procedures are also a bit naive: just start with a small implicit Euler step and move to higher-order schemes as smoothness of the solution indicates. The usual alternative is to start with a singly implicit Runge-Kutta scheme, but those suffer from reduced stage order and I haven't gotten around to writing one.

Jed