## [Libmesh-devel] Simple TimeSolver questions

From: Roy Stogner - 2007-08-13 22:37:15

```
To get some more reliability out of my Cahn-Hilliard simulations on
large timesteps, I'm implementing a theta method which, instead of the
more traditional:

    u_new - u_old = delta_t * F(theta*u_new + (1-theta)*u_old)

will solve:

    u_new - u_old = delta_t * (theta*F(u_new) + (1-theta)*u_old)

This should give a second order method that can be more reliable (in
my problem, in particular, it will avoid the extreme case where for
traditional Crank-Nicolson F(u_new) can end up infinite and spoil the
next timestep) at the cost of requiring twice as many evaluations of F.

But what is it called? I don't want to commit the "EuleresqueSolver"
class or "EulerLikeSolver" class if there's a better name out there,
and this is such a simple scheme that I can't believe it wasn't
invented and named a hundred years ago, but I have no idea what that
name would be.

AdaptiveTimeSolver currently calculates u_dt and u_halfdt by taking
one whole timestep and two half-size timesteps, and then uses
(u_dt - u_halfdt) as an error estimate while using u_halfdt as the
final solution for the timestep. Would it make more sense to do a
little Richardson extrapolation and use (2*u_halfdt - u_dt) for the
timestep's final solution? I know that can give you second order
accuracy from an otherwise first order accurate scheme, but do you
always get an order delta_t improvement this way, and more
importantly, how much is it likely to hurt stability? Perhaps I'll
just make it an option, off by default.

--- Roy
```
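The two theta schemes in the message above are easy to compare on a scalar model problem. The sketch below is plain Python, not libMesh code; the choice of F(u) = -u^3 (exact solution u(t) = 1/sqrt(1 + 2t)) and the Newton solve are illustrative assumptions. It implements both the argument-averaged rule and the residual-averaged variant Roy describes, with theta = 1/2:

```python
# Compare, on u' = F(u) = -u^3, the argument-averaged theta method
#   u_new - u_old = dt * F(theta*u_new + (1-theta)*u_old)
# with the residual-averaged variant
#   u_new - u_old = dt * (theta*F(u_new) + (1-theta)*F(u_old)).
# (Toy scalar problem for illustration only.)
import math

def F(u):  return -u**3
def dF(u): return -3.0 * u**2

def newton(g, dg, u0, tol=1e-12, maxit=50):
    """Solve g(u) = 0 by Newton's method, starting from u0."""
    u = u0
    for _ in range(maxit):
        du = g(u) / dg(u)
        u -= du
        if abs(du) < tol:
            break
    return u

def step_arg_avg(u_old, dt, theta=0.5):
    # F evaluated once, at the theta-weighted average state
    g  = lambda u: u - u_old - dt * F(theta*u + (1-theta)*u_old)
    dg = lambda u: 1.0 - dt * theta * dF(theta*u + (1-theta)*u_old)
    return newton(g, dg, u_old)

def step_res_avg(u_old, dt, theta=0.5):
    # F evaluated twice: once at u_old, once at u_new
    g  = lambda u: u - u_old - dt * (theta*F(u) + (1-theta)*F(u_old))
    dg = lambda u: 1.0 - dt * theta * dF(u)
    return newton(g, dg, u_old)

def integrate(step, u0=1.0, T=1.0, n=100):
    u, dt = u0, T / n
    for _ in range(n):
        u = step(u, dt)
    return u

exact = 1.0 / math.sqrt(1.0 + 2.0)   # exact u(1) for u0 = 1
for step in (step_arg_avg, step_res_avg):
    print(step.__name__, abs(integrate(step) - exact))
```

With theta = 1/2 both variants are second order on this smooth problem; they only differ when F is nonlinear, which is exactly the situation the thread is about.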

From: John Peterson - 2007-08-13 23:09:28

```
Roy Stogner writes:
>
> To get some more reliability out of my Cahn-Hilliard simulations on
> large timesteps, I'm implementing a theta method which, instead of
> the more traditional:
>
>     u_new - u_old = delta_t * F(theta*u_new + (1-theta)*u_old)
>
> will solve:
>
>     u_new - u_old = delta_t * (theta*F(u_new) + (1-theta)*u_old)

You meant

    u_new - u_old = delta_t * (theta*F(u_new) + (1-theta)*F(u_old))

but yes, that is the method I initially learned as "Crank-Nicolson".
The problem is, you always learn this in the context of e.g. linear
heat conduction, where the two schemes you mention are equivalent.
I'd call this new method LinearizedEuler, or maybe Euler2, to signify
that it has 2 residual evaluations.

-J
```
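The step-doubling and Richardson extrapolation in Roy's second question can be sketched the same way. The following is a toy version, not the actual AdaptiveTimeSolver code: backward Euler on u' = -u is an assumed stand-in for a first order scheme, with the (u_dt - u_halfdt) error estimate and the (2*u_halfdt - u_dt) extrapolant from the thread:

```python
# Step doubling: take one full step and two half steps, use their
# difference as a local error estimate, and optionally keep the
# Richardson-extrapolated combination 2*u_halfdt - u_dt, which
# cancels the leading O(dt) error term of a first order scheme.
import math

def be_step(u, dt):
    # one backward Euler step for u' = -u: u_new = u + dt*(-u_new)
    return u / (1.0 + dt)

def advance(u0=1.0, T=1.0, dt=0.1, extrapolate=False):
    u, t = u0, 0.0
    while t < T - 1e-12:
        u_dt     = be_step(u, dt)                   # one whole timestep
        u_halfdt = be_step(be_step(u, dt/2), dt/2)  # two half-size steps
        err_est  = abs(u_dt - u_halfdt)             # would drive dt control
        u = 2.0*u_halfdt - u_dt if extrapolate else u_halfdt
        t += dt
    return u

exact = math.exp(-1.0)
err_plain = abs(advance(extrapolate=False) - exact)  # O(dt) global error
err_rich  = abs(advance(extrapolate=True)  - exact)  # O(dt^2) global error
print(err_plain, err_rich)
```

On this linear test problem the extrapolated answer is roughly an order of magnitude more accurate at dt = 0.1, which matches Roy's expectation of an extra order of accuracy; the stability question he raises is the part a toy like this cannot answer, since extrapolation can step outside the stability region of the underlying scheme.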