From: Manav Bhatia <bhatiamanav@gm...>  2013-02-21 19:56:08

Hi,

I would appreciate some help in understanding whether the existing System classes can handle PDEs (linear/nonlinear) with a second-order time derivative. I am going through the FEMSystem documentation, and it seems like these are suitable for a system of the form

du/dt = f(u).

Of course, I can change my second-order system from

d^2u/dt^2 = f(u)

to

d/dt {u; \tilde{u}} = [0 I; 0 0] {u; \tilde{u}} + {0; f(u)}

but that doubles the dimension of my sparse system, and I am not sure whether FEMSystem takes the modified sparsity pattern into account.

Additionally, within the existing framework, how does one account for *nonlinear* systems that have a mass matrix,

[M] du/dt = f(u)?

Extending this question forward, how about PDEs with a second-order time derivative?

I would appreciate your comments.

Thanks,
Manav
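[Editor's note: the first-order reduction Manav describes can be sketched in a few lines. This is a minimal standalone illustration in plain NumPy, not libMesh code; the harmonic oscillator f(u) = -u and the fixed-point solve are illustrative choices, and a real code would use Newton's method on the sparse doubled system.]

```python
import numpy as np

# Reduce u'' = f(u) to the first-order system v' = g(v) with v = [u, u'],
# then take backward Euler steps.  f(u) = -u (harmonic oscillator) is
# chosen purely so the exact solution cos(t) is available for comparison.

def f(u):
    return -u

def g(v):
    u, udot = v
    return np.array([udot, f(u)])

def backward_euler_step(v, dt, iters=50):
    # Solve v_new = v + dt * g(v_new) by fixed-point iteration;
    # the map is a contraction for small dt.
    v_new = v.copy()
    for _ in range(iters):
        v_new = v + dt * g(v_new)
    return v_new

v = np.array([1.0, 0.0])   # u(0) = 1, u'(0) = 0
dt, nsteps = 0.01, 100
for _ in range(nsteps):
    v = backward_euler_step(v, dt)
# After t = 1, v[0] tracks cos(1) with a slight backward-Euler damping.
```

The doubling of unknowns (and the extra coupling blocks in the sparsity pattern) is exactly what the `add_variable()` discussion below is about.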
From: Roy Stogner <roystgnr@ic...>  2013-02-21 20:26:01

On Thu, 21 Feb 2013, Manav Bhatia wrote:

> I would appreciate some help in understanding whether the existing
> System classes can handle PDEs (linear/nonlinear) with a second-order
> time derivative.

IIRC we have one existing system (NewmarkSystem?) designed for second-order time derivatives. I have no idea what its limitations are or whether it would be suitable for you.

> I am going through the FEMSystem documentation, and it seems like
> these are suitable for a system of the form
>
> du/dt = f(u).

Currently, yes. There's nothing about the APIs that would prevent you from adding an UnsteadySolver subclass that integrates u'' = f(u), and we'd love to get a patch like that, but we don't have any such subclass in the library yet.

Worse: if you wanted to handle even more general second-order time derivatives, with mixtures of u'' and u' in the same equation, I think we'd need some modification to the APIs before we could handle that naturally.

> Of course, I can change my second-order system from
>
> d^2u/dt^2 = f(u)
>
> to
>
> d/dt {u; \tilde{u}} = [0 I; 0 0] {u; \tilde{u}} + {0; f(u)}
>
> but that doubles the dimension of my sparse system,

This is probably the easiest way to get a second-order system up and running with FEMSystem.

> and I am not sure whether FEMSystem takes the modified sparsity
> pattern into account.

You do the add_variable() for your auxiliary u' variable in your system initialization, and the sparsity pattern will be modified to assume that every residual term corresponding to a u degree of freedom on an element is coupled to every u' degree of freedom on that element. This will work; you'll just have unnecessary zeros in your sparse matrix, or you'll have to use our DofMap::_dof_coupling API to avoid them.
> Additionally, within the existing framework, how does one account for
> *nonlinear* systems that have a mass matrix,
>
> [M] du/dt = f(u)?

Right now, we allow you to request an "elem_fixed_solution" that can be used either for low-order time integration of nonlinear mass matrices or for nonlinear stabilization in test functions. We're actually planning a (backwards-incompatible, unfortunately) change in this API, to make it less unnecessarily confusing and to allow for higher-order time integration of nonlinear mass matrices and/or for nonlinear mass matrices with nonlinear stabilization in test functions. If you'd like, I could probably push that change up higher on my todo list.

> Extending this question forward, how about PDEs with a second-order
> time derivative?

Now that's a good question. I've only done second-order time integration by breaking it up into first-order systems. I'd like to support direct second-order integrators, but I'd need help designing the API so as to make it as broadly applicable as possible.
---
Roy
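[Editor's note: the "low-order time integration of nonlinear mass matrices" that elem_fixed_solution enables can be illustrated with a scalar model problem. The sketch below is a standalone analogy, not the libMesh API: the nonlinear mass coefficient m(u) is evaluated at a frozen value from the start of the step, just as a mass_residual implementation might evaluate nonlinear terms at elem_fixed_solution.]

```python
# Integrate m(u) * du/dt = f(u) with backward Euler, freezing the
# nonlinear mass coefficient m at the beginning-of-step solution.
# m and f below are arbitrary illustrative choices.

def m(u):                      # nonlinear "mass" coefficient
    return 1.0 + 0.1 * u * u

def f(u):
    return -u

def step(u_old, dt, iters=50):
    u_fixed = u_old            # frozen ("fixed") value for the mass term
    u_new = u_old
    for _ in range(iters):     # fixed-point solve of the implicit step
        u_new = u_old + dt * f(u_new) / m(u_fixed)
    return u_new

u, dt = 1.0, 0.01
for _ in range(100):           # march to t = 1
    u = step(u, dt)
```

Freezing m(u) keeps the implicit solve simple but limits the scheme to first-order accuracy in the mass term, which is why Roy mentions wanting higher-order treatment in the planned API change.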
From: Manav Bhatia <bhatiamanav@gm...>  2013-02-21 21:36:48

Hi Roy,

In the FEM code that I have written, I have implemented the Newmark method for first- and second-order nonlinear systems, with and without mass and damping matrices. While converting second-order to first-order systems I create a new sparsity-pattern data structure where the submatrices inherit their patterns from the original system, or get the additional nonzeros that show up from the identity matrices. I can look at porting this over to the libMesh API, and your idea of dof_coupling may serve the necessary purpose here. I can look into this.

I am not sure if I understand the concept of "elem_fixed_solution". For the Euler equations

L(u) = 0,  L = d/dt + A_i d/dx_i,

that I have implemented, I get the following equations:

\int_\Omega w^T du/dt d\Omega - \int_\Omega (dw^T/dx_i) A_i u d\Omega + \int_\Lambda w^T flux(u) d\Lambda + \int_\Omega L(w)^T \tau (d/dt + A_i d/dx_i) u d\Omega = 0

This gives the mass matrix term

[M] du/dt = \int_\Omega w^T du/dt d\Omega + \int_\Omega L(w)^T \tau du/dt d\Omega

and the rhs function

f(u) = - ( - \int_\Omega (dw^T/dx_i) A_i u d\Omega + \int_\Lambda w^T flux(u) d\Lambda + \int_\Omega L(w)^T \tau A_i (du/dx_i) d\Omega )

How does the concept of elem_fixed_solution affect this situation with a mass matrix?

Now, for steady solutions, I have successfully time marched by assuming the mass matrix to be identity, which brings me back to du/dt = f(u). But I am curious about time-accurate simulations, where this assumption may not be accurate.

I would appreciate your comments.

Thanks,
Manav

On Thu, Feb 21, 2013 at 3:25 PM, Roy Stogner <roystgnr@...> wrote:

> On Thu, 21 Feb 2013, Manav Bhatia wrote:
>
>> I would appreciate some help in understanding whether the existing
>> System classes can handle PDEs (linear/nonlinear) with a second-order
>> time derivative.
>
> IIRC we have one existing system (NewmarkSystem?) designed for
> second-order time derivatives. I have no idea what its limitations
> are or whether it would be suitable for you.
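[Editor's note: the Newmark method Manav mentions having implemented can be sketched for a linear system M u'' + K u = 0. This is a plain-NumPy illustration using the standard textbook average-acceleration parameters (beta = 1/4, gamma = 1/2); it is not Manav's code and not libMesh's NewmarkSystem.]

```python
import numpy as np

def newmark_step(M, K, u, v, a, dt, beta=0.25, gamma=0.5):
    # Predictors from the Newmark expansions
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # Enforce M a_new + K u_new = 0:  (M + beta dt^2 K) a_new = -K u_pred
    lhs = M + beta * dt**2 * K
    a_new = np.linalg.solve(lhs, -K @ u_pred)
    # Correctors
    u_new = u_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return u_new, v_new, a_new

M = np.eye(1)
K = np.eye(1)                      # single oscillator u'' + u = 0
u = np.array([1.0])
v = np.array([0.0])
a = np.linalg.solve(M, -K @ u)     # consistent initial acceleration
dt = 0.01
for _ in range(100):               # march to t = 1
    u, v, a = newmark_step(M, K, u, v, a, dt)
# Average acceleration is unconditionally stable for linear problems,
# so u should track cos(t) closely.
```

Unlike the first-order reduction, this integrates the second-order system directly with no auxiliary velocity variable in the sparse system, which is the kind of direct second-order integrator Roy says he'd like FEMSystem to support.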
From: Roy Stogner <roystgnr@ic...>  2013-02-28 23:36:06

Sorry about the delays, but I'm finally finding some time to work through old emails now.

On Thu, 21 Feb 2013, Manav Bhatia wrote:

> I am not sure if I understand the concept of "elem_fixed_solution".

That's my fault. The FEMSystem time integration APIs were "prematurely optimized", readability really suffered, and we're going to have to change the APIs soon anyway to enable stabilized nonlinear mass matrices.

At the moment, in mass_residual():

elem_fixed_solution is equal to some fixed (integrator-dependent) value of u during your timestep;

elem_solution is equal to du/dt at the time of evaluation.

> [M] du/dt = \int_\Omega w^T du/dt d\Omega + \int_\Omega L(w)^T \tau du/dt d\Omega
>
> How does the concept of elem_fixed_solution affect this situation
> with a mass matrix?

Assuming your L(w) is really L(u)(w) (e.g. due to advection velocity depending on u), then you'll want to use elem_fixed_solution as the input for that, or you'll want to yell at me to fix the API right now so you can start writing to something saner.
---
Roy
From: Manav Bhatia <bhatiamanav@gm...>  2013-02-22 04:44:27

On Feb 21, 2013, at 3:25 PM, Roy Stogner <roystgnr@...> wrote:

>> Additionally, within the existing framework, how does one account for
>> *nonlinear* systems that have a mass matrix
>>
>> [M] du/dt = f(u)
>
> Right now, we allow you to request an "elem_fixed_solution" that can
> be used either for low-order time integration of nonlinear mass
> matrices or for nonlinear stabilization in test functions. We're
> actually planning a (backwards-incompatible, unfortunately) change in
> this API, to make it less unnecessarily confusing and to allow for
> higher-order time integration of nonlinear mass matrices and/or for
> nonlinear mass matrices with nonlinear stabilization in test
> functions. If you'd like, I could probably push that change up higher
> on my todo list.

OK, I think I may have started to understand this a little better now (although I am still a little confused). Please correct me as you see fit.

I see that the method mass_residual is used in the classes EulerSolver and Euler2Solver. What is throwing me off is that the class documentation of Euler2Solver says:

* Euler solves u' = f(theta*u_new + (1-theta)*u_old),
* Euler2 solves u' = theta*f(u_new) + (1-theta)*f(u_old)

which seems to imply that no mass matrix is accounted for. Are the u and f(u) here the solution after discretization, or before discretization? My current guess is that the u' in these equations should actually be replaced with the product [M] u', like one would encounter post-discretization. What is your assessment?

Also, while going through the code, I came across the following in euler_solver.C (lines 124-129):

// We do a trick here to avoid using a non-1
// elem_solution_derivative:
context.elem_jacobian *= -1.0;
jacobian_computed = _system.mass_residual(jacobian_computed, context) &&
                    jacobian_computed;
context.elem_jacobian *= -1.0;

What is the need to multiply the elem_jacobian by -1.0 both before and after the function call?
Wouldn't the call to mass_residual overwrite the elem_jacobian anyway?

Thanks,
Manav
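[Editor's note: the distinction between the two theta schemes quoted above is easy to see on a scalar nonlinear ODE. A minimal standalone sketch (no mass matrix, so the documented forms apply literally); the right-hand side f(u) = -u^2 and the fixed-point solves are illustrative choices.]

```python
# "Euler"  evaluates f once at a blended state;
# "Euler2" blends two evaluations of f.  For linear f they coincide;
# for nonlinear f they do not.

def f(u):
    return -u * u              # nonlinear right-hand side; exact
                               # solution of u' = -u^2, u(0)=1 is 1/(1+t)

def euler_step(u_old, theta, dt, iters=60):
    u_new = u_old
    for _ in range(iters):     # fixed-point solve of the implicit relation
        u_new = u_old + dt * f(theta * u_new + (1 - theta) * u_old)
    return u_new

def euler2_step(u_old, theta, dt, iters=60):
    u_new = u_old
    for _ in range(iters):
        u_new = u_old + dt * (theta * f(u_new) + (1 - theta) * f(u_old))
    return u_new

u1 = euler_step(1.0, 0.5, 0.5)     # one big step to t = 0.5
u2 = euler2_step(1.0, 0.5, 0.5)
# Both are second-order accurate at theta = 1/2 (midpoint vs trapezoid
# rule), but the two results differ for this nonlinear f.
```

Both approximate the exact value u(0.5) = 2/3, yet they are distinct schemes, which is why the library offers both.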
From: Roy Stogner <roystgnr@ic...>  2013-02-28 20:10:13

On Thu, 21 Feb 2013, Manav Bhatia wrote:

> What is throwing me off is that the class documentation of
> Euler2Solver says:
>
> * Euler solves u' = f(theta*u_new + (1-theta)*u_old),
> * Euler2 solves u' = theta*f(u_new) + (1-theta)*f(u_old)
>
> which seems to imply that no mass matrix is accounted for.

This is a flaw in the documentation; I'll fix it ASAP.

> Also, while going through the code, I came across the following in
> euler_solver.C (lines 124-129):
>
> // We do a trick here to avoid using a non-1
> // elem_solution_derivative:
> context.elem_jacobian *= -1.0;
> jacobian_computed = _system.mass_residual(jacobian_computed, context) &&
>                     jacobian_computed;
> context.elem_jacobian *= -1.0;
>
> What is the need to multiply the elem_jacobian by -1.0 both before
> and after the function call? Wouldn't the call to mass_residual
> overwrite the elem_jacobian anyway?

The call to mass_residual (or anything else) is supposed to *augment* the Jacobian, rather than writing to a separate Jacobian that would then just be added to the original. This was my bright idea for minimizing L1 cache use, and it is one of the "premature optimization" examples I was talking about.

A problem with that is: if mass_residual is computing a Jacobian w.r.t. elem_solution, but elem_solution is storing something whose derivative w.r.t. the real solution isn't 1.0, then the user has to remember to multiply those Jacobian terms by elem_solution_derivative to take the chain rule into account. The way we do scaling with deltat and -1.0 bits was intended to try and avoid this complication for most codes.
---
Roy
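[Editor's note: the augment-in-place convention and the sign trick can be demonstrated with a toy Jacobian. A standalone NumPy sketch, not libMesh code; the contribution matrix is an arbitrary stand-in for mass_residual's Jacobian part.]

```python
import numpy as np

# A routine that always *adds* its contribution dM to the accumulated
# Jacobian J can be made to subtract it, with no scratch matrix, by
# negating J before and after the call:  -(-J + dM) = J - dM.

def mass_jacobian_contribution(J):
    # Stand-in for mass_residual's Jacobian part: augments J in place.
    J += np.array([[2.0, 0.0],
                   [0.0, 3.0]])

J = np.array([[1.0, 0.0],
              [0.0, 1.0]])      # previously accumulated Jacobian

J *= -1.0                       # the "trick": flip sign...
mass_jacobian_contribution(J)   # ...let the routine add dM...
J *= -1.0                       # ...flip back: J is now original - dM
```

This is why the calls in euler_solver.C do not overwrite elem_jacobian: every physics method accumulates into the same buffer, and the solver applies scalings around the calls.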
From: Manav Bhatia <bhatiamanav@gm...>  2013-02-22 04:51:45

On Feb 21, 2013, at 3:25 PM, Roy Stogner <roystgnr@...> wrote:

>> Additionally, within the existing framework, how does one account for
>> *nonlinear* systems that have a mass matrix
>>
>> [M] du/dt = f(u)
>
> Right now, we allow you to request an "elem_fixed_solution" that can
> be used either for low-order time integration of nonlinear mass
> matrices or for nonlinear stabilization in test functions. We're
> actually planning a (backwards-incompatible, unfortunately) change in
> this API, to make it less unnecessarily confusing and to allow for
> higher-order time integration of nonlinear mass matrices and/or for
> nonlinear mass matrices with nonlinear stabilization in test
> functions. If you'd like, I could probably push that change up higher
> on my todo list.

Upon further study, it seems like elem_fixed_solution stores the element solution in situations where elem_solution may contain the velocity or delta solution. Is this correct?

Manav
From: Roy Stogner <roystgnr@ic...>  2013-03-01 20:58:30

On Thu, 21 Feb 2013, Manav Bhatia wrote:

> Upon further study, it seems like elem_fixed_solution stores the
> element solution in situations where elem_solution may contain the
> velocity or delta solution. Is this correct?

Currently, yes.
---
Roy
From: Manav Bhatia <bhatiamanav@gm...>  2013-03-01 07:07:17

Thanks a lot for the response, Roy.

I did study the code further and was able to make some progress. I am currently debugging some AMR issues in my code.

I think some clarity in the API would be helpful. I learned the hard way, since I assumed that my flux boundary conditions were supposed to go into side_constraint, only to realize that they were not going to get multiplied by dt unless they were in side_residual. Making the change did solve the problem, but there was confusion about what was supposed to go into side_constraint vs. side_residual. Maybe documenting it would be helpful.

As a follow-up: when elem_residual is called, is it possible to get access to the du/dt term?

I have one more question. (I had sent a separate email earlier this morning, but it seems like I am having trouble with my emails and it got lost somewhere.) So, here is the question: line 419 in newton_solver.cpp sets the steplength to 1.0, and line 420 makes a call to line_search, which returns the value of the step length. However, there is no lvalue specified for the call to line_search, so the steplength remains 1.0. I think the call to line_search should be

Real steplength = this->line_search(.....);

Comment?

Thanks,
Manav

On Thu, Feb 28, 2013 at 3:05 PM, Roy Stogner <roystgnr@...> wrote:

> On Thu, 21 Feb 2013, Manav Bhatia wrote:
>
>> Upon further study, it seems like elem_fixed_solution stores the
>> element solution in situations where elem_solution may contain the
>> velocity or delta solution. Is this correct?
>
> Currently, yes.
> ---
> Roy
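[Editor's note: the bug Manav points out, a returned steplength that is never assigned, is easy to reproduce in miniature. A hypothetical standalone sketch: this backtracking line_search and the quadratic residual are illustrative, not the NewtonSolver code.]

```python
# A line search *returns* the accepted steplength; if the caller drops
# the return value, it silently keeps using the initial steplength of
# 1.0, which is exactly the symptom described above.

def line_search(residual, u, du, steplength=1.0, shrink=0.5, max_tries=20):
    r0 = abs(residual(u))
    for _ in range(max_tries):
        if abs(residual(u + steplength * du)) < r0:
            return steplength          # accepted: residual decreased
        steplength *= shrink           # otherwise back off and retry
    return steplength

def residual(u):
    return u * u - 2.0                 # root at sqrt(2)

u = 2.0
du = -residual(u) / (2.0 * u)          # full Newton step: -r / r'
steplength = 1.0
steplength = line_search(residual, u, du)   # the assignment that was missing
u = u + steplength * du
```

Here the full step already reduces the residual, so the search accepts steplength 1.0; the point is that the caller must capture the return value for the damped cases.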
From: Roy Stogner <roystgnr@ic...>  2013-03-01 01:54:12

On Thu, 28 Feb 2013, Manav Bhatia wrote:

> I think some clarity in the API would be helpful. I learned the hard
> way, since I assumed that my flux boundary conditions were supposed
> to go into side_constraint, only to realize that they were not going
> to get multiplied by dt unless they were in side_residual. Making the
> change did solve the problem, but there was confusion about what was
> supposed to go into side_constraint vs. side_residual. Maybe
> documenting it would be helpful.

See what you think about the updated diff_physics.h method comments in 96ca8ddf3208b29d20ba203750524aea3573f3be?

> As a follow-up: when elem_residual is called, is it possible to get
> access to the du/dt term?

Right now, no. That's one of the other motivations for an API change.
---
Roy