From: John Peterson <peterson@cf...> - 2008-02-21 20:57:52

pcorreia@... writes:
> >>> I want to solve a PDE involving a parameter to be incremented
> >>> successively. After obtaining a solution I want to use it as a guess
> >>> for the system to be solved next. How can I implement this?
>
> >> I'm not an expert on libMesh, but I recall that PETSc allows one to
> >> set the initial guess. If PETSc can be accessed through the libMesh
> >> interface then it should be possible quite easily.
>
> > Indeed it is -- this is in fact the default behavior in libMesh. Whatever
> > is in the System.solution vector is handed off to the linear solver as the
> > initial guess. You should be able to verify this by calling
> >
> > equation_systems.solve();
> > equation_systems.solve();
> >
> > and seeing that the second solve converges in 0 iterations.
>
> In fact I obtain:

Is this being run in parallel by any chance? I wonder if you need a
system.update() in between the solves so that during the second assembly
you get the most recent values from current_local_solution. Any chance
you could post a short version of your code that shows the problem?

J
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-02-21 20:23:10

>>> I'm not an expert on libMesh, but I recall that PETSc allows one to
>>> set the initial guess. If PETSc can be accessed through the libMesh
>>> interface then it should be possible quite easily.
>>
>> Indeed it is -- this is in fact the default behavior in libMesh. Whatever
>> is in the System.solution vector is handed off to the linear solver as the
>> initial guess. You should be able to verify this by calling
>>
>> equation_systems.solve();
>> equation_systems.solve();
>>
>> and seeing that the second solve converges in 0 iterations.
>
> ...
>
> It seems there is no change...

I'm not sure what is going on in your application... If I take ex4, add a
second solve as mentioned above, and run

$ ./ex4-dbg -d 2 -n 20 -ksp_converged_reason

I get

Linear solve converged due to CONVERGED_RTOL iterations 78
Linear solve converged due to CONVERGED_RTOL iterations 0
From: <pcorreia@ue...> - 2008-02-21 20:01:53

>>> I want to solve a PDE involving a parameter to be incremented
>>> successively. After obtaining a solution I want to use it as a guess
>>> for the system to be solved next. How can I implement this?
>
>> I'm not an expert on libMesh, but I recall that PETSc allows one to
>> set the initial guess. If PETSc can be accessed through the libMesh
>> interface then it should be possible quite easily.
>
> Indeed it is -- this is in fact the default behavior in libMesh. Whatever
> is in the System.solution vector is handed off to the linear solver as the
> initial guess. You should be able to verify this by calling
>
> equation_systems.solve();
> equation_systems.solve();
>
> and seeing that the second solve converges in 0 iterations.

In fact I obtain:

 Matrix Assembly Performance: Alive time=0.587006, Active time=0.543755
 Event             nCalls   Total Time   Avg Time   Percent of Active Time
 -------------------------------------------------------------------------
 BCs               1536     0.0074       0.000005    1.36
 Ke                512      0.0040       0.000008    0.74
 elem init         512      0.0096       0.000019    1.76
 side integration  1536     0.5228       0.000340   96.14
 -------------------------------------------------------------------------
 Totals:           4096     0.5438                 100.00

 Transport_2 linear solver converged at step: 14, final residual: 9.89093e-06

 Matrix Assembly Performance: Alive time=0.844928, Active time=0.803292
 Event             nCalls   Total Time   Avg Time   Percent of Active Time
 -------------------------------------------------------------------------
 BCs               1536     0.0082       0.000005    1.02
 Ke                512      0.0042       0.000008    0.52
 elem init         512      0.0097       0.000019    1.21
 side integration  1536     0.7812       0.000509   97.25
 -------------------------------------------------------------------------
 Totals:           4096     0.8033                 100.00

 Transport_2A linear solver converged at step: 14, final residual: 9.89093e-06

It seems there is no change...

> Ben
From: Roy Stogner <roystgnr@ic...> - 2008-02-21 19:27:40

On Thu, 21 Feb 2008, Roy Stogner wrote:

>> initial guess. You should be able to verify this by calling
>>
>> equation_systems.solve();
>> equation_systems.solve();
>>
>> and seeing that the second solve converges in 0 iterations.
>
> Wait, doesn't that depend on your tolerance settings? If you're using
> a relative residual reduction, the second solve will start from a
> smaller initial residual than the first solve, and will have to do
> work to make that residual even smaller.

Wait, actually, it depends. That applies to nonlinear solvers, where the
assembled residual is a function of the initial guess for the solution;
but for linear problems, where the right-hand side of the equation is a
solution-independent forcing function, you're right.

--
Roy
From: Roy Stogner <roystgnr@ic...> - 2008-02-21 19:25:47

On Thu, 21 Feb 2008, Benjamin Kirk wrote:

>>> I want to solve a PDE involving a parameter to be incremented
>>> successively. After obtaining a solution I want to use it as a guess for
>>> the system to be solved next. How can I implement this?
>
>> I'm not an expert on libMesh, but I recall that PETSc allows one to
>> set the initial guess. If PETSc can be accessed through the libMesh
>> interface then it should be possible quite easily.
>
> Indeed it is -- this is in fact the default behavior in libMesh. Whatever
> is in the System.solution vector is handed off to the linear solver as the
> initial guess. You should be able to verify this by calling
>
> equation_systems.solve();
> equation_systems.solve();
>
> and seeing that the second solve converges in 0 iterations.

Wait, doesn't that depend on your tolerance settings? If you're using a
relative residual reduction, the second solve will start from a smaller
initial residual than the first solve, and will have to do work to make
that residual even smaller.

--
Roy
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-02-21 19:18:33

>> I want to solve a PDE involving a parameter to be incremented
>> successively. After obtaining a solution I want to use it as a guess for
>> the system to be solved next. How can I implement this?

> I'm not an expert on libMesh, but I recall that PETSc allows one to
> set the initial guess. If PETSc can be accessed through the libMesh
> interface then it should be possible quite easily.

Indeed it is -- this is in fact the default behavior in libMesh. Whatever
is in the System.solution vector is handed off to the linear solver as the
initial guess. You should be able to verify this by calling

equation_systems.solve();
equation_systems.solve();

and seeing that the second solve converges in 0 iterations.

Ben
From: Roy Stogner <roystgnr@ic...> - 2008-02-21 19:16:16

On Thu, 21 Feb 2008, Nachiket Gokhale wrote:

> I'm not an expert on libMesh, but I recall that PETSc allows one to
> set the initial guess. If PETSc can be accessed through the libMesh
> interface then it should be possible quite easily.

The behavior in libMesh should (in this case) be practically automatic:
the current System.solution value gets used as the initial guess in the
next solve. I've only verified this behavior for the NewtonSolver I wrote,
but I seem to recall us fixing one of the other solvers to make sure it
worked that way too.

In other words, after solving your system with parameter 1, you ought to
be able to just solve it again with parameter 2 and it'll begin iterating
from the parameter 1 solution. If one of the solver classes doesn't have
that behavior, let us know; I'd consider it a bug.

--
Roy
From: Nachiket Gokhale <gokhalen@gm...> - 2008-02-21 19:00:59

I'm not an expert on libMesh, but I recall that PETSc allows one to set
the initial guess. If PETSc can be accessed through the libMesh interface
then it should be possible quite easily. I'll wait for the experts to give
an authoritative answer. :)

Nachiket

On Thu, Feb 21, 2008 at 1:52 PM, <pcorreia@...> wrote:
> Hi,
> I want to solve a PDE involving a parameter to be incremented
> successively. After obtaining a solution I want to use it as a guess for
> the system to be solved next. How can I implement this?
> Thanks
> Paulo
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Microsoft
> Defy all challenges. Microsoft(R) Visual Studio 2008.
> http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
> _______________________________________________
> Libmesh-users mailing list
> Libmesh-users@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: <pcorreia@ue...> - 2008-02-21 18:52:51

Hi,

I want to solve a PDE involving a parameter to be incremented
successively. After obtaining a solution I want to use it as a guess for
the system to be solved next. How can I implement this?

Thanks,
Paulo
From: John Peterson <peterson@cf...> - 2008-02-21 16:20:08

Hi,

Yes, we have run the PETSc nonlinear solver before. Before we can help
you, it seems you will need to ask a specific question, provide the exact
error messages you are getting (translated to English where appropriate),
and also tell us what you have tried doing to fix the problem so far.

Also, I copied your email to the libmesh-users list. This is the best
place to get answers (better than just asking one of the developers), and
it automatically archives the discussions so they can potentially help out
others later.

J

TCHOUANMO Stephane DOCTORANT writes:
> Hi John,
>
> Once again I'm coming back to you to get some info on libMesh.
> Have you ever tried to run the nonlinear solver from PETSc, or any
> other nonlinear solver, in libMesh? I know that the code is supposed
> to work with PETSc's nonlinear solver, and I have tried to make it run
> in vain. So I was wondering if you had experience with it in the past,
> so you could help me out. I get errors, basically from MPI, in libMesh,
> and I really don't understand what the deal with PETSc is. Do you
> have any idea about running PETSc in the code?
>
> Thanks a lot.
>
> Stéphane
From: Vijay S. Mahadevan <vijay.m@gm...> - 2008-02-21 04:57:31

> I don't disagree that the optimal mesh for each component would be great
> to have, but I just hope you aren't underestimating the overhead involved
> in seeking that composite mesh. This is *especially* true in the
> transient case. When you add it all together the
> transient+adaptive+nonlinear+linear nested looping really can get out of
> control.

Yes, I understand the complications this is going to involve. That is why
I am planning to attack this one step at a time. I already have the code
in place to solve a single physics (diffusion-reaction) with
transient+nonlinear+linear loops, where I manually create and use my SNES
object and application context. I personally think that the nonlinear
solver in libMesh needs more work in terms of providing routines to set
the different options that the PETSc SNES object exposes. But then again,
this might be specific to my current implementation, and hence I don't
want to generalize.

Anyway, the next step would be to design a custom interface to deal with
the different physics and implement the coupling in a consistent way, so
as not to lose spatial accuracy while using different meshes. If all that
works according to plan, then adaptivity in space will be included, and
later adaptivity in time using higher-order IRK schemes. That way, I will
be able to test the whole solution procedure thoroughly and have
confidence in the final solution.

I am curious as to what kind of multiphysics problems you have solved with
libMesh before and what kind of approach you took for those. I gather you
used a single mesh for both sets of physics, but were you able to preserve
the accuracy of the coupled solution in space and time? And did you use
operator splitting with iterations over the coupled physics?

~Vijay

On Wed, Feb 20, 2008 at 9:57 PM, Benjamin Kirk <benjamin.kirk@...> wrote:
>> Multiphysics problems usually have physics with different length
>> scales and different time scales. It is necessary to use appropriate
>> meshes depending on the physics to resolve the evolution of the
>> solution, and using a single mesh (the union of all the physics meshes)
>> will lead to a much higher DoF count than needed; I'll literally be
>> overkilling the problem.
>
> I don't disagree that the optimal mesh for each component would be great
> to have, but I just hope you aren't underestimating the overhead involved
> in seeking that composite mesh. This is *especially* true in the
> transient case. When you add it all together the
> transient+adaptive+nonlinear+linear nested looping really can get out of
> control.
>
> My experience with a number of transient multiphysics problems has shown
> this repeatedly. The "optimal" mesh in a transient reactive problem is
> elusive -- it will be different at each timestep! Sure, the DOF count may
> be lower, but when you roll it all together my tried and true approach is
> to throw more mesh than you need at the current timestep but then go a
> while before refining (and reallocating (and repartitioning (and
> projecting ...))). I can almost guarantee much faster walltime with this
> approach. If the implicit system gets big you can run on a bigger
> machine, right? ;)
>
> Ben
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-02-21 03:58:01

> Multiphysics problems usually have physics with different length
> scales and different time scales. It is necessary to use appropriate
> meshes depending on the physics to resolve the evolution of the solution,
> and using a single mesh (the union of all the physics meshes) will lead
> to a much higher DoF count than needed; I'll literally be overkilling the
> problem.

I don't disagree that the optimal mesh for each component would be great
to have, but I just hope you aren't underestimating the overhead involved
in seeking that composite mesh. This is *especially* true in the transient
case. When you add it all together the transient+adaptive+nonlinear+linear
nested looping really can get out of control.

My experience with a number of transient multiphysics problems has shown
this repeatedly. The "optimal" mesh in a transient reactive problem is
elusive -- it will be different at each timestep! Sure, the DOF count may
be lower, but when you roll it all together my tried and true approach is
to throw more mesh than you need at the current timestep but then go a
while before refining (and reallocating (and repartitioning (and
projecting ...))). I can almost guarantee much faster walltime with this
approach. If the implicit system gets big you can run on a bigger machine,
right? ;)

Ben
From: Vijay S. Mahadevan <vijay.m@gm...> - 2008-02-21 03:13:46

Thanks for all the responses. It is always great to hear different
opinions!

Roy,

Thanks for the detailed response. I now understand that a fully implicit
coupled solve, if need be, has to be performed only by manually managing
the libMesh objects (EquationSystems, Mesh and System). I also understand
the problem with deriving the sparsity pattern for a given coupled
equation system, since this is something I've been working on for the past
few weeks.

>> if you're concerned with performance then you'll need to create (and
>> maintain through adaptivity) lookup tables to quickly find elements in
>> one mesh which overlap an element in another.

Yes. This is something I have to attack soon, since non-overlapping meshes
for the different physics are inevitable in the problem I am solving, and
performance of the solution procedure is quite an important consideration.
Also, thanks very much for pointing me to the MultiMesh work. I will try
to find out more about it and see how useful it is for my case.

John,

What you suggested is a perfect operator-splitting algorithm :) and hence
will have the stability limit and a first-order-in-time penalty unless I
iterate over the solution for each physics. I've already tried this
before, although not with libMesh, and can tell you that this is not an
option for the problems I will be dealing with. I do plan to keep that
option open in order to use it for physics-based preconditioning to
accelerate convergence of the fully implicit nonlinear solve at each time
step. Again, thanks for confirming that I need to maintain my own
EquationSystems and mesh!

Ben,

Multiphysics problems usually have physics with different length scales
and different time scales. It is necessary to use appropriate meshes
depending on the physics to resolve the evolution of the solution, and
using a single mesh (the union of all the physics meshes) will lead to a
much higher DoF count than needed; I'll literally be overkilling the
problem.

The other issue you raise is a very valid one, and interpolation error
would reduce the spatial convergence to first order if not done properly.
There are several approaches I think will preserve the spatial accuracy
when coupling solutions on different meshes, and this is currently a hot
research topic in the multiphysics community. I will be dealing with this
very soon, when my coupled diffusion code is ready, and will let you know
my findings.

Everyone, thanks for all the suggestions and help in making me understand
the capabilities and limitations of the current implementation of libMesh.
I am going to go ahead with the idea of maintaining separate meshes for
each physics, computing the residuals, and solving the nonlinear system
with SNES manually. I will definitely have a lot of questions when I dig
more into this. I appreciate all the help; feel free to suggest
alternative implementations.

Vijay

On Wed, Feb 20, 2008 at 7:31 PM, Roy Stogner <roystgnr@...> wrote:
> On Wed, 20 Feb 2008, Benjamin Kirk wrote:
>
>> If these truly need to be in the same implicit system, though, I cannot
>> think of *why* you would want to have them on two separate
>> discretizations??
>
> To use separate refinement patterns. For a simple example, the solute
> layers you might want to adaptively resolve in a concentration variable
> for a transport problem won't necessarily have anything to do with the
> boundary layers and/or corner singularities in the velocity/pressure
> variables. It's not a bad idea, it's just very hard to do in such a way
> that the computational overhead of maintaining and integrating on
> separate meshes doesn't overwhelm the computational benefit of getting
> the same solution quality with fewer redundant degrees of freedom.
>
> Roy
From: Roy Stogner <roystgnr@ic...> - 2008-02-21 01:31:37

On Wed, 20 Feb 2008, Benjamin Kirk wrote:

> If these truly need to be in the same implicit system, though, I cannot
> think of *why* you would want to have them on two separate
> discretizations??

To use separate refinement patterns. For a simple example, the solute
layers you might want to adaptively resolve in a concentration variable
for a transport problem won't necessarily have anything to do with the
boundary layers and/or corner singularities in the velocity/pressure
variables. It's not a bad idea, it's just very hard to do in such a way
that the computational overhead of maintaining and integrating on separate
meshes doesn't overwhelm the computational benefit of getting the same
solution quality with fewer redundant degrees of freedom.

--
Roy
From: Benjamin Kirk <benjamin.kirk@na...> - 2008-02-21 01:20:37

>>> For simplification, consider 2 physics on the same domain: consider
>>> 3D heat conduction and a neutron diffusion model (both are nonlinear
>>> diffusion-reaction equations) which are described over the same 3D
>>> domain. Now, can I get away with using a single EquationSystems object
>>> with a single mesh even though the element sizes used for the
>>> different physics could be different? And since I would define 'mesh'
>>> to be the representation of the domain using different elements, and
>>> since the elements used in terms of size and type could be different
>>> for different physics, these would be different meshes. Does that make
>>> sense?

The question makes sense; the motivation does not (to me anyway).

> You basically want to have two (or more) completely unstructured,
> overlapping grids with a different variable defined on each one, if I
> understand correctly...
>
> The problem is that it's easier to describe/conceptualize than it is to
> implement, at least in a completely general sense, and while in the
> libMesh framework.
>
> The approaches that Ben described for Taylor-Hood elements, where you
> have a linear pressure sitting on top of a quadratic velocity field,
> are, I imagine, too specialized for what you want to do. (Another issue
> which arises in these mixed methods is the compatibility of the spaces,
> and that can be tricky to work out.)

And this is a key point... Loosely coupling two separate systems on
different meshes is not too hard and is often done in practice. You could,
for example, have a temperature field on a tetrahedral discretization of
the same space as a hexahedral discretization of a velocity field. Each
would give rise to its own implicit system. The only compatibility you
have between the discretizations is the interpolation error you are
willing to take when you interpolate the solution of one system as the
input data of another.

If these truly need to be in the same implicit system, though, I cannot
think of *why* you would want to have them on two separate
discretizations??

Ben