From: John P. <pet...@cf...> - 2008-02-20 22:24:45
Hi,

Hmm... you didn't mention before that you wanted to pose simultaneous
problems in different space dimensions, or you did and I didn't understand
that that's what you were doing...

So there's no flow at all in the porous material? It seems that the two
sets of equations are only coupled through the boundary, and thus a
decoupled, iterative approach to solving the two is a natural way to
handle it.

Why not declare two EquationSystems, each containing a single System: one
for the 3D heat conduction and one for the 1D fluid flow? Then, depending
on whatever decoupled scheme you use to solve the two, you can define
custom functions which pull values out of one EquationSystem/System/Mesh
and use them in the other. [A minimal sketch of this layout is appended
after the quoted thread below.]

The idea behind an EquationSystems object was to hold multiple Systems of
PDEs all posed on the same **domain**. Here you have two domains, so I
would use two EquationSystems.

My $0.02,
John

Vijay S. Mahadevan writes:
> Ben,
>
> Sorry about the late reply. I got caught up with some other things.
>
> > > But my question then is whether this would entail excessive
> > > overhead, say in creating 2-3 such Systems and managing them
> > > manually to interface and control the interactions.
> >
> > Probably not the easiest solution...
>
> Do you see a different route to doing the same thing? I would be very
> interested in knowing the alternatives, since although possible, this
> approach might require quite a few modifications and hacks to the
> design of my code based on libMesh.
>
> > I guess it depends on what you mean by a 'mesh.' I can easily
> > conceive of a single mesh which meets your needs. It can contain
> > overlapping elements of different types if need be, and of different
> > sizes. You would want to use the element subdomain flag or something
> > like that to associate a set of elements with a set of physics, but
> > the point is that all the elements for the multiple sets of physics
> > lie in the same 'mesh.'
>
> I would define a mesh as the actual discretization closely tied to a
> single physics. Since different physics have different characteristic
> length scales, imposing the same mesh or discretization for all physics
> is far from ideal. I understand that you can associate different FE
> bases with different variables in the system, but isn't the type
> (triangle, quad) of the element still the same in each case? Or maybe
> I have misunderstood you here, since I have only been using the
> build_cube function for all my preliminary work.
>
> When you say "all the elements for the multiple sets of physics lie in
> the same 'mesh'," do you mean that the 'mesh' is the union of all the
> meshes for the individual physics? Then do you define each physics
> only on the elements that you are interested in? For example, in 1-D I
> could create a fine mesh and make this my base mesh with size h. Maybe
> one physics will then use elements of size h, another physics could
> use elements of size 2h, and another possibly uses elements of size
> 4h. If this is what you suggested, I can buy this idea.
>
> I have attached a pdf document along with the mail showing the PDEs
> that I am dealing with. I just copied and pasted a few things in there
> and wrote a short explanation, so pardon me if certain things aren't
> very clear. I'll be glad to give you any details to understand this
> better. Hope this helps clarify the context of the original problem.
>
> Thanks and awaiting your reply.
> Vijay
>
> On Mon, Feb 18, 2008 at 6:22 PM, Benjamin Kirk <ben...@na...> wrote:
> > > The penalty of using operator splitting is that you end up with a
> > > discrete system that has only conditional stability in time
> > > integration, since the coupling is explicit. If you do iterate
> > > between the different operators at each time step, such an issue
> > > can be avoided, but at the increased cost of coupled physics
> > > iterations per time step.
> >
> > Right -- but I assumed you were going to do this anyway, since you
> > mentioned separate systems. In the end each system ends up with its
> > own linear system, so the two-system solution is not what you want.
> >
> > > My goal is to perform fully implicit coupling (without operator
> > > splitting) of 2 different physics and use the PETSc SNES object to
> > > solve the nonlinear system. I guess from your previous answer I
> > > glean that this is currently not possible due to the issue with
> > > the association of the mesh with EquationSystems. I was thinking
> > > about a "hack" to simulate such behavior by creating a dummy
> > > EquationSystems object/System (Nonlinear or Linear) and
> > > associating a corresponding mesh with the EquationSystems object.
> > > But my question then is whether this would entail excessive
> > > overhead, say in creating 2-3 such Systems and managing them
> > > manually to interface and control the interactions.
> >
> > Probably not the easiest solution...
> >
> > > On a side note, it seems to be an odd design decision in libMesh,
> > > IMHO, to associate the mesh with EquationSystems rather than with
> > > System, since EquationSystems is a collection of different Systems
> > > that could each reside on its own mesh, and no one other than the
> > > system managing the solution needs to know about the computational
> > > mesh for the different variables. If this were the case, you could
> > > create meshes for each system, associate them with the unknown
> > > variables, and add them to a collection, the EquationSystems
> > > object. And that makes a lot of sense to me.
> >
> > I guess it depends on what you mean by a 'mesh.' I can easily
> > conceive of a single mesh which meets your needs. It can contain
> > overlapping elements of different types if need be, and of different
> > sizes. You would want to use the element subdomain flag or something
> > like that to associate a set of elements with a set of physics, but
> > the point is that all the elements for the multiple sets of physics
> > lie in the same 'mesh.'
> >
> > This would require some work in the DofMap to allocate dofs only to
> > a subset of the active elements, but that is do-able.
> >
> > > Also, I would like to know the reason for such a design and
> > > whether it is something that is necessary in the solution
> > > procedure. If I have naively overlooked important aspects in my
> > > statement, feel free to set me straight!
> >
> > I can't say for sure without knowing the PDE(s) you want to solve
> > and the proposed discretization, but it may very well fall into the
> > existing framework. In your example you mention velocity/pressure on
> > a staggered grid. This is exactly the way ex11/ex13 work, for
> > example, but in disguise. The velocity is interpolated with a
> > quadratic basis, the pressure interpolated linearly. The end result
> > is that the 'effective pressure mesh' is a factor of 2 coarser in
> > each dimension than the 'velocity mesh.'
> > But in typical finite element fashion this is done by using separate
> > element types for the different variables rather than a separate
> > mesh. [A second sketch of this single-mesh, mixed-order approach is
> > appended below.]
> >
> > If that sounds more like what you are talking about, then everything
> > should work as-is.
> >
> > > Thanks for all the help.
> >
> > No problem! Again, point us to the PDEs and we might have some more
> > insight.
> >
> > -Ben
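
A minimal sketch of the two-EquationSystems layout John suggests above,
assuming the current libMesh API (the 2008-era API differs slightly, e.g.
Mesh took a dimension rather than a communicator). The assemble_heat and
assemble_flow callbacks and the commented-out transfer routines are
hypothetical placeholders for the user's own physics and boundary-coupling
scheme, and the mesh sizes are arbitrary:

#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/equation_systems.h"
#include "libmesh/linear_implicit_system.h"

using namespace libMesh;

// Placeholder assembly routines -- the real heat-conduction and
// fluid-flow physics would be assembled here.
void assemble_heat (EquationSystems &, const std::string &) { /* user physics */ }
void assemble_flow (EquationSystems &, const std::string &) { /* user physics */ }

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // 3D mesh and EquationSystems for heat conduction in the solid.
  Mesh solid_mesh (init.comm());
  MeshTools::Generation::build_cube (solid_mesh, 10, 10, 10,
                                     0., 1., 0., 1., 0., 1., HEX8);
  EquationSystems solid_es (solid_mesh);
  LinearImplicitSystem & heat =
    solid_es.add_system<LinearImplicitSystem> ("heat");
  heat.add_variable ("T", FIRST);
  heat.attach_assemble_function (assemble_heat);
  solid_es.init ();

  // 1D mesh and EquationSystems for the fluid channel.
  Mesh fluid_mesh (init.comm());
  MeshTools::Generation::build_line (fluid_mesh, 50, 0., 1., EDGE2);
  EquationSystems fluid_es (fluid_mesh);
  LinearImplicitSystem & flow =
    fluid_es.add_system<LinearImplicitSystem> ("flow");
  flow.add_variable ("u", FIRST);
  flow.attach_assemble_function (assemble_flow);
  fluid_es.init ();

  // Decoupled iteration: solve one system, hand its boundary values to
  // the other through user-written transfer functions, and repeat.
  for (unsigned int it = 0; it != 20; ++it)
    {
      heat.solve ();
      // transfer_wall_temperature (solid_es, fluid_es);  // user-written
      flow.solve ();
      // transfer_fluid_state (fluid_es, solid_es);       // user-written
    }

  return 0;
}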
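
And a minimal sketch of the single-mesh alternative Ben describes, with
different approximation orders for different variables in one System, as
in the old ex11/ex13 examples. The mesh size, variable names, and the
"Stokes" system name are illustrative only:

#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/equation_systems.h"
#include "libmesh/linear_implicit_system.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // One mesh; QUAD9 elements can carry the quadratic velocity basis.
  Mesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 20, 20, 0., 1., 0., 1., QUAD9);

  EquationSystems es (mesh);
  LinearImplicitSystem & stokes =
    es.add_system<LinearImplicitSystem> ("Stokes");

  // Variables in the same System may use different orders: quadratic
  // velocity and linear pressure, so the 'effective pressure mesh' is a
  // factor of 2 coarser even though only one Mesh object exists.
  stokes.add_variable ("u", SECOND);
  stokes.add_variable ("v", SECOND);
  stokes.add_variable ("p", FIRST);

  es.init ();
  es.print_info ();

  return 0;
}

For the subdomain idea Ben mentions, later libMesh releases also appear to
let add_variable() take a set of subdomain ids so that a variable only
receives dofs on a subset of the elements; that refinement did not exist
when this thread was written.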