From: Ataollah M. <am...@ti...> - 2011-12-14 14:20:12
|
Dear all, I wonder if there is a compatible version: is it possible to use the PETScNonlinearSolver class with SNESVI (PETSc 3.2) to solve a variational inequality problem? Thanks, Ata Mesgarnejad |
From: Dmitry K. <ka...@mc...> - 2011-12-14 14:25:05
|
We (ANL) are working on a patch to libMesh to enable this functionality. The motivation is to use it through MOOSE, but it will be exposed as a libMesh API. Dmitry. On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad < am...@ti...> wrote: > Dear all, > > I wonder if there is a compatible version: is it possible to > use the PETScNonlinearSolver class with SNESVI (PETSc 3.2) to solve a > variational inequality problem? > > Thanks, > Ata Mesgarnejad > > ------------------------------------------------------------------------------ > Cloud Computing - Latest Buzzword or a Glimpse of the Future? > This paper surveys cloud computing today: What are the benefits? > Why are businesses embracing it? What are its payoffs and pitfalls? > http://www.accelacomm.com/jaw/sdnl/114/51425149/ > _______________________________________________ > Libmesh-users mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-users > |
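For readers unfamiliar with SNESVI: it solves variational inequalities subject to box constraints, i.e. componentwise bounds lb <= x <= ub on the solution Vec. The feasibility operation underlying such solvers is a componentwise clamp; a minimal self-contained sketch follows (plain C++, no PETSc types; `project_box` is a name invented for this illustration):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Componentwise projection onto the box [lb, ub] -- the feasibility
// operation at the heart of box-constrained VI solvers such as SNESVI.
std::vector<double> project_box(const std::vector<double>& x,
                                const std::vector<double>& lb,
                                const std::vector<double>& ub)
{
    std::vector<double> p(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        p[i] = std::min(std::max(x[i], lb[i]), ub[i]);  // clamp entry i into [lb_i, ub_i]
    return p;
}
```

Both active-set and semismooth VI methods rely on this projection (or a variant of it) to keep iterates feasible.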
From: Dmitry K. <ka...@mc...> - 2012-05-01 01:40:22
|
Ata, example7 segfaults when you run in parallel? Does it do it on 2 procs? Dmitry. On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < am...@ti...> wrote: > Dmitry, > > I had a very good experience on a single processor but on multiple > processors I get a segmentation error at the line where we call solve (I tried > both my code and example7). I tried your example without -vi > --use-petsc-dm and it works just fine. Right now I'm trying to compile a > debug version and look into it; meanwhile I thought it would be a good idea > to let you know. > > Thanks, > Ata > > On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: > > > > On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < > am...@ti...> wrote: > >> >> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >> >> >> >> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >> am...@ti...> wrote: >> >>> Dmitry, >>> >>> I had time to come back and work on the SNESVI/libMesh program. The way >>> you set up your code is particularly hard for me to follow in my own code >>> since it requires a significant amount of extra work (to define classes >>> and set up the inheritances...). What I did instead was to try adding a >>> TransientNonlinearImplicitSystem to my EquationSystems object: >>> >>> TransientNonlinearImplicitSystem & fracture_system = >>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>> >>> and then to add the bounds, Jacobian, and Residual functions: >>> >>> fracture_system.attach_init_function(init_ic_fracture); >>> fracture_system.nonlinear_solver->jacobian= PsiFormJacobian; >>> fracture_system.nonlinear_solver->residual= PsiFormResidual; >>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>> >>> what I see now is that the SNES solve does not apply the bounds. So here are >>> my questions: >>> >>> 1- Is there anything inherently wrong with setting up the system, especially >>> the bounds, the easy way like I did? 
>>> >> Ata, >> Doing things this "simple" way is just fine: there is no reason to follow >> the relatively convoluted structure I used in that >> example. When I have time, I'll rewrite it to simplify things. >> >>> 2- Where is the right SNES type chosen? I guess this should be >>> hard-wired somewhere in NonlinearImplicitSystem! >>> 3- When I use -snes_view (with your example or my own code) I get >>> snes_type ls; shouldn't this be VI? >>> >> >> What's likely missing is the command-line option --use-petsc-dm. It's >> necessary since libMesh wants to preserve the usual behavior as >> the default. In particular, because SNESVI and the DM hookup aren't really >> possible prior to petsc-3.2. >> In fact, it might not be possible without a recent petsc-dev (let me >> verify that, once I get to my computer). >> >> In the meantime, could you give --use-petsc-dm a try? >> >> I just saw that in the example help line and used it. It works like a >> charm. >> > Great! > Dmitry. > >> I'll go on to check the convergence and see what I can get. >> >> Thanks, >> Ata >> >> Thanks. >> >> Dmitry. >> >>> >>> Thanks, >>> Ata >>> >>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>> >>> Convergence is something we might need to work on -- I noticed some >>> chatter in a simple bounded biharmonic heat equation (a stripped-down >>> version of Cahn-Hilliard). >>> Thanks. >>> Dmitry. >>> >>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>> am...@ti...> wrote: >>> >>>> Thanks Dmitry, >>>> >>>> That would be wonderful. I made a wrapper for SNESVI myself for my >>>> program but had difficulty achieving good convergence. Right now I am >>>> occupied with other issues but I'll get back to this as soon as I get a >>>> chance. >>>> >>>> Ata >>>> >>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>> >>>> Ata, >>>> There is now a way to use SNESVI from libMesh. 
>>>> If interested, pull a recent revision from sourceforge and take a look >>>> at examples/miscellaneous/miscellaneous_ex7. >>>> Let me know if you have any questions. >>>> >>>> Thanks. >>>> Dmitry. >>>> >>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...>wrote: >>>> >>>>> Okay, thanks for your feedback. >>>>> I'll keep you updated on this work. >>>>> >>>>> Dmitry. >>>>> >>>>> >>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>> am...@ti...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev <ka...@mc... >>>>>> > wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>> am...@ti...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>> ka...@mc...> wrote: >>>>>>>> >>>>>>>>> Ata, >>>>>>>>> >>>>>>>>> A couple of things: >>>>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": the >>>>>>>>> Vec x that the residual operates on is entrywise between Vecs xlow and >>>>>>>>> xhigh. In order to enable >>>>>>>>> constraints on a linear combination of the Vec entries (as you >>>>>>>>> suggest), the problem >>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>> combinations. This, of course, >>>>>>>>> changes the Jacobian, preconditioner, etc. The user provides only >>>>>>>>> the Jacobian for the original system, so libMesh would have to >>>>>>>>> automatically construct the Jacobian for the extended system, and >>>>>>>>> precondition it using the user-provided PC for the original subsystem. This >>>>>>>>> is not necessarily an unreasonable thing to do in many circumstances, but >>>>>>>>> will take more thinking than a straightforward nodal constraint. The user >>>>>>>>> API for specifying the constraints, however, becomes rather straightforward. >>>>>>>>> >>>>>>>>> 2. 
In terms of FEM convergence, constraining at the nodes is >>>>>>>>> sufficient (see, for example, >>>>>>>>> Glowinski's book: >>>>>>>>> http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C >>>>>>>>> ) >>>>>>>>> You can show that if you impose the nodal constraints, in the >>>>>>>>> limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>> >>>>>>>>> In this case no funny manipulation of the Jacobian or the PC is >>>>>>>>> necessary, but the user API to specify the constraint is a bit ambiguous: >>>>>>>>> how do you constrain the node? From a containing element? Then there are >>>>>>>>> multiple, possibly incompatible, constraints. >>>>>>>>> >>>>>>>> Or you can simply go through all the nodes in the system once using >>>>>>>> the node iterator. Here is how I think you can do it: >>>>>>>> >>>>>>>> ------------------------------------- >>>>>>>> //make two PetscVectors ub,lb from two variables you already added >>>>>>>> to the system >>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>>> PetscVector<Number>&lb = >>>>>>>> *libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>> PetscVector<Number>&ub = >>>>>>>> *libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>> >>>>>>>> *then in the bound routine do:* >>>>>>>> >>>>>>>> // Get a constant reference to the mesh object. 
>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>> //Get the iterator >>>>>>>> MeshBase::const_node_iterator nd = >>>>>>>> mesh.local_nodes_begin(); >>>>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end(); >>>>>>>> for (; nd != end_nd; ++nd){ >>>>>>>> const Node* node = *nd; >>>>>>>> //get the degree of freedom of the node >>>>>>>> unsigned int dn = node->dof_number(0,0,0); // system >>>>>>>> 0 , var 0, component 0 >>>>>>>> //set the ub,lb >>>>>>>> ub(dn) = A; >>>>>>>> lb(dn) = B; >>>>>>>> } >>>>>>>> //close the vectors >>>>>>>> ub.close(); >>>>>>>> lb.close(); >>>>>>>> >>>>>>> >>>>>>> You could do that. That's probably the mode we can support almost >>>>>>> immediately. >>>>>>> The issue here is that you lose some of the "context" by processing >>>>>>> each node by >>>>>>> itself rather than as part of an element. For example, what if each >>>>>>> element is a different material? >>>>>>> I guess in that case that property should be set on the nodes as >>>>>>> well. >>>>>>> >>>>>> >>>>>> Exactly, one can always pass a material property as a CONSTANT MONOMIAL >>>>>> (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with a >>>>>> value on each element. >>>>>> >>>>>> Ata >>>>>> >>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>>> ----------------------------------------------------------------- >>>>>>>> Ata >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad < >>>>>>>>> am...@ti...> wrote: >>>>>>>>> >>>>>>>>>> I hope I understood the whole thing correctly. >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev < >>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad < >>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>> >>>>>>>>>>>> I don't think setting the variable bound on each >>>>>>>>>>>> node separately is a good idea in general. >>>>>>>>>>> >>>>>>>>>>> I'm not sure what you mean by this. 
Could you clarify? I don't >>>>>>>>>>> think I'm advocating this. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> Bounds should be set as a sum on each element's degrees of >>>>>>>>>>>> freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I'm not sure what you mean here either. A bound is usually not >>>>>>>>>>> a sum (e.g., a bound on a concentration is 0 <= c <= 1). >>>>>>>>>>> Perhaps your application expresses the total bound as a sum? >>>>>>>>>>> I'd be very interested in learning more about that case. >>>>>>>>>> >>>>>>>>>> OK, here is my thought process: one optimally wants to bound the >>>>>>>>>> variable over the whole element domain between lb, ub. So we would have >>>>>>>>>> something like this: >>>>>>>>>> >>>>>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_{i}(x) \hat{u}_{i} \leq ub, >>>>>>>>>> \forall x \in El >>>>>>>>>> (Eq-1) >>>>>>>>>> >>>>>>>>>> so it's not exactly the nodal values (\hat{u}_{i}) that should be >>>>>>>>>> bounded, unless it's an FD application where \phi_i(x) is a delta >>>>>>>>>> function! That said, I think given a normalized basis, bounding nodal >>>>>>>>>> values is a stricter requirement than (Eq-1). If that's what you meant by >>>>>>>>>> inconsistencies between node bounds then the stronger bound would be my >>>>>>>>>> first priority. >>>>>>>>>> >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Of course it might run into a situation with a semi-smooth VI. >>>>>>>>>>> >>>>>>>>>>> The type of VI solver is independent of how the bounds are >>>>>>>>>>> specified: once we know the upper and lower bounds for each element of a >>>>>>>>>>> Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>> method. >>>>>>>>>>> >>>>>>>>>>> I really appreciate your feedback. I hope we can get to the >>>>>>>>>>> point where things are as clear as possible so that you can >>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>> >>>>>>>>>>> Dmitry. 
>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Ata >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm looking at >>>>>>>>>>>>> right now is the libMesh API for specifying >>>>>>>>>>>>> the bounds on the variables. It seems to me that the user >>>>>>>>>>>>> would ideally want to specify the bounds on the variable >>>>>>>>>>>>> one element at a time, since that's the default traversal mode >>>>>>>>>>>>> for libMesh. Then, of course, many nodal values >>>>>>>>>>>>> are traversed many times, allowing for an inconsistency. The >>>>>>>>>>>>> question is, what do you, as a user, expect in that situation? >>>>>>>>>>>>> Should the latest bound that the user has set be applied, or >>>>>>>>>>>>> the tightest of the bounds the user has supplied? >>>>>>>>>>>>> >>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks Dmitry, I look forward to using it. Also, if it's >>>>>>>>>>>>>> available as a dev version I would love to try to work with it. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Ata >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev < >>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> We (ANL) are working on a patch to libMesh to enable this >>>>>>>>>>>>>>> functionality. >>>>>>>>>>>>>>> The motivation is to use it through MOOSE, but it will be >>>>>>>>>>>>>>> exposed as a libMesh API. >>>>>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>> Dear all, is it possible to use the PETScNonlinearSolver class with SNESVI (PETSc 3.2) to solve a variational inequality problem? >>>>>>>>>>>>>>>> Thanks, Ata Mesgarnejad >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>> Baton Rouge, La 70803 |
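The API question raised in the thread above (element-wise traversal sets bounds on shared nodes more than once, so which bound wins?) admits a simple deterministic resolution: keep the tightest bound seen so far. A self-contained sketch of that merge rule (plain C++, no libMesh types; `merge_bound` and `Bounds` are names invented for this illustration, not libMesh API):

```cpp
#include <algorithm>
#include <map>

// A (lower, upper) bound pair for one degree of freedom.
struct Bounds { double lb, ub; };

// Element-wise traversal visits shared nodes repeatedly, so a DOF may
// receive several (lb, ub) pairs.  The "tightest bound wins" policy
// keeps the largest lower bound and the smallest upper bound seen.
void merge_bound(std::map<int, Bounds>& bounds, int dof, double lb, double ub)
{
    auto it = bounds.find(dof);
    if (it == bounds.end())
        bounds[dof] = Bounds{lb, ub};           // first visit: record as-is
    else {
        it->second.lb = std::max(it->second.lb, lb);  // tightest lower bound
        it->second.ub = std::min(it->second.ub, ub);  // tightest upper bound
    }
}
```

The alternative Dmitry mentions, "latest bound wins", would make the result depend on element traversal order; the merge above does not.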
From: Dmitry K. <ka...@mc...> - 2012-05-01 01:56:17
|
Okay, thanks. I'll take a look. Dmitry. On Mon, Apr 30, 2012 at 8:54 PM, Ataollah Mesgarnejad < am...@ti...> wrote: > > On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: > > Ata, > example7 segfaults when you run in parallel? > > yes it does. > > Does it do it on 2 procs? > > > yes, here is the GDB stack: > > (gdb) where > #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 > #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 > #2 0x00002ba5744c2c40 in PetscSleep(double) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #3 0x00002ba57450affa in PetscAttachDebugger() () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #4 0x00002ba57450b95f in PetscAttachDebuggerErrorHandler(int, int, char > const*, char const*, char const*, int, PetscErrorType, char const*, void*) > () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #5 0x00002ba57450e7be in PetscError(int, int, char const*, char const*, > char const*, int, PetscErrorType, char const*, ...) 
() > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #6 0x00002ba57450fea7 in PetscDefaultSignalHandler(int, void*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #7 0x00002ba57450f883 in PetscSignalHandler_Private () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #8 <signal handler called> > #9 0x00002ba574b16340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, > void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #10 0x00002ba574cec431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #11 0x00002ba574ceba34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, > _p_Vec*, ---Type <return> to continue, or q <return> to quit--- > _p_Vec*, void*), void**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #12 0x00002ba574d20d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int > (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #13 0x00002ba574d289eb in SNESSetUp(_p_SNES*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #14 0x00002ba574d276b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > > #15 0x00000000010976ab in > libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x245f730, > jac_in=..., x_in=..., r_in=...) 
> at src/solvers/petsc_dm_nonlinear_solver.C:450 > #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( > this=0x245f900) at src/systems/nonlinear_implicit_system.C:178 > #17 0x0000000000426082 in Biharmonic::step (this=0x245c750, > dt=@0x7fffce4543b8) > at biharmonic.C:110 > #18 0x0000000000426373 in Biharmonic::run (this=0x245c750) at > biharmonic.C:152 > warning: Range for type (null) has invalid bounds 0..-111 > #19 0x0000000000436cf5 in main (argc=4, argv=0x7fffce4545a8) > at miscellaneous_ex7.C:64 > > Best, > Ata > > > Dmitry. > > On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < > am...@ti...> wrote: > >> Dmitry, >> >> I had a very good experience on single processor but on multiple >> processors I get segmentation error at line where we call solve (I tried >> both my code and example7) . I tried your example without -vi >> --use-petsc-dm and it works just fine. Right now I'm trying to compile a >> debug version and look into it; meanwhile I thought it would be a good idea >> if I let you know. >> >> Thanks, >> Ata >> >> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >> >> >> >> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < >> am...@ti...> wrote: >> >>> >>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>> >>> >>> >>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >>> am...@ti...> wrote: >>> >>>> Dmitry, >>>> >>>> I had time to come back and work on the SNESVI/Libmesh program. The way >>>> you setup your code is particularly hard for me to follow in my own code >>>> since it requires a significant amount of extra work (to define classes >>>> and setup the inheritances..). 
What I did instead was to try add a >>>> TransientNonlinearSystem to my EquationSystems object: >>>> >>>> TransientNonlinearImplicitSystem & fracture_system = >>>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>> >>>> and then to add the bounds and Objective and Residual functions: >>>> >>>> fracture_system.attach_init_function(init_ic_fracture); >>>> fracture_system.nonlinear_solver->jacobian= PsiFormJacobian; >>>> fracture_system.nonlinear_solver->residual= PsiFormResidual; >>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>> >>>> what I see now is that SNES solve does not apply the bounds. So here is >>>> my questions: >>>> >>>> 1- Is there inherently wrong with setting up the system specially the >>>> bounds the easy way like I did? >>>> >>> Ata, >>> Doing things this "simple" way is just fine: there is no reason to >>> follow the relatively convoluted structure I used in that >>> example. When I have time, I'll rewrite it to simplify things. >>> >>>> 2- Where is the right SNES type is chosen ? I guess this should be hard >>>> wired somewhere in NonlinearImplicitSystem! >>>> 3- When I use -snes_view (with your example or my own code ) I get >>>> snes_type ls? shouldn't this be VI ? >>>> >>> >>> What's likely missing is the command-line option --use-petsc-dm. It's >>> necessary since libMesh wants to preserve the usual behavior as >>> the default. In particular, because SNESVI and the DM hookup aren't really >>> possible prior to petsc-3.2. >>> In fact, it might not be possible without a recent petsc-dev (let me >>> verify that, once I get to my computer). >>> >>> In the meantime, could you give --use-petsc-dm a try? >>> >>> I just saw that in the example help line and used it. It works like a >>> charm. >>> >> Great! >> Dmitry. >> >>> I'll go on check for the convergence and see what I can get. >>> >>> Thanks, >>> Ata >>> >>> Thanks. >>> >>> Dmitry. 
>>> >>>> >>>> Thanks, >>>> Ata >>>> >>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>> >>>> Convergence is something we might need to work on -- I noticed some >>>> chatter in a simple bounded biharmonic heat equation (a stripped down >>>> version of Cahn-Hilliard). >>>> Thanks. >>>> Dmitry. >>>> >>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>>> am...@ti...> wrote: >>>> >>>>> Thanks Dmitry, >>>>> >>>>> It would be wonderful. I made a wrapper for SNESVI in myself for my >>>>> program but had difficulty achieving good convergence. Right now I am >>>>> occupied with other issues but I'll get back to this as soon as I get a >>>>> chance. >>>>> >>>>> Ata >>>>> >>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>> >>>>> Ata, >>>>> There is now a way to use SNESVI from libMesh. >>>>> If interested, pull a recent revision from sourceforge and take a look >>>>> at examples/miscellaneous/miscellaneous_ex7. >>>>> Let me know, if you have any questions. >>>>> >>>>> Thanks. >>>>> Dmitry. >>>>> >>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...>wrote: >>>>> >>>>>> Okay, thanks for your feedback. >>>>>> I'll keep you updated on this work. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>>> am...@ti...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev < >>>>>>> ka...@mc...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>>> am...@ti...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>>> ka...@mc...> wrote: >>>>>>>>> >>>>>>>>>> Ata, >>>>>>>>>> >>>>>>>>>> A couple of things: >>>>>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": >>>>>>>>>> the Vec x that the residual operates on is entrywise between Vecs xlow and >>>>>>>>>> xhigh. 
In order to enable >>>>>>>>>> constraints on a linear combination of the Vec entries (as you >>>>>>>>>> suggest), the problem >>>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>>> combinations. This, of course, >>>>>>>>>> changes the Jacobian, preconditioner, etc. The user provides >>>>>>>>>> only the Jacobian for the original system, so libMesh would have to >>>>>>>>>> automatically construct the Jacobian for the extended system, and >>>>>>>>>> precondition it using the user provided PC for the original subsystem. This >>>>>>>>>> is not necessarily the unreasonable thing to do in many circumstances, but >>>>>>>>>> will take more thinking than a straighforward nodal constraint. The user >>>>>>>>>> API for specifying the constraints, however, becomes rather straighforward. >>>>>>>>>> >>>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes is >>>>>>>>>> sufficient (see, for example, >>>>>>>>>> Glowinski's book: >>>>>>>>>> http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C >>>>>>>>>> ) >>>>>>>>>> You can show that if you impose the nodal constraints, in the >>>>>>>>>> limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>>> >>>>>>>>>> In this case no funny manipulation of the Jacobian or the PC is >>>>>>>>>> necessary, but the user API to specify the constraint is a bit amiguous: >>>>>>>>>> how do you constrain the node? From a containing element? Then there are >>>>>>>>>> multiple, possibly incompatible, constraints. >>>>>>>>>> >>>>>>>>> Or you can simply go through all the nodes in the system once >>>>>>>>> using node iterator. 
Here is how I think you can do it: >>>>>>>>> >>>>>>>>> ------------------------------------- >>>>>>>>> //make two PetscVector ub,lb from two variables you already added >>>>>>>>> to the system >>>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>>>> PetscVector<Number>&lb = >>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>> PetscVector<Number>&ub = >>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>> >>>>>>>>> *then in the bound routine do:* >>>>>>>>> >>>>>>>>> // Get a constant reference to the mesh object. >>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>> //Get the iterator >>>>>>>>> MeshBase::const_node_iterator nd = >>>>>>>>> mesh.local_nodes_begin(); >>>>>>>>> const MeshBase::const_node_iterator end_nd = >>>>>>>>> mesh.local_nodes_end(); >>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>> const Node* node = *nd; >>>>>>>>> //get the degree of freedom of node >>>>>>>>> unsigned int dn = node->dof_number(0,0,0); // >>>>>>>>> system 0 , var 0, component 0 >>>>>>>>> //set the ub,lb >>>>>>>>> ub(dn)= A; >>>>>>>>> lb(dn) = B; >>>>>>>>> } >>>>>>>>> //close the vectors >>>>>>>>> ub.close(); >>>>>>>>> lb.close(); >>>>>>>>> >>>>>>>> >>>>>>>> You could do that. That's probably the mode we can support almost >>>>>>>> immediately. >>>>>>>> The issue here is that you lose some of the "context" by processing >>>>>>>> each node by >>>>>>>> itself rather as part of an element. For example what if each >>>>>>>> element is a different material? >>>>>>>> I guess in that case that property should be set on the nodes as >>>>>>>> well. >>>>>>>> >>>>>>> >>>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL (e.g. >>>>>>> system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with values >>>>>>> on each node. >>>>>>> >>>>>>> Ata >>>>>>> >>>>>>> >>>>>>>> Dmitry. 
>>>>>>>> >>>>>>>>> ----------------------------------------------------------------- >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad < >>>>>>>>>> am...@ti...> wrote: >>>>>>>>>> >>>>>>>>>>> I hope I understood the whole thing correctly. >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev < >>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad < >>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> I don't think setting the variable bound on each >>>>>>>>>>>>> node separately is a good idea in general. >>>>>>>>>>>> >>>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? I >>>>>>>>>>>> don't think I'm advocating this. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> Bounds should be set as a sum on each elements degrees of >>>>>>>>>>>>> freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I'm not sure what you mean here either. A bound is usually not >>>>>>>>>>>> a sum (e.g., a bound on a concentration is 0 <= c <=1). >>>>>>>>>>>> Perhaps your application expresses the total bound as a sum? >>>>>>>>>>>> I'd be very interested in learning more about that case. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> OK here is my thought process: one optimally wants to bound the >>>>>>>>>>> variable over the whole Element domain between lb,ub. So we would have >>>>>>>>>>> something like this: >>>>>>>>>>> >>>>>>>>>>> lb \leq \sum_{i=1}^{n=NDOF} \phi_{i}(x) \hat{u}_{i} \leq ub, >>>>>>>>>>> \forall x \in El >>>>>>>>>>> (Eq-1) >>>>>>>>>>> >>>>>>>>>>> so it's not exactly that the nodal values should be bounded >>>>>>>>>>> (\hat{u}_{i}) unless it's a FD application where \phi_i(x) is delta >>>>>>>>>>> function !! That said I think given a normalized basis, bounding nodal >>>>>>>>>>> values is stricter requirement that (Eq-1). 
if that's what you meant by >>>>>>>>>>> inconsistencies between node bounds then the stronger bound would be my >>>>>>>>>>> first priority. >>>>>>>>>>> >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Of course it might run into a situations with semi-smooth VI. >>>>>>>>>>>> >>>>>>>>>>>> The type of VI solver is independent of how the bounds are >>>>>>>>>>>> specified: once we know the upper and lower bounds for each element of a >>>>>>>>>>>> Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>>> method. >>>>>>>>>>>> >>>>>>>>>>>> I really appreciate your feedback. I hope we can get to the >>>>>>>>>>>> point where things are as clear as possible so that you can >>>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>>> >>>>>>>>>>>> Dmitry. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Ata >>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm looking at >>>>>>>>>>>>>> right now is the libMesh API for specifying >>>>>>>>>>>>>> the bounds on the variables. It seems to me that the user >>>>>>>>>>>>>> would ideally want to specify the bounds on the variable >>>>>>>>>>>>>> one element at a time, since that's the default traversal >>>>>>>>>>>>>> mode for libMesh. Then, of course, many nodal values >>>>>>>>>>>>>> are traversed many time, allowing for an inconsistency. The >>>>>>>>>>>>>> question is, what you, as a user, expect in that situation? >>>>>>>>>>>>>> Should the latest bound that the user has set be applied or >>>>>>>>>>>>>> the tightest of the bounds the user has supplied? >>>>>>>>>>>>>> >>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks Dmitry, I look forward to use it. 
Also if it's >>>>>>>>>>>>>>> available as a dev version I would love to try to work with it. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>> Baton Rouge, La 70803 |
From: Dmitry K. <ka...@mc...> - 2012-05-01 13:30:03
|
Ata, I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version of petsc-dev. Can you send me the full PETSc error output? Thanks. Dmitry. On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad < am...@ti...> wrote: > > On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: > > Ata, > example7 segfaults when you run in parallel? > Does it do it on 2 procs? > > Yes and yes. > > here is the stack: > (gdb) where > #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 > #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 > #2 0x00002ae457b35c40 in PetscSleep(double) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #3 0x00002ae457b7dffa in PetscAttachDebugger() () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, char > const*, char const*, char const*, int, PetscErrorType, char const*, void*) > () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #5 0x00002ae457b817be in PetscError(int, int, char const*, char const*, > char const*, int, PetscErrorType, char const*, ...) 
() > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #7 0x00002ae457b82883 in PetscSignalHandler_Private () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #8 <signal handler called> > #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, > void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, > _p_Vec*, ---Type <return> to continue, or q <return> to quit--- > _p_Vec*, void*), void**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int > (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > > #15 0x00000000010976ab in > libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, > jac_in=..., x_in=..., r_in=...) 
> at src/solvers/petsc_dm_nonlinear_solver.C:450 > #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( > this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 > #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, > dt=@0x7fff95edca28) > at biharmonic.C:110 > #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at > biharmonic.C:152 > warning: Range for type (null) has invalid bounds 0..-111 > #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) > at miscellaneous_ex7.C:64 > > Ata > > > Dmitry. > > On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < > am...@ti...> wrote: > >> Dmitry, >> >> I had a very good experience on single processor but on multiple >> processors I get segmentation error at line where we call solve (I tried >> both my code and example7) . I tried your example without -vi >> --use-petsc-dm and it works just fine. Right now I'm trying to compile a >> debug version and look into it; meanwhile I thought it would be a good idea >> if I let you know. >> >> Thanks, >> Ata >> >> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >> >> >> >> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < >> am...@ti...> wrote: >> >>> >>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>> >>> >>> >>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >>> am...@ti...> wrote: >>> >>>> Dmitry, >>>> >>>> I had time to come back and work on the SNESVI/Libmesh program. The way >>>> you setup your code is particularly hard for me to follow in my own code >>>> since it requires a significant amount of extra work (to define classes >>>> and setup the inheritances..). 
What I did instead was to try add a >>>> TransientNonlinearSystem to my EquationSystems object: >>>> >>>> TransientNonlinearImplicitSystem & fracture_system = >>>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>> >>>> and then to add the bounds and Objective and Residual functions: >>>> >>>> fracture_system.attach_init_function(init_ic_fracture); >>>> fracture_system.nonlinear_solver->jacobian= PsiFormJacobian; >>>> fracture_system.nonlinear_solver->residual= PsiFormResidual; >>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>> >>>> what I see now is that SNES solve does not apply the bounds. So here is >>>> my questions: >>>> >>>> 1- Is there inherently wrong with setting up the system specially the >>>> bounds the easy way like I did? >>>> >>> Ata, >>> Doing things this "simple" way is just fine: there is no reason to >>> follow the relatively convoluted structure I used in that >>> example. When I have time, I'll rewrite it to simplify things. >>> >>>> 2- Where is the right SNES type is chosen ? I guess this should be hard >>>> wired somewhere in NonlinearImplicitSystem! >>>> 3- When I use -snes_view (with your example or my own code ) I get >>>> snes_type ls? shouldn't this be VI ? >>>> >>> >>> What's likely missing is the command-line option --use-petsc-dm. It's >>> necessary since libMesh wants to preserve the usual behavior as >>> the default. In particular, because SNESVI and the DM hookup aren't really >>> possible prior to petsc-3.2. >>> In fact, it might not be possible without a recent petsc-dev (let me >>> verify that, once I get to my computer). >>> >>> In the meantime, could you give --use-petsc-dm a try? >>> >>> I just saw that in the example help line and used it. It works like a >>> charm. >>> >> Great! >> Dmitry. >> >>> I'll go on check for the convergence and see what I can get. >>> >>> Thanks, >>> Ata >>> >>> Thanks. >>> >>> Dmitry. 
>>> >>>> >>>> Thanks, >>>> Ata >>>> >>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>> >>>> Convergence is something we might need to work on -- I noticed some >>>> chatter in a simple bounded biharmonic heat equation (a stripped down >>>> version of Cahn-Hilliard). >>>> Thanks. >>>> Dmitry. >>>> >>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>>> am...@ti...> wrote: >>>> >>>>> Thanks Dmitry, >>>>> >>>>> It would be wonderful. I made a wrapper for SNESVI in myself for my >>>>> program but had difficulty achieving good convergence. Right now I am >>>>> occupied with other issues but I'll get back to this as soon as I get a >>>>> chance. >>>>> >>>>> Ata >>>>> >>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>> >>>>> Ata, >>>>> There is now a way to use SNESVI from libMesh. >>>>> If interested, pull a recent revision from sourceforge and take a look >>>>> at examples/miscellaneous/miscellaneous_ex7. >>>>> Let me know, if you have any questions. >>>>> >>>>> Thanks. >>>>> Dmitry. >>>>> >>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...>wrote: >>>>> >>>>>> Okay, thanks for your feedback. >>>>>> I'll keep you updated on this work. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>>> am...@ti...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev < >>>>>>> ka...@mc...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>>> am...@ti...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>>> ka...@mc...> wrote: >>>>>>>>> >>>>>>>>>> Ata, >>>>>>>>>> >>>>>>>>>> A couple of things: >>>>>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": >>>>>>>>>> the Vec x that the residual operates on is entrywise between Vecs xlow and >>>>>>>>>> xhigh. 
In order to enable >>>>>>>>>> constraints on a linear combination of the Vec entries (as you >>>>>>>>>> suggest), the problem >>>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>>> combinations. This, of course, >>>>>>>>>> changes the Jacobian, preconditioner, etc. The user provides >>>>>>>>>> only the Jacobian for the original system, so libMesh would have to >>>>>>>>>> automatically construct the Jacobian for the extended system, and >>>>>>>>>> precondition it using the user provided PC for the original subsystem. This >>>>>>>>>> is not necessarily the unreasonable thing to do in many circumstances, but >>>>>>>>>> will take more thinking than a straighforward nodal constraint. The user >>>>>>>>>> API for specifying the constraints, however, becomes rather straighforward. >>>>>>>>>> >>>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes is >>>>>>>>>> sufficient (see, for example, >>>>>>>>>> Glowinski's book: >>>>>>>>>> http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C >>>>>>>>>> ) >>>>>>>>>> You can show that if you impose the nodal constraints, in the >>>>>>>>>> limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>>> >>>>>>>>>> In this case no funny manipulation of the Jacobian or the PC is >>>>>>>>>> necessary, but the user API to specify the constraint is a bit amiguous: >>>>>>>>>> how do you constrain the node? From a containing element? Then there are >>>>>>>>>> multiple, possibly incompatible, constraints. >>>>>>>>>> >>>>>>>>> Or you can simply go through all the nodes in the system once >>>>>>>>> using node iterator. 
Here is how I think you can do it: >>>>>>>>> >>>>>>>>> ------------------------------------- >>>>>>>>> //make two PetscVector ub,lb from two variables you already added >>>>>>>>> to the system >>>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>>>> PetscVector<Number>&lb = >>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>> PetscVector<Number>&ub = >>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>> >>>>>>>>> *then in the bound routine do:* >>>>>>>>> >>>>>>>>> // Get a constant reference to the mesh object. >>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>> //Get the iterator >>>>>>>>> MeshBase::const_node_iterator nd = >>>>>>>>> mesh.local_nodes_begin(); >>>>>>>>> const MeshBase::const_node_iterator end_nd = >>>>>>>>> mesh.local_nodes_end(); >>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>> const Node* node = *nd; >>>>>>>>> //get the degree of freedom of node >>>>>>>>> unsigned int dn = node->dof_number(0,0,0); // >>>>>>>>> system 0 , var 0, component 0 >>>>>>>>> //set the ub,lb >>>>>>>>> ub(dn)= A; >>>>>>>>> lb(dn) = B; >>>>>>>>> } >>>>>>>>> //close the vectors >>>>>>>>> ub.close(); >>>>>>>>> lb.close(); >>>>>>>>> >>>>>>>> >>>>>>>> You could do that. That's probably the mode we can support almost >>>>>>>> immediately. >>>>>>>> The issue here is that you lose some of the "context" by processing >>>>>>>> each node by >>>>>>>> itself rather as part of an element. For example what if each >>>>>>>> element is a different material? >>>>>>>> I guess in that case that property should be set on the nodes as >>>>>>>> well. >>>>>>>> >>>>>>> >>>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL (e.g. >>>>>>> system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with values >>>>>>> on each node. >>>>>>> >>>>>>> Ata >>>>>>> >>>>>>> >>>>>>>> Dmitry. 
|
From: Dmitry K. <ka...@mc...> - 2012-05-01 13:49:56
|
As an fyi: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev: http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for petsc_dm_nonlinear_solver.C to account for the latest API changes (attached). I am working on a bigger patch that adds more functionality to DM_libMesh and streamlines miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the meantime you should be able to run with the small fix I've attached. Let me know how it works. Dmitry. On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti... > wrote: > Dmitry, > > I believe that the error is indeed due to petsc-dev version. I'm going to > check this during the next few hours and get back to you. > > Thanks, > Ata
>>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev < >>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> I don't think setting the variable bound on each >>>>>>>>>>>>>>> node separately is a good idea in general. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? I >>>>>>>>>>>>>> don't think I'm advocating this. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> Bounds should be set as a sum on each elements degrees of >>>>>>>>>>>>>>> freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> I'm not sure what you mean here either. A bound is usually >>>>>>>>>>>>>> not a sum (e.g., a bound on a concentration is 0 <= c <=1). >>>>>>>>>>>>>> Perhaps your application expresses the total bound as a sum? >>>>>>>>>>>>>> I'd be very interested in learning more about that case. >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> OK here is my thought process: one optimally wants to bound >>>>>>>>>>>>> the variable over the whole Element domain between lb,ub. So we would have >>>>>>>>>>>>> something like this: >>>>>>>>>>>>> >>>>>>>>>>>>> lb \leq \sum_{i=1}^{n=NDOF} \phi_{i}(x) \hat{u}_{i} \leq ub, >>>>>>>>>>>>> \forall x \in El >>>>>>>>>>>>> (Eq-1) >>>>>>>>>>>>> >>>>>>>>>>>>> so it's not exactly that the nodal values should be bounded >>>>>>>>>>>>> (\hat{u}_{i}) unless it's a FD application where \phi_i(x) is delta >>>>>>>>>>>>> function !! That said I think given a normalized basis, bounding nodal >>>>>>>>>>>>> values is stricter requirement that (Eq-1). if that's what you meant by >>>>>>>>>>>>> inconsistencies between node bounds then the stronger bound would be my >>>>>>>>>>>>> first priority. 
>>>>>>>>>>>>> >>>>>>>>>>>>> Ata >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Of course it might run into a situations with semi-smooth VI. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The type of VI solver is independent of how the bounds are >>>>>>>>>>>>>> specified: once we know the upper and lower bounds for each element of a >>>>>>>>>>>>>> Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>>>>> method. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I really appreciate your feedback. I hope we can get to the >>>>>>>>>>>>>> point where things are as clear as possible so that you can >>>>>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm looking >>>>>>>>>>>>>>>> at right now is the libMesh API for specifying >>>>>>>>>>>>>>>> the bounds on the variables. It seems to me that the user >>>>>>>>>>>>>>>> would ideally want to specify the bounds on the variable >>>>>>>>>>>>>>>> one element at a time, since that's the default traversal >>>>>>>>>>>>>>>> mode for libMesh. Then, of course, many nodal values >>>>>>>>>>>>>>>> are traversed many time, allowing for an inconsistency. >>>>>>>>>>>>>>>> The question is, what you, as a user, expect in that situation? >>>>>>>>>>>>>>>> Should the latest bound that the user has set be applied >>>>>>>>>>>>>>>> or the tightest of the bounds the user has supplied? >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks Dmitry, I look forward to use it. Also if it's >>>>>>>>>>>>>>>>> available as a dev version I would love to try to work with it. 
>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> We (ANL) are working on an patch to libMesh to enable >>>>>>>>>>>>>>>>>> this functionality. >>>>>>>>>>>>>>>>>> The motivation is to use it through MOOSE, but it will be >>>>>>>>>>>>>>>>>> exposed as a libMesh API. >>>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Dear all, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I wonder if there is compatible versionIs it possible to >>>>>>>>>>>>>>>>>>> use PETScNonlinearSolver class with SNESVI (PETSc/3.2) >>>>>>>>>>>>>>>>>>> to solve a >>>>>>>>>>>>>>>>>>> variational inequality problem? >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>> Ata Mesgarnejad >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>> Cloud Computing - Latest Buzzword or a Glimpse of the >>>>>>>>>>>>>>>>>>> Future? >>>>>>>>>>>>>>>>>>> This paper surveys cloud computing today: What are the >>>>>>>>>>>>>>>>>>> benefits? >>>>>>>>>>>>>>>>>>> Why are businesses embracing it? What are its payoffs >>>>>>>>>>>>>>>>>>> and pitfalls? >>>>>>>>>>>>>>>>>>> http://www.accelacomm.com/jaw/sdnl/114/51425149/ >>>>>>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>>>>>> Libmesh-users mailing list >>>>>>>>>>>>>>>>>>> Lib...@li... >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>> A. 
Mesgarnejad >>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>> Louisiana State University >>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> A. Mesgarnejad >>>>>>>>> PhD Student, Research Assistant >>>>>>>>> Mechanical Engineering Department >>>>>>>>> Louisiana State University >>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>> Baton Rouge, La 70803 >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> > > > -- > A. Mesgarnejad > PhD Student, Research Assistant > Mechanical Engineering Department > Louisiana State University > 2203 Patrick F. Taylor Hall > Baton Rouge, La 70803 > |
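A note on the nodal-bounds argument above (Eq-1): for shape functions that are nonnegative and sum to one on each element -- P1 hat functions are the canonical case -- the interpolated value at any point is a convex combination of the nodal coefficients, so enforcing lb <= u_i <= ub at the nodes enforces the bound pointwise, not just in the mesh-refinement limit. A minimal standalone sketch of that fact (plain C++, no libMesh; the function names are illustrative only):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Evaluate the P1 interpolant u(x) = sum_i phi_i(x) u_i on a uniform mesh of
// [0,1] with u.size() nodes, where phi_i are piecewise-linear hat functions.
double p1_interpolant(const std::vector<double>& u, double x)
{
    const int n = static_cast<int>(u.size()) - 1;     // number of elements
    const double h = 1.0 / n;
    const int e = std::min(static_cast<int>(x / h), n - 1); // containing element
    const double t = (x - e * h) / h;                 // local coordinate in [0,1]
    return (1.0 - t) * u[e] + t * u[e + 1];           // convex combination
}

// Sample u(x) at 'samples'+1 points and check lb <= u(x) <= ub everywhere.
// Because the hats are nonnegative and sum to one, this holds whenever the
// nodal values themselves satisfy lb <= u_i <= ub.
bool nodal_bounds_imply_pointwise(const std::vector<double>& u,
                                  double lb, double ub, int samples)
{
    for (int k = 0; k <= samples; ++k)
    {
        const double x = static_cast<double>(k) / samples;
        const double ux = p1_interpolant(u, x);
        if (ux < lb - 1e-12 || ux > ub + 1e-12)
            return false;
    }
    return true;
}
```

For bases whose shape functions can go negative (higher-order or Hermite-type elements), the convex-combination argument no longer applies, which is part of why the bounds-specification API question above is subtle.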
From: Roy S. <roy...@ic...> - 2012-05-01 14:14:15
|
On Tue, 1 May 2012, Dmitry Karpeev wrote: > As an fyi: I just built and ran miscellaneous_ex7 in parallel with the > latest petsc-dev: > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca > If you decide you want to upgrade to this (or later) petsc-dev, you'd need > a small patch for > petsc_dm_nonlinear_solver.C to account for the latest API > changes (attached). The patch didn't make it through the mailing list for me; would you send me another copy via direct email? Thanks, --- Roy |
From: Dmitry K. <ka...@mc...> - 2012-05-01 14:19:32
|
Sorry about that. Here it is. Dmitry. On Tue, May 1, 2012 at 9:14 AM, Roy Stogner <roy...@ic...> wrote: > > On Tue, 1 May 2012, Dmitry Karpeev wrote: > > As an fyi: I just built and ran miscellaneous_ex7 in parallel with the >> latest petsc-dev: >> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >> If you decide you want to upgrade to this (or later) petsc-dev, you'd need >> a small patch for >> petsc_dm_nonlinear_solver.C to account for the latest API >> changes (attached). >> > > The patch didn't make it through the mailing list for me; would you > send me another copy via direct email? > > Thanks, > --- > Roy > |
From: Roy S. <roy...@ic...> - 2012-05-01 14:29:13
|
On Tue, 1 May 2012, Dmitry Karpeev wrote: > Sorry about that. Here it is. Dmitry. Probably not your fault; IIRC we configured the listserv to be pretty paranoid about filtering attachments to discourage spam and viruses. Did the old version of the API make it into a PETSc release, or did it get changed before making it out of PETSc-dev? It'd be nice if the latter were true; we've got more than enough PETSC_VERSION_LESS_THAN tests scattered throughout the code... --- Roy |
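Roy's PETSC_VERSION_LESS_THAN remark refers to the compile-time guards libMesh carries so one source tree can build against several PETSc releases. A rough illustration of that pattern (the version macros below are mocks defined in the snippet itself, not the real petscversion.h ones, which also take a subminor argument):

```cpp
#include <cassert>
#include <cstring>

// Mock version macros standing in for what a header like petscversion.h
// would normally provide.
#define MY_VERSION_MAJOR 3
#define MY_VERSION_MINOR 3
#define MY_VERSION_LESS_THAN(major, minor)        \
    (MY_VERSION_MAJOR < (major) ||                \
     (MY_VERSION_MAJOR == (major) && MY_VERSION_MINOR < (minor)))

// Select the right API spelling at compile time; real guards would wrap an
// actual changed call instead of returning a label.
const char* type_compare_call()
{
#if MY_VERSION_LESS_THAN(3, 3)
    return "old-API call";
#else
    return "new-API call";
#endif
}
```

Each API change absorbed this way adds another `#if` branch to maintain, which is exactly why Roy would prefer the change not to ship in a release at all.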
From: Dmitry K. <ka...@mc...> - 2012-05-01 14:37:55
|
There hasn't been a release yet (I assume you mean petsc-3.3?). We expect it will happen shortly, though, so this new API will be in petsc-3.3. This is more of an internal change -- users rarely call PetscObjectTypeCompare(). libMesh, of course, is not a user per se. Dmitry. On Tue, May 1, 2012 at 9:29 AM, Roy Stogner <roy...@ic...> wrote: > > On Tue, 1 May 2012, Dmitry Karpeev wrote: > > Sorry about that. Here it is. Dmitry. >> > > Probably not your fault; IIRC we configured the listserv to be pretty > paranoid about filtering attachments to discourage spam and viruses. > > Did the old version of the API make it into a PETSc release, or did it > get changed before making it out of PETSc-dev? It'd be nice if the > latter were true; we've got more than enough PETSC_VERSION_LESS_THAN > tests scattered throughout the code... > --- > Roy |
From: Ataollah M. <am...@ti...> - 2012-05-01 23:21:16
|
Dmitry, Is there a way to keep SNES from recalculating the Jacobian at every iteration (my Jacobian doesn't change)? I used to use a flag to check if it's assembled, but when I do that with petsc-dm, the KSP associated with SNES gives DIVERGED_BREAKDOWN on the second iteration! It looks like petsc-dm doesn't make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up. Here is how it looks -with the flag: applying variable bounds ... done NL step 0, |residual|_2 = 8.90674e-04 SNES Jacobian assembly... done Linear solve converged due to CONVERGED_RTOL iterations 6 NL step 1, |residual|_2 = 2.90987e-05 Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 this happens for every SNES solve. ----------------------- -without it: applying variable bounds ... done NL step 0, |residual|_2 = 6.08030e-02 SNES hessian assembly... done Linear solve converged due to CONVERGED_RTOL iterations 6 NL step 1, |residual|_2 = 2.61557e-07 SNES hessian assembly... done Linear solve converged due to CONVERGED_RTOL iterations 6 NL step 2, |residual|_2 = 1.35354e-12 Fracture system solved at nonlinear iteration 2, final nonlinear residual norm: 1.35354e-12 ps: these are not from the same time step but you get the idea. Best, Ata On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: > As an fyi: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev: > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca > If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for > petsc_dm_nonlinear_solver.C to account for the latest API changes (attached). > I am working on a bigger patch that adds more functionality to DM_libMesh and streamlines > miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the > meantime you should be able to run with the small fix I've attached. > > Let me know how it works. > Dmitry.
> > > On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti...> wrote: > Dmitry, > > I believe that the error is indeed due to petsc-dev version. I'm going to check this during the next few hours and get back to you. > > Thanks, > Ata > > > On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote: > Ata, > I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version > of petsc-dev. Can you send me the full PETSc error output? > > Thanks. > Dmitry. > > > On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad <am...@ti...> wrote: > > On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: > >> Ata, >> example7 segfaults when you run in parallel? >> Does it do it on 2 procs? > Yes and yes. > > here is the stack: > (gdb) where > #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 > #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 > #2 0x00002ae457b35c40 in PetscSleep(double) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #3 0x00002ae457b7dffa in PetscAttachDebugger() () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, void*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #5 0x00002ae457b817be in PetscError(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, ...) 
() > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #7 0x00002ae457b82883 in PetscSignalHandler_Private () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #8 <signal handler called> > #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to quit--- > _p_Vec*, void*), void**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () > from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so > > #15 0x00000000010976ab in libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, jac_in=..., x_in=..., r_in=...) 
> at src/solvers/petsc_dm_nonlinear_solver.C:450 > #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( > this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 > #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, dt=@0x7fff95edca28) > at biharmonic.C:110 > #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at biharmonic.C:152 > warning: Range for type (null) has invalid bounds 0..-111 > #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) > at miscellaneous_ex7.C:64 > > Ata > >> >> Dmitry. >> >> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >> Dmitry, >> >> I had a very good experience on single processor but on multiple processors I get segmentation error at line where we call solve (I tried both my code and example7) . I tried your example without -vi --use-petsc-dm and it works just fine. Right now I'm trying to compile a debug version and look into it; meanwhile I thought it would be a good idea if I let you know. >> >> Thanks, >> Ata >> >> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >> >>> >>> >>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>> >>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>> >>>> >>>> >>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>> Dmitry, >>>> >>>> I had time to come back and work on the SNESVI/Libmesh program. The way you setup your code is particularly hard for me to follow in my own code since it requires a significant amount of extra work (to define classes and setup the inheritances..). 
What I did instead was to try add a TransientNonlinearSystem to my EquationSystems object: >>>> >>>> TransientNonlinearImplicitSystem & fracture_system = equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>> >>>> and then to add the bounds and Objective and Residual functions: >>>> >>>> fracture_system.attach_init_function(init_ic_fracture); >>>> fracture_system.nonlinear_solver->jacobian= PsiFormJacobian; >>>> fracture_system.nonlinear_solver->residual= PsiFormResidual; >>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>> >>>> what I see now is that SNES solve does not apply the bounds. So here is my questions: >>>> >>>> 1- Is there inherently wrong with setting up the system specially the bounds the easy way like I did? >>>> Ata, >>>> Doing things this "simple" way is just fine: there is no reason to follow the relatively convoluted structure I used in that >>>> example. When I have time, I'll rewrite it to simplify things. >>>> 2- Where is the right SNES type is chosen ? I guess this should be hard wired somewhere in NonlinearImplicitSystem! >>>> 3- When I use -snes_view (with your example or my own code ) I get snes_type ls? shouldn't this be VI ? >>>> >>>> What's likely missing is the command-line option --use-petsc-dm. It's necessary since libMesh wants to preserve the usual behavior as the default. In particular, because SNESVI and the DM hookup aren't really possible prior to petsc-3.2. >>>> In fact, it might not be possible without a recent petsc-dev (let me verify that, once I get to my computer). >>>> >>>> In the meantime, could you give --use-petsc-dm a try? >>>> >>> >>> I just saw that in the example help line and used it. It works like a charm. >>> Great! >>> Dmitry. >>> I'll go on check for the convergence and see what I can get. >>> >>> Thanks, >>> Ata >>>> Thanks. >>>> >>>> Dmitry. 
>>>> >>>> Thanks, >>>> Ata >>>> >>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>> >>>>> Convergence is something we might need to work on -- I noticed some chatter in a simple bounded biharmonic heat equation (a stripped down version of Cahn-Hilliard). >>>>> Thanks. >>>>> Dmitry. >>>>> >>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>> Thanks Dmitry, >>>>> >>>>> It would be wonderful. I made a wrapper for SNESVI in myself for my program but had difficulty achieving good convergence. Right now I am occupied with other issues but I'll get back to this as soon as I get a chance. >>>>> >>>>> Ata >>>>> >>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>> >>>>>> Ata, >>>>>> There is now a way to use SNESVI from libMesh. >>>>>> If interested, pull a recent revision from sourceforge and take a look at examples/miscellaneous/miscellaneous_ex7. >>>>>> Let me know, if you have any questions. >>>>>> >>>>>> Thanks. >>>>>> Dmitry. >>>>>> >>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> Okay, thanks for your feedback. >>>>>> I'll keep you updated on this work. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> Ata, >>>>>> >>>>>> A couple of things: >>>>>> 1. SNESVI solves a VI under what's known as "box constraints": the Vec x that the residual operates on is entrywise between Vecs xlow and xhigh. In order to enable >>>>>> constraints on a linear combination of the Vec entries (as you suggest), the problem >>>>>> has to be augmented with extra variables -- these linear combinations. 
This, of course, >>>>>> changes the Jacobian, preconditioner, etc. The user provides only the Jacobian for the original system, so libMesh would have to automatically construct the Jacobian for the extended system, and precondition it using the user provided PC for the original subsystem. This is not necessarily the unreasonable thing to do in many circumstances, but will take more thinking than a straighforward nodal constraint. The user API for specifying the constraints, however, becomes rather straighforward. >>>>>> >>>>>> 2. In terms of FEM convergence, constraining at the nodes is sufficient (see, for example, >>>>>> Glowinski's book: http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C) >>>>>> You can show that if you impose the nodal constraints, in the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>> >>>>>> In this case no funny manipulation of the Jacobian or the PC is necessary, but the user API to specify the constraint is a bit amiguous: how do you constrain the node? From a containing element? Then there are multiple, possibly incompatible, constraints. >>>>>> Or you can simply go through all the nodes in the system once using node iterator. Here is how I think you can do it: >>>>>> >>>>>> ------------------------------------- >>>>>> //make two PetscVector ub,lb from two variables you already added to the system >>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>> PetscVector<Number>&lb = libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>> PetscVector<Number>&ub = libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>> >>>>>> then in the bound routine do: >>>>>> >>>>>> // Get a constant reference to the mesh object. 
>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>> //Get the iterator >>>>>> MeshBase::const_node_iterator nd = mesh.local_nodes_begin(); >>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end(); >>>>>> for (; nd != end_nd; nd++){ >>>>>> const Node* node = *nd; >>>>>> //get the degree of freedom of node >>>>>> unsigned int dn = node->dof_number(0,0,0); // system 0 , var 0, component 0 >>>>>> //set the ub,lb >>>>>> ub(dn)= A; >>>>>> lb(dn) = B; >>>>>> } >>>>>> //close the vectors >>>>>> ub.close(); >>>>>> lb.close(); >>>>>> >>>>>> You could do that. That's probably the mode we can support almost immediately. >>>>>> The issue here is that you lose some of the "context" by processing each node by >>>>>> itself rather as part of an element. For example what if each element is a different material? >>>>>> I guess in that case that property should be set on the nodes as well. >>>>>> >>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with values on each node. >>>>>> >>>>>> Ata >>>>>> >>>>>> Dmitry. >>>>>> ----------------------------------------------------------------- >>>>>> Ata >>>>>> >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> I hope I understood the whole thing correctly. >>>>>> >>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> I don't think setting the variable bound on each node separately is a good idea in general. >>>>>> I'm not sure what you mean by this. Could you clarify? I don't think I'm advocating this. >>>>>> >>>>>> Bounds should be set as a sum on each elements degrees of freedom; this way there should not be any nodal inconsistencies. >>>>>> I'm not sure what you mean here either. 
A bound is usually not a sum (e.g., a bound on a concentration is 0 <= c <=1). >>>>>> Perhaps your application expresses the total bound as a sum? I'd be very interested in learning more about that case. >>>>>> >>>>>> OK here is my thought process: one optimally wants to bound the variable over the whole Element domain between lb,ub. So we would have something like this: >>>>>> >>>>>> lb \leq \sum_{i=1}^{n=NDOF} \phi_{i}(x) \hat{u}_{i} \leq ub, \forall x \in El (Eq-1) >>>>>> >>>>>> so it's not exactly that the nodal values should be bounded (\hat{u}_{i}) unless it's a FD application where \phi_i(x) is delta function !! That said I think given a normalized basis, bounding nodal values is stricter requirement that (Eq-1). if that's what you meant by inconsistencies between node bounds then the stronger bound would be my first priority. >>>>>> >>>>>> Ata >>>>>> >>>>>> >>>>>> Of course it might run into a situations with semi-smooth VI. >>>>>> The type of VI solver is independent of how the bounds are specified: once we know the upper and lower bounds for each element of a Vec, we can solve the VI with either a semismooth method or an active set method. >>>>>> >>>>>> I really appreciate your feedback. I hope we can get to the point where things are as clear as possible so that you can >>>>>> profitably use the new capabilities. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> Thanks, >>>>>> Ata >>>>>> >>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> Are you familiar with SNESVI? The main thing I'm looking at right now is the libMesh API for specifying >>>>>> the bounds on the variables. It seems to me that the user would ideally want to specify the bounds on the variable >>>>>> one element at a time, since that's the default traversal mode for libMesh. Then, of course, many nodal values >>>>>> are traversed many time, allowing for an inconsistency. The question is, what you, as a user, expect in that situation? 
>>>>>> Should the latest bound that the user has set be applied or the tightest of the bounds the user has supplied? >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> Thanks Dmitry, I look forward to use it. Also if it's available as a dev version I would love to try to work with it. >>>>>> >>>>>> Ata >>>>>> >>>>>> >>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> We (ANL) are working on an patch to libMesh to enable this functionality. >>>>>> The motivation is to use it through MOOSE, but it will be exposed as a libMesh API. >>>>>> Dmitry. >>>>>> >>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> Dear all, >>>>>> >>>>>> I wonder if there is compatible versionIs it possible to >>>>>> use PETScNonlinearSolver class with SNESVI (PETSc/3.2) to solve a >>>>>> variational inequality problem? >>>>>> >>>>>> Thanks, >>>>>> Ata Mesgarnejad >>>>>> ------------------------------------------------------------------------------ >>>>>> Cloud Computing - Latest Buzzword or a Glimpse of the Future? >>>>>> This paper surveys cloud computing today: What are the benefits? >>>>>> Why are businesses embracing it? What are its payoffs and pitfalls? >>>>>> http://www.accelacomm.com/jaw/sdnl/114/51425149/ >>>>>> _______________________________________________ >>>>>> Libmesh-users mailing list >>>>>> Lib...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> A. Mesgarnejad >>>>>> PhD Student, Research Assistant >>>>>> Mechanical Engineering Department >>>>>> Louisiana State University >>>>>> 2203 Patrick F. Taylor Hall >>>>>> Baton Rouge, La 70803 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> A. Mesgarnejad >>>>>> PhD Student, Research Assistant >>>>>> Mechanical Engineering Department >>>>>> Louisiana State University >>>>>> 2203 Patrick F. 
Taylor Hall >>>>>> Baton Rouge, La 70803 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> A. Mesgarnejad >>>>>> PhD Student, Research Assistant >>>>>> Mechanical Engineering Department >>>>>> Louisiana State University >>>>>> 2203 Patrick F. Taylor Hall >>>>>> Baton Rouge, La 70803 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> A. Mesgarnejad >>>>>> PhD Student, Research Assistant >>>>>> Mechanical Engineering Department >>>>>> Louisiana State University >>>>>> 2203 Patrick F. Taylor Hall >>>>>> Baton Rouge, La 70803 >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > > > > > -- > A. Mesgarnejad > PhD Student, Research Assistant > Mechanical Engineering Department > Louisiana State University > 2203 Patrick F. Taylor Hall > Baton Rouge, La 70803 > > <petsc_dm.patch> |
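On Ata's question about skipping Jacobian re-assembly: freezing the Jacobian turns Newton's method into the chord method, which trades quadratic for linear convergence -- typically more nonlinear iterations, but each one much cheaper. A scalar standalone sketch of that trade-off (plain C++, not PETSc or libMesh; function names are ours):

```cpp
#include <cassert>
#include <cmath>

// Model problem: f(x) = x^3 - 2, f'(x) = 3x^2, root at cbrt(2).
static double f(double x)  { return x * x * x - 2.0; }
static double fp(double x) { return 3.0 * x * x; }

// Newton iteration. If lag_jacobian is true, the derivative (our stand-in for
// the assembled Jacobian) is evaluated once at x0 and then frozen -- the
// "chord" method. Otherwise it is reassembled every step.
static double newton_core(double x0, double tol, bool lag_jacobian, int* its_out)
{
    double x = x0;
    double J = fp(x0);                  // assembled once, reused if lagging
    int its = 0;
    while (std::fabs(f(x)) > tol && its < 200)
    {
        if (!lag_jacobian)
            J = fp(x);                  // fresh (here 1x1) Jacobian each step
        x -= f(x) / J;
        ++its;
    }
    if (its_out) *its_out = its;
    return x;
}

double newton_root(double x0, double tol, bool lag_jacobian)
{ return newton_core(x0, tol, lag_jacobian, nullptr); }

int newton_iterations(double x0, double tol, bool lag_jacobian)
{ int its = 0; newton_core(x0, tol, lag_jacobian, &its); return its; }
```

In PETSc itself the supported route is Jacobian lagging via SNESSetLagJacobian() / the -snes_lag_jacobian option rather than a hand-rolled flag; that should also sidestep the matrix-sharing problem described above, though whether it interacts cleanly with the DM code path here is worth verifying with -snes_view.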
From: Dmitry K. <ka...@mc...> - 2012-05-02 03:16:44
|
Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs and returns the matrix from the underlying NonlinearImplicitSystem. The reason is that initially the matrix is preallocated but unassembled, and MatDuplicate doesn't want to duplicate an unassembled matrix. I can fix that, but I'm a bit puzzled by the fact that KSP messes with the matrix. Can you run with -snes_view and send me the output? Incidentally, there _was_ a reference counting bug that led to a breakdown very much like what you reported. I'm attaching that patch and a small improvement to DMVariableBounds. I need to figure out how to go about creating matrices from DM_libMesh.

Dmitry.

On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti...> wrote:
> Dmitry,
>
> Is there a way so SNES doesn't recalculate the Jacobian at every iteration (my Jacobian doesn't change)? I used to use a flag to check if it's assembled, but when I do that with petsc-dm, the KSP associated with SNES gives DIVERGED_BREAKDOWN on the second iteration! It looks like petsc-dm doesn't make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up.
>
> Here is how it looks
>
> -with the flag:
>
> applying variable bounds ... done
> NL step 0, |residual|_2 = 8.90674e-04
> SNES Jacobian assembly... done
> Linear solve converged due to CONVERGED_RTOL iterations 6
> NL step 1, |residual|_2 = 2.90987e-05
> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1
>
> this happens for every SNES solve.
> -----------------------
>
> -without it:
>
> applying variable bounds ... done
> NL step 0, |residual|_2 = 6.08030e-02
> SNES hessian assembly... done
> Linear solve converged due to CONVERGED_RTOL iterations 6
> NL step 1, |residual|_2 = 2.61557e-07
> SNES hessian assembly... done
> Linear solve converged due to CONVERGED_RTOL iterations 6
> NL step 2, |residual|_2 = 1.35354e-12
> Fracture system solved at nonlinear iteration 2, final nonlinear residual norm: 1.35354e-12
>
> ps: these are not from the same time step but you get the idea.
>
> Best,
> Ata
>
> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote:
>
> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev:
> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca
> If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for petsc_dm_nonlinear_solver.C to account for the latest API changes (attached).
> I am working on a bigger patch that adds more functionality to DM_libMesh and streamlines miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the meantime you should be able to run with the small fix I've attached.
>
> Let me know how it works.
> Dmitry.
>
> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti...> wrote:
>
>> Dmitry,
>>
>> I believe that the error is indeed due to the petsc-dev version. I'm going to check this during the next few hours and get back to you.
>>
>> Thanks,
>> Ata
>>
>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote:
>>
>>> Ata,
>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version of petsc-dev. Can you send me the full PETSc error output?
>>>
>>> Thanks.
>>> Dmitry.
>>>
>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad <am...@ti...> wrote:
>>>>
>>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote:
>>>>
>>>> Ata,
>>>> example7 segfaults when you run in parallel?
>>>> Does it do it on 2 procs?
>>>>
>>>> Yes and yes. 
>>>> >>>> here is the stack: >>>> (gdb) where >>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 >>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, >>>> char const*, char const*, char const*, int, PetscErrorType, char const*, >>>> void*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char >>>> const*, char const*, int, PetscErrorType, char const*, ...) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #8 <signal handler called> >>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, >>>> _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), >>>> void*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, >>>> _p_Vec*, ---Type <return> to continue, or q <return> to quit--- >>>> _p_Vec*, void*), void**) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int >>>> (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>> from 
/share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> >>>> #15 0x00000000010976ab in >>>> libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, >>>> jac_in=..., x_in=..., r_in=...) >>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, >>>> dt=@0x7fff95edca28) >>>> at biharmonic.C:110 >>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at >>>> biharmonic.C:152 >>>> warning: Range for type (null) has invalid bounds 0..-111 >>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>> at miscellaneous_ex7.C:64 >>>> >>>> Ata >>>> >>>> >>>> Dmitry. >>>> >>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < >>>> am...@ti...> wrote: >>>> >>>>> Dmitry, >>>>> >>>>> I had a very good experience on single processor but on multiple >>>>> processors I get segmentation error at line where we call solve (I tried >>>>> both my code and example7) . I tried your example without -vi >>>>> --use-petsc-dm and it works just fine. Right now I'm trying to compile a >>>>> debug version and look into it; meanwhile I thought it would be a good idea >>>>> if I let you know. 
>>>>> >>>>> Thanks, >>>>> Ata >>>>> >>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>> >>>>> >>>>> >>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < >>>>> am...@ti...> wrote: >>>>> >>>>>> >>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >>>>>> am...@ti...> wrote: >>>>>> >>>>>>> Dmitry, >>>>>>> >>>>>>> I had time to come back and work on the SNESVI/Libmesh program. The >>>>>>> way you setup your code is particularly hard for me to follow in my own >>>>>>> code since it requires a significant amount of extra work (to define >>>>>>> classes and setup the inheritances..). What I did instead was to try add a >>>>>>> TransientNonlinearSystem to my EquationSystems object: >>>>>>> >>>>>>> TransientNonlinearImplicitSystem & fracture_system = >>>>>>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>> >>>>>>> and then to add the bounds and Objective and Residual functions: >>>>>>> >>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>> fracture_system.nonlinear_solver->jacobian= PsiFormJacobian; >>>>>>> fracture_system.nonlinear_solver->residual= PsiFormResidual; >>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>> >>>>>>> what I see now is that SNES solve does not apply the bounds. So here >>>>>>> is my questions: >>>>>>> >>>>>>> 1- Is there inherently wrong with setting up the system specially >>>>>>> the bounds the easy way like I did? >>>>>>> >>>>>> Ata, >>>>>> Doing things this "simple" way is just fine: there is no reason to >>>>>> follow the relatively convoluted structure I used in that >>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>> >>>>>>> 2- Where is the right SNES type is chosen ? I guess this should be >>>>>>> hard wired somewhere in NonlinearImplicitSystem! 
>>>>>>> 3- When I use -snes_view (with your example or my own code ) I get >>>>>>> snes_type ls? shouldn't this be VI ? >>>>>>> >>>>>> >>>>>> What's likely missing is the command-line option --use-petsc-dm. >>>>>> It's necessary since libMesh wants to preserve the usual behavior as >>>>>> the default. In particular, because SNESVI and the DM hookup aren't really >>>>>> possible prior to petsc-3.2. >>>>>> In fact, it might not be possible without a recent petsc-dev (let me >>>>>> verify that, once I get to my computer). >>>>>> >>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>> >>>>>> I just saw that in the example help line and used it. It works like a >>>>>> charm. >>>>>> >>>>> Great! >>>>> Dmitry. >>>>> >>>>>> I'll go on check for the convergence and see what I can get. >>>>>> >>>>>> Thanks, >>>>>> Ata >>>>>> >>>>>> Thanks. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Ata >>>>>>> >>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>> >>>>>>> Convergence is something we might need to work on -- I noticed some >>>>>>> chatter in a simple bounded biharmonic heat equation (a stripped down >>>>>>> version of Cahn-Hilliard). >>>>>>> Thanks. >>>>>>> Dmitry. >>>>>>> >>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>>>>>> am...@ti...> wrote: >>>>>>> >>>>>>>> Thanks Dmitry, >>>>>>>> >>>>>>>> It would be wonderful. I made a wrapper for SNESVI in myself for my >>>>>>>> program but had difficulty achieving good convergence. Right now I am >>>>>>>> occupied with other issues but I'll get back to this as soon as I get a >>>>>>>> chance. >>>>>>>> >>>>>>>> Ata >>>>>>>> >>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>> >>>>>>>> Ata, >>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>> If interested, pull a recent revision from sourceforge and take a >>>>>>>> look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>> Let me know, if you have any questions. >>>>>>>> >>>>>>>> Thanks. 
>>>>>>>> Dmitry. >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev < >>>>>>>> ka...@mc...> wrote: >>>>>>>> >>>>>>>>> Okay, thanks for your feedback. >>>>>>>>> I'll keep you updated on this work. >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>>>>>> am...@ti...> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev < >>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Ata, >>>>>>>>>>>>> >>>>>>>>>>>>> A couple of things: >>>>>>>>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": >>>>>>>>>>>>> the Vec x that the residual operates on is entrywise between Vecs xlow and >>>>>>>>>>>>> xhigh. In order to enable >>>>>>>>>>>>> constraints on a linear combination of the Vec entries (as you >>>>>>>>>>>>> suggest), the problem >>>>>>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>>>>>> combinations. This, of course, >>>>>>>>>>>>> changes the Jacobian, preconditioner, etc. The user provides >>>>>>>>>>>>> only the Jacobian for the original system, so libMesh would have to >>>>>>>>>>>>> automatically construct the Jacobian for the extended system, and >>>>>>>>>>>>> precondition it using the user provided PC for the original subsystem. This >>>>>>>>>>>>> is not necessarily the unreasonable thing to do in many circumstances, but >>>>>>>>>>>>> will take more thinking than a straighforward nodal constraint. The user >>>>>>>>>>>>> API for specifying the constraints, however, becomes rather straighforward. >>>>>>>>>>>>> >>>>>>>>>>>>> 2. 
In terms of FEM convergence, constraining at the nodes is >>>>>>>>>>>>> sufficient (see, for example, Glowinski's book: http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C )
>>>>>>>>>>>>> You can show that if you impose the nodal constraints, in the limit of mesh size going to 0 the inequality is satisfied a.e.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In this case no funny manipulation of the Jacobian or the PC is necessary, but the user API to specify the constraint is a bit ambiguous: how do you constrain the node? From a containing element? Then there are multiple, possibly incompatible, constraints.
>>>>>>>>>>>>>
>>>>>>>>>>>> Or you can simply go through all the nodes in the system once using a node iterator. Here is how I think you can do it:
>>>>>>>>>>>>
>>>>>>>>>>>> -------------------------------------
>>>>>>>>>>>> // make two PetscVectors ub, lb from two variables you already added to the system
>>>>>>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub");
>>>>>>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb");
>>>>>>>>>>>> PetscVector<Number> &ub = *libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys);
>>>>>>>>>>>> PetscVector<Number> &lb = *libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys);
>>>>>>>>>>>>
>>>>>>>>>>>> *then in the bound routine do:*
>>>>>>>>>>>>
>>>>>>>>>>>> // Get a constant reference to the mesh object.
>>>>>>>>>>>> const MeshBase& mesh = es.get_mesh();
>>>>>>>>>>>> // Get the iterators
>>>>>>>>>>>> MeshBase::const_node_iterator nd = mesh.local_nodes_begin();
>>>>>>>>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end();
>>>>>>>>>>>> for (; nd != end_nd; nd++){
>>>>>>>>>>>>   const Node* node = *nd;
>>>>>>>>>>>>   // get the degree of freedom of the node
>>>>>>>>>>>>   unsigned int dn = node->dof_number(0,0,0); // system 0, var 0, component 0
>>>>>>>>>>>>   // set the ub, lb
>>>>>>>>>>>>   ub(dn) = A;
>>>>>>>>>>>>   lb(dn) = B;
>>>>>>>>>>>> }
>>>>>>>>>>>> // close the vectors
>>>>>>>>>>>> ub.close();
>>>>>>>>>>>> lb.close();
>>>>>>>>>>>>
>>>>>>>>>>> You could do that. That's probably the mode we can support almost immediately.
>>>>>>>>>>> The issue here is that you lose some of the "context" by processing each node by itself rather than as part of an element. For example, what if each element is a different material?
>>>>>>>>>>> I guess in that case that property should be set on the nodes as well.
>>>>>>>>>>>
>>>>>>>>>> Exactly, one can always pass a mat prop as a Constant MONOMIAL (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with values on each node.
>>>>>>>>>>
>>>>>>>>>> Ata
>>>>>>>>>>
>>>>>>>>>>> Dmitry.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> -----------------------------------------------------------------
>>>>>>>>>>>> Ata
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Dmitry.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad <am...@ti...> wrote:
>>>>>>>>>>>>>> I hope I understood the whole thing correctly. 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev <ka...@mc...> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad <am...@ti...> wrote:
>>>>>>>>>>>>>>>> I don't think setting the variable bound on each node separately is a good idea in general.
>>>>>>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? I don't think I'm advocating this.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Bounds should be set as a sum on each element's degrees of freedom; this way there should not be any nodal inconsistencies.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'm not sure what you mean here either. A bound is usually not a sum (e.g., a bound on a concentration is 0 <= c <= 1). Perhaps your application expresses the total bound as a sum? I'd be very interested in learning more about that case.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> OK, here is my thought process: one optimally wants to bound the variable over the whole element domain between lb and ub. So we would have something like this:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_{i}(x) \hat{u}_{i} \leq ub, \forall x \in El     (Eq-1)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> so it's not exactly the nodal values (\hat{u}_{i}) that should be bounded, unless it's an FD application where \phi_{i}(x) is a delta function! That said, I think that for a normalized basis, bounding the nodal values is a stricter requirement than (Eq-1). If that's what you meant by inconsistencies between node bounds, then the stronger bound would be my first priority. 
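A small numeric check of the point made above: for a basis that is nonnegative and sums to one on the element (e.g. linear hat functions), the interpolant \sum \phi_i(x) \hat{u}_i is a convex combination of the nodal values, so enforcing lb <= \hat{u}_i <= ub at the nodes already enforces (Eq-1) at every x — the nodal bound is the stronger requirement. A stand-alone sketch (P1 on the reference interval; no libMesh types):

```cpp
#include <cassert>

// P1 shape functions on the reference element [0,1]:
// phi_0(x) = 1 - x, phi_1(x) = x; both nonnegative, and phi_0 + phi_1 = 1.
inline double p1_interpolant(double u0, double u1, double x) {
  return (1.0 - x) * u0 + x * u1;
}

// Sample (Eq-1) on the element: is lb <= sum_i phi_i(x) u_i <= ub everywhere?
inline bool elementwise_bounded(double u0, double u1,
                                double lb, double ub, int nsample = 101) {
  for (int k = 0; k < nsample; ++k) {
    const double x = static_cast<double>(k) / (nsample - 1);
    const double u = p1_interpolant(u0, u1, x);
    if (u < lb || u > ub)
      return false;
  }
  return true;
}
```

For bases with negative lobes (e.g. higher-order Lagrange or hierarchic shape functions), the implication fails, which is exactly why the element-wise statement (Eq-1) and the nodal constraints have to be distinguished.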
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Ata >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Of course it might run into a situations with semi-smooth >>>>>>>>>>>>>>>> VI. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The type of VI solver is independent of how the bounds are >>>>>>>>>>>>>>> specified: once we know the upper and lower bounds for each element of a >>>>>>>>>>>>>>> Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>>>>>> method. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I really appreciate your feedback. I hope we can get to the >>>>>>>>>>>>>>> point where things are as clear as possible so that you can >>>>>>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm looking >>>>>>>>>>>>>>>>> at right now is the libMesh API for specifying >>>>>>>>>>>>>>>>> the bounds on the variables. It seems to me that the user >>>>>>>>>>>>>>>>> would ideally want to specify the bounds on the variable >>>>>>>>>>>>>>>>> one element at a time, since that's the default traversal >>>>>>>>>>>>>>>>> mode for libMesh. Then, of course, many nodal values >>>>>>>>>>>>>>>>> are traversed many time, allowing for an inconsistency. >>>>>>>>>>>>>>>>> The question is, what you, as a user, expect in that situation? >>>>>>>>>>>>>>>>> Should the latest bound that the user has set be applied >>>>>>>>>>>>>>>>> or the tightest of the bounds the user has supplied? >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Thanks Dmitry, I look forward to use it. 
Also if it's >>>>>>>>>>>>>>>>>> available as a dev version I would love to try to work with it. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> We (ANL) are working on an patch to libMesh to enable >>>>>>>>>>>>>>>>>>> this functionality. >>>>>>>>>>>>>>>>>>> The motivation is to use it through MOOSE, but it will >>>>>>>>>>>>>>>>>>> be exposed as a libMesh API. >>>>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Dear all, >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> I wonder if there is compatible versionIs it possible to >>>>>>>>>>>>>>>>>>>> use PETScNonlinearSolver class with SNESVI (PETSc/3.2) >>>>>>>>>>>>>>>>>>>> to solve a >>>>>>>>>>>>>>>>>>>> variational inequality problem? >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>> Ata Mesgarnejad >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>> Cloud Computing - Latest Buzzword or a Glimpse of the >>>>>>>>>>>>>>>>>>>> Future? >>>>>>>>>>>>>>>>>>>> This paper surveys cloud computing today: What are the >>>>>>>>>>>>>>>>>>>> benefits? >>>>>>>>>>>>>>>>>>>> Why are businesses embracing it? What are its payoffs >>>>>>>>>>>>>>>>>>>> and pitfalls? >>>>>>>>>>>>>>>>>>>> http://www.accelacomm.com/jaw/sdnl/114/51425149/ >>>>>>>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>>>>>>> Libmesh-users mailing list >>>>>>>>>>>>>>>>>>>> Lib...@li... 
>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>> Louisiana State University >>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> A. Mesgarnejad >>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>> Mechanical Engineering Department >>>>>>>>>> Louisiana State University >>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >> >> >> -- >> A. Mesgarnejad >> PhD Student, Research Assistant >> Mechanical Engineering Department >> Louisiana State University >> 2203 Patrick F. Taylor Hall >> Baton Rouge, La 70803 >> > > <petsc_dm.patch> > > > |
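The "jumbled up" matrix symptom discussed in this exchange follows from the incref behavior Dmitry describes: if DMCreateMatrix hands back a reference-counted handle to the *same* Mat, any holder that modifies it in place (for instance a factorization scribbling on the operator) modifies it for everyone, whereas a true duplicate is isolated. A toy C++ analogy using shared_ptr — purely illustrative, not PETSc code:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy stand-in for a matrix; none of this is PETSc API.
using Matrix = std::vector<double>;

// "Incref" behavior: hand back the same object, bumping the reference count.
inline std::shared_ptr<Matrix> incref(const std::shared_ptr<Matrix> &m) {
  return m; // no copy -- every holder sees every in-place modification
}

// "Duplicate" behavior: an independent deep copy.
inline std::shared_ptr<Matrix> duplicate(const std::shared_ptr<Matrix> &m) {
  return std::make_shared<Matrix>(*m);
}
```

With the incref policy, a solver component that reuses the handle across iterations must either re-assemble the matrix each time or be guaranteed that nobody mutated it in between; the duplicate policy buys isolation at the cost of a copy.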
From: Ataollah M. <am...@ti...> - 2012-05-02 13:21:27
|
On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote:

> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs and returns the matrix from the underlying NonlinearImplicitSystem. The reason is that initially the matrix is preallocated but unassembled, and MatDuplicate doesn't want to duplicate an unassembled matrix. I can fix that, but I'm a bit puzzled by the fact that KSP messes with the matrix. Can you run with -snes_view and send me the output?

Dmitry,

Here it is:

------------- with the flag -----------------------

Linear solve converged due to CONVERGED_RTOL iterations 44
Calculating surface energy... done
Calculating fracture energy... done
elastic energy:13.74054, fracture energy:-0.00000, total:13.74054
applying variable bounds ... done
NL step 0, |residual|_2 = 2.41046e-01
SNES Jacobian assembly... done
Linear solve converged due to CONVERGED_RTOL iterations 3
NL step 1, |residual|_2 = 5.10144e-07
Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1
SNES Object: 1 MPI processes
  type: virs
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-35, solution=1e-08
  total number of linear solver iterations=3
  total number of function evaluations=2
  KSP Object: 1 MPI processes
    type: gmres
      GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      using diagonal shift to prevent zero pivot
      matrix ordering: natural
      factor fill ratio given 1, needed 1
        Factored matrix follows:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=153, cols=153
            package used to perform factorization: petsc
            total: nonzeros=2401, allocated nonzeros=2401
            total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    linear system matrix = precond matrix:
    Matrix Object: 1 MPI processes
      type: seqaij
      rows=153, cols=153
      total: nonzeros=2401, allocated nonzeros=2401
      total number of mallocs used during MatSetValues calls =0
      not using I-node routines
  SNESLineSearch Object: 1 MPI processes
    type: basic
    maxstep=1e+08, minlambda=1e-12
    tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08
    maximum iterations=1
Fracture system solved at nonlinear iteration 1, final nonlinear residual norm: 5.10144e-07

--- without the flag ------

applying variable bounds ... done
NL step 0, |residual|_2 = 2.41046e-01
SNES Jacobian assembly... done
Linear solve converged due to CONVERGED_RTOL iterations 3
NL step 1, |residual|_2 = 5.10144e-07
SNES Jacobian assembly... done
Linear solve converged due to CONVERGED_RTOL iterations 3
NL step 2, |residual|_2 = 1.61153e-12
SNES Object: 1 MPI processes
  type: virs
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-35, solution=1e-08
  total number of linear solver iterations=6
  total number of function evaluations=3
  KSP Object: 1 MPI processes
    type: gmres
      GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      using diagonal shift to prevent zero pivot
      matrix ordering: natural
      factor fill ratio given 1, needed 1
        Factored matrix follows:
          Matrix Object: 1 MPI processes
            type: seqaij
            rows=153, cols=153
            package used to perform factorization: petsc
            total: nonzeros=2401, allocated nonzeros=2401
            total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    linear system matrix = precond matrix:
    Matrix Object: 1 MPI processes
      type: seqaij
      rows=153, cols=153
      total: nonzeros=2401, allocated nonzeros=2401
      total number of mallocs used during MatSetValues calls =0
      not using I-node routines
  SNESLineSearch Object: 1 MPI processes
    type: basic
    maxstep=1e+08, minlambda=1e-12
    tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08
    maximum iterations=1
Fracture system solved at nonlinear iteration 2, final nonlinear residual norm: 1.61153e-12

> Incidentally, there _was_ a reference counting bug that led to a breakdown very much like what you reported.
> I'm attaching that patch and a small improvement to DMVariableBounds. I need to figure out how to go about creating matrices from DM_libMesh.

Let me know if you found anything or if I can be of any help.

Thanks,
Ata

> Dmitry.
>
> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti...> wrote:
> Dmitry,
>
> Is there a way so SNES doesn't recalculate the Jacobian at every iteration (my Jacobian doesn't change)? I used to use a flag to check if it's assembled, but when I do that with petsc-dm, the KSP associated with SNES gives DIVERGED_BREAKDOWN on the second iteration! It looks like petsc-dm doesn't make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up.
>
> Here is how it looks
>
> -with the flag:
>
> applying variable bounds ... done
> NL step 0, |residual|_2 = 8.90674e-04
> SNES Jacobian assembly... done
> Linear solve converged due to CONVERGED_RTOL iterations 6
> NL step 1, |residual|_2 = 2.90987e-05
> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1
>
> this happens for every SNES solve.
> -----------------------
>
> -without it:
>
> applying variable bounds ... done
> NL step 0, |residual|_2 = 6.08030e-02
> SNES hessian assembly... done
> Linear solve converged due to CONVERGED_RTOL iterations 6
> NL step 1, |residual|_2 = 2.61557e-07
> SNES hessian assembly... 
done > Linear solve converged due to CONVERGED_RTOL iterations 6 > NL step 2, |residual|_2 = 1.35354e-12 > Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.35354e-12 > > ps: these are not from the same time step but you get the idea. > > Best, > Ata > > On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: > >> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev: >> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >> If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for >> petsc_dm_nonlinear_solver.C to account for the latest API changes (attached). >> I am working on a bigger patch that adds more functionality to DM_libMesh and streamline >> miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the >> meantime you should be able to run with the small fix I've attached. >> >> Let me know how it works. >> Dmitry. >> >> >> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >> Dmitry, >> >> I believe that the error is indeed due to petsc-dev version. I'm going to check this during the next few hours and get back to you. >> >> Thanks, >> Ata >> >> >> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote: >> Ata, >> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version >> of petsc-dev. Can you send me the full PETSc error output? >> >> Thanks. >> Dmitry. >> >> >> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >> >> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >> >>> Ata, >>> example7 segfaults when you run in parallel? >>> Does it do it on 2 procs? >> Yes and yes. 
>> >> here is the stack: >> (gdb) where >> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 >> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >> #2 0x00002ae457b35c40 in PetscSleep(double) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, void*) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #5 0x00002ae457b817be in PetscError(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, ...) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #8 <signal handler called> >> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to quit--- >> _p_Vec*, void*), void**) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >> from 
/share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >> >> #15 0x00000000010976ab in libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, jac_in=..., x_in=..., r_in=...) >> at src/solvers/petsc_dm_nonlinear_solver.C:450 >> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, dt=@0x7fff95edca28) >> at biharmonic.C:110 >> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at biharmonic.C:152 >> warning: Range for type (null) has invalid bounds 0..-111 >> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >> at miscellaneous_ex7.C:64 >> >> Ata >> >>> >>> Dmitry. >>> >>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>> Dmitry, >>> >>> I had a very good experience on single processor but on multiple processors I get segmentation error at line where we call solve (I tried both my code and example7) . I tried your example without -vi --use-petsc-dm and it works just fine. Right now I'm trying to compile a debug version and look into it; meanwhile I thought it would be a good idea if I let you know. >>> >>> Thanks, >>> Ata >>> >>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>> >>>> >>>> >>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>> >>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>> >>>>> >>>>> >>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>> Dmitry, >>>>> >>>>> I had time to come back and work on the SNESVI/Libmesh program. The way you setup your code is particularly hard for me to follow in my own code since it requires a significant amount of extra work (to define classes and setup the inheritances..). 
What I did instead was to try to add a TransientNonlinearSystem to my EquationSystems object: >>>>> >>>>> TransientNonlinearImplicitSystem & fracture_system = equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>> >>>>> and then to add the bounds and Objective and Residual functions: >>>>> >>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>> fracture_system.nonlinear_solver->jacobian = PsiFormJacobian; >>>>> fracture_system.nonlinear_solver->residual = PsiFormResidual; >>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>> >>>>> what I see now is that the SNES solve does not apply the bounds. So here are my questions: >>>>> >>>>> 1- Is there anything inherently wrong with setting up the system, especially the bounds, the easy way like I did? >>>>> Ata, >>>>> Doing things this "simple" way is just fine: there is no reason to follow the relatively convoluted structure I used in that >>>>> example. When I have time, I'll rewrite it to simplify things. >>>>> 2- Where is the right SNES type chosen? I guess this should be hard-wired somewhere in NonlinearImplicitSystem! >>>>> 3- When I use -snes_view (with your example or my own code) I get snes_type ls; shouldn't this be VI? >>>>> >>>>> What's likely missing is the command-line option --use-petsc-dm. It's necessary since libMesh wants to preserve the usual behavior as the default, in particular because SNESVI and the DM hookup aren't really possible prior to petsc-3.2. >>>>> In fact, it might not be possible without a recent petsc-dev (let me verify that, once I get to my computer). >>>>> >>>>> In the meantime, could you give --use-petsc-dm a try? >>>>> >>>> >>>> I just saw that in the example help line and used it. It works like a charm. >>>> Great! >>>> Dmitry. >>>> I'll go on to check the convergence and see what I can get. >>>> >>>> Thanks, >>>> Ata >>>>> Thanks. >>>>> >>>>> Dmitry.
>>>>> >>>>> Thanks, >>>>> Ata >>>>> >>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>> >>>>>> Convergence is something we might need to work on -- I noticed some chatter in a simple bounded biharmonic heat equation (a stripped-down version of Cahn-Hilliard). >>>>>> Thanks. >>>>>> Dmitry. >>>>>> >>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> Thanks Dmitry, >>>>>> >>>>>> It would be wonderful. I made a wrapper for SNESVI myself for my program but had difficulty achieving good convergence. Right now I am occupied with other issues but I'll get back to this as soon as I get a chance. >>>>>> >>>>>> Ata >>>>>> >>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>> >>>>>>> Ata, >>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>> If interested, pull a recent revision from sourceforge and take a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>> Let me know if you have any questions. >>>>>>> >>>>>>> Thanks. >>>>>>> Dmitry. >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>> Okay, thanks for your feedback. >>>>>>> I'll keep you updated on this work. >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>> Ata, >>>>>>> >>>>>>> A couple of things: >>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": the Vec x that the residual operates on is entrywise between Vecs xlow and xhigh. In order to enable >>>>>>> constraints on a linear combination of the Vec entries (as you suggest), the problem >>>>>>> has to be augmented with extra variables -- these linear combinations.
This, of course, >>>>>>> changes the Jacobian, preconditioner, etc. The user provides only the Jacobian for the original system, so libMesh would have to automatically construct the Jacobian for the extended system, and precondition it using the user-provided PC for the original subsystem. This is not necessarily an unreasonable thing to do in many circumstances, but will take more thinking than a straightforward nodal constraint. The user API for specifying the constraints, however, becomes rather straightforward. >>>>>>> >>>>>>> 2. In terms of FEM convergence, constraining at the nodes is sufficient (see, for example, >>>>>>> Glowinski's book: http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C) >>>>>>> You can show that if you impose the nodal constraints, in the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>> >>>>>>> In this case no funny manipulation of the Jacobian or the PC is necessary, but the user API to specify the constraint is a bit ambiguous: how do you constrain the node? From a containing element? Then there are multiple, possibly incompatible, constraints. >>>>>>> Or you can simply go through all the nodes in the system once using a node iterator. Here is how I think you can do it: >>>>>>> >>>>>>> ------------------------------------- >>>>>>> // make two PetscVectors ub, lb from two vectors you already added to the system >>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>> PetscVector<Number> &ub = *libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>> PetscVector<Number> &lb = *libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>> >>>>>>> then in the bound routine do: >>>>>>> >>>>>>> // Get a constant reference to the mesh object. >>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>> // Get the iterators >>>>>>> MeshBase::const_node_iterator nd = mesh.local_nodes_begin(); >>>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end(); >>>>>>> for (; nd != end_nd; ++nd) { >>>>>>> const Node* node = *nd; >>>>>>> // get the degree of freedom of the node >>>>>>> unsigned int dn = node->dof_number(0,0,0); // system 0, var 0, component 0 >>>>>>> // set the upper and lower bounds >>>>>>> ub(dn) = A; >>>>>>> lb(dn) = B; >>>>>>> } >>>>>>> // close the vectors >>>>>>> ub.close(); >>>>>>> lb.close(); >>>>>>> >>>>>>> You could do that. That's probably the mode we can support almost immediately. >>>>>>> The issue here is that you lose some of the "context" by processing each node by >>>>>>> itself rather than as part of an element. For example, what if each element is a different material? >>>>>>> I guess in that case that property should be set on the nodes as well. >>>>>>> >>>>>>> Exactly, one can always pass a material property as a CONSTANT MONOMIAL (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with values on each node. >>>>>>> >>>>>>> Ata >>>>>>> >>>>>>> Dmitry. >>>>>>> ----------------------------------------------------------------- >>>>>>> Ata >>>>>>> >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> I hope I understood the whole thing correctly. >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> I don't think setting the variable bound on each node separately is a good idea in general. >>>>>>> I'm not sure what you mean by this. Could you clarify? I don't think I'm advocating this. >>>>>>> >>>>>>> Bounds should be set as a sum on each element's degrees of freedom; this way there should not be any nodal inconsistencies. >>>>>>> I'm not sure what you mean here either.
A bound is usually not a sum (e.g., a bound on a concentration is 0 <= c <= 1). >>>>>>> Perhaps your application expresses the total bound as a sum? I'd be very interested in learning more about that case. >>>>>>> >>>>>>> OK, here is my thought process: one optimally wants to bound the variable over the whole element domain between lb and ub. So we would have something like this: >>>>>>> >>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_{i}(x) \hat{u}_{i} \leq ub, \forall x \in El (Eq-1) >>>>>>> >>>>>>> so it's not exactly the nodal values (\hat{u}_{i}) that should be bounded, unless it's an FD application where \phi_i(x) is a delta function! That said, I think that given a normalized basis, bounding the nodal values is a stricter requirement than (Eq-1). If that's what you meant by inconsistencies between node bounds, then the stronger bound would be my first priority. >>>>>>> >>>>>>> Ata >>>>>>> >>>>>>> >>>>>>> Of course it might run into situations with a semi-smooth VI. >>>>>>> The type of VI solver is independent of how the bounds are specified: once we know the upper and lower bounds for each entry of a Vec, we can solve the VI with either a semismooth method or an active set method. >>>>>>> >>>>>>> I really appreciate your feedback. I hope we can get to the point where things are as clear as possible so that you can >>>>>>> profitably use the new capabilities. >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> Thanks, >>>>>>> Ata >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>> Are you familiar with SNESVI? The main thing I'm looking at right now is the libMesh API for specifying >>>>>>> the bounds on the variables. It seems to me that the user would ideally want to specify the bounds on the variable >>>>>>> one element at a time, since that's the default traversal mode for libMesh. Then, of course, many nodal values >>>>>>> are traversed many times, allowing for an inconsistency. The question is, what do you, as a user, expect in that situation?
>>>>>>> Should the latest bound that the user has set be applied, or the tightest of the bounds the user has supplied? >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> Thanks Dmitry, I look forward to using it. Also, if it's available as a dev version I would love to try to work with it. >>>>>>> >>>>>>> Ata >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>> We (ANL) are working on a patch to libMesh to enable this functionality. >>>>>>> The motivation is to use it through MOOSE, but it will be exposed as a libMesh API. >>>>>>> Dmitry. >>>>>>> >>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> Dear all, >>>>>>> >>>>>>> Is it possible to >>>>>>> use the PETScNonlinearSolver class with SNESVI (PETSc/3.2) to solve a >>>>>>> variational inequality problem? >>>>>>> >>>>>>> Thanks, >>>>>>> Ata Mesgarnejad >>>>>>> _______________________________________________ >>>>>>> Libmesh-users mailing list >>>>>>> Lib...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> A. Mesgarnejad >>>>>>> PhD Student, Research Assistant >>>>>>> Mechanical Engineering Department >>>>>>> Louisiana State University >>>>>>> 2203 Patrick F. Taylor Hall >>>>>>> Baton Rouge, La 70803 >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> A.
Mesgarnejad >>>>>>> PhD Student, Research Assistant >>>>>>> Mechanical Engineering Department >>>>>>> Louisiana State University >>>>>>> 2203 Patrick F. Taylor Hall >>>>>>> Baton Rouge, La 70803 >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> A. Mesgarnejad >>>>>>> PhD Student, Research Assistant >>>>>>> Mechanical Engineering Department >>>>>>> Louisiana State University >>>>>>> 2203 Patrick F. Taylor Hall >>>>>>> Baton Rouge, La 70803 >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> A. Mesgarnejad >>>>>>> PhD Student, Research Assistant >>>>>>> Mechanical Engineering Department >>>>>>> Louisiana State University >>>>>>> 2203 Patrick F. Taylor Hall >>>>>>> Baton Rouge, La 70803 >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> >> >> >> >> -- >> A. Mesgarnejad >> PhD Student, Research Assistant >> Mechanical Engineering Department >> Louisiana State University >> 2203 Patrick F. Taylor Hall >> Baton Rouge, La 70803 >> >> <petsc_dm.patch> > > > <0001-PETSc-DM-incref-bug-fix.patch><0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> |
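An aside on the Jacobian-reuse question in the message above: instead of a hand-rolled "already assembled" flag inside the Jacobian callback, PETSc can lag the Jacobian itself, so the same assembled matrix is handed to the KSP on subsequent Newton steps. This is only a sketch based on the standard PETSc option and its programmatic counterpart SNESSetLagJacobian; the exact meaning of negative lag values may differ between the petsc-dev snapshots discussed in this thread, so check the SNESSetLagJacobian manual page for your version, and the invocation below (binary name and path) is hypothetical, not taken from the thread.

```
# Rebuild the Jacobian only every other Newton step while keeping the
# petsc-dm code path; SNESSetLagJacobian(snes, 2) is the programmatic form.
./miscellaneous_ex7 --use-petsc-dm -snes_lag_jacobian 2 -snes_view
```

Whether lagging also avoids the DIVERGED_BREAKDOWN seen with the user-side flag would need testing; it at least keeps the reuse decision inside SNES rather than in user code.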
From: Ataollah M. <am...@ti...> - 2012-05-05 17:09:03
|
Dmitry, I just tried it without bounds and I get the same DIVERGED_BREAKDOWN error. Ata > Ata, > I'm trying to reproduce the problem that you are observing, so I wanted to understand better what you are doing there. > Do you just skip Jacobian evaluation and return from the ComputeJacobian call immediately? > Also, without --use-petsc-dm, how are you able to run with VIRS (I remember you mentioned your own wrapper to SNESVI)? Finally, do you see this problem when running with SNES (not SNESVI)? I wonder if the issue is in the interaction with the bounds. > > Thanks for your patience. > Dmitry. > > On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad <am...@ti...> wrote: > > On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: > >> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs and returns the matrix from the underlying NonlinearImplicitSystem. The reason is that initially the matrix is preallocated but unassembled, and MatDuplicate >> doesn't want to duplicate an unassembled matrix. I can fix that, but I'm a bit puzzled by the fact that KSP messes >> with the matrix. Can you run with -snes_view and send me the output? >> > > Dmitry, > > Here it is: > > ------------- with the flag ----------------------- > > Linear solve converged due to CONVERGED_RTOL iterations 44 > Calculating surface energy... done > Calculating fracture energy... done > elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 > applying variable bounds ... done > NL step 0, |residual|_2 = 2.41046e-01 > SNES Jacobian assembly...
done > Linear solve converged due to CONVERGED_RTOL iterations 3 > NL step 1, |residual|_2 = 5.10144e-07 > Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 > SNES Object: 1 MPI processes > type: virs > maximum iterations=50, maximum function evaluations=10000 > tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 > total number of linear solver iterations=3 > total number of function evaluations=2 > KSP Object: 1 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: ilu > ILU: out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > using diagonal shift to prevent zero pivot > matrix ordering: natural > factor fill ratio given 1, needed 1 > Factored matrix follows: > Matrix Object: 1 MPI processes > type: seqaij > rows=153, cols=153 > package used to perform factorization: petsc > total: nonzeros=2401, allocated nonzeros=2401 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=153, cols=153 > total: nonzeros=2401, allocated nonzeros=2401 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > SNESLineSearch Object: 1 MPI processes > type: basic > maxstep=1e+08, minlambda=1e-12 > tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 > maximum iterations=1 > Fracture system solved at nonlinear iteration 1 , final nonlinear residual norm: 5.10144e-07 > > --- without the flag ------ > > applying variable bounds ... done > NL step 0, |residual|_2 = 2.41046e-01 > SNES Jacobian assembly... 
done > Linear solve converged due to CONVERGED_RTOL iterations 3 > NL step 1, |residual|_2 = 5.10144e-07 > SNES Jacobian assembly... done > Linear solve converged due to CONVERGED_RTOL iterations 3 > NL step 2, |residual|_2 = 1.61153e-12 > SNES Object: 1 MPI processes > type: virs > maximum iterations=50, maximum function evaluations=10000 > tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 > total number of linear solver iterations=6 > total number of function evaluations=3 > KSP Object: 1 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: ilu > ILU: out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > using diagonal shift to prevent zero pivot > matrix ordering: natural > factor fill ratio given 1, needed 1 > Factored matrix follows: > Matrix Object: 1 MPI processes > type: seqaij > rows=153, cols=153 > package used to perform factorization: petsc > total: nonzeros=2401, allocated nonzeros=2401 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=153, cols=153 > total: nonzeros=2401, allocated nonzeros=2401 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > SNESLineSearch Object: 1 MPI processes > type: basic > maxstep=1e+08, minlambda=1e-12 > tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 > maximum iterations=1 > Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.61153e-12 > > > >> Incidentally, there _was_ a reference counting bug that led to a breakdown very 
much like what you reported. >> I'm attaching that patch and a small improvement to DMVariableBounds. I need to figure out how to go about creating matrices from DM_libMesh. >> > > Let me know if you find anything or if I can be of any help. > > Thanks, > Ata > >> Dmitry. >> >> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >> Dmitry, >> >> Is there a way so that SNES doesn't recalculate the Jacobian for every iteration (my Jacobian doesn't change)? I used to use a flag to check if it's assembled, but when I do that with petsc-dm, the KSP associated with SNES gives DIVERGED_BREAKDOWN for the second iteration! It looks like petsc-dm doesn't make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up. >> >> Here is how it looks: >> >> -with the flag: >> >> applying variable bounds ... done >> NL step 0, |residual|_2 = 8.90674e-04 >> SNES Jacobian assembly... done >> Linear solve converged due to CONVERGED_RTOL iterations 6 >> NL step 1, |residual|_2 = 2.90987e-05 >> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >> >> This happens for every SNES solve. >> ----------------------- >> >> -without it: >> >> applying variable bounds ... done >> NL step 0, |residual|_2 = 6.08030e-02 >> SNES hessian assembly... done >> Linear solve converged due to CONVERGED_RTOL iterations 6 >> NL step 1, |residual|_2 = 2.61557e-07 >> SNES hessian assembly... done >> Linear solve converged due to CONVERGED_RTOL iterations 6 >> NL step 2, |residual|_2 = 1.35354e-12 >> Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.35354e-12 >> >> ps: these are not from the same time step but you get the idea.
>> >> Best, >> Ata >> >> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >> >>> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev: >>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>> If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for >>> petsc_dm_nonlinear_solver.C to account for the latest API changes (attached). >>> I am working on a bigger patch that adds more functionality to DM_libMesh and streamlines >>> miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the >>> meantime you should be able to run with the small fix I've attached. >>> >>> Let me know how it works. >>> Dmitry. >>> >>> >>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>> Dmitry, >>> >>> I believe that the error is indeed due to the petsc-dev version. I'm going to check this during the next few hours and get back to you. >>> >>> Thanks, >>> Ata >>> >>> >>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote: >>> Ata, >>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version >>> of petsc-dev. Can you send me the full PETSc error output? >>> >>> Thanks. >>> Dmitry. >>> >>> >>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>> >>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>> >>>> Ata, >>>> example7 segfaults when you run in parallel? >>>> Does it do it on 2 procs? >>> Yes and yes.
>>> >>> here is the stack: >>> (gdb) where >>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 >>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, void*) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, ...) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #8 <signal handler called> >>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to quit--- >>> _p_Vec*, void*), void**) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #13 0x00002ae45839b9eb in 
SNESSetUp(_p_SNES*) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>> >>> #15 0x00000000010976ab in libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, jac_in=..., x_in=..., r_in=...) >>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, dt=@0x7fff95edca28) >>> at biharmonic.C:110 >>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at biharmonic.C:152 >>> warning: Range for type (null) has invalid bounds 0..-111 >>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>> at miscellaneous_ex7.C:64 >>> >>> Ata >>> >>>> >>>> Dmitry. >>>> >>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>> Dmitry, >>>> >>>> I had a very good experience on single processor but on multiple processors I get segmentation error at line where we call solve (I tried both my code and example7) . I tried your example without -vi --use-petsc-dm and it works just fine. Right now I'm trying to compile a debug version and look into it; meanwhile I thought it would be a good idea if I let you know. >>>> >>>> Thanks, >>>> Ata >>>> >>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>> >>>>> >>>>> >>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>> >>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>> >>>>>> >>>>>> >>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> Dmitry, >>>>>> >>>>>> I had time to come back and work on the SNESVI/Libmesh program. 
The way you set up your code is particularly hard for me to follow in my own code since it requires a significant amount of extra work (to define classes and set up the inheritances...). What I did instead was to try to add a TransientNonlinearSystem to my EquationSystems object: >>>>>> >>>>>> TransientNonlinearImplicitSystem & fracture_system = equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>> >>>>>> and then to add the bounds and Objective and Residual functions: >>>>>> >>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>> fracture_system.nonlinear_solver->jacobian = PsiFormJacobian; >>>>>> fracture_system.nonlinear_solver->residual = PsiFormResidual; >>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>> >>>>>> what I see now is that the SNES solve does not apply the bounds. So here are my questions: >>>>>> >>>>>> 1- Is there anything inherently wrong with setting up the system, especially the bounds, the easy way like I did? >>>>>> Ata, >>>>>> Doing things this "simple" way is just fine: there is no reason to follow the relatively convoluted structure I used in that >>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>> 2- Where is the right SNES type chosen? I guess this should be hard-wired somewhere in NonlinearImplicitSystem! >>>>>> 3- When I use -snes_view (with your example or my own code) I get snes_type ls; shouldn't this be VI? >>>>>> >>>>>> What's likely missing is the command-line option --use-petsc-dm. It's necessary since libMesh wants to preserve the usual behavior as the default, in particular because SNESVI and the DM hookup aren't really possible prior to petsc-3.2. >>>>>> In fact, it might not be possible without a recent petsc-dev (let me verify that, once I get to my computer). >>>>>> >>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>> >>>>> >>>>> I just saw that in the example help line and used it. It works like a charm. >>>>> Great! >>>>> Dmitry.
>>>>> I'll go on to check the convergence and see what I can get. >>>>> >>>>> Thanks, >>>>> Ata >>>>>> Thanks. >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> Thanks, >>>>>> Ata >>>>>> >>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>> >>>>>>> Convergence is something we might need to work on -- I noticed some chatter in a simple bounded biharmonic heat equation (a stripped-down version of Cahn-Hilliard). >>>>>>> Thanks. >>>>>>> Dmitry. >>>>>>> >>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> Thanks Dmitry, >>>>>>> >>>>>>> It would be wonderful. I made a wrapper for SNESVI myself for my program but had difficulty achieving good convergence. Right now I am occupied with other issues but I'll get back to this as soon as I get a chance. >>>>>>> >>>>>>> Ata >>>>>>> >>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>> >>>>>>>> Ata, >>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>> If interested, pull a recent revision from sourceforge and take a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>> Let me know if you have any questions. >>>>>>>> >>>>>>>> Thanks. >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>> Okay, thanks for your feedback. >>>>>>>> I'll keep you updated on this work. >>>>>>>> >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>> Ata, >>>>>>>> >>>>>>>> A couple of things: >>>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": the Vec x that the residual operates on is entrywise between Vecs xlow and xhigh.
In order to enable >>>>>>>> constraints on a linear combination of the Vec entries (as you suggest), the problem >>>>>>>> has to be augmented with extra variables -- these linear combinations. This, of course, >>>>>>>> changes the Jacobian, preconditioner, etc. The user provides only the Jacobian for the original system, so libMesh would have to automatically construct the Jacobian for the extended system, and precondition it using the user-provided PC for the original subsystem. This is not necessarily an unreasonable thing to do in many circumstances, but will take more thinking than a straightforward nodal constraint. The user API for specifying the constraints, however, becomes rather straightforward. >>>>>>>> >>>>>>>> 2. In terms of FEM convergence, constraining at the nodes is sufficient (see, for example, >>>>>>>> Glowinski's book: http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C) >>>>>>>> You can show that if you impose the nodal constraints, in the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>> >>>>>>>> In this case no funny manipulation of the Jacobian or the PC is necessary, but the user API to specify the constraint is a bit ambiguous: how do you constrain the node? From a containing element? Then there are multiple, possibly incompatible, constraints. >>>>>>>> Or you can simply go through all the nodes in the system once using a node iterator.
Here is how I think you can do it: >>>>>>>> >>>>>>>> ------------------------------------- >>>>>>>> // make two PetscVectors ub, lb from two variables you already added to the system >>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>>> PetscVector<Number> &lb = *libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>> PetscVector<Number> &ub = *libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>> >>>>>>>> then in the bound routine do: >>>>>>>> >>>>>>>> // Get a constant reference to the mesh object. >>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>> // Get the iterators >>>>>>>> MeshBase::const_node_iterator nd = mesh.local_nodes_begin(); >>>>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end(); >>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>> const Node* node = *nd; >>>>>>>> // get the degree of freedom of the node >>>>>>>> unsigned int dn = node->dof_number(0,0,0); // system 0, var 0, component 0 >>>>>>>> // set ub, lb >>>>>>>> ub(dn) = A; >>>>>>>> lb(dn) = B; >>>>>>>> } >>>>>>>> // close the vectors >>>>>>>> ub.close(); >>>>>>>> lb.close(); >>>>>>>> >>>>>>>> You could do that. That's probably the mode we can support almost immediately. >>>>>>>> The issue here is that you lose some of the "context" by processing each node by >>>>>>>> itself rather than as part of an element. For example, what if each element is a different material? >>>>>>>> I guess in that case that property should be set on the nodes as well. >>>>>>>> >>>>>>>> Exactly, one can always pass a material property as a CONSTANT MONOMIAL variable (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) with a value on each element. >>>>>>>> >>>>>>>> Ata >>>>>>>> >>>>>>>> Dmitry. >>>>>>>> ----------------------------------------------------------------- >>>>>>>> Ata >>>>>>>> >>>>>>>> Dmitry.
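[Editor's note: as a language-neutral illustration of the "box constraints" SNESVI enforces, here is a tiny box-constrained VI solved by projected-gradient iteration. This is a sketch only; the matrix, step size, and tolerances are made up for the example, and it is not libMesh/PETSc code.]

```python
# Box-constrained VI sketch: find x in [lb, ub] with F(x)^T (y - x) >= 0
# for all y in [lb, ub], where F(x) = A x - b (A symmetric positive definite).
# A fixed point of x -> clip(x - step * F(x)) solves the VI.

def solve_box_vi(A, b, lb, ub, step=0.25, tol=1e-12, maxit=1000):
    n = len(b)
    x = [0.5 * (lb[i] + ub[i]) for i in range(n)]  # start inside the box
    for _ in range(maxit):
        # residual F(x) = A x - b
        F = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        # gradient step, projected back onto the box
        x_new = [min(max(x[i] - step * F[i], lb[i]), ub[i]) for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Unconstrained minimizer of 1/2 x^T A x - b^T x is (2, -2); the box [0,1]^2
# makes both bounds active, so the VI solution is (1, 0).
x = solve_box_vi([[2.0, 0.0], [0.0, 2.0]], [4.0, -4.0], [0.0, 0.0], [1.0, 1.0])
# x -> [1.0, 0.0]
```

At the solution each entry either satisfies F_i(x) = 0 (inactive) or sits on a bound with F_i pointing out of the box (active) -- exactly the complementarity conditions an active-set or semismooth VI solver works with.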
>>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> I hope I understood the whole thing correctly. >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> I don't think setting the variable bound on each node separately is a good idea in general. >>>>>>>> I'm not sure what you mean by this. Could you clarify? I don't think I'm advocating this. >>>>>>>> >>>>>>>> Bounds should be set as a sum on each element's degrees of freedom; this way there should not be any nodal inconsistencies. >>>>>>>> I'm not sure what you mean here either. A bound is usually not a sum (e.g., a bound on a concentration is 0 <= c <= 1). >>>>>>>> Perhaps your application expresses the total bound as a sum? I'd be very interested in learning more about that case. >>>>>>>> >>>>>>>> OK, here is my thought process: one optimally wants to bound the variable over the whole element domain between lb, ub. So we would have something like this: >>>>>>>> >>>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_{i}(x)\,\hat{u}_{i} \leq ub, \quad \forall x \in El (Eq-1) >>>>>>>> >>>>>>>> So it's not exactly that the nodal values (\hat{u}_{i}) should be bounded, unless it's an FD application where \phi_i(x) is a delta function! That said, I think given a normalized basis, bounding nodal values is a stricter requirement than (Eq-1). If that's what you meant by inconsistencies between node bounds, then the stronger bound would be my first priority. >>>>>>>> >>>>>>>> Ata >>>>>>>> >>>>>>>> >>>>>>>> Of course it might run into situations with semi-smooth VI. >>>>>>>> The type of VI solver is independent of how the bounds are specified: once we know the upper and lower bounds for each element of a Vec, we can solve the VI with either a semismooth method or an active set method. >>>>>>>> >>>>>>>> I really appreciate your feedback.
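[Editor's note: the two notions of bound being compared above can be set side by side; the nonnegativity and partition-of-unity assumptions below are about the basis (satisfied, e.g., by linear Lagrange elements), not something stated in the thread.]

```latex
% Elementwise bound ((Eq-1) in the thread):
lb \;\le\; \sum_{i=1}^{NDOF} \phi_i(x)\,\hat{u}_i \;\le\; ub
\qquad \forall x \in El
% Nodal bound (what SNESVI's box constraints enforce):
lb \;\le\; \hat{u}_i \;\le\; ub, \qquad i = 1,\dots,NDOF
% If \phi_i(x) \ge 0 and \sum_i \phi_i(x) = 1 on El (e.g. a linear Lagrange
% basis), the sum is a convex combination of the \hat{u}_i, so the nodal
% bounds imply the elementwise bound. For higher-order bases whose shape
% functions take negative values, the two conditions can differ.
```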
I hope we can get to the point where things are as clear as possible so that you can >>>>>>>> profitably use the new capabilities. >>>>>>>> >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Ata >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>> Are you familiar with SNESVI? The main thing I'm looking at right now is the libMesh API for specifying >>>>>>>> the bounds on the variables. It seems to me that the user would ideally want to specify the bounds on the variable >>>>>>>> one element at a time, since that's the default traversal mode for libMesh. Then, of course, many nodal values >>>>>>>> are traversed many times, allowing for an inconsistency. The question is, what do you, as a user, expect in that situation? >>>>>>>> Should the latest bound that the user has set be applied, or the tightest of the bounds the user has supplied? >>>>>>>> >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> Thanks Dmitry, I look forward to using it. Also, if it's available as a dev version, I would love to try to work with it. >>>>>>>> >>>>>>>> Ata >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>> We (ANL) are working on a patch to libMesh to enable this functionality. >>>>>>>> The motivation is to use it through MOOSE, but it will be exposed as a libMesh API. >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> Dear all, >>>>>>>> >>>>>>>> Is it possible to >>>>>>>> use the PETScNonlinearSolver class with SNESVI (PETSc/3.2) to solve a >>>>>>>> variational inequality problem? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Ata Mesgarnejad >>>>>>>> ------------------------------------------------------------------------------ >>>>>>>> Cloud Computing - Latest Buzzword or a Glimpse of the Future?
>>>>>>>> This paper surveys cloud computing today: What are the benefits? >>>>>>>> Why are businesses embracing it? What are its payoffs and pitfalls? >>>>>>>> http://www.accelacomm.com/jaw/sdnl/114/51425149/ >>>>>>>> _______________________________________________ >>>>>>>> Libmesh-users mailing list >>>>>>>> Lib...@li... >>>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> A. Mesgarnejad >>>>>>>> PhD Student, Research Assistant >>>>>>>> Mechanical Engineering Department >>>>>>>> Louisiana State University >>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>> Baton Rouge, La 70803 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> A. Mesgarnejad >>>>>>>> PhD Student, Research Assistant >>>>>>>> Mechanical Engineering Department >>>>>>>> Louisiana State University >>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>> Baton Rouge, La 70803 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> A. Mesgarnejad >>>>>>>> PhD Student, Research Assistant >>>>>>>> Mechanical Engineering Department >>>>>>>> Louisiana State University >>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>> Baton Rouge, La 70803 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> A. Mesgarnejad >>>>>>>> PhD Student, Research Assistant >>>>>>>> Mechanical Engineering Department >>>>>>>> Louisiana State University >>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>> Baton Rouge, La 70803 >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >>> >>> >>> >>> -- >>> A. Mesgarnejad >>> PhD Student, Research Assistant >>> Mechanical Engineering Department >>> Louisiana State University >>> 2203 Patrick F. Taylor Hall >>> Baton Rouge, La 70803 >>> >>> <petsc_dm.patch> >> >> >> <0001-PETSc-DM-incref-bug-fix.patch><0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> > > |
From: Dmitry K. <ka...@mc...> - 2012-05-05 17:22:06
|
Okay, that helps. I'm looking further into it. Dmitry. On Sat, May 5, 2012 at 12:08 PM, Ataollah Mesgarnejad < am...@ti...> wrote: > Dmitry, > > I just tried it without bounds and I get the same DIVERGED_BREAKDOWN error. > > Ata > > Ata, > I'm trying to reproduce the problem that you are observing, so I wanted to > understand better what you are doing there. > Do you just skip Jacobian evaluation and return from the ComputeJacobian > call immediately? > Also, without --use-petsc-dm, how are you able to run with VIRS (I > remember you mentioned your own wrapper to SNESVI)? Finally, do you see > this problem when running with SNES (not SNESVI)? I wonder if the issue is > in the interaction with the bounds. > > Thanks for your patience. > Dmitry. > > On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad < > am...@ti...> wrote: > >> >> On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: >> >> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs and >> return the matrix from the underlying NonlinearImplicitSystem. The reason >> is that initially the matrix is preallocated by unassembled, and >> MatDuplicate >> doesn't want to duplicate an unassembled matrix. I can fix that, but I'm >> a bit puzzled by the fact that KSP messes >> with the matrix. Can you run with -snes_view and send me the output? >> >> Dmitry, >> >> Here it is: >> >> ------------- with the flag ----------------------- >> >> Linear solve converged due to CONVERGED_RTOL iterations 44 >> Calculating surface energy... done >> Calculating fracture energy... done >> elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 >> applying variable bounds ... done >> NL step 0, |residual|_2 = 2.41046e-01 >> SNES Jacobian assembly... 
done >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> NL step 1, |residual|_2 = 5.10144e-07 >> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >> SNES Object: 1 MPI processes >> type: virs >> maximum iterations=50, maximum function evaluations=10000 >> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >> total number of linear solver iterations=3 >> total number of function evaluations=2 >> KSP Object: 1 MPI processes >> type: gmres >> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >> Orthogonalization with no iterative refinement >> GMRES: happy breakdown tolerance 1e-30 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using PRECONDITIONED norm type for convergence test >> PC Object: 1 MPI processes >> type: ilu >> ILU: out-of-place factorization >> 0 levels of fill >> tolerance for zero pivot 2.22045e-14 >> using diagonal shift to prevent zero pivot >> matrix ordering: natural >> factor fill ratio given 1, needed 1 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> package used to perform factorization: petsc >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> SNESLineSearch Object: 1 MPI processes >> type: basic >> maxstep=1e+08, minlambda=1e-12 >> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >> maximum iterations=1 >> Fracture system solved at nonlinear iteration 1 , final nonlinear >> residual norm: 5.10144e-07 >> >> --- without the flag ------ >> >> applying variable bounds ... 
done >> NL step 0, |residual|_2 = 2.41046e-01 >> SNES Jacobian assembly... done >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> NL step 1, |residual|_2 = 5.10144e-07 >> SNES Jacobian assembly... done >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> NL step 2, |residual|_2 = 1.61153e-12 >> SNES Object: 1 MPI processes >> type: virs >> maximum iterations=50, maximum function evaluations=10000 >> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >> total number of linear solver iterations=6 >> total number of function evaluations=3 >> KSP Object: 1 MPI processes >> type: gmres >> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >> Orthogonalization with no iterative refinement >> GMRES: happy breakdown tolerance 1e-30 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using PRECONDITIONED norm type for convergence test >> PC Object: 1 MPI processes >> type: ilu >> ILU: out-of-place factorization >> 0 levels of fill >> tolerance for zero pivot 2.22045e-14 >> using diagonal shift to prevent zero pivot >> matrix ordering: natural >> factor fill ratio given 1, needed 1 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> package used to perform factorization: petsc >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> SNESLineSearch Object: 1 MPI processes >> type: basic >> maxstep=1e+08, minlambda=1e-12 >> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >> maximum iterations=1 >> Fracture system solved at nonlinear iteration 2 , final 
nonlinear >> residual norm: 1.61153e-12 >> >> >> >> Incidentally, there _was_ a reference counting bug that led to a >> breakdown very much like what you reported. >> I'm attaching that patch and a small improvement to DMVariableBounds. I >> need to figure out how to go about creating matrices from DM_libMesh. >> >> Let me know if you found anything or if I can be of any help. >> >> Thanks, >> Ata >> >> Dmitry. >> >> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad < >> am...@ti...> wrote: >> >>> Dmitry, >>> >>> Is there a way so that SNES doesn't recompute the Jacobian at every >>> iteration (my Jacobian doesn't change)? I used to use a flag to check if it's >>> assembled, but when I do that with petsc-dm, the KSP associated with SNES gives >>> DIVERGED_BREAKDOWN for the second iteration! It looks like petsc-dm doesn't >>> make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up. >>> >>> Here is how it looks: >>> >>> -with the flag: >>> >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 8.90674e-04 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>> NL step 1, |residual|_2 = 2.90987e-05 >>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>> >>> This happens for every SNES solve. >>> ----------------------- >>> >>> -without it: >>> >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 6.08030e-02 >>> SNES hessian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>> NL step 1, |residual|_2 = 2.61557e-07 >>> SNES hessian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>> NL step 2, |residual|_2 = 1.35354e-12 >>> Fracture system solved at nonlinear iteration 2 , final nonlinear >>> residual norm: 1.35354e-12 >>> >>> PS: these are not from the same time step, but you get the idea.
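[Editor's note: on skipping Jacobian re-assembly -- rather than a hand-rolled "already assembled" flag, PETSc has a built-in lagging mechanism. The sketch below is from memory of the petsc-3.2-era options; check the SNESSetLagJacobian documentation for the exact semantics.]

```
# Rebuild the Jacobian only every n-th SNES iteration:
#   -snes_lag_jacobian <n>
# Or, from code:
#   SNESSetLagJacobian(snes, n);
# This keeps the SNES/KSP bookkeeping consistent, instead of returning
# early from the Jacobian callback and reusing the matrix by hand.
```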
>>> >>> Best, >>> Ata >>> >>> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >>> >>> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the >>> latest petsc-dev: >>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>> If you decide you want to upgrade to this (or later) petsc-dev, you'd >>> need a small patch for >>> petsc_dm_nonlinear_solver.C to account for the latest API >>> changes (attached). >>> I am working on a bigger patch that adds more functionality to >>> DM_libMesh and streamline >>> miscellaneous_ex7. That will take some time to percolate up into the >>> libMesh tree. In the >>> meantime you should be able to run with the small fix I've attached. >>> >>> Let me know how it works. >>> Dmitry. >>> >>> >>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad < >>> am...@ti...> wrote: >>> >>>> Dmitry, >>>> >>>> I believe that the error is indeed due to petsc-dev version. I'm going >>>> to check this during the next few hours and get back to you. >>>> >>>> Thanks, >>>> Ata >>>> >>>> >>>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...>wrote: >>>> >>>>> Ata, >>>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 >>>>> procs with a relatively recent version >>>>> of petsc-dev. Can you send me the full PETSc error output? >>>>> >>>>> Thanks. >>>>> Dmitry. >>>>> >>>>> >>>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad < >>>>> am...@ti...> wrote: >>>>> >>>>>> >>>>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>>>>> >>>>>> Ata, >>>>>> example7 segfaults when you run in parallel? >>>>>> Does it do it on 2 procs? >>>>>> >>>>>> Yes and yes. 
>>>>>> >>>>>> here is the stack: >>>>>> (gdb) where >>>>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from >>>>>> /lib64/libc.so.6 >>>>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, >>>>>> char const*, char const*, char const*, int, PetscErrorType, char const*, >>>>>> void*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char >>>>>> const*, char const*, int, PetscErrorType, char const*, ...) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #8 <signal handler called> >>>>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, >>>>>> _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), >>>>>> void*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int >>>>>> (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to >>>>>> quit--- >>>>>> _p_Vec*, void*), void**) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int >>>>>> (**)(_p_SNES*, _p_Vec*, 
_p_Vec*, void*), void**) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> >>>>>> #15 0x00000000010976ab in >>>>>> libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, >>>>>> jac_in=..., x_in=..., r_in=...) >>>>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, >>>>>> dt=@0x7fff95edca28) >>>>>> at biharmonic.C:110 >>>>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at >>>>>> biharmonic.C:152 >>>>>> warning: Range for type (null) has invalid bounds 0..-111 >>>>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>>>> at miscellaneous_ex7.C:64 >>>>>> >>>>>> Ata >>>>>> >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < >>>>>> am...@ti...> wrote: >>>>>> >>>>>>> Dmitry, >>>>>>> >>>>>>> I had a very good experience on single processor but on multiple >>>>>>> processors I get segmentation error at line where we call solve (I tried >>>>>>> both my code and example7) . I tried your example without -vi >>>>>>> --use-petsc-dm and it works just fine. Right now I'm trying to compile a >>>>>>> debug version and look into it; meanwhile I thought it would be a good idea >>>>>>> if I let you know. 
>>>>>>> >>>>>>> Thanks, >>>>>>> Ata >>>>>>> >>>>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < >>>>>>> am...@ti...> wrote: >>>>>>> >>>>>>>> >>>>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >>>>>>>> am...@ti...> wrote: >>>>>>>> >>>>>>>>> Dmitry, >>>>>>>>> >>>>>>>>> I had time to come back and work on the SNESVI/libMesh program. >>>>>>>>> The way you set up your code is particularly hard for me to follow in my own >>>>>>>>> code, since it requires a significant amount of extra work (to define >>>>>>>>> classes and set up the inheritance...). What I did instead was to try adding a >>>>>>>>> TransientNonlinearSystem to my EquationSystems object: >>>>>>>>> >>>>>>>>> TransientNonlinearImplicitSystem & fracture_system = >>>>>>>>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>>>> >>>>>>>>> and then to add the bounds and Objective and Residual functions: >>>>>>>>> >>>>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>>>> fracture_system.nonlinear_solver->jacobian = PsiFormJacobian; >>>>>>>>> fracture_system.nonlinear_solver->residual = PsiFormResidual; >>>>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>>>> >>>>>>>>> What I see now is that the SNES solve does not apply the bounds. So >>>>>>>>> here are my questions: >>>>>>>>> >>>>>>>>> 1- Is there anything inherently wrong with setting up the system, especially >>>>>>>>> the bounds, the easy way like I did? >>>>>>>>> >>>>>>>> Ata, >>>>>>>> Doing things this "simple" way is just fine: there is no reason to >>>>>>>> follow the relatively convoluted structure I used in that >>>>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>>>> >>>>>>>>> 2- Where is the right SNES type chosen?
I guess this should be >>>>>>>>> hard wired somewhere in NonlinearImplicitSystem! >>>>>>>>> 3- When I use -snes_view (with your example or my own code ) I get >>>>>>>>> snes_type ls? shouldn't this be VI ? >>>>>>>>> >>>>>>>> >>>>>>>> What's likely missing is the command-line option --use-petsc-dm. >>>>>>>> It's necessary since libMesh wants to preserve the usual behavior as >>>>>>>> the default. In particular, because SNESVI and the DM hookup aren't really >>>>>>>> possible prior to petsc-3.2. >>>>>>>> In fact, it might not be possible without a recent petsc-dev (let >>>>>>>> me verify that, once I get to my computer). >>>>>>>> >>>>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>>>> >>>>>>>> I just saw that in the example help line and used it. It works like >>>>>>>> a charm. >>>>>>>> >>>>>>> Great! >>>>>>> Dmitry. >>>>>>> >>>>>>>> I'll go on check for the convergence and see what I can get. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Ata >>>>>>>> >>>>>>>> Thanks. >>>>>>>> >>>>>>>> Dmitry. >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>>>> >>>>>>>>> Convergence is something we might need to work on -- I noticed >>>>>>>>> some chatter in a simple bounded biharmonic heat equation (a stripped down >>>>>>>>> version of Cahn-Hilliard). >>>>>>>>> Thanks. >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>>>>>>>> am...@ti...> wrote: >>>>>>>>> >>>>>>>>>> Thanks Dmitry, >>>>>>>>>> >>>>>>>>>> It would be wonderful. I made a wrapper for SNESVI in myself for >>>>>>>>>> my program but had difficulty achieving good convergence. Right now I am >>>>>>>>>> occupied with other issues but I'll get back to this as soon as I get a >>>>>>>>>> chance. >>>>>>>>>> >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>>>> >>>>>>>>>> Ata, >>>>>>>>>> There is now a way to use SNESVI from libMesh. 
>>>>>>>>>> If interested, pull a recent revision from sourceforge and take a >>>>>>>>>> look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>>>> Let me know, if you have any questions. >>>>>>>>>> >>>>>>>>>> Thanks. >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev < >>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>> >>>>>>>>>>> Okay, thanks for your feedback. >>>>>>>>>>> I'll keep you updated on this work. >>>>>>>>>>> >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev < >>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Ata, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> A couple of things: >>>>>>>>>>>>>>> 1. SNESVI solves a VI under what's known as "box >>>>>>>>>>>>>>> constraints": the Vec x that the residual operates on is entrywise between >>>>>>>>>>>>>>> Vecs xlow and xhigh. In order to enable >>>>>>>>>>>>>>> constraints on a linear combination of the Vec entries (as >>>>>>>>>>>>>>> you suggest), the problem >>>>>>>>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>>>>>>>> combinations. This, of course, >>>>>>>>>>>>>>> changes the Jacobian, preconditioner, etc. The user >>>>>>>>>>>>>>> provides only the Jacobian for the original system, so libMesh would have >>>>>>>>>>>>>>> to automatically construct the Jacobian for the extended system, and >>>>>>>>>>>>>>> precondition it using the user provided PC for the original subsystem. 
This >>>>>>>>>>>>>>> is not necessarily the unreasonable thing to do in many circumstances, but >>>>>>>>>>>>>>> will take more thinking than a straighforward nodal constraint. The user >>>>>>>>>>>>>>> API for specifying the constraints, however, becomes rather straighforward. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes is >>>>>>>>>>>>>>> sufficient (see, for example, >>>>>>>>>>>>>>> Glowinski's book: >>>>>>>>>>>>>>> http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C >>>>>>>>>>>>>>> ) >>>>>>>>>>>>>>> You can show that if you impose the nodal constraints, in >>>>>>>>>>>>>>> the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> In this case no funny manipulation of the Jacobian or the PC >>>>>>>>>>>>>>> is necessary, but the user API to specify the constraint is a bit amiguous: >>>>>>>>>>>>>>> how do you constrain the node? From a containing element? Then there are >>>>>>>>>>>>>>> multiple, possibly incompatible, constraints. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> Or you can simply go through all the nodes in the system once >>>>>>>>>>>>>> using node iterator. Here is how I think you can do it: >>>>>>>>>>>>>> >>>>>>>>>>>>>> ------------------------------------- >>>>>>>>>>>>>> //make two PetscVector ub,lb from two variables you already >>>>>>>>>>>>>> added to the system >>>>>>>>>>>>>> NumericVector<Number> &ub_sys = >>>>>>>>>>>>>> fracture_system.get_vector("ub"); >>>>>>>>>>>>>> NumericVector<Number> &lb_sys = >>>>>>>>>>>>>> fracture_system.get_vector("lb"); >>>>>>>>>>>>>> PetscVector<Number>&lb = >>>>>>>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>>>>>>> PetscVector<Number>&ub = >>>>>>>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>>>>>>> >>>>>>>>>>>>>> *then in the bound routine do:* >>>>>>>>>>>>>> >>>>>>>>>>>>>> // Get a constant reference to the mesh object. 
>>>>>>>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>>>>>>> //Get the iterator >>>>>>>>>>>>>> MeshBase::const_node_iterator nd = >>>>>>>>>>>>>> mesh.local_nodes_begin(); >>>>>>>>>>>>>> const MeshBase::const_node_iterator end_nd = >>>>>>>>>>>>>> mesh.local_nodes_end(); >>>>>>>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>>>>>>> const Node* node = *nd; >>>>>>>>>>>>>> //get the degree of freedom of node >>>>>>>>>>>>>> unsigned int dn = node->dof_number(0,0,0); // >>>>>>>>>>>>>> system 0 , var 0, component 0 >>>>>>>>>>>>>> //set the ub,lb >>>>>>>>>>>>>> ub(dn)= A; >>>>>>>>>>>>>> lb(dn) = B; >>>>>>>>>>>>>> } >>>>>>>>>>>>>> //close the vectors >>>>>>>>>>>>>> ub.close(); >>>>>>>>>>>>>> lb.close(); >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> You could do that. That's probably the mode we can support >>>>>>>>>>>>> almost immediately. >>>>>>>>>>>>> The issue here is that you lose some of the "context" by >>>>>>>>>>>>> processing each node by >>>>>>>>>>>>> itself rather as part of an element. For example what if >>>>>>>>>>>>> each element is a different material? >>>>>>>>>>>>> I guess in that case that property should be set on the nodes >>>>>>>>>>>>> as well. >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL >>>>>>>>>>>> (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with >>>>>>>>>>>> values on each node. >>>>>>>>>>>> >>>>>>>>>>>> Ata >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> ----------------------------------------------------------------- >>>>>>>>>>>>>> Ata >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I hope I understood the whole thing correctly. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I don't think setting the variable bound on each >>>>>>>>>>>>>>>>>> node separately is a good idea in general. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? I >>>>>>>>>>>>>>>>> don't think I'm advocating this. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Bounds should be set as a sum on each elements degrees of >>>>>>>>>>>>>>>>>> freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I'm not sure what you mean here either. A bound is >>>>>>>>>>>>>>>>> usually not a sum (e.g., a bound on a concentration is 0 <= c <=1). >>>>>>>>>>>>>>>>> Perhaps your application expresses the total bound as a >>>>>>>>>>>>>>>>> sum? I'd be very interested in learning more about that case. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> OK here is my thought process: one optimally wants to bound >>>>>>>>>>>>>>>> the variable over the whole Element domain between lb,ub. So we would have >>>>>>>>>>>>>>>> something like this: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> lb \leq \sum_{i=1}^{n=NDOF} \phi_{i}(x) \hat{u}_{i} \leq >>>>>>>>>>>>>>>> ub, \forall x \in El >>>>>>>>>>>>>>>> (Eq-1) >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> so it's not exactly that the nodal values should be bounded >>>>>>>>>>>>>>>> (\hat{u}_{i}) unless it's a FD application where \phi_i(x) is delta >>>>>>>>>>>>>>>> function !! That said I think given a normalized basis, bounding nodal >>>>>>>>>>>>>>>> values is stricter requirement that (Eq-1). if that's what you meant by >>>>>>>>>>>>>>>> inconsistencies between node bounds then the stronger bound would be my >>>>>>>>>>>>>>>> first priority. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Of course it might run into a situations with semi-smooth >>>>>>>>>>>>>>>>>> VI. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> The type of VI solver is independent of how the bounds are >>>>>>>>>>>>>>>>> specified: once we know the upper and lower bounds for each element of a >>>>>>>>>>>>>>>>> Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>>>>>>>> method. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I really appreciate your feedback. I hope we can get to >>>>>>>>>>>>>>>>> the point where things are as clear as possible so that you can >>>>>>>>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm >>>>>>>>>>>>>>>>>>> looking at right now is the libMesh API for specifying >>>>>>>>>>>>>>>>>>> the bounds on the variables. It seems to me that the >>>>>>>>>>>>>>>>>>> user would ideally want to specify the bounds on the variable >>>>>>>>>>>>>>>>>>> one element at a time, since that's the default >>>>>>>>>>>>>>>>>>> traversal mode for libMesh. Then, of course, many nodal values >>>>>>>>>>>>>>>>>>> are traversed many time, allowing for an inconsistency. >>>>>>>>>>>>>>>>>>> The question is, what you, as a user, expect in that situation? >>>>>>>>>>>>>>>>>>> Should the latest bound that the user has set be >>>>>>>>>>>>>>>>>>> applied or the tightest of the bounds the user has supplied? >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Thanks Dmitry, I look forward to use it. Also if it's >>>>>>>>>>>>>>>>>>>> available as a dev version I would love to try to work with it. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> We (ANL) are working on an patch to libMesh to enable >>>>>>>>>>>>>>>>>>>>> this functionality. >>>>>>>>>>>>>>>>>>>>> The motivation is to use it through MOOSE, but it will >>>>>>>>>>>>>>>>>>>>> be exposed as a libMesh API. >>>>>>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad >>>>>>>>>>>>>>>>>>>>> <am...@ti...> wrote: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Dear all, >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> I wonder if there is compatible versionIs it possible >>>>>>>>>>>>>>>>>>>>>> to >>>>>>>>>>>>>>>>>>>>>> use PETScNonlinearSolver class with SNESVI >>>>>>>>>>>>>>>>>>>>>> (PETSc/3.2) to solve a >>>>>>>>>>>>>>>>>>>>>> variational inequality problem? >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>> Ata Mesgarnejad >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>> Cloud Computing - Latest Buzzword or a Glimpse of the >>>>>>>>>>>>>>>>>>>>>> Future? >>>>>>>>>>>>>>>>>>>>>> This paper surveys cloud computing today: What are >>>>>>>>>>>>>>>>>>>>>> the benefits? >>>>>>>>>>>>>>>>>>>>>> Why are businesses embracing it? What are its payoffs >>>>>>>>>>>>>>>>>>>>>> and pitfalls? 
>>>>>>>>>>>>>>>>>>>>>> http://www.accelacomm.com/jaw/sdnl/114/51425149/ >>>>>>>>>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>>>>>>>>> Libmesh-users mailing list >>>>>>>>>>>>>>>>>>>>>> Lib...@li... >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>> Louisiana State University >>>>>>>>>>>> 2203 Patrick F. 
Taylor Hall >>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>>> >>>> -- >>>> A. Mesgarnejad >>>> PhD Student, Research Assistant >>>> Mechanical Engineering Department >>>> Louisiana State University >>>> 2203 Patrick F. Taylor Hall >>>> Baton Rouge, La 70803 >>>> >>> >>> <petsc_dm.patch> >>> >>> >>> >> <0001-PETSc-DM-incref-bug-fix.patch> >> <0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> >> >> >> > > |
From: Ataollah M. <am...@ti...> - 2012-05-09 22:23:32
|
Dmitry, A quick question: When does the bounds routine get called? My bounds are changing constantly do I need to call the bounds function myself or does it get called for each system.solve()? Thanks, Ata On May 5, 2012, at 12:21 PM, Dmitry Karpeev wrote: > Okay, that helps. > I'm looking further into it. > Dmitry. > > On Sat, May 5, 2012 at 12:08 PM, Ataollah Mesgarnejad <am...@ti...> wrote: > Dmitry, > > I just tried it without bounds and I get the same DIVERGED_BREAKDOWN error. > > Ata >> Ata, >> I'm trying to reproduce the problem that you are observing, so I wanted to understand better what you are doing there. >> Do you just skip Jacobian evaluation and return from the ComputeJacobian call immediately? >> Also, without --use-petsc-dm, how are you able to run with VIRS (I remember you mentioned your own wrapper to SNESVI)? Finally, do you see this problem when running with SNES (not SNESVI)? I wonder if the issue is in the interaction with the bounds. >> >> Thanks for your patience. >> Dmitry. >> >> On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >> >> On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: >> >>> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs and return the matrix from the underlying NonlinearImplicitSystem. The reason is that initially the matrix is preallocated by unassembled, and MatDuplicate >>> doesn't want to duplicate an unassembled matrix. I can fix that, but I'm a bit puzzled by the fact that KSP messes >>> with the matrix. Can you run with -snes_view and send me the output? >>> >> >> Dmitry, >> >> Here it is: >> >> ------------- with the flag ----------------------- >> >> Linear solve converged due to CONVERGED_RTOL iterations 44 >> Calculating surface energy... done >> Calculating fracture energy... done >> elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 >> applying variable bounds ... done >> NL step 0, |residual|_2 = 2.41046e-01 >> SNES Jacobian assembly... 
done >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> NL step 1, |residual|_2 = 5.10144e-07 >> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >> SNES Object: 1 MPI processes >> type: virs >> maximum iterations=50, maximum function evaluations=10000 >> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >> total number of linear solver iterations=3 >> total number of function evaluations=2 >> KSP Object: 1 MPI processes >> type: gmres >> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >> GMRES: happy breakdown tolerance 1e-30 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using PRECONDITIONED norm type for convergence test >> PC Object: 1 MPI processes >> type: ilu >> ILU: out-of-place factorization >> 0 levels of fill >> tolerance for zero pivot 2.22045e-14 >> using diagonal shift to prevent zero pivot >> matrix ordering: natural >> factor fill ratio given 1, needed 1 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> package used to perform factorization: petsc >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> SNESLineSearch Object: 1 MPI processes >> type: basic >> maxstep=1e+08, minlambda=1e-12 >> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >> maximum iterations=1 >> Fracture system solved at nonlinear iteration 1 , final nonlinear residual norm: 5.10144e-07 >> >> --- without the flag ------ >> >> applying variable bounds ... 
done >> NL step 0, |residual|_2 = 2.41046e-01 >> SNES Jacobian assembly... done >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> NL step 1, |residual|_2 = 5.10144e-07 >> SNES Jacobian assembly... done >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> NL step 2, |residual|_2 = 1.61153e-12 >> SNES Object: 1 MPI processes >> type: virs >> maximum iterations=50, maximum function evaluations=10000 >> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >> total number of linear solver iterations=6 >> total number of function evaluations=3 >> KSP Object: 1 MPI processes >> type: gmres >> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >> GMRES: happy breakdown tolerance 1e-30 >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >> left preconditioning >> using PRECONDITIONED norm type for convergence test >> PC Object: 1 MPI processes >> type: ilu >> ILU: out-of-place factorization >> 0 levels of fill >> tolerance for zero pivot 2.22045e-14 >> using diagonal shift to prevent zero pivot >> matrix ordering: natural >> factor fill ratio given 1, needed 1 >> Factored matrix follows: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> package used to perform factorization: petsc >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> linear system matrix = precond matrix: >> Matrix Object: 1 MPI processes >> type: seqaij >> rows=153, cols=153 >> total: nonzeros=2401, allocated nonzeros=2401 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> SNESLineSearch Object: 1 MPI processes >> type: basic >> maxstep=1e+08, minlambda=1e-12 >> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >> maximum iterations=1 >> Fracture system solved at nonlinear iteration 2 , final 
nonlinear residual norm: 1.61153e-12 >> >> >> >>> Incidentally, there _was_ a reference counting bug that led to a breakdown very much like what you reported. >>> I'm attaching that patch and a small improvement to DMVariableBounds. I need to figure out how to go about creating matrices from DM_libMesh. >>> >> >> Let me know if you found anything or if I can be of any help. >> >> Thanks, >> Ata >> >>> Dmitry. >>> >>> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>> Dmitry, >>> >>> Is there a way so SNES doesn't go calculate the Jacobian for every iteration (my Jac does't change)? I used to use a flag to check if it's assembled but when I do that with petsc-dm, KSP associated with SNES gives DIVERGED_BREAKDOWN for the second iteration! looks like petsc-dm doesn't make a copy of the matrix so when it's used in KSP solve it gets jumbled up. >>> >>> Here is how it looks >>> >>> -with the flag: >>> >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 8.90674e-04 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>> NL step 1, |residual|_2 = 2.90987e-05 >>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>> >>> this happens for every SNES solve. >>> ----------------------- >>> >>> -without it: >>> >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 6.08030e-02 >>> SNES hessian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>> NL step 1, |residual|_2 = 2.61557e-07 >>> SNES hessian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>> NL step 2, |residual|_2 = 1.35354e-12 >>> Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.35354e-12 >>> >>> ps: these are not from the same time step but you get the idea. 
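The "flag" approach described above amounts to assembling the Jacobian once and handing back the cached matrix on later calls. Below is a minimal self-contained sketch of that pattern; the `Matrix` type and the assembly loop are placeholders, not libMesh API. Note the pattern is only safe if nothing downstream modifies the cached matrix: an in-place factorization, or a DM that returns the very same matrix object, as suspected in this thread, would corrupt it between solves.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Placeholder "matrix": a flat array of entries standing in for a
// sparse FEM Jacobian.
using Matrix = std::vector<double>;

// Jacobian callback that assembles only on the first call and reuses
// the cached entries afterwards -- the flag-based pattern described
// in the thread for a Jacobian that does not change between solves.
class LazyJacobian {
public:
    explicit LazyJacobian(std::size_t n) : J_(n, 0.0) {}

    const Matrix& compute()
    {
        if (!assembled_) {
            assemble();          // expensive assembly, done once
            assembled_ = true;
        }
        ++calls_;
        return J_;
    }

    int assembly_count() const { return assemblies_; }
    int call_count() const { return calls_; }

private:
    void assemble()
    {
        for (std::size_t i = 0; i < J_.size(); ++i)
            J_[i] = static_cast<double>(i);   // stand-in for FEM assembly
        ++assemblies_;
    }

    Matrix J_;
    bool assembled_ = false;
    int assemblies_ = 0;
    int calls_ = 0;
};
```

In PETSc itself, Jacobian lagging (`SNESSetLagJacobian()`, or the `-snes_lag_jacobian` option) is the supported way to reuse a Jacobian across iterations without fighting the solver over matrix ownership.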
>>> >>> Best, >>> Ata >>> >>> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >>> >>>> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev: >>>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>>> If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for >>>> petsc_dm_nonlinear_solver.C to account for the latest API changes (attached). >>>> I am working on a bigger patch that adds more functionality to DM_libMesh and streamline >>>> miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the >>>> meantime you should be able to run with the small fix I've attached. >>>> >>>> Let me know how it works. >>>> Dmitry. >>>> >>>> >>>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>> Dmitry, >>>> >>>> I believe that the error is indeed due to petsc-dev version. I'm going to check this during the next few hours and get back to you. >>>> >>>> Thanks, >>>> Ata >>>> >>>> >>>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>> Ata, >>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version >>>> of petsc-dev. Can you send me the full PETSc error output? >>>> >>>> Thanks. >>>> Dmitry. >>>> >>>> >>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>> >>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>>> >>>>> Ata, >>>>> example7 segfaults when you run in parallel? >>>>> Does it do it on 2 procs? >>>> Yes and yes. 
>>>> >>>> here is the stack: >>>> (gdb) where >>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 >>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, void*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, ...) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #8 <signal handler called> >>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to quit--- >>>> _p_Vec*, void*), void**) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #13 
0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>> >>>> #15 0x00000000010976ab in libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, jac_in=..., x_in=..., r_in=...) >>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, dt=@0x7fff95edca28) >>>> at biharmonic.C:110 >>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at biharmonic.C:152 >>>> warning: Range for type (null) has invalid bounds 0..-111 >>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>> at miscellaneous_ex7.C:64 >>>> >>>> Ata >>>> >>>>> >>>>> Dmitry. >>>>> >>>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>> Dmitry, >>>>> >>>>> I had a very good experience on single processor but on multiple processors I get segmentation error at line where we call solve (I tried both my code and example7) . I tried your example without -vi --use-petsc-dm and it works just fine. Right now I'm trying to compile a debug version and look into it; meanwhile I thought it would be a good idea if I let you know. >>>>> >>>>> Thanks, >>>>> Ata >>>>> >>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>> >>>>>> >>>>>> >>>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> >>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> Dmitry, >>>>>>> >>>>>>> I had time to come back and work on the SNESVI/Libmesh program. 
The way you set up your code is particularly hard for me to follow in my own code since it requires a significant amount of extra work (to define classes and set up the inheritances..). What I did instead was to try adding a TransientNonlinearSystem to my EquationSystems object: >>>>>>> >>>>>>> TransientNonlinearImplicitSystem & fracture_system = equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>> >>>>>>> and then to add the bounds and Objective and Residual functions: >>>>>>> >>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>> fracture_system.nonlinear_solver->jacobian = PsiFormJacobian; >>>>>>> fracture_system.nonlinear_solver->residual = PsiFormResidual; >>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>> >>>>>>> what I see now is that SNES solve does not apply the bounds. So here are my questions: >>>>>>> >>>>>>> 1- Is there anything inherently wrong with setting up the system, especially the bounds, the easy way like I did? >>>>>>> Ata, >>>>>>> Doing things this "simple" way is just fine: there is no reason to follow the relatively convoluted structure I used in that >>>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>>> 2- Where is the right SNES type chosen? I guess this should be hard-wired somewhere in NonlinearImplicitSystem! >>>>>>> 3- When I use -snes_view (with your example or my own code) I get snes_type ls? Shouldn't this be VI? >>>>>>> >>>>>>> What's likely missing is the command-line option --use-petsc-dm. It's necessary since libMesh wants to preserve the usual behavior as the default. In particular, because SNESVI and the DM hookup aren't really possible prior to petsc-3.2. >>>>>>> In fact, it might not be possible without a recent petsc-dev (let me verify that, once I get to my computer). >>>>>>> >>>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>>> >>>>>> >>>>>> I just saw that in the example help line and used it. It works like a charm. 
>>>>>> Great! >>>>>> Dmitry. >>>>>> I'll go on to check the convergence and see what I can get. >>>>>> >>>>>> Thanks, >>>>>> Ata >>>>>>> Thanks. >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> Thanks, >>>>>>> Ata >>>>>>> >>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>> >>>>>>>> Convergence is something we might need to work on -- I noticed some chatter in a simple bounded biharmonic heat equation (a stripped down version of Cahn-Hilliard). >>>>>>>> Thanks. >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> Thanks Dmitry, >>>>>>>> >>>>>>>> It would be wonderful. I made a wrapper for SNESVI myself for my program but had difficulty achieving good convergence. Right now I am occupied with other issues but I'll get back to this as soon as I get a chance. >>>>>>>> >>>>>>>> Ata >>>>>>>> >>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>> >>>>>>>>> Ata, >>>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>>> If interested, pull a recent revision from sourceforge and take a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>>> Let me know if you have any questions. >>>>>>>>> >>>>>>>>> Thanks. >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>> Okay, thanks for your feedback. >>>>>>>>> I'll keep you updated on this work. >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>> Ata, >>>>>>>>> >>>>>>>>> A couple of things: >>>>>>>>> 1. 
SNESVI solves a VI under what's known as "box constraints": the Vec x that the residual operates on is entrywise between Vecs xlow and xhigh. In order to enable >>>>>>>>> constraints on a linear combination of the Vec entries (as you suggest), the problem >>>>>>>>> has to be augmented with extra variables -- these linear combinations. This, of course, >>>>>>>>> changes the Jacobian, preconditioner, etc. The user provides only the Jacobian for the original system, so libMesh would have to automatically construct the Jacobian for the extended system, and precondition it using the user-provided PC for the original subsystem. This is not necessarily an unreasonable thing to do in many circumstances, but will take more thinking than a straightforward nodal constraint. The user API for specifying the constraints, however, becomes rather straightforward. >>>>>>>>> >>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes is sufficient (see, for example, >>>>>>>>> Glowinski's book: http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C) >>>>>>>>> You can show that if you impose the nodal constraints, in the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>> >>>>>>>>> In this case no funny manipulation of the Jacobian or the PC is necessary, but the user API to specify the constraint is a bit ambiguous: how do you constrain the node? From a containing element? Then there are multiple, possibly incompatible, constraints. >>>>>>>>> Or you can simply go through all the nodes in the system once using a node iterator. 
Here is how I think you can do it: >>>>>>>>> >>>>>>>>> ------------------------------------- >>>>>>>>> //make two PetscVector ub,lb from two variables you already added to the system >>>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>>>> PetscVector<Number>&lb = libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>> PetscVector<Number>&ub = libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>> >>>>>>>>> then in the bound routine do: >>>>>>>>> >>>>>>>>> // Get a constant reference to the mesh object. >>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>> //Get the iterator >>>>>>>>> MeshBase::const_node_iterator nd = mesh.local_nodes_begin(); >>>>>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end(); >>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>> const Node* node = *nd; >>>>>>>>> //get the degree of freedom of node >>>>>>>>> unsigned int dn = node->dof_number(0,0,0); // system 0 , var 0, component 0 >>>>>>>>> //set the ub,lb >>>>>>>>> ub(dn)= A; >>>>>>>>> lb(dn) = B; >>>>>>>>> } >>>>>>>>> //close the vectors >>>>>>>>> ub.close(); >>>>>>>>> lb.close(); >>>>>>>>> >>>>>>>>> You could do that. That's probably the mode we can support almost immediately. >>>>>>>>> The issue here is that you lose some of the "context" by processing each node by >>>>>>>>> itself rather as part of an element. For example what if each element is a different material? >>>>>>>>> I guess in that case that property should be set on the nodes as well. >>>>>>>>> >>>>>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with values on each node. >>>>>>>>> >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> ----------------------------------------------------------------- >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> >>>>>>>>> Dmitry. 
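The node-loop pattern in the snippet above can be sketched in self-contained form. Everything here is a stand-in for the libMesh machinery: the `Node` struct and dof lookup are simplified to a plain index, and the bound vectors are `std::vector` rather than `PetscVector<Number>` (which would additionally need `close()` after filling, as in the quoted code).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Minimal stand-in for libMesh's node/dof machinery: each local node
// maps to one degree of freedom (system 0, variable 0, component 0).
struct Node { unsigned int dof; };

// Fill lower/upper bound vectors for a single-variable system, one
// entry per dof, following the node loop quoted above.
void set_nodal_bounds(const std::vector<Node>& local_nodes,
                      std::vector<double>& lb,
                      std::vector<double>& ub,
                      double lower, double upper)
{
    // Start from "no constraint" so dofs we never visit stay unbounded,
    // mirroring the patch that initializes bounds with +/-Inf.
    const double inf = std::numeric_limits<double>::infinity();
    std::fill(lb.begin(), lb.end(), -inf);
    std::fill(ub.begin(), ub.end(),  inf);

    for (const Node& node : local_nodes) {
        lb[node.dof] = lower;
        ub[node.dof] = upper;
    }
}
```

Iterating over local nodes once, as suggested in the thread, sets each bound exactly one time, so no "latest vs. tightest" conflict between repeated element-wise visits can arise.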
>>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> I hope I understood the whole thing correctly. >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> I don't think setting the variable bound on each node separately is a good idea in general. >>>>>>>>> I'm not sure what you mean by this. Could you clarify? I don't think I'm advocating this. >>>>>>>>> >>>>>>>>> Bounds should be set as a sum on each element's degrees of freedom; this way there should not be any nodal inconsistencies. >>>>>>>>> I'm not sure what you mean here either. A bound is usually not a sum (e.g., a bound on a concentration is 0 <= c <= 1). >>>>>>>>> Perhaps your application expresses the total bound as a sum? I'd be very interested in learning more about that case. >>>>>>>>> >>>>>>>>> OK, here is my thought process: one optimally wants to bound the variable over the whole Element domain between lb, ub. So we would have something like this: >>>>>>>>> >>>>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_{i}(x) \hat{u}_{i} \leq ub, \forall x \in El (Eq-1) >>>>>>>>> >>>>>>>>> so it's not exactly the nodal values (\hat{u}_{i}) that should be bounded, unless it's an FD application where \phi_i(x) is a delta function! That said, I think given a normalized basis, bounding nodal values is a stricter requirement than (Eq-1). If that's what you meant by inconsistencies between node bounds, then the stronger bound would be my first priority. >>>>>>>>> >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> >>>>>>>>> Of course it might run into situations with semi-smooth VI. >>>>>>>>> The type of VI solver is independent of how the bounds are specified: once we know the upper and lower bounds for each element of a Vec, we can solve the VI with either a semismooth method or an active set method. 
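The "stricter than (Eq-1)" claim can be checked concretely: for a basis that is a non-negative partition of unity (e.g. linear Lagrange elements), the interpolant at any point is a convex combination of the nodal values, so nodal bounds imply (Eq-1) pointwise. A small numerical check on the 1D reference element (an illustration of the argument, not code from the thread):

```cpp
#include <cassert>
#include <cmath>

// Linear Lagrange shape functions on the reference element [0,1]:
// phi0(x) = 1 - x, phi1(x) = x. They are non-negative and sum to 1,
// so the interpolant u(x) = phi0*u0 + phi1*u1 is a convex combination
// of the nodal values u0, u1.
double interpolant(double u0, double u1, double x)
{
    return (1.0 - x) * u0 + x * u1;
}

// Verify (Eq-1) pointwise, sampling the element, given that the nodal
// values satisfy lb <= u0, u1 <= ub.
bool eq1_holds(double u0, double u1, double lb, double ub,
               int samples = 101)
{
    for (int k = 0; k < samples; ++k) {
        double x = static_cast<double>(k) / (samples - 1);
        double u = interpolant(u0, u1, x);
        if (u < lb - 1e-14 || u > ub + 1e-14)
            return false;
    }
    return true;
}
```

For higher-order bases the shape functions can go negative, so nodal bounds no longer guarantee (Eq-1) pointwise; that is exactly where the "stricter requirement" caveat in the thread matters.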
>>>>>>>>> >>>>>>>>> I really appreciate your feedback. I hope we can get to the point where things are as clear as possible so that you can >>>>>>>>> profitably use the new capabilities. >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>> Are you familiar with SNESVI? The main thing I'm looking at right now is the libMesh API for specifying >>>>>>>>> the bounds on the variables. It seems to me that the user would ideally want to specify the bounds on the variable >>>>>>>>> one element at a time, since that's the default traversal mode for libMesh. Then, of course, many nodal values >>>>>>>>> are traversed many times, allowing for an inconsistency. The question is: what do you, as a user, expect in that situation? >>>>>>>>> Should the latest bound that the user has set be applied, or the tightest of the bounds the user has supplied? >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> Thanks Dmitry, I look forward to using it. Also, if it's available as a dev version I would love to try to work with it. >>>>>>>>> >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>> We (ANL) are working on a patch to libMesh to enable this functionality. >>>>>>>>> The motivation is to use it through MOOSE, but it will be exposed as a libMesh API. >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> Dear all, >>>>>>>>> >>>>>>>>> Is it possible to >>>>>>>>> use the PETScNonlinearSolver class with SNESVI (PETSc/3.2) to solve a >>>>>>>>> variational inequality problem? 
>>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Ata Mesgarnejad >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> A. Mesgarnejad >>>>>>>>> PhD Student, Research Assistant >>>>>>>>> Mechanical Engineering Department >>>>>>>>> Louisiana State University >>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>> Baton Rouge, La 70803 >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>>> >>>> >>>> -- >>>> A. Mesgarnejad >>>> PhD Student, Research Assistant >>>> Mechanical Engineering Department >>>> Louisiana State University >>>> 2203 Patrick F. 
Taylor Hall >>>> Baton Rouge, La 70803 >>>> >>>> <petsc_dm.patch> >>> >>> >>> <0001-PETSc-DM-incref-bug-fix.patch><0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> >> >> > > |
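Dmitry's API question quoted above — should the latest or the tightest bound win when a shared node is visited once per containing element? — has a simple "tightest wins" resolution that can be sketched in self-contained C++. The types and names below are illustrative stand-ins, not the libMesh API:

```cpp
#include <algorithm>
#include <cassert>
#include <limits>
#include <vector>

// Sketch (not the libMesh API): reconcile per-element bound proposals on
// shared nodes by keeping the tightest bound seen for each dof.
struct BoundProposal {
    std::vector<int> dofs;  // dof indices touched by this element
    double lb;              // element-wise proposed lower bound
    double ub;              // element-wise proposed upper bound
};

void accumulate_tightest(const std::vector<BoundProposal>& elems,
                         std::vector<double>& lb,
                         std::vector<double>& ub) {
    const double inf = std::numeric_limits<double>::infinity();
    std::fill(lb.begin(), lb.end(), -inf);  // start fully unconstrained
    std::fill(ub.begin(), ub.end(), inf);
    for (const BoundProposal& e : elems) {
        for (int d : e.dofs) {
            lb[d] = std::max(lb[d], e.lb);  // tightest lower bound wins
            ub[d] = std::min(ub[d], e.ub);  // tightest upper bound wins
        }
    }
}
```

With this combine rule the result is independent of the element traversal order: a node shared by elements proposing [0, 1] and [0.2, 2] ends up with [0.2, 1], whereas "latest wins" semantics would depend on which element happens to be visited last.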
From: Dmitry K. <ka...@mc...> - 2012-05-09 23:14:45
|
Ata, system.solve() will call it. I assume that by "changing constantly" you mean "once per timestep"? SNESSolve() initiates the call that eventually dispatches to your bounds calculation routine. SNES calls that once per solve. Dmitry. On Wed, May 9, 2012 at 5:22 PM, Ataollah Mesgarnejad <am...@ti... > wrote: > Dmitry, > > A quick question: When does the bounds routine get called? My bounds are > changing constantly do I need to call the bounds function myself or does it > get called for each system.solve()? > > Thanks, > Ata > > On May 5, 2012, at 12:21 PM, Dmitry Karpeev wrote: > > Okay, that helps. > I'm looking further into it. > Dmitry. > > On Sat, May 5, 2012 at 12:08 PM, Ataollah Mesgarnejad < > am...@ti...> wrote: > >> Dmitry, >> >> I just tried it without bounds and I get the same DIVERGED_BREAKDOWN >> error. >> >> Ata >> >> Ata, >> I'm trying to reproduce the problem that you are observing, so I wanted >> to understand better what you are doing there. >> Do you just skip Jacobian evaluation and return from the ComputeJacobian >> call immediately? >> Also, without --use-petsc-dm, how are you able to run with VIRS (I >> remember you mentioned your own wrapper to SNESVI)? Finally, do you see >> this problem when running with SNES (not SNESVI)? I wonder if the issue is >> in the interaction with the bounds. >> >> Thanks for your patience. >> Dmitry. >> >> On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad < >> am...@ti...> wrote: >> >>> >>> On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: >>> >>> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs >>> and return the matrix from the underlying NonlinearImplicitSystem. The >>> reason is that initially the matrix is preallocated by unassembled, and >>> MatDuplicate >>> doesn't want to duplicate an unassembled matrix. I can fix that, but >>> I'm a bit puzzled by the fact that KSP messes >>> with the matrix. Can you run with -snes_view and send me the output? 
>>> >>> Dmitry, >>> >>> Here it is: >>> >>> ------------- with the flag ----------------------- >>> >>> Linear solve converged due to CONVERGED_RTOL iterations 44 >>> Calculating surface energy... done >>> Calculating fracture energy... done >>> elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 2.41046e-01 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> NL step 1, |residual|_2 = 5.10144e-07 >>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>> SNES Object: 1 MPI processes >>> type: virs >>> maximum iterations=50, maximum function evaluations=10000 >>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>> total number of linear solver iterations=3 >>> total number of function evaluations=2 >>> KSP Object: 1 MPI processes >>> type: gmres >>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >>> Orthogonalization with no iterative refinement >>> GMRES: happy breakdown tolerance 1e-30 >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>> left preconditioning >>> using PRECONDITIONED norm type for convergence test >>> PC Object: 1 MPI processes >>> type: ilu >>> ILU: out-of-place factorization >>> 0 levels of fill >>> tolerance for zero pivot 2.22045e-14 >>> using diagonal shift to prevent zero pivot >>> matrix ordering: natural >>> factor fill ratio given 1, needed 1 >>> Factored matrix follows: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> package used to perform factorization: petsc >>> total: nonzeros=2401, allocated nonzeros=2401 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> linear system matrix = precond matrix: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> total: nonzeros=2401, allocated nonzeros=2401 >>> 
total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> SNESLineSearch Object: 1 MPI processes >>> type: basic >>> maxstep=1e+08, minlambda=1e-12 >>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>> maximum iterations=1 >>> Fracture system solved at nonlinear iteration 1 , final nonlinear >>> residual norm: 5.10144e-07 >>> >>> --- without the flag ------ >>> >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 2.41046e-01 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> NL step 1, |residual|_2 = 5.10144e-07 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> NL step 2, |residual|_2 = 1.61153e-12 >>> SNES Object: 1 MPI processes >>> type: virs >>> maximum iterations=50, maximum function evaluations=10000 >>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>> total number of linear solver iterations=6 >>> total number of function evaluations=3 >>> KSP Object: 1 MPI processes >>> type: gmres >>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >>> Orthogonalization with no iterative refinement >>> GMRES: happy breakdown tolerance 1e-30 >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>> left preconditioning >>> using PRECONDITIONED norm type for convergence test >>> PC Object: 1 MPI processes >>> type: ilu >>> ILU: out-of-place factorization >>> 0 levels of fill >>> tolerance for zero pivot 2.22045e-14 >>> using diagonal shift to prevent zero pivot >>> matrix ordering: natural >>> factor fill ratio given 1, needed 1 >>> Factored matrix follows: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> package used to perform factorization: petsc >>> total: nonzeros=2401, allocated nonzeros=2401 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> linear 
system matrix = precond matrix: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> total: nonzeros=2401, allocated nonzeros=2401 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> SNESLineSearch Object: 1 MPI processes >>> type: basic >>> maxstep=1e+08, minlambda=1e-12 >>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>> maximum iterations=1 >>> Fracture system solved at nonlinear iteration 2 , final nonlinear >>> residual norm: 1.61153e-12 >>> >>> >>> >>> Incidentally, there _was_ a reference counting bug that led to a >>> breakdown very much like what you reported. >>> I'm attaching that patch and a small improvement to DMVariableBounds. I >>> need to figure out how to go about creating matrices from DM_libMesh. >>> >>> Let me know if you found anything or if I can be of any help. >>> >>> Thanks, >>> Ata >>> >>> Dmitry. >>> >>> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad < >>> am...@ti...> wrote: >>> >>>> Dmitry, >>>> >>>> Is there a way so SNES doesn't go calculate the Jacobian for every >>>> iteration (my Jac does't change)? I used to use a flag to check if it's >>>> assembled but when I do that with petsc-dm, KSP associated with SNES gives >>>> DIVERGED_BREAKDOWN for the second iteration! looks like petsc-dm doesn't >>>> make a copy of the matrix so when it's used in KSP solve it gets jumbled up. >>>> >>>> Here is how it looks >>>> >>>> -with the flag: >>>> >>>> applying variable bounds ... done >>>> NL step 0, |residual|_2 = 8.90674e-04 >>>> SNES Jacobian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>> NL step 1, |residual|_2 = 2.90987e-05 >>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>> >>>> this happens for every SNES solve. >>>> ----------------------- >>>> >>>> -without it: >>>> >>>> applying variable bounds ... done >>>> NL step 0, |residual|_2 = 6.08030e-02 >>>> SNES hessian assembly... 
done >>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>> NL step 1, |residual|_2 = 2.61557e-07 >>>> SNES hessian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>> NL step 2, |residual|_2 = 1.35354e-12 >>>> Fracture system solved at nonlinear iteration 2 , final nonlinear >>>> residual norm: 1.35354e-12 >>>> >>>> ps: these are not from the same time step but you get the idea. >>>> >>>> Best, >>>> Ata >>>> >>>> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >>>> >>>> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the >>>> latest petsc-dev: >>>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>>> If you decide you want to upgrade to this (or later) petsc-dev, you'd >>>> need a small patch for >>>> petsc_dm_nonlinear_solver.C to account for the latest API >>>> changes (attached). >>>> I am working on a bigger patch that adds more functionality to >>>> DM_libMesh and streamline >>>> miscellaneous_ex7. That will take some time to percolate up into the >>>> libMesh tree. In the >>>> meantime you should be able to run with the small fix I've attached. >>>> >>>> Let me know how it works. >>>> Dmitry. >>>> >>>> >>>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad < >>>> am...@ti...> wrote: >>>> >>>>> Dmitry, >>>>> >>>>> I believe that the error is indeed due to petsc-dev version. I'm going >>>>> to check this during the next few hours and get back to you. >>>>> >>>>> Thanks, >>>>> Ata >>>>> >>>>> >>>>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...>wrote: >>>>> >>>>>> Ata, >>>>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on >>>>>> 2 procs with a relatively recent version >>>>>> of petsc-dev. Can you send me the full PETSc error output? >>>>>> >>>>>> Thanks. >>>>>> Dmitry. 
>>>>>> >>>>>> >>>>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad < >>>>>> am...@ti...> wrote: >>>>>> >>>>>>> >>>>>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>>>>>> >>>>>>> Ata, >>>>>>> example7 segfaults when you run in parallel? >>>>>>> Does it do it on 2 procs? >>>>>>> >>>>>>> Yes and yes. >>>>>>> >>>>>>> here is the stack: >>>>>>> (gdb) where >>>>>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from >>>>>>> /lib64/libc.so.6 >>>>>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>>>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, >>>>>>> char const*, char const*, char const*, int, PetscErrorType, char const*, >>>>>>> void*) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char >>>>>>> const*, char const*, int, PetscErrorType, char const*, ...) 
() >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #8 <signal handler called> >>>>>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, >>>>>>> _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), >>>>>>> void*) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int >>>>>>> (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to >>>>>>> quit--- >>>>>>> _p_Vec*, void*), void**) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int >>>>>>> (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>> >>>>>>> #15 0x00000000010976ab in >>>>>>> libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, >>>>>>> jac_in=..., x_in=..., r_in=...) 
>>>>>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>>>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>>>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>>>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, >>>>>>> dt=@0x7fff95edca28) >>>>>>> at biharmonic.C:110 >>>>>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at >>>>>>> biharmonic.C:152 >>>>>>> warning: Range for type (null) has invalid bounds 0..-111 >>>>>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>>>>> at miscellaneous_ex7.C:64 >>>>>>> >>>>>>> Ata >>>>>>> >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < >>>>>>> am...@ti...> wrote: >>>>>>> >>>>>>>> Dmitry, >>>>>>>> >>>>>>>> I had a very good experience on single processor but on multiple >>>>>>>> processors I get segmentation error at line where we call solve (I tried >>>>>>>> both my code and example7) . I tried your example without -vi >>>>>>>> --use-petsc-dm and it works just fine. Right now I'm trying to compile a >>>>>>>> debug version and look into it; meanwhile I thought it would be a good idea >>>>>>>> if I let you know. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Ata >>>>>>>> >>>>>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < >>>>>>>> am...@ti...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >>>>>>>>> am...@ti...> wrote: >>>>>>>>> >>>>>>>>>> Dmitry, >>>>>>>>>> >>>>>>>>>> I had time to come back and work on the SNESVI/Libmesh program. >>>>>>>>>> The way you setup your code is particularly hard for me to follow in my own >>>>>>>>>> code since it requires a significant amount of extra work (to define >>>>>>>>>> classes and setup the inheritances..). 
What I did instead was to try add a >>>>>>>>>> TransientNonlinearSystem to my EquationSystems object: >>>>>>>>>> >>>>>>>>>> TransientNonlinearImplicitSystem & fracture_system = >>>>>>>>>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>>>>> >>>>>>>>>> and then to add the bounds and Objective and Residual functions: >>>>>>>>>> >>>>>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>>>>> fracture_system.nonlinear_solver->jacobian= PsiFormJacobian; >>>>>>>>>> fracture_system.nonlinear_solver->residual= PsiFormResidual; >>>>>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>>>>> >>>>>>>>>> what I see now is that SNES solve does not apply the bounds. So >>>>>>>>>> here is my questions: >>>>>>>>>> >>>>>>>>>> 1- Is there inherently wrong with setting up the system >>>>>>>>>> specially the bounds the easy way like I did? >>>>>>>>>> >>>>>>>>> Ata, >>>>>>>>> Doing things this "simple" way is just fine: there is no reason to >>>>>>>>> follow the relatively convoluted structure I used in that >>>>>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>>>>> >>>>>>>>>> 2- Where is the right SNES type is chosen ? I guess this should >>>>>>>>>> be hard wired somewhere in NonlinearImplicitSystem! >>>>>>>>>> 3- When I use -snes_view (with your example or my own code ) I >>>>>>>>>> get snes_type ls? shouldn't this be VI ? >>>>>>>>>> >>>>>>>>> >>>>>>>>> What's likely missing is the command-line option --use-petsc-dm. >>>>>>>>> It's necessary since libMesh wants to preserve the usual behavior as >>>>>>>>> the default. In particular, because SNESVI and the DM hookup aren't really >>>>>>>>> possible prior to petsc-3.2. >>>>>>>>> In fact, it might not be possible without a recent petsc-dev (let >>>>>>>>> me verify that, once I get to my computer). >>>>>>>>> >>>>>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>>>>> >>>>>>>>> I just saw that in the example help line and used it. 
It works >>>>>>>>> like a charm. >>>>>>>>> >>>>>>>> Great! >>>>>>>> Dmitry. >>>>>>>> >>>>>>>>> I'll go on check for the convergence and see what I can get. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> Thanks. >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>>>>> >>>>>>>>>> Convergence is something we might need to work on -- I noticed >>>>>>>>>> some chatter in a simple bounded biharmonic heat equation (a stripped down >>>>>>>>>> version of Cahn-Hilliard). >>>>>>>>>> Thanks. >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>>>>>>>>> am...@ti...> wrote: >>>>>>>>>> >>>>>>>>>>> Thanks Dmitry, >>>>>>>>>>> >>>>>>>>>>> It would be wonderful. I made a wrapper for SNESVI in myself for >>>>>>>>>>> my program but had difficulty achieving good convergence. Right now I am >>>>>>>>>>> occupied with other issues but I'll get back to this as soon as I get a >>>>>>>>>>> chance. >>>>>>>>>>> >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>>>>> >>>>>>>>>>> Ata, >>>>>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>>>>> If interested, pull a recent revision from sourceforge and take >>>>>>>>>>> a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>>>>> Let me know, if you have any questions. >>>>>>>>>>> >>>>>>>>>>> Thanks. >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev < >>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>> >>>>>>>>>>>> Okay, thanks for your feedback. >>>>>>>>>>>> I'll keep you updated on this work. >>>>>>>>>>>> >>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev < >>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Ata, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> A couple of things: >>>>>>>>>>>>>>>> 1. SNESVI solves a VI under what's known as "box >>>>>>>>>>>>>>>> constraints": the Vec x that the residual operates on is entrywise between >>>>>>>>>>>>>>>> Vecs xlow and xhigh. In order to enable >>>>>>>>>>>>>>>> constraints on a linear combination of the Vec entries (as >>>>>>>>>>>>>>>> you suggest), the problem >>>>>>>>>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>>>>>>>>> combinations. This, of course, >>>>>>>>>>>>>>>> changes the Jacobian, preconditioner, etc. The user >>>>>>>>>>>>>>>> provides only the Jacobian for the original system, so libMesh would have >>>>>>>>>>>>>>>> to automatically construct the Jacobian for the extended system, and >>>>>>>>>>>>>>>> precondition it using the user provided PC for the original subsystem. This >>>>>>>>>>>>>>>> is not necessarily the unreasonable thing to do in many circumstances, but >>>>>>>>>>>>>>>> will take more thinking than a straighforward nodal constraint. The user >>>>>>>>>>>>>>>> API for specifying the constraints, however, becomes rather straighforward. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 2. 
In terms of FEM convergence, constraining at the nodes >>>>>>>>>>>>>>>> is sufficient (see, for example, >>>>>>>>>>>>>>>> Glowinski's book: >>>>>>>>>>>>>>>> http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C >>>>>>>>>>>>>>>> ) >>>>>>>>>>>>>>>> You can show that if you impose the nodal constraints, in >>>>>>>>>>>>>>>> the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> In this case no funny manipulation of the Jacobian or the >>>>>>>>>>>>>>>> PC is necessary, but the user API to specify the constraint is a bit >>>>>>>>>>>>>>>> amiguous: how do you constrain the node? From a containing element? Then >>>>>>>>>>>>>>>> there are multiple, possibly incompatible, constraints. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Or you can simply go through all the nodes in the system >>>>>>>>>>>>>>> once using node iterator. Here is how I think you can do it: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ------------------------------------- >>>>>>>>>>>>>>> //make two PetscVector ub,lb from two variables you already >>>>>>>>>>>>>>> added to the system >>>>>>>>>>>>>>> NumericVector<Number> &ub_sys = >>>>>>>>>>>>>>> fracture_system.get_vector("ub"); >>>>>>>>>>>>>>> NumericVector<Number> &lb_sys = >>>>>>>>>>>>>>> fracture_system.get_vector("lb"); >>>>>>>>>>>>>>> PetscVector<Number>&lb = >>>>>>>>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>>>>>>>> PetscVector<Number>&ub = >>>>>>>>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> *then in the bound routine do:* >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> // Get a constant reference to the mesh object. 
>>>>>>>>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>>>>>>>> //Get the iterator >>>>>>>>>>>>>>> MeshBase::const_node_iterator nd = >>>>>>>>>>>>>>> mesh.local_nodes_begin(); >>>>>>>>>>>>>>> const MeshBase::const_node_iterator end_nd = >>>>>>>>>>>>>>> mesh.local_nodes_end(); >>>>>>>>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>>>>>>>> const Node* node = *nd; >>>>>>>>>>>>>>> //get the degree of freedom of node >>>>>>>>>>>>>>> unsigned int dn = node->dof_number(0,0,0); // >>>>>>>>>>>>>>> system 0 , var 0, component 0 >>>>>>>>>>>>>>> //set the ub,lb >>>>>>>>>>>>>>> ub(dn)= A; >>>>>>>>>>>>>>> lb(dn) = B; >>>>>>>>>>>>>>> } >>>>>>>>>>>>>>> //close the vectors >>>>>>>>>>>>>>> ub.close(); >>>>>>>>>>>>>>> lb.close(); >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> You could do that. That's probably the mode we can support >>>>>>>>>>>>>> almost immediately. >>>>>>>>>>>>>> The issue here is that you lose some of the "context" by >>>>>>>>>>>>>> processing each node by >>>>>>>>>>>>>> itself rather as part of an element. For example what if >>>>>>>>>>>>>> each element is a different material? >>>>>>>>>>>>>> I guess in that case that property should be set on the nodes >>>>>>>>>>>>>> as well. >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL >>>>>>>>>>>>> (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with >>>>>>>>>>>>> values on each node. >>>>>>>>>>>>> >>>>>>>>>>>>> Ata >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ----------------------------------------------------------------- >>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I hope I understood the whole thing correctly. 
>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I don't think setting the variable bound on each >>>>>>>>>>>>>>>>>>> node separately is a good idea in general. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? >>>>>>>>>>>>>>>>>> I don't think I'm advocating this. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Bounds should be set as a sum on each elements degrees of >>>>>>>>>>>>>>>>>>> freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I'm not sure what you mean here either. A bound is >>>>>>>>>>>>>>>>>> usually not a sum (e.g., a bound on a concentration is 0 <= c <=1). >>>>>>>>>>>>>>>>>> Perhaps your application expresses the total bound as a >>>>>>>>>>>>>>>>>> sum? I'd be very interested in learning more about that case. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> OK here is my thought process: one optimally wants to >>>>>>>>>>>>>>>>> bound the variable over the whole Element domain between lb,ub. So we would >>>>>>>>>>>>>>>>> have something like this: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> lb \leq \sum_{i=1}^{n=NDOF} \phi_{i}(x) \hat{u}_{i} \leq >>>>>>>>>>>>>>>>> ub, \forall x \in El >>>>>>>>>>>>>>>>> (Eq-1) >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> so it's not exactly that the nodal values should be >>>>>>>>>>>>>>>>> bounded (\hat{u}_{i}) unless it's a FD application where \phi_i(x) is delta >>>>>>>>>>>>>>>>> function !! That said I think given a normalized basis, bounding nodal >>>>>>>>>>>>>>>>> values is stricter requirement that (Eq-1). 
if that's what you meant by >>>>>>>>>>>>>>>>> inconsistencies between node bounds then the stronger bound would be my >>>>>>>>>>>>>>>>> first priority. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Of course it might run into a situations with >>>>>>>>>>>>>>>>>>> semi-smooth VI. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> The type of VI solver is independent of how the bounds >>>>>>>>>>>>>>>>>> are specified: once we know the upper and lower bounds for each element of >>>>>>>>>>>>>>>>>> a Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>>>>>>>>> method. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I really appreciate your feedback. I hope we can get to >>>>>>>>>>>>>>>>>> the point where things are as clear as possible so that you can >>>>>>>>>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm >>>>>>>>>>>>>>>>>>>> looking at right now is the libMesh API for specifying >>>>>>>>>>>>>>>>>>>> the bounds on the variables. It seems to me that the >>>>>>>>>>>>>>>>>>>> user would ideally want to specify the bounds on the variable >>>>>>>>>>>>>>>>>>>> one element at a time, since that's the default >>>>>>>>>>>>>>>>>>>> traversal mode for libMesh. Then, of course, many nodal values >>>>>>>>>>>>>>>>>>>> are traversed many time, allowing for an inconsistency. >>>>>>>>>>>>>>>>>>>> The question is, what you, as a user, expect in that situation? >>>>>>>>>>>>>>>>>>>> Should the latest bound that the user has set be >>>>>>>>>>>>>>>>>>>> applied or the tightest of the bounds the user has supplied? >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Dmitry. 
|
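The answer at the top of this message — the bounds routine is dispatched once per SNESSolve(), i.e. once per system.solve() — can be mimicked by a stand-alone sketch. All names here (MiniVISolver, bounds_calls) are hypothetical stand-ins for illustration, not the libMesh or PETSc API:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Stand-in sketch of the dispatch pattern described above: the user attaches
// a bounds routine once; the solver invokes it at the start of every solve(),
// mirroring how SNESSolve() dispatches to the user's bounds calculation.
struct MiniVISolver {
    std::function<void(std::vector<double>&, std::vector<double>&)> bounds;
    std::vector<double> lb, ub;  // current box bounds for the solution vector
    int bounds_calls = 0;        // how many times the callback has fired

    void solve() {
        if (bounds) {
            bounds(lb, ub);      // refresh bounds once per solve
            ++bounds_calls;
        }
        // ... a Newton iteration restricted to [lb, ub] would run here ...
    }
};
```

Because the callback runs at the start of every solve(), bounds that depend on the current time or state are refreshed automatically; no manual call per timestep is needed, which is exactly the behavior Ata asks about below.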
From: Ataollah M. <am...@ti...> - 2012-05-09 23:22:18
|
Perfect. That's exactly what I need. Thanks, Ata On May 9, 2012, at 6:14 PM, Dmitry Karpeev wrote: > Ata, > system.solve() will call it. > I assume that by "changing constantly" you mean "once per timestep"? > SNESSolve() initiates the call that eventually dispatches to your bounds calculation routine. > SNES calls that once per solve. > Dmitry. > > On Wed, May 9, 2012 at 5:22 PM, Ataollah Mesgarnejad <am...@ti...> wrote: > Dmitry, > > A quick question: When does the bounds routine get called? My bounds are changing constantly; do I need to call the bounds function myself, or does it get called for each system.solve()? > > Thanks, > Ata > > On May 5, 2012, at 12:21 PM, Dmitry Karpeev wrote: > >> Okay, that helps. >> I'm looking further into it. >> Dmitry. >> >> On Sat, May 5, 2012 at 12:08 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >> Dmitry, >> >> I just tried it without bounds and I get the same DIVERGED_BREAKDOWN error. >> >> Ata >>> Ata, >>> I'm trying to reproduce the problem that you are observing, so I wanted to understand better what you are doing there. >>> Do you just skip Jacobian evaluation and return from the ComputeJacobian call immediately? >>> Also, without --use-petsc-dm, how are you able to run with VIRS (I remember you mentioned your own wrapper to SNESVI)? Finally, do you see this problem when running with SNES (not SNESVI)? I wonder if the issue is in the interaction with the bounds. >>> >>> Thanks for your patience. >>> Dmitry. >>> >>> On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>> >>> On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: >>> >>>> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs and returns the matrix from the underlying NonlinearImplicitSystem. The reason is that initially the matrix is preallocated but unassembled, and MatDuplicate >>>> doesn't want to duplicate an unassembled matrix. I can fix that, but I'm a bit puzzled by the fact that KSP messes >>>> with the matrix. 
Can you run with -snes_view and send me the output? >>>> >>> >>> Dmitry, >>> >>> Here it is: >>> >>> ------------- with the flag ----------------------- >>> >>> Linear solve converged due to CONVERGED_RTOL iterations 44 >>> Calculating surface energy... done >>> Calculating fracture energy... done >>> elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 2.41046e-01 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> NL step 1, |residual|_2 = 5.10144e-07 >>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>> SNES Object: 1 MPI processes >>> type: virs >>> maximum iterations=50, maximum function evaluations=10000 >>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>> total number of linear solver iterations=3 >>> total number of function evaluations=2 >>> KSP Object: 1 MPI processes >>> type: gmres >>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >>> GMRES: happy breakdown tolerance 1e-30 >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>> left preconditioning >>> using PRECONDITIONED norm type for convergence test >>> PC Object: 1 MPI processes >>> type: ilu >>> ILU: out-of-place factorization >>> 0 levels of fill >>> tolerance for zero pivot 2.22045e-14 >>> using diagonal shift to prevent zero pivot >>> matrix ordering: natural >>> factor fill ratio given 1, needed 1 >>> Factored matrix follows: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> package used to perform factorization: petsc >>> total: nonzeros=2401, allocated nonzeros=2401 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> linear system matrix = precond matrix: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> 
total: nonzeros=2401, allocated nonzeros=2401 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> SNESLineSearch Object: 1 MPI processes >>> type: basic >>> maxstep=1e+08, minlambda=1e-12 >>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>> maximum iterations=1 >>> Fracture system solved at nonlinear iteration 1 , final nonlinear residual norm: 5.10144e-07 >>> >>> --- without the flag ------ >>> >>> applying variable bounds ... done >>> NL step 0, |residual|_2 = 2.41046e-01 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> NL step 1, |residual|_2 = 5.10144e-07 >>> SNES Jacobian assembly... done >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> NL step 2, |residual|_2 = 1.61153e-12 >>> SNES Object: 1 MPI processes >>> type: virs >>> maximum iterations=50, maximum function evaluations=10000 >>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>> total number of linear solver iterations=6 >>> total number of function evaluations=3 >>> KSP Object: 1 MPI processes >>> type: gmres >>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >>> GMRES: happy breakdown tolerance 1e-30 >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>> left preconditioning >>> using PRECONDITIONED norm type for convergence test >>> PC Object: 1 MPI processes >>> type: ilu >>> ILU: out-of-place factorization >>> 0 levels of fill >>> tolerance for zero pivot 2.22045e-14 >>> using diagonal shift to prevent zero pivot >>> matrix ordering: natural >>> factor fill ratio given 1, needed 1 >>> Factored matrix follows: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> package used to perform factorization: petsc >>> total: nonzeros=2401, allocated nonzeros=2401 >>> total number of mallocs used during MatSetValues calls 
=0 >>> not using I-node routines >>> linear system matrix = precond matrix: >>> Matrix Object: 1 MPI processes >>> type: seqaij >>> rows=153, cols=153 >>> total: nonzeros=2401, allocated nonzeros=2401 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> SNESLineSearch Object: 1 MPI processes >>> type: basic >>> maxstep=1e+08, minlambda=1e-12 >>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>> maximum iterations=1 >>> Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.61153e-12 >>> >>> >>> >>>> Incidentally, there _was_ a reference counting bug that led to a breakdown very much like what you reported. >>>> I'm attaching that patch and a small improvement to DMVariableBounds. I need to figure out how to go about creating matrices from DM_libMesh. >>>> >>> >>> Let me know if you found anything or if I can be of any help. >>> >>> Thanks, >>> Ata >>> >>>> Dmitry. >>>> >>>> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>> Dmitry, >>>> >>>> Is there a way so SNES doesn't recalculate the Jacobian for every iteration (my Jacobian doesn't change)? I used to use a flag to check if it's assembled, but when I do that with petsc-dm, the KSP associated with SNES gives DIVERGED_BREAKDOWN for the second iteration! It looks like petsc-dm doesn't make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up. >>>> >>>> Here is how it looks: >>>> >>>> -with the flag: >>>> >>>> applying variable bounds ... done >>>> NL step 0, |residual|_2 = 8.90674e-04 >>>> SNES Jacobian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>> NL step 1, |residual|_2 = 2.90987e-05 >>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>> >>>> this happens for every SNES solve. >>>> ----------------------- >>>> >>>> -without it: >>>> >>>> applying variable bounds ... 
done >>>> NL step 0, |residual|_2 = 6.08030e-02 >>>> SNES hessian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>> NL step 1, |residual|_2 = 2.61557e-07 >>>> SNES hessian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>> NL step 2, |residual|_2 = 1.35354e-12 >>>> Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.35354e-12 >>>> >>>> ps: these are not from the same time step but you get the idea. >>>> >>>> Best, >>>> Ata >>>> >>>> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >>>> >>>>> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev: >>>>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>>>> If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for >>>>> petsc_dm_nonlinear_solver.C to account for the latest API changes (attached). >>>>> I am working on a bigger patch that adds more functionality to DM_libMesh and streamline >>>>> miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the >>>>> meantime you should be able to run with the small fix I've attached. >>>>> >>>>> Let me know how it works. >>>>> Dmitry. >>>>> >>>>> >>>>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>> Dmitry, >>>>> >>>>> I believe that the error is indeed due to petsc-dev version. I'm going to check this during the next few hours and get back to you. >>>>> >>>>> Thanks, >>>>> Ata >>>>> >>>>> >>>>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>> Ata, >>>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version >>>>> of petsc-dev. Can you send me the full PETSc error output? >>>>> >>>>> Thanks. >>>>> Dmitry. 
>>>>> >>>>> >>>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>> >>>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>>>> >>>>>> Ata, >>>>>> example7 segfaults when you run in parallel? >>>>>> Does it do it on 2 procs? >>>>> Yes and yes. >>>>> >>>>> here is the stack: >>>>> (gdb) where >>>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 >>>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, void*) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, ...) 
() >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #8 <signal handler called> >>>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>> >>>>> #15 0x00000000010976ab in libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, jac_in=..., x_in=..., r_in=...) 
>>>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, dt=@0x7fff95edca28) >>>>> at biharmonic.C:110 >>>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at biharmonic.C:152 >>>>> warning: Range for type (null) has invalid bounds 0..-111 >>>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>>> at miscellaneous_ex7.C:64 >>>>> >>>>> Ata >>>>> >>>>>> >>>>>> Dmitry. >>>>>> >>>>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> Dmitry, >>>>>> >>>>>> I had a very good experience on single processor but on multiple processors I get segmentation error at line where we call solve (I tried both my code and example7) . I tried your example without -vi --use-petsc-dm and it works just fine. Right now I'm trying to compile a debug version and look into it; meanwhile I thought it would be a good idea if I let you know. >>>>>> >>>>>> Thanks, >>>>>> Ata >>>>>> >>>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> >>>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> Dmitry, >>>>>>>> >>>>>>>> I had time to come back and work on the SNESVI/Libmesh program. The way you setup your code is particularly hard for me to follow in my own code since it requires a significant amount of extra work (to define classes and setup the inheritances..). 
What I did instead was to try to add a TransientNonlinearSystem to my EquationSystems object: >>>>>>>> >>>>>>>> TransientNonlinearImplicitSystem & fracture_system = equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>>> >>>>>>>> and then to add the bounds and Objective and Residual functions: >>>>>>>> >>>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>>> fracture_system.nonlinear_solver->jacobian= PsiFormJacobian; >>>>>>>> fracture_system.nonlinear_solver->residual= PsiFormResidual; >>>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>>> >>>>>>>> what I see now is that the SNES solve does not apply the bounds. So here are my questions: >>>>>>>> >>>>>>>> 1- Is there anything inherently wrong with setting up the system, especially the bounds, the easy way like I did? >>>>>>>> Ata, >>>>>>>> Doing things this "simple" way is just fine: there is no reason to follow the relatively convoluted structure I used in that >>>>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>>>> 2- Where is the right SNES type chosen? I guess this should be hard-wired somewhere in NonlinearImplicitSystem! >>>>>>>> 3- When I use -snes_view (with your example or my own code) I get snes_type ls? Shouldn't this be VI? >>>>>>>> >>>>>>>> What's likely missing is the command-line option --use-petsc-dm. It's necessary since libMesh wants to preserve the usual behavior as the default. In particular, because SNESVI and the DM hookup aren't really possible prior to petsc-3.2. >>>>>>>> In fact, it might not be possible without a recent petsc-dev (let me verify that, once I get to my computer). >>>>>>>> >>>>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>>>> >>>>>>> >>>>>>> I just saw that in the example help line and used it. It works like a charm. >>>>>>> Great! >>>>>>> Dmitry. >>>>>>> I'll go on to check the convergence and see what I can get. >>>>>>> >>>>>>> Thanks, >>>>>>> Ata >>>>>>>> Thanks. 
>>>>>>>> >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Ata >>>>>>>> >>>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>>> >>>>>>>>> Convergence is something we might need to work on -- I noticed some chatter in a simple bounded biharmonic heat equation (a stripped down version of Cahn-Hilliard). >>>>>>>>> Thanks. >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> Thanks Dmitry, >>>>>>>>> >>>>>>>>> It would be wonderful. I made a wrapper for SNESVI myself for my program but had difficulty achieving good convergence. Right now I am occupied with other issues but I'll get back to this as soon as I get a chance. >>>>>>>>> >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>>> >>>>>>>>>> Ata, >>>>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>>>> If interested, pull a recent revision from sourceforge and take a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>>>> Let me know if you have any questions. >>>>>>>>>> >>>>>>>>>> Thanks. >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>> Okay, thanks for your feedback. >>>>>>>>>> I'll keep you updated on this work. >>>>>>>>>> >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>> Ata, >>>>>>>>>> >>>>>>>>>> A couple of things: >>>>>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": the Vec x that the residual operates on is entrywise between Vecs xlow and xhigh. 
In order to enable >>>>>>>>>> constraints on a linear combination of the Vec entries (as you suggest), the problem >>>>>>>>>> has to be augmented with extra variables -- these linear combinations. This, of course, >>>>>>>>>> changes the Jacobian, preconditioner, etc. The user provides only the Jacobian for the original system, so libMesh would have to automatically construct the Jacobian for the extended system, and precondition it using the user-provided PC for the original subsystem. This is not necessarily an unreasonable thing to do in many circumstances, but it will take more thinking than a straightforward nodal constraint. The user API for specifying the constraints, however, becomes rather straightforward. >>>>>>>>>> >>>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes is sufficient (see, for example, >>>>>>>>>> Glowinski's book: http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C) >>>>>>>>>> You can show that if you impose the nodal constraints, in the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>>> >>>>>>>>>> In this case no funny manipulation of the Jacobian or the PC is necessary, but the user API to specify the constraint is a bit ambiguous: how do you constrain the node? From a containing element? Then there are multiple, possibly incompatible, constraints. >>>>>>>>>> Or you can simply go through all the nodes in the system once using a node iterator. 
Here is how I think you can do it: >>>>>>>>>> >>>>>>>>>> ------------------------------------- >>>>>>>>>> //make two PetscVector ub,lb from two variables you already added to the system >>>>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>>>>> PetscVector<Number>&lb = libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>>> PetscVector<Number>&ub = libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>>> >>>>>>>>>> then in the bound routine do: >>>>>>>>>> >>>>>>>>>> // Get a constant reference to the mesh object. >>>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>>> //Get the iterator >>>>>>>>>> MeshBase::const_node_iterator nd = mesh.local_nodes_begin(); >>>>>>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end(); >>>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>>> const Node* node = *nd; >>>>>>>>>> //get the degree of freedom of node >>>>>>>>>> unsigned int dn = node->dof_number(0,0,0); // system 0 , var 0, component 0 >>>>>>>>>> //set the ub,lb >>>>>>>>>> ub(dn)= A; >>>>>>>>>> lb(dn) = B; >>>>>>>>>> } >>>>>>>>>> //close the vectors >>>>>>>>>> ub.close(); >>>>>>>>>> lb.close(); >>>>>>>>>> >>>>>>>>>> You could do that. That's probably the mode we can support almost immediately. >>>>>>>>>> The issue here is that you lose some of the "context" by processing each node by >>>>>>>>>> itself rather as part of an element. For example what if each element is a different material? >>>>>>>>>> I guess in that case that property should be set on the nodes as well. >>>>>>>>>> >>>>>>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with values on each node. >>>>>>>>>> >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> Dmitry. >>>>>>>>>> ----------------------------------------------------------------- >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Dmitry. 
>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>> I hope I understood the whole thing correctly. >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>> I don't think setting the variable bound on each node separately is a good idea in general. >>>>>>>>>> I'm not sure what you mean by this. Could you clarify? I don't think I'm advocating this. >>>>>>>>>> >>>>>>>>>> Bounds should be set as a sum on each element's degrees of freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>> I'm not sure what you mean here either. A bound is usually not a sum (e.g., a bound on a concentration is 0 <= c <= 1). >>>>>>>>>> Perhaps your application expresses the total bound as a sum? I'd be very interested in learning more about that case. >>>>>>>>>> >>>>>>>>>> OK, here is my thought process: one optimally wants to bound the variable over the whole element domain between lb and ub. So we would have something like this: >>>>>>>>>> >>>>>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_{i}(x)\,\hat{u}_{i} \leq ub, \quad \forall x \in El (Eq-1) >>>>>>>>>> >>>>>>>>>> so it's not exactly that the nodal values (\hat{u}_{i}) should be bounded, unless it's an FD application where \phi_i(x) is a delta function! That said, I think that given a normalized basis, bounding the nodal values is a stricter requirement than (Eq-1). If that's what you meant by inconsistencies between node bounds, then the stronger bound would be my first priority. >>>>>>>>>> >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Of course it might run into a situation with semi-smooth VI. >>>>>>>>>> The type of VI solver is independent of how the bounds are specified: once we know the upper and lower bounds for each element of a Vec, we can solve the VI with either a semismooth method or an active set method. 
>>>>>>>>>> >>>>>>>>>> I really appreciate your feedback. I hope we can get to the point where things are as clear as possible so that you can >>>>>>>>>> profitably use the new capabilities. >>>>>>>>>> >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>> Are you familiar with SNESVI? The main thing I'm looking at right now is the libMesh API for specifying >>>>>>>>>> the bounds on the variables. It seems to me that the user would ideally want to specify the bounds on the variable >>>>>>>>>> one element at a time, since that's the default traversal mode for libMesh. Then, of course, many nodal values >>>>>>>>>> are traversed many times, allowing for an inconsistency. The question is, what do you, as a user, expect in that situation? >>>>>>>>>> Should the latest bound that the user has set be applied, or the tightest of the bounds the user has supplied? >>>>>>>>>> >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>> Thanks Dmitry, I look forward to using it. Also, if it's available as a dev version, I would love to try to work with it. >>>>>>>>>> >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>> We (ANL) are working on a patch to libMesh to enable this functionality. >>>>>>>>>> The motivation is to use it through MOOSE, but it will be exposed as a libMesh API. >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>> Dear all, >>>>>>>>>> >>>>>>>>>> I wonder if there is a compatible version. Is it possible to >>>>>>>>>> use the PETScNonlinearSolver class with SNESVI (PETSc/3.2) to solve a >>>>>>>>>> variational inequality problem? 
>>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Ata Mesgarnejad >>>>>>>>>> -- >>>>>>>>>> A. Mesgarnejad >>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>> Mechanical Engineering Department >>>>>>>>>> Louisiana State University >>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>> Baton Rouge, La 70803 >>>>> <petsc_dm.patch> >>>> <0001-PETSc-DM-incref-bug-fix.patch><0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> >>> >> > |
From: Ataollah M. <am...@ti...> - 2012-06-12 17:12:17
|
Dmitry, I think you have made a change to the petsc-dm solver; I can't seem to access it anymore using --use-petsc-dm. Can you please tell me how to use it again? Thanks, Ata On Wed, May 9, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti... > wrote: > Perfect. That's exactly what I need. > > Thanks, > Ata > > On May 9, 2012, at 6:14 PM, Dmitry Karpeev wrote: > > Ata, > system.solve() will call it. > I assume that by "changing constantly" you mean "once per timestep"? > SNESSolve() initiates the call that eventually dispatches to your bounds > calculation routine. > SNES calls that once per solve. > Dmitry. > > On Wed, May 9, 2012 at 5:22 PM, Ataollah Mesgarnejad < > am...@ti...> wrote: > >> Dmitry, >> >> A quick question: When does the bounds routine get called? My bounds are >> changing constantly; do I need to call the bounds function myself, or does it >> get called for each system.solve()? >> >> Thanks, >> Ata >> >> On May 5, 2012, at 12:21 PM, Dmitry Karpeev wrote: >> >> Okay, that helps. >> I'm looking further into it. >> Dmitry. >> >> On Sat, May 5, 2012 at 12:08 PM, Ataollah Mesgarnejad < >> am...@ti...> wrote: >> >>> Dmitry, >>> >>> I just tried it without bounds and I get the same DIVERGED_BREAKDOWN >>> error. >>> >>> Ata >>> >>> Ata, >>> I'm trying to reproduce the problem that you are observing, so I wanted >>> to understand better what you are doing there. >>> Do you just skip Jacobian evaluation and return from the ComputeJacobian >>> call immediately? >>> Also, without --use-petsc-dm, how are you able to run with VIRS (I >>> remember you mentioned your own wrapper to SNESVI)? Finally, do you see >>> this problem when running with SNES (not SNESVI)? I wonder if the issue is >>> in the interaction with the bounds. >>> >>> Thanks for your patience. >>> Dmitry. 
>>> >>> On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad < >>> am...@ti...> wrote: >>> >>>> >>>> On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: >>>> >>>> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs >>>> and return the matrix from the underlying NonlinearImplicitSystem. The >>>> reason is that initially the matrix is preallocated by unassembled, and >>>> MatDuplicate >>>> doesn't want to duplicate an unassembled matrix. I can fix that, but >>>> I'm a bit puzzled by the fact that KSP messes >>>> with the matrix. Can you run with -snes_view and send me the output? >>>> >>>> Dmitry, >>>> >>>> Here it is: >>>> >>>> ------------- with the flag ----------------------- >>>> >>>> Linear solve converged due to CONVERGED_RTOL iterations 44 >>>> Calculating surface energy... done >>>> Calculating fracture energy... done >>>> elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 >>>> applying variable bounds ... done >>>> NL step 0, |residual|_2 = 2.41046e-01 >>>> SNES Jacobian assembly... 
done >>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>> NL step 1, |residual|_2 = 5.10144e-07 >>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>> SNES Object: 1 MPI processes >>>> type: virs >>>> maximum iterations=50, maximum function evaluations=10000 >>>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>>> total number of linear solver iterations=3 >>>> total number of function evaluations=2 >>>> KSP Object: 1 MPI processes >>>> type: gmres >>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >>>> Orthogonalization with no iterative refinement >>>> GMRES: happy breakdown tolerance 1e-30 >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>> left preconditioning >>>> using PRECONDITIONED norm type for convergence test >>>> PC Object: 1 MPI processes >>>> type: ilu >>>> ILU: out-of-place factorization >>>> 0 levels of fill >>>> tolerance for zero pivot 2.22045e-14 >>>> using diagonal shift to prevent zero pivot >>>> matrix ordering: natural >>>> factor fill ratio given 1, needed 1 >>>> Factored matrix follows: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> package used to perform factorization: petsc >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> linear system matrix = precond matrix: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> SNESLineSearch Object: 1 MPI processes >>>> type: basic >>>> maxstep=1e+08, minlambda=1e-12 >>>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>>> maximum iterations=1 >>>> Fracture system solved at nonlinear iteration 1 , final nonlinear >>>> residual norm: 5.10144e-07 >>>> 
>>>> --- without the flag ------ >>>> >>>> applying variable bounds ... done >>>> NL step 0, |residual|_2 = 2.41046e-01 >>>> SNES Jacobian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>> NL step 1, |residual|_2 = 5.10144e-07 >>>> SNES Jacobian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>> NL step 2, |residual|_2 = 1.61153e-12 >>>> SNES Object: 1 MPI processes >>>> type: virs >>>> maximum iterations=50, maximum function evaluations=10000 >>>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>>> total number of linear solver iterations=6 >>>> total number of function evaluations=3 >>>> KSP Object: 1 MPI processes >>>> type: gmres >>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >>>> Orthogonalization with no iterative refinement >>>> GMRES: happy breakdown tolerance 1e-30 >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>> left preconditioning >>>> using PRECONDITIONED norm type for convergence test >>>> PC Object: 1 MPI processes >>>> type: ilu >>>> ILU: out-of-place factorization >>>> 0 levels of fill >>>> tolerance for zero pivot 2.22045e-14 >>>> using diagonal shift to prevent zero pivot >>>> matrix ordering: natural >>>> factor fill ratio given 1, needed 1 >>>> Factored matrix follows: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> package used to perform factorization: petsc >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> linear system matrix = precond matrix: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> SNESLineSearch Object: 1 MPI processes >>>> type: basic >>>> 
maxstep=1e+08, minlambda=1e-12 >>>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>>> maximum iterations=1 >>>> Fracture system solved at nonlinear iteration 2 , final nonlinear >>>> residual norm: 1.61153e-12 >>>> >>>> >>>> >>>> Incidentally, there _was_ a reference counting bug that led to a >>>> breakdown very much like what you reported. >>>> I'm attaching that patch and a small improvement to DMVariableBounds. >>>> I need to figure out how to go about creating matrices from DM_libMesh. >>>> >>>> Let me know if you found anything or if I can be of any help. >>>> >>>> Thanks, >>>> Ata >>>> >>>> Dmitry. >>>> >>>> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad < >>>> am...@ti...> wrote: >>>> >>>>> Dmitry, >>>>> >>>>> Is there a way so SNES doesn't recalculate the Jacobian at every >>>>> iteration (my Jacobian doesn't change)? I used to use a flag to check if it's >>>>> assembled but when I do that with petsc-dm, the KSP associated with SNES gives >>>>> DIVERGED_BREAKDOWN for the second iteration! It looks like petsc-dm doesn't >>>>> make a copy of the matrix so when it's used in the KSP solve it gets jumbled up. >>>>> >>>>> Here is how it looks: >>>>> >>>>> -with the flag: >>>>> >>>>> applying variable bounds ... done >>>>> NL step 0, |residual|_2 = 8.90674e-04 >>>>> SNES Jacobian assembly... done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>> NL step 1, |residual|_2 = 2.90987e-05 >>>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>>> >>>>> this happens for every SNES solve. >>>>> ----------------------- >>>>> >>>>> -without it: >>>>> >>>>> applying variable bounds ... done >>>>> NL step 0, |residual|_2 = 6.08030e-02 >>>>> SNES hessian assembly... done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>> NL step 1, |residual|_2 = 2.61557e-07 >>>>> SNES hessian assembly... 
done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>> NL step 2, |residual|_2 = 1.35354e-12 >>>>> Fracture system solved at nonlinear iteration 2 , final nonlinear >>>>> residual norm: 1.35354e-12 >>>>> >>>>> ps: these are not from the same time step but you get the idea. >>>>> >>>>> Best, >>>>> Ata >>>>> >>>>> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >>>>> >>>>> As an fyi: I just built and ran miscellaneous_ex7 in parallel with the >>>>> latest petsc-dev: >>>>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>>>> If you decide you want to upgrade to this (or later) petsc-dev, you'd >>>>> need a small patch for >>>>> petsc_dm_nonlinear_solver.C to account for the latest API >>>>> changes (attached). >>>>> I am working on a bigger patch that adds more functionality to >>>>> DM_libMesh and streamlines >>>>> miscellaneous_ex7. That will take some time to percolate up into the >>>>> libMesh tree. In the >>>>> meantime you should be able to run with the small fix I've attached. >>>>> >>>>> Let me know how it works. >>>>> Dmitry. >>>>> >>>>> >>>>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad < >>>>> am...@ti...> wrote: >>>>> >>>>>> Dmitry, >>>>>> >>>>>> I believe that the error is indeed due to the petsc-dev version. I'm >>>>>> going to check this during the next few hours and get back to you. >>>>>> >>>>>> Thanks, >>>>>> Ata >>>>>> >>>>>> >>>>>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> >>>>>>> Ata, >>>>>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on >>>>>>> 2 procs with a relatively recent version >>>>>>> of petsc-dev. Can you send me the full PETSc error output? >>>>>>> >>>>>>> Thanks. >>>>>>> Dmitry. >>>>>>> >>>>>>> >>>>>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad < >>>>>>> am...@ti...> wrote: >>>>>>> >>>>>>>> >>>>>>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>>>>>>> >>>>>>>> Ata, >>>>>>>> example7 segfaults when you run in parallel? 
>>>>>>>> Does it do it on 2 procs? >>>>>>>> >>>>>>>> Yes and yes. >>>>>>>> >>>>>>>> here is the stack: >>>>>>>> (gdb) where >>>>>>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from >>>>>>>> /lib64/libc.so.6 >>>>>>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>>>>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, >>>>>>>> char const*, char const*, char const*, int, PetscErrorType, char const*, >>>>>>>> void*) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char >>>>>>>> const*, char const*, int, PetscErrorType, char const*, ...) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #8 <signal handler called> >>>>>>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, >>>>>>>> _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), >>>>>>>> void*) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int >>>>>>>> (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to >>>>>>>> quit--- >>>>>>>> 
_p_Vec*, void*), void**) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int >>>>>>>> (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>>>>>> from >>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>> >>>>>>>> #15 0x00000000010976ab in >>>>>>>> libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, >>>>>>>> jac_in=..., x_in=..., r_in=...) >>>>>>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>>>>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>>>>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>>>>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, >>>>>>>> dt=@0x7fff95edca28) >>>>>>>> at biharmonic.C:110 >>>>>>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at >>>>>>>> biharmonic.C:152 >>>>>>>> warning: Range for type (null) has invalid bounds 0..-111 >>>>>>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>>>>>> at miscellaneous_ex7.C:64 >>>>>>>> >>>>>>>> Ata >>>>>>>> >>>>>>>> >>>>>>>> Dmitry. >>>>>>>> >>>>>>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < >>>>>>>> am...@ti...> wrote: >>>>>>>> >>>>>>>>> Dmitry, >>>>>>>>> >>>>>>>>> I had a very good experience on single processor but on multiple >>>>>>>>> processors I get segmentation error at line where we call solve (I tried >>>>>>>>> both my code and example7) . I tried your example without -vi >>>>>>>>> --use-petsc-dm and it works just fine. 
Right now I'm trying to compile a >>>>>>>>> debug version and look into it; meanwhile I thought it would be a good idea >>>>>>>>> if I let you know. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < >>>>>>>>> am...@ti...> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >>>>>>>>>> am...@ti...> wrote: >>>>>>>>>> >>>>>>>>>>> Dmitry, >>>>>>>>>>> >>>>>>>>>>> I had time to come back and work on the SNESVI/Libmesh program. >>>>>>>>>>> The way you set up your code is particularly hard for me to follow in my own >>>>>>>>>>> code since it requires a significant amount of extra work (to define >>>>>>>>>>> classes and set up the inheritances...). What I did instead was to try to add a >>>>>>>>>>> TransientNonlinearSystem to my EquationSystems object: >>>>>>>>>>> >>>>>>>>>>> TransientNonlinearImplicitSystem & fracture_system = >>>>>>>>>>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>>>>>> >>>>>>>>>>> and then to add the bounds and Objective and Residual functions: >>>>>>>>>>> >>>>>>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>>>>>> fracture_system.nonlinear_solver->jacobian = PsiFormJacobian; >>>>>>>>>>> fracture_system.nonlinear_solver->residual = PsiFormResidual; >>>>>>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>>>>>> >>>>>>>>>>> what I see now is that SNES solve does not apply the bounds. So >>>>>>>>>>> here are my questions: >>>>>>>>>>> >>>>>>>>>>> 1- Is there anything inherently wrong with setting up the system, >>>>>>>>>>> especially the bounds, the easy way like I did? 
>>>>>>>>>>> >>>>>>>>>> Ata, >>>>>>>>>> Doing things this "simple" way is just fine: there is no reason >>>>>>>>>> to follow the relatively convoluted structure I used in that >>>>>>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>>>>>> >>>>>>>>>>> 2- Where is the right SNES type chosen? I guess this should >>>>>>>>>>> be hard-wired somewhere in NonlinearImplicitSystem! >>>>>>>>>>> 3- When I use -snes_view (with your example or my own code) I >>>>>>>>>>> get snes_type ls? Shouldn't this be VI? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> What's likely missing is the command-line option --use-petsc-dm. >>>>>>>>>> It's necessary since libMesh wants to preserve the usual behavior as >>>>>>>>>> the default. In particular, because SNESVI and the DM hookup aren't really >>>>>>>>>> possible prior to petsc-3.2. >>>>>>>>>> In fact, it might not be possible without a recent petsc-dev (let >>>>>>>>>> me verify that, once I get to my computer). >>>>>>>>>> >>>>>>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>>>>>> >>>>>>>>>> I just saw that in the example help line and used it. It works >>>>>>>>>> like a charm. >>>>>>>>>> >>>>>>>>> Great! >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>>> I'll go on to check the convergence and see what I can get. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> Thanks. >>>>>>>>>> >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>>>>>> >>>>>>>>>>> Convergence is something we might need to work on -- I noticed >>>>>>>>>>> some chatter in a simple bounded biharmonic heat equation (a stripped down >>>>>>>>>>> version of Cahn-Hilliard). >>>>>>>>>>> Thanks. >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>> >>>>>>>>>>>> Thanks Dmitry, >>>>>>>>>>>> >>>>>>>>>>>> It would be wonderful. 
I made a wrapper for SNESVI myself >>>>>>>>>>>> for my program but had difficulty achieving good convergence. Right now I >>>>>>>>>>>> am occupied with other issues but I'll get back to this as soon as I get a >>>>>>>>>>>> chance. >>>>>>>>>>>> >>>>>>>>>>>> Ata >>>>>>>>>>>> >>>>>>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>>>>>> >>>>>>>>>>>> Ata, >>>>>>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>>>>>> If interested, pull a recent revision from sourceforge and take >>>>>>>>>>>> a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>>>>>> Let me know if you have any questions. >>>>>>>>>>>> >>>>>>>>>>>> Thanks. >>>>>>>>>>>> Dmitry. >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev < >>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Okay, thanks for your feedback. >>>>>>>>>>>>> I'll keep you updated on this work. >>>>>>>>>>>>> >>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev < >>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Ata, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> A couple of things: >>>>>>>>>>>>>>>>> 1. SNESVI solves a VI under what's known as "box >>>>>>>>>>>>>>>>> constraints": the Vec x that the residual operates on is entrywise between >>>>>>>>>>>>>>>>> Vecs xlow and xhigh. 
In order to enable >>>>>>>>>>>>>>>>> constraints on a linear combination of the Vec entries (as >>>>>>>>>>>>>>>>> you suggest), the problem >>>>>>>>>>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>>>>>>>>>> combinations. This, of course, >>>>>>>>>>>>>>>>> changes the Jacobian, preconditioner, etc. The user >>>>>>>>>>>>>>>>> provides only the Jacobian for the original system, so libMesh would have >>>>>>>>>>>>>>>>> to automatically construct the Jacobian for the extended system, and >>>>>>>>>>>>>>>>> precondition it using the user-provided PC for the original subsystem. This >>>>>>>>>>>>>>>>> is not necessarily an unreasonable thing to do in many circumstances, but >>>>>>>>>>>>>>>>> will take more thinking than a straightforward nodal constraint. The user >>>>>>>>>>>>>>>>> API for specifying the constraints, however, becomes rather straightforward. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes >>>>>>>>>>>>>>>>> is sufficient (see, for example, >>>>>>>>>>>>>>>>> Glowinski's book: >>>>>>>>>>>>>>>>> http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C >>>>>>>>>>>>>>>>> ) >>>>>>>>>>>>>>>>> You can show that if you impose the nodal constraints, in >>>>>>>>>>>>>>>>> the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> In this case no funny manipulation of the Jacobian or the >>>>>>>>>>>>>>>>> PC is necessary, but the user API to specify the constraint is a bit >>>>>>>>>>>>>>>>> ambiguous: how do you constrain the node? From a containing element? Then >>>>>>>>>>>>>>>>> there are multiple, possibly incompatible, constraints. >>>>>>>>>>>>>>>> Or you can simply go through all the nodes in the system >>>>>>>>>>>>>>>> once using a node iterator. 
Here is how I think you can do it: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> ------------------------------------- >>>>>>>>>>>>>>>> // make two PetscVector references ub, lb from two vectors you already >>>>>>>>>>>>>>>> added to the system >>>>>>>>>>>>>>>> NumericVector<Number> &ub_sys = >>>>>>>>>>>>>>>> fracture_system.get_vector("ub"); >>>>>>>>>>>>>>>> NumericVector<Number> &lb_sys = >>>>>>>>>>>>>>>> fracture_system.get_vector("lb"); >>>>>>>>>>>>>>>> PetscVector<Number>& lb = >>>>>>>>>>>>>>>> *libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>>>>>>>>> PetscVector<Number>& ub = >>>>>>>>>>>>>>>> *libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> *then in the bound routine do:* >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> // Get a constant reference to the mesh object. >>>>>>>>>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>>>>>>>>> // Get the iterator >>>>>>>>>>>>>>>> MeshBase::const_node_iterator nd = >>>>>>>>>>>>>>>> mesh.local_nodes_begin(); >>>>>>>>>>>>>>>> const MeshBase::const_node_iterator end_nd = >>>>>>>>>>>>>>>> mesh.local_nodes_end(); >>>>>>>>>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>>>>>>>>> const Node* node = *nd; >>>>>>>>>>>>>>>> // get the degree of freedom of the node >>>>>>>>>>>>>>>> unsigned int dn = node->dof_number(0,0,0); >>>>>>>>>>>>>>>> // system 0, var 0, component 0 >>>>>>>>>>>>>>>> // set the ub, lb >>>>>>>>>>>>>>>> ub(dn) = A; >>>>>>>>>>>>>>>> lb(dn) = B; >>>>>>>>>>>>>>>> } >>>>>>>>>>>>>>>> // close the vectors >>>>>>>>>>>>>>>> ub.close(); >>>>>>>>>>>>>>>> lb.close(); >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> You could do that. That's probably the mode we can support >>>>>>>>>>>>>>> almost immediately. >>>>>>>>>>>>>>> The issue here is that you lose some of the "context" by >>>>>>>>>>>>>>> processing each node by >>>>>>>>>>>>>>> itself rather than as part of an element. For example, what if >>>>>>>>>>>>>>> each element is a different material? 
>>>>>>>>>>>>>>> I guess in that case that property should be set on the >>>>>>>>>>>>>>> nodes as well. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Exactly, one can always pass a material property as a CONSTANT MONOMIAL >>>>>>>>>>>>>> (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with >>>>>>>>>>>>>> values on each node. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Ata >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> ----------------------------------------------------------------- >>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I hope I understood the whole thing correctly. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> I don't think setting the variable bound on each >>>>>>>>>>>>>>>>>>>> node separately is a good idea in general. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? >>>>>>>>>>>>>>>>>>> I don't think I'm advocating this. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Bounds should be set as a sum on each element's degrees of >>>>>>>>>>>>>>>>>>>> freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I'm not sure what you mean here either. A bound is >>>>>>>>>>>>>>>>>>> usually not a sum (e.g., a bound on a concentration is 0 <= c <= 1). >>>>>>>>>>>>>>>>>>> Perhaps your application expresses the total bound as a >>>>>>>>>>>>>>>>>>> sum? 
I'd be very interested in learning more about that case. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> OK here is my thought process: one optimally wants to >>>>>>>>>>>>>>>>>> bound the variable over the whole Element domain between lb, ub. So we would >>>>>>>>>>>>>>>>>> have something like this: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_{i}(x) \hat{u}_{i} \leq >>>>>>>>>>>>>>>>>> ub, \forall x \in El >>>>>>>>>>>>>>>>>> (Eq-1) >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> so it's not exactly that the nodal values (\hat{u}_{i}) should be >>>>>>>>>>>>>>>>>> bounded, unless it's an FD application where \phi_i(x) is a delta >>>>>>>>>>>>>>>>>> function! That said, I think given a normalized basis, bounding nodal >>>>>>>>>>>>>>>>>> values is a stricter requirement than (Eq-1). If that's what you meant by >>>>>>>>>>>>>>>>>> inconsistencies between node bounds then the stronger bound would be my >>>>>>>>>>>>>>>>>> first priority. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Of course it might run into situations with >>>>>>>>>>>>>>>>>>>> semi-smooth VI. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> The type of VI solver is independent of how the bounds >>>>>>>>>>>>>>>>>>> are specified: once we know the upper and lower bounds for each element of >>>>>>>>>>>>>>>>>>> a Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>>>>>>>>>> method. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I really appreciate your feedback. I hope we can get to >>>>>>>>>>>>>>>>>>> the point where things are as clear as possible so that you can >>>>>>>>>>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm >>>>>>>>>>>>>>>>>>>>> looking at right now is the libMesh API for specifying >>>>>>>>>>>>>>>>>>>>> the bounds on the variables. It seems to me that the >>>>>>>>>>>>>>>>>>>>> user would ideally want to specify the bounds on the variable >>>>>>>>>>>>>>>>>>>>> one element at a time, since that's the default >>>>>>>>>>>>>>>>>>>>> traversal mode for libMesh. Then, of course, many nodal values >>>>>>>>>>>>>>>>>>>>> are traversed many times, allowing for an >>>>>>>>>>>>>>>>>>>>> inconsistency. The question is, what do you, as a user, expect in that >>>>>>>>>>>>>>>>>>>>> situation? >>>>>>>>>>>>>>>>>>>>> Should the latest bound that the user has set be >>>>>>>>>>>>>>>>>>>>> applied or the tightest of the bounds the user has supplied? >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad >>>>>>>>>>>>>>>>>>>>> <am...@ti...> wrote: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Thanks Dmitry, I look forward to using it. Also if it's >>>>>>>>>>>>>>>>>>>>>> available as a dev version I would love to try to work with it. >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> We (ANL) are working on a patch to libMesh to >>>>>>>>>>>>>>>>>>>>>>> enable this functionality. >>>>>>>>>>>>>>>>>>>>>>> The motivation is to use it through MOOSE, but it >>>>>>>>>>>>>>>>>>>>>>> will be exposed as a libMesh API. >>>>>>>>>>>>>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah >>>>>>>>>>>>>>>>>>>>>>> Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Dear all, >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Is it possible to >>>>>>>>>>>>>>>>>>>>>>>> use the PETScNonlinearSolver class with SNESVI >>>>>>>>>>>>>>>>>>>>>>>> (PETSc/3.2) to solve a >>>>>>>>>>>>>>>>>>>>>>>> variational inequality problem? >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>> Ata Mesgarnejad >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>>>>>>>>>>>> Cloud Computing - Latest Buzzword or a Glimpse of >>>>>>>>>>>>>>>>>>>>>>>> the Future? >>>>>>>>>>>>>>>>>>>>>>>> This paper surveys cloud computing today: What are >>>>>>>>>>>>>>>>>>>>>>>> the benefits? >>>>>>>>>>>>>>>>>>>>>>>> Why are businesses embracing it? What are its >>>>>>>>>>>>>>>>>>>>>>>> payoffs and pitfalls? >>>>>>>>>>>>>>>>>>>>>>>> http://www.accelacomm.com/jaw/sdnl/114/51425149/ >>>>>>>>>>>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>>>>>>>>>>> Libmesh-users mailing list >>>>>>>>>>>>>>>>>>>>>>>> Lib...@li... >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>>>>>>> 2203 Patrick F. 
Taylor Hall >>>>>>>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> A. Mesgarnejad >>>>>> PhD Student, Research Assistant >>>>>> Mechanical Engineering Department >>>>>> Louisiana State University >>>>>> 2203 Patrick F. Taylor Hall >>>>>> Baton Rouge, La 70803 >>>>>> >>>>> >>>>> <petsc_dm.patch> >>>>> >>>>> >>>>> >>>> <0001-PETSc-DM-incref-bug-fix.patch> >>>> <0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> >>>> >>>> >>>> >>> >>> >> >> > > -- A. Mesgarnejad PhD Student, Research Assistant Mechanical Engineering Department Louisiana State University 2203 Patrick F. Taylor Hall Baton Rouge, La 70803 |
From: Dmitry K. <ka...@mc...> - 2012-06-12 18:10:10
|
Ata, You should be able to use --use-petsc-dm as before. The issue may be that I no longer use virs as the default SNES type when you attach bounds() or bounds_object() to NonlinearSolver. The reason for that was a bad interaction between SNESVI and DMCreateXXXDecomposition(). That bad interaction should be gone now, but to use SNESVI you have to explicitly give -snes_type virs, etc. Let me know whether this solves your problem. Thanks. Dmitry. On Tue, Jun 12, 2012 at 11:40 AM, Ataollah Mesgarnejad < am...@ti...> wrote: > Dmitry, > > I think you have made a change to the petsc-dm solver; I can't seem > to access the petsc-dm solver anymore using --use-petsc-dm. Can you please tell > me how to use it again? > > Thanks, > Ata > > On Wed, May 9, 2012 at 6:21 PM, Ataollah Mesgarnejad < > am...@ti...> wrote: > >> Perfect. That's exactly what I need. >> >> Thanks, >> Ata >> >> On May 9, 2012, at 6:14 PM, Dmitry Karpeev wrote: >> >> Ata, >> system.solve() will call it. >> I assume that by "changing constantly" you mean "once per timestep"? >> SNESSolve() initiates the call that eventually dispatches to your bounds >> calculation routine. >> SNES calls that once per solve. >> Dmitry. >> >> On Wed, May 9, 2012 at 5:22 PM, Ataollah Mesgarnejad < >> am...@ti...> wrote: >> >>> Dmitry, >>> >>> A quick question: When does the bounds routine get called? My bounds are >>> changing constantly; do I need to call the bounds function myself or does it >>> get called for each system.solve()? >>> >>> Thanks, >>> Ata >>> >>> On May 5, 2012, at 12:21 PM, Dmitry Karpeev wrote: >>> >>> Okay, that helps. >>> I'm looking further into it. >>> Dmitry. >>> >>> On Sat, May 5, 2012 at 12:08 PM, Ataollah Mesgarnejad < >>> am...@ti...> wrote: >>> >>>> Dmitry, >>>> >>>> I just tried it without bounds and I get the same DIVERGED_BREAKDOWN >>>> error. 
>>>> >>>> Ata >>>> >>>> Ata, >>>> I'm trying to reproduce the problem that you are observing, so I wanted >>>> to understand better what you are doing there. >>>> Do you just skip Jacobian evaluation and return from the >>>> ComputeJacobian call immediately? >>>> Also, without --use-petsc-dm, how are you able to run with VIRS (I >>>> remember you mentioned your own wrapper to SNESVI)? Finally, do you see >>>> this problem when running with SNES (not SNESVI)? I wonder if the issue is >>>> in the interaction with the bounds. >>>> >>>> Thanks for your patience. >>>> Dmitry. >>>> >>>> On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad < >>>> am...@ti...> wrote: >>>> >>>>> >>>>> On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: >>>>> >>>>> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs >>>>> and returns the matrix from the underlying NonlinearImplicitSystem. The >>>>> reason is that initially the matrix is preallocated but unassembled, and >>>>> MatDuplicate >>>>> doesn't want to duplicate an unassembled matrix. I can fix that, but >>>>> I'm a bit puzzled by the fact that KSP messes >>>>> with the matrix. Can you run with -snes_view and send me the output? >>>>> >>>>> Dmitry, >>>>> >>>>> Here it is: >>>>> >>>>> ------------- with the flag ----------------------- >>>>> >>>>> Linear solve converged due to CONVERGED_RTOL iterations 44 >>>>> Calculating surface energy... done >>>>> Calculating fracture energy... done >>>>> elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 >>>>> applying variable bounds ... done >>>>> NL step 0, |residual|_2 = 2.41046e-01 >>>>> SNES Jacobian assembly... 
done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>>> NL step 1, |residual|_2 = 5.10144e-07 >>>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>>> SNES Object: 1 MPI processes >>>>> type: virs >>>>> maximum iterations=50, maximum function evaluations=10000 >>>>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>>>> total number of linear solver iterations=3 >>>>> total number of function evaluations=2 >>>>> KSP Object: 1 MPI processes >>>>> type: gmres >>>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >>>>> Orthogonalization with no iterative refinement >>>>> GMRES: happy breakdown tolerance 1e-30 >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using PRECONDITIONED norm type for convergence test >>>>> PC Object: 1 MPI processes >>>>> type: ilu >>>>> ILU: out-of-place factorization >>>>> 0 levels of fill >>>>> tolerance for zero pivot 2.22045e-14 >>>>> using diagonal shift to prevent zero pivot >>>>> matrix ordering: natural >>>>> factor fill ratio given 1, needed 1 >>>>> Factored matrix follows: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=153, cols=153 >>>>> package used to perform factorization: petsc >>>>> total: nonzeros=2401, allocated nonzeros=2401 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> not using I-node routines >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=153, cols=153 >>>>> total: nonzeros=2401, allocated nonzeros=2401 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> not using I-node routines >>>>> SNESLineSearch Object: 1 MPI processes >>>>> type: basic >>>>> maxstep=1e+08, minlambda=1e-12 >>>>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>>>> maximum iterations=1 >>>>> Fracture system solved at nonlinear iteration 1 , final 
nonlinear >>>>> residual norm: 5.10144e-07 >>>>> >>>>> --- without the flag ------ >>>>> >>>>> applying variable bounds ... done >>>>> NL step 0, |residual|_2 = 2.41046e-01 >>>>> SNES Jacobian assembly... done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>>> NL step 1, |residual|_2 = 5.10144e-07 >>>>> SNES Jacobian assembly... done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>>> NL step 2, |residual|_2 = 1.61153e-12 >>>>> SNES Object: 1 MPI processes >>>>> type: virs >>>>> maximum iterations=50, maximum function evaluations=10000 >>>>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>>>> total number of linear solver iterations=6 >>>>> total number of function evaluations=3 >>>>> KSP Object: 1 MPI processes >>>>> type: gmres >>>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >>>>> Orthogonalization with no iterative refinement >>>>> GMRES: happy breakdown tolerance 1e-30 >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>>> left preconditioning >>>>> using PRECONDITIONED norm type for convergence test >>>>> PC Object: 1 MPI processes >>>>> type: ilu >>>>> ILU: out-of-place factorization >>>>> 0 levels of fill >>>>> tolerance for zero pivot 2.22045e-14 >>>>> using diagonal shift to prevent zero pivot >>>>> matrix ordering: natural >>>>> factor fill ratio given 1, needed 1 >>>>> Factored matrix follows: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=153, cols=153 >>>>> package used to perform factorization: petsc >>>>> total: nonzeros=2401, allocated nonzeros=2401 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> not using I-node routines >>>>> linear system matrix = precond matrix: >>>>> Matrix Object: 1 MPI processes >>>>> type: seqaij >>>>> rows=153, cols=153 >>>>> total: nonzeros=2401, allocated nonzeros=2401 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> not 
using I-node routines >>>>> SNESLineSearch Object: 1 MPI processes >>>>> type: basic >>>>> maxstep=1e+08, minlambda=1e-12 >>>>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>>>> maximum iterations=1 >>>>> Fracture system solved at nonlinear iteration 2 , final nonlinear >>>>> residual norm: 1.61153e-12 >>>>> >>>>> >>>>> >>>>> Incidentally, there _was_ a reference counting bug that led to a >>>>> breakdown very much like what you reported. >>>>> I'm attaching that patch and a small improvement to DMVariableBounds. >>>>> I need to figure out how to go about creating matrices from DM_libMesh. >>>>> >>>>> Let me know if you found anything or if I can be of any help. >>>>> >>>>> Thanks, >>>>> Ata >>>>> >>>>> Dmitry. >>>>> >>>>> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad < >>>>> am...@ti...> wrote: >>>>> >>>>>> Dmitry, >>>>>> >>>>>> Is there a way so SNES doesn't recalculate the Jacobian at every >>>>>> iteration (my Jacobian doesn't change)? I used to use a flag to check if it's >>>>>> assembled, but when I do that with petsc-dm, the KSP associated with SNES gives >>>>>> DIVERGED_BREAKDOWN for the second iteration! It looks like petsc-dm doesn't >>>>>> make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up. >>>>>> >>>>>> Here is how it looks: >>>>>> >>>>>> -with the flag: >>>>>> >>>>>> applying variable bounds ... done >>>>>> NL step 0, |residual|_2 = 8.90674e-04 >>>>>> SNES Jacobian assembly... done >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>>> NL step 1, |residual|_2 = 2.90987e-05 >>>>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>>>> >>>>>> this happens for every SNES solve. >>>>>> ----------------------- >>>>>> >>>>>> -without it: >>>>>> >>>>>> applying variable bounds ... done >>>>>> NL step 0, |residual|_2 = 6.08030e-02 >>>>>> SNES hessian assembly... 
done >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>>> NL step 1, |residual|_2 = 2.61557e-07 >>>>>> SNES hessian assembly... done >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>>> NL step 2, |residual|_2 = 1.35354e-12 >>>>>> Fracture system solved at nonlinear iteration 2 , final nonlinear >>>>>> residual norm: 1.35354e-12 >>>>>> >>>>>> ps: these are not from the same time step but you get the idea. >>>>>> >>>>>> Best, >>>>>> Ata >>>>>> >>>>>> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >>>>>> >>>>>> As an fyi: I just built and ran miscellaneous_ex7 in parallel with >>>>>> the latest petsc-dev: >>>>>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>>>>> If you decide you want to upgrade to this (or later) petsc-dev, you'd >>>>>> need a small patch for >>>>>> petsc_dm_nonlinear_solver.C to account for the latest API >>>>>> changes (attached). >>>>>> I am working on a bigger patch that adds more functionality to >>>>>> DM_libMesh and streamline >>>>>> miscellaneous_ex7. That will take some time to percolate up into the >>>>>> libMesh tree. In the >>>>>> meantime you should be able to run with the small fix I've attached. >>>>>> >>>>>> Let me know how it works. >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad < >>>>>> am...@ti...> wrote: >>>>>> >>>>>>> Dmitry, >>>>>>> >>>>>>> I believe that the error is indeed due to petsc-dev version. I'm >>>>>>> going to check this during the next few hours and get back to you. >>>>>>> >>>>>>> Thanks, >>>>>>> Ata >>>>>>> >>>>>>> >>>>>>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...>wrote: >>>>>>> >>>>>>>> Ata, >>>>>>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me >>>>>>>> on 2 procs with a relatively recent version >>>>>>>> of petsc-dev. Can you send me the full PETSc error output? >>>>>>>> >>>>>>>> Thanks. >>>>>>>> Dmitry. 
>>>>>>>> >>>>>>>> >>>>>>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad < >>>>>>>> am...@ti...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>>>>>>>> >>>>>>>>> Ata, >>>>>>>>> example7 segfaults when you run in parallel? >>>>>>>>> Does it do it on 2 procs? >>>>>>>>> >>>>>>>>> Yes and yes. >>>>>>>>> >>>>>>>>> here is the stack: >>>>>>>>> (gdb) where >>>>>>>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from >>>>>>>>> /lib64/libc.so.6 >>>>>>>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>>>>>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, >>>>>>>>> int, char const*, char const*, char const*, int, PetscErrorType, char >>>>>>>>> const*, void*) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char >>>>>>>>> const*, char const*, int, PetscErrorType, char const*, ...) 
() >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #8 <signal handler called> >>>>>>>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, >>>>>>>>> _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), >>>>>>>>> void*) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int >>>>>>>>> (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to >>>>>>>>> quit--- >>>>>>>>> _p_Vec*, void*), void**) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int >>>>>>>>> (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>>>>>>> from >>>>>>>>> /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>>>>> >>>>>>>>> #15 0x00000000010976ab in >>>>>>>>> libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, >>>>>>>>> jac_in=..., x_in=..., r_in=...) 
>>>>>>>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>>>>>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>>>>>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>>>>>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, >>>>>>>>> dt=@0x7fff95edca28) >>>>>>>>> at biharmonic.C:110 >>>>>>>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at >>>>>>>>> biharmonic.C:152 >>>>>>>>> warning: Range for type (null) has invalid bounds 0..-111 >>>>>>>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>>>>>>> at miscellaneous_ex7.C:64 >>>>>>>>> >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad < >>>>>>>>> am...@ti...> wrote: >>>>>>>>> >>>>>>>>>> Dmitry, >>>>>>>>>> >>>>>>>>>> I had a very good experience on single processor but on multiple >>>>>>>>>> processors I get segmentation error at line where we call solve (I tried >>>>>>>>>> both my code and example7) . I tried your example without -vi >>>>>>>>>> --use-petsc-dm and it works just fine. Right now I'm trying to compile a >>>>>>>>>> debug version and look into it; meanwhile I thought it would be a good idea >>>>>>>>>> if I let you know. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad < >>>>>>>>>> am...@ti...> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad < >>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>> >>>>>>>>>>>> Dmitry, >>>>>>>>>>>> >>>>>>>>>>>> I had time to come back and work on the SNESVI/Libmesh program. 
>>>>>>>>>>>> The way you set up your code is particularly hard for me to follow in my own >>>>>>>>>>>> code since it requires a significant amount of extra work (to define >>>>>>>>>>>> classes and set up the inheritances...). What I did instead was to try adding a >>>>>>>>>>>> TransientNonlinearImplicitSystem to my EquationSystems object: >>>>>>>>>>>> >>>>>>>>>>>> TransientNonlinearImplicitSystem & fracture_system = >>>>>>>>>>>> equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>>>>>>> >>>>>>>>>>>> and then to add the bounds and Objective and Residual functions: >>>>>>>>>>>> >>>>>>>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>>>>>>> fracture_system.nonlinear_solver->jacobian = PsiFormJacobian; >>>>>>>>>>>> fracture_system.nonlinear_solver->residual = PsiFormResidual; >>>>>>>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>>>>>>> >>>>>>>>>>>> what I see now is that the SNES solve does not apply the bounds. So >>>>>>>>>>>> here are my questions: >>>>>>>>>>>> >>>>>>>>>>>> 1- Is there anything inherently wrong with setting up the system, >>>>>>>>>>>> especially the bounds, the easy way like I did? >>>>>>>>>>>> >>>>>>>>>>> Ata, >>>>>>>>>>> Doing things this "simple" way is just fine: there is no reason >>>>>>>>>>> to follow the relatively convoluted structure I used in that >>>>>>>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>>>>>>> >>>>>>>>>>>> 2- Where is the right SNES type chosen? I guess this should >>>>>>>>>>>> be hard-wired somewhere in NonlinearImplicitSystem! >>>>>>>>>>>> 3- When I use -snes_view (with your example or my own code) I >>>>>>>>>>>> get snes_type ls; shouldn't this be VI? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> What's likely missing is the command-line option --use-petsc-dm. >>>>>>>>>>> It's necessary since libMesh wants to preserve the usual behavior as >>>>>>>>>>> the default. 
In particular, because SNESVI and the DM hookup aren't really >>>>>>>>>>> possible prior to petsc-3.2. >>>>>>>>>>> In fact, it might not be possible without a recent petsc-dev >>>>>>>>>>> (let me verify that, once I get to my computer). >>>>>>>>>>> >>>>>>>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>>>>>>> >>>>>>>>>>> I just saw that in the example help line and used it. It works >>>>>>>>>>> like a charm. >>>>>>>>>>> >>>>>>>>>> Great! >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>>> I'll go on check for the convergence and see what I can get. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> Thanks. >>>>>>>>>>> >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Ata >>>>>>>>>>>> >>>>>>>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>>>>>>> >>>>>>>>>>>> Convergence is something we might need to work on -- I noticed >>>>>>>>>>>> some chatter in a simple bounded biharmonic heat equation (a stripped down >>>>>>>>>>>> version of Cahn-Hilliard). >>>>>>>>>>>> Thanks. >>>>>>>>>>>> Dmitry. >>>>>>>>>>>> >>>>>>>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad < >>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Thanks Dmitry, >>>>>>>>>>>>> >>>>>>>>>>>>> It would be wonderful. I made a wrapper for SNESVI in myself >>>>>>>>>>>>> for my program but had difficulty achieving good convergence. Right now I >>>>>>>>>>>>> am occupied with other issues but I'll get back to this as soon as I get a >>>>>>>>>>>>> chance. >>>>>>>>>>>>> >>>>>>>>>>>>> Ata >>>>>>>>>>>>> >>>>>>>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> Ata, >>>>>>>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>>>>>>> If interested, pull a recent revision from sourceforge and >>>>>>>>>>>>> take a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>>>>>>> Let me know, if you have any questions. >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks. >>>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev < >>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Okay, thanks for your feedback. >>>>>>>>>>>>>> I'll keep you updated on this work. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad < >>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev < >>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Ata, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> A couple of things: >>>>>>>>>>>>>>>>>> 1. SNESVI solves a VI under what's known as "box >>>>>>>>>>>>>>>>>> constraints": the Vec x that the residual operates on is entrywise between >>>>>>>>>>>>>>>>>> Vecs xlow and xhigh. In order to enable >>>>>>>>>>>>>>>>>> constraints on a linear combination of the Vec entries >>>>>>>>>>>>>>>>>> (as you suggest), the problem >>>>>>>>>>>>>>>>>> has to be augmented with extra variables -- these linear >>>>>>>>>>>>>>>>>> combinations. This, of course, >>>>>>>>>>>>>>>>>> changes the Jacobian, preconditioner, etc. The user >>>>>>>>>>>>>>>>>> provides only the Jacobian for the original system, so libMesh would have >>>>>>>>>>>>>>>>>> to automatically construct the Jacobian for the extended system, and >>>>>>>>>>>>>>>>>> precondition it using the user provided PC for the original subsystem. This >>>>>>>>>>>>>>>>>> is not necessarily the unreasonable thing to do in many circumstances, but >>>>>>>>>>>>>>>>>> will take more thinking than a straighforward nodal constraint. 
The user >>>>>>>>>>>>>>>>>> API for specifying the constraints, however, becomes rather straighforward. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes >>>>>>>>>>>>>>>>>> is sufficient (see, for example, >>>>>>>>>>>>>>>>>> Glowinski's book: >>>>>>>>>>>>>>>>>> http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C >>>>>>>>>>>>>>>>>> ) >>>>>>>>>>>>>>>>>> You can show that if you impose the nodal constraints, in >>>>>>>>>>>>>>>>>> the limit of mesh size going to 0 the inequality is satisfied a.e. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> In this case no funny manipulation of the Jacobian or the >>>>>>>>>>>>>>>>>> PC is necessary, but the user API to specify the constraint is a bit >>>>>>>>>>>>>>>>>> amiguous: how do you constrain the node? From a containing element? Then >>>>>>>>>>>>>>>>>> there are multiple, possibly incompatible, constraints. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Or you can simply go through all the nodes in the system >>>>>>>>>>>>>>>>> once using node iterator. Here is how I think you can do it: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> ------------------------------------- >>>>>>>>>>>>>>>>> //make two PetscVector ub,lb from two variables you >>>>>>>>>>>>>>>>> already added to the system >>>>>>>>>>>>>>>>> NumericVector<Number> &ub_sys = >>>>>>>>>>>>>>>>> fracture_system.get_vector("ub"); >>>>>>>>>>>>>>>>> NumericVector<Number> &lb_sys = >>>>>>>>>>>>>>>>> fracture_system.get_vector("lb"); >>>>>>>>>>>>>>>>> PetscVector<Number>&lb = >>>>>>>>>>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>>>>>>>>>> PetscVector<Number>&ub = >>>>>>>>>>>>>>>>> libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> *then in the bound routine do:* >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> // Get a constant reference to the mesh object. 
>>>>>>>>>>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>>>>>>>>>> //Get the iterator >>>>>>>>>>>>>>>>> MeshBase::const_node_iterator nd = >>>>>>>>>>>>>>>>> mesh.local_nodes_begin(); >>>>>>>>>>>>>>>>> const MeshBase::const_node_iterator end_nd = >>>>>>>>>>>>>>>>> mesh.local_nodes_end(); >>>>>>>>>>>>>>>>> for (; nd != end_nd; nd++){ >>>>>>>>>>>>>>>>> const Node* node = *nd; >>>>>>>>>>>>>>>>> //get the degree of freedom of node >>>>>>>>>>>>>>>>> unsigned int dn = node->dof_number(0,0,0); >>>>>>>>>>>>>>>>> // system 0 , var 0, component 0 >>>>>>>>>>>>>>>>> //set the ub,lb >>>>>>>>>>>>>>>>> ub(dn)= A; >>>>>>>>>>>>>>>>> lb(dn) = B; >>>>>>>>>>>>>>>>> } >>>>>>>>>>>>>>>>> //close the vectors >>>>>>>>>>>>>>>>> ub.close(); >>>>>>>>>>>>>>>>> lb.close(); >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> You could do that. That's probably the mode we can support >>>>>>>>>>>>>>>> almost immediately. >>>>>>>>>>>>>>>> The issue here is that you lose some of the "context" by >>>>>>>>>>>>>>>> processing each node by >>>>>>>>>>>>>>>> itself rather as part of an element. For example what if >>>>>>>>>>>>>>>> each element is a different material? >>>>>>>>>>>>>>>> I guess in that case that property should be set on the >>>>>>>>>>>>>>>> nodes as well. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Exactly one can always pass mat prop as a Constant MONOMIAL >>>>>>>>>>>>>>> (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable with >>>>>>>>>>>>>>> values on each node. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> ----------------------------------------------------------------- >>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad < >>>>>>>>>>>>>>>>>> am...@ti...> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I hope I understood the whole thing correctly. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad >>>>>>>>>>>>>>>>>>>> <am...@ti...> wrote: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> I don't think setting the variable bound on each >>>>>>>>>>>>>>>>>>>>> node separately is a good idea in general. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? >>>>>>>>>>>>>>>>>>>> I don't think I'm advocating this. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Bounds should be set as a sum on each elements degrees >>>>>>>>>>>>>>>>>>>>> of freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> I'm not sure what you mean here either. A bound is >>>>>>>>>>>>>>>>>>>> usually not a sum (e.g., a bound on a concentration is 0 <= c <=1). >>>>>>>>>>>>>>>>>>>> Perhaps your application expresses the total bound as a >>>>>>>>>>>>>>>>>>>> sum? I'd be very interested in learning more about that case. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> OK here is my thought process: one optimally wants to >>>>>>>>>>>>>>>>>>> bound the variable over the whole Element domain between lb,ub. 
So we would >>>>>>>>>>>>>>>>>>> have something like this: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> lb \leq \sum_{i=1}^{n=NDOF} \phi_{i}(x) \hat{u}_{i} \leq >>>>>>>>>>>>>>>>>>> ub, \forall x \in El >>>>>>>>>>>>>>>>>>> (Eq-1) >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> so it's not exactly that the nodal values (\hat{u}_{i}) should be >>>>>>>>>>>>>>>>>>> bounded, unless it's an FD application where \phi_i(x) is a delta >>>>>>>>>>>>>>>>>>> function! That said, I think that given a normalized basis, bounding nodal >>>>>>>>>>>>>>>>>>> values is a stricter requirement than (Eq-1). If that's what you meant by >>>>>>>>>>>>>>>>>>> inconsistencies between node bounds, then the stronger bound would be my >>>>>>>>>>>>>>>>>>> first priority. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Of course it might run into situations with >>>>>>>>>>>>>>>>>>>>> a semi-smooth VI. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> The type of VI solver is independent of how the bounds >>>>>>>>>>>>>>>>>>>> are specified: once we know the upper and lower bounds for each element of >>>>>>>>>>>>>>>>>>>> a Vec, we can solve the VI with either a semismooth method or an active set >>>>>>>>>>>>>>>>>>>> method. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> I really appreciate your feedback. I hope we can get >>>>>>>>>>>>>>>>>>>> to the point where things are as clear as possible so that you can >>>>>>>>>>>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Are you familiar with SNESVI? 
The main thing I'm >>>>>>>>>>>>>>>>>>>>>> looking at right now is the libMesh API for specifying >>>>>>>>>>>>>>>>>>>>>> the bounds on the variables. It seems to me that the >>>>>>>>>>>>>>>>>>>>>> user would ideally want to specify the bounds on the variable >>>>>>>>>>>>>>>>>>>>>> one element at a time, since that's the default >>>>>>>>>>>>>>>>>>>>>> traversal mode for libMesh. Then, of course, many nodal values >>>>>>>>>>>>>>>>>>>>>> are traversed many time, allowing for an >>>>>>>>>>>>>>>>>>>>>> inconsistency. The question is, what you, as a user, expect in that >>>>>>>>>>>>>>>>>>>>>> situation? >>>>>>>>>>>>>>>>>>>>>> Should the latest bound that the user has set be >>>>>>>>>>>>>>>>>>>>>> applied or the tightest of the bounds the user has supplied? >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Dmitry. >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad >>>>>>>>>>>>>>>>>>>>>> <am...@ti...> wrote: >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Thanks Dmitry, I look forward to use it. Also if >>>>>>>>>>>>>>>>>>>>>>> it's available as a dev version I would love to try to work with it. >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Ata >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev < >>>>>>>>>>>>>>>>>>>>>>> ka...@mc...> wrote: >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> We (ANL) are working on an patch to libMesh to >>>>>>>>>>>>>>>>>>>>>>>> enable this functionality. >>>>>>>>>>>>>>>>>>>>>>>> The motivation is to use it through MOOSE, but it >>>>>>>>>>>>>>>>>>>>>>>> will be exposed as a libMesh API. >>>>>>>>>>>>>>>>>>>>>>>> Dmitry. 
>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah >>>>>>>>>>>>>>>>>>>>>>>> Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Dear all, >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Is it possible to >>>>>>>>>>>>>>>>>>>>>>>>> use the PETScNonlinearSolver class with SNESVI >>>>>>>>>>>>>>>>>>>>>>>>> (PETSc/3.2) to solve a >>>>>>>>>>>>>>>>>>>>>>>>> variational inequality problem? >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>>>>>>>>> Ata Mesgarnejad >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>>>>>>>> 2203 Patrick F. 
Taylor Hall >>>>>>>>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>>>>>> Louisiana State University >>>>>>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> A. Mesgarnejad >>>>>>> PhD Student, Research Assistant >>>>>>> Mechanical Engineering Department >>>>>>> Louisiana State University >>>>>>> 2203 Patrick F. Taylor Hall >>>>>>> Baton Rouge, La 70803 >>>>>>> >>>>>> >>>>>> <petsc_dm.patch> >>>>>> >>>>>> >>>>>> >>>>> <0001-PETSc-DM-incref-bug-fix.patch> >>>>> <0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> >>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > > -- > A. 
Mesgarnejad > PhD Student, Research Assistant > Mechanical Engineering Department > Louisiana State University > 2203 Patrick F. Taylor Hall > Baton Rouge, La 70803 > |
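The node-loop sketch and the SNESVI discussion in the thread above both come down to box constraints: every entry of the solution vector must satisfy lb_i <= x_i <= ub_i. As a minimal, self-contained illustration of that entrywise condition (the function name `project_onto_box` is our own for illustration, not a libMesh or PETSc API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Entrywise ("box") projection of a nodal solution vector onto [lb, ub].
// This is the form of constraint SNESVI enforces: lb_i <= x_i <= ub_i
// for every degree of freedom i. Illustrative sketch only.
std::vector<double> project_onto_box(std::vector<double> x,
                                     const std::vector<double>& lb,
                                     const std::vector<double>& ub)
{
  for (std::size_t i = 0; i < x.size(); ++i)
    x[i] = std::min(std::max(x[i], lb[i]), ub[i]); // clamp each entry
  return x;
}
```

Filling the `ub`/`lb` vectors one node at a time, as in the iterator loop quoted earlier, produces exactly the per-entry bounds this projection works with.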
From: Ataollah M. <am...@ti...> - 2012-06-12 20:01:59
|
Thanks Dmitry, that did the trick. Ata On Jun 12, 2012, at 1:09 PM, Dmitry Karpeev wrote: > Ata, > You should be able to use --use-petsc-dm as before. > The issue may be that I no longer use virs as the default SNES type when you attach bounds() or bounds_object() to NonlinearSolver. The reason for that was a bad interaction between SNESVI and DMCreateXXXDecomposition(). > That bad interaction should be gone now, but to use SNESVI you have to explicitly give -snes_type virs, etc. > Let me know whether this solves your problem. > Thanks. > Dmitry. > > > On Tue, Jun 12, 2012 at 11:40 AM, Ataollah Mesgarnejad <am...@ti...> wrote: > Dmitry, > > I think you have made a change to the petsc-dm solver; I can't seem to be able to access it anymore using --use-petsc-dm. Can you please tell me how to use it again? > > Thanks, > Ata > > On Wed, May 9, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti...> wrote: > Perfect. That's exactly what I need. > > Thanks, > Ata > > On May 9, 2012, at 6:14 PM, Dmitry Karpeev wrote: > >> Ata, >> system.solve() will call it. >> I assume that by "changing constantly" you mean "once per timestep"? >> SNESSolve() initiates the call that eventually dispatches to your bounds calculation routine. >> SNES calls that once per solve. >> Dmitry. >> >> On Wed, May 9, 2012 at 5:22 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >> Dmitry, >> >> A quick question: When does the bounds routine get called? My bounds are changing constantly; do I need to call the bounds function myself, or does it get called for each system.solve()? >> >> Thanks, >> Ata >> >> On May 5, 2012, at 12:21 PM, Dmitry Karpeev wrote: >> >>> Okay, that helps. >>> I'm looking further into it. >>> Dmitry. >>> >>> On Sat, May 5, 2012 at 12:08 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>> Dmitry, >>> >>> I just tried it without bounds and I get the same DIVERGED_BREAKDOWN error. 
>>> >>> Ata >>>> Ata, >>>> I'm trying to reproduce the problem that you are observing, so I wanted to understand better what you are doing there. >>>> Do you just skip Jacobian evaluation and return from the ComputeJacobian call immediately? >>>> Also, without --use-petsc-dm, how are you able to run with VIRS (I remember you mentioned your own wrapper to SNESVI)? Finally, do you see this problem when running with SNES (not SNESVI)? I wonder if the issue is in the interaction with the bounds. >>>> >>>> Thanks for your patience. >>>> Dmitry. >>>> >>>> On Wed, May 2, 2012 at 7:51 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>> >>>> On May 1, 2012, at 10:16 PM, Dmitry Karpeev wrote: >>>> >>>>> Yeah, it's a bit of a hack right now -- DMCreateMatrix simply increfs and returns the matrix from the underlying NonlinearImplicitSystem. The reason is that initially the matrix is preallocated but unassembled, and MatDuplicate >>>>> doesn't want to duplicate an unassembled matrix. I can fix that, but I'm a bit puzzled by the fact that KSP messes >>>>> with the matrix. Can you run with -snes_view and send me the output? >>>>> >>>> >>>> Dmitry, >>>> >>>> Here it is: >>>> >>>> ------------- with the flag ----------------------- >>>> >>>> Linear solve converged due to CONVERGED_RTOL iterations 44 >>>> Calculating surface energy... done >>>> Calculating fracture energy... done >>>> elastic energy:13.74054,fracture energy:-0.00000,total:13.74054 >>>> applying variable bounds ... done >>>> NL step 0, |residual|_2 = 2.41046e-01 >>>> SNES Jacobian assembly... 
done >>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>> NL step 1, |residual|_2 = 5.10144e-07 >>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>> SNES Object: 1 MPI processes >>>> type: virs >>>> maximum iterations=50, maximum function evaluations=10000 >>>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>>> total number of linear solver iterations=3 >>>> total number of function evaluations=2 >>>> KSP Object: 1 MPI processes >>>> type: gmres >>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >>>> GMRES: happy breakdown tolerance 1e-30 >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>> left preconditioning >>>> using PRECONDITIONED norm type for convergence test >>>> PC Object: 1 MPI processes >>>> type: ilu >>>> ILU: out-of-place factorization >>>> 0 levels of fill >>>> tolerance for zero pivot 2.22045e-14 >>>> using diagonal shift to prevent zero pivot >>>> matrix ordering: natural >>>> factor fill ratio given 1, needed 1 >>>> Factored matrix follows: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> package used to perform factorization: petsc >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> linear system matrix = precond matrix: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> SNESLineSearch Object: 1 MPI processes >>>> type: basic >>>> maxstep=1e+08, minlambda=1e-12 >>>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>>> maximum iterations=1 >>>> Fracture system solved at nonlinear iteration 1 , final nonlinear residual norm: 5.10144e-07 >>>> >>>> --- 
without the flag ------ >>>> >>>> applying variable bounds ... done >>>> NL step 0, |residual|_2 = 2.41046e-01 >>>> SNES Jacobian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>> NL step 1, |residual|_2 = 5.10144e-07 >>>> SNES Jacobian assembly... done >>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>> NL step 2, |residual|_2 = 1.61153e-12 >>>> SNES Object: 1 MPI processes >>>> type: virs >>>> maximum iterations=50, maximum function evaluations=10000 >>>> tolerances: relative=1e-08, absolute=1e-35, solution=1e-08 >>>> total number of linear solver iterations=6 >>>> total number of function evaluations=3 >>>> KSP Object: 1 MPI processes >>>> type: gmres >>>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement >>>> GMRES: happy breakdown tolerance 1e-30 >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000 >>>> left preconditioning >>>> using PRECONDITIONED norm type for convergence test >>>> PC Object: 1 MPI processes >>>> type: ilu >>>> ILU: out-of-place factorization >>>> 0 levels of fill >>>> tolerance for zero pivot 2.22045e-14 >>>> using diagonal shift to prevent zero pivot >>>> matrix ordering: natural >>>> factor fill ratio given 1, needed 1 >>>> Factored matrix follows: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> package used to perform factorization: petsc >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> linear system matrix = precond matrix: >>>> Matrix Object: 1 MPI processes >>>> type: seqaij >>>> rows=153, cols=153 >>>> total: nonzeros=2401, allocated nonzeros=2401 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> SNESLineSearch Object: 1 MPI processes >>>> type: basic >>>> maxstep=1e+08, 
minlambda=1e-12 >>>> tolerances: relative=1e-08, absolute=1e-15, lambda=1e-08 >>>> maximum iterations=1 >>>> Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.61153e-12 >>>> >>>> >>>> >>>>> Incidentally, there _was_ a reference counting bug that led to a breakdown very much like what you reported. >>>>> I'm attaching that patch and a small improvement to DMVariableBounds. I need to figure out how to go about creating matrices from DM_libMesh. >>>>> >>>> >>>> Let me know if you find anything or if I can be of any help. >>>> >>>> Thanks, >>>> Ata >>>> >>>>> Dmitry. >>>>> >>>>> On Tue, May 1, 2012 at 6:21 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>> Dmitry, >>>>> >>>>> Is there a way so SNES doesn't recalculate the Jacobian at every iteration (my Jacobian doesn't change)? I used to use a flag to check if it's assembled, but when I do that with petsc-dm, the KSP associated with SNES gives DIVERGED_BREAKDOWN for the second iteration! It looks like petsc-dm doesn't make a copy of the matrix, so when it's used in the KSP solve it gets jumbled up. >>>>> >>>>> Here is how it looks: >>>>> >>>>> -with the flag: >>>>> >>>>> applying variable bounds ... done >>>>> NL step 0, |residual|_2 = 8.90674e-04 >>>>> SNES Jacobian assembly... done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>> NL step 1, |residual|_2 = 2.90987e-05 >>>>> Linear solve did not converge due to DIVERGED_BREAKDOWN iterations 1 >>>>> >>>>> This happens for every SNES solve. >>>>> ----------------------- >>>>> >>>>> -without it: >>>>> >>>>> applying variable bounds ... done >>>>> NL step 0, |residual|_2 = 6.08030e-02 >>>>> SNES hessian assembly... done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>> NL step 1, |residual|_2 = 2.61557e-07 >>>>> SNES hessian assembly... 
done >>>>> Linear solve converged due to CONVERGED_RTOL iterations 6 >>>>> NL step 2, |residual|_2 = 1.35354e-12 >>>>> Fracture system solved at nonlinear iteration 2 , final nonlinear residual norm: 1.35354e-12 >>>>> >>>>> ps: these are not from the same time step but you get the idea. >>>>> >>>>> Best, >>>>> Ata >>>>> >>>>> On May 1, 2012, at 8:49 AM, Dmitry Karpeev wrote: >>>>> >>>>>> As an FYI: I just built and ran miscellaneous_ex7 in parallel with the latest petsc-dev: >>>>>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/12d4a1ea0aca >>>>>> If you decide you want to upgrade to this (or later) petsc-dev, you'd need a small patch for >>>>>> petsc_dm_nonlinear_solver.C to account for the latest API changes (attached). >>>>>> I am working on a bigger patch that adds more functionality to DM_libMesh and streamlines >>>>>> miscellaneous_ex7. That will take some time to percolate up into the libMesh tree. In the >>>>>> meantime you should be able to run with the small fix I've attached. >>>>>> >>>>>> Let me know how it works. >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Tue, May 1, 2012 at 8:35 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> Dmitry, >>>>>> >>>>>> I believe that the error is indeed due to the petsc-dev version. I'm going to check this during the next few hours and get back to you. >>>>>> >>>>>> Thanks, >>>>>> Ata >>>>>> >>>>>> >>>>>> On Tue, May 1, 2012 at 8:29 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>> Ata, >>>>>> I can't reproduce the problem: miscellaneous_ex7 runs fine for me on 2 procs with a relatively recent version >>>>>> of petsc-dev. Can you send me the full PETSc error output? >>>>>> >>>>>> Thanks. >>>>>> Dmitry. >>>>>> >>>>>> >>>>>> On Mon, Apr 30, 2012 at 8:56 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>> >>>>>> On Apr 30, 2012, at 8:39 PM, Dmitry Karpeev wrote: >>>>>> >>>>>>> Ata, >>>>>>> example7 segfaults when you run in parallel? >>>>>>> Does it do it on 2 procs? >>>>>> Yes and yes. 
>>>>>> >>>>>> here is the stack: >>>>>> (gdb) where >>>>>> #0 0x00000038fbaaada0 in __nanosleep_nocancel () from /lib64/libc.so.6 >>>>>> #1 0x00000038fbaaac30 in sleep () from /lib64/libc.so.6 >>>>>> #2 0x00002ae457b35c40 in PetscSleep(double) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #3 0x00002ae457b7dffa in PetscAttachDebugger() () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #4 0x00002ae457b7e95f in PetscAttachDebuggerErrorHandler(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, void*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #5 0x00002ae457b817be in PetscError(int, int, char const*, char const*, char const*, int, PetscErrorType, char const*, ...) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #6 0x00002ae457b82ea7 in PetscDefaultSignalHandler(int, void*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #7 0x00002ae457b82883 in PetscSignalHandler_Private () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #8 <signal handler called> >>>>>> #9 0x00002ae458189340 in DMCoarsenHookAdd(_p_DM*, int (*)(_p_DM*, _p_DM*, void*), int (*)(_p_DM*, _p_Mat*, _p_Vec*, _p_Mat*, _p_DM*, void*), void*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #10 0x00002ae45835f431 in DMSNESGetContext(_p_DM*, _n_SNESDM**) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #11 0x00002ae45835ea34 in DMSNESGetFunction(_p_DM*, int (**)(_p_SNES*, _p_Vec*, ---Type <return> to continue, or q <return> to quit--- >>>>>> _p_Vec*, void*), void**) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #12 0x00002ae458393d1c in SNESGetFunction(_p_SNES*, _p_Vec**, int (**)(_p_SNES*, _p_Vec*, _p_Vec*, void*), void**) () >>>>>> from 
/share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #13 0x00002ae45839b9eb in SNESSetUp(_p_SNES*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> #14 0x00002ae45839a6b0 in SNESSolve(_p_SNES*, _p_Vec*, _p_Vec*) () >>>>>> from /share/apps/petsc-dev/Linux-intel12.1-mef90-O/lib/libpetsc.so >>>>>> >>>>>> #15 0x00000000010976ab in libMesh::PetscDMNonlinearSolver<libMesh::Number>::solve (this=0x35506d0, jac_in=..., x_in=..., r_in=...) >>>>>> at src/solvers/petsc_dm_nonlinear_solver.C:450 >>>>>> #16 0x0000000000cd604f in libMesh::NonlinearImplicitSystem::solve ( >>>>>> this=0x353b7e0) at src/systems/nonlinear_implicit_system.C:178 >>>>>> #17 0x0000000000426082 in Biharmonic::step (this=0x34f2000, dt=@0x7fff95edca28) >>>>>> at biharmonic.C:110 >>>>>> #18 0x0000000000426373 in Biharmonic::run (this=0x34f2000) at biharmonic.C:152 >>>>>> warning: Range for type (null) has invalid bounds 0..-111 >>>>>> #19 0x0000000000436cf5 in main (argc=12, argv=0x7fff95edcc18) >>>>>> at miscellaneous_ex7.C:64 >>>>>> >>>>>> Ata >>>>>> >>>>>>> >>>>>>> Dmitry. >>>>>>> >>>>>>> On Mon, Apr 30, 2012 at 8:31 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>> Dmitry, >>>>>>> >>>>>>> I had a very good experience on single processor but on multiple processors I get segmentation error at line where we call solve (I tried both my code and example7) . I tried your example without -vi --use-petsc-dm and it works just fine. Right now I'm trying to compile a debug version and look into it; meanwhile I thought it would be a good idea if I let you know. 
>>>>>>> >>>>>>> Thanks, >>>>>>> Ata >>>>>>> >>>>>>> On Apr 30, 2012, at 1:30 PM, Dmitry Karpeev wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Apr 30, 2012 at 1:28 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>> >>>>>>>> On Apr 30, 2012, at 1:13 PM, Dmitry Karpeev wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Mon, Apr 30, 2012 at 1:03 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>> Dmitry, >>>>>>>>> >>>>>>>>> I had time to come back and work on the SNESVI/Libmesh program. The way you set up your code is particularly hard for me to follow in my own code since it requires a significant amount of extra work (to define classes and set up the inheritances...). What I did instead was to try adding a TransientNonlinearSystem to my EquationSystems object: >>>>>>>>> >>>>>>>>> TransientNonlinearImplicitSystem & fracture_system = equation_system.add_system<TransientNonlinearImplicitSystem>("fracture_system"); >>>>>>>>> >>>>>>>>> and then to add the bounds, Jacobian, and Residual functions: >>>>>>>>> >>>>>>>>> fracture_system.attach_init_function(init_ic_fracture); >>>>>>>>> fracture_system.nonlinear_solver->jacobian = PsiFormJacobian; >>>>>>>>> fracture_system.nonlinear_solver->residual = PsiFormResidual; >>>>>>>>> fracture_system.nonlinear_solver->bounds = PsiFormBounds; >>>>>>>>> >>>>>>>>> What I see now is that the SNES solve does not apply the bounds. So here are my questions: >>>>>>>>> >>>>>>>>> 1- Is there anything inherently wrong with setting up the system, especially the bounds, the easy way like I did? >>>>>>>>> Ata, >>>>>>>>> Doing things this "simple" way is just fine: there is no reason to follow the relatively convoluted structure I used in that >>>>>>>>> example. When I have time, I'll rewrite it to simplify things. >>>>>>>>> 2- Where is the right SNES type chosen? I guess this should be hard-wired somewhere in NonlinearImplicitSystem! >>>>>>>>> 3- When I use -snes_view (with your example or my own code) I get snes_type ls; shouldn't this be VI? 
>>>>>>>>> >>>>>>>>> What's likely missing is the command-line option --use-petsc-dm. It's necessary since libMesh wants to preserve the usual behavior as the default, in particular because SNESVI and the DM hookup aren't really possible prior to petsc-3.2. >>>>>>>>> In fact, it might not be possible without a recent petsc-dev (let me verify that, once I get to my computer). >>>>>>>>> >>>>>>>>> In the meantime, could you give --use-petsc-dm a try? >>>>>>>>> >>>>>>>> >>>>>>>> I just saw that in the example help line and used it. It works like a charm. >>>>>>>> Great! >>>>>>>> Dmitry. >>>>>>>> I'll go on to check the convergence and see what I can get. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Ata >>>>>>>>> Thanks. >>>>>>>>> >>>>>>>>> Dmitry. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Ata >>>>>>>>> >>>>>>>>> On Apr 12, 2012, at 10:00 AM, Dmitry Karpeev wrote: >>>>>>>>> >>>>>>>>>> Convergence is something we might need to work on -- I noticed some chatter in a simple bounded biharmonic heat equation (a stripped-down version of Cahn-Hilliard). >>>>>>>>>> Thanks. >>>>>>>>>> Dmitry. >>>>>>>>>> >>>>>>>>>> On Thu, Apr 12, 2012 at 9:55 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>> Thanks Dmitry, >>>>>>>>>> >>>>>>>>>> It would be wonderful. I made a wrapper for SNESVI myself in my program but had difficulty achieving good convergence. Right now I am occupied with other issues but I'll get back to this as soon as I get a chance. >>>>>>>>>> >>>>>>>>>> Ata >>>>>>>>>> >>>>>>>>>> On Apr 10, 2012, at 9:55 PM, Dmitry Karpeev wrote: >>>>>>>>>> >>>>>>>>>>> Ata, >>>>>>>>>>> There is now a way to use SNESVI from libMesh. >>>>>>>>>>> If interested, pull a recent revision from sourceforge and take a look at examples/miscellaneous/miscellaneous_ex7. >>>>>>>>>>> Let me know if you have any questions. >>>>>>>>>>> >>>>>>>>>>> Thanks. >>>>>>>>>>> Dmitry. 
>>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 12:45 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>>> Okay, thanks for your feedback. >>>>>>>>>>> I'll keep you updated on this work. >>>>>>>>>>> >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 12:42 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 12:35 PM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 12:19 PM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 11:59 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>>> Ata, >>>>>>>>>>> >>>>>>>>>>> A couple of things: >>>>>>>>>>> 1. SNESVI solves a VI under what's known as "box constraints": the Vec x that the residual operates on is entrywise between Vecs xlow and xhigh. In order to enable >>>>>>>>>>> constraints on a linear combination of the Vec entries (as you suggest), the problem >>>>>>>>>>> has to be augmented with extra variables -- these linear combinations. This, of course, >>>>>>>>>>> changes the Jacobian, preconditioner, etc. The user provides only the Jacobian for the original system, so libMesh would have to automatically construct the Jacobian for the extended system, and precondition it using the user-provided PC for the original subsystem. This is not necessarily an unreasonable thing to do in many circumstances, but will take more thinking than a straightforward nodal constraint. The user API for specifying the constraints, however, becomes rather straightforward. >>>>>>>>>>> >>>>>>>>>>> 2. In terms of FEM convergence, constraining at the nodes is sufficient (see, for example, >>>>>>>>>>> Glowinski's book: http://books.google.com/books/about/Numerical_analysis_of_variational_inequa.html?id=Pf4ed2mtbx4C) >>>>>>>>>>> You can show that if you impose the nodal constraints, in the limit of mesh size going to 0 the inequality is satisfied a.e. 
>>>>>>>>>>> >>>>>>>>>>> In this case no funny manipulation of the Jacobian or the PC is necessary, but the user API to specify the constraint is a bit ambiguous: how do you constrain the node? From a containing element? Then there are multiple, possibly incompatible, constraints. >>>>>>>>>>> Or you can simply go through all the nodes in the system once using a node iterator. Here is how I think you can do it: >>>>>>>>>>> >>>>>>>>>>> ------------------------------------- >>>>>>>>>>> //make two PetscVector ub,lb from two variables you already added to the system >>>>>>>>>>> NumericVector<Number> &ub_sys = fracture_system.get_vector("ub"); >>>>>>>>>>> NumericVector<Number> &lb_sys = fracture_system.get_vector("lb"); >>>>>>>>>>> PetscVector<Number> &lb = *libmesh_cast_ptr<PetscVector<Number>*>(&lb_sys); >>>>>>>>>>> PetscVector<Number> &ub = *libmesh_cast_ptr<PetscVector<Number>*>(&ub_sys); >>>>>>>>>>> >>>>>>>>>>> then in the bound routine do: >>>>>>>>>>> >>>>>>>>>>> // Get a constant reference to the mesh object. >>>>>>>>>>> const MeshBase& mesh = es.get_mesh(); >>>>>>>>>>> //Get the iterator >>>>>>>>>>> MeshBase::const_node_iterator nd = mesh.local_nodes_begin(); >>>>>>>>>>> const MeshBase::const_node_iterator end_nd = mesh.local_nodes_end(); >>>>>>>>>>> for (; nd != end_nd; ++nd) { >>>>>>>>>>> const Node* node = *nd; >>>>>>>>>>> //get the degree of freedom of node >>>>>>>>>>> unsigned int dn = node->dof_number(0,0,0); // system 0, var 0, component 0 >>>>>>>>>>> //set the ub,lb >>>>>>>>>>> ub(dn) = A; >>>>>>>>>>> lb(dn) = B; >>>>>>>>>>> } >>>>>>>>>>> //close the vectors >>>>>>>>>>> ub.close(); >>>>>>>>>>> lb.close(); >>>>>>>>>>> >>>>>>>>>>> You could do that. That's probably the mode we can support almost immediately. >>>>>>>>>>> The issue here is that you lose some of the "context" by processing each node by >>>>>>>>>>> itself rather than as part of an element. For example what if each element is a different material? 
>>>>>>>>>>> I guess in that case that property should be set on the nodes as well. >>>>>>>>>>> >>>>>>>>>>> Exactly; one can always pass the material property as a CONSTANT MONOMIAL (e.g. system.add_variable("mat_prop",CONSTANT,MONOMIAL);) variable, with a value on each element. >>>>>>>>>>> >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> Dmitry. >>>>>>>>>>> ----------------------------------------------------------------- >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 11:17 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>> I hope I understood the whole thing correctly. >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 10:56 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 10:50 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>> I don't think setting the variable bound on each node separately is a good idea in general. >>>>>>>>>>> I'm not sure what you mean by this. Could you clarify? I don't think I'm advocating this. >>>>>>>>>>> >>>>>>>>>>> Bounds should be set as a sum on each element's degrees of freedom; this way there should not be any nodal inconsistencies. >>>>>>>>>>> I'm not sure what you mean here either. A bound is usually not a sum (e.g., a bound on a concentration is 0 <= c <= 1). >>>>>>>>>>> Perhaps your application expresses the total bound as a sum? I'd be very interested in learning more about that case. >>>>>>>>>>> >>>>>>>>>>> OK, here is my thought process: one optimally wants to bound the variable over the whole element domain between lb and ub. So we would have something like this: >>>>>>>>>>> >>>>>>>>>>> lb \leq \sum_{i=1}^{NDOF} \phi_i(x) \hat{u}_i \leq ub, \forall x \in El (Eq-1) >>>>>>>>>>> >>>>>>>>>>> so it's not exactly that the nodal values \hat{u}_i should be bounded, unless it's an FD application where \phi_i(x) is a delta function! 
That said, I think that given a normalized basis, bounding nodal values is a stricter requirement than (Eq-1). If that's what you meant by inconsistencies between node bounds, then the stronger bound would be my first priority. >>>>>>>>>>> >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Of course it might run into a situation with a semi-smooth VI. >>>>>>>>>>> The type of VI solver is independent of how the bounds are specified: once we know the upper and lower bounds for each element of a Vec, we can solve the VI with either a semismooth method or an active-set method. >>>>>>>>>>> >>>>>>>>>>> I really appreciate your feedback. I hope we can get to the point where things are as clear as possible so that you can >>>>>>>>>>> profitably use the new capabilities. >>>>>>>>>>> >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 9:47 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>>> Are you familiar with SNESVI? The main thing I'm looking at right now is the libMesh API for specifying >>>>>>>>>>> the bounds on the variables. It seems to me that the user would ideally want to specify the bounds on the variable >>>>>>>>>>> one element at a time, since that's the default traversal mode for libMesh. Then, of course, many nodal values >>>>>>>>>>> are traversed many times, allowing for an inconsistency. The question is, what do you, as a user, expect in that situation? >>>>>>>>>>> Should the latest bound that the user has set be applied, or the tightest of the bounds the user has supplied? >>>>>>>>>>> >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 9:28 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>> Thanks Dmitry, I look forward to using it. Also if it's available as a dev version I would love to try to work with it. 
>>>>>>>>>>> >>>>>>>>>>> Ata >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 8:24 AM, Dmitry Karpeev <ka...@mc...> wrote: >>>>>>>>>>> We (ANL) are working on a patch to libMesh to enable this functionality. >>>>>>>>>>> The motivation is to use it through MOOSE, but it will be exposed as a libMesh API. >>>>>>>>>>> Dmitry. >>>>>>>>>>> >>>>>>>>>>> On Wed, Dec 14, 2011 at 7:53 AM, Ataollah Mesgarnejad <am...@ti...> wrote: >>>>>>>>>>> Dear all, >>>>>>>>>>> >>>>>>>>>>> Is it possible to >>>>>>>>>>> use the PETScNonlinearSolver class with SNESVI (PETSc/3.2) to solve a >>>>>>>>>>> variational inequality problem? >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Ata Mesgarnejad >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Libmesh-users mailing list >>>>>>>>>>> Lib...@li... >>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> A. Mesgarnejad >>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>> Louisiana State University >>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> A. 
Mesgarnejad >>>>>>>>>>> PhD Student, Research Assistant >>>>>>>>>>> Mechanical Engineering Department >>>>>>>>>>> Louisiana State University >>>>>>>>>>> 2203 Patrick F. Taylor Hall >>>>>>>>>>> Baton Rouge, La 70803 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> <petsc_dm.patch> >>>>> >>>>> >>>>> <0001-PETSc-DM-incref-bug-fix.patch><0002-PETSc-DM-initializes-bounds-with-Inf-now.patch> >>>> >>>> >>> >>> >> >> > > > > > -- > A. Mesgarnejad > PhD Student, Research Assistant > Mechanical Engineering Department > Louisiana State University > 2203 Patrick F. Taylor Hall > Baton Rouge, La 70803 > |