From: Roy Stogner <roystgnr@ic...>  2014-05-30 13:47:37

On Fri, 30 May 2014, Robert wrote:

> Are there any additional steps necessary in each timestep to apply
> the boundary conditions, or what else am I missing?

One little step, one big one:

To apply the boundary conditions, you'll want to run
EquationSystems::reinit() before each time step.

Don't trust the accuracy without testing. If you're not updating
System::time to match the *end* of your time step, rather than the
beginning, when reinit gets called, then your functions will get called
with the wrong time and you'll have an O(deltat) error from that.
---
Roy
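Roy's warning can be reproduced outside libMesh with a toy time loop. The sketch below (plain standalone C++; the names `g` and `advance` are made up for illustration and are not libMesh API) imposes a Dirichlet value g(t) = t^2 each step, sampling it either at the start or at the end of the step. `sample_at_step_end` mimics updating System::time to the end of the step before reinit; the stale start-of-step sampling leaves an error proportional to deltat:

```cpp
#include <cassert>
#include <cmath>

// Time-dependent Dirichlet value imposed on the boundary dof: g(t) = t*t.
static double g(double t) { return t * t; }

// March from t = 0 to t = t_end in steps of dt, imposing the boundary
// value once per step.  sample_at_step_end == true mimics updating the
// system time to the *end* of the step before applying the constraint;
// false uses the stale start-of-step time.
static double advance(double t_end, double dt, bool sample_at_step_end) {
  double u = 0.0;  // the constrained boundary dof
  for (double t = 0.0; t + dt <= t_end + 1e-12; t += dt)
    u = g(sample_at_step_end ? t + dt : t);
  return u;
}
```

With dt = 0.1 the stale-time error at t_end = 1 is |0.81 - 1| = 0.19, and it roughly halves to about 0.0975 at dt = 0.05, i.e. a first-order O(deltat) error, while end-of-step sampling reproduces g(1) exactly for this toy problem.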
From: Robert <libmesh@ro...>  2014-05-30 12:30:54

Hello,

I want to apply Dirichlet boundary conditions which change their value
over time:

class BdyFunction : public FunctionBase<Number>
{
public:
  BdyFunction (unsigned int var, Real disp)
    : _var(var), _disp(disp)
  { this->_initialized = true; }

  virtual Number operator() (const Point&, const Real = 0)
  { libmesh_not_implemented(); }

  virtual void operator() (const Point& p,
                           const Real time,
                           DenseVector<Number>& output)
  {
    output.zero();
    output(_var) = _disp*time;
  }

  virtual AutoPtr<FunctionBase<Number> > clone() const
  { return AutoPtr<FunctionBase<Number> > (new BdyFunction(_var, _disp)); }

private:
  const unsigned int _var;
  Real _disp;
};

and then add it with something like this

  std::set<boundary_id_type> top;
  top.insert(5);
  BdyFunction bfunc(w_var, 0.1);
  this->get_dof_map().add_dirichlet_boundary(DirichletBoundary(top, w_only, &bfunc));

in a FEMSystem::init_data(..). Are there any additional steps necessary
in each timestep to apply the boundary conditions, or what else am I
missing?

Robert
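Robert's functor pattern can be exercised in isolation. The sketch below is a minimal stand-alone version with simplified stand-in types (`DenseVector` here is a plain wrapper, not the libMesh class, and the `FunctionBase` machinery is omitted), showing only how the time argument scales the imposed displacement:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simplified stand-ins for the libMesh types used in the post.
using Number = double;
using Real = double;

struct DenseVector {
  std::vector<Number> data;
  explicit DenseVector(std::size_t n) : data(n, 0.0) {}
  void zero() { data.assign(data.size(), 0.0); }
  Number& operator()(std::size_t i) { return data[i]; }
};

// Time-dependent Dirichlet value: component `var` ramps as disp * time,
// all other components stay zero.
class BdyFunction {
public:
  BdyFunction(unsigned int var, Real disp) : _var(var), _disp(disp) {}

  void operator()(Real time, DenseVector& output) const {
    output.zero();
    output(_var) = _disp * time;  // the time dependence Roy's reply is about
  }

private:
  const unsigned int _var;
  const Real _disp;
};
```

Because the value depends on `time`, the constraint has to be re-evaluated (and hence the system reinitialized) every step, which is exactly what Roy's answer addresses.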
From: Boyce Griffith <griffith@ci...>  2014-05-30 00:37:08

On May 29, 2014, at 7:31 PM, Vikram Garg <vikram.v.garg@...> wrote:

> (Putting this back on the list) I see. My commands for the parallel ilu
> might be out of date. Does anyone know the current syntax ?

Run with -help to see the command-line options. I would recommend piping
this through less so that you can search through all of them. If you are
mainly interested in correctness at this point, try decreasing the
residual tolerance (via -ksp_rtol).

PETSc has to be configured to provide a parallel LU or ILU solver (e.g.
an external package such as SuperLU). What is happening in parallel is
probably something like GMRES with additive Schwarz preconditioning with
LU or ILU solves on each subdomain (i.e., in this case, on each
processor). You can run with -ksp_view to see what is actually being
used as the solver.

-- Boyce

> On Thu, May 29, 2014 at 7:28 PM, walter kou <walter4code@...> wrote:
>
> Yes, in serial, I put "-pc_type ilu" or just put "-pc_type lu", the
> residual is small. In parallel, I will put "-sub_pc_type ilu", the
> residual is still big.
>
> On Thu, May 29, 2014 at 6:26 PM, Vikram Garg wrote:
>
> So it works in serial, but not in parallel ?
>
> On Thu, May 29, 2014 at 7:25 PM, walter kou wrote:
>
> -sub_pc_type ilu: the residual is big.
>
>   250 KSP Residual norm 1.289444373237e-05 % max 3.713155970657e+05
>   min 2.280681360715e-03 max/min 1.628090637568e+08
>   Linear solver converged at step: 25, final residual: 1.28944e-05
>
> On Thu, May 29, 2014 at 6:21 PM, Vikram Garg wrote:
>
> Right, I don't remember the syntax for ilu in parallel, but I believe
> it is -sub_pc_type ilu. Try that.
>
> On Thu, May 29, 2014 at 7:18 PM, walter kou wrote:
>
> The above is mpiexec -np 2; running with a single processor is OK.
>
>   249 KSP Residual norm 4.369357920020e-19 % max 5.529938049388e+01
>   min 3.238293945443e-16 max/min 1.707670193797e+17
>   250 KSP Residual norm 4.369357920020e-19 % max 5.530031976618e+01
>   min 5.037092721368e-16 max/min 1.097861858520e+17
>   Linear solver converged at step: 29, final residual: 4.36936e-19
>
> On Thu, May 29, 2014 at 6:15 PM, walter kou wrote:
>
> It seems I do not have PETSc ILU:
>
>   [0]PETSC ERROR: Error Message
>   [0]PETSC ERROR: No support for this operation for this object type!
>   [0]PETSC ERROR: Matrix format mpiaij does not have a built-in PETSc
>
> On Thu, May 29, 2014 at 6:11 PM, Vikram Garg wrote:
>
> That looks good. So it is most likely a linear solver settings issue.
> Try these options:
>
>   -ksp_monitor_singular_value -ksp_gmres_modifiedgramschmidt
>   -ksp_gmres_restart 500 -pc_type ilu -pc_factor_levels 4
>
> On Thu, May 29, 2014 at 7:10 PM, walter kou wrote:
>
> I can only try a single processor: -pc_type lu. The residual is good.
>
>   Linear solver converged at step: 53, final residual: 4.67731e-30
>   begin solve: iteration #54
>   Linear solver converged at step: 54, final residual: 1.35495e-30
>   begin solve: iteration #55
>
> On Thu, May 29, 2014 at 6:07 PM, Vikram Garg wrote:
>
> I see. What happens with LU ?
>
> On Thu, May 29, 2014 at 7:06 PM, walter kou wrote:
>
> Tried -ksp_gmres_restart 5000, with residual as below:
>
>   Linear solver converged at step: 53, final residual: 1.5531e-05
>   begin solve: iteration #54
>   Linear solver converged at step: 54, final residual: 1.55013e-05
>   begin solve: iteration #55
>
> On Thu, May 29, 2014 at 6:01 PM, Vikram Garg wrote:
>
> Try using a higher number of restart steps, say 500:
> -ksp_gmres_restart 500. Thanks.
>
> On Thu, May 29, 2014 at 7:00 PM, walter kou wrote:
>
> Output from -ksp_monitor_singular_value:
>
>   250 KSP Residual norm 8.867989901517e-05 % max 1.372837955534e+05
>   min 1.502965355599e+01 max/min 9.134195611495e+03
>
> On Thu, May 29, 2014 at 5:54 PM, Vikram Garg wrote:
>
> Try the following solver options:
>
>   -pc_type lu -pc_factor_mat_solver_package superlu
>
> What was the output from -ksp_monitor_singular_value ? Thanks.
>
> On Thu, May 29, 2014 at 6:52 PM, walter kou wrote:
>
> Hi Vikram, how do I try using just lu or superlu? I am pretty ignorant
> about the proper ksp options on the command line. Could you point out
> any introductory materials on this? Thanks so much.
>
> On Wed, May 28, 2014 at 11:54 AM, Vikram Garg wrote:
>
> Hey Walter, have you tried using just lu or superlu ? You might also
> want to check what the output is for -ksp_monitor_singular_value and
> increase the gmres restart steps. Thanks.
>
> On Mon, May 26, 2014 at 1:57 PM, walter kou wrote:
>
> Hi all, I ran a larger case with elements = 200, and found that for
> each calculation in the iteration, system.final_linear_residual() is
> about 0.5%.
>
> 1) Is system.final_linear_residual() r = b - A X* (X* is the
> solution)? Right?
>
> 2) It seems the final residual is too big, and the equation is not
> solved well (here b is about 1e4). Does anyone have suggestions for
> playing with solvers of Ax=b? My case is nonlinear elasticity, and A
> is almost symmetric positive definite (only components influenced by
> boundary conditions will break the symmetry).
>
> Also, following the suggestion of Paul, I used -ksp_view and found my
> solver information is as below:
>
>   KSP Object: 4 MPI processes
>     type: gmres
>       GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>       Orthogonalization with no iterative refinement
>       GMRES: happy breakdown tolerance 1e-30
>     maximum iterations=250
>     tolerances: relative=1e-08, absolute=1e-50, divergence=10000
>     left preconditioning
>     using nonzero initial guess
>     using PRECONDITIONED norm type for convergence test
>   PC Object: 4 MPI processes
>     type: bjacobi
>       block Jacobi: number of blocks = 4
>       Local solve is same for all blocks, in the following KSP and PC
>       objects:
>     KSP Object: (sub_) 1 MPI processes
>       type: preonly
>       maximum iterations=10000, initial guess is zero
>       tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using NONE norm type for convergence test
>     PC Object: (sub_) 1 MPI processes
>       type: ilu
>         ILU: out-of-place factorization
>         0 levels of fill
>         tolerance for zero pivot 2.22045e-14
>         using diagonal shift to prevent zero pivot
>         matrix ordering: natural
>         factor fill ratio given 1, needed 1
>           Factored matrix follows:
>             Matrix Object: 1 MPI processes
>               type: seqaij
>               rows=324, cols=324
>               package used to perform factorization: petsc
>               total: nonzeros=16128, allocated nonzeros=16128
>               total number of mallocs used during MatSetValues calls =0
>                 not using I-node routines
>       linear system matrix = precond matrix:
>       Matrix Object: () 1 MPI processes
>         type: seqaij
>         rows=324, cols=324
>         total: nonzeros=16128, allocated nonzeros=19215
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node routines
>   linear system matrix = precond matrix:
>   Matrix Object: () 4 MPI processes
>     type: mpiaij
>     rows=990, cols=990
>     total: nonzeros=58590, allocated nonzeros=64512
>     total number of mallocs used during MatSetValues calls =0
>       not using I-node (on process 0) routines
>
> Thanks, Walter
>
> On Thu, May 22, 2014 at 12:18 PM, Paul T. Bauman wrote:
>
> On Thu, May 22, 2014 at 12:11 PM, walter kou wrote:
>
> > OK, but libMesh calls a library, defaulting to PETSc if it's
> > installed. Which library are you using?
> >
> > PETSc 3.3
>
> I recommend checking out the PETSc documentation
> (http://www.mcs.anl.gov/petsc/petsc-as/documentation/) and tutorials.
> But you'll want to start with -ksp_view to get the parameters PETSc is
> using.
From: Vikram Garg <vikram.garg@gm...>  2014-05-29 23:31:48

(Putting this back on the list) I see. My commands for the parallel ilu
might be out of date. Does anyone know the current syntax ?

Thanks.

On Thu, May 29, 2014 at 7:28 PM, walter kou <walter4code@...> wrote:

> Yes, in serial, I put "-pc_type ilu" or just put "-pc_type lu", the
> residual is small.
>
> In parallel, I will put "-sub_pc_type ilu", the residual is still big.

--
Vikram Garg
Postdoctoral Associate
Center for Computational Engineering
Massachusetts Institute of Technology
http://web.mit.edu/vikramvg/www/
http://www.runforindia.org/runners/vikramg
From: walter kou <walter4code@gm...> - 2014-05-29 23:06:32

Tried -ksp_gmres_restart 5000, with the residual as below:

Linear solver converged at step: 53, final residual: 1.5531e-05
begin solve: iteration #54
Linear solver converged at step: 54, final residual: 1.55013e-05
begin solve: iteration #55

On Thu, May 29, 2014 at 6:01 PM, Vikram Garg <vikram.v.garg@...> wrote:
> Try using a higher number of restart steps, say 500,
>
> -ksp_gmres_restart 500
>
> Thanks.
From: Vikram Garg <vikram.garg@gm...> - 2014-05-29 23:02:10

Try using a higher number of restart steps, say 500:

-ksp_gmres_restart 500

Thanks.

On Thu, May 29, 2014 at 7:00 PM, walter kou <walter4code@...> wrote:
> Output from -ksp_monitor_singular_value:
>
> 250 KSP Residual norm 8.867989901517e-05 % max 1.372837955534e+05 min
> 1.502965355599e+01 max/min 9.134195611495e+03

--
Vikram Garg
Postdoctoral Associate
Center for Computational Engineering
Massachusetts Institute of Technology
http://web.mit.edu/vikramvg/www/
http://www.runforindia.org/runners/vikramg
From: walter kou <walter4code@gm...> - 2014-05-29 23:00:39

Output from -ksp_monitor_singular_value:

250 KSP Residual norm 8.867989901517e-05 % max 1.372837955534e+05 min 1.502965355599e+01 max/min 9.134195611495e+03

On Thu, May 29, 2014 at 5:54 PM, Vikram Garg <vikram.v.garg@...> wrote:
> Try the following solver options:
>
> -pc_type lu -pc_factor_mat_solver_package superlu
>
> What was the output from -ksp_monitor_singular_value?
>
> Thanks.
From: Vikram Garg <vikram.garg@gm...> - 2014-05-29 22:55:15

Try the following solver options:

-pc_type lu -pc_factor_mat_solver_package superlu

What was the output from -ksp_monitor_singular_value?

Thanks.

On Thu, May 29, 2014 at 6:52 PM, walter kou <walter4code@...> wrote:
> Hi Vikram,
>
> How do I try using just lu or superlu? I am pretty ignorant about
> choosing proper ksp options on the command line.
>
> Could you point out any introductory materials on this?
>
> Thanks so much.

--
Vikram Garg
Postdoctoral Associate
Center for Computational Engineering
Massachusetts Institute of Technology
http://web.mit.edu/vikramvg/www/
http://www.runforindia.org/runners/vikramg
From: walter kou <walter4code@gm...> - 2014-05-29 22:52:57

Hi Vikram,

How do I try using just lu or superlu? I am pretty ignorant about choosing proper ksp options on the command line.

Could you point out any introductory materials on this?

Thanks so much.

On Wed, May 28, 2014 at 11:54 AM, Vikram Garg <vikram.v.garg@...> wrote:
> Hey Walter,
>           Have you tried using just lu or superlu? You might also want to
> check what the output is for -ksp_monitor_singular_value, and increase
> the gmres restart steps.
>
> Thanks.
From: John Peterson <jwpeterson@gm...> - 2014-05-29 14:22:37

On Wed, May 28, 2014 at 6:09 PM, subramanya sadasiva <potaman@...> wrote:
> I was running a modified version of abaqus_IO to allow it to read more
> general Abaqus cae files. I was able to build it without any trouble up
> to the last version of libMesh (0.93pre?). However, with the current
> version, I get the following errors:
>
> src/abaqus_io.C:52:12: error: ‘ElemType’ was not declared in this scope
>    std::map<ElemType, ElementDefinition> eletypes;

You need to put

using namespace libMesh;

in abaqus_io.C.

Doing development this way (maintaining separate files outside the version control system) is tedious and error-prone. You should maintain modified files as local commits which you can rebase on upstream periodically. This is also essential if you ever want your changes to make it back into upstream...

--
John
From: subramanya sadasiva <potaman@ou...> - 2014-05-29 00:09:16

Hi,

I was running a modified version of abaqus_IO to allow it to read more general Abaqus cae files. I was able to build it without any trouble up to the last version of libMesh (0.93pre?). However, with the current version, I get the following errors:

src/abaqus_io.C:52:12: error: ‘ElemType’ was not declared in this scope
   std::map<ElemType, ElementDefinition> eletypes;
            ^
src/abaqus_io.C:52:12: note: suggested alternative:
In file included from /home/ssadasiv/software/libmesh_builds/libmesh_opt_new/include/libmesh/elem.h:29:0,
                 from src/abaqus_io.C:27:
/home/ssadasiv/software/libmesh_builds/libmesh_opt_new/include/libmesh/enum_elem_type.h:30:6: note: ‘libMesh::ElemType’
 enum ElemType {EDGE2=0, // 0
      ^
src/abaqus_io.C:52:39: error: template argument 1 is invalid
   std::map<ElemType, ElementDefinition> eletypes;
                                       ^

I can't see any difference in the code. The changes are not in these lines, so I am not sure exactly why this is happening. I have attached my modified version of the AbaqusIO.C and AbaqusIO.h.

Thanks,
Subramanya
From: subramanya sadasiva <potaman@ou...> - 2014-05-28 18:00:00

I think I've figured out what the problem is. My mesh has two sidesets (from Abaqus) that have the same elements and nodes. This seems to confuse mesh->boundary_info->has_boundary_info(elem, side, b_id), and it returns false for both of them.

Subramanya

> Date: Wed, 28 May 2014 12:43:21 -0500
> From: roystgnr@...
> To: potaman@...
> Subject: RE: [Libmesh-users] dirichlet boundaries not being applied.
>
> On Wed, 28 May 2014, subramanya sadasiva wrote:
>
> > I can't find print constraints, but print_info from the dofmap does
> > show the constraints
>
> Sorry, it's print_dof_constraints(), not just print_constraints().
> ---
> Roy
From: Vikram Garg <vikram.garg@gm...> - 2014-05-28 16:54:30

Hey Walter, Have you tried using just lu or superlu? You might also want to check the output of -ksp_monitor_singular_value and increase the GMRES restart steps. Thanks. On Mon, May 26, 2014 at 1:57 PM, walter kou <walter4code@...> wrote:
> Hi all,
> I ran a larger case with elements = 200, and found that for each calculation in
> the iteration, system.final_linear_residual() is about 0.5%.
>
> 1) Is the system.final_linear_residual() r = b - A X* (X* is the solution)?
> Right?
>
> 2) It seems the final residual is too big, and the equation is not solved well
> (here b is about 1e4). Does anyone have suggestions for playing with
> solvers of Ax=b?
> Here my case is nonlinear elasticity, and A is almost symmetric
> positive definite (only components influenced by boundary conditions will
> break the symmetry).
>
> Also, following the suggestion of Paul, I used -ksp_view and found my solver
> information is as below:
>
> KSP Object: 4 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>       Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=250
>   tolerances: relative=1e-08, absolute=1e-50, divergence=10000
>   left preconditioning
>   using nonzero initial guess
>   using PRECONDITIONED norm type for convergence test
> PC Object: 4 MPI processes
>   type: bjacobi
>     block Jacobi: number of blocks = 4
>     Local solve is same for all blocks, in the following KSP and PC objects:
>   KSP Object: (sub_) 1 MPI processes
>     type: preonly
>     maximum iterations=10000, initial guess is zero
>     tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>     left preconditioning
>     using NONE norm type for convergence test
>   PC Object: (sub_) 1 MPI processes
>     type: ilu
>       ILU: out-of-place factorization
>       0 levels of fill
>       tolerance for zero pivot 2.22045e-14
>       using diagonal shift to prevent zero pivot
>       matrix ordering: natural
>       factor fill ratio given 1, needed 1
>       Factored matrix follows:
>         Matrix Object: 1 MPI processes
>           type: seqaij
>           rows=324, cols=324
>           package used to perform factorization: petsc
>           total: nonzeros=16128, allocated nonzeros=16128
>           total number of mallocs used during MatSetValues calls =0
>             not using I-node routines
>     linear system matrix = precond matrix:
>     Matrix Object: () 1 MPI processes
>       type: seqaij
>       rows=324, cols=324
>       total: nonzeros=16128, allocated nonzeros=19215
>       total number of mallocs used during MatSetValues calls =0
>         not using I-node routines
>   linear system matrix = precond matrix:
>   Matrix Object: () 4 MPI processes
>     type: mpiaij
>     rows=990, cols=990
>     total: nonzeros=58590, allocated nonzeros=64512
>     total number of mallocs used during MatSetValues calls =0
>       not using I-node (on process 0) routines
>
> /*********************************************************
> Thanks,
>
> Walter
>
> On Thu, May 22, 2014 at 12:18 PM, Paul T. Bauman <ptbauman@...> wrote:
>>
>> On Thu, May 22, 2014 at 12:11 PM, walter kou <walter4code@...> wrote:
>>
>>> OK, but libMesh calls a library, defaulting to PETSc if it's installed.
>>> Which library are you using?
>>>
>>> PETSc-3.3
>>
>> I recommend checking out the PETSc documentation
>> (http://www.mcs.anl.gov/petsc/petsc-as/documentation/) and tutorials. But
>> you'll want to start with -ksp_view to get the parameters PETSc is using.
>
> ------------------------------------------------------------------------------
> The best possible search technologies are now affordable for all companies.
> Download your FREE open source Enterprise Search Engine today!
> Our experts will assist you in its installation for $59/mo, no commitment.
> Test it for FREE on our Cloud platform anytime!
> http://pubads.g.doubleclick.net/gampad/clk?id=145328191&iu=/4140/ostg.clktrk
> _______________________________________________
> Libmesh-users mailing list
> Libmesh-users@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users

--
Vikram Garg
Postdoctoral Associate
Center for Computational Engineering
Massachusetts Institute of Technology
http://web.mit.edu/vikramvg/www/
http://www.runforindia.org/runners/vikramg
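[Editor's note on interpreting the thread above: with left preconditioning, GMRES's convergence test uses the preconditioned residual norm, so system.final_linear_residual() need not match the true residual ||b - A x|| that Walter writes in point 1). A minimal pure-Python sketch of that check — the 2x2 system is invented for illustration; this is not libMesh or PETSc code:]

```python
# Sketch: check a computed solution's true relative residual
# ||b - A x|| / ||b|| against a solver's convergence claim.
# Pure Python; the 2x2 system is chosen arbitrarily for illustration.

def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def norm(v):
    return sum(v_i * v_i for v_i in v) ** 0.5

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]

# Exact solve of the 2x2 system by Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
     (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# True residual r = b - A x, relative to ||b|| -- the quantity Walter
# asks about; an iterative solver's reported residual may differ.
r = [b_i - ax_i for b_i, ax_i in zip(b, matvec(A, x))]
rel_residual = norm(r) / norm(b)
print(rel_residual)
```

[With an exact solve the relative residual is at machine-precision level; for Walter's case, a 0.5% relative true residual with rtol=1e-08 would suggest the reported norm and the true norm disagree, which PETSc's -ksp_monitor_true_residual option can confirm.]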
From: Roy Stogner <roystgnr@ic...> - 2014-05-28 15:26:02

On Wed, 28 May 2014, subramanya sadasiva wrote: > I am trying to use Dirichlet boundaries to apply a Dirichlet > boundary condition on an internal boundary. This doesn't seem to be working > and I seem to keep getting 0. I have heterogeneously constrained the > element matrix and vector, but the Dirichlet boundary conditions > just don't seem to be called. Any ideas? When you print_constraints(), do the Dirichlet constraints have nonzero right-hand sides? - Roy
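[Editor's note for readers following this exchange: Roy's question about nonzero right-hand sides refers to inhomogeneous ("heterogeneous") constraints — a prescribed Dirichlet value g appears as the right-hand side of the constraint row, and heterogeneously constraining an element system moves the known contribution onto the load vector. A toy sketch of that elimination in the usual textbook form; this is NOT libMesh's DofMap code, and the function name is made up:]

```python
# Sketch: impose an inhomogeneous Dirichlet constraint u_dof = g on a
# small system K u = f by elimination. Illustrative only; not libMesh code.

def apply_dirichlet(K, f, dof, g):
    n = len(f)
    K = [row[:] for row in K]   # work on copies
    f = f[:]
    for i in range(n):
        if i != dof:
            f[i] -= K[i][dof] * g   # move the known value to the RHS
            K[i][dof] = 0.0
            K[dof][i] = 0.0
    K[dof][dof] = 1.0
    f[dof] = g                      # constraint row: 1 * u_dof = g
    return K, f

# Two-dof toy system with zero load.
K = [[2.0, -1.0],
     [-1.0, 2.0]]
f = [0.0, 0.0]
K2, f2 = apply_dirichlet(K, f, dof=1, g=1.0)

# Even with f = 0, the constrained RHS is nonzero -- this is the
# "nonzero right hand sides" signature Roy asks about.
u0 = f2[0] / K2[0][0]
print(f2, u0)
```

[If print_constraints() shows zero right-hand sides for a nonzero boundary value, the inhomogeneity is likely not reaching the constraint rows.]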
From: subramanya sadasiva <potaman@ou...> - 2014-05-28 14:49:18

I am trying to use Dirichlet boundaries to apply a Dirichlet boundary condition on an internal boundary. This doesn't seem to be working and I seem to keep getting 0. I have heterogeneously constrained the element matrix and vector, but the Dirichlet boundary conditions just don't seem to be called. Any ideas? Subramanya
From: Paul T. Bauman <ptbauman@gm...> - 2014-05-28 14:18:18

On Tue, May 27, 2014 at 5:02 PM, Miguel Angel Salazar de Troya < salazardetroya@...> wrote: > > I am going to try to implement these functions with the LAGRANGE_VEC > because it would work better for me. > OK, great! > So I have the first questions: > > Function compute_proj_constraints. I understand it imposes a constraint on > the hanging nodes so they have a value related to their side neighboring > nodes. How are they related? Linear interpolation? > They are related by enforcing continuity. You'll see that, for example, C1 continuous elements enforce both function and derivative continuity. For HDiv and HCurl, it'll be different (normal and tangential, respectively). > Do you know of a paper or document where this projection is mathematically > written? > Closest I've seen is in Leszek Demkowicz's books (but those are really focused on his code). You might also look at some of Abani Patra's papers (oriented towards hp, but may still be useful). @roystgnr, @benkirk, and @jwpeterson might also have some suggestions. What's done is adding an equation to the global system that enforces this continuity. Have a look at src/fe/fe_lagrange.C, lagrange_compute_constraints. You see that dofs get inserted into the constraint row. If you wanted to generalize this for LAGRANGE_VEC, I wouldn't be opposed; IIRC, I was going to try and write the compute_proj_constraints to be more general and have everyone call that, but it may be simpler for you to just do an analog of the Lagrange case. @roystgnr, opinions? > With regards to this issue, could it be easier to do adaptive mesh > refinement on unstructured grids? Does libmesh support this? > This is what libMesh does. I'm not sure I understand the question. Best, Paul 
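[Editor's note: for the simplest case Paul describes, Miguel's guess of "linear interpolation?" is right — for first-order Lagrange elements the hanging-node constraint that enforces continuity is exactly linear interpolation along the coarse edge. Higher orders and the vector-valued/HDiv/HCurl families need the more general projection. A toy sketch of the p=1 case, with coefficients taken from the 1D shape functions; this is not libMesh's compute_proj_constraints, and the helper names are made up:]

```python
# Sketch: hanging-node constraint for linear (p=1) Lagrange elements.
# Continuity across a refined edge forces the hanging node's value to be
# the linear interpolant of the coarse edge's endpoint values. For higher
# p, or for HDiv/HCurl elements, the constraint coefficients differ --
# that generalization is what compute_proj_constraints handles.

def edge_shape_functions(s):
    """1D linear Lagrange shape functions on the reference edge [0, 1]."""
    return [1.0 - s, s]

def hanging_node_coefficients(s_hanging=0.5):
    """Coefficients c_i in the constraint u_hanging = sum_i c_i * u_i."""
    return edge_shape_functions(s_hanging)

u_endpoints = [2.0, 6.0]            # coarse-side values at the edge ends
c = hanging_node_coefficients()     # hanging node at the edge midpoint
u_hanging = sum(ci * ui for ci, ui in zip(c, u_endpoints))
print(c, u_hanging)
```

[In libMesh terms, the pair (c, inhomogeneity) is one row of the global constraint matrix, which is what lagrange_compute_constraints inserts for each hanging dof.]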
From: Miguel Angel Salazar de Troya <salazardetroya@gm...> - 2014-05-27 21:02:57

Thanks for your response. I am going to try to implement these functions with LAGRANGE_VEC because it would work better for me. So I have my first questions: Function compute_proj_constraints. I understand it imposes a constraint on the hanging nodes so they have a value related to their side neighboring nodes. How are they related? Linear interpolation? Do you know of a paper or document where this projection is mathematically written? With regards to this issue, could it be easier to do adaptive mesh refinement on unstructured grids? Does libmesh support this? Miguel On Tue, May 27, 2014 at 11:48 AM, Paul T. Bauman <ptbauman@...> wrote: > On Tue, May 27, 2014 at 12:45 PM, Miguel Angel Salazar de Troya < > salazardetroya@...> wrote: > >> Thanks for your response. I would like to, but it might be in over my >> head since I am just starting with libmesh. >> > > Everyone has to start somewhere. ;) I started by refactoring finite > element classes and adding LAGRANGE_VEC. :) > > >> So far, is the best alternative to use AMR in an elasticity problem >> using a system of equations? >> > > Instead of using vector-valued elements for the (I'm assuming) > displacement field, have each component (u,v,w) be a LAGRANGE variable. You > should be able to do the same formulation (since you're doing LAGRANGE and > not anything fancy), you'll just have to either 1. populate data structures > locally for the tensor operations or 2. manually do the tensor operations. > > In fact, I believe one of the examples is an elasticity example... > > HTH, > > Paul > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11@...
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk@na...> - 2014-05-27 18:52:42

Try copying the makefile from one of the examples then and using it for your application. They are all the same. $ find $LIBMESH_ROOT/examples -name Makefile On May 27, 2014, at 10:53 AM, walter kou <walter4code@...> wrote: > Thanks for your message, I did "make install", and I did find the Make.common in the default installed place: /usr/local > > However, I tried to use this makefile and get many linkage errors such as: > /usr/local/lib/libmesh_opt.a(libopt_la-ex_put_loadbal_param_cc.o): In function `ex_put_loadbal_param_cc': > ex_put_loadbal_param_cc.c:(.text+0xf0): undefined reference to `nc_inq_format' > > My makefile is like: > ///////////////////////////////////////////////////////// makefile ///////////////////////////////////// > LIBMESH_DIR = /usr/local > include $(LIBMESH_DIR)/Make.common > > ############################################################################### > > # source files > srcfiles := $(wildcard *.C) > > # > # object files > objects := $(patsubst %.C, %.$(obj-suffix), $(srcfiles)) > ############################################################################### > > target := ./test > > > all:: $(target) > > # Production rules: how to make the target - depends on library configuration > $(target): $(objects) > @echo "Linking "$@"..." > @$(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ $(libmesh_LIBS) $(libmesh_LDFLAGS) > > > /////////////////////////////////////////////////////////////////////////////// makefile /////////////////////////////// > By the way, I could run the examples. > > > > > On Tue, May 27, 2014 at 10:33 AM, Kirk, Benjamin (JSC-EG311) <benjamin.kirk@...> wrote: > Did you do a 'make install'? > > Make.common should exist in the install directory. > > Ben > > > On May 27, 2014, at 10:27 AM, walter kou <walter4code@...> wrote: > >> Hi Roy, >> >> Thanks for your message, what do you mean by "Make.common still exists"? I >> do see Make.common in several subfolders such as /examples, /contrib.
But >> I do not see Make.common in the parent folder that version 0.8.0 called. >> >> Walter >> >> >> On Mon, May 26, 2014 at 10:57 PM, Roy Stogner <roystgnr@...>wrote: >> >>> >>> On Mon, 26 May 2014, walter kou wrote: >>> >>> Recently, I switched from libmesh-0.8.0 to libmesh-0.9.3, and found >>>> the Make.common-based way in 0.8.0 did not work for 0.9.3. >>>> >>>> Is there an example/instruction on writing a Makefile for libmesh-0.9.3, if >>>> one wants to include new code? >>>> >>> >>> Make.common still exists. For me the only change required to my app >>> makefiles >>> was that I now use libtool (for some reason which I don't remember a year >>> later): >>> >>> - @$(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ >>> $(libmesh_LIBS) $(libmesh_LDFLAGS) $(EXTERNAL_FLAGS) >>> + @$(libmesh_LIBTOOL) --tag=CXX $(LIBTOOLFLAGS) --mode=link >>> $(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ $(libmesh_LIBS) >>> $(libmesh_LDFLAGS) $(EXTERNAL_FLAGS) >>> >>> --- >>> Roy
From: Paul T. Bauman <ptbauman@gm...> - 2014-05-27 16:49:05

On Tue, May 27, 2014 at 12:45 PM, Miguel Angel Salazar de Troya < salazardetroya@...> wrote: > Thanks for your response. I would like to, but it might be in over my head > since I am just starting with libmesh. > Everyone has to start somewhere. ;) I started by refactoring finite element classes and adding LAGRANGE_VEC. :) > So far, is the best alternative to use AMR in an elasticity problem > using a system of equations? > Instead of using vector-valued elements for the (I'm assuming) displacement field, have each component (u,v,w) be a LAGRANGE variable. You should be able to do the same formulation (since you're doing LAGRANGE and not anything fancy), you'll just have to either 1. populate data structures locally for the tensor operations or 2. manually do the tensor operations. In fact, I believe one of the examples is an elasticity example... HTH, Paul
From: Miguel Angel Salazar de Troya <salazardetroya@gm...> - 2014-05-27 16:45:31

Thanks for your response. I would like to, but it might be in over my head since I am just starting with libmesh. So far, is the best alternative to use AMR in an elasticity problem using a system of equations? Miguel On Tue, May 27, 2014 at 11:31 AM, Paul T. Bauman <ptbauman@...> wrote: > On Tue, May 27, 2014 at 12:27 PM, John Peterson <jwpeterson@...>wrote: > >> On Tue, May 27, 2014 at 10:21 AM, Miguel Angel Salazar de Troya < >> salazardetroya@...> wrote: >> >> > Hello >> > >> > I want to use the AMR in a problem with LAGRANGE_VEC elements, but I >> would >> > like to know if this is possible. I tried to run >> equation_systems.reinit () >> > >> >> It might be possible, though I don't know if Paul has ever tested that. > > > Unfortunately, it is not currently possible to do mesh refinement with > LAGRANGE_VEC (or any of the vector-valued elements). The main bits to be > implemented (updated) are compute_proj_constraints, coarsened_dof_values, > and periodic_constraints (if you want periodic boundary conditions). These > will get updated by me eventually when it becomes a wall for me, but > there's no telling when that will happen. > > I can help guide you in implementing this if you feel like digging into > the code. > > Sorry, > > Paul > -- *Miguel Angel Salazar de Troya* Graduate Research Assistant Department of Mechanical Science and Engineering University of Illinois at Urbana-Champaign (217) 550-2360 salaza11@...
From: Paul T. Bauman <ptbauman@gm...> - 2014-05-27 16:31:54

On Tue, May 27, 2014 at 12:27 PM, John Peterson <jwpeterson@...>wrote: > On Tue, May 27, 2014 at 10:21 AM, Miguel Angel Salazar de Troya < > salazardetroya@...> wrote: > > > Hello > > > > I want to use the AMR in a problem with LAGRANGE_VEC elements, but I > would > > like to know if this is possible. I tried to run equation_systems.reinit > () > > > > It might be possible, though I don't know if Paul has ever tested that. Unfortunately, it is not currently possible to do mesh refinement with LAGRANGE_VEC (or any of the vector-valued elements). The main bits to be implemented (updated) are compute_proj_constraints, coarsened_dof_values, and periodic_constraints (if you want periodic boundary conditions). These will get updated by me eventually when it becomes a wall for me, but there's no telling when that will happen. I can help guide you in implementing this if you feel like digging into the code. Sorry, Paul
From: John Peterson <jwpeterson@gm...> - 2014-05-27 16:27:32

On Tue, May 27, 2014 at 10:21 AM, Miguel Angel Salazar de Troya < salazardetroya@...> wrote: > Hello > > I want to use the AMR in a problem with LAGRANGE_VEC elements, but I would > like to know if this is possible. I tried to run equation_systems.reinit () > It might be possible, though I don't know if Paul has ever tested that. > as a first step before writing the rest of the AMR code like in example > adaptivity_ex3, but I had this error: > > [1] src/fe/fe_base.C, line 431, compiled May 21 2014 at 12:15:20 > ERROR: Bad FEType.family= 41 > [0] src/fe/fe_base.C, line 431, compiled May 21 2014 at 12:15:20 > The problem is that you are trying to call FEBase::build with a vector FE type. You have to call FEVectorBase::build instead: AutoPtr<FEVectorBase> fe (FEVectorBase::build(dim, fe_type)); See examples/vector_fe/* for more vector FE code...  John 
From: Miguel Angel Salazar de Troya <salazardetroya@gm...> - 2014-05-27 16:21:59

Hello, I want to use the AMR in a problem with LAGRANGE_VEC elements, but I would like to know if this is possible. I tried to run equation_systems.reinit () as a first step before writing the rest of the AMR code like in example adaptivity_ex3, but I had this error:

[1] src/fe/fe_base.C, line 431, compiled May 21 2014 at 12:15:20
ERROR: Bad FEType.family= 41
[0] src/fe/fe_base.C, line 431, compiled May 21 2014 at 12:15:20
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 16383 on
node miguel-Precision-WorkStation-T3500 exiting improperly. There are two
reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

Thanks in advance

Miguel

--
*Miguel Angel Salazar de Troya*
Graduate Research Assistant
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign
(217) 550-2360
salaza11@...
From: walter kou <walter4code@gm...> - 2014-05-27 15:53:56

Thanks for your message, I did "make install", and I did find the Make.common in the default installed place: /usr/local However, I tried to use this makefile and get many linkage errors such as: /usr/local/lib/libmesh_opt.a(libopt_la-ex_put_loadbal_param_cc.o): In function `ex_put_loadbal_param_cc': ex_put_loadbal_param_cc.c:(.text+0xf0): undefined reference to `nc_inq_format' My makefile is like: ///////////////////////////////////////////////////////// makefile ///////////////////////////////////// LIBMESH_DIR = /usr/local include $(LIBMESH_DIR)/Make.common ############################################################################### # source files srcfiles := $(wildcard *.C) # # object files objects := $(patsubst %.C, %.$(obj-suffix), $(srcfiles)) ############################################################################### target := ./test all:: $(target) # Production rules: how to make the target - depends on library configuration $(target): $(objects) @echo "Linking "$@"..." @$(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ $(libmesh_LIBS) $(libmesh_LDFLAGS) /////////////////////////////////////////////////////////////////////////////// makefile /////////////////////////////// By the way, I could run the examples. On Tue, May 27, 2014 at 10:33 AM, Kirk, Benjamin (JSC-EG311) < benjamin.kirk@...> wrote: > Did you do a 'make install'? > > Make.common should exist in the install directory. > > Ben > > > On May 27, 2014, at 10:27 AM, walter kou <walter4code@...> wrote: > > > Hi Roy, > > > > Thanks for your message, what do you mean by "Make.common still exists"? > I > > do see Make.common in several subfolders such as /examples, /contrib. > But > > I do not see Make.common in the parent folder that version 0.8.0 called. > > > > Walter > > > > > > On Mon, May 26, 2014 at 10:57 PM, Roy Stogner <roystgnr@...
> >wrote: > > > >> > >> On Mon, 26 May 2014, walter kou wrote: > >> > >> Recently, I switched from libmesh-0.8.0 to libmesh-0.9.3, and found > >>> the Make.common-based way in 0.8.0 did not work for 0.9.3. > >>> > >>> Is there an example/instruction on writing a Makefile for libmesh-0.9.3, > if > >>> one wants to include new code? > >>> > >> > >> Make.common still exists. For me the only change required to my app > >> makefiles > >> was that I now use libtool (for some reason which I don't remember a > year > >> later): > >> > >> - @$(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ > >> $(libmesh_LIBS) $(libmesh_LDFLAGS) $(EXTERNAL_FLAGS) > >> + @$(libmesh_LIBTOOL) --tag=CXX $(LIBTOOLFLAGS) --mode=link > >> $(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ $(libmesh_LIBS) > >> $(libmesh_LDFLAGS) $(EXTERNAL_FLAGS) > >> > >> --- > >> Roy > >>
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk@na...> - 2014-05-27 15:33:41

Did you do a 'make install'? Make.common should exist in the install directory. Ben On May 27, 2014, at 10:27 AM, walter kou <walter4code@...> wrote: > Hi Roy, > > Thanks for your message, what do you mean by "Make.common still exists"? I > do see Make.common in several subfolders such as /examples, /contrib. But > I do not see Make.common in the parent folder that version 0.8.0 called. > > Walter > > > On Mon, May 26, 2014 at 10:57 PM, Roy Stogner <roystgnr@...>wrote: > >> >> On Mon, 26 May 2014, walter kou wrote: >> >> Recently, I switched from libmesh-0.8.0 to libmesh-0.9.3, and found >>> the Make.common-based way in 0.8.0 did not work for 0.9.3. >>> >>> Is there an example/instruction on writing a Makefile for libmesh-0.9.3, if >>> one wants to include new code? >>> >> >> Make.common still exists. For me the only change required to my app >> makefiles >> was that I now use libtool (for some reason which I don't remember a year >> later): >> >> - @$(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ >> $(libmesh_LIBS) $(libmesh_LDFLAGS) $(EXTERNAL_FLAGS) >> + @$(libmesh_LIBTOOL) --tag=CXX $(LIBTOOLFLAGS) --mode=link >> $(libmesh_CXX) $(libmesh_CXXFLAGS) $(objects) -o $@ $(libmesh_LIBS) >> $(libmesh_LDFLAGS) >> $(EXTERNAL_FLAGS) >> >> --- >> Roy >>