From: Roy Stogner <roystgnr@ic...> - 2009-03-03 22:36:28

On Tue, 3 Mar 2009, Andrea Hawkins wrote:

> So, in using this fine feature I believe I have found a bug.

I think so too. But before we get into the technical details, I think
it's a good idea to step back, look at the big picture, and focus on
what's important: svn blame says Ben did it.

(yes, after I've had too many incidents of committing insufficiently
tested code, my first reaction upon finding a bug is rushing to svn to
see whether or not it was one of mine...)

> When calling variable_first and variable_last they both return a value
> stored in a vector _var_first_local_df. In looking at these routines, it
> appears that _var_first_local_df should be a vector of length # of vars,
> containing only the first dof for each.

n_vars + 1 -- we leave the total number of dofs in there too, to make
variable_last(n_vars) more straightforward. That's probably what that
second push_back was trying to do; it just should have gone outside of
the loop.

> In looking where it gets defined in dof_map.C, in
> DofMap::distribute_local_dofs_var_major, there is a push back at the
> beginning of the loop and at the end, which appears to account for the
> extra values.

That would do it. Thanks! Double-check that the following patch fixes
the problem? If so I'll commit it.
--
Roy

Index: src/base/dof_map.C
===================================================================
--- src/base/dof_map.C	(revision 3287)
+++ src/base/dof_map.C	(working copy)
@@ -900,9 +900,11 @@
 	      next_free_dof += elem->n_comp(sys_num,var);
 	    }
 	} // end loop on elements
-      _var_first_local_df.push_back(next_free_dof);
     } // end loop on variables
 
+  // Cache the last local dof number too
+  _var_first_local_df.push_back(next_free_dof);
+
 #ifdef DEBUG
   // Make sure we didn't miss any nodes
   MeshTools::libmesh_assert_valid_node_procids(mesh);

From: Andrea Hawkins <andjhawkins@gm...> - 2009-03-03 22:17:59

So, in using this fine feature I believe I have found a bug.

When calling variable_first and variable_last they both return a value
stored in a vector _var_first_local_df. In looking at these routines,
it appears that _var_first_local_df should be a vector of length
(# of vars), containing only the first dof for each. However, if you
view the vector it has length 2*(# of vars) and the form
a, b, b, c, c, d, d, ...

In looking where it gets defined in dof_map.C, in
DofMap::distribute_local_dofs_var_major, there is a push_back at the
beginning of the loop and at the end, which appears to account for the
extra values.

Andrea

From: Roy Stogner <roystgnr@ic...> - 2009-03-03 20:54:57

On Tue, 3 Mar 2009, Andrea Hawkins wrote:

> Doesn't this come back to the same problem though? Or is there a way
> to locally get the global dofs for the individual variables?

Ah! I see the confusion: "variable_first_local_dof", etc. have
misleading names. They're supposed to return "the global index of the
first dof of this variable which is local to this processor". Just the
dof is local; the index isn't. We don't use local dof indexing in the
libMesh API.

Iterating on each processor between the first dof (inclusive) and the
last dof (non-inclusive) should be all you need to do.
--
Roy

From: Roy Stogner <roystgnr@ic...> - 2009-03-03 20:32:12

On Tue, 3 Mar 2009, Andrea Hawkins wrote:

> Is there a way to know how many processors are working with a given
> dof? (i.e. I could add a fraction with every processor such that it
> added up to one?)

Yes: the fraction is "1" on the processor that owns a dof, and "0" for
all the others. ;)

Even boundary dofs with coefficient data that *exists* on multiple
processors are still *owned* by only one proc. Just have each
processor identity-out its own rows and you're fine there... figuring
out what to do about the columns might be trickier, though; just put
up with the asymmetric matrix, since those columns' contributions will
be 0 anyways?
--
Roy

From: John Peterson <jwpeterson@gm...> - 2009-03-03 20:24:04

On Tue, Mar 3, 2009 at 2:12 PM, Andrea Hawkins <andjhawkins@...> wrote:

> Yes, a hack for a hack! =)
>
> Anyway... That explains my troubles!!
>
> So, if I'm running on multiple processors, then there is not a nice
> easy way to get all the dof's for a variable?

Not exactly easy, but could you get them on each processor and then
gather them?
--
John

From: STEPHANE TCHOUANMO <tchouanm@ms...> - 2009-03-03 20:15:28

Oops! Seems like you've got it Ben. Here is my Konsole output with
-info for a small case (9261 dofs).

[0] PetscInitialize(): PETSc successfully started: number of processors = 1
[0] PetscGetHostName(): Rejecting domainname, likely is NIS linux-stchouan.
[0] PetscInitialize(): Running on machine: linux-stchouan

 Mesh Information:
  mesh_dimension()=3
  spatial_dimension()=3
  n_nodes()=9261
  n_elem()=8000
  n_local_elem()=8000
  n_active_elem()=8000
  n_subdomains()=1
  n_processors()=1
  processor_id()=0

...
[0] VecScatterCreate(): Special case: sequential vector general scatter

 EquationSystems
  n_systems()=1
   System "dc"
    Type "TransientNonlinearImplicit"
    Variables="P"
    Finite Element Types="LAGRANGE"
    Approximation Orders="FIRST"
    n_dofs()=9261
    n_local_dofs()=9261
    n_constrained_dofs()=0
    n_vectors()=3

==> Solving time step 0, time = 0.01
...
[0] VecScatterCreate(): Special case: sequential vector general scatter
NL step 0, residual_2 = 4.119734e-05
[0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 2080374783
[0] PetscCommDuplicate(): returning tag 2147483632
[0] PetscCommDuplicate(): Using internal PETSc communicator 2080374784 2080374782
[0] PetscCommDuplicate(): returning tag 2147483641
[0] PetscCommDuplicate(): returning tag 2147483631
[0] VecScatterCreate(): Special case: sequential vector general scatter
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 9261 X 9261; storage space: 56570 unneeded, 226981 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 18286
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 27
[0] Mat_CheckInode(): Found 9261 nodes out of 9261 rows. Not using Inode routines
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 9261 X 9261; storage space: 0 unneeded, 226981 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 27
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 9261 X 9261; storage space: 0 unneeded, 226981 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 27

So I have: "Number of mallocs during MatSetValues() is 18286". That is
the only point where it's non-zero; the rest of the time it's zero.
The value probably increases exponentially with the number of unknowns
in the system. It also seems to depend on the type of elements I use.
Here I have hexes but I will further work with tetras.

What should I do to fix this, Ben? Thanks a lot.

Stephane

> From: benjamin.kirk1@...
> To: tchouanm@...
> CC: libmesh-users@...
> Date: Tue, 3 Mar 2009 13:42:24 -0600
> Subject: Re: [Libmesh-users] Petsc slowness on a single processor machine
>
> > Thanks Ben.
> >
> > I use mpich2-1.0.7.
> >
> > After a discussion with PETSc developers, the problem might come from
> > lots of allocation made by LibMesh within the call of PETSc. In fact
> > if you look at the PETSc log summary of the problem I solve, you can
> > clearly see that most of the time (more than 90%) is spent in the
> > SNESSolve stage. The KSPSolve stage for solving the linear system in
> > Newton takes at most 5% of the time. Actually, my problem is really
> > at the very first Newton iteration, which can last an hour out of a
> > 3-hour total time resolution. Here is the behavior I have:
> >
> > ==> Solving time step 0, time = 0.01
> > NL step 0, residual_2 = 5.346581e-05
> > .. 1 hour ..
> > NL step 1, residual_2 = 8.790777e-10
> >
> > ==> Solving time step 1, time = 2.000000e-02
> > NL step 0, residual_2 = 6.043076e-05
> > NL step 1, residual_2 = 9.936468e-10
> > ...
> > etc. until the end for a total CPU of 3 hours.
> >
> > Finally I always get the right solution but I don't understand the
> > sudden stop at the beginning. It might not be only VecScatterCreate
> > but I think it's a whole bunch of memory allocation that happens.
> >
> > What do you think?
>
> I think the problem is most definitely in the sparse matrix allocation.
> libMesh builds the graph of (what it thinks is) your sparse matrix so
> that the underlying PETSc data structures can be allocated perfectly.
> If for some reason the linear system you are assembling has a different
> structure than what we thought it would, insertions into the sparse
> matrix can be horrifically slow the first time you assemble the linear
> system.
>
> What you should look for is something like 'number of mallocs called
> during MatSetValues' when you run with -info. We want that to be 0.
> What is it on the first linear solve? What type of elements are you
> using?
>
> Ben

From: Andrea Hawkins <andjhawkins@gm...> - 2009-03-03 20:12:38

Yes, a hack for a hack! =)

Anyway... That explains my troubles!!

So, if I'm running on multiple processors, then there is not a nice
easy way to get all the dof's for a variable? Is there a way to know
how many processors are working with a given dof? (i.e. I could add a
fraction with every processor such that it added up to one?)

Andrea

On Tue, Mar 3, 2009 at 2:07 PM, Roy Stogner <roystgnr@...> wrote:
>
> On Tue, 3 Mar 2009, Andrea Hawkins wrote:
>
>> I'm trying to decouple one or two of the variables from the system
>> and so just insert a 1 on the diagonal corresponding to their dof.
>
> Ah, makes sense. I was worried that you might be having to use a hack
> to accomplish something that we ought to have a proper API for. But
> if you're using a hack to accomplish a hack, that's fine, I do that
> all the time. ;)
>
>> I had been under the impression that the dof's for each variable
>> were contiguous, but upon closer investigation they aren't. So, I
>> must have somehow turned on the node-major dofs? Where would that
>> have been set without me knowing?
>
> Nowhere -- although I plan to soon add a way to enable it in C++ code;
> right now it's only pulled from the command line.
>
> But keep in mind that the numbering isn't a choice between "first by
> node, then by variable" or "first by variable, then by node". It's
> "first by processor, then by node, then by variable" or "first by
> processor, then by variable, then by node" (defaulting to the latter).
> We always order by processor first to simplify the linear algebra
> interface, so if you're running on more than one processor you'll
> have non-contiguous variable blocks.
> --
> Roy

From: Roy Stogner <roystgnr@ic...> - 2009-03-03 20:07:52

On Tue, 3 Mar 2009, Andrea Hawkins wrote:

> I'm trying to decouple one or two of the variables from the system
> and so just insert a 1 on the diagonal corresponding to their dof.

Ah, makes sense. I was worried that you might be having to use a hack
to accomplish something that we ought to have a proper API for. But
if you're using a hack to accomplish a hack, that's fine, I do that
all the time. ;)

> I had been under the impression that the dof's for each variable
> were contiguous, but upon closer investigation they aren't. So, I
> must have somehow turned on the node-major dofs? Where would that
> have been set without me knowing?

Nowhere -- although I plan to soon add a way to enable it in C++ code;
right now it's only pulled from the command line.

But keep in mind that the numbering isn't a choice between "first by
node, then by variable" or "first by variable, then by node". It's
"first by processor, then by node, then by variable" or "first by
processor, then by variable, then by node" (defaulting to the latter).
We always order by processor first to simplify the linear algebra
interface, so if you're running on more than one processor you'll
have non-contiguous variable blocks.
--
Roy

From: Roy Stogner <roystgnr@ic...> - 2009-03-03 19:56:00

On Tue, 3 Mar 2009, Andrea Hawkins wrote:

> I am working with a system of 8 variables and am wondering if there
> is an easy way to just get the global degrees of freedom associated
> with each variable. I know you can do it for each element, but is
> there a way to get them all at once?

I think DofMap::variable_first_local_dof(varnum) (and its last_local
counterpart) are what you want; with the default numbering scheme
that'll give you the processor-local contiguous block of dof indices
that are all for that one variable.

Or with the "node-major dofs" numbering scheme it'll give you invalid
ids... the workaround is "don't turn on node-major dofs".

What do you need them for?
--
Roy

From: Andrea Hawkins <andjhawkins@gm...> - 2009-03-03 19:43:32

Hello,

I am working with a system of 8 variables and am wondering if there is
an easy way to just get the global degrees of freedom associated with
each variable. I know you can do it for each element, but is there a
way to get them all at once?

Thanks!
Andrea

From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk1@na...> - 2009-03-03 19:42:29

> Thanks Ben.
>
> I use mpich2-1.0.7.
>
> After a discussion with PETSc developers, the problem might come from
> lots of allocation made by LibMesh within the call of PETSc. In fact
> if you look at the PETSc log summary of the problem I solve, you can
> clearly see that most of the time (more than 90%) is spent in the
> SNESSolve stage. The KSPSolve stage for solving the linear system in
> Newton takes at most 5% of the time. Actually, my problem is really
> at the very first Newton iteration, which can last an hour out of a
> 3-hour total time resolution. Here is the behavior I have:
>
> ==> Solving time step 0, time = 0.01
> NL step 0, residual_2 = 5.346581e-05
> .. 1 hour ..
> NL step 1, residual_2 = 8.790777e-10
>
> ==> Solving time step 1, time = 2.000000e-02
> NL step 0, residual_2 = 6.043076e-05
> NL step 1, residual_2 = 9.936468e-10
> ...
> etc. until the end for a total CPU of 3 hours.
>
> Finally I always get the right solution but I don't understand the
> sudden stop at the beginning. It might not be only VecScatterCreate
> but I think it's a whole bunch of memory allocation that happens.
>
> What do you think?

I think the problem is most definitely in the sparse matrix allocation.
libMesh builds the graph of (what it thinks is) your sparse matrix so
that the underlying PETSc data structures can be allocated perfectly.
If for some reason the linear system you are assembling has a different
structure than what we thought it would, insertions into the sparse
matrix can be horrifically slow the first time you assemble the linear
system.

What you should look for is something like 'number of mallocs called
during MatSetValues' when you run with -info. We want that to be 0.
What is it on the first linear solve? What type of elements are you
using?

Ben

From: STEPHANE TCHOUANMO <tchouanm@ms...> - 2009-03-03 19:34:36

Thanks Ben.

I use mpich2-1.0.7.

After a discussion with PETSc developers, the problem might come from
lots of allocation made by LibMesh within the call of PETSc. In fact
if you look at the PETSc log summary of the problem I solve, you can
clearly see that most of the time (more than 90%) is spent in the
SNESSolve stage. The KSPSolve stage for solving the linear system in
Newton takes at most 5% of the time. Actually, my problem is really at
the very first Newton iteration, which can last an hour out of a 3-hour
total time resolution. Here is the behavior I have:

==> Solving time step 0, time = 0.01
NL step 0, residual_2 = 5.346581e-05
.. 1 hour ..
NL step 1, residual_2 = 8.790777e-10

==> Solving time step 1, time = 2.000000e-02
NL step 0, residual_2 = 6.043076e-05
NL step 1, residual_2 = 9.936468e-10
...
etc. until the end for a total CPU of 3 hours.

Finally I always get the right solution but I don't understand the
sudden stop at the beginning. It might not be only VecScatterCreate
but I think it's a whole bunch of memory allocation that happens.

What do you think? Thanks.

Stephane

> From: benjamin.kirk1@...
> To: tchouanm@...
> CC: libmesh-users@...
> Date: Sun, 1 Mar 2009 14:53:25 -0600
> Subject: Re: [Libmesh-users] Petsc slowness on a single processor machine
>
> > Thanks for your answer Roy,
> >
> > I use libmesh-0.6.2 and petsc-2.3.3-p13 versions.
> > I sent a mail to petsc-users and we'll see what they say about that
> > VecScatterCreate.
> > Right now I'm checking what's going on in petsc_vector.C.
> > Hope to get the answer quick.
>
> What MPI are you using? VecScatterCreate should be particularly
> trivial on one processor, so this issue is perplexing...
>
> Ben

From: Roy Stogner <roystgnr@ic...> - 2009-03-03 01:54:06

On Mon, 2 Mar 2009, John Peterson wrote:

> On Mon, Mar 2, 2009 at 3:48 PM, yunfei zhu <yzhu2009@...> wrote:
>>
>> Considering a tri3, as following,
>>
>>        3
>>        *
>>      * c *
>>     *  *  *
>>    *       *
>>   * a*   *b *
>>  1 * * * * * 2
>>
>> points a,b,c are quadrature points.
>> (L[i] are the area coordinates)
>> p(0) is the area coordinate L[1] at a quadrature point.
>> p(1) is the area coordinate L[2] at a quadrature point.

John is right, and to be more specific, here is why: for our master
triangle, 1 is at the origin, 2 is at (1,0), 3 is at (0,1). So

  p(0) = L(2) and p(1) = L(3)
--
Roy

From: John Peterson <jwpeterson@gm...> - 2009-03-03 01:38:49

On Mon, Mar 2, 2009 at 3:48 PM, yunfei zhu <yzhu2009@...> wrote:

> Hi all,
> I am feeling a little confused about the following codes in the
> file fe_lagrange_shape_2D.C.
>
>   case TRI3:
>   case TRI6:
>     {
>       const Real zeta1 = p(0);
>       const Real zeta2 = p(1);
>       const Real zeta0 = 1. - zeta1 - zeta2;
>
>       libmesh_assert (i<3);
>
>       switch(i)
>         {
>         case 0:
>           return zeta0;
>
>         case 1:
>           return zeta1;
>
>         case 2:
>           return zeta2;
>
>         default:
>           libmesh_error();
>         }
>     }
>
> Considering a tri3, as following,
>
>        3
>        *
>      * c *
>     *  *  *
>    *       *
>   * a*   *b *
>  1 * * * * * 2
>
> points a,b,c are quadrature points.
> (L[i] are the area coordinates)
> p(0) is the area coordinate L[1] at a quadrature point.
> p(1) is the area coordinate L[2] at a quadrature point.
> so,
>   zeta1 = L[1]
>   zeta2 = L[2]
>   zeta0 = L[3]
>
> we have shape function: phi[i][a] = L[i][a]
>
> If i=0, we should have: phi[0][a] = L[1][a],
> but the code returns zeta0, which is L[3], not L[1].

zeta0 as given above is the correct basis function for node zero.
--
John
