From: Kirk, Benjamin (JSCEG) <benjamin.kirk1@na...>  2008-05-12 17:34:53

Can you try running the code with implicit_neighbor_dofs on the command line? By default the matrix sparsity is computed assuming continuous shape functions, but for a DG formulation this is not a good assumption. Let me know if that works for you; if so, we can look at putting something more sophisticated in the library to detect this situation and handle it automatically.

Ben

________________________________
From: libmesh-users-bounces@... on behalf of Roy Stogner
Sent: Sun 5/11/2008 10:31 PM
To: luyi
Cc: libmeshusers@...
Subject: Re: [Libmesh-users] about the long time for the assemble of first step!

On Mon, 12 May 2008, luyi wrote:

> Now I write a DG programme on libmesh, when I use the FIRST or higher
> order basis, the first step is too slow,
> besides this everything is ok, I use XYZ basis and I check the code by
> perf_log, the time ratio is up to snuff.

The first time step (especially in a nonadaptive run) has some overhead that later timesteps don't, but it's also possible you've found a bug. If we're not preallocating the sparse matrix with the correct sparsity pattern, for example, PETSc might take quite a long time on the first assembly.

Could you run your code for only one time step and check the perf_log on that, to see what part of the code is taking too long?
---
Roy

_______________________________________________
Libmesh-users mailing list
Libmeshusers@...
https://lists.sourceforge.net/lists/listinfo/libmeshusers
From: John Peterson <jwpeterson@gm...>  2008-05-12 14:20:51

Hi,

Could you possibly re-send the timing results with the original/correct spacing? They're a little tough to read.

It looks like you are inserting two (or more?) different element matrices for each element, and it looks like the "neighbor matrix insertion" is taking the most time. This appears to be an atypical usage pattern which I don't understand (for example, is there only one neighbor per element?) and may be either a library bug or a bug in your implementation.

As Roy already mentioned, the most likely culprit in cases where matrix assembly takes too long is incorrect memory preallocation; your symptoms, combined with the fact that you are performing a nonstandard (at least for libMesh) assembly procedure, lead me to think that Roy is on the right track. Simply saying that

> I check the init function in petsc_matrix.C and it is ok, the matrix
> size is dofs X dofs before assemble,

is not really meaningful; one needs to know the number of nonzeros per row and the coupling between the different DoFs to be sure the sparsity pattern is set correctly.

I would say that you should post your code so that we can take a look, but I'm not sure anyone has enough time to give it such a detailed examination. What might be more helpful is a PDF describing the finite element method you are attempting to implement... then we could give a few pointers on what might be the right approach to take in libMesh.

Finally, slow performance aside, does your code actually produce good answers, or are we spending time debugging something that's not conceptually correct?

J

On Mon, May 12, 2008 at 12:41 AM, luyi <luyi06@...> wrote:
> Hello,
>
> I made a DG programme on libmesh; the problem is that the first step
> takes very long when I use a high-order basis. The performance log is
> as follows:
>
>   Matrix Assembly Performance: Alive time=683.179, Active time=682.44
>
>   Event                      nCalls     Total      Avg      Percent of
>                                          Time      Time     Active Time
>   ---------------------------------------------------------------------
>   Assemble init                   1    0.0001    0.000120     0.00
>   boundary matrix               134    0.0041    0.000030     0.00
>   elem init                    1470    0.0757    0.000051     0.01
>   interior matrix              4276    0.2926    0.000068     0.04
>   mass matrix                  5880    0.0607    0.000010     0.01
>   matrix insertion             1470  173.4288    0.117979    25.41
>   neighbor matrix insertion    4276  508.5776    0.118938    74.52
>   ---------------------------------------------------------------------
>   Totals:                     17507  682.4396               100.00
>
> Why does the matrix insertion spend more than 10 minutes when there
> are only 17460 dofs? The code looks like this:
>
>     perf_log.stop_event("interior matrix");
>
>     perf_log.start_event("neighbor matrix insertion");
>     euler_system.matrix->add_matrix(Kn, dof_indices, neighbor_dof_indices);
>     perf_log.stop_event("neighbor matrix insertion");
>
>     perf_log.start_event("matrix insertion");
>     euler_system.matrix->add_matrix(Ke, dof_indices);
>     euler_system.rhs->add_vector(Fe, dof_indices);
>     perf_log.stop_event("matrix insertion");
From: John Peterson <jwpeterson@gm...>  2008-05-12 13:54:13

Sounds like a reasonable change. I'd say go for it.

J

On Sun, May 11, 2008 at 10:57 PM, Roy Stogner <roy@...> wrote:
>
> In the process of fixing ParallelMesh bugs, I've decided that the name
> of the Elem::is_neighbor() method is a bit ambiguous. If
> elemA->is_neighbor(elemB) is true, does that mean that
> elemA->neighbor(some_direction) == elemB, or does it mean the converse?
> In a nonconforming adaptively refined mesh, neighborness isn't a
> symmetric relation.
>
> Since the implementation of the function is to return true if and only
> if elemA->neighbor(elemB), I've decided that has_neighbor() is a more
> intuitive name. I'll be committing that (in a batch with some
> ParallelMesh bugfixes) eventually. If anyone has a better name
> suggestion, or if anyone wants to see is_neighbor() kept around
> (temporarily or permanently) for API compatibility's sake, let me
> know.
> ---
> Roy
>
> _______________________________________________
> Libmesh-devel mailing list
> Libmeshdevel@...
> https://lists.sourceforge.net/lists/listinfo/libmeshdevel
From: luyi <luyi06@ma...>  2008-05-12 09:50:50

hello,

I checked the init function in petsc_matrix.C and it is ok; the matrix size is dofs X dofs before assemble. Is there any difference for a DG programme in "dof_map.compute_sparsity(this->get_mesh())"? I ran ex13-opt and it also needs more time in the first step. I am curious: when you run the SUPG programme with millions of dofs, how do you deal with this?

Thank you very much!

Luyi

2008-5-12
From: luyi <luyi06@ma...>  2008-05-12 05:41:48

Hello,

I made a DG programme on libmesh; the problem is that the first step takes very long when I use a high-order basis. The performance log is as follows:

  Matrix Assembly Performance: Alive time=683.179, Active time=682.44

  Event                      nCalls     Total      Avg      Percent of
                                         Time      Time     Active Time
  ---------------------------------------------------------------------
  Assemble init                   1    0.0001    0.000120     0.00
  boundary matrix               134    0.0041    0.000030     0.00
  elem init                    1470    0.0757    0.000051     0.01
  interior matrix              4276    0.2926    0.000068     0.04
  mass matrix                  5880    0.0607    0.000010     0.01
  matrix insertion             1470  173.4288    0.117979    25.41
  neighbor matrix insertion    4276  508.5776    0.118938    74.52
  ---------------------------------------------------------------------
  Totals:                     17507  682.4396               100.00

Why does the matrix insertion spend more than 10 minutes when there are only 17460 dofs? The code looks like this:

    perf_log.stop_event("interior matrix");

    perf_log.start_event("neighbor matrix insertion");
    euler_system.matrix->add_matrix(Kn, dof_indices, neighbor_dof_indices);
    perf_log.stop_event("neighbor matrix insertion");

    perf_log.start_event("matrix insertion");
    euler_system.matrix->add_matrix(Ke, dof_indices);
    euler_system.rhs->add_vector(Fe, dof_indices);
    perf_log.stop_event("matrix insertion");
From: Roy Stogner <roy@st...>  2008-05-12 03:57:16

In the process of fixing ParallelMesh bugs, I've decided that the name of the Elem::is_neighbor() method is a bit ambiguous. If elemA->is_neighbor(elemB) is true, does that mean that elemA->neighbor(some_direction) == elemB, or does it mean the converse? In a nonconforming adaptively refined mesh, neighborness isn't a symmetric relation.

Since the implementation of the function is to return true if and only if elemA->neighbor(elemB), I've decided that has_neighbor() is a more intuitive name. I'll be committing that (in a batch with some ParallelMesh bugfixes) eventually. If anyone has a better name suggestion, or if anyone wants to see is_neighbor() kept around (temporarily or permanently) for API compatibility's sake, let me know.

---
Roy
From: Roy Stogner <roy@st...>  2008-05-12 03:31:38

On Mon, 12 May 2008, luyi wrote:

> Now I write a DG programme on libmesh, when I use the FIRST or higher
> order basis, the first step is too slow,
> besides this everything is ok, I use XYZ basis and I check the code by
> perf_log, the time ratio is up to snuff.

The first time step (especially in a nonadaptive run) has some overhead that later timesteps don't, but it's also possible you've found a bug. If we're not preallocating the sparse matrix with the correct sparsity pattern, for example, PETSc might take quite a long time on the first assembly.

Could you run your code for only one time step and check the perf_log on that, to see what part of the code is taking too long?

---
Roy
From: luyi <luyi06@ma...>  2008-05-12 03:18:48

Hello,

I am now writing a DG programme on libmesh. When I use the FIRST or higher-order basis, the first step is too slow; besides this, everything is ok. I use the XYZ basis, and I checked the code with perf_log; the time ratios are up to snuff. I want to know: when you run a programme with millions of dofs, how long does the first step take?

Thank you!

Luyi