From: Dmitry Karpeyev <karpeev@mc...> - 2013-06-10 15:51:35

It may be a good idea to combine this study with looking at the output of
log_summary (assuming PETSc is your solver engine) to see if there is a
correlation with the growth in communication or other components of matrix
assembly.

Dmitry.
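Dmitry's suggestion of correlating the slowdown with log_summary output can be
scripted. Below is a minimal sketch; the event-table layout, file contents,
and event names are illustrative assumptions (not taken from this thread), so
the regex will likely need adjusting for your PETSc version's actual format:

```python
import re

def event_times(log_text):
    """Pull {event_name: time_seconds} out of a PETSc log_summary event
    table.  Assumes rows shaped like 'Name  count  ratio  time ...',
    e.g. 'MatAssemblyEnd  12 1.0 3.4500e-01'; real logs have more
    columns, but the leading ones suffice for a first comparison."""
    times = {}
    for line in log_text.splitlines():
        m = re.match(r'^(\w+)\s+\d+\s+\d\.\d\s+([0-9.e+-]+)', line)
        if m:
            times[m.group(1)] = float(m.group(2))
    return times

# Illustrative fragment of an event table (made up for this sketch):
sample = """\
MatAssemblyEnd        12 1.0 3.4500e-01
VecScatterEnd         40 1.0 1.2000e-01
"""
print(event_times(sample))  # {'MatAssemblyEnd': 0.345, 'VecScatterEnd': 0.12}
```

Running the same case at 2, 4, 8, ... ranks with logging enabled and comparing
these per-event times should show whether communication-heavy events are the
ones growing with the process count.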
From: Ataollah Mesgarnejad <amesga1@ti...> - 2013-06-10 15:16:21

Cody,

I'm not sure if you saw the graph; I uploaded it again here:
https://dl.dropboxusercontent.com/u/19391830/scaling.jpg

In all these runs the NDOFs/processor is less than 10,000. What is bothering
me is that enforce_constraints_exactly takes up more and more time as the
number of processors grows for the same problem.

One explanation is that the NDOFs/processor is so low that communication time
becomes the bottleneck. That said, my main concern is that I'm using the
DirichletBC API badly and that this results in the bad scaling.

PS: Sorry for multiple copies; I forgot to CC the libMesh users list.

Best,
Ata
From: Cody Permann <codypermann@gm...> - 2013-06-10 14:48:45

Ata,

You might be scaling past the reasonable limit for libMesh. I don't know what
solver you are using, but for a strong scaling study we generally don't go
below 10,000 local DOFs. This is the recommended floor for PETSc too:
http://www.mcs.anl.gov/petsc/documentation/faq.html#slowerparallel

Before you start drawing conclusions about scaling, you might start with a
bigger problem and see if it scales well to the ~20,000 local DOF range.

Cody
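The strong-scaling question Cody raises is usually quantified with the
standard efficiency formula: relative to the smallest run, efficiency at p
processes is t_base * p_base / (p * t_p). A small sketch with made-up wall
times (illustrative numbers, not measurements from this thread):

```python
def parallel_efficiency(times):
    """times: {nprocs: wall_seconds} for the same fixed-size problem.
    Returns efficiency relative to the smallest process count; values
    near 1.0 mean good strong scaling, and a steep drop often means the
    local problem (DOFs per process) has become too small."""
    base_np = min(times)
    base_t = times[base_np]
    return {p: base_t * base_np / (p * t) for p, t in times.items()}

# Hypothetical strong-scaling data: same problem, increasing process count.
times = {1: 100.0, 2: 52.0, 4: 28.0, 8: 17.0}
eff = parallel_efficiency(times)
print({p: round(e, 3) for p, e in eff.items()})
# {1: 1.0, 2: 0.962, 4: 0.893, 8: 0.735}
```

A curve like this, dropping as the per-process DOF count shrinks, is exactly
the pattern the 10,000-local-DOF floor is meant to avoid: below it,
communication and constraint enforcement no longer amortize over local work.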
From: Ataollah Mesgarnejad <amesga1@ti...> - 2013-06-10 14:42:15

Dear all,

I've been doing some scaling tests on my code. When I look at the time (or %
of time) spent at each stage in the libMesh log, I see that the
enforce_constraints_exactly stage in DofMap scales very badly. I was
wondering if anyone can comment.

Here is my EquationSystems.print_info():

  EquationSystems
    n_systems()=2
    System #0, "elasticity_system"
      Type "TransientLinearImplicit"
      Variables={ "u" "v" }
      Finite Element Types="LAGRANGE", "JACOBI_20_00"
      Infinite Element Mapping="CARTESIAN"
      Approximation Orders="FIRST", "THIRD"
      n_dofs()=48660
      n_local_dofs()=930
      n_constrained_dofs()=1048
      n_local_constrained_dofs()=56
      n_vectors()=3
      n_matrices()=1
      DofMap Sparsity
        Average On-Processor Bandwidth <= 13.6478
        Average Off-Processor Bandwidth <= 0.904233
        Maximum On-Processor Bandwidth <= 20
        Maximum Off-Processor Bandwidth <= 16
      DofMap Constraints
        Number of DoF Constraints = 1048
        Average DoF Constraint Length= 0
        Number of Node Constraints = 0
    System #1, "fracture_system"
      Type "TransientNonlinearImplicit"
      Variables="psi"
      Finite Element Types="LAGRANGE", "JACOBI_20_00"
      Infinite Element Mapping="CARTESIAN"
      Approximation Orders="FIRST", "THIRD"
      n_dofs()=24330
      n_local_dofs()=465
      n_constrained_dofs()=167
      n_local_constrained_dofs()=0
      n_vectors()=3
      n_matrices()=1
      DofMap Sparsity
        Average On-Processor Bandwidth <= 6.82388
        Average Off-Processor Bandwidth <= 0.452117
        Maximum On-Processor Bandwidth <= 10
        Maximum Off-Processor Bandwidth <= 8
      DofMap Constraints
        Number of DoF Constraints = 167
        Average DoF Constraint Length= 0
        Number of Node Constraints = 0

and here is how the scaling looks for every stage that took > 1% of the time:

Best,
Ata