From: David Gohara <sdg0919@gm...> - 2007-10-30 15:13:03
Sorry... mail problems today... dime IS tied to the processor. So each
processor will have nx, ny, nz grid points (based on dime). If the
processor count is 1 (pdime 1 1 1) then the number of total grid points
is equal to dime. When you split the calculation up into multiple
processors each processor gets a grid dimensioned at the value of dime.
This is true of APBS 0.4.0.

Dave

On Oct 30, 2007, at 10:07 AM, David Gohara wrote:
>
> On Oct 30, 2007, at 8:24 AM, Mark Abraham wrote:
>
>> David Gohara wrote:
>>> Hi Mark,
>>>
>>> dime specifies the global dimension and pdime simply specifies the
>>> breakup in each direction (x,y,z). In the case of MPI it's the number
>>> of MPI tasks that will be generated in each dimension. For
>>> asynchronous mode (async) it will break the job into N x M x O tasks,
>>> but run them serially.
>>
>> Hi David,
>>
>> Thanks for the reply, but I don't think that's quite what I asked.
>> Your statement that "dime specifies the global dimension" doesn't tell
>> me whether "setting dime constrains the number of grid points per
>> processor or the number of grid points over the whole processor
>> array". Re-phrasing, when pdime != 1 1 1, I'm asking whether "global
>> dimension" is intensive or extensive with respect to the component
>> calculations?
>>
>> I now see in the 0.5.1 documentation that dime is the number of grid
>> points *per processor*. If that also applies to 0.4.0 then I'm happy.
>> If not, how does the break-up occur and where does it get output?
>>
>> Regards,
>>
>> Mark
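
As a back-of-the-envelope illustration of the answer above (not part of
APBS or of this thread), the short Python sketch below turns dime and
pdime into a per-process point count, a rough memory estimate, and the
total number of grid points being solved across all processes. The 200
bytes-per-grid-point figure is an assumed rule of thumb, and the total
simply multiplies the two settings, ignoring the overlap between
neighbouring partitions, so treat the numbers as order-of-magnitude only.

    # Rough sizing sketch for mg-para: per-process grid size comes directly
    # from dime; pdime only multiplies the number of processes.
    def mg_para_estimate(dime, pdime, bytes_per_point=200):
        """Estimate per-process and aggregate grid sizes for mg-para.

        dime  -- (nx, ny, nz) grid points PER PROCESSOR
        pdime -- (px, py, pz) processor decomposition
        bytes_per_point -- assumed rule-of-thumb memory per grid point
        """
        nx, ny, nz = dime
        px, py, pz = pdime
        per_proc_points = nx * ny * nz
        n_procs = px * py * pz
        # Counts overlap regions multiple times, so this is the total work
        # done, not the number of distinct points in the union of sub-grids.
        aggregate_points = per_proc_points * n_procs
        per_proc_mem_mb = per_proc_points * bytes_per_point / 1e6
        return n_procs, per_proc_points, aggregate_points, per_proc_mem_mb

    if __name__ == "__main__":
        procs, per_proc, total, mem_mb = mg_para_estimate((97, 97, 97), (2, 2, 2))
        print(f"{procs} processes x {per_proc} points each "
              f"(~{mem_mb:.0f} MB per process), {total} points of work in total")

The useful takeaway is the one made in the thread: memory per process is
set by dime alone, so adding processors via pdime increases the resolution
you can afford without increasing the footprint of any single process.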
From: Nathan Baker <baker@cc...> - 2007-10-30 15:12:39
Hi Mark -- I've updated the APBS user manual a bit to clarify this. This
update will be available with the next APBS release but I've attached it
here for reference. Sorry for the confusion.

-- Nathan
From: David Gohara <sdg0919@gm...> - 2007-10-30 15:07:18
Hi Mark,

Sorry I misspoke. dime IS tied to the processor. So each processor will
have nx, ny, nz grid points (based on dime). If the processor count is 1
(pdime 1 1 1) then the number of total grid points is equal to dime.

On Oct 30, 2007, at 8:24 AM, Mark Abraham wrote:
> David Gohara wrote:
>> Hi Mark,
>>
>> dime specifies the global dimension and pdime simply specifies the
>> breakup in each direction (x,y,z). In the case of MPI it's the number
>> of MPI tasks that will be generated in each dimension. For
>> asynchronous mode (async) it will break the job into N x M x O tasks,
>> but run them serially.
>
> Hi David,
>
> Thanks for the reply, but I don't think that's quite what I asked. Your
> statement that "dime specifies the global dimension" doesn't tell me
> whether "setting dime constrains the number of grid points per
> processor or the number of grid points over the whole processor array".
> Re-phrasing, when pdime != 1 1 1, I'm asking whether "global dimension"
> is intensive or extensive with respect to the component calculations?
>
> I now see in the 0.5.1 documentation that dime is the number of grid
> points *per processor*. If that also applies to 0.4.0 then I'm happy.
> If not, how does the break-up occur and where does it get output?
>
> Regards,
>
> Mark
From: Gernot Kieseritzky <gernotf@ch...> - 2007-10-30 14:08:14
Hi!

Mark Abraham wrote:
> Hi,
>
> When using mg-para with (for example) "pdime 2 2 2", does setting dime
> constrain the number of grid points on each processor, or the number of
> grid points over the whole processor array? The User Guide doesn't seem
> to imply either of them.

In the context of mg-para the "dime" parameter gives the number of grid
points per processor. It is explained on
http://apbs.sourceforge.net/doc/user-guide/index.html#mg-para :

"Please note that some of the parameters change in meaning a bit for this
type of calculation. In particular, dime should be interpreted as the
number of grid points per processor. This interpretation helps manage the
amount of memory per-processor -- generally the limiting resource for
most calculations."

You can use the "tools/manip/psize.py" tool to find a reasonable
combination of pdime, dime and grid resolution.

Greetings,
Gernot Kieseritzky
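
For readers new to mg-para, a minimal ELEC-section fragment looks roughly
like the sketch below. The mg-para, dime, and pdime keywords come from the
thread and the user guide; the numeric values and the ofrac keyword (the
overlap fraction between neighbouring partitions) are illustrative
assumptions rather than recommendations, and a complete input also needs
the usual boundary-condition, dielectric, and output keywords.

    # Illustrative fragment only -- values are placeholders.
    elec
        mg-para
        dime  97 97 97    # grid points PER PROCESSOR (per the user guide excerpt above)
        pdime  2  2  2    # 2 x 2 x 2 = 8 processors, or 8 serial tasks with async
        ofrac 0.1         # assumed keyword: overlap between neighbouring partitions
        # ... remaining physics and solver keywords omitted
    end

With a decomposition like pdime 2 2 2, the per-processor reading of dime
means each of the eight tasks solves its own 97 x 97 x 97 mesh, which is
why dime, not the overall problem size, governs memory per process.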
From: Mark Abraham <Mark.Abraham@an...> - 2007-10-30 13:24:59
David Gohara wrote:
> Hi Mark,
>
> dime specifies the global dimension and pdime simply specifies the
> breakup in each direction (x,y,z). In the case of MPI it's the number
> of MPI tasks that will be generated in each dimension. For asynchronous
> mode (async) it will break the job into N x M x O tasks, but run them
> serially.

Hi David,

Thanks for the reply, but I don't think that's quite what I asked. Your
statement that "dime specifies the global dimension" doesn't tell me
whether "setting dime constrains the number of grid points per processor
or the number of grid points over the whole processor array".
Re-phrasing, when pdime != 1 1 1, I'm asking whether "global dimension"
is intensive or extensive with respect to the component calculations?

I now see in the 0.5.1 documentation that dime is the number of grid
points *per processor*. If that also applies to 0.4.0 then I'm happy. If
not, how does the break-up occur and where does it get output?

Regards,

Mark
From: David Gohara <sdg0919@gm...> - 2007-10-30 09:23:36
Hi Mark,

dime specifies the global dimension and pdime simply specifies the
breakup in each direction (x,y,z). In the case of MPI it's the number of
MPI tasks that will be generated in each dimension. For asynchronous mode
(async) it will break the job into N x M x O tasks, but run them
serially.

Hope that helps,
Dave

On Oct 30, 2007, at 12:14 AM, Mark Abraham wrote:
> Hi,
>
> When using mg-para with (for example) "pdime 2 2 2", does setting dime
> constrain the number of grid points on each processor, or the number of
> grid points over the whole processor array? The User Guide doesn't seem
> to imply either of them.
>
> Thanks,
>
> Mark
From: Mark Abraham <Mark.Abraham@an...> - 2007-10-30 05:44:38
Hi,

When using mg-para with (for example) "pdime 2 2 2", does setting dime
constrain the number of grid points on each processor, or the number of
grid points over the whole processor array? The User Guide doesn't seem
to imply either of them.

Thanks,

Mark