Thread: [Apbs-users] A simple question about using APBS in parallel
From: Michael G. L. <ml...@um...> - 2004-08-13 06:27:06
Hi,

I'm sorry if this is a simple question, but I couldn't quite piece it together from the examples...

I have a system that's too big to fit into the 512M of memory that I have. So far, I've only done calculations via mg-auto, but I decided to try mg-para for these. The problem is, I can't seem to get APBS to take up a small amount of memory. My latest try was this:

    /usr/local/apbs-0.3.1/tools/manip/inputgen.py --GMEMCEIL=128 --METHOD=async bromo_aligned.pqr

which makes a bunch of files like bromo_aligned-PE0.in (all the way through bromo_aligned-PE63.in) and one called bromo_aligned-para.in. When I run apbs on bromo_aligned-PE0.in, it eats up a little over 1G of memory and is finally killed when I run out of swap space. I've also tried

    /usr/local/apbs-0.3.1/tools/manip/inputgen.py --METHOD=para bromo_aligned.pqr

and running the resulting bromo_aligned-para.in (via mpirun; everything compiled with gcc on AMD Athlon 1600s and 1400s running Red Hat 7.1) on 8 processors, each of which has 512M of memory (I'm guessing that 8 is the correct number because the .in file says "pdime 2 2 2"), but it ends up just like the above run: one process sucks up all available memory on the machine that I start things from and then gets killed. It starts processes on the other 7 machines, but while they never take up any significant memory or processor power, they also don't seem to do anything useful.

Any guesses what I'm doing wrong?

For reference, I've tacked bromo_aligned-PE0.in onto the end of this message. It's the one that was generated with GMEMCEIL=128, but it still wants to use well over 1G of memory. Please let me know if you need any other information to figure out what I'm doing wrong!

Thanks,

-michael

--- begin bromo_aligned-PE0.in ---
read
    mol pqr bromo_aligned.pqr
end
elec
    mg-para
    pdime 4 4 4
    ofrac 0.1
    dime 225 193 193
    cglen 169.611 135.662 136.534
    fglen 119.771 99.801 100.314
    cgcent mol 1
    fgcent mol 1
    async 0
    mol 1
    lpbe
    bcfl 1
    ion 1 0.150 2.0
    ion -1 0.150 2.0
    pdie 2.0
    sdie 78.54
    srfm 1
    chgm 1
    srad 1.4
    swin 0.3
    temp 298.15
    gamma 0.105
    calcenergy 1
    calcforce 0
end
elec
    mg-para
    pdime 4 4 4
    ofrac 0.1
    dime 225 193 193
    cglen 169.611 135.662 136.534
    fglen 119.771 99.801 100.314
    cgcent mol 1
    fgcent mol 1
    async 0
    mol 1
    lpbe
    bcfl 1
    ion 1 0.150 2.0
    ion -1 0.150 2.0
    pdie 2.0
    sdie 2.00
    srfm 1
    chgm 1
    srad 1.4
    swin 0.3
    temp 298.15
    gamma 0.105
    calcenergy 1
    calcforce 0
end

print energy 2 - 1 end

quit
--- end bromo_aligned-PE0.in ---

--
This isn't a democracy;|                        _  |Michael Lerner
 it's a cheer-ocracy.  | ASCII ribbon campaign ( ) | Michigan
-Torrence, Bring It On |  - against HTML email  X  | Biophysics
                       |                       / \ | mlerner@umich
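A quick sanity check, as a minimal Python sketch, shows why these runs blow past 512M: with the dime above, every one of the 64 async jobs allocates the full global grid. The ~160 bytes per grid point is the rough figure Ben quotes in his reply below; the exact cost depends on the APBS version and calculation options.

    # Rough memory footprint of one APBS multigrid calculation, assuming
    # ~160 bytes per grid point (the rough figure quoted below; the true
    # cost varies with APBS version and options).
    BYTES_PER_POINT = 160

    def mg_memory_mb(nx, ny, nz, bytes_per_point=BYTES_PER_POINT):
        """Approximate memory in MB for an nx * ny * nz multigrid run."""
        return bytes_per_point * nx * ny * nz / (1024.0 * 1024.0)

    # The dime from bromo_aligned-PE0.in: each of the 64 async jobs
    # allocates the FULL global grid, not a 1/64th slice of it.
    print("%.0f MB" % mg_memory_mb(225, 193, 193))  # ~1279 MB on a 512 MB box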
From: Ben C. <bj...@he...> - 2004-08-13 12:49:01
Hi Michael,

I ran into the problem with the async calculations a while ago and, with Nathan's help, worked out that it's a very minor adjustment to the 'dime' settings. Your grid *for each* of your separate runs would come in at around 1.28 GB (~160 bytes/grid point * [225*193*193] points / 1024^2). You need to specify the dimensions of the *local* grid for that part of the async run, so in your case, with a 4 4 4 processor array, this would be a grid roughly "1/64th" the size of the whole grid needed to contain the molecule of interest.

I wrote something to do this as part of the inputgen.py script, which I can pass over to you (it also tries to optimise the grid spacing to be as close to square as possible - one day I woke up being paranoid about rectangular grid spacing :-S).

As for your para example, it could be exactly the same thing, but you'll need to send in the input file and maybe some details of how you submit parallel runs on your machines.

Hope that helps,
Ben

Benjamin Carrington
Department of Pharmacology
University of Cambridge, UK
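A minimal sketch of the adjustment Ben describes, assuming (a) the usual APBS constraint that each dime entry has the form c * 2^(l+1) + 1 (a multiple of 32 plus 1 for the default l = 4 multigrid levels: 33, 65, 97, 129, 161, 193, 225, ...) and (b) that each subdomain carries an extra ofrac of overlap on each side; the helper names here are illustrative, not part of inputgen.py:

    import math

    def nearest_mg_dime(n, nlev=4):
        """Round n up to the nearest legal multigrid size c * 2**(nlev + 1) + 1
        (a multiple of 32, plus 1, for the default nlev = 4)."""
        mult = 2 ** (nlev + 1)                     # 32 when nlev = 4
        c = max(1, math.ceil((n - 1) / mult))
        return c * mult + 1

    def local_dime(global_dime, pdime, ofrac=0.1):
        """Per-process dime for an mg-para/async run: each process needs
        only its 1/pdime slice of the global grid, padded by the ofrac
        overlap on each side and rounded up to a legal multigrid size."""
        return [nearest_mg_dime(math.ceil(n / p * (1 + 2 * ofrac)))
                for n, p in zip(global_dime, pdime)]

    # Michael's case: a 225 x 193 x 193 global grid over a 4 x 4 x 4 array.
    print(local_dime([225, 193, 193], [4, 4, 4]))  # -> [97, 65, 65]

At ~160 bytes per point, a 97 x 65 x 65 local grid needs roughly 63 MB per process instead of ~1.28 GB.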
From: Michael G. L. <ml...@um...> - 2004-08-13 16:50:41
Hi Ben,

> I ran into the problem with the async calculations a while ago and with
> Nathan's help worked out it's a very minor adjustment to the 'dime'
> settings. [...] You need to specify the dimensions of the *local* grid
> size for that part of the async run [...]

Oops! I thought that APBS automatically did the slicing and dicing for me.

> I wrote something to do this as part of the inputgen.py script which I
> can pass over to you [...]

Yes, it would be great if you could send that along! (It would also be great if that made it into the standard inputgen.py.) Do I need to give new cg/fglen and cg/fgcent parameters for each async file as well?

> As for your para example it could be exactly the same thing but you'll need
> to send in the input file and maybe some details of how you submit parallel
> runs on your machines.

The para .in files that inputgen.py generates look just like the async ones, except without the "async X" line. Again, I just assumed that APBS took care of slicing up the grid for me. What are the para .in files *supposed* to look like?

It might be worth noting that I saw a mesh.m file in the examples directory, but didn't know how it worked... do I need one of those? I didn't see it mentioned in the manual, so I thought it might be safe to ignore... but it looks like that might be the file that tells APBS how to slice up the grid.

Thanks,

-michael
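One point worth making explicit for async mode: the PE files are meant to be run as independent serial jobs, one after another (or farmed out to separate machines), not launched under mpirun. A minimal driver sketch, assuming apbs is on the PATH and using the file names inputgen.py produced above:

    import subprocess

    # Run the 64 per-process jobs of an async mg-para calculation one at a
    # time; each reads its own bromo_aligned-PE<i>.in, so only one (local)
    # grid's worth of memory is ever needed at once.
    for i in range(64):
        with open("bromo_aligned-PE%d.out" % i, "w") as out:
            subprocess.check_call(["apbs", "bromo_aligned-PE%d.in" % i],
                                  stdout=out)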
From: Nathan B. <ba...@bi...> - 2004-08-13 19:27:37
Hi Michael --

There are some example mg-para input files distributed with apbs (actin-dimer, born); additionally, there are some versions of the APBS inputgen.py and PDB2PQR which will generate parallel-ready input files for you. I'll Cc Todd and see if he can point you all to the right place.

Thanks,

Nathan

--
Nathan A. Baker, Assistant Professor
Washington University in St. Louis School of Medicine
Dept. of Biochemistry and Molecular Biophysics
Center for Computational Biology
700 S. Euclid Ave., Campus Box 8036, St. Louis, MO 63110
Phone: (314) 362-2040, Fax: (314) 362-0234
URL: http://www.biochem.wustl.edu/~baker