Thread: [Apbs-users] calculations with large systems
From: Mark E. <ma...@st...> - 2005-03-22 23:33:29
Hi,

I want to do some calculations on large systems, preferably at 0.5 angstrom grid resolution (though that's looking almost impossible at the moment). I have been able to set up parallel runs using MPICH (LAM/MPI gave unexpected-exit errors, possibly something that could be fixed easily in the APBS code, though I admit I didn't follow up on it). I am still having big memory problems, however, and requesting many machines on the cluster available to me results in long wait times. One problem is that I don't fully understand how the system is broken up for parallel runs. I have two major questions:

1. What is the appropriate way to set up the fine and coarse grid lengths? I don't want to focus on any part of the system - I want to cover the whole thing at the same resolution. So should I use the same value for fglen and cglen (e.g. 160 160 160)? Or, if I break my system down into smaller pieces, do I then choose fglen for the individual pieces (e.g. if I broke this system down into 8 pieces with 10% overlap, I would choose something like 90 90 90)? And what should dime be in this case: at least 321 321 321, or at least 181 181 181? (These numbers would have to be corrected upwards to hit allowed values based on my nlev.)

2. Is it possible to ask mg-para to do its calculations sequentially (or do the equivalent some other way), so that I can run them on the same processor without running into memory nightmares?

Thanks,
Mark

...........................................................................
Mark Engelhardt
S257 Clarke Center
318 Campus Drive
Stanford University
Stanford, CA 94305

http://www.stanford.edu/~marke/
From: Nathan B. <ba...@bi...> - 2005-03-23 14:58:52
Hi Mark --

> 1. What is the appropriate way to set up the fine and coarse grid
> lengths? I don't want to focus on any part of the system - I want to

Setting fglen = cglen as you suggest should be fine. The psize.py script (or PDB2PQR, with less flexibility) will also provide suggested parallel settings for what you describe.

> 10% overlap, I would choose something like 90 90 90). And what should
> dime be in this case? At least 321 321 321, or at least 181 181 181?

The dime parameter should be set to the desired number of grid points for each partition; something like 181, etc. Again, psize.py can help you choose this.

> 2. Is it possible to ask mg-para to do its calculations sequentially
> (or do the equivalent some other way), so that I can run them on the
> same processor without running into memory nightmares?

Yes, see http://agave.wustl.edu/apbs/doc/html/user-guide/x645.html#async

--
Nathan A. Baker, Assistant Professor
Washington University in St. Louis School of Medicine
Dept. of Biochemistry and Molecular Biophysics
Center for Computational Biology
700 S. Euclid Ave., Campus Box 8036, St. Louis, MO 63110
Phone: (314) 362-2040, Fax: (314) 362-0234
URL: http://www.biochem.wustl.edu/~baker
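To make this concrete, here is a minimal sketch of an mg-para input file for the setup discussed above: 8 partitions (2 x 2 x 2) with 10% overlap and uniform resolution over the whole system (fglen = cglen, no focusing). The PQR file name, the physical parameters (dielectrics, charge/surface methods, temperature), and the particular decomposition are illustrative assumptions, not values from this thread; psize.py will suggest tuned numbers for a real system.

    # Hypothetical parallel run; all numeric values are placeholders.
    read
        mol pqr system.pqr        # assumed input PQR file name
    end
    elec
        mg-para
        pdime 2 2 2               # processor grid: 2 x 2 x 2 = 8 partitions
        ofrac 0.1                 # 10% overlap between neighboring partitions
        dime 161 161 161          # grid points PER PARTITION, not for the global grid
        cglen 160.0 160.0 160.0   # coarse grid spans the whole system
        fglen 160.0 160.0 160.0   # fine grid identical: uniform resolution, no focusing
        cgcent mol 1
        fgcent mol 1
        mol 1
        lpbe                      # linearized PB; illustrative choice
        bcfl sdh
        pdie 2.0                  # illustrative dielectric values
        sdie 78.54
        chgm spl0
        srfm mol
        srad 1.4
        temp 298.15
        calcenergy total
        calcforce no
    end
    quit

On the nlev question: each dime component must have the form c * 2^(nlev+1) + 1 for an integer c, so with the default nlev of 4 the allowed sizes are 33, 65, 97, 129, 161, 193, and so on; APBS adjusts non-conforming values (such as 181) to an allowed size automatically.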
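The async route Nathan points to works like this: adding "async rank" to the mg-para section makes a single apbs invocation compute just one partition of the parallel decomposition, so the eight partitions above can be run one after another on the same processor. A sketch under the same assumptions as the previous example (the read section and remaining keywords are unchanged):

    # Sequential execution of the same decomposition: run apbs once per
    # partition, changing the async value from 0 through 7 between runs
    # (by hand or with a small wrapper script).
    elec
        mg-para
        async 0                   # compute only partition 0 in this invocation
        pdime 2 2 2
        ofrac 0.1
        dime 161 161 161
        cglen 160.0 160.0 160.0
        fglen 160.0 160.0 160.0
        cgcent mol 1
        fgcent mol 1
        mol 1
        lpbe
        bcfl sdh
        pdie 2.0
        sdie 78.54
        chgm spl0
        srfm mol
        srad 1.4
        temp 298.15
        calcenergy total
        calcforce no
    end

Memory use then scales with a single partition's dime rather than with the whole decomposition, which is the point; see the user-guide link above for the authoritative description of async.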