From: Pontus P. <pon...@fa...> - 2007-10-02 09:57:30
Darin,

> NONMEM, perl, and g77 are available on the execution hosts, but not on
> the submitting hosts. My users perform all of the NONMEM control stream
> and dataset creation on designated submit/login hosts and submit their
> jobs to Grid Engine, which then dispatches said jobs to our NONMEM
> execution hosts. NONMEM is not available on these submit/login hosts,
> only our execution hosts. It isn't clear to me which commands require
> the execution of NONMEM from reading the documentation; bear in mind
> that I'm not a PK/PD scientist, so I'm not familiar with all the
> terminology.

That's ok, I'm not a PK/PD scientist either (I've got a master's in
computer science).

Regarding your setup, it sounds like you can submit the PsN jobs to run
on your execution hosts, and that should get you started. Then you can
configure PsN to use SGE, and PsN can submit the NONMEM jobs to be run
on other execution hosts.

The PsN commands that execute NONMEM are:

  execute, bootstrap, cdd, llp, mc_cdd, mcs, scm, se_of_eta

Those that don't:

  check_termination, create_cond_data, creat_cont_model,
  create_extra_data_model, create_subsets, data_stats, gam42toconf,
  single_valued_columns, sumo, unwrap_data

> > No, PsN implements its own Perl module for executing NONMEM. The
> > module is loosely based on the nmfe6 script. This module can be used
> > as a script and is as such submitted to the grid system.
>
> Which module is this and what does it do?

It is the "nonmem.pm" file. It is only used internally, and is pretty
much an nmfe replacement. It looks in the configuration file to find
which NONMEM installation to use, verifies that nmlink can be found,
runs nmtran, and compiles and links the resulting fortran files.

> Our script creates a temporary scratch directory in the current working
> directory, named after the control stream, and compiles and runs
> NONMEM from within this scratch directory.
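Just to make sure we're talking about the same pattern, here is a minimal shell sketch of such a per-run scratch-directory scheme. It is purely illustrative, not your actual site script; the NONMEM compile-and-run step is replaced by an echo, and the control streams are empty stand-ins:

```shell
#!/bin/sh
# Illustrative only: give each control stream its own <name>.ctl.dir
# scratch directory so the temporary files NONMEM writes (FCON, FDATA,
# FSUBS*, PRDERR, ...) from different runs cannot clobber each other.
cd "$(mktemp -d)"
touch run1.ctl run2.ctl            # stand-in control streams
for ctl in run1.ctl run2.ctl; do
  dir="${ctl}.dir"                 # e.g. run1.ctl.dir
  mkdir -p "$dir"
  cp "$ctl" "$dir/"
  # the real script would compile and run NONMEM inside $dir here
  ( cd "$dir" && echo "would run NONMEM on $ctl in $PWD" )
done
ls -d run*.ctl.dir                 # lists run1.ctl.dir and run2.ctl.dir
```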
> This affords our scientists the ability to run as many jobs as they
> like from within a single working directory, without having to be
> concerned with the temporary files that NONMEM creates clobbering each
> other.
>
> Say you have 20 NONMEM runs, run1.ctl - run20.ctl, and they reside in
> /some/directory/with/many_nm_jobs. When run1.ctl is submitted and
> executed, the directory /some/directory/with/many_nm_jobs/run1.ctl.dir/
> is created, the job is executed within it, and it contains the FCON,
> FDATA, INTER, FSUBS*, PRDERR, FILE*, nonmem, etc. files. If you
> submitted all 20 jobs to the grid then you'd have 20 separate
> /some/directory/with/many_nm_jobs/run*.ctl.dir/ directories.

Using "execute" in the PsN suite, you get exactly the same behavior.
Using the bootstrap (for example), you get an additional level of
directories to separate run directories from generated input and
consolidated output files.

> > Deploying PsN in an SGE environment then requires three things:
> >
> > * Making PsN available on the submitting hosts and execution hosts.
> >
> > * Making NONMEM and a Fortran compiler available on the execution
> >   hosts.
> >
> > * Adding the path to NONMEM in the PsN configuration file, and
> >   selecting the compiler used. Queue and resource selection can be
> >   entered as well.
>
> I've added the path for NM5, NM6, the compiler and compiler options.
> How would I specify the queue, resources, etc., that you've mentioned?
> I see that common_options.pm defines run_on_sge, sge_resource:s, and
> sge_queue:s. How are these used?
Either on the command line:

  execute --sge_resource="x86" --sge_queue="not_default" --run_on_sge example_control_file.mod

(notice that these are "basic" options and are available for all
commands that run NONMEM)

Or you can use them in the configuration file:

  [default_options]
  sge_resource=x86

(notice that abbreviations can be used on the command line, but not in
the configuration file)

You can also have different configurations for the different commands:

  [default_bootstrap_options]
  sge_resource=bootstrap_specific

Regards,
Pontus
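P.S. Pulling those pieces together, a configuration fragment along these lines would cover both the global SGE settings and a bootstrap-specific override. All values here are placeholders, and I am assuming run_on_sge can be switched on in the file the same way as the other options:

```ini
[default_options]
run_on_sge=1
sge_queue=not_default
sge_resource=x86

[default_bootstrap_options]
sge_resource=bootstrap_specific
```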