From: abhisek M. <abh...@gm...> - 2018-04-05 05:37:25
Hi, I'm not able to run relion from scipion. Despite rebuilding scipion from source, it keeps crashing; xmipp, however, is working fine with MPI processes.

Error report:

00368: mpirun -np 2 -bynode `which relion_refine_mpi` --gpu --tau2_fudge 2 --scale --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 --ctf --offset_range 5.0 --oversampling 1 --pool 3 --o Runs/002019_ProtRelionClassify2D/extra/relion --i Runs/002019_ProtRelionClassify2D/input_particles.star --particle_diameter 264.6 --K 30 --preread_images --flatten_solvent --zero_mask --offset_step 2.0 --angpix 1.89 --j 2
00369: --------------------------------------------------------------------------
00370: The following command line options and corresponding MCA parameter have
00371: been deprecated and replaced as follows:
00372: 
00373: Command line options:
00374: Deprecated: --bynode, -bynode
00375: Replacement: --map-by node
00376: 
00377: Equivalent MCA parameter:
00378: Deprecated: rmaps_base_bynode
00379: Replacement: rmaps_base_mapping_policy=node
00380: 
00381: The deprecated forms *will* disappear in a future version of Open MPI.
00382: Please update to the new syntax.
00383: --------------------------------------------------------------------------
00384: [localhost.localdomain:12543] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00385: *** An error occurred in MPI_Init
00386: *** on a NULL communicator
00387: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
00388: *** and potentially your MPI job)
00389: [localhost.localdomain:12543] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
00390: [localhost.localdomain:12542] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00391: *** An error occurred in MPI_Init
00392: *** on a NULL communicator
00393: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
00394: *** and potentially your MPI job)
00395: [localhost.localdomain:12542] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
00396: -------------------------------------------------------
00397: Primary job terminated normally, but 1 process returned
00398: a non-zero exit code.. Per user-direction, the job has been aborted.
00399: -------------------------------------------------------
00400: --------------------------------------------------------------------------
00401: mpirun detected that one or more processes exited with non-zero status, thus causing
00402: the job to be terminated. The first process to do so was:
00403: 
00404: Process name: [[57465,1],1]
00405: Exit code: 1
00406: --------------------------------------------------------------------------

Please suggest a fix. Thank you.

-- 
Abhisek Mondal
Senior Research Fellow
Structural Biology and Bioinformatics Division
CSIR-Indian Institute of Chemical Biology
Kolkata 700032
INDIA
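
P.S. Going by the deprecation warning at the top of the log, I assume the same command written with the new Open MPI syntax would look roughly like the line below (only the mapping option changed; I am not sure whether this is related to the PMIX / MPI_Init failure, since that part is only a warning):

    mpirun -np 2 --map-by node `which relion_refine_mpi` [same relion arguments as above]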