From: Patrick G. <pg...@ma...> - 2018-06-26 17:56:02

The default python on Ubuntu 18.04 seems to still be 2.7:

cnsit@rossmann:/usr/bin$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
cnsit@rossmann:/usr/bin$ ls -l python
lrwxrwxrwx 1 root root 9 May 8 12:15 python -> python2.7

On 06/25/2018 05:44 PM, Jose Miguel de la Rosa Trevin wrote:
> Hi Bruno,
>
> Maybe Ubuntu 18 comes with Python 3 by default? Although we have tested
> the install with it. Did you search for this error in Ubuntu? Just in
> case it could give us some hint.
>
> Best,
> Jose Miguel
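A quick way to confirm which interpreter and standard library a machine actually resolves is the short Python check below. It is a generic sketch, not part of Scipion; run it with the same python that ./scipion invokes, and note that the paths in the comments are only what a stock Ubuntu 18.04 box would be expected to show.

    # Generic diagnostic, not Scipion code: show which interpreter runs and whether
    # the stdlib 'select' module (the one missing in Bruno's traceback) can be loaded.
    import sys
    print(sys.executable)   # expected /usr/bin/python2.7 via the python -> python2.7 symlink
    print(sys.version)      # expected 2.7.x on a default Ubuntu 18.04 install
    import select           # fails with "No module named select" if the stdlib path is broken
    print(getattr(select, "__file__", "compiled into the interpreter"))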
From: Jose M. de la R. T. <del...@gm...> - 2018-06-25 22:44:34

Hi Bruno,

Maybe Ubuntu 18 comes with Python 3 by default? Although we have tested
the install with it. Did you search for this error in Ubuntu? Just in
case it could give us some hint.

Best,
Jose Miguel

On Mon, Jun 25, 2018 at 10:04 PM Bruno Matsuyama <bru...@gm...> wrote:
> Hello,
>
> I'm having some trouble installing Scipion and I hope that somebody here
> can help me.
From: Bruno M. <bru...@gm...> - 2018-06-25 20:04:16

Hello,

I'm having some trouble installing Scipion and I hope that somebody here
can help me. I'm using Ubuntu 18.04, and after installing the dependencies,
when I try to generate the configuration files I get the following message:

./scipion config

Traceback (most recent call last):
  File "./scipion", line 34, in <module>
    import subprocess
  File "/usr/lib/python2.7/subprocess.py", line 72, in <module>
    import select
ImportError: No module named select

Did someone have the same error or know how to fix it?

Best Regards,

Bruno
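For what it is worth, 'select' is a compiled standard-library module that, on Ubuntu's python2.7, normally lives in the lib-dynload directory, so this error usually means that directory is missing from sys.path when ./scipion starts (for example because of a PYTHONHOME/PYTHONPATH mix-up between the system python and a bundled one). The snippet below is only a sketch of that failure mode under those assumptions, not a statement about what the Scipion launcher actually does.

    # Simulate a broken interpreter environment by hiding lib-dynload from sys.path;
    # on Ubuntu's python2.7 this typically reproduces the exact error above.
    import sys
    sys.path[:] = [p for p in sys.path if "lib-dynload" not in p]
    import select   # ImportError: No module named select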
From: Pablo C. <pc...@cn...> - 2018-06-22 15:14:58

Dear users,

In the context of an EU project that has helped to develop Scipion and
Scipion Web Tools, we are running a survey. If you have ever used Scipion
Web Tools or ScipionCloud (I doubt this last one), please go over it. If
you have only used Scipion (Desktop), do not bother. The link to the
survey is here:

https://www.structuralbiology.eu/survey/west-life-user-survey

All the best,
Pablo.
From: Manoel P. <Man...@un...> - 2018-06-21 15:15:50

Dear Users,

The answer came from Gregory Sharov. Indeed, I made a mistake: I started the
classification asking for X CPUs and found out that it was too much. I
stopped the job, reassigned fewer CPUs and restarted it, but for some reason
unknown to me it kept the initial number of CPUs. Killing the job and
restarting it solved everything!

Thanks very much to all of you for the help!

Cheers,

Manoël Prouteau, Ph.D.
Scientific Collaborator
Department of Molecular Biology
Sciences III - University of Geneva
Quai Ernest Ansermet, 30
1211 Geneve 04
Switzerland
(+41) 022 379 61 18
man...@un...
http://www.unige.ch
From: Gregory S. <sha...@gm...> - 2018-06-21 14:13:22

Hello Manoel,

From the attached output it looks like you were running it first with 4 MPIs,
but then the second step continued with 32 MPIs. It could be that you are
running out of memory and your job is getting killed.

On Thu, Jun 21, 2018, 14:58 Bart Alewijnse <sca...@gm...> wrote:
> In my experience, seemingly random kills are often the kernel's
> out-of-memory handler dealing with over-allocating processes.
> Maybe a window size or other parameter thing?
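One way to test the out-of-memory hypothesis (a generic Linux sketch, not something Scipion provides) is to look for the kernel's OOM-killer messages around the time the rank died; the kernel logs the PID and name of every process it kills this way.

    # Grep the kernel ring buffer for OOM events; may need root on some systems.
    import subprocess
    log = subprocess.check_output(["dmesg"])
    for line in log.splitlines():
        if b"Out of memory" in line or b"Killed process" in line:
            print(line)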
From: Bart A. <sca...@gm...> - 2018-06-21 13:58:46

In my experience, seemingly random kills are often the kernel's out-of-memory
handler dealing with over-allocating processes. Maybe a window size or other
parameter thing?

On Jun 21, 2018, at 15:40, Carlos Oscar Sorzano <co...@cn...> wrote:
> Dear Manoel,
>
> from the stdout there is no obvious reason why the process has finished.
> There is no error other than it has been killed. In some machines there is
> a limit on the time a process can be running, and beyond this time,
> processes have to be submitted through a queue. I don't know if this could
> be the case in this case.
>
> Kind regards, Carlos Oscar
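The "returned non-zero exit status 137" at the end of the protocol log is consistent with exactly this kind of kill: a shell reports 128 plus the signal number for a child terminated by a signal, and SIGKILL (what both mpirun's "exited on signal 9 (Killed)" message and the OOM killer refer to) is signal 9, so 128 + 9 = 137. A minimal, generic demonstration follows (nothing Scipion-specific; 'sleep' stands in for the real job):

    # Kill a child with SIGKILL and show how the status surfaces.
    import signal, subprocess
    proc = subprocess.Popen(["sleep", "60"])
    proc.send_signal(signal.SIGKILL)     # what the kernel's OOM killer effectively sends
    proc.wait()
    print(proc.returncode)               # -9: killed by signal 9, from Python's point of view
    print(128 + int(signal.SIGKILL))     # 137, matching the status reported for the killed job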
From: Carlos O. S. <co...@cn...> - 2018-06-21 13:40:37
Dear Manoel, from the stdout there is no obvious reason why the process has finished. There is no error other than it has been killed. In some machines there is a limit on the time a process can be running, and beyond this time, processes have to be submitted through a queue. I don't know if this could be the case in this case. Kind regards, Carlos Oscar On 21/06/2018 14:18, Manoel Prouteau wrote: > > Dear users, > > > I am just starting using Scipion for CL2D classification of a small > set of manually picked objects. > > I get an error while the softwaer starts the second step of the > command. Can you help me understanding the problem? > > > You can find the error in the run.stdout here: > > > 00001: RUNNING PROTOCOL ----------------- > 00002: PID: 9060 > 00003: Scipion: v1.1 (2017-06-14) Balbino > 00004: currentDir: > /data/prouteau/Mano_newdata_frames2-16_DW/TOROID-Sides > 00005: workingDir: Runs/000400_XmippProtCL2D > 00006: runMode: Continue > 00007: MPI: 4 > 00008: threads: 1 > 00009: len(steps) 13 len(prevSteps) 0 > 00010: Starting at step: 1 > 00011: Running steps > 00012: STARTED: convertInputStep, step 1 > 00013: 2018-06-20 15:03:42.471496 > 00014: FINISHED: convertInputStep, step 1 > 00015: 2018-06-20 15:03:43.256208 > 00016: STARTED: runJob, step 2 > 00017: 2018-06-20 15:03:43.312482 > 00018: mpirun -np 4 -bynode `which xmipp_mpi_classify_CL2D` -i > Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir > Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 > --distance correlation --classicalMultiref --nref0 4 > 00019: > -------------------------------------------------------------------------- > 00020: The following command line options and corresponding MCA > parameter have > 00021: been deprecated and replaced as follows: > 00022: > 00023: Command line options: > 00024: Deprecated: --bynode, -bynode > 00025: Replacement: --map-by node > 00026: > 00027: Equivalent MCA parameter: > 00028: Deprecated: rmaps_base_bynode > 00029: Replacement: rmaps_base_mapping_policy=node > 00030: > 00031: The deprecated forms *will* disappear in a future version of > Open MPI. > 00032: Please update to the new syntax. > 00033: > -------------------------------------------------------------------------- > 00034: Input images: Runs/000400_XmippProtCL2D/tmp/input_particles.xmd > 00035: Output root: level > 00036: Output dir: Runs/000400_XmippProtCL2D/extra > 00037: Iterations: 10 > 00038: CodesSel0: > 00039: Codes0: 4 > 00040: Codes: 15 > 00041: Neighbours: 4 > 00042: Minimum node size: 20 > 00043: Use Correlation: 1 > 00044: Classical Multiref: 1 > 00045: Classical Split: 0 > 00046: Maximum shift: 10 > 00047: Classify all images: 0 > 00048: Normalize images: 1 > 00049: Mirror images: 1 > 00050: Align images: 1 > 00051: Initializing ... > 00052: 0/ 0 sec. > ............................................................ > 00053: Quantizing with 4 codes... > 00054: Iteration 1 ... > 00055: 13/ 25 sec. 
...............................RUNNING > PROTOCOL ----------------- > 00056: PID: 9099 > 00057: Scipion: v1.1 (2017-06-14) Balbino > 00058: currentDir: > /data/prouteau/Mano_newdata_frames2-16_DW/TOROID-Sides > 00059: workingDir: Runs/000400_XmippProtCL2D > 00060: runMode: Continue > 00061: MPI: 32 > 00062: threads: 1 > 00063: len(steps) 13 len(prevSteps) 13 > 00064: Starting at step: 2 > 00065: Running steps > 00066: STARTED: runJob, step 2 > 00067: 2018-06-20 15:04:06.958333 > 00068: mpirun -np 32 -bynode `which xmipp_mpi_classify_CL2D` -i > Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir > Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 > --distance correlation --classicalMultiref --nref0 4 > 00069: > -------------------------------------------------------------------------- > 00070: The following command line options and corresponding MCA > parameter have > 00071: been deprecated and replaced as follows: > 00072: > 00073: Command line options: > 00074: Deprecated: --bynode, -bynode > 00075: Replacement: --map-by node > 00076: > 00077: Equivalent MCA parameter: > 00078: Deprecated: rmaps_base_bynode > 00079: Replacement: rmaps_base_mapping_policy=node > 00080: > 00081: The deprecated forms *will* disappear in a future version of > Open MPI. > 00082: Please update to the new syntax. > 00083: > -------------------------------------------------------------------------- > 00084: Input images: Runs/000400_XmippProtCL2D/tmp/input_particles.xmd > 00085: Output root: level > 00086: Output dir: Runs/000400_XmippProtCL2D/extra > 00087: Iterations: 10 > 00088: CodesSel0: > 00089: Codes0: 4 > 00090: Codes: 15 > 00091: Neighbours: 4 > 00092: Minimum node size: 20 > 00093: Use Correlation: 1 > 00094: Classical Multiref: 1 > 00095: Classical Split: 0 > 00096: Maximum shift: 10 > 00097: Classify all images: 0 > 00098: Normalize images: 1 > 00099: Mirror images: 1 > 00100: Align images: 1 > 00101: Initializing ... > 00102: 0/ 0 sec. > ............................................................ > 00103: Quantizing with 4 codes... > 00104: Iteration 1 ... > 00105: 10/ 10 sec. > ............................................................ > 00106: > 00107: Average correlation with input vectors=0.0310552 > 00108: Number of assignment changes=0 > 00109: Iteration 2 ... > 00110: 10/ 10 sec. > ............................................................ > 00111: > 00112: Average correlation with input vectors=0.0882044 > 00113: Number of assignment changes=324 > 00114: Iteration 3 ... > 00115: 10/ 10 sec. > ............................................................ > 00116: > 00117: Average correlation with input vectors=0.107101 > 00118: Number of assignment changes=378 > 00119: Iteration 4 ... > 00120: 9/ 9 sec. > ............................................................ > 00121: > 00122: Average correlation with input vectors=0.122994 > 00123: Number of assignment changes=225 > 00124: Iteration 5 ... > 00125: 10/ 10 sec. > ............................................................ > 00126: > 00127: Average correlation with input vectors=0.119519 > 00128: Number of assignment changes=290 > 00129: Iteration 6 ... > 00130: 9/ 9 sec. > ............................................................ > 00131: > 00132: Average correlation with input vectors=0.127653 > 00133: Number of assignment changes=233 > 00134: Iteration 7 ... > 00135: 10/ 10 sec. > ............................................................ 
> 00136: > 00137: Average correlation with input vectors=0.127296 > 00138: Number of assignment changes=223 > 00139: Iteration 8 ... > 00140: 9/ 9 sec. > ............................................................ > 00141: > 00142: Average correlation with input vectors=0.129356 > 00143: Number of assignment changes=236 > 00144: Iteration 9 ... > 00145: 10/ 10 sec. > ............................................................ > 00146: > 00147: Average correlation with input vectors=0.143878 > 00148: Number of assignment changes=126 > 00149: Iteration 10 ... > 00150: 9/ 9 sec. > ............................................................ > 00151: > 00152: Average correlation with input vectors=0.138916 > 00153: Number of assignment changes=187 > 00154: Spliting nodes ... > 00155: Currently there are 5 nodes > 00156: Currently there are 6 nodes > 00157: Currently there are 7 nodes > 00158: Currently there are 8 nodes > 00159: Quantizing with 8 codes... > 00160: Iteration 1 ... > 00161: 28/ 28 sec. > ............................................................ > 00162: > 00163: Average correlation with input vectors=0.139535 > 00164: Number of assignment changes=0 > 00165: Iteration 2 ... > 00166: 26/ 26 sec. > ............................................................ > 00167: > 00168: Average correlation with input vectors=0.153304 > 00169: Number of assignment changes=181 > 00170: Iteration 3 ... > 00171: 26/ 26 sec. > ............................................................ > 00172: > 00173: Average correlation with input vectors=0.159167 > 00174: Number of assignment changes=265 > 00175: Iteration 4 ... > 00176: 25/ 25 sec. > ............................................................ > 00177: > 00178: Average correlation with input vectors=0.151184 > 00179: Number of assignment changes=424 > 00180: Iteration 5 ... > 00181: 25/ 25 sec. > ............................................................ > 00182: > 00183: Average correlation with input vectors=0.155143 > 00184: Number of assignment changes=177 > 00185: Iteration 6 ... > 00186: 23/ 23 sec. > ............................................................ > 00187: > 00188: Average correlation with input vectors=0.147184 > 00189: Number of assignment changes=263 > 00190: Iteration 7 ... > 00191: 27/ 27 sec. > ............................................................ > 00192: > 00193: Average correlation with input vectors=0.159538 > 00194: Number of assignment changes=119 > 00195: Iteration 8 ... > 00196: 25/ 25 sec. > ............................................................ > 00197: > 00198: Average correlation with input vectors=0.160486 > 00199: Number of assignment changes=139 > 00200: Iteration 9 ... > 00201: 26/ 26 sec. > ............................................................ > 00202: > 00203: Average correlation with input vectors=0.164716 > 00204: Number of assignment changes=120 > 00205: Iteration 10 ... > 00206: 27/ 27 sec. > ............................................................ > 00207: > 00208: Average correlation with input vectors=0.162771 > 00209: Number of assignment changes=130 > 00210: Spliting nodes ... > 00211: Currently there are 9 nodes > 00212: Currently there are 10 nodes > 00213: Currently there are 11 nodes > 00214: Currently there are 12 nodes > 00215: > -------------------------------------------------------------------------- > 00216: mpirun noticed that process rank 11 with PID 9147 on node > smaug exited on signal 9 (Killed). 
> 00217: > -------------------------------------------------------------------------- > 00218: Traceback (most recent call last): > 00219: File "/opt/scipion/pyworkflow/protocol/protocol.py", line > 182, in run > 00220: self._run() > 00221: File "/opt/scipion/pyworkflow/protocol/protocol.py", line > 228, in _run > 00222: resultFiles = self._runFunc() > 00223: File "/opt/scipion/pyworkflow/protocol/protocol.py", line > 224, in _runFunc > 00224: return self._func(*self._args) > 00225: File "/opt/scipion/pyworkflow/protocol/protocol.py", line > 1077, in runJob > 00226: self._stepsExecutor.runJob(self._log, program, arguments, > **kwargs) > 00227: File "/opt/scipion/pyworkflow/protocol/executor.py", line > 56, in runJob > 00228: env=env, cwd=cwd) > 00229: File "/opt/scipion/pyworkflow/utils/process.py", line 51, > in runJob > 00230: return runCommand(command, env, cwd) > 00231: File "/opt/scipion/pyworkflow/utils/process.py", line 65, > in runCommand > 00232: check_call(command, shell=True, stdout=sys.stdout, > stderr=sys.stderr, env=env, cwd=cwd) > 00233: File "/opt/scipion/software/lib/python2.7/subprocess.py", > line 540, in check_call > 00234: raise CalledProcessError(retcode, cmd) > 00235: CalledProcessError: Command 'mpirun -np 32 -bynode `which > xmipp_mpi_classify_CL2D` -i > Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir > Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 > --distance correlation --classicalMultiref --nref0 4' returned > non-zero exit status 137 > 00236: Protocol failed: Command 'mpirun -np 32 -bynode `which > xmipp_mpi_classify_CL2D` -i > Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir > Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 > --distance correlation --classicalMultiref --nref0 4' returned > non-zero exit status 137 > 00237: FAILED: runJob, step 2 > 00238: 2018-06-20 15:31:45.758171 > 00239: ------------------- PROTOCOL FAILED (DONE 2/13) > > > Thanks in advance for your help, > > > Cheers, > > > *Manoël Prouteau, /Ph.D./* > > Scientific Collaborator > > Department of Molecular Biology > > Sciences III - University of Geneva > > Quai Ernest Ansermet, 30 > > 1211 Geneve 04 > > Switzerland > > (+41) 022 379 61 18 > > man...@un... > > http://www.unige.ch > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users -- ------------------------------------------------------------------------ Carlos Oscar Sánchez Sorzano e-mail: co...@cn... Biocomputing unit http://i2pc.es/coss National Center of Biotechnology (CSIC) c/Darwin, 3 Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 28049 MADRID (SPAIN) Fax: 34-91-585 4506 ------------------------------------------------------------------------ |
From: Manoel P. <Man...@un...> - 2018-06-21 12:18:31
|
Dear users, I am just starting to use Scipion for CL2D classification of a small set of manually picked objects. I get an error when the software starts the second step of the protocol. Can you help me understand the problem? You can find the error in the run.stdout here: 00001: RUNNING PROTOCOL ----------------- 00002: PID: 9060 00003: Scipion: v1.1 (2017-06-14) Balbino 00004: currentDir: /data/prouteau/Mano_newdata_frames2-16_DW/TOROID-Sides 00005: workingDir: Runs/000400_XmippProtCL2D 00006: runMode: Continue 00007: MPI: 4 00008: threads: 1 00009: len(steps) 13 len(prevSteps) 0 00010: Starting at step: 1 00011: Running steps 00012: STARTED: convertInputStep, step 1 00013: 2018-06-20 15:03:42.471496 00014: FINISHED: convertInputStep, step 1 00015: 2018-06-20 15:03:43.256208 00016: STARTED: runJob, step 2 00017: 2018-06-20 15:03:43.312482 00018: mpirun -np 4 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 --distance correlation --classicalMultiref --nref0 4 00019: -------------------------------------------------------------------------- 00020: The following command line options and corresponding MCA parameter have 00021: been deprecated and replaced as follows: 00022: 00023: Command line options: 00024: Deprecated: --bynode, -bynode 00025: Replacement: --map-by node 00026: 00027: Equivalent MCA parameter: 00028: Deprecated: rmaps_base_bynode 00029: Replacement: rmaps_base_mapping_policy=node 00030: 00031: The deprecated forms *will* disappear in a future version of Open MPI. 00032: Please update to the new syntax. 00033: -------------------------------------------------------------------------- 00034: Input images: Runs/000400_XmippProtCL2D/tmp/input_particles.xmd 00035: Output root: level 00036: Output dir: Runs/000400_XmippProtCL2D/extra 00037: Iterations: 10 00038: CodesSel0: 00039: Codes0: 4 00040: Codes: 15 00041: Neighbours: 4 00042: Minimum node size: 20 00043: Use Correlation: 1 00044: Classical Multiref: 1 00045: Classical Split: 0 00046: Maximum shift: 10 00047: Classify all images: 0 00048: Normalize images: 1 00049: Mirror images: 1 00050: Align images: 1 00051: Initializing ... 00052: 0/ 0 sec. ............................................................ 00053: Quantizing with 4 codes... 00054: Iteration 1 ... 00055: 13/ 25 sec. 
...............................RUNNING PROTOCOL ----------------- 00056: PID: 9099 00057: Scipion: v1.1 (2017-06-14) Balbino 00058: currentDir: /data/prouteau/Mano_newdata_frames2-16_DW/TOROID-Sides 00059: workingDir: Runs/000400_XmippProtCL2D 00060: runMode: Continue 00061: MPI: 32 00062: threads: 1 00063: len(steps) 13 len(prevSteps) 13 00064: Starting at step: 2 00065: Running steps 00066: STARTED: runJob, step 2 00067: 2018-06-20 15:04:06.958333 00068: mpirun -np 32 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 --distance correlation --classicalMultiref --nref0 4 00069: -------------------------------------------------------------------------- 00070: The following command line options and corresponding MCA parameter have 00071: been deprecated and replaced as follows: 00072: 00073: Command line options: 00074: Deprecated: --bynode, -bynode 00075: Replacement: --map-by node 00076: 00077: Equivalent MCA parameter: 00078: Deprecated: rmaps_base_bynode 00079: Replacement: rmaps_base_mapping_policy=node 00080: 00081: The deprecated forms *will* disappear in a future version of Open MPI. 00082: Please update to the new syntax. 00083: -------------------------------------------------------------------------- 00084: Input images: Runs/000400_XmippProtCL2D/tmp/input_particles.xmd 00085: Output root: level 00086: Output dir: Runs/000400_XmippProtCL2D/extra 00087: Iterations: 10 00088: CodesSel0: 00089: Codes0: 4 00090: Codes: 15 00091: Neighbours: 4 00092: Minimum node size: 20 00093: Use Correlation: 1 00094: Classical Multiref: 1 00095: Classical Split: 0 00096: Maximum shift: 10 00097: Classify all images: 0 00098: Normalize images: 1 00099: Mirror images: 1 00100: Align images: 1 00101: Initializing ... 00102: 0/ 0 sec. ............................................................ 00103: Quantizing with 4 codes... 00104: Iteration 1 ... 00105: 10/ 10 sec. ............................................................ 00106: 00107: Average correlation with input vectors=0.0310552 00108: Number of assignment changes=0 00109: Iteration 2 ... 00110: 10/ 10 sec. ............................................................ 00111: 00112: Average correlation with input vectors=0.0882044 00113: Number of assignment changes=324 00114: Iteration 3 ... 00115: 10/ 10 sec. ............................................................ 00116: 00117: Average correlation with input vectors=0.107101 00118: Number of assignment changes=378 00119: Iteration 4 ... 00120: 9/ 9 sec. ............................................................ 00121: 00122: Average correlation with input vectors=0.122994 00123: Number of assignment changes=225 00124: Iteration 5 ... 00125: 10/ 10 sec. ............................................................ 00126: 00127: Average correlation with input vectors=0.119519 00128: Number of assignment changes=290 00129: Iteration 6 ... 00130: 9/ 9 sec. ............................................................ 00131: 00132: Average correlation with input vectors=0.127653 00133: Number of assignment changes=233 00134: Iteration 7 ... 00135: 10/ 10 sec. ............................................................ 00136: 00137: Average correlation with input vectors=0.127296 00138: Number of assignment changes=223 00139: Iteration 8 ... 00140: 9/ 9 sec. ............................................................ 
00141: 00142: Average correlation with input vectors=0.129356 00143: Number of assignment changes=236 00144: Iteration 9 ... 00145: 10/ 10 sec. ............................................................ 00146: 00147: Average correlation with input vectors=0.143878 00148: Number of assignment changes=126 00149: Iteration 10 ... 00150: 9/ 9 sec. ............................................................ 00151: 00152: Average correlation with input vectors=0.138916 00153: Number of assignment changes=187 00154: Spliting nodes ... 00155: Currently there are 5 nodes 00156: Currently there are 6 nodes 00157: Currently there are 7 nodes 00158: Currently there are 8 nodes 00159: Quantizing with 8 codes... 00160: Iteration 1 ... 00161: 28/ 28 sec. ............................................................ 00162: 00163: Average correlation with input vectors=0.139535 00164: Number of assignment changes=0 00165: Iteration 2 ... 00166: 26/ 26 sec. ............................................................ 00167: 00168: Average correlation with input vectors=0.153304 00169: Number of assignment changes=181 00170: Iteration 3 ... 00171: 26/ 26 sec. ............................................................ 00172: 00173: Average correlation with input vectors=0.159167 00174: Number of assignment changes=265 00175: Iteration 4 ... 00176: 25/ 25 sec. ............................................................ 00177: 00178: Average correlation with input vectors=0.151184 00179: Number of assignment changes=424 00180: Iteration 5 ... 00181: 25/ 25 sec. ............................................................ 00182: 00183: Average correlation with input vectors=0.155143 00184: Number of assignment changes=177 00185: Iteration 6 ... 00186: 23/ 23 sec. ............................................................ 00187: 00188: Average correlation with input vectors=0.147184 00189: Number of assignment changes=263 00190: Iteration 7 ... 00191: 27/ 27 sec. ............................................................ 00192: 00193: Average correlation with input vectors=0.159538 00194: Number of assignment changes=119 00195: Iteration 8 ... 00196: 25/ 25 sec. ............................................................ 00197: 00198: Average correlation with input vectors=0.160486 00199: Number of assignment changes=139 00200: Iteration 9 ... 00201: 26/ 26 sec. ............................................................ 00202: 00203: Average correlation with input vectors=0.164716 00204: Number of assignment changes=120 00205: Iteration 10 ... 00206: 27/ 27 sec. ............................................................ 00207: 00208: Average correlation with input vectors=0.162771 00209: Number of assignment changes=130 00210: Spliting nodes ... 00211: Currently there are 9 nodes 00212: Currently there are 10 nodes 00213: Currently there are 11 nodes 00214: Currently there are 12 nodes 00215: -------------------------------------------------------------------------- 00216: mpirun noticed that process rank 11 with PID 9147 on node smaug exited on signal 9 (Killed). 
00217: -------------------------------------------------------------------------- 00218: Traceback (most recent call last): 00219: File "/opt/scipion/pyworkflow/protocol/protocol.py", line 182, in run 00220: self._run() 00221: File "/opt/scipion/pyworkflow/protocol/protocol.py", line 228, in _run 00222: resultFiles = self._runFunc() 00223: File "/opt/scipion/pyworkflow/protocol/protocol.py", line 224, in _runFunc 00224: return self._func(*self._args) 00225: File "/opt/scipion/pyworkflow/protocol/protocol.py", line 1077, in runJob 00226: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) 00227: File "/opt/scipion/pyworkflow/protocol/executor.py", line 56, in runJob 00228: env=env, cwd=cwd) 00229: File "/opt/scipion/pyworkflow/utils/process.py", line 51, in runJob 00230: return runCommand(command, env, cwd) 00231: File "/opt/scipion/pyworkflow/utils/process.py", line 65, in runCommand 00232: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr, env=env, cwd=cwd) 00233: File "/opt/scipion/software/lib/python2.7/subprocess.py", line 540, in check_call 00234: raise CalledProcessError(retcode, cmd) 00235: CalledProcessError: Command 'mpirun -np 32 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 137 00236: Protocol failed: Command 'mpirun -np 32 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/000400_XmippProtCL2D/tmp/input_particles.xmd --odir Runs/000400_XmippProtCL2D/extra --oroot level --nref 15 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 137 00237: FAILED: runJob, step 2 00238: 2018-06-20 15:31:45.758171 00239: ------------------- PROTOCOL FAILED (DONE 2/13) Thanks in advance for your help, Cheers, Manoël Prouteau, Ph.D. Scientific Collaborator Department of Molecular Biology Sciences III - University of Geneva Quai Ernest Ansermet, 30 1211 Geneve 04 Switzerland (+41) 022 379 61 18 man...@un... http://www.unige.ch |
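A note for readers who hit the same failure: the non-zero exit status that Scipion reports encodes the signal that terminated the job. A minimal sketch of how to decode it in Python; the value 137 is copied from the run.stdout above, and nothing in the snippet is part of the original exchange:

# Decode the non-zero exit status reported for the failed step.
# By shell convention, an exit status of 128 + N means the child
# process was terminated by signal N.
import signal

status = 137                 # taken from the log above
signum = status - 128        # -> 9
print('exit status %d -> signal %d' % (status, signum))
print('is SIGKILL: %s' % (signum == signal.SIGKILL))   # True on Linux

Since 137 corresponds to signal 9 (SIGKILL), and mpirun reports rank 11 being killed in the middle of an iteration, a frequent, though here unconfirmed, cause is the kernel's out-of-memory killer; the second log also shows the run restarted with 32 MPI processes instead of 4, which raises the total memory footprint on the node, so lowering the number of MPI processes is a reasonable first thing to try.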
From: Pablo C. <pc...@cn...> - 2018-06-12 07:07:56
|
Dear Ann, almost all of the protocols (a few do not support this) have a *Parallel* section. There you can specify threads, MPI, or both, depending on the protocol. On 11/06/18 16:58, Jecrois, Anne wrote: > > Dear Scipion Team > > > I recently started to use Scipion and was wondering how to specify > the number of CPUs to use. Many of the jobs that I am running seem to > take a long time to finish (e.g. 2D class averages). If I want to > process my data using more computing power, where do I go to do that? > > > Thanks > > Ann > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users
From: Jecrois, A. <Ann...@um...> - 2018-06-11 15:23:58
|
Dear Scipion Team I recently started to use Scipion and was wondering how to specify the number of CPUs to use. Many of the jobs that I am running seem to take a long time to finish (e.g. 2D class averages). If I want to process my data using more computing power, where do I go to do that? Thanks Ann
From: Jose M. de la R. T. <del...@gm...> - 2018-05-24 13:56:57
|
Dear Josie, You can use the protocol 'import - coordinates' (from Import > more > import - coordinates). You need to provide coordinate files with the same names as the micrographs (without the extension). One way is to generate EMAN2-like .box files and use them (see the conversion sketch after this message). In the file format entry you can also select other supported formats. Alternatively, you can import coordinates from a folder (in the Xmipp Picker interface), following the naming convention stated above, rather than one micrograph at a time. Hope this helps, Best, Jose Miguel On Thu, May 24, 2018 at 3:39 PM, Ferreira, Josie < j.f...@im...> wrote: > Hi all, > > I have a dataset (4000 images) that I picked using cisTEM. I now want to > import the particle coordinates in batch to then extract particles from the > micrographs that have been drift-corrected and ctf-estimated in Scipion. I > have converted them to the supported format (top left origin 1,1 in > pixels). I know that you can import coordinates one image at a time using > Xmipp manual picking but this is not feasible for this many micrographs. > > Is there a way to import a text file with three columns (image name, > x_coordinate, y_coordinate)? > > Thanks for the help! > > All the best, > > Josie > > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > 
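A minimal sketch of the conversion Jose Miguel suggests, assuming a plain-text list with three columns (micrograph name, x, y): it writes one EMAN2-like .box file per micrograph, named after the micrograph as the import protocol expects. The file names, the box size, and the assumption that the input coordinates are particle centres (hence the half-box shift, since .box lines are usually written as a box corner plus the box size) are placeholders to adapt, not details from the original thread:

# Split a 3-column coordinate list ("micrograph x y") into one
# EMAN2-like .box file per micrograph for Scipion's import protocol.
# Assumptions to check: coordinates are particle centres in pixels and
# each .box line is "corner_x corner_y box_size box_size".
import os
from collections import defaultdict

COORDS_FILE = 'all_coordinates.txt'   # placeholder input file
OUTPUT_DIR = 'boxfiles'               # folder to point the import at
BOX_SIZE = 200                        # placeholder box size in pixels

by_mic = defaultdict(list)
with open(COORDS_FILE) as f:
    for line in f:
        parts = line.split()
        if len(parts) < 3:
            continue                  # skip blank or malformed lines
        mic = os.path.splitext(os.path.basename(parts[0]))[0]
        by_mic[mic].append((float(parts[1]), float(parts[2])))

if not os.path.isdir(OUTPUT_DIR):
    os.makedirs(OUTPUT_DIR)

for mic, points in by_mic.items():
    with open(os.path.join(OUTPUT_DIR, mic + '.box'), 'w') as out:
        for x, y in points:
            # centre -> corner shift (assumption, see note above)
            out.write('%d %d %d %d\n' % (int(x) - BOX_SIZE // 2,
                                         int(y) - BOX_SIZE // 2,
                                         BOX_SIZE, BOX_SIZE))

The resulting folder can then be selected in the import form, or in the Xmipp picker's import-from-folder option, with a matching box size.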
From: Ferreira, J. <j.f...@im...> - 2018-05-24 13:39:50
|
Hi all, I have a dataset (4000 images) that I picked using cisTEM. I now want to import the particle coordinates in batch to then extract particles from the micrographs that have been drift-corrected and ctf-estimated in Scipion. I have converted them to the supported format (top left origin 1,1 in pixels). I know that you can import coordinates one image at a time using Xmipp manual picking but this is not feasible for this many micrographs. Is there a way to import a text file with three columns (image name, x_coordinate, y_coordinate)? Thanks for the help! All the best, Josie |
From: Carlos O. S. <co...@cn...> - 2018-05-05 07:17:56
|
Dear all, this problem normally appears after rescaling the images. You may solve it by "preprocess particles" and renormalizing the images again after rescaling. Kind regards, Carlos Oscar On 03/05/2018 16:28, Joseph Ho wrote: > Hi, Jose: > > I ran relion particle extraction.. still not working. I also don't do > downscaling in the Xmipp but still not working. > Only I use --dont_check_norm. Then the relion 2D classification process. > Not sure why all the normalise doesn't work in my Ximpp/relion > particle extraction? > > > Thanks for your help. > > Joseph > > On Thu, May 3, 2018 at 7:22 PM, Jose Miguel de la Rosa Trevin > <del...@gm...> wrote: >> Hi Joseph, >> >> Have you extracted the particles with xmipp protocol? >> Can you try extracting with relion and check if the error remains? >> I think the relion - extract particles protocol is not in the menu (we need >> to fix that >> for the upcoming bugfix release) but you can find it with Ctrl + F in the >> searching dialog. >> >> If the relion 2D jobs still complains about the normalization when using the >> particles >> extracted with relion (I have observed this when downscaling). You can try >> to use >> the advanced option '--dont_check_norm' >> >> Hope this helps, >> Jose Miguel >> >> >> On Thu, May 3, 2018 at 12:49 PM, Joseph Ho <sbd...@gm...> wrote: >>> Hi, Grigory: >>> >>> Thanks for your reply and help. I notice that normalised error. I also >>> puzzle about this. Because I do click the normalise function during >>> the particle extraction of scipion. I followed the scipion youtube >>> (beta-gal) setup. Normalise type:Ramp; Background radius -1, >>> >>> Thanks for your help >>> >>> Joseph >>> >>> On Thu, May 3, 2018 at 6:19 PM, Gregory Sharov <sha...@gm...> >>> wrote: >>>> Hello Joseph, >>>> >>>> the error message states precisely what is wrong with your job: >>>> >>>> 00341: ERROR: It appears that these images have not been normalised to >>>> an >>>> average background value of 0 and a stddev value of 1. >>>> 00342: Note that the average and stddev >>>> values for the background are calculated: >>>> 00343: (1) for single particles: outside >>>> a >>>> circle with the particle diameter >>>> 00344: (2) for helical segments: outside >>>> a >>>> cylinder (tube) with the helical tube diameter >>>> 00345: You can use the relion_preprocess >>>> program to normalise your images >>>> 00346: If you are sure you have >>>> normalised >>>> the images correctly (also see the RELION Wiki), you can switch off this >>>> error message using the --dont_check_norm command line option >>>> >>>> >>>> Best regards, >>>> Grigory >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> Grigory Sharov, Ph.D. >>>> >>>> MRC Laboratory of Molecular Biology, >>>> Francis Crick Avenue, >>>> Cambridge Biomedical Campus, >>>> Cambridge CB2 0QH, UK. >>>> tel. +44 (0) 1223 267542 >>>> e-mail: gs...@mr... >>>> >>>> On Thu, May 3, 2018 at 4:02 AM, Joseph Ho <jo...@ga...> >>>> wrote: >>>>> Dear Sir: >>>>> >>>>> >>>>> >>>>> Relion 2D was running but I stopped it. After I tried to restart, I got >>>>> this error msg. I could not get Relion 2D running anymore. I also tried >>>>> MPI >>>>> is 1 but it still failed. I use scipion v.1.2 and I installed it from >>>>> source. >>>>> >>>>> >>>>> >>>>> The msg from run.stdout >>>>> >>>>> >>>>> >>>>> 00309: The deprecated forms *will* disappear in a future version of >>>>> Open >>>>> MPI. >>>>> 00310: Please update to the new syntax. 
>>>>> 00311: >>>>> >>>>> -------------------------------------------------------------------------- >>>>> 00312: >>>>> >>>>> -------------------------------------------------------------------------- >>>>> 00313: [[50020,1],0]: A high-performance Open MPI point-to-point >>>>> messaging module >>>>> 00314: was unable to find any relevant network interfaces: >>>>> 00315: >>>>> 00316: Module: OpenFabrics (openib) >>>>> 00317: Host: spgpu3 >>>>> 00318: >>>>> 00319: Another transport will be used instead, although this may >>>>> result >>>>> in >>>>> 00320: lower performance. >>>>> 00321: >>>>> >>>>> -------------------------------------------------------------------------- >>>>> 00322: === RELION MPI setup === >>>>> 00323: + Number of MPI processes = 3 >>>>> 00324: + Master (0) runs on host = spgpu3 >>>>> 00325: + Slave 1 runs on host = spgpu3 >>>>> 00326: + Slave 2 runs on host = spgpu3 >>>>> 00327: ================= >>>>> 00328: uniqueHost spgpu3 has 2 ranks. >>>>> 00329: Slave 1 will distribute threads over devices 1 2 3 >>>>> 00330: Thread 0 on slave 1 mapped to device 1 >>>>> 00331: Slave 2 will distribute threads over devices 1 2 3 >>>>> 00332: Thread 0 on slave 2 mapped to device 1 >>>>> 00333: Device 1 on spgpu3 is split between 2 slaves >>>>> 00334: Running CPU instructions in double precision. >>>>> 00335: [spgpu3:17287] 2 more processes have sent help message >>>>> help-mpi-btl-base.txt / btl:no-nics >>>>> 00336: [spgpu3:17287] Set MCA parameter "orte_base_help_aggregate" to >>>>> 0 >>>>> to see all help / error messages >>>>> 00337: + WARNING: Changing psi sampling rate (before oversampling) >>>>> to >>>>> 5.625 degrees, for more efficient GPU calculations >>>>> 00338: Estimating initial noise spectra >>>>> 00339: 1/ 20 sec ..~~(,_,"> fn_img= >>>>> 067786@Runs/000695_XmippProtCropResizeParticles/extra/output_images.stk >>>>> bg_avg= 0.429345 bg_stddev= 0.852297 bg_radius= 14.0174 >>>>> 00340: ERROR: >>>>> 00341: ERROR: It appears that these images have not been normalised >>>>> to >>>>> an average background value of 0 and a stddev value of 1. >>>>> 00342: Note that the average and stddev >>>>> values for the background are calculated: >>>>> 00343: (1) for single particles: outside >>>>> a >>>>> circle with the particle diameter >>>>> 00344: (2) for helical segments: outside >>>>> a >>>>> cylinder (tube) with the helical tube diameter >>>>> 00345: You can use the relion_preprocess >>>>> program to normalise your images >>>>> 00346: If you are sure you have >>>>> normalised >>>>> the images correctly (also see the RELION Wiki), you can switch off >>>>> this >>>>> error message using the --dont_check_norm command line option >>>>> 00347: File: >>>>> /usr/local/scipion/software/em/relion-2.1/src/ml_optimiser.cpp line: >>>>> 1879 >>>>> 00348: >>>>> >>>>> -------------------------------------------------------------------------- >>>>> 00349: MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD >>>>> 00350: with errorcode 1. >>>>> 00351: >>>>> 00352: NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI >>>>> processes. >>>>> 00353: You may or may not see output from other processes, depending >>>>> on >>>>> 00354: exactly when Open MPI kills them. 
>>>>> 00355: >>>>> >>>>> -------------------------------------------------------------------------- >>>>> 00356: Traceback (most recent call last): >>>>> 00357: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >>>>> line >>>>> 186, in run >>>>> 00358: self._run() >>>>> 00359: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >>>>> line >>>>> 233, in _run >>>>> 00360: resultFiles = self._runFunc() >>>>> 00361: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >>>>> line >>>>> 229, in _runFunc >>>>> 00362: return self._func(*self._args) >>>>> 00363: File >>>>> "/usr/local/scipion/pyworkflow/em/packages/relion/protocol_base.py", >>>>> line >>>>> 880, in runRelionStep >>>>> 00364: self.runJob(self._getProgram(), params) >>>>> 00365: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >>>>> line >>>>> 1138, in runJob >>>>> 00366: self._stepsExecutor.runJob(self._log, program, arguments, >>>>> **kwargs) >>>>> 00367: File "/usr/local/scipion/pyworkflow/protocol/executor.py", >>>>> line >>>>> 56, in runJob >>>>> 00368: env=env, cwd=cwd) >>>>> 00369: File "/usr/local/scipion/pyworkflow/utils/process.py", line >>>>> 51, >>>>> in runJob >>>>> 00370: return runCommand(command, env, cwd) >>>>> 00371: File "/usr/local/scipion/pyworkflow/utils/process.py", line >>>>> 65, >>>>> in runCommand >>>>> 00372: check_call(command, shell=True, stdout=sys.stdout, >>>>> stderr=sys.stderr, env=env, cwd=cwd) >>>>> 00373: File >>>>> "/usr/local/scipion/software/lib/python2.7/subprocess.py", >>>>> line 186, in check_call >>>>> 00374: raise CalledProcessError(retcode, cmd) >>>>> 00375: CalledProcessError: Command 'mpirun -np 3 -bynode `which >>>>> relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale >>>>> --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 >>>>> --ctf >>>>> --offset_range 5.0 --oversampling 1 --pool 3 --o >>>>> Runs/000827_ProtRelionClassify2D/extra/relion --i >>>>> Runs/000827_ProtRelionClassify2D/input_particles.star >>>>> --particle_diameter >>>>> 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix >>>>> 7.134 >>>>> --j 1' returned non-zero exit status 1 >>>>> 00376: Protocol failed: Command 'mpirun -np 3 -bynode `which >>>>> relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale >>>>> --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 >>>>> --ctf >>>>> --offset_range 5.0 --oversampling 1 --pool 3 --o >>>>> Runs/000827_ProtRelionClassify2D/extra/relion --i >>>>> Runs/000827_ProtRelionClassify2D/input_particles.star >>>>> --particle_diameter >>>>> 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix >>>>> 7.134 >>>>> --j 1' returned non-zero exit status 1 >>>>> 00377: FAILED: runRelionStep, step 2 >>>>> 00378: 2018-05-03 10:35:36.970022 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Thanks for your help >>>>> >>>>> >>>>> >>>>> Meng-Chiao (Joseph) Ho >>>>> >>>>> Assistant Research Fellow >>>>> >>>>> Institute of Biological Chemistry, Academia Sinica >>>>> >>>>> No. 128, Sec 2, Academia Road, Nankang, Taipei 115, Taiwan >>>>> >>>>> Tel: 886-2-27855696 ext 3080/3162 >>>>> >>>>> Email: jo...@ga... >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> Check out the vibrant tech community on one of the world's most >>>>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >>>>> _______________________________________________ >>>>> scipion-users mailing list >>>>> sci...@li... 
>>>>> https://lists.sourceforge.net/lists/listinfo/scipion-users >>>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Check out the vibrant tech community on one of the world's most >>>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >>>> _______________________________________________ >>>> scipion-users mailing list >>>> sci...@li... >>>> https://lists.sourceforge.net/lists/listinfo/scipion-users >>>> >>> >>> ------------------------------------------------------------------------------ >>> Check out the vibrant tech community on one of the world's most >>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >>> _______________________________________________ >>> scipion-users mailing list >>> sci...@li... >>> https://lists.sourceforge.net/lists/listinfo/scipion-users >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > -- ------------------------------------------------------------------------ Carlos Oscar Sánchez Sorzano e-mail: co...@cn... Biocomputing unit http://i2pc.es/coss National Center of Biotechnology (CSIC) c/Darwin, 3 Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 28049 MADRID (SPAIN) Fax: 34-91-585 4506 ------------------------------------------------------------------------ |
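To make the quoted RELION check concrete: outside a centred circle of the background radius, the pixel values of each particle image should have a mean of about 0 and a standard deviation of about 1. Below is a self-contained sketch of that measurement and of a deliberately simplified re-normalisation (plain mean/std scaling, without the ramp background fit that the extraction and "preprocess particles" protocols apply); the image size, radius, and function name are arbitrary, and the 0.43/0.85 values only mimic the bg_avg/bg_stddev printed in the log:

# Illustrate the background criterion from the RELION error message:
# mean ~0 and stddev ~1 outside a centred circle of the given radius.
import numpy as np

def background_stats(img, bg_radius):
    """Mean and stddev of pixels outside a centred circle of bg_radius."""
    ny, nx = img.shape
    y, x = np.ogrid[:ny, :nx]
    r2 = (y - ny / 2.0) ** 2 + (x - nx / 2.0) ** 2
    bg = img[r2 > bg_radius ** 2]
    return bg.mean(), bg.std()

# Synthetic stand-in for an un-normalised particle background.
rng = np.random.RandomState(0)
img = rng.normal(0.43, 0.85, size=(64, 64))

mean, std = background_stats(img, bg_radius=28)
print('before: bg_avg=%.3f  bg_stddev=%.3f' % (mean, std))

img_norm = (img - mean) / std          # simplified re-normalisation
mean, std = background_stats(img_norm, bg_radius=28)
print('after:  bg_avg=%.3f  bg_stddev=%.3f' % (mean, std))

In practice the fix recommended in this thread is not to normalise by hand but to re-run the extraction or "preprocess particles" step with normalisation enabled after any rescaling, or, if the images are known to be fine, to use the --dont_check_norm option mentioned elsewhere in the thread.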
From: Joseph Ho <sbd...@gm...> - 2018-05-03 14:28:56
|
Hi, Jose: I ran relion particle extraction.. still not working. I also don't do downscaling in the Xmipp but still not working. Only I use --dont_check_norm. Then the relion 2D classification process. Not sure why all the normalise doesn't work in my Ximpp/relion particle extraction? Thanks for your help. Joseph On Thu, May 3, 2018 at 7:22 PM, Jose Miguel de la Rosa Trevin <del...@gm...> wrote: > Hi Joseph, > > Have you extracted the particles with xmipp protocol? > Can you try extracting with relion and check if the error remains? > I think the relion - extract particles protocol is not in the menu (we need > to fix that > for the upcoming bugfix release) but you can find it with Ctrl + F in the > searching dialog. > > If the relion 2D jobs still complains about the normalization when using the > particles > extracted with relion (I have observed this when downscaling). You can try > to use > the advanced option '--dont_check_norm' > > Hope this helps, > Jose Miguel > > > On Thu, May 3, 2018 at 12:49 PM, Joseph Ho <sbd...@gm...> wrote: >> >> Hi, Grigory: >> >> Thanks for your reply and help. I notice that normalised error. I also >> puzzle about this. Because I do click the normalise function during >> the particle extraction of scipion. I followed the scipion youtube >> (beta-gal) setup. Normalise type:Ramp; Background radius -1, >> >> Thanks for your help >> >> Joseph >> >> On Thu, May 3, 2018 at 6:19 PM, Gregory Sharov <sha...@gm...> >> wrote: >> > Hello Joseph, >> > >> > the error message states precisely what is wrong with your job: >> > >> > 00341: ERROR: It appears that these images have not been normalised to >> > an >> > average background value of 0 and a stddev value of 1. >> > 00342: Note that the average and stddev >> > values for the background are calculated: >> > 00343: (1) for single particles: outside >> > a >> > circle with the particle diameter >> > 00344: (2) for helical segments: outside >> > a >> > cylinder (tube) with the helical tube diameter >> > 00345: You can use the relion_preprocess >> > program to normalise your images >> > 00346: If you are sure you have >> > normalised >> > the images correctly (also see the RELION Wiki), you can switch off this >> > error message using the --dont_check_norm command line option >> > >> > >> > Best regards, >> > Grigory >> > >> > >> > -------------------------------------------------------------------------------- >> > Grigory Sharov, Ph.D. >> > >> > MRC Laboratory of Molecular Biology, >> > Francis Crick Avenue, >> > Cambridge Biomedical Campus, >> > Cambridge CB2 0QH, UK. >> > tel. +44 (0) 1223 267542 >> > e-mail: gs...@mr... >> > >> > On Thu, May 3, 2018 at 4:02 AM, Joseph Ho <jo...@ga...> >> > wrote: >> >> >> >> Dear Sir: >> >> >> >> >> >> >> >> Relion 2D was running but I stopped it. After I tried to restart, I got >> >> this error msg. I could not get Relion 2D running anymore. I also tried >> >> MPI >> >> is 1 but it still failed. I use scipion v.1.2 and I installed it from >> >> source. >> >> >> >> >> >> >> >> The msg from run.stdout >> >> >> >> >> >> >> >> 00309: The deprecated forms *will* disappear in a future version of >> >> Open >> >> MPI. >> >> 00310: Please update to the new syntax. 
>> >> 00311: >> >> >> >> -------------------------------------------------------------------------- >> >> 00312: >> >> >> >> -------------------------------------------------------------------------- >> >> 00313: [[50020,1],0]: A high-performance Open MPI point-to-point >> >> messaging module >> >> 00314: was unable to find any relevant network interfaces: >> >> 00315: >> >> 00316: Module: OpenFabrics (openib) >> >> 00317: Host: spgpu3 >> >> 00318: >> >> 00319: Another transport will be used instead, although this may >> >> result >> >> in >> >> 00320: lower performance. >> >> 00321: >> >> >> >> -------------------------------------------------------------------------- >> >> 00322: === RELION MPI setup === >> >> 00323: + Number of MPI processes = 3 >> >> 00324: + Master (0) runs on host = spgpu3 >> >> 00325: + Slave 1 runs on host = spgpu3 >> >> 00326: + Slave 2 runs on host = spgpu3 >> >> 00327: ================= >> >> 00328: uniqueHost spgpu3 has 2 ranks. >> >> 00329: Slave 1 will distribute threads over devices 1 2 3 >> >> 00330: Thread 0 on slave 1 mapped to device 1 >> >> 00331: Slave 2 will distribute threads over devices 1 2 3 >> >> 00332: Thread 0 on slave 2 mapped to device 1 >> >> 00333: Device 1 on spgpu3 is split between 2 slaves >> >> 00334: Running CPU instructions in double precision. >> >> 00335: [spgpu3:17287] 2 more processes have sent help message >> >> help-mpi-btl-base.txt / btl:no-nics >> >> 00336: [spgpu3:17287] Set MCA parameter "orte_base_help_aggregate" to >> >> 0 >> >> to see all help / error messages >> >> 00337: + WARNING: Changing psi sampling rate (before oversampling) >> >> to >> >> 5.625 degrees, for more efficient GPU calculations >> >> 00338: Estimating initial noise spectra >> >> 00339: 1/ 20 sec ..~~(,_,"> fn_img= >> >> 067786@Runs/000695_XmippProtCropResizeParticles/extra/output_images.stk >> >> bg_avg= 0.429345 bg_stddev= 0.852297 bg_radius= 14.0174 >> >> 00340: ERROR: >> >> 00341: ERROR: It appears that these images have not been normalised >> >> to >> >> an average background value of 0 and a stddev value of 1. >> >> 00342: Note that the average and stddev >> >> values for the background are calculated: >> >> 00343: (1) for single particles: outside >> >> a >> >> circle with the particle diameter >> >> 00344: (2) for helical segments: outside >> >> a >> >> cylinder (tube) with the helical tube diameter >> >> 00345: You can use the relion_preprocess >> >> program to normalise your images >> >> 00346: If you are sure you have >> >> normalised >> >> the images correctly (also see the RELION Wiki), you can switch off >> >> this >> >> error message using the --dont_check_norm command line option >> >> 00347: File: >> >> /usr/local/scipion/software/em/relion-2.1/src/ml_optimiser.cpp line: >> >> 1879 >> >> 00348: >> >> >> >> -------------------------------------------------------------------------- >> >> 00349: MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD >> >> 00350: with errorcode 1. >> >> 00351: >> >> 00352: NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI >> >> processes. >> >> 00353: You may or may not see output from other processes, depending >> >> on >> >> 00354: exactly when Open MPI kills them. 
>> >> 00355: >> >> >> >> -------------------------------------------------------------------------- >> >> 00356: Traceback (most recent call last): >> >> 00357: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >> >> line >> >> 186, in run >> >> 00358: self._run() >> >> 00359: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >> >> line >> >> 233, in _run >> >> 00360: resultFiles = self._runFunc() >> >> 00361: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >> >> line >> >> 229, in _runFunc >> >> 00362: return self._func(*self._args) >> >> 00363: File >> >> "/usr/local/scipion/pyworkflow/em/packages/relion/protocol_base.py", >> >> line >> >> 880, in runRelionStep >> >> 00364: self.runJob(self._getProgram(), params) >> >> 00365: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", >> >> line >> >> 1138, in runJob >> >> 00366: self._stepsExecutor.runJob(self._log, program, arguments, >> >> **kwargs) >> >> 00367: File "/usr/local/scipion/pyworkflow/protocol/executor.py", >> >> line >> >> 56, in runJob >> >> 00368: env=env, cwd=cwd) >> >> 00369: File "/usr/local/scipion/pyworkflow/utils/process.py", line >> >> 51, >> >> in runJob >> >> 00370: return runCommand(command, env, cwd) >> >> 00371: File "/usr/local/scipion/pyworkflow/utils/process.py", line >> >> 65, >> >> in runCommand >> >> 00372: check_call(command, shell=True, stdout=sys.stdout, >> >> stderr=sys.stderr, env=env, cwd=cwd) >> >> 00373: File >> >> "/usr/local/scipion/software/lib/python2.7/subprocess.py", >> >> line 186, in check_call >> >> 00374: raise CalledProcessError(retcode, cmd) >> >> 00375: CalledProcessError: Command 'mpirun -np 3 -bynode `which >> >> relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale >> >> --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 >> >> --ctf >> >> --offset_range 5.0 --oversampling 1 --pool 3 --o >> >> Runs/000827_ProtRelionClassify2D/extra/relion --i >> >> Runs/000827_ProtRelionClassify2D/input_particles.star >> >> --particle_diameter >> >> 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix >> >> 7.134 >> >> --j 1' returned non-zero exit status 1 >> >> 00376: Protocol failed: Command 'mpirun -np 3 -bynode `which >> >> relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale >> >> --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 >> >> --ctf >> >> --offset_range 5.0 --oversampling 1 --pool 3 --o >> >> Runs/000827_ProtRelionClassify2D/extra/relion --i >> >> Runs/000827_ProtRelionClassify2D/input_particles.star >> >> --particle_diameter >> >> 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix >> >> 7.134 >> >> --j 1' returned non-zero exit status 1 >> >> 00377: FAILED: runRelionStep, step 2 >> >> 00378: 2018-05-03 10:35:36.970022 >> >> >> >> >> >> >> >> >> >> >> >> Thanks for your help >> >> >> >> >> >> >> >> Meng-Chiao (Joseph) Ho >> >> >> >> Assistant Research Fellow >> >> >> >> Institute of Biological Chemistry, Academia Sinica >> >> >> >> No. 128, Sec 2, Academia Road, Nankang, Taipei 115, Taiwan >> >> >> >> Tel: 886-2-27855696 ext 3080/3162 >> >> >> >> Email: jo...@ga... >> >> >> >> >> >> >> >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> Check out the vibrant tech community on one of the world's most >> >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> >> _______________________________________________ >> >> scipion-users mailing list >> >> sci...@li... 
>> >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> >> >> > >> > >> > >> > ------------------------------------------------------------------------------ >> > Check out the vibrant tech community on one of the world's most >> > engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> > _______________________________________________ >> > scipion-users mailing list >> > sci...@li... >> > https://lists.sourceforge.net/lists/listinfo/scipion-users >> > >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > |
From: Jose M. de la R. T. <del...@gm...> - 2018-05-03 11:23:07
|
Hi Joseph, Have you extracted the particles with xmipp protocol? Can you try extracting with relion and check if the error remains? I think the relion - extract particles protocol is not in the menu (we need to fix that for the upcoming bugfix release) but you can find it with Ctrl + F in the searching dialog. If the relion 2D jobs still complains about the normalization when using the particles extracted with relion (I have observed this when downscaling). You can try to use the advanced option '--dont_check_norm' Hope this helps, Jose Miguel On Thu, May 3, 2018 at 12:49 PM, Joseph Ho <sbd...@gm...> wrote: > Hi, Grigory: > > Thanks for your reply and help. I notice that normalised error. I also > puzzle about this. Because I do click the normalise function during > the particle extraction of scipion. I followed the scipion youtube > (beta-gal) setup. Normalise type:Ramp; Background radius -1, > > Thanks for your help > > Joseph > > On Thu, May 3, 2018 at 6:19 PM, Gregory Sharov <sha...@gm...> > wrote: > > Hello Joseph, > > > > the error message states precisely what is wrong with your job: > > > > 00341: ERROR: It appears that these images have not been normalised to > an > > average background value of 0 and a stddev value of 1. > > 00342: Note that the average and stddev > > values for the background are calculated: > > 00343: (1) for single particles: outside a > > circle with the particle diameter > > 00344: (2) for helical segments: outside a > > cylinder (tube) with the helical tube diameter > > 00345: You can use the relion_preprocess > > program to normalise your images > > 00346: If you are sure you have normalised > > the images correctly (also see the RELION Wiki), you can switch off this > > error message using the --dont_check_norm command line option > > > > > > Best regards, > > Grigory > > > > ------------------------------------------------------------ > -------------------- > > Grigory Sharov, Ph.D. > > > > MRC Laboratory of Molecular Biology, > > Francis Crick Avenue, > > Cambridge Biomedical Campus, > > Cambridge CB2 0QH, UK. > > tel. +44 (0) 1223 267542 > > e-mail: gs...@mr... > > > > On Thu, May 3, 2018 at 4:02 AM, Joseph Ho <jo...@ga...> > wrote: > >> > >> Dear Sir: > >> > >> > >> > >> Relion 2D was running but I stopped it. After I tried to restart, I got > >> this error msg. I could not get Relion 2D running anymore. I also tried > MPI > >> is 1 but it still failed. I use scipion v.1.2 and I installed it from > >> source. > >> > >> > >> > >> The msg from run.stdout > >> > >> > >> > >> 00309: The deprecated forms *will* disappear in a future version of > Open > >> MPI. > >> 00310: Please update to the new syntax. > >> 00311: > >> ------------------------------------------------------------ > -------------- > >> 00312: > >> ------------------------------------------------------------ > -------------- > >> 00313: [[50020,1],0]: A high-performance Open MPI point-to-point > >> messaging module > >> 00314: was unable to find any relevant network interfaces: > >> 00315: > >> 00316: Module: OpenFabrics (openib) > >> 00317: Host: spgpu3 > >> 00318: > >> 00319: Another transport will be used instead, although this may > result > >> in > >> 00320: lower performance. 
> >> 00321: > >> ------------------------------------------------------------ > -------------- > >> 00322: === RELION MPI setup === > >> 00323: + Number of MPI processes = 3 > >> 00324: + Master (0) runs on host = spgpu3 > >> 00325: + Slave 1 runs on host = spgpu3 > >> 00326: + Slave 2 runs on host = spgpu3 > >> 00327: ================= > >> 00328: uniqueHost spgpu3 has 2 ranks. > >> 00329: Slave 1 will distribute threads over devices 1 2 3 > >> 00330: Thread 0 on slave 1 mapped to device 1 > >> 00331: Slave 2 will distribute threads over devices 1 2 3 > >> 00332: Thread 0 on slave 2 mapped to device 1 > >> 00333: Device 1 on spgpu3 is split between 2 slaves > >> 00334: Running CPU instructions in double precision. > >> 00335: [spgpu3:17287] 2 more processes have sent help message > >> help-mpi-btl-base.txt / btl:no-nics > >> 00336: [spgpu3:17287] Set MCA parameter "orte_base_help_aggregate" to > 0 > >> to see all help / error messages > >> 00337: + WARNING: Changing psi sampling rate (before oversampling) to > >> 5.625 degrees, for more efficient GPU calculations > >> 00338: Estimating initial noise spectra > >> 00339: 1/ 20 sec ..~~(,_,"> fn_img= > >> 067786@Runs/000695_XmippProtCropResizeParticles/extra/output_images.stk > >> bg_avg= 0.429345 bg_stddev= 0.852297 bg_radius= 14.0174 > >> 00340: ERROR: > >> 00341: ERROR: It appears that these images have not been normalised to > >> an average background value of 0 and a stddev value of 1. > >> 00342: Note that the average and stddev > >> values for the background are calculated: > >> 00343: (1) for single particles: outside > a > >> circle with the particle diameter > >> 00344: (2) for helical segments: outside > a > >> cylinder (tube) with the helical tube diameter > >> 00345: You can use the relion_preprocess > >> program to normalise your images > >> 00346: If you are sure you have > normalised > >> the images correctly (also see the RELION Wiki), you can switch off this > >> error message using the --dont_check_norm command line option > >> 00347: File: > >> /usr/local/scipion/software/em/relion-2.1/src/ml_optimiser.cpp line: > 1879 > >> 00348: > >> ------------------------------------------------------------ > -------------- > >> 00349: MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD > >> 00350: with errorcode 1. > >> 00351: > >> 00352: NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI > >> processes. > >> 00353: You may or may not see output from other processes, depending > on > >> 00354: exactly when Open MPI kills them. 
> >> 00355: > >> ------------------------------------------------------------ > -------------- > >> 00356: Traceback (most recent call last): > >> 00357: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", > line > >> 186, in run > >> 00358: self._run() > >> 00359: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", > line > >> 233, in _run > >> 00360: resultFiles = self._runFunc() > >> 00361: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", > line > >> 229, in _runFunc > >> 00362: return self._func(*self._args) > >> 00363: File > >> "/usr/local/scipion/pyworkflow/em/packages/relion/protocol_base.py", > line > >> 880, in runRelionStep > >> 00364: self.runJob(self._getProgram(), params) > >> 00365: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", > line > >> 1138, in runJob > >> 00366: self._stepsExecutor.runJob(self._log, program, arguments, > >> **kwargs) > >> 00367: File "/usr/local/scipion/pyworkflow/protocol/executor.py", > line > >> 56, in runJob > >> 00368: env=env, cwd=cwd) > >> 00369: File "/usr/local/scipion/pyworkflow/utils/process.py", line > 51, > >> in runJob > >> 00370: return runCommand(command, env, cwd) > >> 00371: File "/usr/local/scipion/pyworkflow/utils/process.py", line > 65, > >> in runCommand > >> 00372: check_call(command, shell=True, stdout=sys.stdout, > >> stderr=sys.stderr, env=env, cwd=cwd) > >> 00373: File "/usr/local/scipion/software/ > lib/python2.7/subprocess.py", > >> line 186, in check_call > >> 00374: raise CalledProcessError(retcode, cmd) > >> 00375: CalledProcessError: Command 'mpirun -np 3 -bynode `which > >> relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale > >> --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 > --ctf > >> --offset_range 5.0 --oversampling 1 --pool 3 --o > >> Runs/000827_ProtRelionClassify2D/extra/relion --i > >> Runs/000827_ProtRelionClassify2D/input_particles.star > --particle_diameter > >> 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix > 7.134 > >> --j 1' returned non-zero exit status 1 > >> 00376: Protocol failed: Command 'mpirun -np 3 -bynode `which > >> relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale > >> --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 > --ctf > >> --offset_range 5.0 --oversampling 1 --pool 3 --o > >> Runs/000827_ProtRelionClassify2D/extra/relion --i > >> Runs/000827_ProtRelionClassify2D/input_particles.star > --particle_diameter > >> 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix > 7.134 > >> --j 1' returned non-zero exit status 1 > >> 00377: FAILED: runRelionStep, step 2 > >> 00378: 2018-05-03 10:35:36.970022 > >> > >> > >> > >> > >> > >> Thanks for your help > >> > >> > >> > >> Meng-Chiao (Joseph) Ho > >> > >> Assistant Research Fellow > >> > >> Institute of Biological Chemistry, Academia Sinica > >> > >> No. 128, Sec 2, Academia Road, Nankang, Taipei 115, Taiwan > >> > >> Tel: 886-2-27855696 ext 3080/3162 > >> > >> Email: jo...@ga... > >> > >> > >> > >> > >> > >> ------------------------------------------------------------ > ------------------ > >> Check out the vibrant tech community on one of the world's most > >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot > >> _______________________________________________ > >> scipion-users mailing list > >> sci...@li... 
> >> https://lists.sourceforge.net/lists/listinfo/scipion-users > >> > > > > > > ------------------------------------------------------------ > ------------------ > > Check out the vibrant tech community on one of the world's most > > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > _______________________________________________ > > scipion-users mailing list > > sci...@li... > > https://lists.sourceforge.net/lists/listinfo/scipion-users > > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > |
From: Joseph Ho <sbd...@gm...> - 2018-05-03 10:50:23
|
Hi, Grigory,

Thanks for your reply and help. I did notice that normalisation error, and I am
puzzled by it as well, because I did enable the normalise option during particle
extraction in Scipion. I followed the setup from the Scipion YouTube (beta-gal)
tutorial: Normalise type: Ramp; Background radius: -1.

Thanks for your help,

Joseph

On Thu, May 3, 2018 at 6:19 PM, Gregory Sharov <sha...@gm...> wrote:
> Hello Joseph,
>
> the error message states precisely what is wrong with your job:
>
> 00341: ERROR: It appears that these images have not been normalised to an
> average background value of 0 and a stddev value of 1.
>
> [quoted text trimmed -- see Gregory's full reply below]
|
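A likely cause here, given the path in the log (Runs/000695_XmippProtCropResizeParticles),
is that the particles were cropped/resized after extraction: resizing can change the
background mean and stddev, so the normalisation applied at extraction time no longer
gives the 0 / 1 values RELION checks for. One way out, as the error message itself
suggests, is to re-normalise the resized particles with relion_preprocess before 2D
classification. This is only a rough sketch -- option names may differ between RELION
versions and the output rootname is a placeholder; the background radius is given in
pixels at the current sampling, i.e. roughly 200 / (2 * 7.134) ~ 14 pixels for the
values in the log:

    relion_preprocess --operate_on Runs/000827_ProtRelionClassify2D/input_particles.star \
                      --operate_out particles_norm --norm --bg_radius 14

Alternatively, redo the extraction directly at the downsampled size with the normalise
option enabled, so no separate preprocessing step is needed.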
From: Gregory S. <sha...@gm...> - 2018-05-03 10:19:34
|
Hello Joseph,

the error message states precisely what is wrong with your job:

00341: ERROR: It appears that these images have not been normalised to an
       average background value of 0 and a stddev value of 1.
00342: Note that the average and stddev values for the background are calculated:
00343: (1) for single particles: outside a circle with the particle diameter
00344: (2) for helical segments: outside a cylinder (tube) with the helical tube diameter
00345: You can use the relion_preprocess program to normalise your images
00346: If you are sure you have normalised the images correctly (also see the
       RELION Wiki), you can switch off this error message using the
       --dont_check_norm command line option

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...

On Thu, May 3, 2018 at 4:02 AM, Joseph Ho <jo...@ga...> wrote:
> Dear Sir:
>
> Relion 2D was running but I stopped it. After I tried to restart, I got
> this error msg. I could not get Relion 2D running anymore.
>
> [quoted text trimmed -- see Joseph's original message below]
|
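If the particles really are normalised correctly and you only want to get past the
check, the --dont_check_norm flag mentioned above can be appended to the refine
command. In Scipion this would normally go into the protocol's additional/extra
command-line arguments box, if your version of the form exposes one (that field name
is an assumption on my part); run by hand, it would look like the command from the
log with the flag added at the end:

    mpirun -np 3 -bynode `which relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale \
        --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 --ctf \
        --offset_range 5.0 --oversampling 1 --pool 3 \
        --o Runs/000827_ProtRelionClassify2D/extra/relion \
        --i Runs/000827_ProtRelionClassify2D/input_particles.star \
        --particle_diameter 200 --K 200 --flatten_solvent --zero_mask \
        --offset_step 2.0 --angpix 7.134 --j 1 --dont_check_norm

Note that with bg_avg= 0.429345 and bg_stddev= 0.852297 reported in the log, the check
is flagging a real deviation, so re-normalising is usually a better fix than bypassing
the test.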
From: Joseph H. <jo...@ga...> - 2018-05-03 04:15:23
|
Dear Sir: Relion 2D was running but I stopped it. After I tried to restart, I got this error msg. I could not get Relion 2D running anymore. I also tried MPI is 1 but it still failed. I use scipion v.1.2 and I installed it from source. The msg from run.stdout 00309: The deprecated forms *will* disappear in a future version of Open MPI. 00310: Please update to the new syntax. 00311: -------------------------------------------------------------------------- 00312: -------------------------------------------------------------------------- 00313: [[50020,1],0]: A high-performance Open MPI point-to-point messaging module 00314: was unable to find any relevant network interfaces: 00315: 00316: Module: OpenFabrics (openib) 00317: Host: spgpu3 00318: 00319: Another transport will be used instead, although this may result in 00320: lower performance. 00321: -------------------------------------------------------------------------- 00322: === RELION MPI setup === 00323: + Number of MPI processes = 3 00324: + Master (0) runs on host = spgpu3 00325: + Slave 1 runs on host = spgpu3 00326: + Slave 2 runs on host = spgpu3 00327: ================= 00328: uniqueHost spgpu3 has 2 ranks. 00329: Slave 1 will distribute threads over devices 1 2 3 00330: Thread 0 on slave 1 mapped to device 1 00331: Slave 2 will distribute threads over devices 1 2 3 00332: Thread 0 on slave 2 mapped to device 1 00333: Device 1 on spgpu3 is split between 2 slaves 00334: Running CPU instructions in double precision. 00335: [spgpu3:17287] 2 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics 00336: [spgpu3:17287] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages 00337: + WARNING: Changing psi sampling rate (before oversampling) to 5.625 degrees, for more efficient GPU calculations 00338: Estimating initial noise spectra 00339: 1/ 20 sec ..~~(,_,"> fn_img= 067786@Runs/000695_XmippProtCropResizeParticles/extra/output_images.stk <mailto:067786@Runs/000695_XmippProtCropResizeParticles/extra/output_images. stk> bg_avg= 0.429345 bg_stddev= 0.852297 bg_radius= 14.0174 00340: ERROR: 00341: ERROR: It appears that these images have not been normalised to an average background value of 0 and a stddev value of 1. 00342: Note that the average and stddev values for the background are calculated: 00343: (1) for single particles: outside a circle with the particle diameter 00344: (2) for helical segments: outside a cylinder (tube) with the helical tube diameter 00345: You can use the relion_preprocess program to normalise your images 00346: If you are sure you have normalised the images correctly (also see the RELION Wiki), you can switch off this error message using the --dont_check_norm command line option 00347: File: /usr/local/scipion/software/em/relion-2.1/src/ml_optimiser.cpp line: 1879 00348: -------------------------------------------------------------------------- 00349: MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD 00350: with errorcode 1. 00351: 00352: NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. 00353: You may or may not see output from other processes, depending on 00354: exactly when Open MPI kills them. 
00355: -------------------------------------------------------------------------- 00356: Traceback (most recent call last): 00357: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 186, in run 00358: self._run() 00359: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 233, in _run 00360: resultFiles = self._runFunc() 00361: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 229, in _runFunc 00362: return self._func(*self._args) 00363: File "/usr/local/scipion/pyworkflow/em/packages/relion/protocol_base.py", line 880, in runRelionStep 00364: self.runJob(self._getProgram(), params) 00365: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 1138, in runJob 00366: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) 00367: File "/usr/local/scipion/pyworkflow/protocol/executor.py", line 56, in runJob 00368: env=env, cwd=cwd) 00369: File "/usr/local/scipion/pyworkflow/utils/process.py", line 51, in runJob 00370: return runCommand(command, env, cwd) 00371: File "/usr/local/scipion/pyworkflow/utils/process.py", line 65, in runCommand 00372: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr, env=env, cwd=cwd) 00373: File "/usr/local/scipion/software/lib/python2.7/subprocess.py", line 186, in check_call 00374: raise CalledProcessError(retcode, cmd) 00375: CalledProcessError: Command 'mpirun -np 3 -bynode `which relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 --ctf --offset_range 5.0 --oversampling 1 --pool 3 --o Runs/000827_ProtRelionClassify2D/extra/relion --i Runs/000827_ProtRelionClassify2D/input_particles.star --particle_diameter 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix 7.134 --j 1' returned non-zero exit status 1 00376: Protocol failed: Command 'mpirun -np 3 -bynode `which relion_refine_mpi` --gpu 1,2,3 --tau2_fudge 2 --scale --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 --ctf --offset_range 5.0 --oversampling 1 --pool 3 --o Runs/000827_ProtRelionClassify2D/extra/relion --i Runs/000827_ProtRelionClassify2D/input_particles.star --particle_diameter 200 --K 200 --flatten_solvent --zero_mask --offset_step 2.0 --angpix 7.134 --j 1' returned non-zero exit status 1 00377: FAILED: runRelionStep, step 2 00378: 2018-05-03 10:35:36.970022 Thanks for your help Meng-Chiao (Joseph) Ho Assistant Research Fellow Institute of Biological Chemistry, Academia Sinica No. 128, Sec 2, Academia Road, Nankang, Taipei 115, Taiwan Tel: 886-2-27855696 ext 3080/3162 Email: jo...@ga... |
From: Pablo C. <pc...@cn...> - 2018-04-26 08:43:55
|
Thanks Gregory, and no worries Patrick. Emails here are welcome and are a good
source of "learning" and of discovering issues.

Regarding the test data: you can ask for the list, as Gregory pointed out, but you
can also download the datasets yourself. It might be worth "opening up" the test
data folder, depending on your "security" policies: "/opt/scipion/data/tests" -->
give read and write permissions to all.

All the best,

Pablo.

On 25/04/18 19:37, Gregory Sharov wrote:
> PS. After downloading test data once, the tests can be run by normal
> users as well! :)
>
> [quoted text trimmed -- see Gregory's message below]
|
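To make Pablo's suggestion concrete: assuming your security policy allows it, one way
to open up the shared test-data folder so any user can read it (and the one-off
download can write into it) is a recursive chmod; a group-based scheme (a dedicated
group plus g+rwX) is a stricter alternative:

    sudo chmod -R a+rwX /opt/scipion/data/tests

Here a+rwX grants read and write to everyone, and execute (directory traversal) only
on directories and files that are already executable.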
From: Jose M. de la R. T. <del...@gm...> - 2018-04-26 06:56:05
|
Dear Patrick,

Thanks for showing Scipion to your users and providing us with feedback.
See answers below.

On Thu, Apr 26, 2018 at 12:30 AM, Patrick Goetz <pg...@ma...> wrote:

> So, I got my users to take a look at Scipion, and we immediately had a few
> more questions.
>
> The very first was "my favorite way to view images is e2display [.py, an
> EMAN2 program] -- can we run this through Scipion?"

Nope, e2display.py is not currently accessible through the menu. The default
viewer (although others can be used in some cases) is the Xmipp viewer. Some
time ago we modified it to allow easy subset creation and other features, and
we cannot do this with other viewers.

> e2display.py does not appear in the Scipion menu, but oddly appears to be
> called through eman boxer? If I try just running it as
>
> /opt/scipion/software/em/eman-2.12/bin/e2display.py

You can launch it using the following command:

scipion e2display.py [extra args]

> there are missing python modules (e.g. EMAN2_meta).

I don't know what EMAN2_meta is; I haven't heard of it before. Have you
checked the version?

> So, first question is there any way to get e2display.py through the menu,
> or is there already a superior image display utility there?

We could integrate e2display.py, but as I said, one of the main features of
the default viewer is the ability to create subsets. e2display.py is a great
viewer, but it has a big disadvantage when it comes to particle sets and
subsets: it only manages its own format, so in many cases the particles would
need to be converted to HDF, which is not desirable with today's huge
datasets.

> Second, the menu seems to be fixed and not generated by what applications
> are installed? I haven't installed relion yet, but it appears in the
> Scipion menu.

EM programs that are not installed should appear greyed out, with a "(not
installed)" label, in the menu.

>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _______________________________________________
> scipion-users mailing list
> sci...@li...
> https://lists.sourceforge.net/lists/listinfo/scipion-users
|
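For completeness, the launcher line above just needs a file to open; the path below is
only a placeholder, and e2display.py reads the usual EMAN2-supported formats (hdf,
mrc/mrcs, spi, ...):

    scipion e2display.py path/to/particles.hdf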
From: Patrick G. <pg...@ma...> - 2018-04-25 22:30:10
|
So, I got my users to take a look at Scipion, and we immediately had a few more questions. The very first was "my favorite way to view images is e2display [.py, an EMAN2 program] -- can we run this through Scipion?" e2display.py does not appear in the Scipion menu, but oddly appears to be called through eman boxer? If I try just running it as /opt/scipion/software/em/eman-2.12/bin/e2display.py there are missing python modules (e.g. EMAN2_meta). So, first question is there any way to get e2display.py through the menu, or is there already a superior image display utility there? Second, the menu seems to be fixed and not generated by what applications are installed? I haven't installed relion yet, but it appears in the Scipion menu. |
From: Gregory S. <sha...@gm...> - 2018-04-25 17:38:10
|
PS. After downloading test data once, the tests can be run by normal users
as well! :)

To see all test datasets, run: scipion testdata --list

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...

On Wed, Apr 25, 2018 at 6:24 PM, Patrick Goetz <pg...@ma...> wrote:
> Thank you for clearing that up. Run as administrator, the test completes
> successfully.
>
> [quoted text trimmed -- see the earlier messages below]
|
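The same helper can also fetch a single dataset by name, so the admin only downloads
what is needed into the shared data/tests folder. The exact flag is quoted from memory
and may differ between Scipion versions:

    scipion testdata --list
    scipion testdata --download hemoglobin_mda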
From: Patrick G. <pg...@ma...> - 2018-04-25 17:24:31
|
Thank you for clearing that up. Run as administrator, the test completes successfully. On 04/25/2018 12:09 PM, Gregory Sharov wrote: > Hi Patrick, > > in your case all test data by default is located in > /opt/scipion/data/tests folder. ScipionUserData is used only for user > projects. Test data is supposed to be downloaded by admin/the person who > installed scipion. > > Best regards, > Grigory > > -------------------------------------------------------------------------------- > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267542 <tel:+44%201223%20267542> > e-mail: gs...@mr... <mailto:gs...@mr...> > > On Wed, Apr 25, 2018 at 5:37 PM, Patrick Goetz <pg...@ma... > <mailto:pg...@ma...>> wrote: > > Hi - > > Sorry for continuously spamming the list, but the Spider MDA > Workflow test isn't working again, I just want to make sure I > haven't introduced structural installation problems. > > First, I re-installed Scipion in /opt so that the users can access > it, and made sure to install the spider package. Now running > scipion as an ordinary user: > > /opt/scipion/scipion tests em.workflows.test_workflow_spiderMDA > > I get error messages and "2 Tests failed" again. Snooping around in > the locally created ScipionUserData folder, I found this note in > > ~/ScipionUserData/projects/TestSpiderWorkflow/Runs/000002_ProtImportParticles/logs/run.stdout: > > ------- > Traceback (most recent call last): > File "/opt/scipion/pyworkflow/protocol/protocol.py", line 186, in run > self._run() > File "/opt/scipion/pyworkflow/protocol/protocol.py", line 1104, > in _run > 'Protocol.run: Validation errors:\n' + '\n'.join(errors)) > Exception: Protocol.run: Validation errors: > There are no files matching the pattern > /opt/scipion/data/tests/hemoglobin_mda/particles/*.spi > ------- > > > So apparently the test is trying to install data to > /opt/scipion/data rather than ~/ScipionUserData ? > > Am I missing a configuration step that would instruct the system to > process all data in the user's ScipionUserData folder? > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > <mailto:sci...@li...> > https://lists.sourceforge.net/lists/listinfo/scipion-users > <https://lists.sourceforge.net/lists/listinfo/scipion-users> > > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > |
From: Gregory S. <sha...@gm...> - 2018-04-25 17:09:53
|
Hi Patrick, in your case all test data by default is located in /opt/scipion/data/tests folder. ScipionUserData is used only for user projects. Test data is supposed to be downloaded by admin/the person who installed scipion. Best regards, Grigory -------------------------------------------------------------------------------- Grigory Sharov, Ph.D. MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge CB2 0QH, UK. tel. +44 (0) 1223 267542 <+44%201223%20267542> e-mail: gs...@mr... On Wed, Apr 25, 2018 at 5:37 PM, Patrick Goetz <pg...@ma...> wrote: > Hi - > > Sorry for continuously spamming the list, but the Spider MDA Workflow test > isn't working again, I just want to make sure I haven't introduced > structural installation problems. > > First, I re-installed Scipion in /opt so that the users can access it, and > made sure to install the spider package. Now running scipion as an > ordinary user: > > /opt/scipion/scipion tests em.workflows.test_workflow_spiderMDA > > I get error messages and "2 Tests failed" again. Snooping around in the > locally created ScipionUserData folder, I found this note in > > ~/ScipionUserData/projects/TestSpiderWorkflow/Runs/000002_ > ProtImportParticles/logs/run.stdout: > > ------- > Traceback (most recent call last): > File "/opt/scipion/pyworkflow/protocol/protocol.py", line 186, in run > self._run() > File "/opt/scipion/pyworkflow/protocol/protocol.py", line 1104, in _run > 'Protocol.run: Validation errors:\n' + '\n'.join(errors)) > Exception: Protocol.run: Validation errors: > There are no files matching the pattern /opt/scipion/data/tests/hemogl > obin_mda/particles/*.spi > ------- > > > So apparently the test is trying to install data to /opt/scipion/data > rather than ~/ScipionUserData ? > > Am I missing a configuration step that would instruct the system to > process all data in the user's ScipionUserData folder? > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > |
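Should the shared test-data location ever need to move, it is set in the configuration
generated by ./scipion config rather than hard-coded; from memory the relevant variable
in config/scipion.conf is SCIPION_TESTS, but that name is worth double-checking in your
own file, for example with:

    grep -i tests /opt/scipion/config/scipion.conf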