From: Dario Saczko-B. <dar...@go...> - 2020-01-21 11:51:48
Hi all,

I have a question regarding the speed of the GUI. Almost every window takes quite a long time to open (from a few seconds up to several minutes). Sometimes it helps to kill Scipion and reopen it, but this is not the most convenient workaround. Are there any settings that would make it run more smoothly on our workstation?

Intel® Xeon(R) CPU E5-2698 v3 @ 2.30GHz × 32
2x RAM DDR4 REG 32GB
2x GeForce GTX 1070

Thanks in advance.

Best,
Dario
From: David M. <dma...@cn...> - 2020-01-20 20:37:11
Hi Dario and Scipion users,

It seems that the Xmipp plugin is installed but not the software below it. Once the Xmipp plugin is installed, the Xmipp software appears below the Xmipp plugin in the plugin manager. You should choose a binary (for CentOS or for Debian distros) or the sources to compile (other distros). More information can be found in the link that Grigory posted.

Thanks!
_____
Dr. David Maluenda Niubó
619.029.310 - dma...@cn...
Centro Nacional de Biotecnología - CSIC
BioComputing Unit
From: Grigory S. <sha...@gm...> - 2020-01-20 16:46:06
Hi Dario,

It may well be that the Xmipp installation through the plugin manager has failed; you should check the Plugin.err or log file for details. If that's the case, you might try to install Xmipp from the command line:
https://scipion-em.github.io/docs/docs/scipion-modes/install-from-sources#step-4-installing-xmipp3-and-other-em-plugins

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267228
e-mail: gs...@mr...
From: Lugmayr, W. <w.l...@uk...> - 2020-01-20 16:37:05
Hi,

I extended the original scipion.conf with some site information and call it like this:

scipion --config scipion_cssb.conf

The [PACKAGES] section below, for example, is not in the original scipion.conf:

[PACKAGES]
EM_ROOT = software/em
MOTIONCOR2_BIN = MotionCor2_1.2.6-Cuda101
MOTIONCOR2_CUDA_LIB = /usr/local/cuda-10.1/lib64
MOTIONCOR2_HOME = %(EM_ROOT)s/motioncor2-1.2.6
XMIPP_HOME = %(EM_ROOT)s/xmipp

Maybe this helps?

Cheers,
Wolfgang
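The %(EM_ROOT)s references in the [PACKAGES] section above use ConfigParser-style interpolation: a value can refer to another option defined in the same section. A minimal sketch of how such values expand, written here in plain Python 3 purely for illustration (Scipion's own config loader may differ in details, and Scipion 2.x itself runs on Python 2.7):

    from configparser import ConfigParser

    conf_text = (
        "[PACKAGES]\n"
        "EM_ROOT = software/em\n"
        "MOTIONCOR2_HOME = %(EM_ROOT)s/motioncor2-1.2.6\n"
        "XMIPP_HOME = %(EM_ROOT)s/xmipp\n"
    )

    cfg = ConfigParser()
    cfg.read_string(conf_text)
    # %(EM_ROOT)s is replaced with the EM_ROOT value from the same section
    print(cfg.get("PACKAGES", "XMIPP_HOME"))       # software/em/xmipp
    print(cfg.get("PACKAGES", "MOTIONCOR2_HOME"))  # software/em/motioncor2-1.2.6

A single EM_ROOT change therefore propagates to every package path that refers to it, which is why only the site-specific entries need to be added to a custom config such as scipion_cssb.conf.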
From: Dario Saczko-B. <dar...@go...> - 2020-01-20 15:59:54
Hi all,

I have a question about Xmipp within Scipion, as it does not seem to work properly on my machine. I always get the error message below when Scipion starts, although I have installed Xmipp using the plugin manager (and have already removed and reinstalled it several times):

> >>> WARNING: Xmipp binaries not found. Ghost active.....BOOOOOO!
> > Please install Xmipp to get full functionality.
> (Configuration->Plugins->scipion-em-xmipp in Scipion manager window)

Has anyone else had this problem and knows how to deal with it? If there is any more information you need, let me know.

Thanks in advance,
Dario
From: Lugmayr, W. <w.l...@uk...> - 2020-01-08 10:40:06
Hi,

We also had crashes, but now it runs (from the command line) with the executable Gautomatch_v0.56_sm62_cu8.0 on GTX 1080 Ti and P100 GPU cards. Currently the V100 cards are all busy, so I cannot test it, but Scipion (Xmipp/CUDA 8) had problems on V100 cards, mainly because of CUDA 8.

We have multiple CUDA versions installed, so we explicitly load the cuda/8.0 environment module before execution:

$ module show cuda/8.0
-------------------------------------------------------------------
/etc/modulefiles/cuda/8.0:
module-whatis    loads CUDA 8.0
prepend-path     PATH /usr/local/cuda-8.0/bin
prepend-path     LD_LIBRARY_PATH /usr/local/cuda-8.0/lib64
prepend-path     INCLUDE /usr/local/cuda-8.0/include
prepend-path     CPATH /usr/local/cuda-8.0/lib64/include
prepend-path     LIBRARY_PATH /usr/local/cuda-8.0/lib64
prepend-path     LD_RUN_PATH /usr/local/cuda-8.0/lib64
-------------------------------------------------------------------

A successful quick run on a K3 file (with an old dummy template) looks like:

Gautomatch_v0.56_sm62_cu8.0 --apixM 0.87 --diameter 250 --cs 2.7 --apixT 0.91 -T class_averages.mrcs FoilHole_9183584_Data_9184220_9184222_20191010_0449_fractions.mrc

To get more information about crashes, you can try the strace command and look at the lines before the crash:

strace Gautomatch_v0.56_sm62_cu8.0 --apixM 0.87 --diameter 250 --cs 2.7 --apixT 0.91 -T class_averages.mrcs FoilHole_9183584_Data_9184220_9184222_20191010_0449_fractions.mrc

We do not use Gautomatch very often anymore, so I did not test the Scipion integration. Maybe try deep-learning pickers like crYOLO or Topaz?

Cheers,
Wolfgang

On Tue, Jan 7, 2020, 19:46 Maria Maldonado <mm...@uc...> wrote:

Hi Grigory,

Thank you for your message, and happy new year.

I used gid 0 and the syntax error was gone; however, I still cannot get the 0.56 version to run. I tried running every single 0.56 binary outside of Scipion and none of them worked. Some crash during normalization (segmentation fault, core aborted); some did the normalization and ran through the templates before crashing, but even so, after the templates they all give a segmentation fault (core aborted) message. Do you know what could be going on?

Thanks again,
Maria

From: Grigory Sharov <sha...@gm...>
Date: Sunday, December 29, 2019 at 3:50 AM
To: Maria Maldonado <mm...@uc...>
Subject: Re: [scipion] Issues with Gautomatch on Scipion

Hello Maria,

Sorry for the delayed reply. It looks like the GPU list is not parsed properly. What did you put in the GPUs field in the protocol form? It accepts a string like "0 1 2" for GPU ids 0, 1, 2. You can try to simply put 0 there.

Best regards,
Grigory

On Thu, Dec 19, 2019 at 8:24 PM Maria Maldonado <mm...@uc...> wrote:

The error with MPI=1 was the following.
(I am still trying 0.56 because I want to be able to use the de-icer option; I have already imported the results of 0.53 into Scipion.)

Thanks,
Maria Maldonado

RUNNING PROTOCOL -----------------
HostName: unicorn.mcb.ucdavis.edu
PID: 104269
Scipion: v2.0 (2019-04-23) Diocletian
currentDir: /media/raid/lettslab/EM_processing/mmaldo/20190903_MungBean_SC_B_UCFSKrios/CIp_processing/CIstar_scipion
workingDir: Runs/001122_ProtGautomatch
runMode: Restart
MPI: 1
threads: 1
Starting at step: 1
Running steps

STARTED: convertInputStep, step 1 2019-12-19 12:16:15.438819
FINISHED: convertInputStep, step 1 2019-12-19 12:16:15.579415
STARTED: pickMicrographStep, step 2 2019-12-19 12:16:15.685537
Picking micrograph: Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_0001.mrc

/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1

/bin/sh: -c: line 0: syntax error near unexpected token `('
/bin/sh: -c: line 0: `/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1'

Traceback (most recent call last): File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 186, in run self._run() File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 237, in _run resultFiles = self._runFunc() File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 233, in _runFunc return self._func(*self._args) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/em/protocol/protocol_particles_picking.py", line 242, in pickMicrographStep self._pickMicrograph(mic, *args) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", line 327, in _pickMicrograph self._pickMicrographList([mic], args) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", line 351, in _pickMicrographList runJob=self.runJob) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/__init__.py", line 103, in runGautomatch runJob(cls.getProgram(), args, env=environ) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 1311, in runJob self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line 70, in runJob env=env, cwd=cwd, gpuList=self.getGpuList()) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", line 52, in runJob return runCommand(command, env, cwd) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", line 67, in runCommand env=env, cwd=cwd) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/subprocess.py", line 190, in check_call raise CalledProcessError(retcode, cmd)

CalledProcessError: Command '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status 1
Protocol failed: Command '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status 1
FAILED: pickMicrographStep, step 2 2019-12-19 12:16:15.784806
------------------- PROTOCOL FAILED (DONE 2/8369)

From: Gregory Sharov <sha...@gm...>
Date: Thursday, December 19, 2019 at 12:11 PM
To: Maria Maldonado <mm...@uc...>
Cc: Pablo Conesa <pc...@cn...>, "sc...@cn..." <sc...@cn...>
Subject: Re: [scipion] Issues with Gautomatch on Scipion

Hi,

I'm afraid MPI is hiding the errors behind it; you have to re-run with MPI=1 to see the errors. And yes, try to put 0.53, sm20, cu7.5 into the config file and run it. If it worked outside Scipion, it should work inside as well.

Best regards,
Grigory

On Thu, Dec 19, 2019 at 8:06 PM Maria Maldonado <mm...@uc...> wrote:

Hi Grigory,

Thank you for your comments. Yes, I'm using GPU RTX 2080. Gautomatch outside Scipion was 0.53, sm20, cu7.5. Thanks for letting me know it is so sensitive to the CUDA/CC/GPU combination. I will try some more binaries and move on to a different picker.

The stdout file for the last few micrographs is below. Would you mind highlighting where the error is described, so I can learn to recognize it next time? Sorry for the basic question. I appreciate your help.
Best wishes,
Maria

STARTED: pickMicrographStep, step 131 2019-12-18 16:48:32.258164
STARTED: pickMicrographStep, step 132
Picking micrograph: Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_0145.mrc 2019-12-18 16:48:32.302543
Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill to override scipion CUDA_LIB if different
Sending environment to 3
Picking micrograph: Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_0146.mrc
Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill to override scipion CUDA_LIB if different
Sending environment to 2
FINISHED: pickMicrographStep, step 130 2019-12-18 16:48:32.025812
STARTED: pickMicrographStep, step 133 2019-12-18 16:48:32.441198
Picking micrograph: Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_0147.mrc
Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill to override scipion CUDA_LIB if different
Sending environment to 1
Sending command to 3: /programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0145.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 2 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1
Sending command to 2: /programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 1 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1
Sending command to 1: /programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0147.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1

Traceback (most recent call last): File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line 151, in run self.step._run() # not self.step.run() , to avoid race conditions File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 237, in _run resultFiles = self._runFunc() File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 233, in _runFunc return self._func(*self._args) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/em/protocol/protocol_particles_picking.py", line 242, in pickMicrographStep self._pickMicrograph(mic, *args) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", line 327, in _pickMicrograph self._pickMicrographList([mic], args) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", line 351, in _pickMicrographList runJob=self.runJob) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/__init__.py", line 103, in runGautomatch runJob(cls.getProgram(), args, env=environ) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 1311, in runJob self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line 353, in runJob gpuList=self.getGpuList()) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/mpi.py", line 88, in runJobMPI send(command, mpiComm, mpiDest, TAG_RUN_JOB+mpiDest) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/mpi.py", line 73, in send raise Exception(str(result))

Exception: Command '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 1 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status -6
Protocol failed: Command '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 1 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status -6
FAILED: pickMicrographStep, step 132 2019-12-18 16:50:00.595338
------------------- PROTOCOL FAILED (DONE 131/8369)

From: Gregory Sharov <sha...@gm...>
Date: Thursday, December 19, 2019 at 11:13 AM
To: Maria Maldonado <mm...@uc...>
Cc: Pablo Conesa <pc...@cn...>, "sc...@cn..." <sc...@cn...>
Subject: Re: [scipion] Issues with Gautomatch on Scipion

Hi Maria,

1) Which GPUs are you using? RTX2080?
2) You said earlier that you managed to run Gautomatch outside Scipion? Which cuda / cc version was it? You can download all kinds of binaries from https://www.mrc-lmb.cam.ac.uk/kzhang/Gautomatch/Gautomatch_v0.56/ and put them into Scipion.
3) Exit status is pretty useless, you will have to look at the stdout file or even run it from the cmdline to see what's going on.
4) You can use any other reference-based picker, e.g. Relion, and forget the painful Gautomatch. It is very cuda/cc/gpu sensitive and not updated anymore for new GPUs.

Merry Christmas!

Best regards,
Grigory

On Thu, Dec 19, 2019 at 4:55 PM Maria Maldonado <mm...@uc...> wrote:

Hi Grigory,

Sorry for the delayed reply, the GPUs were being used.
I tried Gautomatch_v0.56 from the Scipion directory with binaries sm62 and sm30 (I don't have an sm20 for this version) and the program failed with the same error as before:

CalledProcessError: Command '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status -11
Protocol failed: Command '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status -11
FAILED: pickMicrographStep, step 2

When I tried Gautomatch_v0.56 sm62 from the SBgrid directory (i.e. from outside Scipion) with slightly different parameters, it ran for 130 micrographs, but then failed with the following error:

ERROR: Protocol failed: Command '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 1 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status -6

Config:

# -*- conf -*-
# All relative paths will have $SCIPION_HOME as their root.
[DIRS_LOCAL]
SCIPION_USER_DATA = ~/ScipionUserData
SCIPION_LOGS = %(SCIPION_USER_DATA)s/logs
SCIPION_TMP = %(SCIPION_USER_DATA)s/tmp
[DIRS_GLOBAL]
SCIPION_TESTS = data/tests
SCIPION_SOFTWARE = software
[REMOTE]
SCIPION_URL = http://scipion.cnb.csic.es/downloads/scipion
SCIPION_URL_SOFTWARE = %(SCIPION_URL)s/software
SCIPION_URL_TESTDATA = %(SCIPION_URL)s/data/tests
[PACKAGES]
EM_ROOT = software/em
CRYOEF_HOME = %(EM_ROOT)s/cryoEF-1.1.0
CCP4_HOME = /programs/x86_64-linux/ccp4/7.0/ccp4-7.0
EMAN2DIR = %(EM_ROOT)s/eman-2.21/
ETHAN_HOME = %(EM_ROOT)s/ethan-1.2
GAUTOMATCH_HOME = /programs/x86_64-linux/gautomatch/0.56
# replace '_cuX.Y_' to certain installed cuda
GAUTOMATCH = Gautomatch_v0.56_sm62_cu8.0
# GAUTOMATCH_CUDA_LIB = None # Fill to override scipion CUDA_LIB if different

Do you know what could be going on? What is "returned non-zero exit status -6" and "-11"? Do you have further suggestions for how to try to solve this?

Thank you very much for your help,
Maria

From: Gregory Sharov <sha...@gm...>
Date: Thursday, December 12, 2019 at 10:40 AM
To: Maria Maldonado <mm...@uc...>
Cc: Pablo Conesa <pc...@cn...>, "sc...@cn..." <sc...@cn...>
Subject: Re: [scipion] Issues with Gautomatch on Scipion

Hi Maria,

For RTX 2080 you need CC=7.5. You could try Gautomatch_v0.56_sm62_cu8.0 (https://www.mrc-lmb.cam.ac.uk/kzhang/Gautomatch/Gautomatch_v0.56/Gautomatch_v0.56_sm62_cu8.0) and it might work on this card. However, you said that Gautomatch-v0.53_sm_20_cu7.5_x86_64 works for you outside Scipion? Was it also on RTX 2080? If it works, you can use this binary with the latest plugin version (just adapt the config file accordingly). Let us know how it goes.

Best regards,
Grigory

On Thu, Dec 12, 2019 at 5:10 PM Maria Maldonado <mm...@uc...> wrote:

Pablo,

I believe we have GPU RTX 2080. Computing capability should be 7.5 according to the NVIDIA website. So I should use the sm value closest to 75, is that right?

Thanks again,
Maria

From: Maria Maldonado <mm...@uc...>
Date: Thursday, December 12, 2019 at 8:47 AM
To: Pablo Conesa <pc...@cn...>, Gregory Sharov <sha...@gm...>
Cc: "sc...@cn..." <sc...@cn...>
Subject: Re: [scipion] Issues with Gautomatch on Scipion

Hi Pablo,

Thanks for the suggestions. This is my current nvidia-smi output:

NVIDIA-SMI 440.36       Driver Version: 440.36       CUDA Version: 10.2
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:18:00.0 Off |                  N/A |
| 53%   84C    P2   183W / 250W |  8720MiB / 11019MiB  |     73%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 208...  Off  | 00000000:3B:00.0 Off |                  N/A |
| 38%   61C    P2   116W / 250W |  6732MiB / 11019MiB  |     57%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce RTX 208...  Off  | 00000000:86:00.0 Off |                  N/A |
| 39%   63C    P2   203W / 250W |  6668MiB / 11019MiB  |     59%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce RTX 208...  Off  | 00000000:AF:00.0 Off |                  N/A |
| 39%   63C    P2   170W / 250W |  6668MiB / 11019MiB  |     56%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0    254007      C   ...ms/x86_64-linux/cryolo/1.5.3/bin/python  8707MiB |
|    1    254007      C   ...ms/x86_64-linux/cryolo/1.5.3/bin/python  6719MiB |
|    2    254007      C   ...ms/x86_64-linux/cryolo/1.5.3/bin/python  6655MiB |
|    3    254007      C   ...ms/x86_64-linux/cryolo/1.5.3/bin/python  6655MiB |
+-----------------------------------------------------------------------------+

Thanks,
Maria

From: Pablo Conesa <pc...@cn...>
Date: Wednesday, December 11, 2019 at 11:38 PM
To: Maria Maldonado <mm...@uc...>, Gregory Sharov <sha...@gm...>
Cc: "sc...@cn..." <sc...@cn...>
Subject: Re: [scipion] Issues with Gautomatch on Scipion

Hi,

I'm not sure about this, but in some cases the architecture of the GPU needs to match the binaries. You are using Gautomatch_v0.56_sm30_cu8.0, and sm30 is a low value for current GPUs. I know some users had to match the sm value to the closest GPU compute capability, but I think this was for Gctf. Maybe it is worth trying? In that case, could you please send the output of nvidia-smi so we can see which GPUs you have.

On 11/12/19 20:04, Maria Maldonado wrote:

Grigory,

Please find below the config and error message from when I run the new version. Many thanks,
Maria

Run.stdout

RUNNING PROTOCOL -----------------
HostName: unicorn.mcb.ucdavis.edu
PID: 184221
Scipion: v2.0 (2019-04-23) Diocletian
currentDir: /media/raid/lettslab/EM_processing/mmaldo/20190903_MungBean_SC_B_UCFSKrios/CIp_processing/CIstar_scipion
workingDir: Runs/001122_ProtGautomatch
runMode: Restart
MPI: 1
threads: 1
Starting at step: 1
Running steps

STARTED: convertInputStep, step 1 2019-12-11 10:32:55.873136
FINISHED: convertInputStep, step 1 2019-12-11 10:32:55.960758
STARTED: pickMicrographStep, step 2 2019-12-11 10:32:56.072227
Picking micrograph: Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_0001.mrc
Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill to override scipion CUDA_LIB if different

/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 0 --T_norm_type 1 --do_bandpass 1

***************************************************************************************************
User input parameters: --T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.830 --ang_step 5.00 --diameter 300.00 --lp 30.000 --hp 1000.000 --gid 0 --apixT 0.830 --cc_cutoff 0.100 --speed 2 --boxsize 200 --min_dist 300.00 --lsigma_cutoff 1.200 --lsigma_D 100.000 --lave_max 1.000 --lave_min -1.000 --lave_D 100.000 --detect_ice 0 --T_norm_type 1 --do_bandpass 1

All parameters to be used:
Basic options: --T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.830 --diameter 300.00 --apixT 0.830
Additional options: --ang_step 5.00 --speed 2 --boxsize 200 --min_dist 300.00 --cc_cutoff 0.100 --lsigma_cutoff 1.200 --lsigma_D 100.000 --lave_min -1.000 --lave_max 1.000 --lave_D 100.000 --dont_invertT 0 --lp 30.00 --hp 1000.00 --do_pre_filter 0 --pre_lp 8.00 --pre_hp 1000.00 --detect_ice 0 --T_norm_type 1 --do_bandpass 1
CTF options: --do_ctf 0 --do_local_ctf 0 --kV 300.00 --cs 2.70 --ac 0.10
I/O options: --write_ccmax_mic 0 --write_pf_mic 0 --write_pref_mic 0 --write_bg_mic 0 --write_bgfree_mic 0 --write_lsigma_mic 0 --write_mic_mask 0 --write_star 1 --do_unfinished 0 --write_rejected_box 1 --exclusive_picking 0 --excluded_suffix NULL --mask_excluded 0 --global_excluded_box NULL --extract_raw 0 --extract_pf 0 --gid 0

All 1 files to be used: Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc
Opening Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc for initial test and preperation .......
>>>>>>TIME<<<<<< PREPARATION: 0.313178s
---------------------------------------------------------------------------------------------------------------------------------
Reading File Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc ...
>>>>>>TIME<<<<<< READING: 0.076049s
Normalizing the micrograph Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc ...
>>>>>>TIME<<<<<< NORMALIZATION: 0.964321s
Template 001 ........................................................................
Template 002 ........................................................................
Template 003 ........................................................................
Template 004 ........................................................................
Template 005 ........................................................................
Template 006 ........................................................................
Template 007 ........................................................................
Template 008 ........................................................................
Template 009 ........................................................................
Template 010 ..................................................

Traceback (most recent call last): File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 186, in run self._run() File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 237, in _run resultFiles = self._runFunc() File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 233, in _runFunc return self._func(*self._args) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/em/protocol/protocol_particles_picking.py", line 242, in pickMicrographStep self._pickMicrograph(mic, *args) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", line 327, in _pickMicrograph self._pickMicrographList([mic], args) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", line 351, in _pickMicrographList runJob=self.runJob) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/__init__.py", line 103, in runGautomatch runJob(cls.getProgram(), args, env=environ) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 1311, in runJob self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line 70, in runJob env=env, cwd=cwd, gpuList=self.getGpuList()) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", line 52, in runJob return runCommand(command, env, cwd) File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", line 67, in runCommand env=env, cwd=cwd) File "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/subprocess.py", line 190, in check_call raise CalledProcessError(retcode, cmd)

CalledProcessError: Command '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status -11
Protocol failed: Command '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit status -11
FAILED: pickMicrographStep, step 2 2019-12-11 10:32:59.514441
------------------- PROTOCOL FAILED (DONE 2/8369)

Config

# -*- conf -*-
# All relative paths will have $SCIPION_HOME as their root.
[DIRS_LOCAL]
SCIPION_USER_DATA = ~/ScipionUserData
SCIPION_LOGS = %(SCIPION_USER_DATA)s/logs
SCIPION_TMP = %(SCIPION_USER_DATA)s/tmp
[DIRS_GLOBAL]
SCIPION_TESTS = data/tests
SCIPION_SOFTWARE = software
[REMOTE]
SCIPION_URL = http://scipion.cnb.csic.es/downloads/scipion
SCIPION_URL_SOFTWARE = %(SCIPION_URL)s/software
SCIPION_URL_TESTDATA = %(SCIPION_URL)s/data/tests
[PACKAGES]
EM_ROOT = software/em
CRYOEF_HOME = %(EM_ROOT)s/cryoEF-1.1.0
CCP4_HOME = /programs/x86_64-linux/ccp4/7.0/ccp4-7.0
EMAN2DIR = %(EM_ROOT)s/eman-2.21/
ETHAN_HOME = %(EM_ROOT)s/ethan-1.2
GAUTOMATCH_HOME = %(EM_ROOT)s/gautomatch-0.56
# replace '_cuX.Y_' to certain installed cuda
GAUTO
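Two recurring puzzles in the thread above can be illustrated with a short sketch: why the shell reported "syntax error near unexpected token `('", and what the negative exit statuses -6 and -11 mean. The snippet below is plain Python 3 for illustration only; the command template and dictionary key are hypothetical stand-ins for what the Scipion protocol builds internally, not its actual code:

    import signal

    # The picker command carries a %(GPU)s placeholder that is normally filled
    # in from the protocol's "GPUs" field before the string reaches /bin/sh
    # (hypothetical template, not the real protocol code):
    template = "Gautomatch_v0.56_sm62_cu8.0 mic.mrc --gid %(GPU)s"
    print(template % {"GPU": "0"})  # ... --gid 0  (what the shell should see)
    # If the GPU list is not parsed, the literal "%(GPU)s" reaches /bin/sh and
    # the "(" triggers: syntax error near unexpected token `('

    # Negative exit statuses reported through Python's subprocess module mean
    # the child process was killed by a signal: -N corresponds to signal N.
    print(signal.Signals(6).name)   # SIGABRT, the "core aborted" crashes
    print(signal.Signals(11).name)  # SIGSEGV, the segmentation faults

This matches the advice in the thread: once a plain GPU id such as 0 is substituted, the shell error disappears, and the remaining -6/-11 statuses point to the Gautomatch binary itself crashing (SIGABRT/SIGSEGV) rather than to a problem in Scipion.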
From: 范宏成 <187...@16...> - 2020-01-08 07:55:16
Dear Jose,

I got it. Thank you very much!

Fan
范宏成
m18...@16...
From: Jose M. C. <ca...@cn...> - 2020-01-08 07:35:33
Just jumping into it: Erney, the first author of this paper, will contact you ASAP (he was coming back from spending Christmas with his family in Cuba).

We are extremely interested in all our methods being widely used, and clearly the situation as it is is far from perfect..... but Erney will contact you, and will also address the whole list with a clear way to use his program.

My apologies for the current situation.

Wbw.. JM

--
Prof. Jose-Maria Carazo
Biocomputing Unit, Head, CNB-CSIC
Spanish National Center for Biotechnology
Darwin 3, Universidad Autonoma de Madrid
28049 Madrid, Spain
Cell: +34639197980
From: 范宏成 <187...@16...> - 2020-01-08 02:02:29
Hi Pablo,

I'd like to go that way. Thank you very much!

范宏成
m18...@16...

On 1/7/2020 15:36, Pablo Conesa <pc...@cn...> wrote:

Thanks Fan. Erney just told me that deepres is not released in Scipion 2.0 but can be accessed in the devel "mode". I guess any of the Xmipp guys will be able to help you if you are willing to go that way.

Pablo Conesa - Madrid Scipion team

On 5/1/20 16:25, 范宏成 wrote:

Hi Pablo,

I can see Xmipp protocols in my Scipion except for deepRes. Please see attachment.

Fan

At 2020-01-05 23:04:56, pc...@cn... wrote:

Hi, can you check what is listed when you select the view "all" in the top right drop-down? Can you see any Xmipp protocol?

On 2 Jan 2020 17:49, 范宏成 <187...@16...> wrote:

Dear Jose,

Regarding DeepRes, I didn't find it in my Scipion, although I used Ctrl+F and searched for "deepres". I guess maybe my Scipion is not the developer version. So I tried to reinstall Scipion (devel mode) and used the plugin manager to install Xmipp (v3.19.04 source code). I found that deepRes exists in my newly installed Scipion (scipion/software/em/xmipp/bin/xmipp_deepRes_resolution and software/em/xmipp/bin/models/deepRes). However, I still cannot find deepRes in the Scipion GUI. How can I solve this problem? Thank you!

Fan

On 1/2/2020 17:46, JOSE LUIS VILAS PRIETO <jl...@cn...> wrote:

Dear Fan,

Thanks for getting in touch with us.

The input of LocalDeblur can be any local resolution map; it means that you can compute the local resolution with MonoRes, ResMap, blocres, or deepres. No one of them is better than the others.

In the case of MonoRes, as well as ResMap, it is mandatory that your map contains noise; if it is noise-free, use the two halves as input. That should always be the best way of estimating the local resolution.

Also, deepres is a good choice, because it overcomes the noise problem, and you can compute the local resolution map of a noise-free density map.

Deepres belongs to the Xmipp package and you should be able to find it in the analysis tab in the left panel. Alternatively, you can press Ctrl+F and search by writing the name of the protocol, that is, deepres.

Regarding the micelle, you can create a mask and apply it to remove it.

Anything you need, please let us know.

Kind regards,
José Luis Vilas

Quoting 范宏成 <187...@16...>:

Dear Colleagues,

I have some questions about local resolution estimation and LocalDeblur. Looking forward to your response. Thank you!

1. MonoRes. Can MonoRes estimate the local resolution when I do focused refinement (masked refinement) using Relion? I mean there may be no region left to estimate the noise. In that condition, I have to use the Relion local resolution protocol to get the local resolution of the focused map.

2. LocalDeblur. Can LocalDeblur accept the Relion local resolution map, or only MonoRes, ResMap, and blocres?

3. DeepRes. I recently read a paper about DeepRes and find it very useful for my project. I didn't find the DeepRes protocol in my Scipion. My Scipion version is v2.0 (2019-04-23) Diocletian on a CentOS 7 system (binary version). How can I use this protocol?

4. DeepRes for membrane proteins. The DeepRes paper says it is very important to mask out the micelle because DeepRes was only trained with proteins. If I don't mask out the micelle, what bad consequences will happen? By the way, how can I effectively and carefully mask out the micelle? In my approach, I deleted the micelle using particle subtraction and ran auto-refinement again in Relion. In order not to delete signal from the protein, I used a soft mask, so my membrane protein still has a small amount of micelle around the TM domain. I'm afraid this residual micelle may cause bad consequences when using DeepRes. Does someone have a better way to delete the micelle in membrane proteins?

Fan Hongcheng
From: Grigory S. <sha...@gm...> - 2020-01-07 19:55:29
|
Hi Sorry to hear that. I'm afraid gautomatch and gctf do not work well on new Cuda or gpus or k3 images, and it doesn't look like their developer plans to update his software. So you might switch to other pickers. On Tue, Jan 7, 2020, 19:46 Maria Maldonado <mm...@uc...> wrote: > Hi Grigory, > > > > Thank you for your message, and happy new year. > > > > I used gid 0 and the syntax error was gone, however, I still cannot get > the 0.56 version to run. I tried every single 0.56 binary ran outside > of Scipion and none of them worked. Some crash during normalization > (segmentation fault core aborted), some did the normalization and ran > through the templates before crashing, but even so, after the templates > they all give a segmentation fault (core aborted) message. Do you know what > could be going on? > > > > Thanks again, > > Maria > > > > > > *From: *Grigory Sharov <sha...@gm...> > *Date: *Sunday, December 29, 2019 at 3:50 AM > *To: *Maria Maldonado <mm...@uc...> > *Subject: *Re: [scipion] Issues with Gautomatch on Scipion > > > > Hello Maria, > > > > sorry for the delayed reply. It looks like GPU list is not parsed > properly. What did you put in a GPUs field in the protocol form? It accepts > a string like "0 1 2" for GPU ids 0,1,2. You can try to put simply 0 there > > > > > Best regards, > Grigory > > > > > -------------------------------------------------------------------------------- > > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267228 <+44%201223%20267228> > > e-mail: gs...@mr... > > > > > > On Thu, Dec 19, 2019 at 8:24 PM Maria Maldonado <mm...@uc...> > wrote: > > The error with MPI=1 was the following. (I am still trying 0.56 because I > want to be able to use the deicer option; I have already imported the > results of 0.53 into Scipion) > > > > Thanks, > > > > Maria Maldonado > > > > RUNNING PROTOCOL ----------------- > > HostName: unicorn.mcb.ucdavis.edu > > PID: 104269 > > Scipion: v2.0 (2019-04-23) Diocletian > > currentDir: > /media/raid/lettslab/EM_processing/mmaldo/20190903_MungBean_SC_B_UCFSKrios/CIp_processing/CIstar_scipion > > workingDir: Runs/001122_ProtGautomatch > > runMode: Restart > > MPI: 1 > > threads: 1 > > Starting at step: 1 > > Running steps > > STARTED: convertInputStep, step 1 > > 2019-12-19 12:16:15.438819 > > FINISHED: convertInputStep, step 1 > > 2019-12-19 12:16:15.579415 > > STARTED: pickMicrographStep, step 2 > > 2019-12-19 12:16:15.685537 > > Picking micrograph: > Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_0001.mrc > > /programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 1 --T_norm_type 1 --do_bandpass 1 > > /bin/sh: -c: line 0: syntax error near unexpected token `(' > > /bin/sh: -c: line 0: > `/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 > --cc_cutoff 0.10 --speed 2 
--boxsize 250 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 1 --T_norm_type 1 --do_bandpass 1' > > Traceback (most recent call last): > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 186, in run > > self._run() > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 237, in _run > > resultFiles = self._runFunc() > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 233, in _runFunc > > return self._func(*self._args) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/em/protocol/protocol_particles_picking.py", > line 242, in pickMicrographStep > > self._pickMicrograph(mic, *args) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", > line 327, in _pickMicrograph > > self._pickMicrographList([mic], args) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", > line 351, in _pickMicrographList > > runJob=self.runJob) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/__init__.py", > line 103, in runGautomatch > > runJob(cls.getProgram(), args, env=environ) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 1311, in runJob > > self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line > 70, in runJob > > env=env, cwd=cwd, gpuList=self.getGpuList()) > > File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", > line 52, in runJob > > return runCommand(command, env, cwd) > > File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", > line 67, in runCommand > > env=env, cwd=cwd) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/subprocess.py", > line 190, in check_call > > raise CalledProcessError(retcode, cmd) > > CalledProcessError: Command > '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit > status 1 > > Protocol failed: Command > '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid %(GPU)s --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit > status 1 > > FAILED: pickMicrographStep, step 2 > > 2019-12-19 12:16:15.784806 > > ------------------- PROTOCOL FAILED (DONE 2/8369) > > > > *From: *Gregory Sharov <sha...@gm...> > *Date: *Thursday, December 19, 2019 at 12:11 PM > *To: *Maria Maldonado <mm...@uc...> > *Cc: *Pablo Conesa <pc...@cn...>, "sc...@cn..." 
< > sc...@cn...> > *Subject: *Re: [scipion] Issues with Gautomatch on Scipion > > > > Hi, > > > > I'm afraid MPI is hiding the errors behind, you have to re-run it with > MPI=1 to see the errors. And yes, try to put 0.53, sm20, cu7.5 into config > file and run it. If it worked outside Scipion it should work inside as well. > > > Best regards, > Grigory > > > > > -------------------------------------------------------------------------------- > > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267228 <+44%201223%20267228> > > e-mail: gs...@mr... > > > > > > On Thu, Dec 19, 2019 at 8:06 PM Maria Maldonado <mm...@uc...> > wrote: > > Hi Grigory, > > > > Thank you for your comments. Yes, I’m using GPU RTX2080. Gautomatch > outside Scipion was 0.53, sm20, cu7.5. Thanks for letting me know it is so > sensitive to the cuda/cc/gpu. I will try some more binaries and move on to > a different picker. > > > > The stdout file for the last few micrographs is below. Would you mind > highlighting were the error is described, so I can learn to recognize it > next time? Sorry for the basic question. I appreciate your help. > > > > Best wishes, > > Maria > > > > > > STARTED: pickMicrographStep, step 131 > > 2019-12-18 16:48:32.258164 > > STARTED: pickMicrographStep, step 132 > > Picking micrograph: > Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_01 > 45.mrc > > 2019-12-18 16:48:32.302543 > > Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill > to override scipion CUDA_LIB if different > > Sending environment to 3 > > Picking micrograph: > Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_01 > 46.mrc > > Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill > to override scipion CUDA_LIB if different > > Sending environment to 2 > > FINISHED: pickMicrographStep, step 130 > > 2019-12-18 16:48:32.025812 > > STARTED: pickMicrographStep, step 133 > > 2019-12-18 16:48:32.441198 > > Picking micrograph: > Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_01 > 47.mrc > > Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill > to override scipion CUDA_LIB if different > > Sending environment to 1 > > Sending command to 3: > /programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62 > _cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0145.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diame ter 300 --lp 30 --hp > 1000 --gid 2 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize > 25 0 --min_dist 300 --lsigma_cutoff 1.20 > --lsigma_D 100 --lave_max 1.00 --lave_min > -1.00 --lave_D 100 --detect_ice 1 > --T_norm_type 1 --do_bandpass 1 > > Sending command to 2: > /programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62 > _cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM > 0.83 --ang_step 5 --diame ter 300 --lp 30 > --hp 1000 --gid 1 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize > 25 0 --min_dist 300 --lsigma_cutoff 1.20 > --lsigma_D 100 --lave_max 1.00 --lave_min > -1.00 --lave_D 100 --detect_ice 1 > --T_norm_type 1 --do_bandpass 1 > > Sending command to 1: > /programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62 > _cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0147.mrc > -T 
Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diame ter 300 --lp 30 --hp > 1000 --gid 0 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize > 25 0 --min_dist 300 --lsigma_cutoff 1.20 > --lsigma_D 100 --lave_max 1.00 --lave_min > -1.00 --lave_D 100 --detect_ice 1 > --T_norm_type 1 --do_bandpass 1 > > Traceback (most recent call last): > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line > 151, in run > > self.step._run() # not self.step.run() , to avoid race conditions > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 237, in _run > > resultFiles = self._runFunc() > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 233, in _runFunc > > return self._func(*self._args) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/em/protocol/protocol_particles_p > icking.py", line 242, in pickMicrographStep > > self._pickMicrograph(mic, *args) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautom > atch/protocols/protocol_gautomatch.py", line 327, in _pickMicrograph > > self._pickMicrographList([mic], args) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautom > atch/protocols/protocol_gautomatch.py", line 351, in _pickMicrographList > > runJob=self.runJob) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautom > atch/__init__.py", line 103, in runGautomatch > > runJob(cls.getProgram(), args, env=environ) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 1311 , in runJob > > self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line > 353, in runJob > > gpuList=self.getGpuList()) > > File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/mpi.py", line > 88, in runJo bMPI > > send(command, mpiComm, mpiDest, TAG_RUN_JOB+mpiDest) > > File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/mpi.py", line > 73, in send > > raise Exception(str(result)) > > Exception: Command > '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_c > u8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146.mrc > -T > Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 --ang_step 5 > --diamete r 300 --lp 30 --hp 1000 --gid 1 > --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsize > 250 --min_dist 300 --lsigma_cutoff 1.20 > --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 > -- lave_D 100 --detect_ice 1 --T_norm_type > 1 --do_bandpass 1' returned non-zero exit > stat us -6 > > Protocol failed: Command > '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_ > sm62_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146. 
> mrc -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --d iameter 300 --lp 30 --hp > 1000 --gid 1 --apixT 0.83 --cc_cutoff 0.10 --speed 2 --boxsiz > e 250 --min_dist 300 --lsigma_cutoff 1.20 > --lsigma_D 100 --lave_max 1.00 --lave_min > -1 .00 --lave_D 100 --detect_ice 1 > --T_norm_type 1 --do_bandpass 1' returned non-zero > exi t status -6 > > FAILED: pickMicrographStep, step 132 > > 2019-12-18 16:50:00.595338 > > ------------------- PROTOCOL FAILED (DONE 131/8369) > > > > > > > > > > > > > > *From: *Gregory Sharov <sha...@gm...> > *Date: *Thursday, December 19, 2019 at 11:13 AM > *To: *Maria Maldonado <mm...@uc...> > *Cc: *Pablo Conesa <pc...@cn...>, "sc...@cn..." < > sc...@cn...> > *Subject: *Re: [scipion] Issues with Gautomatch on Scipion > > > > Hi Maria, > > > > 1) Which GPUs are you using? RTX2080? > > > > 2) You said earlier that you managed to run Gautomatch outside Scipion? > Which cuda / cc version was it? You can download all kinds of binaries > from https://www.mrc-lmb.cam.ac.uk/kzhang/Gautomatch/Gautomatch_v0.56/ and > put them into scipion > > > > 3) Exit status is pretty useless, you will have to look at stdout file or > even run it from cmdline to see what's going on. > > > > 4) You can use any other reference-based picker e.g. Relion and forget the > painful Gautomatch. It is very cuda/cc/gpu sensitive and not updated > anymore for new GPUs. > > > > Merry Christmas! > > > Best regards, > Grigory > > > > > -------------------------------------------------------------------------------- > > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267228 <+44%201223%20267228> > > e-mail: gs...@mr... > > > > > > On Thu, Dec 19, 2019 at 4:55 PM Maria Maldonado <mm...@uc...> > wrote: > > Hi Grigory, > > > > Sorry for the delayed reply, the GPUs were being used. 
> > I tried Gautomatch_v0.56 from the Scipion directory with binaries sm62 and > sm30 (I don’t have an sm20 for this version) and the program failed with > the same error as before: > > CalledProcessError: Command > '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit > status -11 > > Protocol failed: Command > '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit > status -11 > > FAILED: pickMicrographStep, step 2) > > When I tried Gautomatch_v0.56 sm62 from the SBgrid directory (ie from > outside Scipion) with slightly different parameters, it ran for 130 > micrographs, but then failed with the following error: > > ERROR: Protocol failed: Command > '/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm62_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0146.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 1 --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 250 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 1 --T_norm_type 1 --do_bandpass 1' returned non-zero exit > status -6 > > Config: > > # -*- conf -*- > > # All relative paths will have $SCIPION_HOME as their root. > > [DIRS_LOCAL] > > SCIPION_USER_DATA = ~/ScipionUserData > > SCIPION_LOGS = %(SCIPION_USER_DATA)s/logs > > SCIPION_TMP = %(SCIPION_USER_DATA)s/tmp > > [DIRS_GLOBAL] > > SCIPION_TESTS = data/tests > > SCIPION_SOFTWARE = software > > [REMOTE] > > SCIPION_URL = http://scipion.cnb.csic.es/downloads/scipion > > SCIPION_URL_SOFTWARE = %(SCIPION_URL)s/software > > SCIPION_URL_TESTDATA = %(SCIPION_URL)s/data/tests > > [PACKAGES] > > EM_ROOT = software/em > > CRYOEF_HOME = %(EM_ROOT)s/cryoEF-1.1.0 > > CCP4_HOME = /programs/x86_64-linux/ccp4/7.0/ccp4-7.0 > > EMAN2DIR = %(EM_ROOT)s/eman-2.21/ > > ETHAN_HOME = %(EM_ROOT)s/ethan-1.2 > > GAUTOMATCH_HOME = /programs/x86_64-linux/gautomatch/0.56 > > # replace '_cuX.Y_' to certain installed cuda > > GAUTOMATCH = Gautomatch_v0.56_sm62_cu8.0 > > # GAUTOMATCH_CUDA_LIB = None # Fill to override scipion CUDA_LIB if > different > > Do you know what could be going on? What is returned non-zero exit status > -6 and -11? Do you have further suggestions for how to try to solve this? > > Thank you very much for your help, > > Maria > > > > > > > > > > *From: *Gregory Sharov <sha...@gm...> > *Date: *Thursday, December 12, 2019 at 10:40 AM > *To: *Maria Maldonado <mm...@uc...> > *Cc: *Pablo Conesa <pc...@cn...>, "sc...@cn..." 
< > sc...@cn...> > *Subject: *Re: [scipion] Issues with Gautomatch on Scipion > > > > Hi Maria, > > > > for RTX 2080 you need CC=7.5. You could try Gautomatch_v0.56_sm62_cu8.0 > <https://www.mrc-lmb.cam.ac.uk/kzhang/Gautomatch/Gautomatch_v0.56/Gautomatch_v0.56_sm62_cu8.0> and > it might work on this card. > > However you said that Gautomatch-v0.53_sm_20_cu7.5_x86_64 works for you > outside scipion? Was it also on RTX 2080? If it works, you can use this > binary with the latest plugin version (just adapt the config file > accordingly). > > > > Let us know how it goes. > > > > Best regards, > Grigory > > > > > -------------------------------------------------------------------------------- > > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267228 <+44%201223%20267228> > > e-mail: gs...@mr... > > > > > > On Thu, Dec 12, 2019 at 5:10 PM Maria Maldonado <mm...@uc...> > wrote: > > Pablo, > > I believe we have GPU RTX 2080. Computing capability should be 7.5 > according to the nvidia website. So I should use the sm value closest to > 75, is that right? > > > > Thanks again, > > Maria > > > > > > > > > > *From: *Maria Maldonado <mm...@uc...> > *Date: *Thursday, December 12, 2019 at 8:47 AM > *To: *Pablo Conesa <pc...@cn...>, Gregory Sharov < > sha...@gm...> > *Cc: *"sc...@cn..." <sc...@cn...> > *Subject: *Re: [scipion] Issues with Gautomatch on Scipion > > > > Hi Pablo, > > > > Thanks for the suggestions. This is my current nvidia-smi output: > > > > NVIDIA-SMI 440.36 Driver Version: 440.36 CUDA Version: > 10.2 | > > > |-------------------------------+----------------------+----------------------+ > > | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. > ECC | > > | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute > M. | > > > |===============================+======================+======================| > > | 0 GeForce RTX 208... Off | 00000000:18:00.0 Off | > N/A | > > | 53% 84C P2 183W / 250W | 8720MiB / 11019MiB | 73% > Default | > > > +-------------------------------+----------------------+----------------------+ > > | 1 GeForce RTX 208... Off | 00000000:3B:00.0 Off | > N/A | > > | 38% 61C P2 116W / 250W | 6732MiB / 11019MiB | 57% > Default | > > > +-------------------------------+----------------------+----------------------+ > > | 2 GeForce RTX 208... Off | 00000000:86:00.0 Off | > N/A | > > | 39% 63C P2 203W / 250W | 6668MiB / 11019MiB | 59% > Default | > > > +-------------------------------+----------------------+----------------------+ > > | 3 GeForce RTX 208... 
Off | 00000000:AF:00.0 Off | > N/A | > > | 39% 63C P2 170W / 250W | 6668MiB / 11019MiB | 56% > Default | > > > +-------------------------------+----------------------+----------------------+ > > > > > +-----------------------------------------------------------------------------+ > > | Processes: GPU > Memory | > > | GPU PID Type Process name > Usage | > > > |=============================================================================| > > | 0 254007 C ...ms/x86_64-linux/cryolo/1.5.3/bin/python > 8707MiB | > > | 1 254007 C ...ms/x86_64-linux/cryolo/1.5.3/bin/python > 6719MiB | > > | 2 254007 C ...ms/x86_64-linux/cryolo/1.5.3/bin/python > 6655MiB | > > | 3 254007 C ...ms/x86_64-linux/cryolo/1.5.3/bin/python > 6655MiB | > > > > Thanks, > > Maria > > > > *From: *Pablo Conesa <pc...@cn...> > *Date: *Wednesday, December 11, 2019 at 11:38 PM > *To: *Maria Maldonado <mm...@uc...>, Gregory Sharov < > sha...@gm...> > *Cc: *"sc...@cn..." <sc...@cn...> > *Subject: *Re: [scipion] Issues with Gautomatch on Scipion > > > > Hi, I'm not sure about this but in some cases the architecture of the GPU > needs to match the binaries. > > You are using Gautomatch_v0.56_sm*30*_cu8.0 which is a low value for > current GPUs. > > I know some user had to match the sm value to the closest GPU compute > capability, but I think this was for gctf. > > May be is worth trying? > > In that case, could you please send the output of nvidia-smi to see which > GPUs you have. > > On 11/12/19 20:04, Maria Maldonado wrote: > > Grigory, > > > > Please find below the config and error message from when I run the new > version. > > > > Many thanks, > > Maria > > > > > > > > > > Run.stdout > > > > RUNNING PROTOCOL ----------------- > > HostName: unicorn.mcb.ucdavis.edu > > PID: 184221 > > Scipion: v2.0 (2019-04-23) Diocletian > > currentDir: > /media/raid/lettslab/EM_processing/mmaldo/20190903_MungBean_SC_B_UCFSKrios/CIp_processing/CIstar_scipion > > workingDir: Runs/001122_ProtGautomatch > > runMode: Restart > > MPI: 1 > > threads: 1 > > Starting at step: 1 > > Running steps > > STARTED: convertInputStep, step 1 > > 2019-12-11 10:32:55.873136 > > FINISHED: convertInputStep, step 1 > > 2019-12-11 10:32:55.960758 > > STARTED: pickMicrographStep, step 2 > > 2019-12-11 10:32:56.072227 > > Picking micrograph: > Runs/000073_ProtImportMicrographs/extra/ucdavis_20190903_SCmb_B_0001.mrc > > Some paths do not exist in: /programs/x86_64-linux/scipion/2.0/None # Fill > to override scipion CUDA_LIB if different > > /programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 0 --T_norm_type 1 --do_bandpass 1 > > > *************************************************************************************************** > > User input parameters: > > --T > Runs/001122_ProtGautomatch/extra/references.mrcs > > --apixM 0.830 > > --ang_step 5.00 > > --diameter 300.00 > > --lp 30.000 > > --hp 1000.000 > > --gid 0 > > --apixT 0.830 > > --cc_cutoff 0.100 > > --speed 2 > > --boxsize 200 > > --min_dist 300.00 > > --lsigma_cutoff 1.200 > > --lsigma_D 100.000 > > --lave_max 1.000 > > --lave_min -1.000 > > --lave_D 100.000 > > --detect_ice 0 > > --T_norm_type 1 > > 
--do_bandpass 1 > > > > All parameters to be used: > > Basic options: > > --T > Runs/001122_ProtGautomatch/extra/references.mrcs > > --apixM 0.830 > > --diameter 300.00 > > --apixT 0.830 > > Additional options: > > --ang_step 5.00 > > --speed 2 > > --boxsize 200 > > --min_dist 300.00 > > --cc_cutoff 0.100 > > --lsigma_cutoff 1.200 > > --lsigma_D 100.000 > > --lave_min -1.000 > > --lave_max 1.000 > > --lave_D 100.000 > > --dont_invertT 0 > > --lp 30.00 > > --hp 1000.00 > > --do_pre_filter 0 > > --pre_lp 8.00 > > --pre_hp 1000.00 > > --detect_ice 0 > > --T_norm_type 1 > > --do_bandpass 1 > > CTF options: > > --do_ctf 0 > > --do_local_ctf 0 > > --kV 300.00 > > --cs 2.70 > > --ac 0.10 > > I/O options: > > --write_ccmax_mic 0 > > --write_pf_mic 0 > > --write_pref_mic 0 > > --write_bg_mic 0 > > --write_bgfree_mic 0 > > --write_lsigma_mic 0 > > --write_mic_mask 0 > > --write_star 1 > > --do_unfinished 0 > > --write_rejected_box 1 > > --exclusive_picking 0 > > --excluded_suffix NULL > > --mask_excluded 0 > > --global_excluded_box NULL > > --extract_raw 0 > > --extract_pf 0 > > --gid 0 > > > > > > All 1 files to be used: > > > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > > Opening > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > for initial test and preperation ....... > > >>>>>>TIME<<<<<< PREPARATION: 0.313178s > > > > > --------------------------------------------------------------------------------------------------------------------------------- > > Reading File > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > ... > > >>>>>>TIME<<<<<< READING: 0.076049s > > Normalizing the micrograph > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > ... > > >>>>>>TIME<<<<<< NORMALIZATION: 0.964321s > > Template 001 > > ........................................................................ > > Template 002 > > ........................................................................ > > Template 003 > > ........................................................................ > > Template 004 > > ........................................................................ > > Template 005 > > ........................................................................ > > Template 006 > > ........................................................................ > > Template 007 > > ........................................................................ > > Template 008 > > ........................................................................ > > Template 009 > > ........................................................................ 
> > Template 010 > > ..................................................Traceback (most recent > call last): > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 186, in run > > self._run() > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 237, in _run > > resultFiles = self._runFunc() > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 233, in _runFunc > > return self._func(*self._args) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/em/protocol/protocol_particles_picking.py", > line 242, in pickMicrographStep > > self._pickMicrograph(mic, *args) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", > line 327, in _pickMicrograph > > self._pickMicrographList([mic], args) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/protocols/protocol_gautomatch.py", > line 351, in _pickMicrographList > > runJob=self.runJob) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/gautomatch/__init__.py", > line 103, in runGautomatch > > runJob(cls.getProgram(), args, env=environ) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line > 1311, in runJob > > self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) > > File > "/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/executor.py", line > 70, in runJob > > env=env, cwd=cwd, gpuList=self.getGpuList()) > > File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", > line 52, in runJob > > return runCommand(command, env, cwd) > > File "/programs/x86_64-linux/scipion/2.0/pyworkflow/utils/process.py", > line 67, in runCommand > > env=env, cwd=cwd) > > File > "/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/subprocess.py", > line 190, in check_call > > raise CalledProcessError(retcode, cmd) > > CalledProcessError: Command > '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit > status -11 > > Protocol failed: Command > '/programs/x86_64-linux/scipion/2.0/software/em/gautomatch-0.56/bin/Gautomatch_v0.56_sm30_cu8.0 > Runs/001122_ProtGautomatch/extra/micrographs/ucdavis_20190903_SCmb_B_0001.mrc > -T Runs/001122_ProtGautomatch/extra/references.mrcs --apixM 0.83 > --ang_step 5 --diameter 300 --lp 30 --hp 1000 --gid 0 --apixT 0.83 > --cc_cutoff 0.10 --speed 2 --boxsize 200 --min_dist 300 --lsigma_cutoff > 1.20 --lsigma_D 100 --lave_max 1.00 --lave_min -1.00 --lave_D 100 > --detect_ice 0 --T_norm_type 1 --do_bandpass 1' returned non-zero exit > status -11 > > FAILED: pickMicrographStep, step 2 > > 2019-12-11 10:32:59.514441 > > ------------------- PROTOCOL FAILED (DONE 2/8369) > > > > > > > > Config > > > > > > # -*- conf -*- > > > > # All relative paths will have $SCIPION_HOME as their root. 
> > > > [DIRS_LOCAL] > > SCIPION_USER_DATA = ~/ScipionUserData > > SCIPION_LOGS = %(SCIPION_USER_DATA)s/logs > > SCIPION_TMP = %(SCIPION_USER_DATA)s/tmp > > > > [DIRS_GLOBAL] > > SCIPION_TESTS = data/tests > > SCIPION_SOFTWARE = software > > > > [REMOTE] > > SCIPION_URL = http://scipion.cnb.csic.es/downloads/scipion > > SCIPION_URL_SOFTWARE = %(SCIPION_URL)s/software > > SCIPION_URL_TESTDATA = %(SCIPION_URL)s/data/tests > > > > [PACKAGES] > > EM_ROOT = software/em > > > > CRYOEF_HOME = %(EM_ROOT)s/cryoEF-1.1.0 > > CCP4_HOME = /programs/x86_64-linux/ccp4/7.0/ccp4-7.0 > > EMAN2DIR = %(EM_ROOT)s/eman-2.21/ > > ETHAN_HOME = %(EM_ROOT)s/ethan-1.2 > > GAUTOMATCH_HOME = %(EM_ROOT)s/gautomatch-0.56 > > # replace '_cuX.Y_' to certain installed cuda > > GAUTO > > |
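When Gautomatch keeps dying with exit statuses like -6 or -11, the quickest way to settle the sm/cu build question raised above is to try each available binary once on a single micrograph outside Scipion. The following is a minimal standalone Python sketch, not part of Scipion: the micrograph, template and binary paths are placeholders modelled on the commands quoted in this thread, and the flags are the ones already used above.

    import glob
    import subprocess

    # Hypothetical paths: one already-imported micrograph plus the picking
    # references used in this thread; adjust to your own project layout.
    MIC = "Runs/001122_ProtGautomatch/extra/micrographs/test_micrograph.mrc"
    TEMPLATES = "Runs/001122_ProtGautomatch/extra/references.mrcs"

    # Binary names follow the sm*/cu* convention of the Gautomatch downloads.
    BINARIES = sorted(glob.glob(
        "/programs/x86_64-linux/gautomatch/0.56/bin/Gautomatch_v0.56_sm*_cu*"))

    # Flags taken from the commands in this thread; --gid 0 pins the first GPU.
    args = ["-T", TEMPLATES, "--apixM", "0.83", "--apixT", "0.83",
            "--diameter", "300", "--gid", "0"]

    for binary in BINARIES:
        proc = subprocess.run([binary, MIC] + args,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
        # Negative return codes are POSIX signals: -11 = SIGSEGV, -6 = SIGABRT,
        # i.e. the build does not match the installed CUDA toolkit / GPU.
        status = "OK" if proc.returncode == 0 else "failed (%d)" % proc.returncode
        print("%-60s %s" % (binary, status))

Whichever build finishes cleanly is the name to put into the GAUTOMATCH variable of scipion.conf, with GAUTOMATCH_HOME pointing at its parent directory, as in the config shown in the message above.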
From: Pablo C. <pc...@cn...> - 2020-01-07 07:37:29
|
Thanks Fan, Erney, just told me that deepres is not released in scipion 2.0 but can be access in the devel "mode". I guess any of he xmipp guys will be able to help you if you are willing to go that way. On 5/1/20 16:25, 范宏成 wrote: > Hi Pablo, > I can see xmipp protocols in my scipion except for deepRes. Please see > attachment. > > Fan > > > > > > At 2020-01-05 23:04:56, pc...@cn... wrote: > > Hi, can you check what is listed when you select the view "all" in > the top right drop-down. Can you see any Xmipp protocol? > > El 2 ene. 2020 17:49, 范宏成 <187...@16...> escribió: > > Dear Jose, > Regarding the DeepRes, I did't find in my scipion although I > usecrtl+F and search by writing the deepres. I guess maybe my > scipion is not the developer version. So I tried to reinstall > my scipion (devel mode) and use plugin to install > xmipp(v3.19.04 source code).I found there existed deepRes in > my new install > scipion(scipion/software/em/xmipp/bin/xmipp_deepRes_resolution > and software/em/xmipp/bin/models/deepRes). However, I still > didn't search the deepRes in scipion gui. How to solve this > problem? Thank you! > > Fan > > 范宏成 > m18...@16... > > <https://maas.mail.163.com/dashi-web-extend/html/proSignature.html?&name=%E8%8C%83%E5%AE%8F%E6%88%90&uid=m18717124574%40163.com&ftlId=1&iconUrl=http%3A%2F%2Fmail-online.nosdn.127.net%2Fsmfc69753e7457a6e590754a7cd5492c0d.jpg&items=%5B%22m18717124574%40163.com%22%5D> > > 签名由 网易邮箱大师 > <https://mail.163.com/dashi/dlpro.html?from=mail81> 定制 > On 1/2/2020 17:46,JOSE LUIS VILAS PRIETO<jl...@cn...> > <mailto:jl...@cn...> wrote: > > Dear Fan, > > Thanks for getting in touch with us. > > The input of localdeblur can be any local resolution map, > it means > that you can compute the local resolution with monores, > resmap, > blocres, o deepres. There is no one better than the other. > > In the case of monores, as well as resmap, it is mandatory > that your > map contains noise, if it is noise free, use as I put two > halves. It > should always be the best way of estimating the local > resolution. > > Also, deepres is a good choice, because it overcomes the > noise > problem, and you can computer the local resolution map of > a noise-free > density map. > > Deepres belongs to xmipp package and you should be able to > find it in > analisys tab in the left panel. Alternatively, you can do > crtl+F and > search by writing the name of the protocol, it means deepres. > > Regarding with the micelle, you can create a mask and > apply it to remove it > > Anything you need, please let us know > > Kind regards > > José Luis Vilas > > Quoting 范宏成 <187...@16...>: > > Dear Colleagues, > I have some questions about Local resolution > estimation and Local > Deblur. Looking forward to your response . Thank you! > 1. MonoRes. Can MonoRes estimate the local resolution > when I do > focused refinement (masked refinement) using relion. I > mean there > maybe no region to estimate the noise. In this > condition, I have > to use the relion local resolution protocol to get the > focused map > local resolution. > 2. Local Deblur. Can Local Deblur support the relion > local > resolution map or only support MonoRes,Resmap, Blocres? > 3. DeepRes. I recently read a paper about the DeepRes > and find it > very useful for my project. I didn't find the DeepRes > protocol in > my scipion. My scipion version is v2.0(2019-04-23) > Diocletian in > Centos7 system(Binary version). How can I use this > protocol? > 4.DeepRes for membrane protein. 
In DeepRes paper, It > said it is very > important to mask out the micelle because DeepRes only > trained with > proteins. If I don't mask out the micelle, What bad > consequencies > will happen? By the way, How to effectively and > carefully mask out > the micellce? In my way, I deleted the micelle using > particle > substraction and run again auto-refinement in relion. > In order not > to delete the signal from the protein, I used to use > the soft mask, > So My membrante protein had a small number of > micelles around the > TM domain. I'm afraid this residual micelle may cause bad > consequencies when using DeepRes. Does someone have a > better way to > delete the micelle in membrane proteins? > > > Fan Hongcheng > > > | | > 范宏成 > | > | > m18...@16... > | > 签名由网易邮箱大师定制 > > > > > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > > > > > > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users -- Pablo Conesa - *Madrid Scipion <http://scipion.i2pc.es> team* |
From: 范宏成 <187...@16...> - 2020-01-06 09:09:51
|
Dear Colleagues, I used the relion local resolution estimation in the scipion and wanted to import into LocalDeblur. It appeared to be failed. The following is run.log. run.log 00007: 2020-01-06 16:19:26,765 INFO: runMode: Continue 00008: 2020-01-06 16:19:26,766 INFO: MPI: 1 00009: 2020-01-06 16:19:26,766 INFO: threads: 4 00010: 2020-01-06 16:19:26,768 INFO: len(steps) 5 len(prevSteps) 0 00011: 2020-01-06 16:19:26,769 INFO: Starting at step: 1 00012: 2020-01-06 16:19:26,835 INFO: Running steps 00013: 2020-01-06 16:19:26,904 INFO: STARTED: convertInputStep, step 1 00014: 2020-01-06 16:19:26,904 INFO: 2020-01-06 16:19:26.903935 00015: 2020-01-06 16:19:27,030 INFO: FINISHED: convertInputStep, step 1 00016: 2020-01-06 16:19:27,030 INFO: 2020-01-06 16:19:26.916327 00017: 2020-01-06 16:19:27,031 INFO: STARTED: checkBackgroundStep, step 2 00018: 2020-01-06 16:19:27,032 INFO: 2020-01-06 16:19:27.031477 00019: 2020-01-06 16:19:27,689 INFO: FINISHED: checkBackgroundStep, step 2 00020: 2020-01-06 16:19:27,689 INFO: 2020-01-06 16:19:27.642308 00021: 2020-01-06 16:19:27,690 INFO: STARTED: createMaskStep, step 3 00022: 2020-01-06 16:19:27,691 INFO: 2020-01-06 16:19:27.690446 00023: 2020-01-06 16:19:27,698 INFO: xmipp_transform_threshold -i Runs/030962_ProtRelionLocalRes/extra/relion_locres_filtered.mrc:mrc -o Runs/044723_XmippProtLocSharp/tmp/binaryMask.vol --select below 0.100000 --substitute binarize 00024: 2020-01-06 16:19:28,680 INFO: FINISHED: createMaskStep, step 3 00025: 2020-01-06 16:19:28,681 INFO: 2020-01-06 16:19:28.574667 00026: 2020-01-06 16:19:28,682 INFO: STARTED: sharpeningAndMonoResStep, step 4 00027: 2020-01-06 16:19:28,683 INFO: 2020-01-06 16:19:28.682466 00028: 2020-01-06 16:19:28,691 INFO: xmipp_volume_local_sharpening -o Runs/044723_XmippProtLocSharp/extra/sharpenedMap_1.mrc --resolution_map Runs/030962_ProtRelionLocalRes/extra/relion_locres_filtered.mrc:mrc --sampling 1.230000 -n 4 -k 0.025000 --md Runs/044723_XmippProtLocSharp/tmp/params.xmd --vol Runs/030857_ProtRelionRefine3D/extra/relion_class001.mrc:mrc 00029: 2020-01-06 16:19:56,316 INFO: xmipp_image_header -i Runs/044723_XmippProtLocSharp/extra/sharpenedMap_1.mrc -s 1.230000 00030: 2020-01-06 16:19:56,429 INFO: xmipp_resolution_monogenic_signal --vol Runs/044723_XmippProtLocSharp/extra/sharpenedMap_1.mrc --mask Runs/044723_XmippProtLocSharp/tmp/binaryMask.vol --sampling_rate 1.230000 --minRes 2.460000 --maxRes 0.123808 --step 0.250000 --mask_out Runs/044723_XmippProtLocSharp/tmp/refined_mask.vol -o Runs/044723_XmippProtLocSharp/tmp/resolutionMonoRes.vol --volumeRadius 120.000000 --exact --chimera_volume Runs/044723_XmippProtLocSharp/tmp/MonoResChimera.vol --sym c1 --significance 0.950000 --md_outputdata Runs/044723_XmippProtLocSharp/tmp/mask_data.xmd --threads 4 00031: 2020-01-06 16:20:02,794 ERROR: Protocol failed: Command 'xmipp_resolution_monogenic_signal --vol Runs/044723_XmippProtLocSharp/extra/sharpenedMap_1.mrc --mask Runs/044723_XmippProtLocSharp/tmp/binaryMask.vol --sampling_rate 1.230000 --minRes 2.460000 --maxRes 0.123808 --step 0.250000 --mask_out Runs/044723_XmippProtLocSharp/tmp/refined_mask.vol -o Runs/044723_XmippProtLocSharp/tmp/resolutionMonoRes.vol --volumeRadius 120.000000 --exact --chimera_volume Runs/044723_XmippProtLocSharp/tmp/MonoResChimera.vol --sym c1 --significance 0.950000 --md_outputdata Runs/044723_XmippProtLocSharp/tmp/mask_data.xmd --threads 4' returned non-zero exit status -11 00032: 2020-01-06 16:20:02,837 INFO: FAILED: sharpeningAndMonoResStep, step 4 00033: 2020-01-06 16:20:02,838 INFO: 
2020-01-06 16:20:02.719926 00034: 2020-01-06 16:20:03,010 INFO: ------------------- PROTOCOL FAILED (DONE 4/5) Then I used Monores, Resmap, Blocres. The LocalDeblur can be executed successfully. About the localDeblur, The output are several sharpened maps from different interations, Should I use the last sharpened map? Because the sharpening algorithm reaches the convergence criterion. But I found the last map appeared to be noiser visualized in Chimera. So can I use the earlier interation result? Fan Hongcheng | | 范宏成 | | m18...@16... | 签名由网易邮箱大师定制 |
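When the MonoRes step inside LocalDeblur fails as above, a quick first check is to look at the sampling rate and value range of the volume that was passed as --resolution_map. Below is a minimal sketch, assuming the mrcfile and numpy packages are available; the path is copied from the failing command, the script itself is generic.

    import mrcfile
    import numpy as np

    # Path taken from the failing xmipp_volume_local_sharpening command above.
    path = "Runs/030962_ProtRelionLocalRes/extra/relion_locres_filtered.mrc"

    with mrcfile.open(path, permissive=True) as mrc:
        data = np.asarray(mrc.data, dtype=np.float32)
        sampling = float(mrc.voxel_size.x)   # A/px read from the header

    # Zero voxels usually mark the region outside the mask, so ignore them.
    values = data[data > 0]
    print("sampling rate  : %.3f A/px" % sampling)
    print("map value range: %.3f - %.3f" % (values.min(), values.max()))

If the range printed here does not look like plausible local-resolution values in Angstroms for the given sampling rate, the wrong volume is being fed into the sharpening step and that is worth reporting together with the log above.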
From: 范宏成 <187...@16...> - 2020-01-05 15:25:30
|
Hi Pablo, I can see xmipp protocols in my scipion except for deepRes. Please see attachment. Fan At 2020-01-05 23:04:56, pc...@cn... wrote: Hi, can you check what is listed when you select the view "all" in the top right drop-down. Can you see any Xmipp protocol? El 2 ene. 2020 17:49, 范宏成 <187...@16...> escribió: Dear Jose, Regarding the DeepRes, I did't find in my scipion although I use crtl+F and search by writing the deepres. I guess maybe my scipion is not the developer version. So I tried to reinstall my scipion (devel mode) and use plugin to install xmipp(v3.19.04 source code).I found there existed deepRes in my new install scipion(scipion/software/em/xmipp/bin/xmipp_deepRes_resolution and software/em/xmipp/bin/models/deepRes). However, I still didn't search the deepRes in scipion gui. How to solve this problem? Thank you! Fan | | 范宏成 | | m18...@16... | 签名由 网易邮箱大师 定制 On 1/2/2020 17:46,JOSE LUIS VILAS PRIETO<jl...@cn...> wrote: Dear Fan, Thanks for getting in touch with us. The input of localdeblur can be any local resolution map, it means that you can compute the local resolution with monores, resmap, blocres, o deepres. There is no one better than the other. In the case of monores, as well as resmap, it is mandatory that your map contains noise, if it is noise free, use as I put two halves. It should always be the best way of estimating the local resolution. Also, deepres is a good choice, because it overcomes the noise problem, and you can computer the local resolution map of a noise-free density map. Deepres belongs to xmipp package and you should be able to find it in analisys tab in the left panel. Alternatively, you can do crtl+F and search by writing the name of the protocol, it means deepres. Regarding with the micelle, you can create a mask and apply it to remove it Anything you need, please let us know Kind regards José Luis Vilas Quoting 范宏成 <187...@16...>: Dear Colleagues, I have some questions about Local resolution estimation and Local Deblur. Looking forward to your response . Thank you! 1. MonoRes. Can MonoRes estimate the local resolution when I do focused refinement (masked refinement) using relion. I mean there maybe no region to estimate the noise. In this condition, I have to use the relion local resolution protocol to get the focused map local resolution. 2. Local Deblur. Can Local Deblur support the relion local resolution map or only support MonoRes,Resmap, Blocres? 3. DeepRes. I recently read a paper about the DeepRes and find it very useful for my project. I didn't find the DeepRes protocol in my scipion. My scipion version is v2.0(2019-04-23) Diocletian in Centos7 system(Binary version). How can I use this protocol? 4.DeepRes for membrane protein. In DeepRes paper, It said it is very important to mask out the micelle because DeepRes only trained with proteins. If I don't mask out the micelle, What bad consequencies will happen? By the way, How to effectively and carefully mask out the micellce? In my way, I deleted the micelle using particle substraction and run again auto-refinement in relion. In order not to delete the signal from the protein, I used to use the soft mask, So My membrante protein had a small number of micelles around the TM domain. I'm afraid this residual micelle may cause bad consequencies when using DeepRes. Does someone have a better way to delete the micelle in membrane proteins? Fan Hongcheng | | 范宏成 | | m18...@16... | 签名由网易邮箱大师定制 _______________________________________________ scipion-users mailing list sci...@li... 
https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: <pc...@cn...> - 2020-01-05 15:05:19
|
Hi, can you check what is listed when you select the view "all" in the top right drop-down. Can you see any Xmipp protocol? El 2 ene. 2020 17:49, 范宏成 <187...@16...> escribió: Dear Jose, Regarding the DeepRes, I did't find in my scipion although I use crtl+F and search by writing the deepres. I guess maybe my scipion is not the developer version. So I tried to reinstall my scipion (devel mode) and use plugin to install xmipp(v3.19.04 source code).I found there existed deepRes in my new install scipion(scipion/software/em/xmipp/bin/xmipp_deepRes_resolution and software/em/xmipp/bin/models/deepRes). However, I still didn't search the deepRes in scipion gui. How to solve this problem? Thank you! Fan | | 范宏成 | | m18717124574@163.com | 签名由 网易邮箱大师 定制 On 1/2/2020 17:46,JOSE LUIS VILAS PRIETO<jlvilas@cnb.csic.es> wrote: Dear Fan, Thanks for getting in touch with us. The input of localdeblur can be any local resolution map, it means that you can compute the local resolution with monores, resmap, blocres, o deepres. There is no one better than the other. In the case of monores, as well as resmap, it is mandatory that your map contains noise, if it is noise free, use as I put two halves. It should always be the best way of estimating the local resolution. Also, deepres is a good choice, because it overcomes the noise problem, and you can computer the local resolution map of a noise-free density map. Deepres belongs to xmipp package and you should be able to find it in analisys tab in the left panel. Alternatively, you can do crtl+F and search by writing the name of the protocol, it means deepres. Regarding with the micelle, you can create a mask and apply it to remove it Anything you need, please let us know Kind regards José Luis Vilas Quoting 范宏成 <18717124574@163.com>: Dear Colleagues, I have some questions about Local resolution estimation and Local Deblur. Looking forward to your response . Thank you! 1. MonoRes. Can MonoRes estimate the local resolution when I do focused refinement (masked refinement) using relion. I mean there maybe no region to estimate the noise. In this condition, I have to use the relion local resolution protocol to get the focused map local resolution. 2. Local Deblur. Can Local Deblur support the relion local resolution map or only support MonoRes,Resmap, Blocres? 3. DeepRes. I recently read a paper about the DeepRes and find it very useful for my project. I didn't find the DeepRes protocol in my scipion. My scipion version is v2.0(2019-04-23) Diocletian in Centos7 system(Binary version). How can I use this protocol? 4.DeepRes for membrane protein. In DeepRes paper, It said it is very important to mask out the micelle because DeepRes only trained with proteins. If I don't mask out the micelle, What bad consequencies will happen? By the way, How to effectively and carefully mask out the micellce? In my way, I deleted the micelle using particle substraction and run again auto-refinement in relion. In order not to delete the signal from the protein, I used to use the soft mask, So My membrante protein had a small number of micelles around the TM domain. I'm afraid this residual micelle may cause bad consequencies when using DeepRes. Does someone have a better way to delete the micelle in membrane proteins? Fan Hongcheng | | 范宏成 | | m18717124574@163.com | 签名由网易邮箱大师定制 _______________________________________________ scipion-users mailing list scipion-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: 范宏成 <187...@16...> - 2020-01-02 16:49:26
|
Dear Jose, Regarding DeepRes, I didn't find it in my Scipion, even though I used Ctrl+F and searched for "deepres". I guess my Scipion may not be the developer version, so I tried to reinstall Scipion (devel mode) and used the plugin manager to install Xmipp (v3.19.04, source code). I found that deepRes does exist in my new installation (scipion/software/em/xmipp/bin/xmipp_deepRes_resolution and software/em/xmipp/bin/models/deepRes). However, I still cannot find deepRes in the Scipion GUI. How can I solve this problem? Thank you! Fan | | 范宏成 | | m18...@16... | 签名由网易邮箱大师定制 On 1/2/2020 17:46,JOSE LUIS VILAS PRIETO<jl...@cn...> wrote: Dear Fan, Thanks for getting in touch with us. The input of localdeblur can be any local resolution map, it means that you can compute the local resolution with monores, resmap, blocres, o deepres. There is no one better than the other. In the case of monores, as well as resmap, it is mandatory that your map contains noise, if it is noise free, use as I put two halves. It should always be the best way of estimating the local resolution. Also, deepres is a good choice, because it overcomes the noise problem, and you can computer the local resolution map of a noise-free density map. Deepres belongs to xmipp package and you should be able to find it in analisys tab in the left panel. Alternatively, you can do crtl+F and search by writing the name of the protocol, it means deepres. Regarding with the micelle, you can create a mask and apply it to remove it Anything you need, please let us know Kind regards José Luis Vilas Quoting 范宏成 <187...@16...>: Dear Colleagues, I have some questions about Local resolution estimation and Local Deblur. Looking forward to your response . Thank you! 1. MonoRes. Can MonoRes estimate the local resolution when I do focused refinement (masked refinement) using relion. I mean there maybe no region to estimate the noise. In this condition, I have to use the relion local resolution protocol to get the focused map local resolution. 2. Local Deblur. Can Local Deblur support the relion local resolution map or only support MonoRes,Resmap, Blocres? 3. DeepRes. I recently read a paper about the DeepRes and find it very useful for my project. I didn't find the DeepRes protocol in my scipion. My scipion version is v2.0(2019-04-23) Diocletian in Centos7 system(Binary version). How can I use this protocol? 4.DeepRes for membrane protein. In DeepRes paper, It said it is very important to mask out the micelle because DeepRes only trained with proteins. If I don't mask out the micelle, What bad consequencies will happen? By the way, How to effectively and carefully mask out the micellce? In my way, I deleted the micelle using particle substraction and run again auto-refinement in relion. In order not to delete the signal from the protein, I used to use the soft mask, So My membrante protein had a small number of micelles around the TM domain. I'm afraid this residual micelle may cause bad consequencies when using DeepRes. Does someone have a better way to delete the micelle in membrane proteins? Fan Hongcheng | | 范宏成 | | m18...@16... | 签名由网易邮箱大师定制 _______________________________________________ scipion-users mailing list sci...@li... https://lists.sourceforge.net/lists/listinfo/scipion-users |
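If the binaries and models are on disk but the protocol never appears in the GUI, the most likely reason is that the installed plugin code does not register the protocol at all, which matches Pablo's reply in this thread that deepres was not yet released for Scipion 2.0. A minimal sketch to check this directly; the site-packages location and the xmipp3 directory name are assumptions modelled on the install paths seen in the tracebacks elsewhere in this archive, so adjust them to your installation.

    import os

    # Assumed plugin location; adapt SCIPION_HOME and the python version.
    plugin_dir = ("/path/to/scipion/software/lib/"
                  "python2.7/site-packages/xmipp3")

    hits = []
    for root, _dirs, files in os.walk(plugin_dir):
        for name in files:
            if name.endswith(".py"):
                path = os.path.join(root, name)
                with open(path, "rb") as handle:
                    if b"deepres" in handle.read().lower():
                        hits.append(path)

    # No hits means this plugin release does not define/register the protocol,
    # so it cannot show up in the GUI even though the binary exists on disk.
    print("\n".join(hits) if hits else "no deepres reference found in the plugin")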
From: JOSE L. V. P. <jl...@cn...> - 2020-01-02 09:46:55
|
Dear Fan, Thanks for getting in touch with us. The input of LocalDeblur can be any local resolution map; that is, you can compute the local resolution with MonoRes, ResMap, blocres or DeepRes, and none of them is inherently better than the others. In the case of MonoRes, as well as ResMap, it is mandatory that your map contains noise; if it is noise-free, use the two half-maps as input instead. That should always be the best way of estimating the local resolution. DeepRes is also a good choice, because it overcomes the noise problem and can compute the local resolution map of a noise-free density map. DeepRes belongs to the Xmipp package and you should be able to find it in the analysis tab in the left panel. Alternatively, you can press Ctrl+F and search by typing the name of the protocol, i.e. deepres. Regarding the micelle, you can create a mask and apply it to remove it. Anything you need, please let us know. Kind regards José Luis Vilas Quoting 范宏成 <187...@16...>: > Dear Colleagues, > I have some questions about Local resolution estimation and Local > Deblur. Looking forward to your response . Thank you! > 1. MonoRes. Can MonoRes estimate the local resolution when I do > focused refinement (masked refinement) using relion. I mean there > maybe no region to estimate the noise. In this condition, I have > to use the relion local resolution protocol to get the focused map > local resolution. > 2. Local Deblur. Can Local Deblur support the relion local > resolution map or only support MonoRes,Resmap, Blocres? > 3. DeepRes. I recently read a paper about the DeepRes and find it > very useful for my project. I didn't find the DeepRes protocol in > my scipion. My scipion version is v2.0(2019-04-23) Diocletian in > Centos7 system(Binary version). How can I use this protocol? > 4.DeepRes for membrane protein. In DeepRes paper, It said it is very > important to mask out the micelle because DeepRes only trained with > proteins. If I don't mask out the micelle, What bad consequencies > will happen? By the way, How to effectively and carefully mask out > the micellce? In my way, I deleted the micelle using particle > substraction and run again auto-refinement in relion. In order not > to delete the signal from the protein, I used to use the soft mask, > So My membrante protein had a small number of micelles around the > TM domain. I'm afraid this residual micelle may cause bad > consequencies when using DeepRes. Does someone have a better way to > delete the micelle in membrane proteins? > > > Fan Hongcheng > > > | | > 范宏成 > | > | > m18...@16... > | > 签名由网易邮箱大师定制 |
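For the "create a mask and apply it" suggestion above, here is a minimal standalone sketch that multiplies a map by a protein-only mask before running DeepRes. It assumes the mrcfile and numpy packages and placeholder file names; inside Scipion the equivalent would normally be done with its mask creation and operation protocols.

    import mrcfile
    import numpy as np

    # Placeholder names: the refined map and a mask that is 1 over the
    # protein and 0 over the micelle/solvent region.
    with mrcfile.open("refined_map.mrc", permissive=True) as mrc:
        volume = np.asarray(mrc.data, dtype=np.float32)
        sampling = float(mrc.voxel_size.x)

    with mrcfile.open("protein_only_mask.mrc", permissive=True) as mrc:
        mask = np.asarray(mrc.data, dtype=np.float32)

    masked = volume * mask   # zero out micelle and solvent density

    with mrcfile.new("refined_map_masked.mrc", overwrite=True) as out:
        out.set_data(masked)
        out.voxel_size = sampling   # keep the original sampling in the header

A soft-edged mask (values between 0 and 1 near the protein boundary) works the same way and avoids sharp cut-offs at the micelle interface.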
From: 范宏成 <187...@16...> - 2020-01-02 09:01:54
|
Dear Colleagues,

I have some questions about local resolution estimation and LocalDeblur. Looking forward to your response. Thank you!

1. MonoRes. Can MonoRes estimate the local resolution when I do focused (masked) refinement in Relion? I mean that there may be no region left to estimate the noise. In that condition I have to use the Relion local resolution protocol to get the local resolution of the focused map.

2. LocalDeblur. Does LocalDeblur support the Relion local resolution map, or only MonoRes, ResMap and blocres?

3. DeepRes. I recently read the DeepRes paper and found it very useful for my project, but I didn't find the DeepRes protocol in my Scipion. My Scipion version is v2.0 (2019-04-23) Diocletian on a CentOS 7 system (binary version). How can I use this protocol?

4. DeepRes for membrane proteins. The DeepRes paper says it is very important to mask out the micelle because DeepRes was trained only on proteins. If I don't mask out the micelle, what bad consequences will there be? And how can the micelle be masked out effectively and carefully? My approach was to delete the micelle using particle subtraction and then run auto-refinement again in Relion. In order not to delete signal from the protein, I used a soft mask, so my membrane protein still had a small amount of micelle left around the TM domain. I'm afraid this residual micelle may cause bad consequences when using DeepRes. Does someone have a better way to delete the micelle in membrane proteins?

Fan Hongcheng

范宏成 | m18...@16... | Signature customized by NetEase Mail Master
From: Pablo C. <pc...@cn...> - 2019-12-27 15:48:13
|
Hi Leonid,

You've tried the right combination, the last one: pattern: *.mrc. But I think what you have are movies, so try the same thing with the "Import movies" protocol instead of the micrographs one.

Hope this helps.

On 25/12/19 10:32, Leonid Lavnevich wrote:
> Dear Sir/Madam,
>
> I am writing to you because of a problem that I have with importing micrographs with Scipion. You will find attached some screenshots and the errors that the software gives me. Apparently the micrographs I have are stacked ones (I have tried different patterns)…
>
> Thank you in advance,
>
> Merry Christmas and a Happy New Year,
>
> Leonid
>
> Leonid Lavnevich
> PhD researcher at IPCM
> Sorbonne Universités - The Faculty of Science & Engineering
> Tel: +33 7 83 56 18 50
> Mail: leo...@so...
> LinkedIn: https://www.linkedin.com/in/leonid-lavnevich/

--
Pablo Conesa - Madrid Scipion <http://scipion.i2pc.es> team
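Before launching the import protocol it can help to confirm that the chosen pattern really matches your files. The check below is purely illustrative and the directory path is a placeholder.

```python
import glob
import os

# Placeholder directory: point it at the folder you give to the import protocol.
data_dir = "/path/to/movies"
pattern = os.path.join(data_dir, "*.mrc")

matches = sorted(glob.glob(pattern))
print("Pattern %s matches %d file(s)" % (pattern, len(matches)))
for f in matches[:5]:
    print("  ", f)
```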
From: Gregory S. <sha...@gm...> - 2019-12-26 18:52:32
|
Dear Leonid,

Looking at the filenames, I think what you have are not stacked micrographs but movies. You should use the import movies protocol instead.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267228
e-mail: gs...@mr...
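One quick way to tell movies from single micrographs, along the lines of the remark above, is to look at the number of sections in the MRC header: a movie/stack has nz > 1, while a single micrograph has nz = 1. The sketch assumes the mrcfile package is installed and uses a placeholder path.

```python
import glob
import mrcfile

# Placeholder pattern; a file with more than one section (nz > 1) is a
# stack/movie, a single-section file is an individual micrograph.
for path in sorted(glob.glob("/path/to/data/*.mrc"))[:10]:
    with mrcfile.open(path, header_only=True, permissive=True) as m:
        nz = int(m.header.nz)
    kind = "movie/stack (%d frames)" % nz if nz > 1 else "single micrograph"
    print("%s -> %s" % (path, kind))
```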
From: Leonid L. <leo...@so...> - 2019-12-25 09:49:08
|
Dear Sir/Madam,

I am writing to you because of a problem that I have with importing micrographs with Scipion. You will find attached some screenshots and the errors that the software gives me. Apparently the micrographs I have are stacked ones (I have tried different patterns)…

Thank you in advance,

Merry Christmas and a Happy New Year,

Leonid

Leonid Lavnevich
PhD researcher at IPCM
Sorbonne Universités - The Faculty of Science & Engineering
Tel: +33 7 83 56 18 50
Mail: leo...@so...
LinkedIn: https://www.linkedin.com/in/leonid-lavnevich/
From: Dmitry S. <Sem...@gm...> - 2019-12-18 11:23:22
|
Dear Juha,

Everything worked. Thank you very much!

Sincerely,
Dmitry

> On 17. Dec 2019, at 17:09, Juha Huiskonen <juh...@he...> wrote:
>
> Hi Dmitry,
>
> As a last resort you can try to copy a run.db file from a successful run over it (see the last directory under Runs and its logs subfolder). Make a backup first.
>
> Best wishes,
> Juha
>
>
> On Tue, Dec 17, 2019 at 3:52 PM Dmitry A. Semchonok <sem...@gm...> wrote:
> Dear colleagues,
>
> Could you please suggest an easy fix for when one cannot open a Scipion project?
>
> I think the project.sqlite was corrupted.
>
> Should I replace this file with another one?
>
> Thank you
>
> Sincerely,
> Dmitry
From: Juha H. <juh...@he...> - 2019-12-17 16:10:01
|
Hi Dmitry,

As a last resort you can try to copy a run.db file from a successful run over it (see the last directory under Runs and its logs subfolder). Make a backup first.

Best wishes,
Juha


On Tue, Dec 17, 2019 at 3:52 PM Dmitry A. Semchonok <sem...@gm...> wrote:
> Dear colleagues,
>
> Could you please suggest an easy fix for when one cannot open a Scipion project?
>
> I think the project.sqlite was corrupted.
>
> Should I replace this file with another one?
>
> Thank you
>
> Sincerely,
> Dmitry
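A rough sketch of the manual steps Juha describes is given below. It is only an illustration: the paths and the Runs/<run>/logs/run.db layout are assumptions that may differ between Scipion versions and projects, and you should judge for yourself whether overwriting the project database is appropriate in your case. In any case, keep the backup.

```python
import shutil
from pathlib import Path

# Placeholder paths -- adapt to your project; the Runs/<run>/logs/run.db
# layout follows Juha's description and is an assumption here.
project = Path("/path/to/ScipionUserData/projects/MyProject")
corrupted = project / "project.sqlite"
good_run_db = project / "Runs" / "000123_SomeProtocol" / "logs" / "run.db"

# Always keep a backup of the corrupted file before overwriting anything.
backup = corrupted.parent / (corrupted.name + ".bak")
shutil.copy2(corrupted, backup)
shutil.copy2(good_run_db, corrupted)
print("Replaced %s with %s (backup in %s)" % (corrupted, good_run_db, backup))
```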
From: Dmitry A. S. <sem...@gm...> - 2019-12-17 13:51:28
|
Dear colleagues,

Could you please suggest an easy fix for when one cannot open a Scipion project?

I think the project.sqlite was corrupted.

Should I replace this file with another one?

Thank you

Sincerely,
Dmitry
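Before replacing anything, you can check whether the database file really is damaged using Python's built-in sqlite3 module; the path below is a placeholder.

```python
import sqlite3

# Placeholder path to the project database you suspect is corrupted.
db_path = "/path/to/ScipionUserData/projects/MyProject/project.sqlite"

con = sqlite3.connect(db_path)
try:
    result = con.execute("PRAGMA integrity_check;").fetchone()[0]
    print("integrity_check:", result)   # "ok" means the file itself is sound
except sqlite3.DatabaseError as e:
    print("Database error:", e)         # e.g. "file is not a database"
finally:
    con.close()
```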
From: Carlos O. S. S. <co...@cn...> - 2019-12-11 20:39:02
|
Dear Amanda,

This error rings a bell, but I thought it was related to micrograph names containing a "-". If you have repeated the run and it finishes, that cannot be the cause, but it also means the error cannot be reproduced, which makes it very difficult to fix. As you have one successful execution, you can continue from there. Thank you for the report.

Kind regards,
Carlos Oscar

On 11/12/2019 19:09, Amanda Erwin wrote:
> Hi all,
>
> I started Xmipp3-CL2D with 13,012 particles into 20 classes using 4 MPI. I got computed classes (0-3), computed classes_core (0-3), but no stable_score classes. The error in the run.stdout is:
>
> 00382: STARTED: createOutputStep, step 9
> 00383: 2019-12-05 18:34:35.005964
> 00384: Traceback (most recent call last):
> 00385: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 186, in run
> 00386: self._run()
> 00387: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 237, in _run
> 00388: resultFiles = self._runFunc()
> 00389: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 233, in _runFunc
> 00390: return self._func(*self._args)
> 00391: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/xmipp3/protocols/protocol_cl2d.py", line 315, in createOutputStep
> 00392: self._fillClassesFromLevel(classes2DSet, "last", subset)
> 00393: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/xmipp3/protocols/protocol_cl2d.py", line 520, in _fillClassesFromLevel
> 00394: updateClassCallback=self._updateClass)
> 00395: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/em/data.py", line 1839, in classifyItems
> 00396: classItem.append(newItem)
> 00397: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/em/data.py", line 1040, in append
> 00398: EMSet.append(self, image)
> 00399: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/object.py", line 1190, in append
> 00400: self._insertItem(item)
> 00401: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/object.py", line 1194, in _insertItem
> 00402: self._getMapper().insert(item)
> 00403: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/mapper/sqlite.py", line 758, in insert
> 00404: self.db.createTables(obj.getObjDict(includeClass=True))
> 00405: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/mapper/sqlite.py", line 1101, in createTables
> 00406: )""" % self.tablePrefix)
> 00407: OperationalError: near "-": syntax error
> 00408: Protocol failed: near "-": syntax error
> 00409: FAILED: createOutputStep, step 9
> 00410: 2019-12-05 18:34:36.139019
> 00411: *** Last status is failed
> 00412: ------------------- PROTOCOL FAILED (DONE 9/13)
>
> I do not have "-" in my file names. The same job on other particles with the same naming scheme has completed without error.
>
> I have found a similar error reported on GitHub (https://github.com/I2PC/scipion/issues/1833). However, I did not stop my job at any point - the job simply failed while creating stable cores. Does anyone have any ideas why this error might be happening?
>
> Thanks!
> Amanda
>
> --
> ______________________________________________
> Amanda Erwin
> PhD Candidate | M. Ohi Lab
> Department of Cell and Developmental Biology
> University of Michigan Life Sciences Institute
> https://www.lsi.umich.edu/science/our-labs/melanie-ohi-lab
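Carlos suspects a "-" sneaking into a name that ends up in the generated SQL. As a purely hypothetical check, the sketch below lists file names in a directory that contain characters outside letters, digits, dot and underscore; the directory path is a placeholder and the character set is only a heuristic, not the exact rule Scipion's mapper uses.

```python
import os
import re

# Placeholder directory: point this at the folder holding the particle or
# micrograph files used by the failing run.
data_dir = "/path/to/particles"

# Characters outside this set could, in principle, leak into a generated
# SQLite identifier and trigger a syntax error.
suspicious = re.compile(r"[^A-Za-z0-9._]")

for name in sorted(os.listdir(data_dir)):
    hits = set(suspicious.findall(name))
    if hits:
        print("%s -> suspicious characters: %s" % (name, ", ".join(sorted(hits))))
```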
From: Amanda E. <er...@um...> - 2019-12-11 15:40:07
|
Hi all,

I started Xmipp3-CL2D with 13,012 particles into 20 classes using 4 MPI. I got computed classes (0-3), computed classes_core (0-3), but no stable_score classes. The error in the run.stdout is:

00382: STARTED: createOutputStep, step 9
00383: 2019-12-05 18:34:35.005964
00384: Traceback (most recent call last):
00385: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 186, in run
00386: self._run()
00387: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 237, in _run
00388: resultFiles = self._runFunc()
00389: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/protocol/protocol.py", line 233, in _runFunc
00390: return self._func(*self._args)
00391: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/xmipp3/protocols/protocol_cl2d.py", line 315, in createOutputStep
00392: self._fillClassesFromLevel(classes2DSet, "last", subset)
00393: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/software/lib/python2.7/site-packages/xmipp3/protocols/protocol_cl2d.py", line 520, in _fillClassesFromLevel
00394: updateClassCallback=self._updateClass)
00395: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/em/data.py", line 1839, in classifyItems
00396: classItem.append(newItem)
00397: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/em/data.py", line 1040, in append
00398: EMSet.append(self, image)
00399: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/object.py", line 1190, in append
00400: self._insertItem(item)
00401: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/object.py", line 1194, in _insertItem
00402: self._getMapper().insert(item)
00403: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/mapper/sqlite.py", line 758, in insert
00404: self.db.createTables(obj.getObjDict(includeClass=True))
00405: File "/lsi/sbgrid/programs/x86_64-linux/scipion/2.0/pyworkflow/mapper/sqlite.py", line 1101, in createTables
00406: )""" % self.tablePrefix)
00407: OperationalError: near "-": syntax error
00408: Protocol failed: near "-": syntax error
00409: FAILED: createOutputStep, step 9
00410: 2019-12-05 18:34:36.139019
00411: *** Last status is failed
00412: ------------------- PROTOCOL FAILED (DONE 9/13)

I do not have "-" in my file names. The same job on other particles with the same naming scheme has completed without error.

I have found a similar error reported on GitHub (https://github.com/I2PC/scipion/issues/1833). However, I did not stop my job at any point - the job simply failed while creating stable cores. Does anyone have any ideas why this error might be happening?

Thanks!
Amanda

--
______________________________________________
Amanda Erwin
PhD Candidate | M. Ohi Lab
Department of Cell and Developmental Biology
University of Michigan Life Sciences Institute
https://www.lsi.umich.edu/science/our-labs/melanie-ohi-lab
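For what it is worth, the error message itself can be reproduced outside Scipion: an unquoted "-" inside an SQLite identifier is parsed as a minus sign, which yields exactly the reported "near \"-\": syntax error". The table and column names below are made up for illustration and have nothing to do with Scipion's real schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A hyphen inside an unquoted identifier is parsed as a minus sign, so the
# CREATE TABLE statement fails in the same way as the mapper's createTables().
try:
    con.execute("CREATE TABLE Objects (c01-bad TEXT)")
except sqlite3.OperationalError as e:
    print("OperationalError:", e)   # -> near "-": syntax error

# Quoting the identifier (or avoiding "-" in whatever becomes a column or
# table name) makes the equivalent statement valid.
con.execute('CREATE TABLE Objects ("c01-bad" TEXT)')
print("Quoted identifier works fine.")
con.close()
```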