From: Grigory S. <sha...@gm...> - 2021-05-19 09:48:22
Thank you, Pablo! Indeed, I had never considered that an mrcs file could hold a single 2D particle, which is entirely possible.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267228
e-mail: gs...@mr...

On Wed, May 19, 2021 at 10:45 AM Pablo Conesa <pc...@cn...> wrote:

Hi, we've found the issue.

Although the particle import ("files" mode) seemed correct, it wasn't for those mrcs files holding a single image: 30 of them. Removing them at import time worked, and now relion 2D classification runs as expected.

We'll fix it.

Cheers!
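A note on catching these before import: the MRC stack header records the number of sections, so single-image stacks can be listed with a few lines of Python. This is only a sketch, assuming the mrcfile package is installed (pip install mrcfile) and that the exported stacks sit in one folder; the folder name below is a placeholder:

    # List .mrcs files that contain only one image, so they can be
    # excluded from a "files"-mode particle import. Sketch only.
    import glob
    import mrcfile

    for path in sorted(glob.glob("export_folder/*.mrcs")):  # placeholder path
        # header_only avoids reading the image data itself
        with mrcfile.open(path, header_only=True, permissive=True) as mrc:
            nz = int(mrc.header.nz)  # number of sections in the stack
        if nz == 1:
            print("single-image stack, exclude from import:", path)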
On 19/5/21 10:56, Grigory Sharov wrote:

Hi Dmitry,

I'll try to reproduce the error when I get a chance.

On Wed, May 19, 2021, 09:41 Pablo Conesa <pc...@cn...> wrote:

I see, maybe we can arrange a teleconference to see in detail what is wrong. I'll contact you.

On 19/5/21 10:26, Dmitry Semchonok wrote:

Dear Pablo and Grigory,

Thank you!

Yes, I am well aware of the fact that there is no info for the CTF etc. :)

All I need is just a nicely aligned 2D set of images (from the set I imported). (Ideally I would like to get this set straight from cryosparc, but I have no idea how to do that right away :) )

Please see the relion log:

00001: RUNNING PROTOCOL -----------------
00002: Hostname: cryoem01
00003: PID: 36440
00004: pyworkflow: 3.0.13
00005: plugin: relion
00006: plugin v: 3.1.2
00007: currentDir: /data1/ScipionUserData/projects/Caro__helix
00008: workingDir: Runs/001735_ProtRelionClassify2D
00009: runMode: Continue
00010: MPI: 3
00011: threads: 3
00012: Starting at step: 1
00013: Running steps
00014: STARTED: convertInputStep, step 1, time 2021-05-18 15:06:21.557294
00015: Converting set from 'Runs/001662_ProtImportParticles/particles.sqlite' into 'Runs/001735_ProtRelionClassify2D/input_particles.star'
00016: convertBinaryFiles: creating soft links.
00017: Root: Runs/001735_ProtRelionClassify2D/extra/input -> Runs/001662_ProtImportParticles/extra
00018: FINISHED: convertInputStep, step 1, time 2021-05-18 15:06:22.474076
00019: STARTED: runRelionStep, step 2, time 2021-05-18 15:06:22.502665
00020: mpirun -np 3 `which relion_refine_mpi` --i Runs/001735_ProtRelionClassify2D/input_particles.star --particle_diameter 690 --zero_mask --K 64 --norm --scale --o Runs/001735_ProtRelionClassify2D/extra/relion --oversampling 1 --flatten_solvent --tau2_fudge 2.0 --iter 25 --offset_range 5.0 --offset_step 2.0 --psi_step 10.0 --dont_combine_weights_via_disc --scratch_dir /data1/new_scratch/ --pool 3 --gpu --j 3
00021: RELION version: 3.1.2
00022: Precision: BASE=double, CUDA-ACC=single
00023:
00024: === RELION MPI setup ===
00025: + Number of MPI processes = 3
00026: + Number of threads per MPI process = 3
00027: + Total number of threads therefore = 9
00028: + Leader (0) runs on host = cryoem01
00029: + Follower 1 runs on host = cryoem01
00030: + Follower 2 runs on host = cryoem01
00031: =================
00032: uniqueHost cryoem01 has 2 ranks.
00033: GPU-ids not specified for this rank, threads will automatically be mapped to available devices.
00034: Thread 0 on follower 1 mapped to device 0
00035: Thread 1 on follower 1 mapped to device 0
00036: Thread 2 on follower 1 mapped to device 0
00037: GPU-ids not specified for this rank, threads will automatically be mapped to available devices.
00038: Thread 0 on follower 2 mapped to device 1
00039: Thread 1 on follower 2 mapped to device 1
00040: Thread 2 on follower 2 mapped to device 1
00041: Running CPU instructions in double precision.
00042: + WARNING: Changing psi sampling rate (before oversampling) to 5.625 degrees, for more efficient GPU calculations
00043: + On host cryoem01: free scratch space = 447.485 Gb.
00044: Copying particles to scratch directory: /data1/new_scratch/relion_volatile/
00045: 000/??? sec ~~(,_,">   [oo]
00046: 0/ 0 sec ~~(,_,">in: /opt/Scipion3/software/em/relion-3.1.2/src/rwMRC.h, line 192
00047: ERROR:
00048: readMRC: Image number 11 exceeds stack size 1 of image 000011@Runs/001735_ProtRelionClassify2D/extra/input/1024562735536827037_FoilHole_1618719_Data_1621438_1621440_20200703_085118_Fractions_patch_aligned_doseweighted_particles.mrcs
00049: === Backtrace ===
00050: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN11RelionErrorC1ERKSsS1_l+0x41) [0x4786a1]
00051: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN5ImageIdE7readMRCElbRK8FileName+0x99f) [0x4b210f]
00052: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN5ImageIdE5_readERK8FileNameR13fImageHandlerblbb+0x17b) [0x4b407b]
00053: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN10Experiment22copyParticlesToScratchEibbd+0xda7) [0x5b8f87]
00054: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN14MlOptimiserMpi18initialiseWorkLoadEv+0x210) [0x498540]
00055: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN14MlOptimiserMpi10initialiseEv+0x9aa) [0x49ab2a]
00056: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(main+0x55) [0x4322a5]
00057: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7fea6a54e555]
00058: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi() [0x435fbf]
00059: ==================
00060: ERROR:
00061: readMRC: Image number 11 exceeds stack size 1 of image 000011@Runs/001735_ProtRelionClassify2D/extra/input/1024562735536827037_FoilHole_1618719_Data_1621438_1621440_20200703_085118_Fractions_patch_aligned_doseweighted_particles.mrcs
00062: application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
00063: Traceback (most recent call last):
00064: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 197, in run
00065: self._run()
00066: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 248, in _run
00067: resultFiles = self._runFunc()
00068: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 244, in _runFunc
00069: return self._func(*self._args)
00070: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/relion/protocols/protocol_base.py", line 811, in runRelionStep
00071: self.runJob(self._getProgram(), params)
00072: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 1388, in runJob
00073: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs)
00074: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/executor.py", line 65, in runJob
00075: process.runJob(log, programName, params,
00076: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/utils/process.py", line 52, in runJob
00077: return runCommand(command, env, cwd)
00078: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/utils/process.py", line 67, in runCommand
00079: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr,
00080: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/subprocess.py", line 364, in check_call
00081: raise CalledProcessError(retcode, cmd)
00082: subprocess.CalledProcessError: Command ' mpirun -np 3 `which relion_refine_mpi` --i Runs/001735_ProtRelionClassify2D/input_particles.star --particle_diameter 690 --zero_mask --K 64 --norm --scale --o Runs/001735_ProtRelionClassify2D/extra/relion --oversampling 1 --flatten_solvent --tau2_fudge 2.0 --iter 25 --offset_range 5.0 --offset_step 2.0 --psi_step 10.0 --dont_combine_weights_via_disc --scratch_dir /data1/new_scratch/ --pool 3 --gpu --j 3' returned non-zero exit status 1.
00083: Protocol failed: Command ' mpirun -np 3 `which relion_refine_mpi` --i Runs/001735_ProtRelionClassify2D/input_particles.star --particle_diameter 690 --zero_mask --K 64 --norm --scale --o Runs/001735_ProtRelionClassify2D/extra/relion --oversampling 1 --flatten_solvent --tau2_fudge 2.0 --iter 25 --offset_range 5.0 --offset_step 2.0 --psi_step 10.0 --dont_combine_weights_via_disc --scratch_dir /data1/new_scratch/ --pool 3 --gpu --j 3' returned non-zero exit status 1.
00084: FAILED: runRelionStep, step 2, time 2021-05-18 15:06:24.609548
00085: *** Last status is failed
00086: ------------------- PROTOCOL FAILED (DONE 2/3)

Additionally: from a first look it seemed that there is some issue with image 11, so I deleted image 11, but the problem remained.

Optionally: I believe that xmipp 2D should work, but I did not try it yet.

Thank you

Sincerely,
Dmitry
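The readMRC error means the STAR file references slice 11 of a stack whose header reports a size of 1, which also explains why deleting one particle did not help: every single-image stack imported as a multi-particle stack produces such references. A throwaway cross-check along these lines (a sketch of mine, not a Scipion or RELION tool; it assumes the starfile and mrcfile packages and is run from the project directory so the relative paths resolve) lists all broken references at once:

    # Cross-check every rlnImageName reference in the input STAR file
    # against the actual stack depth on disk. Sketch only; assumes
    # `pip install starfile mrcfile`.
    import mrcfile
    import starfile

    star = starfile.read("Runs/001735_ProtRelionClassify2D/input_particles.star")
    # RELION 3.1 STAR files hold separate optics and particles tables
    particles = star["particles"] if isinstance(star, dict) else star

    for ref in particles["rlnImageName"]:
        index, path = ref.split("@")  # e.g. "000011@Runs/.../stack.mrcs"
        with mrcfile.open(path, header_only=True, permissive=True) as mrc:
            nz = int(mrc.header.nz)
        if int(index) > nz:
            print("broken reference: slice %d of %s (stack size %d)"
                  % (int(index), path, nz))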
On 19. May 2021, at 10:13, Pablo Conesa <pc...@cn...> wrote:

Hi! So, I think Grigory is right: you've gone through the particle import "without metadata info", therefore you only have the images, without any alignment information.

In theory, 2D classification should work with this kind of import. Could you please share the log of one of the relion classification runs?

On 19/5/21 9:26, Dmitry Semchonok wrote:

Dear Grigory,

I did nothing much, just tried to start relion 2D // or cryosparc.

The only things I tried additionally, since I could not proceed, were to a) change the box size; b) just resave the subset with the same number of images.

Please see the image.

<Screenshot 2021-05-19 at 09.23.41.png>

Thank you

Sincerely,
Dmitry

On 18. May 2021, at 15:19, Grigory Sharov <sha...@gm...> wrote:

Hi,

I imagine you have 624 micrographs, so particles are exported to mrc on a per-micrograph basis. I see you used the "files" option to import the mrcs particles into Scipion. This means the imported particles have no metadata except the pixel size you provided.

What did you do with them after import?

Best regards,
Grigory
On Tue, May 18, 2021 at 2:12 PM Dmitry Semchonok <Sem...@gm...> wrote:

Dear Grigory,

Yes, I did that; the particles are looking fine.

I guess the issue still comes from the fact that the cryosparc Export originally placed the particle stacks into 624 mrc files, while the number of particles is about 44,818. So even after the renaming and the import I see this in the SCIPION log:

<Screenshot 2021-05-18 at 15.09.11.png>

What I guess may help is if I somehow combine all those files into one mrcs first and then import that into SCIPION. Do you perhaps know how to do that?

Thank you

Sincerely,
Dmitry
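If it helps, merging the exported stacks into a single .mrcs can be done with the mrcfile package. The sketch below is a suggestion, untested on this data, and assumes all stacks share the same box and pixel size; the folder and output names are placeholders. Note it still carries no per-particle metadata, it only produces one binary stack:

    # Merge many per-micrograph particle stacks into one .mrcs stack.
    # Sketch only; assumes `pip install mrcfile numpy`.
    import glob
    import mrcfile
    import numpy as np

    paths = sorted(glob.glob("export/*.mrcs"))  # placeholder folder

    chunks = []
    for path in paths:
        with mrcfile.open(path, permissive=True) as mrc:
            data = mrc.data.copy()  # copy so the array survives file close
        # single-image files come back 2D; promote them to 1-image stacks
        chunks.append(data[np.newaxis] if data.ndim == 2 else data)

    combined = np.concatenate(chunks, axis=0)

    with mrcfile.new("combined_particles.mrcs", overwrite=True) as out:
        out.set_data(combined.astype(np.float32))
        out.set_image_stack()  # mark as a stack of 2D images, not a volume
        with mrcfile.open(paths[0], header_only=True, permissive=True) as first:
            out.voxel_size = first.voxel_size  # carry over the pixel size

    print("wrote %d particles to combined_particles.mrcs" % combined.shape[0])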
On 18. May 2021, at 15:02, Grigory Sharov <sha...@gm...> wrote:

Hi Dmitry,

as the error states, your star file points to a non-existing image in the mrcs stack. You need to check first whether the import from cryosparc with mrcs worked correctly (open / display the particles), then trace all the steps you did before 2D classification.

Best regards,
Grigory

On Tue, May 18, 2021 at 1:25 PM Dmitry Semchonok <Sem...@gm...> wrote:

Dear Pablo,

Thank you. I heard about this option. For that, I guess https://pypi.org/project/cs2star/ needs to be installed.

In Cryosparc itself there is an option to export files, and what we then get is mrc files with a different number of particles in each. It turned out to be possible to rename mrc -> mrcs; then SCIPION can import those particles. Currently the problem is that neither relion nor cryosparc can run on these particles.

Relion stops with this error:

00001: RUNNING PROTOCOL -----------------
00002: Hostname: cryoem01
00003: PID: 46455
00004: pyworkflow: 3.0.13
00005: plugin: relion
00006: plugin v: 3.1.2
00007: currentDir: /data1/ScipionUserData/projects/Caro__helix
00008: workingDir: Runs/001546_ProtRelionClassify2D
00009: runMode: Continue
00010: MPI: 3
00011: threads: 3
00012: Starting at step: 1
00013: Running steps
00014: STARTED: convertInputStep, step 1, time 2021-05-18 12:33:26.198123
00015: Converting set from 'Runs/001492_ProtUserSubSet/particles.sqlite' into 'Runs/001546_ProtRelionClassify2D/input_particles.star'
00016: convertBinaryFiles: creating soft links.
00017: Root: Runs/001546_ProtRelionClassify2D/extra/input -> Runs/001057_ProtImportParticles/extra
00018: FINISHED: convertInputStep, step 1, time 2021-05-18 12:33:27.117588
00019: STARTED: runRelionStep, step 2, time 2021-05-18 12:33:27.145974
00020: mpirun -np 3 `which relion_refine_mpi` --i Runs/001546_ProtRelionClassify2D/input_particles.star --particle_diameter 690 --zero_mask --K 64 --norm --scale --o Runs/001546_ProtRelionClassify2D/extra/relion --oversampling 1 --flatten_solvent --tau2_fudge 2.0 --iter 25 --offset_range 5.0 --offset_step 2.0 --psi_step 10.0 --dont_combine_weights_via_disc --scratch_dir /data1/new_scratch/ --pool 3 --gpu --j 3
00021: RELION version: 3.1.2
00022: Precision: BASE=double, CUDA-ACC=single
00023:
00024: === RELION MPI setup ===
00025: + Number of MPI processes = 3
00026: + Number of threads per MPI process = 3
00027: + Total number of threads therefore = 9
00028: + Leader (0) runs on host = cryoem01
00029: + Follower 1 runs on host = cryoem01
00030: + Follower 2 runs on host = cryoem01
00031: =================
00032: uniqueHost cryoem01 has 2 ranks.
00033: GPU-ids not specified for this rank, threads will automatically be mapped to available devices.
00034: Thread 0 on follower 1 mapped to device 0
00035: Thread 1 on follower 1 mapped to device 0
00036: Thread 2 on follower 1 mapped to device 0
00037: GPU-ids not specified for this rank, threads will automatically be mapped to available devices.
00038: Thread 0 on follower 2 mapped to device 1
00039: Thread 1 on follower 2 mapped to device 1
00040: Thread 2 on follower 2 mapped to device 1
00041: Running CPU instructions in double precision.
00042: + WARNING: Changing psi sampling rate (before oversampling) to 5.625 degrees, for more efficient GPU calculations
00043: + On host cryoem01: free scratch space = 448.252 Gb.
00044: Copying particles to scratch directory: /data1/new_scratch/relion_volatile/
00045: 000/??? sec ~~(,_,">   [oo]
00046: 1/ 60 sec ~~(,_,">in: /opt/Scipion3/software/em/relion-3.1.2/src/rwMRC.h, line 192
00047: ERROR:
00048: readMRC: Image number 11 exceeds stack size 1 of image 000011@Runs/001546_ProtRelionClassify2D/extra/input/1024562735536827037_FoilHole_1618719_Data_1621438_1621440_20200703_085118_Fractions_patch_aligned_doseweighted_particles.mrcs
00049: === Backtrace ===
00050: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN11RelionErrorC1ERKSsS1_l+0x41) [0x4786a1]
00051: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN5ImageIdE7readMRCElbRK8FileName+0x99f) [0x4b210f]
00052: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN5ImageIdE5_readERK8FileNameR13fImageHandlerblbb+0x17b) [0x4b407b]
00053: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN10Experiment22copyParticlesToScratchEibbd+0xda7) [0x5b8f87]
00054: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN14MlOptimiserMpi18initialiseWorkLoadEv+0x210) [0x498540]
00055: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(_ZN14MlOptimiserMpi10initialiseEv+0x9aa) [0x49ab2a]
00056: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi(main+0x55) [0x4322a5]
00057: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f657f51a555]
00058: /opt/Scipion3/software/em/relion-3.1.2/bin/relion_refine_mpi() [0x435fbf]
00059: ==================
00060: ERROR:
00061: readMRC: Image number 11 exceeds stack size 1 of image 000011@Runs/001546_ProtRelionClassify2D/extra/input/1024562735536827037_FoilHole_1618719_Data_1621438_1621440_20200703_085118_Fractions_patch_aligned_doseweighted_particles.mrcs
00062: application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
00063: Traceback (most recent call last):
00064: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 197, in run
00065: self._run()
00066: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 248, in _run
00067: resultFiles = self._runFunc()
00068: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 244, in _runFunc
00069: return self._func(*self._args)
00070: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/relion/protocols/protocol_base.py", line 811, in runRelionStep
00071: self.runJob(self._getProgram(), params)
00072: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/protocol.py", line 1388, in runJob
00073: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs)
00074: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/protocol/executor.py", line 65, in runJob
00075: process.runJob(log, programName, params,
00076: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/utils/process.py", line 52, in runJob
00077: return runCommand(command, env, cwd)
00078: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/site-packages/pyworkflow/utils/process.py", line 67, in runCommand
00079: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr,
00080: File "/usr/local/miniconda/envs/scipion3/lib/python3.8/subprocess.py", line 364, in check_call
00081: raise CalledProcessError(retcode, cmd)
00082: subprocess.CalledProcessError: Command ' mpirun -np 3 `which relion_refine_mpi` --i Runs/001546_ProtRelionClassify2D/input_particles.star --particle_diameter 690 --zero_mask --K 64 --norm --scale --o Runs/001546_ProtRelionClassify2D/extra/relion --oversampling 1 --flatten_solvent --tau2_fudge 2.0 --iter 25 --offset_range 5.0 --offset_step 2.0 --psi_step 10.0 --dont_combine_weights_via_disc --scratch_dir /data1/new_scratch/ --pool 3 --gpu --j 3' returned non-zero exit status 1.
00083: Protocol failed: Command ' mpirun -np 3 `which relion_refine_mpi` --i Runs/001546_ProtRelionClassify2D/input_particles.star --particle_diameter 690 --zero_mask --K 64 --norm --scale --o Runs/001546_ProtRelionClassify2D/extra/relion --oversampling 1 --flatten_solvent --tau2_fudge 2.0 --iter 25 --offset_range 5.0 --offset_step 2.0 --psi_step 10.0 --dont_combine_weights_via_disc --scratch_dir /data1/new_scratch/ --pool 3 --gpu --j 3' returned non-zero exit status 1.
00084: FAILED: runRelionStep, step 2, time 2021-05-18 12:33:29.230213
00085: *** Last status is failed
00086: ------------------- PROTOCOL FAILED (DONE 2/3)

Cryosparc (in SCIPION) requires a CTF to run.

That is where I am now. Perhaps there is a solution?

Sincerely,
Dmitry

On 18. May 2021, at 14:16, Pablo Conesa <pc...@cn...> wrote:

Dear Dmitry, the import of CS metadata files (*.cs) is not supported in Scipion. Does CS have an option to export to star files? It rings a bell.

On 18/5/21 9:53, Dmitry Semchonok wrote:

Dear Grigory,

The files are in mrc format.

Please let me try to explain the plan: I have a project in cryosparc, where I have cryosparc-selected 2D classes. I want to export the particles of those classes into SCIPION.

So I pressed Export (fig. 1) and the program (cryosparc) created a folder with mrc + other files (figs. 2, 3). I looked into J48 and found many *.mrc files with the particles. But it is not 1 mrc = 1 particle; each *.mrc seems to be a stack, with several images inside (fig. 4) (you can also notice that they all have different sizes).

So I need to get them into SCIPION somehow. For that I used the SCIPION import images protocol, where for the files to add I put *.mrc. But the protocol seems to have added each mrc as a single picture, and instead of having 46392 particles I have ~600 particles. (Also, the geometry seems not to be preserved.)

So my question is how to export the particles from cryosparc into SCIPION correctly?

Thank you!

https://disk.yandex.com/d/Fv3Q1lpwEzSisg

Sincerely,
Dmitry

On 17. May 2021, at 18:12, Grigory Sharov <sha...@gm...> wrote:

Hi Dmitry,

mrc stacks should have the "mrcs" extension. Is this the problem you are getting?

Best regards,
Grigory
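For the record, the rename itself is a one-liner per file. A sketch, with a placeholder folder name:

    # Give cryoSPARC's exported .mrc particle stacks the .mrcs extension
    # so import tools treat them as stacks. Sketch only.
    from pathlib import Path

    for path in sorted(Path("export").glob("*.mrc")):
        path.rename(path.with_suffix(".mrcs"))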
On Mon, May 17, 2021 at 4:49 PM Dmitry Semchonok <Sem...@gm...> wrote:

Dear colleagues,

I would like to export the particles from cryosparc to SCIPION. How can I do that?

What I tried:

1. In cryosparc I pressed Export to export the particles I am interested in.
2. In the Export folder I found many mrc stacks, with particles in each.
3. I tried to import them into SCIPION using "import particles", but instead of reading each stack and combining them into one dataset, I received 1 particle per mrc stack.

Any ideas?

Thank you

Sincerely,
Dmitry

_______________________________________________
scipion-users mailing list
sci...@li...
https://lists.sourceforge.net/lists/listinfo/scipion-users