From: Pablo C. <pc...@cn...> - 2018-04-06 13:51:38
|
IMPORTANT: This is different from the *2nd I2PC cryoEM Facilities Meeting, Madrid, June 26-27, 2018* <http://i2pc.es/2nd-i2pc-cryoem-facilities-meeting-madrid-june-26-27-2018/>, an event aimed at facility managers.

Dear Colleagues,

The Instruct Image Processing Center (I2PC) and FEI/ThermoFisher are organizing the First FEI-I2PC Cryo-EM Course on Scipion <http://scipion.i2pc.es> *for Facilities*, focused on the data management and processing needs of facilities and on how Scipion can serve them.

Over the last few years the Scipion team and our collaborators have been working on what we call streaming processing. We understand the importance of having feedback during acquisition, and this is precisely what the course will cover: from basic on-the-fly processing such as movie alignment and CTF estimation to automatic picking and extraction. Scipion is equipped with several consensus protocols that can help your automated pipeline screen and select the best micrographs, CTFs or particles, all with the benefit of Scipion's flexibility and "mix and match" capabilities. Scipion can also gather all the processing information and present it in an HTML report that can be served over the web and refreshed automatically. In addition, the Scipion API allows integration with your LIMS.

The currently released Scipion 1.2 supports streaming all the way to particle extraction, and during the course we will present ongoing work on 2D classification and initial volume estimation, in collaboration with the XMIPP team. General pipelines for Single Particle Analysis with Scipion will also be covered, as will running Scipion in the cloud.

We are targeting a maximum of 20 attendees, and we have 10 accommodation grants for students and senior researchers from Instruct countries.

Ideal attendee profile: an IT/developer person trying to automatically process cryo-EM (SPA) acquisition data.

Please find more information at: http://i2pc.es/instruct-i2pc-fei-facility-based-image-processing-for-electron-microscopy-madrid-june-27-29-2018/

Instruct countries: https://www.structuralbiology.eu/content/countries--instruct

We look forward to meeting you in Madrid!

The Instruct Image Processing Center team
From: Pablo C. <pc...@cn...> - 2018-04-05 06:02:20
|
Hi Abhisek, did you install relion, through scipion? The installation is exactly the same released by relion, but that way the compilation uses the same MPI libraries. Which MPI are you using and what version? All the best, Pablo. On 05/04/18 07:37, abhisek Mondal wrote: > Hi, > > I'm not being able to run relion of scipion. Despite re building the > scipion from source it keep crashing, however xmipp is working fine > with MPI processes. > > Error report: > > 00368: mpirun -np 2 -bynode `which relion_refine_mpi` --gpu > --tau2_fudge 2 --scale --dont_combine_weights_via_disc --iter 25 > --norm --psi_step 10.0 --ctf --offset_range 5.0 --oversampling 1 > --pool 3 --o Runs/002019_ProtRelionClassify2D/extra/relion --i > Runs/002019_ProtRelionClassify2D/input_particles.star > --particle_diameter 264.6 --K 30 --preread_images --flatten_solvent > --zero_mask --offset_step 2.0 --angpix 1.89 --j 2 > 00369: > -------------------------------------------------------------------------- > 00370: The following command line options and corresponding MCA > parameter have > 00371: been deprecated and replaced as follows: > 00372: > 00373: Command line options: > 00374: Deprecated: --bynode, -bynode > 00375: Replacement: --map-by node > 00376: > 00377: Equivalent MCA parameter: > 00378: Deprecated: rmaps_base_bynode > 00379: Replacement: rmaps_base_mapping_policy=node > 00380: > 00381: The deprecated forms *will* disappear in a future version of > Open MPI. > 00382: Please update to the new syntax. > 00383: > -------------------------------------------------------------------------- > 00384: [localhost.localdomain:12543] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c > at line 1005 > 00385: *** An error occurred in MPI_Init > 00386: *** on a NULL communicator > 00387: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will > now abort, > 00388: *** and potentially your MPI job) > 00389: [localhost.localdomain:12543] Local abort before MPI_INIT > completed completed successfully, but am not able to aggregate error > messages, and not able to guarantee that all other processes were killed! > 00390: [localhost.localdomain:12542] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c > at line 1005 > 00391: *** An error occurred in MPI_Init > 00392: *** on a NULL communicator > 00393: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will > now abort, > 00394: *** and potentially your MPI job) > 00395: [localhost.localdomain:12542] Local abort before MPI_INIT > completed completed successfully, but am not able to aggregate error > messages, and not able to guarantee that all other processes were killed! > 00396: ------------------------------------------------------- > 00397: Primary job terminated normally, but 1 process returned > 00398: a non-zero exit code.. Per user-direction, the job has been > aborted. > 00399: ------------------------------------------------------- > 00400: > -------------------------------------------------------------------------- > 00401: mpirun detected that one or more processes exited with > non-zero status, thus causing > 00402: the job to be terminated. The first process to do so was: > 00403: > 00404: Process name: [[57465,1],1] > 00405: Exit code: 1 > 00406: > -------------------------------------------------------------------------- > > Please suggest me a fix. > > Thank you. 
> > -- > Abhisek Mondal > /Senior Research Fellow > / > /Structural Biology and Bioinformatics Division > / > /CSIR-Indian Institute of Chemical Biology/ > /Kolkata 700032 > / > /INDIA > / > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users |
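Since the crash above happens inside MPI_Init, one way to follow up on Pablo's question is to compare the Open MPI installation that launches the job with the one the Relion binary was linked against. This is only an illustrative sketch; binary and path names may differ on your system.

    # Which MPI launcher is found first in PATH, and its version
    which mpirun
    mpirun --version

    # Which MPI libraries the Relion binary actually links against
    ldd "$(which relion_refine_mpi)" | grep -i mpi

If the launcher and the linked libraries come from different Open MPI installations (for example, a system-wide 3.x mpirun launching binaries built against another MPI), PMIX and MPI_Init failures like the ones above are a common symptom.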
From: abhisek M. <abh...@gm...> - 2018-04-05 05:37:25
|
Hi, I'm not able to run Relion from Scipion. Despite rebuilding Scipion from source it keeps crashing; Xmipp, however, is working fine with MPI processes.

Error report:

00368: mpirun -np 2 -bynode `which relion_refine_mpi` --gpu --tau2_fudge 2 --scale --dont_combine_weights_via_disc --iter 25 --norm --psi_step 10.0 --ctf --offset_range 5.0 --oversampling 1 --pool 3 --o Runs/002019_ProtRelionClassify2D/extra/relion --i Runs/002019_ProtRelionClassify2D/input_particles.star --particle_diameter 264.6 --K 30 --preread_images --flatten_solvent --zero_mask --offset_step 2.0 --angpix 1.89 --j 2
00369: --------------------------------------------------------------------------
00370: The following command line options and corresponding MCA parameter have
00371: been deprecated and replaced as follows:
00372:
00373: Command line options:
00374: Deprecated: --bynode, -bynode
00375: Replacement: --map-by node
00376:
00377: Equivalent MCA parameter:
00378: Deprecated: rmaps_base_bynode
00379: Replacement: rmaps_base_mapping_policy=node
00380:
00381: The deprecated forms *will* disappear in a future version of Open MPI.
00382: Please update to the new syntax.
00383: --------------------------------------------------------------------------
00384: [localhost.localdomain:12543] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00385: *** An error occurred in MPI_Init
00386: *** on a NULL communicator
00387: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
00388: *** and potentially your MPI job)
00389: [localhost.localdomain:12543] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
00390: [localhost.localdomain:12542] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00391: *** An error occurred in MPI_Init
00392: *** on a NULL communicator
00393: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
00394: *** and potentially your MPI job)
00395: [localhost.localdomain:12542] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
00396: -------------------------------------------------------
00397: Primary job terminated normally, but 1 process returned
00398: a non-zero exit code.. Per user-direction, the job has been aborted.
00399: -------------------------------------------------------
00400: --------------------------------------------------------------------------
00401: mpirun detected that one or more processes exited with non-zero status, thus causing
00402: the job to be terminated. The first process to do so was:
00403:
00404: Process name: [[57465,1],1]
00405: Exit code: 1
00406: --------------------------------------------------------------------------

Please suggest a fix.

Thank you.

--
Abhisek Mondal
Senior Research Fellow
Structural Biology and Bioinformatics Division
CSIR-Indian Institute of Chemical Biology
Kolkata 700032
INDIA
From: Pablo C. <pc...@cn...> - 2018-04-04 14:19:26
|
Dear EM Community,

The Instruct Image Processing Center (I2PC) in Madrid is organizing its second *Cryo-EM Facilities Meeting*, following the successful first meeting held last year. The aim is to gather cryo-EM *facility managers* who want to share their experience, problems and solutions on data handling and processing, whether they use Scipion or not.

I2PC, as developer of Scipion, has the improvement of the functionality that Scipion offers to cryo-EM facilities as one of its top priorities. We will present Scipion and how it is used in several facilities, such as Diamond/eBIC (UK), ESRF (FR and International), SciLifeLab (Sweden), NIH at NCI (US), NIH at NIEHS (US) and McGill University (Canada) (click here for a map of facilities using Scipion <https://www.google.com/maps/d/viewer?mid=1MHEnnhBsUarOGJnlo0BapQrrGtA&hl=es&usp=sharing>). FEI/Thermo-Fisher will also participate, in the context of the I2PC-FEI Image Processing Course on Scipion for Facilities that immediately follows this meeting.

We will present the Scipion 2018-2019 development road map during the meeting, so that facilities can comment on it and help improve it. More importantly, there will be ample time to discuss and present common facility data handling and processing problems, and hopefully to share with others how you run your facility in those respects.

For more details, please go here <http://i2pc.es/2nd-i2pc-cryoem-facilities-meeting-madrid-june-26-27-2018/>.

All the best,
Pablo.
Scipion team - I2PC center
From: Jose M. de la R. T. <del...@gm...> - 2018-04-04 08:47:11
|
Hi Abishek, Have you check that the OpenMPI variables in scipion/config/scipion.conf points to the same version/location that the OpenMPI used to compile Relion? Best, Jose Miguel On Wed, Apr 4, 2018 at 10:44 AM, abhisek Mondal <abh...@gm...> wrote: > Hi, > > I have recently upgraded my system to openmpi-3.0. But despite proper > installation and gpu integration I keep receiving this error, as I was also > receiving in openmpi-1.4: > > *$ mpirun -np 10 -bynode `which relion_preprocess_mpi` --i > input_micrographs.star --coord_dir "." --coord_suffix .coords.star > --part_star extra/output_particles.star --part_dir "." --extract > --extract_size 140 --bg_radius 52 --invert_contrast --norm* > > *00021: > --------------------------------------------------------------------------* > *00022: The following command line options and corresponding MCA > parameter have* > *00023: been deprecated and replaced as follows:* > *00024: * > *00025: Command line options:* > *00026: Deprecated: --bynode, -bynode* > *00027: Replacement: --map-by node* > *00028: * > *00029: Equivalent MCA parameter:* > *00030: Deprecated: rmaps_base_bynode* > *00031: Replacement: rmaps_base_mapping_policy=node* > *00032: * > *00033: The deprecated forms *will* disappear in a future version of > Open MPI.* > *00034: Please update to the new syntax.* > *00035: > --------------------------------------------------------------------------* > *00036: [localhost.localdomain:29946] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at > line 1005* > *00037: [localhost.localdomain:29951] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at > line 1005* > *00038: [localhost.localdomain:29948] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at > line 1005* > *00039: [localhost.localdomain:29952] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at > line 1005* > *00040: [localhost.localdomain:29944] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at > line 1005* > *00041: [localhost.localdomain:29950] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at > line 1005* > *00042: [localhost.localdomain:29949] PMIX ERROR: BAD-PARAM in file > ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at > line 1005* > *00043: *** An error occurred in MPI_Init* > *00044: *** on a NULL communicator* > *00045: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will > now abort,* > *00046: *** and potentially your MPI job)* > *00047: *** An error occurred in MPI_Init* > *00048: *** on a NULL communicator* > *00049: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will > now abort,* > *00050: *** and potentially your MPI job)* > *00051: *** An error occurred in MPI_Init* > *00052: *** on a NULL communicator* > *00053: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will > now abort,* > *00054: *** and potentially your MPI job)* > *00055: [localhost.localdomain:29952] Local abort before MPI_INIT > completed completed successfully, but am not able to aggregate error > messages, and not able to guarantee that all other processes were killed!* > > I'm not sure what is causing this crash. > > Please help me out. 
> > Thank you > > > -- > Abhisek Mondal > > *Senior Research Fellow* > > *Structural Biology and Bioinformatics Division* > *CSIR-Indian Institute of Chemical Biology* > > *Kolkata 700032* > > *INDIA* > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > > |
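As a follow-up to the check suggested above, the sketch below shows one way to act on it. It assumes the MPI settings live in scipion/config/scipion.conf as stated in the message, that your Scipion build provides the config and install subcommands, and the Relion package label is only an example.

    # Inspect the MPI-related entries Scipion was configured with
    grep -n -i mpi scipion/config/scipion.conf

    # If they point at a different Open MPI than the one used at run time,
    # fix the paths, re-check the configuration and rebuild Relion through
    # Scipion so both sides use the same library (package label is illustrative)
    ./scipion config
    ./scipion install relion-2.0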
From: abhisek M. <abh...@gm...> - 2018-04-04 08:44:46
|
Hi, I have recently upgraded my system to openmpi-3.0, but despite a proper installation and GPU integration I keep receiving this error, which I was also receiving with openmpi-1.4:

$ mpirun -np 10 -bynode `which relion_preprocess_mpi` --i input_micrographs.star --coord_dir "." --coord_suffix .coords.star --part_star extra/output_particles.star --part_dir "." --extract --extract_size 140 --bg_radius 52 --invert_contrast --norm

00021: --------------------------------------------------------------------------
00022: The following command line options and corresponding MCA parameter have
00023: been deprecated and replaced as follows:
00024:
00025: Command line options:
00026: Deprecated: --bynode, -bynode
00027: Replacement: --map-by node
00028:
00029: Equivalent MCA parameter:
00030: Deprecated: rmaps_base_bynode
00031: Replacement: rmaps_base_mapping_policy=node
00032:
00033: The deprecated forms *will* disappear in a future version of Open MPI.
00034: Please update to the new syntax.
00035: --------------------------------------------------------------------------
00036: [localhost.localdomain:29946] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00037: [localhost.localdomain:29951] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00038: [localhost.localdomain:29948] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00039: [localhost.localdomain:29952] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00040: [localhost.localdomain:29944] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00041: [localhost.localdomain:29950] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00042: [localhost.localdomain:29949] PMIX ERROR: BAD-PARAM in file ../../../../../../../opal/mca/pmix/pmix2x/pmix/src/dstore/pmix_esh.c at line 1005
00043: *** An error occurred in MPI_Init
00044: *** on a NULL communicator
00045: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
00046: *** and potentially your MPI job)
00047: *** An error occurred in MPI_Init
00048: *** on a NULL communicator
00049: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
00050: *** and potentially your MPI job)
00051: *** An error occurred in MPI_Init
00052: *** on a NULL communicator
00053: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
00054: *** and potentially your MPI job)
00055: [localhost.localdomain:29952] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

I'm not sure what is causing this crash. Please help me out.

Thank you

--
Abhisek Mondal
Senior Research Fellow
Structural Biology and Bioinformatics Division
CSIR-Indian Institute of Chemical Biology
Kolkata 700032
INDIA
From: Paul P. S. <ps...@pr...> - 2018-04-03 12:34:13
|
Dear Scipion,

I noticed that you are advertising a form of streaming data processing (https://github.com/I2PC/scipion/wiki/Streaming-Processing) through Scipion. We have not installed Scipion and have little familiarity with it, but we need to process data on the fly as it comes off our Titan Krios and is piped toward our supercomputing cluster. We are only just now coming online with our Krios and would like a better system than simply copying and pasting (we have the fiber-optic hardware in place and are exploring the possibility of using Globus nodes to speed up the data transfer even more).

Any information about this functionality within Scipion would be greatly appreciated. Thank you in advance.

All the best,
Paul
From: Pablo C. <pc...@cn...> - 2018-04-03 11:09:09
|
Dear EM community,

We are very pleased to announce the release of a new version of Scipion: v1.2. We have focused our efforts on improving the streaming functionality so that it works better in facilities. We have also updated the versions of some EM packages and made various bug fixes and enhancements. For more details, please go here <https://github.com/I2PC/scipion/wiki/Release-Notes#v12-2017-04-02-caligula>.

I want to thank all the people in our group (the I2PC center) for their contributions, and especially those NOT in our lab who are actively improving Scipion, whether as developers integrating EM packages/methods or as facilities beta-testing and running recent developments. For a more detailed list please see: http://scipion.i2pc.es/acknowledgements

Scipion currently targets 3 different kinds of users:

1. CryoEM users <https://github.com/I2PC/scipion/wiki/User-Documentation>, mainly SPA at the moment.
2. Facility managers <https://github.com/I2PC/scipion/wiki/Streaming-Processing> who run cryo-EM facilities and want to implement an automatic processing pipeline.
3. CryoEM methods developers <https://github.com/I2PC/scipion/wiki/Developers-Page>: yes! Scipion does not do image processing as such, but it talks nicely with software that does. Why not make your method available to all our user base? <http://scipion.i2pc.es/download_form>

Download link <http://scipion.i2pc.es/download_form> - How to install <https://github.com/I2PC/scipion/wiki/How-to-Install> - Contact us <https://github.com/I2PC/scipion/wiki/Contact-Us>

All the best,
Pablo
Scipion team
From: Carlos O. S. <co...@cn...> - 2018-03-23 05:32:20
|
Dear Teige and all, sorry for joining this thread so late. Are the duplicates coming from two independent pickings or coordinate extraction? If so, the consensus picking will be able to remove those coordinates that are closer than a given distance. If not, I agree with José Miguel that something new should be written/adapted. But in this latter case, I would like to understand how these duplicates appeared. Kind regards, Carlos Oscar > ---------- Forwarded message ---------- > From: *Jose Miguel de la Rosa Trevin* <del...@gm... > <mailto:del...@gm...>> > Date: Wed, Mar 21, 2018 at 2:57 PM > Subject: Re: [scipion-users] Scipion v1.1 difficulty continuing > relion2d & relating joined particle set to micrographs via .xmd > To: "Matthews-Palmer, Teige Rowan Seal" > <t.m...@im... > <mailto:t.m...@im...>> > Cc: "sci...@li... > <mailto:sci...@li...>" > <sci...@li... > <mailto:sci...@li...>> > > > Great Teige! > > By the way, regarding the box related discussion in ccpem...I think > the protocol picking consensus can not > be used to remove duplicates. I have added a protocol named 'picking > differences' that use a negative > set of coordinates to remove from another set....I think that I can > easily modify that one to also remove > very close coordinates within the single input set. > > Best, > Jose Miguel > > > On Wed, Mar 21, 2018 at 2:53 PM, Matthews-Palmer, Teige Rowan Seal > <t.m...@im... > <mailto:t.m...@im...>> wrote: > > Dear Jose Miguel, > > Again a delayed reply, sorry. > The 1.2 candidate branch did fix the issue. :-) > In my case the issue was only with loading lists of particle sets, > and not other types of objects. > > All the best, > Teige > >> On 1 Mar 2018, at 12:29, Jose Miguel de la Rosa Trevin >> <del...@gm... <mailto:del...@gm...>> wrote: >> >> No worries! >> >> Can you try the version 1.2 release candidate (branch >> release-1.1.facilities-devel)?? >> We fixed an important issue that caused what you describe....a >> very long time >> when selecting any object as input. Was in big projects and with >> many 2D classifications. >> But I can't remember if this fix was introduced in 1.1 or not. >> >> Cheers, >> Jose Miguel >> >> >> On Thu, Mar 1, 2018 at 1:25 PM, Matthews-Palmer, Teige Rowan Seal >> <t.m...@im... >> <mailto:t.m...@im...>> wrote: >> >> Hi Jose Miguel, >> >> Sorry my reply is so late. I think that the problem might >> have been caused by me interrupting the process of generating >> the sqlite files, by closing the results viewer. In my case >> they were taking a very long time, like 10 minutes, (the file >> is >1GB) and I didn’t know that the delay was because an >> sqlite file needed to be generated. >> >> I am also experiencing that loading the list of particle sets >> for input particles in any protocol run window, takes a very >> long time. It is taking more than 10 minutes to open the >> sub-window list of particle sets. This is the same on both >> duplicates of a project on two (powerful) machines. It seems >> to be because the project is quite large. Does this sound >> normal? Is there anything I could do about it..? Scipion >> version is 1.1. >> >> Thanks for your help. >> Teige >> >> >>> On 9 Feb 2018, at 09:16, Jose Miguel de la Rosa Trevin >>> <del...@gm... <mailto:del...@gm...>> >>> wrote: >>> >>> Hi Teige, >>> >>> Can you check on the run folder (under >>> ProjectFolder/Runs/0000ID_ProtRelion2D or similar) >>> in the extra folder, if there are zero-size sqlite files? 
>>> You could try to delete the generated >>> sqlite files there and try the Analyze Results button >>> again...still it is weird why it has failed >>> in your case. (and it seems to others as well as mentioned >>> by Pablo). >>> >>> Best, >>> Jose Miguel >>> >>> >>> On Thu, Feb 8, 2018 at 10:06 PM, Matthews-Palmer, Teige >>> Rowan Seal <t.m...@im... >>> <mailto:t.m...@im...>> wrote: >>> >>> Dear Jose, >>> >>> Thanks for the quick & very helpful reply. >>> >>> I had a problem with generating the scipion output with >>> the protocol viewer (see image). Only the last iteration >>> generated _size, but the classavg images are not >>> available. I tried “Show classification in Scipion” for >>> many other iterations and they all show classavgs but as >>> empty classes i.e. the other data is not generated >>> properly. I just used the classID to pick the right >>> subset and it seems to have worked, so I will continue >>> like you suggest, thanks. >>> >>> Thanks a lot for explaining how to properly continue a >>> relion2d run in scipion! >>> Also thanks for pointing out that micrographName is what >>> keeps track of the micrographs. I spent a bit of effort >>> trying to trace it in the star files and understand why >>> scipion doesn’t mind the non-unique micID, but I didn’t >>> work it out. :-) >>> >>> That was a great help, thanks again. >>> All the best, >>> Teige >>> >>> >>> >>>> On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin >>>> <del...@gm... >>>> <mailto:del...@gm...>> wrote: >>>> >>>> Dear Teige, >>>> >>>> Thanks for providing feedback about your use of >>>> Scipion...find below some answers to your questions. >>>> >>>> On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige >>>> Rowan Seal <t.m...@im... >>>> <mailto:t.m...@im...>> wrote: >>>> >>>> Dear Scipion Users, >>>> >>>> When running a large relion 2D classification, >>>> which fails unavoidably (this class of error: >>>> https://github.com/3dem/relion/issues/155 >>>> <https://github.com/3dem/relion/issues/155>) at a >>>> late iteration but has not reached the specified 25 >>>> iterations, scipion has not processed the relion >>>> output in its sqlite database (I’m guessing) and so >>>> when analysing the completed iterations, >>>> information is missing e.g. size=0 for all classes. >>>> Therefore subsets of particles cannot be selected >>>> to remove bad classes, re-extract with recentering >>>> and keep processing. >>>> This is effectively a roadblock to continue >>>> processing inside scipion. >>>> >>>> This is the default behaviour of many protocols >>>> wrapping iterative programs, Scipion only generate the >>>> output at the last iteration. We didn't wanted to >>>> generate output for every iteration because it will >>>> create many more objects and probably most of them are >>>> not desired by the user. So, when you click in the >>>> Analyze Results, you have an option to visualize the >>>> results in Scipion and convert the result of a given >>>> iteration to Scipion format. In this example, you could >>>> visualize the last iterate (option Show classification >>>> in Scipion) and from there you can create the subset of >>>> particles to continue. >>>> Then for the steps that you want to do, you should use >>>> the 'extract - coordinates' protocol where you provide >>>> the subset of particles and you can apply the shifts to >>>> re-center the particles. 
In your case, since your >>>> particles come from merging from two datasets of >>>> micrographs, you will need to join the micrographs to >>>> obtain a full set of micrographs and then extract the >>>> particles base on that joined set and using the >>>> original pixel size. >>>> >>>> Is there a way to get the scipion DB / GUI to >>>> recover the information from any given iteration? >>>> /(Tangentially, although running a relion command >>>> with —continue flag worked, in the scipion GUI the >>>> continue option failed, perhaps because ‘consider >>>> previous alignment’ was set to No? Scipion >>>> responded by starting at iteration 1, threatening >>>> to delete the successful iterations so far - which >>>> took ~2weeks! From there on scipion could not be >>>> persuaded to continue from iteration 18, and I do >>>> not know what would happen if I let it continue >>>> what looks like a restart.) >>>> / >>>> >>>> Here the thing that might be confusing is the Relion >>>> continue option and the Scipion continue ones, that are >>>> different. In Scipion, when you use Continue mode, it >>>> will try to continue from the last completed step, but >>>> in the case of Relion, the whole execution of the >>>> program is a big step, so Scipion continue does not >>>> work here. What you can do, is make a copy of the >>>> protocol and select the option in the form to 'Continue >>>> from a previous Run' (at the top of the form) and then >>>> provide the other run and the desired iteration (by >>>> default the last). I hope that I could clarify this >>>> instead of confusing you more. >>>> >>>> >>>> I can take the optimiser.star file to relion and >>>> make a subset selection, fine. >>>> However, now I would like to re-extract particles >>>> with recentering, and un-bin the particles later. >>>> These both seem to be difficult now. Especially >>>> because I joined two sets of particles before >>>> binning, and the MicrographIDs were *not* adjusted >>>> to be unique values in the unioned set. >>>> >>>> We noticed this problem at the begining of merging set >>>> of micrographs...so, you don't need to worry about the >>>> unique integer ID, which is re-generated when using the >>>> merge. We introduced a micName, that is kind of unique >>>> string identifier base on the filename. So, even if you >>>> import to set of movies and take particles from then, >>>> you can later join the micrographs (or movies) and >>>> re-extract the particles and we match the micName >>>> instead of the micID. >>>> >>>> If I import the subset particle.star back in to >>>> scipion, it generates outputMicrographs & >>>> outputParticles - the Micrograph info is wrong & >>>> can’t find the micrographs, it thinks there are >>>> 5615 micrographs, but this is wrong since there are >>>> now 1114 non-unique micIDs. >>>> Does anyone know how the relationship between >>>> particles in the joined set and their micrographs >>>> can be re-established? >>>> >>>> I recommend you here going the previously mentioned >>>> procedure...creating the subset - extracting >>>> coordinates (applying shifts) - extracting particles >>>> from the joined set of micrograhs. >>>> Anyway, you are right that the created SetOfMicrographs >>>> is not properly set in this case. I think that we have >>>> been using the import particles when importing >>>> completely from Relion, where we could generate a new >>>> SetOfMicrographs based on the .star files that make >>>> sense. 
I was trying to import some particles recently >>>> from Relion and trying to match to existing micrographs >>>> in the project and I found a bug because the micName >>>> was not properly set from the star file. So we need to >>>> fix this soon. >>>> >>>> >>>> for xmipp particle extraction, the image.xmd files >>>> (/star files) lose the connection to the >>>> micrographs because the _micrograph field >>>> is /tmp/[…]noDust.xmp rather than the .mrc output >>>> from motioncorr. Other than that there >>>> is _micrographId ; does anyone know where >>>> _micrographId is related to the corresponding >>>> micrograph? >>>> >>>> At this point I hope that you have a better idea of the >>>> relation of micId and micName , which is used in CTFs, >>>> particles and coordinates to match the corresponding >>>> micrograph. >>>> >>>> >>>> Any help is greatly appreciated. >>>> >>>> Please let me know if I could clarify some concepts and >>>> you can continue with your processing. Please do not >>>> hesitate to contact us if you have any other issue or >>>> feedback as well. >>>> >>>> All the best, >>>> Teige >>>> >>>> Kind regards >>>> Jose Miguel >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Check out the vibrant tech community on one of the >>>> world's most >>>> engaging tech sites, Slashdot.org >>>> <http://slashdot.org/>! http://sdm.link/slashdot >>>> _______________________________________________ >>>> scipion-users mailing list >>>> sci...@li... >>>> <mailto:sci...@li...> >>>> https://lists.sourceforge.net/lists/listinfo/scipion-users >>>> <https://lists.sourceforge.net/lists/listinfo/scipion-users> >>>> >>>> >>> >>> >> >> > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > <mailto:sci...@li...> > https://lists.sourceforge.net/lists/listinfo/scipion-users > <https://lists.sourceforge.net/lists/listinfo/scipion-users> > > > > > -- > Prof. Jose-Maria Carazo > Biocomputing Unit, Head, CNB-CSIC > Spanish National Center for Biotechnology > Darwin 3, Universidad Autonoma de Madrid > 28049 Madrid, Spain > > > Cell: +34639197980 -- ------------------------------------------------------------------------ Carlos Oscar Sánchez Sorzano e-mail: co...@cn... Biocomputing unit http://i2pc.es/coss National Center of Biotechnology (CSIC) c/Darwin, 3 Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 28049 MADRID (SPAIN) Fax: 34-91-585 4506 ------------------------------------------------------------------------ |
From: Jose M. de la R. T. <del...@gm...> - 2018-03-21 17:32:17
|
Hi Teige and Grigory, There is a handy script (scipion/scripts/fix_links.py) that can be used to restore the links of the import runs. You can run the script using scipion python scripts/fix_links.py and it should show the usage message: Usage: fixlinks.py PROJECT SEARCH_DIR PROJECT: provide the project name to fix broken links in the imports. SEARCH_DIR: provide a directory where to look for the files. and fix the links. Hope this helps, Jose Miguel On Wed, Mar 21, 2018 at 6:07 PM, Gregory Sharov <sha...@gm...> wrote: > Hi, > > yes, updating symlinks in extra folder should work. Or you could copy the > import movie protocol specifying new location (it also create symlinks ;). > Unless you have changed filenames when moving the data, it should also work. > > Best regards, > Grigory > > ------------------------------------------------------------ > -------------------- > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267542 <+44%201223%20267542> > e-mail: gs...@mr... > > On Wed, Mar 21, 2018 at 5:01 PM, Matthews-Palmer, Teige < > t.m...@im...> wrote: > >> Hi Grigory, >> >> Thanks a lot for the guidance. :-) >> In order to run step 1 on the same import movies input, I need to fix the >> symlinks to point to the new location of the original data. >> I imagine I can just replace the symlinks in >> Runs/xxxxProtImportMovies/extra/ with new, correct symlinks and Scipion >> should refer to those. >> Here goes. >> >> Thanks & best wishes, >> Teige >> >> On 21 Mar 2018, at 16:46, Gregory Sharov <sha...@GM...> >> wrote: >> >> Hi Teige, >> >> the usual way to do this would be: >> >> 1) copy motioncor2 protocol and run it with save movies=yes >> 2) extract coordinates from you refined subset (apply shifts = no) >> 3) extract movie particles using inputs (1) and (2) with apply movie >> alignments=no >> 4) continue with relion movie refinement and polishing or whatever you >> would like to do :) >> >> Best regards, >> Grigory >> >> ------------------------------------------------------------ >> -------------------- >> Grigory Sharov, Ph.D. >> >> MRC Laboratory of Molecular Biology, >> Francis Crick Avenue, >> Cambridge Biomedical Campus, >> Cambridge CB2 0QH, UK. >> tel. +44 (0) 1223 267542 <+44%201223%20267542> >> e-mail: gs...@mr... >> >> On Wed, Mar 21, 2018 at 4:02 PM, Matthews-Palmer, Teige < >> t.m...@im...> wrote: >> >>> Dear Scipion-users, >>> >>> I have a situation in Scipion where a dataset of movies was >>> motion-corrected with motioncor2, but aligned movies were not saved (at the >>> time, there was not storage space). Now the processing is at a late stage >>> and I would like to attempt movie refinement & particle polishing, so >>> aligned movie frames are necessary. >>> >>> I cannot restart the motioncorr protocols because they are referenced >>> from subsequent protocols. >>> I could run a new motioncorr protocol. >>> Does anyone know if I will be able to easily transfer co-ordinates and >>> angles from a late-stage set of particles to these new movies to make >>> movie-particles? >>> Will scipion - extract coordinates be able to extract the right >>> co-ordinates from aligned movies? Is there something else I need to bear in >>> mind to handle movie-particles in Scipion? 
>>> >>> Thanks for any help, >>> All the best >>> Teige >>> ------------------------------------------------------------ >>> ------------------ >>> Check out the vibrant tech community on one of the world's most >>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >>> _______________________________________________ >>> scipion-users mailing list >>> sci...@li... >>> https://lists.sourceforge.net/lists/listinfo/scipion-users >>> >> >> >> > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > > |
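For reference, an invocation of the script described above might look like this; the project name and search directory are placeholders, and the commands are run from the Scipion installation directory as in the quoted usage message.

    cd ~/scipion                                    # illustrative install location
    ./scipion python scripts/fix_links.py           # no arguments: prints the usage message
    ./scipion python scripts/fix_links.py MyProject /new/storage/raw_movies

Here MyProject is the name of the project whose import links are broken and /new/storage/raw_movies is the directory where the moved data now lives.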
From: Gregory S. <sha...@gm...> - 2018-03-21 17:08:17
|
Hi, yes, updating symlinks in extra folder should work. Or you could copy the import movie protocol specifying new location (it also create symlinks ;). Unless you have changed filenames when moving the data, it should also work. Best regards, Grigory -------------------------------------------------------------------------------- Grigory Sharov, Ph.D. MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge CB2 0QH, UK. tel. +44 (0) 1223 267542 <+44%201223%20267542> e-mail: gs...@mr... On Wed, Mar 21, 2018 at 5:01 PM, Matthews-Palmer, Teige < t.m...@im...> wrote: > Hi Grigory, > > Thanks a lot for the guidance. :-) > In order to run step 1 on the same import movies input, I need to fix the > symlinks to point to the new location of the original data. > I imagine I can just replace the symlinks in Runs/xxxxProtImportMovies/extra/ > with new, correct symlinks and Scipion should refer to those. > Here goes. > > Thanks & best wishes, > Teige > > On 21 Mar 2018, at 16:46, Gregory Sharov <sha...@GM...> wrote: > > Hi Teige, > > the usual way to do this would be: > > 1) copy motioncor2 protocol and run it with save movies=yes > 2) extract coordinates from you refined subset (apply shifts = no) > 3) extract movie particles using inputs (1) and (2) with apply movie > alignments=no > 4) continue with relion movie refinement and polishing or whatever you > would like to do :) > > Best regards, > Grigory > > ------------------------------------------------------------ > -------------------- > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267542 <+44%201223%20267542> > e-mail: gs...@mr... > > On Wed, Mar 21, 2018 at 4:02 PM, Matthews-Palmer, Teige < > t.m...@im...> wrote: > >> Dear Scipion-users, >> >> I have a situation in Scipion where a dataset of movies was >> motion-corrected with motioncor2, but aligned movies were not saved (at the >> time, there was not storage space). Now the processing is at a late stage >> and I would like to attempt movie refinement & particle polishing, so >> aligned movie frames are necessary. >> >> I cannot restart the motioncorr protocols because they are referenced >> from subsequent protocols. >> I could run a new motioncorr protocol. >> Does anyone know if I will be able to easily transfer co-ordinates and >> angles from a late-stage set of particles to these new movies to make >> movie-particles? >> Will scipion - extract coordinates be able to extract the right >> co-ordinates from aligned movies? Is there something else I need to bear in >> mind to handle movie-particles in Scipion? >> >> Thanks for any help, >> All the best >> Teige >> ------------------------------------------------------------ >> ------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> > > > |
From: Matthews-Palmer, T. <t.m...@im...> - 2018-03-21 17:01:48
|
Hi Grigory,

Thanks a lot for the guidance. :-)
In order to run step 1 on the same import movies input, I need to fix the symlinks to point to the new location of the original data.
I imagine I can just replace the symlinks in Runs/xxxxProtImportMovies/extra/ with new, correct symlinks and Scipion should refer to those.
Here goes.

Thanks & best wishes,
Teige

On 21 Mar 2018, at 16:46, Gregory Sharov <sha...@GM...> wrote:

Hi Teige,

the usual way to do this would be:

1) copy the motioncor2 protocol and run it with save movies=yes
2) extract coordinates from your refined subset (apply shifts = no)
3) extract movie particles using inputs (1) and (2) with apply movie alignments=no
4) continue with relion movie refinement and polishing or whatever you would like to do :)

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.
MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...

On Wed, Mar 21, 2018 at 4:02 PM, Matthews-Palmer, Teige <t.m...@im...> wrote:

Dear Scipion-users,

I have a situation in Scipion where a dataset of movies was motion-corrected with motioncor2, but the aligned movies were not saved (at the time, there was no storage space). Now the processing is at a late stage and I would like to attempt movie refinement & particle polishing, so aligned movie frames are necessary.

I cannot restart the motioncorr protocols because they are referenced from subsequent protocols. I could run a new motioncorr protocol.
Does anyone know if I will be able to easily transfer co-ordinates and angles from a late-stage set of particles to these new movies to make movie-particles?
Will scipion - extract coordinates be able to extract the right co-ordinates from aligned movies? Is there something else I need to bear in mind to handle movie-particles in Scipion?

Thanks for any help,
All the best
Teige
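A rough sketch of the manual approach Teige describes, for illustration only: it assumes the import run keeps one symlink per movie in its extra/ folder, and the project path, run ID and new data location are all made up. The fix_links.py script mentioned in Jose Miguel's message above automates essentially the same thing.

    # Repoint every symlink in the import run's extra/ folder at the new
    # location of the raw movies (all paths below are placeholders)
    NEW_DATA_DIR=/new/storage/raw_movies
    cd ~/ScipionUserData/projects/MyProject/Runs/000002_ProtImportMovies/extra

    for link in *; do
        [ -L "$link" ] || continue                  # only touch existing symlinks
        ln -sfn "$NEW_DATA_DIR/$link" "$link"       # same name, new target
    done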
From: Gregory S. <sha...@gm...> - 2018-03-21 16:47:17
|
Hi Teige,

the usual way to do this would be:

1) copy the motioncor2 protocol and run it with save movies=yes
2) extract coordinates from your refined subset (apply shifts = no)
3) extract movie particles using inputs (1) and (2) with apply movie alignments=no
4) continue with relion movie refinement and polishing or whatever you would like to do :)

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.
MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...

On Wed, Mar 21, 2018 at 4:02 PM, Matthews-Palmer, Teige <t.m...@im...> wrote:

> Dear Scipion-users,
>
> I have a situation in Scipion where a dataset of movies was motion-corrected with motioncor2, but the aligned movies were not saved (at the time, there was no storage space). Now the processing is at a late stage and I would like to attempt movie refinement & particle polishing, so aligned movie frames are necessary.
>
> I cannot restart the motioncorr protocols because they are referenced from subsequent protocols. I could run a new motioncorr protocol.
> Does anyone know if I will be able to easily transfer co-ordinates and angles from a late-stage set of particles to these new movies to make movie-particles?
> Will scipion - extract coordinates be able to extract the right co-ordinates from aligned movies? Is there something else I need to bear in mind to handle movie-particles in Scipion?
>
> Thanks for any help,
> All the best
> Teige
From: Matthews-Palmer, T. <t.m...@im...> - 2018-03-21 16:02:21
|
Dear Scipion-users,

I have a situation in Scipion where a dataset of movies was motion-corrected with motioncor2, but the aligned movies were not saved (at the time, there was no storage space). Now the processing is at a late stage and I would like to attempt movie refinement & particle polishing, so aligned movie frames are necessary.

I cannot restart the motioncorr protocols because they are referenced from subsequent protocols. I could run a new motioncorr protocol.
Does anyone know if I will be able to easily transfer co-ordinates and angles from a late-stage set of particles to these new movies to make movie-particles?
Will scipion - extract coordinates be able to extract the right co-ordinates from aligned movies? Is there something else I need to bear in mind to handle movie-particles in Scipion?

Thanks for any help,
All the best
Teige
From: Kyle D. <kyl...@ep...> - 2018-03-21 15:52:17
|
Hi Yaiza,

No problem at all. In fact, I am using this script and it works perfectly ;)
Thanks!
Kyle

On 03/21/2018 02:05 PM, Yaiza Rancel wrote:
> Hi again Kyle,
>
> Sorry! We already have a script to do this; it's in scripts/create_project.py, and you can use it like this:
>
> scipion python scripts/create_project.py name="session1234" workflow="path/to/your/workflow.json"
>
> Cheers,
> Yaiza
>
> On Wed, 21 March at 12:47 PM, Yaiza Rancel <su...@bc...> wrote:
>
> Hi Kyle,
>
> At the moment we don't have the option to do this straight away; however, this is done in Scipion for our tutorials and it shouldn't be very difficult to create a new script and adapt it to your needs. You can find the tutorials script in pyworkflow/apps/pw_tutorial.py. I recommend you launch the intro tutorial and check the script to see how it's done (to launch the tutorial do: ./scipion tutorial intro).
> I've attached here a simple version that creates a new project from a json and opens it; you can maybe use it as a starting point. To run it from your scipion directory you can do:
>
> ./scipion python path/to/create_project_from_json.py [PROJECT_NAME] [PATH_TO_JSON]
>
> Hope this helps, let us know if you have further questions.
>
> Yaiza Rancel
> Scipion Team
>
> On Fri, 9 March at 8:13 AM, Kyle Michael Douglass <kyl...@ep...> wrote:
>
> Hi all,
>
> First-time Scipion user here (version 1.1). How would I create a new project and import a workflow from a .json file from the command line?
>
> I ask because my collaborators are using Scipion as part of a larger data analysis pipeline that extends beyond EM. At one point in this pipeline we want to automatically launch the Scipion GUI with a new project and import a workflow that has already been specified in a .json file. I know how to do this manually with the GUI, but it would be nice to have the new project and workflow set up via a script so we can easily vary some of the analysis parameters.
>
> Thanks a lot for the help. I would be happy to provide any additional information if necessary.
>
> Cheers,
> Kyle
>
> Dr. Kyle M. Douglass
> Post-doctoral Researcher
> EPFL - The Laboratory of Experimental Biophysics
> http://leb.epfl.ch/
> http://kmdouglass.github.io

--
Kyle M. Douglass, PhD
Post-doctoral researcher
The Laboratory of Experimental Biophysics
EPFL, Lausanne, Switzerland
http://kmdouglass.github.io
http://leb.epfl.ch
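Kyle's use case above, creating the project non-interactively as one stage of a larger pipeline, could be wired up roughly as follows. This is only a sketch: the Scipion location and workflow path are placeholders, the create_project.py call is the one quoted above, and the last line assumes your Scipion version accepts a project name through the "project" launcher; drop it if yours does not.

    #!/usr/bin/env bash
    # Sketch: create a Scipion project from a workflow template, then open the GUI on it.
    SCIPION_HOME="$HOME/scipion"                   # placeholder install location
    WORKFLOW=/path/to/template_workflow.json       # placeholder workflow exported from the GUI
    SESSION="session_$(date +%Y%m%d_%H%M%S)"       # one project per acquisition session

    cd "$SCIPION_HOME"
    ./scipion python scripts/create_project.py name="$SESSION" workflow="$WORKFLOW"
    ./scipion project "$SESSION"                   # assumes this launcher form exists in your version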
From: Matthews-Palmer, T. R. S. <t.m...@im...> - 2018-03-21 14:17:33
|
Hi Jose Miguel, I think it would be a very good feature. Yong Zi Tan shared their python script for checking the distances of co-ordinates on ccpem, but it would be a nicer experience to run a protocol in Scipion! Thanks again Teige On 21 Mar 2018, at 13:57, Jose Miguel de la Rosa Trevin <del...@gm...<mailto:del...@gm...>> wrote: Great Teige! By the way, regarding the box related discussion in ccpem...I think the protocol picking consensus can not be used to remove duplicates. I have added a protocol named 'picking differences' that use a negative set of coordinates to remove from another set....I think that I can easily modify that one to also remove very close coordinates within the single input set. Best, Jose Miguel On Wed, Mar 21, 2018 at 2:53 PM, Matthews-Palmer, Teige Rowan Seal <t.m...@im...<mailto:t.m...@im...>> wrote: Dear Jose Miguel, Again a delayed reply, sorry. The 1.2 candidate branch did fix the issue. :-) In my case the issue was only with loading lists of particle sets, and not other types of objects. All the best, Teige On 1 Mar 2018, at 12:29, Jose Miguel de la Rosa Trevin <del...@gm...<mailto:del...@gm...>> wrote: No worries! Can you try the version 1.2 release candidate (branch release-1.1.facilities-devel)?? We fixed an important issue that caused what you describe....a very long time when selecting any object as input. Was in big projects and with many 2D classifications. But I can't remember if this fix was introduced in 1.1 or not. Cheers, Jose Miguel On Thu, Mar 1, 2018 at 1:25 PM, Matthews-Palmer, Teige Rowan Seal <t.m...@im...<mailto:t.m...@im...>> wrote: Hi Jose Miguel, Sorry my reply is so late. I think that the problem might have been caused by me interrupting the process of generating the sqlite files, by closing the results viewer. In my case they were taking a very long time, like 10 minutes, (the file is >1GB) and I didn’t know that the delay was because an sqlite file needed to be generated. I am also experiencing that loading the list of particle sets for input particles in any protocol run window, takes a very long time. It is taking more than 10 minutes to open the sub-window list of particle sets. This is the same on both duplicates of a project on two (powerful) machines. It seems to be because the project is quite large. Does this sound normal? Is there anything I could do about it..? Scipion version is 1.1. Thanks for your help. Teige On 9 Feb 2018, at 09:16, Jose Miguel de la Rosa Trevin <del...@gm...<mailto:del...@gm...>> wrote: Hi Teige, Can you check on the run folder (under ProjectFolder/Runs/0000ID_ProtRelion2D or similar) in the extra folder, if there are zero-size sqlite files? You could try to delete the generated sqlite files there and try the Analyze Results button again...still it is weird why it has failed in your case. (and it seems to others as well as mentioned by Pablo). Best, Jose Miguel On Thu, Feb 8, 2018 at 10:06 PM, Matthews-Palmer, Teige Rowan Seal <t.m...@im...<mailto:t.m...@im...>> wrote: Dear Jose, Thanks for the quick & very helpful reply. I had a problem with generating the scipion output with the protocol viewer (see image). Only the last iteration generated _size, but the classavg images are not available. I tried “Show classification in Scipion” for many other iterations and they all show classavgs but as empty classes i.e. the other data is not generated properly. I just used the classID to pick the right subset and it seems to have worked, so I will continue like you suggest, thanks. 
Thanks a lot for explaining how to properly continue a relion2d run in scipion! Also thanks for pointing out that micrographName is what keeps track of the micrographs. I spent a bit of effort trying to trace it in the star files and understand why scipion doesn’t mind the non-unique micID, but I didn’t work it out. :-) That was a great help, thanks again. All the best, Teige On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin <del...@gm...<mailto:del...@gm...>> wrote: Dear Teige, Thanks for providing feedback about your use of Scipion...find below some answers to your questions. On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal <t.m...@im...<mailto:t.m...@im...>> wrote: Dear Scipion Users, When running a large relion 2D classification, which fails unavoidably (this class of error: https://github.com/3dem/relion/issues/155) at a late iteration but has not reached the specified 25 iterations, scipion has not processed the relion output in its sqlite database (I’m guessing) and so when analysing the completed iterations, information is missing e.g. size=0 for all classes. Therefore subsets of particles cannot be selected to remove bad classes, re-extract with recentering and keep processing. This is effectively a roadblock to continue processing inside scipion. This is the default behaviour of many protocols wrapping iterative programs, Scipion only generate the output at the last iteration. We didn't wanted to generate output for every iteration because it will create many more objects and probably most of them are not desired by the user. So, when you click in the Analyze Results, you have an option to visualize the results in Scipion and convert the result of a given iteration to Scipion format. In this example, you could visualize the last iterate (option Show classification in Scipion) and from there you can create the subset of particles to continue. Then for the steps that you want to do, you should use the 'extract - coordinates' protocol where you provide the subset of particles and you can apply the shifts to re-center the particles. In your case, since your particles come from merging from two datasets of micrographs, you will need to join the micrographs to obtain a full set of micrographs and then extract the particles base on that joined set and using the original pixel size. Is there a way to get the scipion DB / GUI to recover the information from any given iteration? (Tangentially, although running a relion command with —continue flag worked, in the scipion GUI the continue option failed, perhaps because ‘consider previous alignment’ was set to No? Scipion responded by starting at iteration 1, threatening to delete the successful iterations so far - which took ~2weeks! From there on scipion could not be persuaded to continue from iteration 18, and I do not know what would happen if I let it continue what looks like a restart.) Here the thing that might be confusing is the Relion continue option and the Scipion continue ones, that are different. In Scipion, when you use Continue mode, it will try to continue from the last completed step, but in the case of Relion, the whole execution of the program is a big step, so Scipion continue does not work here. What you can do, is make a copy of the protocol and select the option in the form to 'Continue from a previous Run' (at the top of the form) and then provide the other run and the desired iteration (by default the last). I hope that I could clarify this instead of confusing you more. 
> I can take the optimiser.star file to relion and make a subset selection, fine. However, now I would like to re-extract particles with recentering, and un-bin the particles later. These both seem to be difficult now, especially because I joined two sets of particles before binning, and the MicrographIDs were not adjusted to be unique values in the unioned set.

We noticed this problem at the beginning, when merging sets of micrographs... so you don't need to worry about the unique integer ID, which is re-generated when using the merge. We introduced a micName, which is a kind of unique string identifier based on the filename. So, even if you import two sets of movies and take particles from them, you can later join the micrographs (or movies) and re-extract the particles, and we match on the micName instead of the micID.

> If I import the subset particle.star back into scipion, it generates outputMicrographs & outputParticles - the Micrograph info is wrong & can't find the micrographs; it thinks there are 5615 micrographs, but this is wrong since there are now 1114 non-unique micIDs. Does anyone know how the relationship between particles in the joined set and their micrographs can be re-established?

I recommend here following the previously mentioned procedure: creating the subset - extracting coordinates (applying shifts) - extracting particles from the joined set of micrographs. Anyway, you are right that the created SetOfMicrographs is not properly set in this case. I think that we have been using the import particles protocol when importing completely from Relion, where we could generate a new SetOfMicrographs based on the .star files, which makes sense. I was trying to import some particles recently from Relion, trying to match them to existing micrographs in the project, and I found a bug because the micName was not properly set from the star file. So we need to fix this soon.

> For xmipp particle extraction, the image.xmd files (/star files) lose the connection to the micrographs because the _micrograph field is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other than that there is _micrographId; does anyone know how _micrographId is related to the corresponding micrograph?

At this point I hope that you have a better idea of the relation of micId and micName, which are used in CTFs, particles and coordinates to match the corresponding micrograph.

> Any help is greatly appreciated.

Please let me know if I could clarify some concepts and you can continue with your processing. Please do not hesitate to contact us if you have any other issue or feedback as well.

> All the best,
> Teige

Kind regards,
Jose Miguel
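[Editor's note: as a purely illustrative aside on the micName idea above, a filename-derived key can re-link particles to their micrographs even when integer IDs have been regenerated. The field names and key normalisation below are made up for the example; the real matching happens inside Scipion's SetOfMicrographs/SetOfParticles objects.]

# Illustrative sketch of matching by a filename-derived key ("micName"-style)
# instead of an integer micrograph ID. Field names here are invented.
import os

def mic_key(path):
    # drop directories and extension so that run1/mic_0001.mrc and
    # /tmp/mic_0001_noDust.xmp can be reduced to comparable keys
    return os.path.splitext(os.path.basename(path))[0]

micrographs = {mic_key(p): p for p in ["run1/mic_0001.mrc", "run2/mic_0002.mrc"]}
particles = [{"micPath": "/tmp/mic_0001_noDust.xmp", "x": 100, "y": 200}]

for part in particles:
    key = mic_key(part["micPath"]).replace("_noDust", "")  # assumed suffix cleanup
    print(part, "->", micrographs.get(key, "no matching micrograph"))
|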
From: Jose M. de la R. T. <del...@gm...> - 2018-03-21 13:57:57
|
Great Teige!

By the way, regarding the box-related discussion in ccpem... I think the picking consensus protocol cannot be used to remove duplicates. I have added a protocol named 'picking differences' that uses a negative set of coordinates to remove from another set... I think that I can easily modify that one to also remove very close coordinates within a single input set.

Best,
Jose Miguel
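[Editor's note: a rough sketch of the distance-based filtering being discussed here, with plain (x, y) tuples and an arbitrary threshold. It is only the idea, not the actual 'picking differences' protocol.]

# Sketch of removing near-duplicate picks: keep a coordinate only if it is at
# least `min_dist` pixels away from every coordinate kept so far.

def remove_close_coordinates(coords, min_dist=20):
    kept = []
    min_sq = min_dist * min_dist
    for x, y in coords:
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_sq for kx, ky in kept):
            kept.append((x, y))
    return kept

picks = [(100, 100), (105, 102), (400, 380)]
print(remove_close_coordinates(picks))  # -> [(100, 100), (400, 380)]

For large coordinate sets a KD-tree (e.g. scipy.spatial.cKDTree) would avoid the quadratic comparison, but the threshold logic stays the same.
|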
From: Matthews-Palmer, T. R. S. <t.m...@im...> - 2018-03-21 13:53:46
|
Dear Jose Miguel,

Again a delayed reply, sorry. The 1.2 candidate branch did fix the issue. :-) In my case the issue was only with loading lists of particle sets, and not other types of objects.

All the best,
Teige
|
From: Yaiza R. <su...@bc...> - 2018-03-21 13:05:49
|
Hi again Kyle,

Sorry! We already have a script to do this; it's in scripts/create_project.py, and you can use it like this:

scipion python scripts/create_project.py name="session1234" workflow="path/to/your/workflow.json"

Cheers,
Yaiza
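[Editor's note: if project creation needs to be triggered from a larger pipeline, the command above can be wrapped in a small script. The sketch below is illustrative only; the scipion launcher path and argument values are placeholders, and only the create_project.py invocation itself comes from this thread.]

# Minimal sketch: kick off Scipion project creation from an external pipeline.
import subprocess

def create_scipion_project(scipion_launcher, project_name, workflow_json):
    cmd = [scipion_launcher, "python", "scripts/create_project.py",
           "name=%s" % project_name, "workflow=%s" % workflow_json]
    subprocess.check_call(cmd)  # raises CalledProcessError if Scipion exits with an error

if __name__ == "__main__":
    create_scipion_project("/opt/scipion/scipion", "session1234",
                           "/data/templates/workflow.json")

Because check_call raises on a non-zero exit status, the surrounding pipeline can stop early if the project could not be created.
|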
From: Yaiza R. <su...@bc...> - 2018-03-21 11:47:35
|
Hi Kyle,

At the moment we don't have the option to do this straight away; however, this is done in Scipion for our tutorials and it shouldn't be very difficult to create a new script and adapt it to your needs. You can find the tutorials script in pyworkflow/apps/pw_tutorial.py. I recommend you launch the intro tutorial and check the script to see how it's done (to launch the tutorial do: ./scipion tutorial intro).

I've attached here a simple version that creates a new project from a json and opens it; you can maybe use it as a starting point. To run it from your scipion directory you can do:

./scipion python path/to/create_project_from_json.py [PROJECT_NAME] [PATH_TO_JSON]

Hope this helps, let us know if you have further questions.

Yaiza Rancel
Scipion Team

On Fri, 9 March at 8:13 AM, Kyle Michael Douglass <kyl...@ep...> wrote:

Hi all,

First-time Scipion user here (version 1.1). How would I create a new project and import a workflow from a .json file from the command line?

I ask because my collaborators are using Scipion as part of a larger data analysis pipeline that extends beyond EM. At one point in this pipeline we want to automatically launch the Scipion GUI with a new project and import a workflow that has already been specified in a .json file. I know how to do this manually with the GUI, but it would be nice to have the new project and workflow set up via a script so we can easily vary some of the analysis parameters.

Thanks a lot for the help. I would be happy to provide any additional information if necessary.

Cheers,
Kyle

Dr. Kyle M. Douglass
Post-doctoral Researcher
EPFL - The Laboratory of Experimental Biophysics
http://leb.epfl.ch/
http://kmdouglass.github.io
|
From: Pablo C. <pc...@cn...> - 2018-03-09 15:39:44
|
Excellent! Just keep us in the loop if you struggle with something.
|
From: Kyle D. <kyl...@ep...> - 2018-03-09 14:00:29
|
Dear Pablo and Jose Miguel,

Thank you very much for your help. The ability to generate a project and workflow based on a JSON template is exactly what I was looking for. I should furthermore be able to figure out the Python API from the .py files you mentioned.

Cheers,
Kyle

--
Kyle M. Douglass, PhD
Post-doctoral researcher
The Laboratory of Experimental Biophysics
EPFL, Lausanne, Switzerland
http://kmdouglass.github.io
http://leb.epfl.ch
|
From: Pablo C. <pc...@cn...> - 2018-03-09 12:41:01
|
Thanks Jose... additionally you can have a look at this:

https://github.com/I2PC/scipion/wiki/Streaming-Processing#kicking-off-scipion-in-your-pipeline

I've just added it today, since I had to explain this to a facility manager who had the same/similar question.

Feel free to ask; this is a first version and it could maybe be made clearer.
|
From: Jose M. de la R. T. <del...@gm...> - 2018-03-09 10:40:25
|
Hi Kyle,

Unfortunately we haven't documented the API that well... this is a pending task. Nonetheless, you have some developer documentation here:
https://github.com/I2PC/scipion/wiki/Developers-Page
I would recommend taking a look at the "How to create a protocol" page:
https://github.com/I2PC/scipion/wiki/Creating-a-Protocol

I would also recommend looking at the Python code, especially the following:
- Object (basic wrapper around Python types that can be stored/retrieved): pyworkflow/object.py
- Protocol (basic execution unit): pyworkflow/protocol/protocol.py
- Project (managing a given project): pyworkflow/project.py
- Manager (to deal with projects): pyworkflow/manager.py

Feel free to provide any feedback,
Best,
Jose Miguel

On Fri, Mar 9, 2018 at 11:15 AM, Kyle Douglass <kyl...@ep...> wrote:

> Hi Jose Miguel,
>
> On 03/09/2018 09:23 AM, Jose Miguel de la Rosa Trevin wrote:
>
>> You can easily do what you want in a Python script via our API.
>> Basically you will need to create a new project and then load the
>> given .json file. Take a look at the following example here:
>> https://github.com/delarosatrevin/scipion-session/blob/f2ac1934f24ebcf0c3da205e9878eca86e4d80a4/aarhus/new-session-aarhus.py#L60
>
> Thanks for the swift response! Indeed, I think that this is the solution I am looking for.
>
> Is the Python API documented anywhere, or should I focus on extracting the main functions by looking at the Python code itself?
>
> Thanks again,
> Kyle
>
> --
> Kyle M. Douglass, PhD
> Post-doctoral researcher
> The Laboratory of Experimental Biophysics
> EPFL, Lausanne, Switzerland
> http://kmdouglass.github.io
> http://leb.epfl.ch
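[Editor's note: a small, hedged illustration of the Object layer listed above. The class and method names (String, Integer, get, set) reflect a reading of pyworkflow/object.py and should be checked against your Scipion version.]

# Run with Scipion's interpreter (./scipion python this_script.py) so that
# pyworkflow is importable. String/Integer wrap plain values in objects that
# Scipion can persist; get()/set() move between wrapper and Python value.
from pyworkflow.object import Integer, String

label = String("relion 2D classification, run 3")
iterations = Integer(25)

iterations.set(30)                    # update the wrapped value
print(label.get(), iterations.get())  # unwrap back to plain Python types
|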
From: Jose M. de la R. T. <del...@gm...> - 2018-03-09 08:24:07
|
Hi Kyle,

You can easily do what you want in a Python script via our API. Basically you will need to create a new project and then load the given .json file. Take a look at the following example here:
https://github.com/delarosatrevin/scipion-session/blob/f2ac1934f24ebcf0c3da205e9878eca86e4d80a4/aarhus/new-session-aarhus.py#L60

I also wrote a simple script to edit .json files in the GUI (basically faking a project in a temporary folder and then saving the .json back when closing the window) that might also be useful for you. It is in the branch release-1.1.facilities-devel, which will be the next stable release 1.2. Follow the link:
https://github.com/I2PC/scipion/blob/release-1.1.facilities-devel/scripts/edit_workflow.py

Hope this helps,
Best,
Jose Miguel

On Fri, Mar 9, 2018 at 8:12 AM, Kyle Michael Douglass <kyl...@ep...> wrote:

> Hi all,
>
> First-time Scipion user here (version 1.1). How would I create a new project and import a workflow from a .json file from the command line?
>
> I ask because my collaborators are using Scipion as part of a larger data analysis pipeline that extends beyond EM. At one point in this pipeline we want to automatically launch the Scipion GUI with a new project and import a workflow that has already been specified in a .json file. I know how to do this manually with the GUI, but it would be nice to have the new project and workflow set up via a script so we can easily vary some of the analysis parameters.
>
> Thanks a lot for the help. I would be happy to provide any additional information if necessary.
>
> Cheers,
> Kyle
>
> Dr. Kyle M. Douglass
> Post-doctoral Researcher
> EPFL - The Laboratory of Experimental Biophysics
> http://leb.epfl.ch/
> http://kmdouglass.github.io
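[Editor's note: in the spirit of the linked example, the "create a project, then load the .json" approach could look roughly like the sketch below. The method names createProject() and loadProtocols() are assumptions drawn from a reading of pyworkflow/manager.py and pyworkflow/project.py and should be verified against your Scipion version; run it with Scipion's own interpreter, e.g. ./scipion python create_from_json.py session1234 workflow.json.]

# Hedged sketch: create a new Scipion project and import a workflow .json.
# createProject() and loadProtocols() are assumed method names; check them
# against pyworkflow/manager.py and pyworkflow/project.py before relying on this.
import sys
from pyworkflow.manager import Manager

def create_project_from_json(project_name, workflow_json):
    manager = Manager()                            # locates Scipion's projects folder
    project = manager.createProject(project_name)  # assumed API: see manager.py
    project.loadProtocols(workflow_json)           # assumed API: see project.py
    return project

if __name__ == "__main__":
    create_project_from_json(sys.argv[1], sys.argv[2])
|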