From: Kyle M. D. <kyl...@ep...> - 2018-03-09 07:12:58
|
Hi all, First-time Scipion user here (version 1.1). How would I create a new project and import a workflow from a .json file from the command line? I ask because my collaborators are using Scipion as part of a larger data analysis pipeline that extends beyond EM. At one point in this pipeline we want to automatically launch the Scipion GUI with a new project and import a workflow that has already been specified in a .json file. I know how to do this manually with the GUI, but it would be nice to have the new project and workflow set up via a script so we can easily vary some of the analysis parameters. Thanks a lot for the help. I would be happy to provide any additional information if necessary. Cheers, Kyle Dr. Kyle M. Douglass Post-doctoral Researcher EPFL - The Laboratory of Experimental Biophysics http://leb.epfl.ch/ http://kmdouglass.github.io |
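A minimal sketch of the kind of launcher script being asked about, assuming the pyworkflow package bundled with Scipion exposes Manager.createProject() and Project.loadProtocols() roughly as their names suggest (both names, and the script filename, are assumptions to verify against the installed version); it would be executed with Scipion's own interpreter:

    # make_project.py -- hypothetical helper, run as: scipion python make_project.py
    # Assumes pyworkflow's Manager/Project API in Scipion 1.x; verify the exact names locally.
    from pyworkflow.manager import Manager

    manager = Manager()                                 # access point to Scipion projects (assumed)
    project = manager.createProject('my_new_project')   # create the empty project (assumed)
    project.loadProtocols('my_workflow.json')           # import the workflow exported to .json (assumed)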
From: Jesper L. K. <je...@mb...> - 2018-03-07 08:27:03
|
Thanks! //Jesper ---------------------------------------------- Jesper Lykkegaard Karlsen Scientific Computing Centre for Structural Biology Department of Molecular Biology and Genetics Aarhus University Gustav Wieds Vej 10 8000 Aarhus C E-mail: je...@mb... Tlf: +45 50906203 On 2018-03-06 09:10, Pablo Conesa wrote: > Good point Jesper > > The git branch is: > > release-1.1.facilities-devel > > https://github.com/I2PC/scipion/tree/release-1.1.facilities-devel?files=1 > > I don't expect more changes apart from one or fixing small test issues. > |
From: Jose M. de la R. T. <del...@gm...> - 2018-03-05 12:36:18
|
Hi Jesper, I'm sorry for this partial information. The branch related to the v1.2 release candidate is: *release-1.1.facilities-devel* Hope this helps, Best, Jose Miguel On Mon, Mar 5, 2018 at 12:34 PM, Jesper Lykkegaard Karlsen <je...@mb...> wrote: > Hi all, > > I January version 2.1 beta og Scipion was announced on the [3DEM] mail > list (not on this mail list?). > > Installations guide referred to some download links, but no instruction on > how to get this version through git. > > There is no release-1.2...... in the git repo, so what branch does this > V1.2b correspond to on https://github.com/I2PC/scipion.git ? devel? > > Thanks > > //Jesper > > -- > ---------------------------------------------- > Jesper Lykkegaard Karlsen > Scientific Computing > Centre for Structural Biology > Department of Molecular Biology and Genetics > Aarhus University > Gustav Wieds Vej 10 > 8000 Aarhus C > > E-mail: je...@mb... > Tlf: +45 50906203 > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > |
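For anyone following along, checking out that release-candidate branch is plain git usage; the repository URL and branch name are the ones given in this thread:

    git clone https://github.com/I2PC/scipion.git
    cd scipion
    git checkout release-1.1.facilities-devel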
From: Jesper L. K. <je...@mb...> - 2018-03-05 11:34:36
|
Hi all, In January version 1.2 beta of Scipion was announced on the [3DEM] mail list (not on this mail list?). The installation guide referred to some download links, but gave no instruction on how to get this version through git. There is no release-1.2...... in the git repo, so what branch does this V1.2b correspond to on https://github.com/I2PC/scipion.git ? devel? Thanks //Jesper -- ---------------------------------------------- Jesper Lykkegaard Karlsen Scientific Computing Centre for Structural Biology Department of Molecular Biology and Genetics Aarhus University Gustav Wieds Vej 10 8000 Aarhus C E-mail: je...@mb... Tlf: +45 50906203 |
From: Gregory S. <sha...@gm...> - 2018-03-01 17:47:31
|
Oh, I missed the last part of your message. Scipion viewer by default applies particle orientations when displaying, so you might not see the difference between particles before and after running xmipp apply alignment protocol. Best regards, Grigory -------------------------------------------------------------------------------- Grigory Sharov, Ph.D. MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge CB2 0QH, UK. tel. +44 (0) 1223 267542 <+44%201223%20267542> e-mail: gs...@mr... On Thu, Mar 1, 2018 at 5:21 PM, Gregory Sharov <sha...@gm...> wrote: > Hello Teige, > > there are a couple of things you should consider: 1) 2d classification in > relion will always start from random angles unless you selected option > "Consider previous alignment" (which in old code would set priors=angles); > 2) before applying the mask you should use "xmipp-apply alignment 2d" > protocol to produce a new stack of aligned particles. Alignment assign > protocol will only create a "link" between particle set and aligments. > > So the workflow in your case would probably be: relion 3D autorefine -> > relion2D without search (consider prev. alignment=yes) -> (output classes > are therefore oriented as per the 3d map) -> take the side-view class and > create a mask -> create particle subset from relion2D of side-view class -> > xmipp3 apply alignment to subset --> xmipp3 apply 2d mask to the previous > output > > Best regards, > Grigory > > ------------------------------------------------------------ > -------------------- > Grigory Sharov, Ph.D. > > MRC Laboratory of Molecular Biology, > Francis Crick Avenue, > Cambridge Biomedical Campus, > Cambridge CB2 0QH, UK. > tel. +44 (0) 1223 267542 <+44%201223%20267542> > e-mail: gs...@mr... > > On Thu, Mar 1, 2018 at 4:56 PM, Matthews-Palmer, Teige < > t.m...@im...> wrote: > >> Dear Scipion Users, >> >> I have a situation where a domain of my object does not align in 2D or 3D >> (some failed attempts of density subtraction & masked refinement). >> I have side-views where this domain is separate (not overlapping) from >> the rest of the object. >> What I would like to do is to apply a fairly tight 2D mask around this >> domain to the particle images. The xmipp programs in scipion seem to >> provide a convenient way, however I cannot get it to work right yet. >> >> My workflow is: relion 3D autorefine -> relionc2D without search -> >> (output classes are therefore oriented as per the 3d map) -> take the >> side-view class and create a mask -> import mask -> create particle subset >> from relionc2D of side-view class -> xmipp3 apply 2d mask >> The outcome is that the mask is rotated in the output particles, but the >> transformations are wrong (not even close). If I perform relionc2D without >> search on the unmasked side-view particle subset, they do not recreate the >> class average so they must not have the alignments assigned. >> >> If I do instead: >> create particle subset from relionc2D of side-view class -> scipion >> alignment assign using relion 3D autorefine as input alignments -> xmipp >> apply 2d mask >> Then, checking the particle subset, it can recreate the class average >> without searching, so it has the alignments, but the mask is not rotated at >> all in the output particles. >> >> So I tried xmipp3 apply alignment 2d to see if it will actually perform a >> transformation and interpolation on the particle images and create a new >> particle stack, but the output particles are unchanged. 
>> >> Does anyone have experience of 2D masking and have a workflow that works? >> Any help greatly appreciated. >> All the best, >> Teige >> >> >> >> ------------------------------------------------------------ >> ------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> > > |
From: Gregory S. <sha...@gm...> - 2018-03-01 17:22:20
|
Hello Teige, there are a couple of things you should consider: 1) 2d classification in relion will always start from random angles unless you selected option "Consider previous alignment" (which in old code would set priors=angles); 2) before applying the mask you should use "xmipp-apply alignment 2d" protocol to produce a new stack of aligned particles. Alignment assign protocol will only create a "link" between particle set and aligments. So the workflow in your case would probably be: relion 3D autorefine -> relion2D without search (consider prev. alignment=yes) -> (output classes are therefore oriented as per the 3d map) -> take the side-view class and create a mask -> create particle subset from relion2D of side-view class -> xmipp3 apply alignment to subset --> xmipp3 apply 2d mask to the previous output Best regards, Grigory -------------------------------------------------------------------------------- Grigory Sharov, Ph.D. MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge CB2 0QH, UK. tel. +44 (0) 1223 267542 <+44%201223%20267542> e-mail: gs...@mr... On Thu, Mar 1, 2018 at 4:56 PM, Matthews-Palmer, Teige < t.m...@im...> wrote: > Dear Scipion Users, > > I have a situation where a domain of my object does not align in 2D or 3D > (some failed attempts of density subtraction & masked refinement). > I have side-views where this domain is separate (not overlapping) from the > rest of the object. > What I would like to do is to apply a fairly tight 2D mask around this > domain to the particle images. The xmipp programs in scipion seem to > provide a convenient way, however I cannot get it to work right yet. > > My workflow is: relion 3D autorefine -> relionc2D without search -> > (output classes are therefore oriented as per the 3d map) -> take the > side-view class and create a mask -> import mask -> create particle subset > from relionc2D of side-view class -> xmipp3 apply 2d mask > The outcome is that the mask is rotated in the output particles, but the > transformations are wrong (not even close). If I perform relionc2D without > search on the unmasked side-view particle subset, they do not recreate the > class average so they must not have the alignments assigned. > > If I do instead: > create particle subset from relionc2D of side-view class -> scipion > alignment assign using relion 3D autorefine as input alignments -> xmipp > apply 2d mask > Then, checking the particle subset, it can recreate the class average > without searching, so it has the alignments, but the mask is not rotated at > all in the output particles. > > So I tried xmipp3 apply alignment 2d to see if it will actually perform a > transformation and interpolation on the particle images and create a new > particle stack, but the output particles are unchanged. > > Does anyone have experience of 2D masking and have a workflow that works? > Any help greatly appreciated. > All the best, > Teige > > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > |
From: Matthews-Palmer, T. <t.m...@im...> - 2018-03-01 16:56:21
|
Dear Scipion Users, I have a situation where a domain of my object does not align in 2D or 3D (some failed attempts of density subtraction & masked refinement). I have side-views where this domain is separate (not overlapping) from the rest of the object. What I would like to do is to apply a fairly tight 2D mask around this domain to the particle images. The xmipp programs in scipion seem to provide a convenient way, however I cannot get it to work right yet. My workflow is: relion 3D autorefine -> relionc2D without search -> (output classes are therefore oriented as per the 3d map) -> take the side-view class and create a mask -> import mask -> create particle subset from relionc2D of side-view class -> xmipp3 apply 2d mask The outcome is that the mask is rotated in the output particles, but the transformations are wrong (not even close). If I perform relionc2D without search on the unmasked side-view particle subset, they do not recreate the class average so they must not have the alignments assigned. If I do instead: create particle subset from relionc2D of side-view class -> scipion alignment assign using relion 3D autorefine as input alignments -> xmipp apply 2d mask Then, checking the particle subset, it can recreate the class average without searching, so it has the alignments, but the mask is not rotated at all in the output particles. So I tried xmipp3 apply alignment 2d to see if it will actually perform a transformation and interpolation on the particle images and create a new particle stack, but the output particles are unchanged. Does anyone have experience of 2D masking and have a workflow that works? Any help greatly appreciated. All the best, Teige |
From: Jose M. de la R. T. <del...@gm...> - 2018-03-01 12:44:54
|
...apart from this bug-fix, there are many others for this new release, together with more features. On Thu, Mar 1, 2018 at 1:29 PM, Jose Miguel de la Rosa Trevin < del...@gm...> wrote: > No worries! > > Can you try the version 1.2 release candidate (branch > release-1.1.facilities-devel)?? > We fixed an important issue that caused what you describe....a very long > time > when selecting any object as input. Was in big projects and with many 2D > classifications. > But I can't remember if this fix was introduced in 1.1 or not. > > Cheers, > Jose Miguel > > > On Thu, Mar 1, 2018 at 1:25 PM, Matthews-Palmer, Teige Rowan Seal < > t.m...@im...> wrote: > >> Hi Jose Miguel, >> >> Sorry my reply is so late. I think that the problem might have been >> caused by me interrupting the process of generating the sqlite files, by >> closing the results viewer. In my case they were taking a very long time, >> like 10 minutes, (the file is >1GB) and I didn’t know that the delay was >> because an sqlite file needed to be generated. >> >> I am also experiencing that loading the list of particle sets for input >> particles in any protocol run window, takes a very long time. It is taking >> more than 10 minutes to open the sub-window list of particle sets. This is >> the same on both duplicates of a project on two (powerful) machines. It >> seems to be because the project is quite large. Does this sound normal? Is >> there anything I could do about it..? Scipion version is 1.1. >> >> Thanks for your help. >> Teige >> >> >> On 9 Feb 2018, at 09:16, Jose Miguel de la Rosa Trevin < >> del...@gm...> wrote: >> >> Hi Teige, >> >> Can you check on the run folder (under ProjectFolder/Runs/0000ID_ProtRelion2D >> or similar) >> in the extra folder, if there are zero-size sqlite files? You could try >> to delete the generated >> sqlite files there and try the Analyze Results button again...still it is >> weird why it has failed >> in your case. (and it seems to others as well as mentioned by Pablo). >> >> Best, >> Jose Miguel >> >> >> On Thu, Feb 8, 2018 at 10:06 PM, Matthews-Palmer, Teige Rowan Seal < >> t.m...@im...> wrote: >> >>> Dear Jose, >>> >>> Thanks for the quick & very helpful reply. >>> >>> I had a problem with generating the scipion output with the protocol >>> viewer (see image). Only the last iteration generated _size, but the >>> classavg images are not available. I tried “Show classification in Scipion” >>> for many other iterations and they all show classavgs but as empty classes >>> i.e. the other data is not generated properly. I just used the classID to >>> pick the right subset and it seems to have worked, so I will continue like >>> you suggest, thanks. >>> >>> Thanks a lot for explaining how to properly continue a relion2d run in >>> scipion! >>> Also thanks for pointing out that micrographName is what keeps track of >>> the micrographs. I spent a bit of effort trying to trace it in the star >>> files and understand why scipion doesn’t mind the non-unique micID, but I >>> didn’t work it out. :-) >>> >>> That was a great help, thanks again. >>> All the best, >>> Teige >>> >>> >>> >>> On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin < >>> del...@gm...> wrote: >>> >>> Dear Teige, >>> >>> Thanks for providing feedback about your use of Scipion...find below >>> some answers to your questions. 
>>> >>> On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal < >>> t.m...@im...> wrote: >>> >>>> Dear Scipion Users, >>>> >>>> When running a large relion 2D classification, which fails unavoidably >>>> (this class of error: https://github.com/3dem/relion/issues/155) at a >>>> late iteration but has not reached the specified 25 iterations, scipion has >>>> not processed the relion output in its sqlite database (I’m guessing) and >>>> so when analysing the completed iterations, information is missing e.g. >>>> size=0 for all classes. Therefore subsets of particles cannot be selected >>>> to remove bad classes, re-extract with recentering and keep processing. >>>> This is effectively a roadblock to continue processing inside scipion. >>>> >>> This is the default behaviour of many protocols wrapping iterative >>> programs, Scipion only generate the output at the last iteration. We didn't >>> wanted to generate output for every iteration because it will create many >>> more objects and probably most of them are not desired by the user. So, >>> when you click in the Analyze Results, you have an option to visualize the >>> results in Scipion and convert the result of a given iteration to Scipion >>> format. In this example, you could visualize the last iterate (option Show >>> classification in Scipion) and from there you can create the subset of >>> particles to continue. >>> Then for the steps that you want to do, you should use the 'extract - >>> coordinates' protocol where you provide the subset of particles and you can >>> apply the shifts to re-center the particles. In your case, since your >>> particles come from merging from two datasets of micrographs, you will need >>> to join the micrographs to obtain a full set of micrographs and then >>> extract the particles base on that joined set and using the original pixel >>> size. >>> >>>> Is there a way to get the scipion DB / GUI to recover the information >>>> from any given iteration? >>>> >>>> *(Tangentially, although running a relion command with —continue flag >>>> worked, in the scipion GUI the continue option failed, perhaps because >>>> ‘consider previous alignment’ was set to No? Scipion responded by starting >>>> at iteration 1, threatening to delete the successful iterations so far - >>>> which took ~2weeks! From there on scipion could not be persuaded to >>>> continue from iteration 18, and I do not know what would happen if I let it >>>> continue what looks like a restart.) * >>>> >>> Here the thing that might be confusing is the Relion continue option and >>> the Scipion continue ones, that are different. In Scipion, when you use >>> Continue mode, it will try to continue from the last completed step, but in >>> the case of Relion, the whole execution of the program is a big step, so >>> Scipion continue does not work here. What you can do, is make a copy of the >>> protocol and select the option in the form to 'Continue from a previous >>> Run' (at the top of the form) and then provide the other run and the >>> desired iteration (by default the last). I hope that I could clarify this >>> instead of confusing you more. >>> >>>> >>>> I can take the optimiser.star file to relion and make a subset >>>> selection, fine. >>>> However, now I would like to re-extract particles with recentering, and >>>> un-bin the particles later. These both seem to be difficult now. 
Especially >>>> because I joined two sets of particles before binning, and the >>>> MicrographIDs were *not* adjusted to be unique values in the unioned >>>> set. >>>> >>> We noticed this problem at the begining of merging set of >>> micrographs...so, you don't need to worry about the unique integer ID, >>> which is re-generated when using the merge. We introduced a micName, that >>> is kind of unique string identifier base on the filename. So, even if you >>> import to set of movies and take particles from then, you can later join >>> the micrographs (or movies) and re-extract the particles and we match the >>> micName instead of the micID. >>> >>>> If I import the subset particle.star back in to scipion, it generates >>>> outputMicrographs & outputParticles - the Micrograph info is wrong & can’t >>>> find the micrographs, it thinks there are 5615 micrographs, but this is >>>> wrong since there are now 1114 non-unique micIDs. >>>> Does anyone know how the relationship between particles in the joined >>>> set and their micrographs can be re-established? >>>> >>> I recommend you here going the previously mentioned procedure...creating >>> the subset - extracting coordinates (applying shifts) - extracting >>> particles from the joined set of micrograhs. >>> Anyway, you are right that the created SetOfMicrographs is not properly >>> set in this case. I think that we have been using the import particles when >>> importing completely from Relion, where we could generate a new >>> SetOfMicrographs based on the .star files that make sense. I was trying to >>> import some particles recently from Relion and trying to match to existing >>> micrographs in the project and I found a bug because the micName was not >>> properly set from the star file. So we need to fix this soon. >>> >>>> >>>> for xmipp particle extraction, the image.xmd files (/star files) lose >>>> the connection to the micrographs because the _micrograph field >>>> is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other >>>> than that there is _micrographId ; does anyone know where _micrographId is >>>> related to the corresponding micrograph? >>>> >>> At this point I hope that you have a better idea of the relation of >>> micId and micName , which is used in CTFs, particles and coordinates to >>> match the corresponding micrograph. >>> >>>> >>>> Any help is greatly appreciated. >>>> >>> Please let me know if I could clarify some concepts and you can continue >>> with your processing. Please do not hesitate to contact us if you have any >>> other issue or feedback as well. >>> >>>> All the best, >>>> Teige >>>> >>> Kind regards >>> Jose Miguel >>> >>> >>>> >>>> ------------------------------------------------------------ >>>> ------------------ >>>> Check out the vibrant tech community on one of the world's most >>>> engaging tech sites, Slashdot.org <http://slashdot.org/>! >>>> http://sdm.link/slashdot >>>> _______________________________________________ >>>> scipion-users mailing list >>>> sci...@li... >>>> https://lists.sourceforge.net/lists/listinfo/scipion-users >>>> >>>> >>> >>> >> >> > |
From: Jose M. de la R. T. <del...@gm...> - 2018-03-01 12:29:17
|
No worries! Can you try the version 1.2 release candidate (branch release-1.1.facilities-devel)?? We fixed an important issue that caused what you describe....a very long time when selecting any object as input. Was in big projects and with many 2D classifications. But I can't remember if this fix was introduced in 1.1 or not. Cheers, Jose Miguel On Thu, Mar 1, 2018 at 1:25 PM, Matthews-Palmer, Teige Rowan Seal < t.m...@im...> wrote: > Hi Jose Miguel, > > Sorry my reply is so late. I think that the problem might have been caused > by me interrupting the process of generating the sqlite files, by closing > the results viewer. In my case they were taking a very long time, like 10 > minutes, (the file is >1GB) and I didn’t know that the delay was because an > sqlite file needed to be generated. > > I am also experiencing that loading the list of particle sets for input > particles in any protocol run window, takes a very long time. It is taking > more than 10 minutes to open the sub-window list of particle sets. This is > the same on both duplicates of a project on two (powerful) machines. It > seems to be because the project is quite large. Does this sound normal? Is > there anything I could do about it..? Scipion version is 1.1. > > Thanks for your help. > Teige > > > On 9 Feb 2018, at 09:16, Jose Miguel de la Rosa Trevin < > del...@gm...> wrote: > > Hi Teige, > > Can you check on the run folder (under ProjectFolder/Runs/0000ID_ProtRelion2D > or similar) > in the extra folder, if there are zero-size sqlite files? You could try to > delete the generated > sqlite files there and try the Analyze Results button again...still it is > weird why it has failed > in your case. (and it seems to others as well as mentioned by Pablo). > > Best, > Jose Miguel > > > On Thu, Feb 8, 2018 at 10:06 PM, Matthews-Palmer, Teige Rowan Seal < > t.m...@im...> wrote: > >> Dear Jose, >> >> Thanks for the quick & very helpful reply. >> >> I had a problem with generating the scipion output with the protocol >> viewer (see image). Only the last iteration generated _size, but the >> classavg images are not available. I tried “Show classification in Scipion” >> for many other iterations and they all show classavgs but as empty classes >> i.e. the other data is not generated properly. I just used the classID to >> pick the right subset and it seems to have worked, so I will continue like >> you suggest, thanks. >> >> Thanks a lot for explaining how to properly continue a relion2d run in >> scipion! >> Also thanks for pointing out that micrographName is what keeps track of >> the micrographs. I spent a bit of effort trying to trace it in the star >> files and understand why scipion doesn’t mind the non-unique micID, but I >> didn’t work it out. :-) >> >> That was a great help, thanks again. >> All the best, >> Teige >> >> >> >> On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin < >> del...@gm...> wrote: >> >> Dear Teige, >> >> Thanks for providing feedback about your use of Scipion...find below some >> answers to your questions. 
>> >> On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal < >> t.m...@im...> wrote: >> >>> Dear Scipion Users, >>> >>> When running a large relion 2D classification, which fails unavoidably >>> (this class of error: https://github.com/3dem/relion/issues/155) at a >>> late iteration but has not reached the specified 25 iterations, scipion has >>> not processed the relion output in its sqlite database (I’m guessing) and >>> so when analysing the completed iterations, information is missing e.g. >>> size=0 for all classes. Therefore subsets of particles cannot be selected >>> to remove bad classes, re-extract with recentering and keep processing. >>> This is effectively a roadblock to continue processing inside scipion. >>> >> This is the default behaviour of many protocols wrapping iterative >> programs, Scipion only generate the output at the last iteration. We didn't >> wanted to generate output for every iteration because it will create many >> more objects and probably most of them are not desired by the user. So, >> when you click in the Analyze Results, you have an option to visualize the >> results in Scipion and convert the result of a given iteration to Scipion >> format. In this example, you could visualize the last iterate (option Show >> classification in Scipion) and from there you can create the subset of >> particles to continue. >> Then for the steps that you want to do, you should use the 'extract - >> coordinates' protocol where you provide the subset of particles and you can >> apply the shifts to re-center the particles. In your case, since your >> particles come from merging from two datasets of micrographs, you will need >> to join the micrographs to obtain a full set of micrographs and then >> extract the particles base on that joined set and using the original pixel >> size. >> >>> Is there a way to get the scipion DB / GUI to recover the information >>> from any given iteration? >>> >>> *(Tangentially, although running a relion command with —continue flag >>> worked, in the scipion GUI the continue option failed, perhaps because >>> ‘consider previous alignment’ was set to No? Scipion responded by starting >>> at iteration 1, threatening to delete the successful iterations so far - >>> which took ~2weeks! From there on scipion could not be persuaded to >>> continue from iteration 18, and I do not know what would happen if I let it >>> continue what looks like a restart.) * >>> >> Here the thing that might be confusing is the Relion continue option and >> the Scipion continue ones, that are different. In Scipion, when you use >> Continue mode, it will try to continue from the last completed step, but in >> the case of Relion, the whole execution of the program is a big step, so >> Scipion continue does not work here. What you can do, is make a copy of the >> protocol and select the option in the form to 'Continue from a previous >> Run' (at the top of the form) and then provide the other run and the >> desired iteration (by default the last). I hope that I could clarify this >> instead of confusing you more. >> >>> >>> I can take the optimiser.star file to relion and make a subset >>> selection, fine. >>> However, now I would like to re-extract particles with recentering, and >>> un-bin the particles later. These both seem to be difficult now. Especially >>> because I joined two sets of particles before binning, and the >>> MicrographIDs were *not* adjusted to be unique values in the unioned >>> set. 
>>> >> We noticed this problem at the begining of merging set of >> micrographs...so, you don't need to worry about the unique integer ID, >> which is re-generated when using the merge. We introduced a micName, that >> is kind of unique string identifier base on the filename. So, even if you >> import to set of movies and take particles from then, you can later join >> the micrographs (or movies) and re-extract the particles and we match the >> micName instead of the micID. >> >>> If I import the subset particle.star back in to scipion, it generates >>> outputMicrographs & outputParticles - the Micrograph info is wrong & can’t >>> find the micrographs, it thinks there are 5615 micrographs, but this is >>> wrong since there are now 1114 non-unique micIDs. >>> Does anyone know how the relationship between particles in the joined >>> set and their micrographs can be re-established? >>> >> I recommend you here going the previously mentioned procedure...creating >> the subset - extracting coordinates (applying shifts) - extracting >> particles from the joined set of micrograhs. >> Anyway, you are right that the created SetOfMicrographs is not properly >> set in this case. I think that we have been using the import particles when >> importing completely from Relion, where we could generate a new >> SetOfMicrographs based on the .star files that make sense. I was trying to >> import some particles recently from Relion and trying to match to existing >> micrographs in the project and I found a bug because the micName was not >> properly set from the star file. So we need to fix this soon. >> >>> >>> for xmipp particle extraction, the image.xmd files (/star files) lose >>> the connection to the micrographs because the _micrograph field >>> is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other >>> than that there is _micrographId ; does anyone know where _micrographId is >>> related to the corresponding micrograph? >>> >> At this point I hope that you have a better idea of the relation of micId >> and micName , which is used in CTFs, particles and coordinates to match the >> corresponding micrograph. >> >>> >>> Any help is greatly appreciated. >>> >> Please let me know if I could clarify some concepts and you can continue >> with your processing. Please do not hesitate to contact us if you have any >> other issue or feedback as well. >> >>> All the best, >>> Teige >>> >> Kind regards >> Jose Miguel >> >> >>> >>> ------------------------------------------------------------ >>> ------------------ >>> Check out the vibrant tech community on one of the world's most >>> engaging tech sites, Slashdot.org <http://slashdot.org/>! >>> http://sdm.link/slashdot >>> _______________________________________________ >>> scipion-users mailing list >>> sci...@li... >>> https://lists.sourceforge.net/lists/listinfo/scipion-users >>> >>> >> >> > > |
From: Matthews-Palmer, T. R. S. <t.m...@im...> - 2018-03-01 12:25:47
|
Hi Jose Miguel, Sorry my reply is so late. I think that the problem might have been caused by me interrupting the process of generating the sqlite files, by closing the results viewer. In my case they were taking a very long time, like 10 minutes, (the file is >1GB) and I didn’t know that the delay was because an sqlite file needed to be generated. I am also experiencing that loading the list of particle sets for input particles in any protocol run window, takes a very long time. It is taking more than 10 minutes to open the sub-window list of particle sets. This is the same on both duplicates of a project on two (powerful) machines. It seems to be because the project is quite large. Does this sound normal? Is there anything I could do about it..? Scipion version is 1.1. Thanks for your help. Teige On 9 Feb 2018, at 09:16, Jose Miguel de la Rosa Trevin <del...@gm...<mailto:del...@gm...>> wrote: Hi Teige, Can you check on the run folder (under ProjectFolder/Runs/0000ID_ProtRelion2D or similar) in the extra folder, if there are zero-size sqlite files? You could try to delete the generated sqlite files there and try the Analyze Results button again...still it is weird why it has failed in your case. (and it seems to others as well as mentioned by Pablo). Best, Jose Miguel On Thu, Feb 8, 2018 at 10:06 PM, Matthews-Palmer, Teige Rowan Seal <t.m...@im...<mailto:t.m...@im...>> wrote: Dear Jose, Thanks for the quick & very helpful reply. I had a problem with generating the scipion output with the protocol viewer (see image). Only the last iteration generated _size, but the classavg images are not available. I tried “Show classification in Scipion” for many other iterations and they all show classavgs but as empty classes i.e. the other data is not generated properly. I just used the classID to pick the right subset and it seems to have worked, so I will continue like you suggest, thanks. Thanks a lot for explaining how to properly continue a relion2d run in scipion! Also thanks for pointing out that micrographName is what keeps track of the micrographs. I spent a bit of effort trying to trace it in the star files and understand why scipion doesn’t mind the non-unique micID, but I didn’t work it out. :-) That was a great help, thanks again. All the best, Teige On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin <del...@gm...<mailto:del...@gm...>> wrote: Dear Teige, Thanks for providing feedback about your use of Scipion...find below some answers to your questions. On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal <t.m...@im...<mailto:t.m...@im...>> wrote: Dear Scipion Users, When running a large relion 2D classification, which fails unavoidably (this class of error: https://github.com/3dem/relion/issues/155) at a late iteration but has not reached the specified 25 iterations, scipion has not processed the relion output in its sqlite database (I’m guessing) and so when analysing the completed iterations, information is missing e.g. size=0 for all classes. Therefore subsets of particles cannot be selected to remove bad classes, re-extract with recentering and keep processing. This is effectively a roadblock to continue processing inside scipion. This is the default behaviour of many protocols wrapping iterative programs, Scipion only generate the output at the last iteration. We didn't wanted to generate output for every iteration because it will create many more objects and probably most of them are not desired by the user. 
So, when you click in the Analyze Results, you have an option to visualize the results in Scipion and convert the result of a given iteration to Scipion format. In this example, you could visualize the last iterate (option Show classification in Scipion) and from there you can create the subset of particles to continue. Then for the steps that you want to do, you should use the 'extract - coordinates' protocol where you provide the subset of particles and you can apply the shifts to re-center the particles. In your case, since your particles come from merging from two datasets of micrographs, you will need to join the micrographs to obtain a full set of micrographs and then extract the particles base on that joined set and using the original pixel size. Is there a way to get the scipion DB / GUI to recover the information from any given iteration? (Tangentially, although running a relion command with —continue flag worked, in the scipion GUI the continue option failed, perhaps because ‘consider previous alignment’ was set to No? Scipion responded by starting at iteration 1, threatening to delete the successful iterations so far - which took ~2weeks! From there on scipion could not be persuaded to continue from iteration 18, and I do not know what would happen if I let it continue what looks like a restart.) Here the thing that might be confusing is the Relion continue option and the Scipion continue ones, that are different. In Scipion, when you use Continue mode, it will try to continue from the last completed step, but in the case of Relion, the whole execution of the program is a big step, so Scipion continue does not work here. What you can do, is make a copy of the protocol and select the option in the form to 'Continue from a previous Run' (at the top of the form) and then provide the other run and the desired iteration (by default the last). I hope that I could clarify this instead of confusing you more. I can take the optimiser.star file to relion and make a subset selection, fine. However, now I would like to re-extract particles with recentering, and un-bin the particles later. These both seem to be difficult now. Especially because I joined two sets of particles before binning, and the MicrographIDs were not adjusted to be unique values in the unioned set. We noticed this problem at the begining of merging set of micrographs...so, you don't need to worry about the unique integer ID, which is re-generated when using the merge. We introduced a micName, that is kind of unique string identifier base on the filename. So, even if you import to set of movies and take particles from then, you can later join the micrographs (or movies) and re-extract the particles and we match the micName instead of the micID. If I import the subset particle.star back in to scipion, it generates outputMicrographs & outputParticles - the Micrograph info is wrong & can’t find the micrographs, it thinks there are 5615 micrographs, but this is wrong since there are now 1114 non-unique micIDs. Does anyone know how the relationship between particles in the joined set and their micrographs can be re-established? I recommend you here going the previously mentioned procedure...creating the subset - extracting coordinates (applying shifts) - extracting particles from the joined set of micrograhs. Anyway, you are right that the created SetOfMicrographs is not properly set in this case. 
I think that we have been using the import particles when importing completely from Relion, where we could generate a new SetOfMicrographs based on the .star files that make sense. I was trying to import some particles recently from Relion and trying to match to existing micrographs in the project and I found a bug because the micName was not properly set from the star file. So we need to fix this soon. for xmipp particle extraction, the image.xmd files (/star files) lose the connection to the micrographs because the _micrograph field is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other than that there is _micrographId ; does anyone know where _micrographId is related to the corresponding micrograph? At this point I hope that you have a better idea of the relation of micId and micName , which is used in CTFs, particles and coordinates to match the corresponding micrograph. Any help is greatly appreciated. Please let me know if I could clarify some concepts and you can continue with your processing. Please do not hesitate to contact us if you have any other issue or feedback as well. All the best, Teige Kind regards Jose Miguel ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org<http://slashdot.org/>! http://sdm.link/slashdot _______________________________________________ scipion-users mailing list sci...@li...<mailto:sci...@li...> https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: sunil k. T. <skt...@gm...> - 2018-02-13 07:52:20
|
Hi I am trying to compile the Xmipp using scipion on my MacBook. I am getting errors. Compiling Xmipp ... install/scons.py Keeping existing file: /Applications/Software/scipion1.1/software/em/xmipp/xmipp.bashrc Keeping existing file: /Applications/Software/scipion1.1/software/em/xmipp/xmipp.csh Keeping existing file: /Applications/Software/scipion1.1/software/em/xmipp/xmipp.fish scons: Reading SConscript files ... Mkdir("/Applications/Software/scipion1.1/software/em/xmipp/java/build") scons: done reading SConscript files. scons: Building targets ... scons: `software/lib/libXmippAlglib.so' is up to date. scons: `software/lib/libXmippBilib.so' is up to date. scons: `software/lib/libXmippCondor.so' is up to date. scons: `software/lib/libXmippDelaunay.so' is up to date. scons: `software/lib/libXmippSqliteExt.so' is up to date. g++ -o software/em/xmipp/external/gtest/gtest-all.os -c -O3 -fPIC -Isoftware/em/xmipp/external -Isoftware/em/xmipp/libraries -Isoftware/include software/em/xmipp/external/gtest/gtest-all.cc In file included from software/em/xmipp/external/gtest/gtest-all.cc:39: software/em/xmipp/external/gtest/gtest.h:1561:13: fatal error: 'tr1/tuple' file not found # include <tr1/tuple> // NOLINT ^~~~~~~~~~~ 1 error generated. scons: *** [software/em/xmipp/external/gtest/gtest-all.os] Error 1 scons: building terminated because of errors. Please suggest what to do next. Thanks Sunil |
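As an aside not taken from the thread: <tr1/tuple> is missing from libc++ on recent macOS toolchains, and gtest provides a GTEST_USE_OWN_TR1_TUPLE switch that makes it fall back to its bundled tuple implementation. A possible workaround, assuming the scons build honours CXXFLAGS from the environment (an assumption to verify), would be:

    # assumption: scons picks up CXXFLAGS; otherwise the define has to be added to the gtest compile line
    export CXXFLAGS="-DGTEST_USE_OWN_TR1_TUPLE=1"
    ./scipion install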
From: Jose M. de la R. T. <del...@gm...> - 2018-02-09 09:16:58
|
Hi Teige, Can you check on the run folder (under ProjectFolder/Runs/0000ID_ProtRelion2D or similar) in the extra folder, if there are zero-size sqlite files? You could try to delete the generated sqlite files there and try the Analyze Results button again...still it is weird why it has failed in your case. (and it seems to others as well as mentioned by Pablo). Best, Jose Miguel On Thu, Feb 8, 2018 at 10:06 PM, Matthews-Palmer, Teige Rowan Seal < t.m...@im...> wrote: > Dear Jose, > > Thanks for the quick & very helpful reply. > > I had a problem with generating the scipion output with the protocol > viewer (see image). Only the last iteration generated _size, but the > classavg images are not available. I tried “Show classification in Scipion” > for many other iterations and they all show classavgs but as empty classes > i.e. the other data is not generated properly. I just used the classID to > pick the right subset and it seems to have worked, so I will continue like > you suggest, thanks. > > Thanks a lot for explaining how to properly continue a relion2d run in > scipion! > Also thanks for pointing out that micrographName is what keeps track of > the micrographs. I spent a bit of effort trying to trace it in the star > files and understand why scipion doesn’t mind the non-unique micID, but I > didn’t work it out. :-) > > That was a great help, thanks again. > All the best, > Teige > > > > On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin < > del...@gm...> wrote: > > Dear Teige, > > Thanks for providing feedback about your use of Scipion...find below some > answers to your questions. > > On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal < > t.m...@im...> wrote: > >> Dear Scipion Users, >> >> When running a large relion 2D classification, which fails unavoidably >> (this class of error: https://github.com/3dem/relion/issues/155) at a >> late iteration but has not reached the specified 25 iterations, scipion has >> not processed the relion output in its sqlite database (I’m guessing) and >> so when analysing the completed iterations, information is missing e.g. >> size=0 for all classes. Therefore subsets of particles cannot be selected >> to remove bad classes, re-extract with recentering and keep processing. >> This is effectively a roadblock to continue processing inside scipion. >> > This is the default behaviour of many protocols wrapping iterative > programs, Scipion only generate the output at the last iteration. We didn't > wanted to generate output for every iteration because it will create many > more objects and probably most of them are not desired by the user. So, > when you click in the Analyze Results, you have an option to visualize the > results in Scipion and convert the result of a given iteration to Scipion > format. In this example, you could visualize the last iterate (option Show > classification in Scipion) and from there you can create the subset of > particles to continue. > Then for the steps that you want to do, you should use the 'extract - > coordinates' protocol where you provide the subset of particles and you can > apply the shifts to re-center the particles. In your case, since your > particles come from merging from two datasets of micrographs, you will need > to join the micrographs to obtain a full set of micrographs and then > extract the particles base on that joined set and using the original pixel > size. > >> Is there a way to get the scipion DB / GUI to recover the information >> from any given iteration? 
>> >> *(Tangentially, although running a relion command with —continue flag >> worked, in the scipion GUI the continue option failed, perhaps because >> ‘consider previous alignment’ was set to No? Scipion responded by starting >> at iteration 1, threatening to delete the successful iterations so far - >> which took ~2weeks! From there on scipion could not be persuaded to >> continue from iteration 18, and I do not know what would happen if I let it >> continue what looks like a restart.) * >> > Here the thing that might be confusing is the Relion continue option and > the Scipion continue ones, that are different. In Scipion, when you use > Continue mode, it will try to continue from the last completed step, but in > the case of Relion, the whole execution of the program is a big step, so > Scipion continue does not work here. What you can do, is make a copy of the > protocol and select the option in the form to 'Continue from a previous > Run' (at the top of the form) and then provide the other run and the > desired iteration (by default the last). I hope that I could clarify this > instead of confusing you more. > >> >> I can take the optimiser.star file to relion and make a subset selection, >> fine. >> However, now I would like to re-extract particles with recentering, and >> un-bin the particles later. These both seem to be difficult now. Especially >> because I joined two sets of particles before binning, and the >> MicrographIDs were *not* adjusted to be unique values in the unioned set. >> > We noticed this problem at the begining of merging set of > micrographs...so, you don't need to worry about the unique integer ID, > which is re-generated when using the merge. We introduced a micName, that > is kind of unique string identifier base on the filename. So, even if you > import to set of movies and take particles from then, you can later join > the micrographs (or movies) and re-extract the particles and we match the > micName instead of the micID. > >> If I import the subset particle.star back in to scipion, it generates >> outputMicrographs & outputParticles - the Micrograph info is wrong & can’t >> find the micrographs, it thinks there are 5615 micrographs, but this is >> wrong since there are now 1114 non-unique micIDs. >> Does anyone know how the relationship between particles in the joined set >> and their micrographs can be re-established? >> > I recommend you here going the previously mentioned procedure...creating > the subset - extracting coordinates (applying shifts) - extracting > particles from the joined set of micrograhs. > Anyway, you are right that the created SetOfMicrographs is not properly > set in this case. I think that we have been using the import particles when > importing completely from Relion, where we could generate a new > SetOfMicrographs based on the .star files that make sense. I was trying to > import some particles recently from Relion and trying to match to existing > micrographs in the project and I found a bug because the micName was not > properly set from the star file. So we need to fix this soon. > >> >> for xmipp particle extraction, the image.xmd files (/star files) lose the >> connection to the micrographs because the _micrograph field >> is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other >> than that there is _micrographId ; does anyone know where _micrographId is >> related to the corresponding micrograph? 
>> > At this point I hope that you have a better idea of the relation of micId > and micName , which is used in CTFs, particles and coordinates to match the > corresponding micrograph. > >> >> Any help is greatly appreciated. >> > Please let me know if I could clarify some concepts and you can continue > with your processing. Please do not hesitate to contact us if you have any > other issue or feedback as well. > >> All the best, >> Teige >> > Kind regards > Jose Miguel > > >> >> ------------------------------------------------------------ >> ------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> >> > > |
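A concrete way to run that check from the project folder (the run-directory pattern below just follows the example name in the message; substitute the actual ID from your project):

    # list zero-size sqlite files in the run's extra folder
    find Runs/*_ProtRelion2D/extra -name '*.sqlite' -size 0 -print
    # if any appear, delete them and press Analyze Results again
    find Runs/*_ProtRelion2D/extra -name '*.sqlite' -size 0 -delete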
From: Pablo C. <pc...@cn...> - 2018-02-09 09:12:38
|
Dear Teige and Jose, thank you. That last issue with the averages....we are still trying to reproduce it. We've seen it 3 times at least but no idea how you get there. We will try to understand why is happening and fix it...in the meantime...other people have done an import form relion using relion files that sit on the problematic folder. All the best, Pablo. On 08/02/18 22:06, Matthews-Palmer, Teige Rowan Seal wrote: > Dear Jose, > > Thanks for the quick & very helpful reply. > > I had a problem with generating the scipion output with the protocol > viewer (see image). Only the last iteration generated _size, but the > classavg images are not available. I tried “Show classification in > Scipion” for many other iterations and they all show classavgs but as > empty classes i.e. the other data is not generated properly. I just > used the classID to pick the right subset and it seems to have worked, > so I will continue like you suggest, thanks. > > Thanks a lot for explaining how to properly continue a relion2d run in > scipion! > Also thanks for pointing out that micrographName is what keeps track > of the micrographs. I spent a bit of effort trying to trace it in the > star files and understand why scipion doesn’t mind the non-unique > micID, but I didn’t work it out. :-) > > That was a great help, thanks again. > All the best, > Teige > > > >> On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin >> <del...@gm... <mailto:del...@gm...>> wrote: >> >> Dear Teige, >> >> Thanks for providing feedback about your use of Scipion...find below >> some answers to your questions. >> >> On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal >> <t.m...@im... >> <mailto:t.m...@im...>> wrote: >> >> Dear Scipion Users, >> >> When running a large relion 2D classification, which fails >> unavoidably (this class of error: >> https://github.com/3dem/relion/issues/155 >> <https://github.com/3dem/relion/issues/155>) at a late iteration >> but has not reached the specified 25 iterations, scipion has not >> processed the relion output in its sqlite database (I’m guessing) >> and so when analysing the completed iterations, information >> is missing e.g. size=0 for all classes. Therefore subsets of >> particles cannot be selected to remove bad classes, re-extract >> with recentering and keep processing. >> This is effectively a roadblock to continue processing inside >> scipion. >> >> This is the default behaviour of many protocols wrapping iterative >> programs, Scipion only generate the output at the last iteration. We >> didn't wanted to generate output for every iteration because it will >> create many more objects and probably most of them are not desired by >> the user. So, when you click in the Analyze Results, you have an >> option to visualize the results in Scipion and convert the result of >> a given iteration to Scipion format. In this example, you could >> visualize the last iterate (option Show classification in Scipion) >> and from there you can create the subset of particles to continue. >> Then for the steps that you want to do, you should use the 'extract - >> coordinates' protocol where you provide the subset of particles and >> you can apply the shifts to re-center the particles. In your case, >> since your particles come from merging from two datasets of >> micrographs, you will need to join the micrographs to obtain a full >> set of micrographs and then extract the particles base on that joined >> set and using the original pixel size. 
>> >> Is there a way to get the scipion DB / GUI to recover the >> information from any given iteration? >> /(Tangentially, although running a relion command with —continue >> flag worked, in the scipion GUI the continue option failed, >> perhaps because ‘consider previous alignment’ was set to No? >> Scipion responded by starting at iteration 1, threatening to >> delete the successful iterations so far - which took ~2weeks! >> From there on scipion could not be persuaded to continue from >> iteration 18, and I do not know what would happen if I let it >> continue what looks like a restart.) >> / >> >> Here the thing that might be confusing is the Relion continue option >> and the Scipion continue ones, that are different. In Scipion, when >> you use Continue mode, it will try to continue from the last >> completed step, but in the case of Relion, the whole execution of the >> program is a big step, so Scipion continue does not work here. What >> you can do, is make a copy of the protocol and select the option in >> the form to 'Continue from a previous Run' (at the top of the form) >> and then provide the other run and the desired iteration (by default >> the last). I hope that I could clarify this instead of confusing you >> more. >> >> >> I can take the optimiser.star file to relion and make a subset >> selection, fine. >> However, now I would like to re-extract particles with >> recentering, and un-bin the particles later. These both seem to >> be difficult now. Especially because I joined two sets >> of particles before binning, and the MicrographIDs were *not* >> adjusted to be unique values in the unioned set. >> >> We noticed this problem at the begining of merging set of >> micrographs...so, you don't need to worry about the unique integer >> ID, which is re-generated when using the merge. We introduced a >> micName, that is kind of unique string identifier base on the >> filename. So, even if you import to set of movies and take particles >> from then, you can later join the micrographs (or movies) and >> re-extract the particles and we match the micName instead of the micID. >> >> If I import the subset particle.star back in to scipion, it >> generates outputMicrographs & outputParticles - the Micrograph >> info is wrong & can’t find the micrographs, it thinks there are >> 5615 micrographs, but this is wrong since there are now 1114 >> non-unique micIDs. >> Does anyone know how the relationship between particles in the >> joined set and their micrographs can be re-established? >> >> I recommend you here going the previously mentioned >> procedure...creating the subset - extracting coordinates (applying >> shifts) - extracting particles from the joined set of micrograhs. >> Anyway, you are right that the created SetOfMicrographs is not >> properly set in this case. I think that we have been using the import >> particles when importing completely from Relion, where we could >> generate a new SetOfMicrographs based on the .star files that make >> sense. I was trying to import some particles recently from Relion and >> trying to match to existing micrographs in the project and I found a >> bug because the micName was not properly set from the star file. So >> we need to fix this soon. >> >> >> for xmipp particle extraction, the image.xmd files (/star files) >> lose the connection to the micrographs because the _micrograph >> field is /tmp/[…]noDust.xmp rather than the .mrc output from >> motioncorr. 
Other than that there is _micrographId ; does anyone >> know where _micrographId is related to the corresponding micrograph? >> >> At this point I hope that you have a better idea of the relation of >> micId and micName , which is used in CTFs, particles and coordinates >> to match the corresponding micrograph. >> >> >> Any help is greatly appreciated. >> >> Please let me know if I could clarify some concepts and you can >> continue with your processing. Please do not hesitate to contact us >> if you have any other issue or feedback as well. >> >> All the best, >> Teige >> >> Kind regards >> Jose Miguel >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org <http://Slashdot.org>! >> http://sdm.link/slashdot <http://sdm.link/slashdot> >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> <mailto:sci...@li...> >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> <https://lists.sourceforge.net/lists/listinfo/scipion-users> >> >> > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: Matthews-Palmer, T. R. S. <t.m...@im...> - 2018-02-08 21:06:57
|
Dear Jose, Thanks for the quick & very helpful reply. I had a problem with generating the scipion output with the protocol viewer (see image). Only the last iteration generated _size, but the classavg images are not available. I tried “Show classification in Scipion” for many other iterations and they all show classavgs but as empty classes i.e. the other data is not generated properly. I just used the classID to pick the right subset and it seems to have worked, so I will continue like you suggest, thanks. Thanks a lot for explaining how to properly continue a relion2d run in scipion! Also thanks for pointing out that micrographName is what keeps track of the micrographs. I spent a bit of effort trying to trace it in the star files and understand why scipion doesn’t mind the non-unique micID, but I didn’t work it out. :-) That was a great help, thanks again. All the best, Teige [cid:C1B...@th...] On 8 Feb 2018, at 19:56, Jose Miguel de la Rosa Trevin <del...@gm...<mailto:del...@gm...>> wrote: Dear Teige, Thanks for providing feedback about your use of Scipion...find below some answers to your questions. On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal <t.m...@im...<mailto:t.m...@im...>> wrote: Dear Scipion Users, When running a large relion 2D classification, which fails unavoidably (this class of error: https://github.com/3dem/relion/issues/155) at a late iteration but has not reached the specified 25 iterations, scipion has not processed the relion output in its sqlite database (I’m guessing) and so when analysing the completed iterations, information is missing e.g. size=0 for all classes. Therefore subsets of particles cannot be selected to remove bad classes, re-extract with recentering and keep processing. This is effectively a roadblock to continue processing inside scipion. This is the default behaviour of many protocols wrapping iterative programs, Scipion only generate the output at the last iteration. We didn't wanted to generate output for every iteration because it will create many more objects and probably most of them are not desired by the user. So, when you click in the Analyze Results, you have an option to visualize the results in Scipion and convert the result of a given iteration to Scipion format. In this example, you could visualize the last iterate (option Show classification in Scipion) and from there you can create the subset of particles to continue. Then for the steps that you want to do, you should use the 'extract - coordinates' protocol where you provide the subset of particles and you can apply the shifts to re-center the particles. In your case, since your particles come from merging from two datasets of micrographs, you will need to join the micrographs to obtain a full set of micrographs and then extract the particles base on that joined set and using the original pixel size. Is there a way to get the scipion DB / GUI to recover the information from any given iteration? (Tangentially, although running a relion command with —continue flag worked, in the scipion GUI the continue option failed, perhaps because ‘consider previous alignment’ was set to No? Scipion responded by starting at iteration 1, threatening to delete the successful iterations so far - which took ~2weeks! From there on scipion could not be persuaded to continue from iteration 18, and I do not know what would happen if I let it continue what looks like a restart.) 
Here the thing that might be confusing is the Relion continue option and the Scipion continue ones, that are different. In Scipion, when you use Continue mode, it will try to continue from the last completed step, but in the case of Relion, the whole execution of the program is a big step, so Scipion continue does not work here. What you can do, is make a copy of the protocol and select the option in the form to 'Continue from a previous Run' (at the top of the form) and then provide the other run and the desired iteration (by default the last). I hope that I could clarify this instead of confusing you more. I can take the optimiser.star file to relion and make a subset selection, fine. However, now I would like to re-extract particles with recentering, and un-bin the particles later. These both seem to be difficult now. Especially because I joined two sets of particles before binning, and the MicrographIDs were not adjusted to be unique values in the unioned set. We noticed this problem at the begining of merging set of micrographs...so, you don't need to worry about the unique integer ID, which is re-generated when using the merge. We introduced a micName, that is kind of unique string identifier base on the filename. So, even if you import to set of movies and take particles from then, you can later join the micrographs (or movies) and re-extract the particles and we match the micName instead of the micID. If I import the subset particle.star back in to scipion, it generates outputMicrographs & outputParticles - the Micrograph info is wrong & can’t find the micrographs, it thinks there are 5615 micrographs, but this is wrong since there are now 1114 non-unique micIDs. Does anyone know how the relationship between particles in the joined set and their micrographs can be re-established? I recommend you here going the previously mentioned procedure...creating the subset - extracting coordinates (applying shifts) - extracting particles from the joined set of micrograhs. Anyway, you are right that the created SetOfMicrographs is not properly set in this case. I think that we have been using the import particles when importing completely from Relion, where we could generate a new SetOfMicrographs based on the .star files that make sense. I was trying to import some particles recently from Relion and trying to match to existing micrographs in the project and I found a bug because the micName was not properly set from the star file. So we need to fix this soon. for xmipp particle extraction, the image.xmd files (/star files) lose the connection to the micrographs because the _micrograph field is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other than that there is _micrographId ; does anyone know where _micrographId is related to the corresponding micrograph? At this point I hope that you have a better idea of the relation of micId and micName , which is used in CTFs, particles and coordinates to match the corresponding micrograph. Any help is greatly appreciated. Please let me know if I could clarify some concepts and you can continue with your processing. Please do not hesitate to contact us if you have any other issue or feedback as well. All the best, Teige Kind regards Jose Miguel ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org<http://Slashdot.org>! 
http://sdm.link/slashdot _______________________________________________ scipion-users mailing list sci...@li...<mailto:sci...@li...> https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: Jose M. de la R. T. <del...@gm...> - 2018-02-08 19:56:58
|
Dear Teige,

Thanks for providing feedback about your use of Scipion...find below some answers to your questions.

On Thu, Feb 8, 2018 at 8:16 PM, Matthews-Palmer, Teige Rowan Seal < t.m...@im...> wrote:

> Dear Scipion Users,
>
> When running a large relion 2D classification, which fails unavoidably (this class of error: https://github.com/3dem/relion/issues/155) at a late iteration but has not reached the specified 25 iterations, scipion has not processed the relion output in its sqlite database (I’m guessing) and so when analysing the completed iterations, information is missing e.g. size=0 for all classes. Therefore subsets of particles cannot be selected to remove bad classes, re-extract with recentering and keep processing. This is effectively a roadblock to continue processing inside scipion.

This is the default behaviour of many protocols that wrap iterative programs: Scipion only generates the output for the last iteration. We did not want to generate output for every iteration because it would create many more objects, most of which the user probably does not need. So, when you click on Analyze Results, you have the option to visualize the results in Scipion and to convert the result of a given iteration to Scipion format. In this example, you could visualize the last iteration (option "Show classification in Scipion") and from there create the subset of particles to continue with. For the next steps, use the 'extract - coordinates' protocol, where you provide the subset of particles and can apply the shifts to re-center the particles. In your case, since your particles come from merging two datasets of micrographs, you will need to join the micrographs to obtain a full set and then extract the particles from that joined set, using the original pixel size.

> Is there a way to get the scipion DB / GUI to recover the information from any given iteration?
>
> *(Tangentially, although running a relion command with --continue flag worked, in the scipion GUI the continue option failed, perhaps because ‘consider previous alignment’ was set to No? Scipion responded by starting at iteration 1, threatening to delete the successful iterations so far - which took ~2weeks! From there on scipion could not be persuaded to continue from iteration 18, and I do not know what would happen if I let it continue what looks like a restart.)*

The thing that might be confusing here is that the Relion continue option and the Scipion continue option are different. In Scipion, Continue mode tries to continue from the last completed step, but in the case of Relion the whole execution of the program is one big step, so the Scipion continue does not help here. What you can do is make a copy of the protocol, select 'Continue from a previous Run' at the top of the form, and then provide the other run and the desired iteration (by default the last one). I hope this clarifies things rather than confusing you more.

> I can take the optimiser.star file to relion and make a subset selection, fine.
> However, now I would like to re-extract particles with recentering, and un-bin the particles later. These both seem to be difficult now. Especially because I joined two sets of particles before binning, and the MicrographIDs were *not* adjusted to be unique values in the unioned set.

We noticed this problem when we started merging sets of micrographs, so you do not need to worry about the unique integer ID, which is re-generated by the merge. We introduced a micName, which is a unique string identifier based on the filename. So even if you import two sets of movies and take particles from them, you can later join the micrographs (or movies) and re-extract the particles, and we match on the micName instead of the micId.

> If I import the subset particle.star back in to scipion, it generates outputMicrographs & outputParticles - the Micrograph info is wrong & can’t find the micrographs, it thinks there are 5615 micrographs, but this is wrong since there are now 1114 non-unique micIDs.
> Does anyone know how the relationship between particles in the joined set and their micrographs can be re-established?

Here I recommend following the procedure mentioned above: create the subset, extract the coordinates (applying the shifts), and extract the particles from the joined set of micrographs. Anyway, you are right that the SetOfMicrographs created in this case is not properly set up. I think we have been using the import-particles protocol for importing completely from Relion, where we can generate a new SetOfMicrographs from the .star files, and there it makes sense. I was recently trying to import some particles from Relion and match them to existing micrographs in the project, and I found a bug because the micName was not properly set from the star file. We need to fix this soon.

> for xmipp particle extraction, the image.xmd files (/star files) lose the connection to the micrographs because the _micrograph field is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other than that there is _micrographId ; does anyone know where _micrographId is related to the corresponding micrograph?

At this point I hope you have a better idea of the relation between micId and micName, which is used in CTFs, particles and coordinates to match the corresponding micrograph.

> Any help is greatly appreciated.

Please let me know if I could clarify some concepts and you can continue with your processing. Please do not hesitate to contact us if you have any other issue or feedback as well.

> All the best,
> Teige

Kind regards
Jose Miguel

> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _______________________________________________
> scipion-users mailing list
> sci...@li...
> https://lists.sourceforge.net/lists/listinfo/scipion-users
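As a rough illustration of the micName idea above (the paths and filenames below are hypothetical, and this is a plain shell check rather than anything Scipion itself runs): because the micName is derived from the micrograph filename, it stays unique after two imports are merged, whereas the integer micId is simply re-numbered.

# Hypothetical merged imports: list both sessions and look for duplicate
# filename-derived keys. No output means every micName-style key is unique,
# so particles and coordinates can be matched back to their micrograph by name
# even though the numeric IDs were re-generated by the merge.
ls /data/session1/mics/*.mrc /data/session2/mics/*.mrc | xargs -n1 basename | sort | uniq -d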
From: Matthews-Palmer, T. R. S. <t.m...@im...> - 2018-02-08 19:17:00
|
Dear Scipion Users, When running a large relion 2D classification, which fails unavoidably (this class of error: https://github.com/3dem/relion/issues/155) at a late iteration but has not reached the specified 25 iterations, scipion has not processed the relion output in its sqlite database (I’m guessing) and so when analysing the completed iterations, information is missing e.g. size=0 for all classes. Therefore subsets of particles cannot be selected to remove bad classes, re-extract with recentering and keep processing. This is effectively a roadblock to continue processing inside scipion. Is there a way to get the scipion DB / GUI to recover the information from any given iteration? (Tangentially, although running a relion command with —continue flag worked, in the scipion GUI the continue option failed, perhaps because ‘consider previous alignment’ was set to No? Scipion responded by starting at iteration 1, threatening to delete the successful iterations so far - which took ~2weeks! From there on scipion could not be persuaded to continue from iteration 18, and I do not know what would happen if I let it continue what looks like a restart.) I can take the optimiser.star file to relion and make a subset selection, fine. However, now I would like to re-extract particles with recentering, and un-bin the particles later. These both seem to be difficult now. Especially because I joined two sets of particles before binning, and the MicrographIDs were not adjusted to be unique values in the unioned set. If I import the subset particle.star back in to scipion, it generates outputMicrographs & outputParticles - the Micrograph info is wrong & can’t find the micrographs, it thinks there are 5615 micrographs, but this is wrong since there are now 1114 non-unique micIDs. Does anyone know how the relationship between particles in the joined set and their micrographs can be re-established? for xmipp particle extraction, the image.xmd files (/star files) lose the connection to the micrographs because the _micrograph field is /tmp/[…]noDust.xmp rather than the .mrc output from motioncorr. Other than that there is _micrographId ; does anyone know where _micrographId is related to the corresponding micrograph? Any help is greatly appreciated. All the best, Teige [cid:1FA...@th...] |
From: Jose M. de la R. T. <del...@gm...> - 2018-02-06 19:20:07
|
Hi Ahmad,

Use ./scipion to execute any test command from the installation folder.

Best,
Jose Miguel

On Feb 6, 2018 8:07 PM, "Ahmad Khalifa" <ahm...@ma...> wrote:

Hello, I just installed Scipion and tried to run some of the small test commands from within the Scipion directory. I got the following:

[labusr@luxor scipion]$ scipion tests model.test_object
bash: scipion: command not found...

Is that not how I'm supposed to run them? Please help me test my installation. Regards.

------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot
_______________________________________________
scipion-users mailing list
sci...@li...
https://lists.sourceforge.net/lists/listinfo/scipion-users
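For reference, a minimal shell sketch of the suggestion above (the installation path is only an example):

# Run the tests with the launcher script that ships with the installation:
cd /path/to/scipion            # wherever Scipion was unpacked
./scipion tests model.test_object

# Alternatively, add the installation folder to PATH so plain "scipion" resolves:
export PATH=/path/to/scipion:$PATH
scipion tests model.test_object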
From: Ahmad K. <ahm...@ma...> - 2018-02-06 19:07:11
|
Hello, I just installed Scipion and tried to run some of the small test commands from within the Scipion directory. I got the following:

[labusr@luxor scipion]$ scipion tests model.test_object
bash: scipion: command not found...

Is that not how I'm supposed to run them? Please help me test my installation. Regards.
From: Miao G. <gmh...@16...> - 2018-02-03 16:15:33
|
Dear Jose, Thanks very much for your reply. I am using the binary installation. The problem was solved. Something was wrong with the pyworkflow/em/convert.py file. I downloaded the new V1.1 and installed again and it works well now. Defining SCIPION_HOME and SCIPION_USER_DATA in my env file is OK. But I removed them as your suggestion. Best, Miao At 2018-01-31 16:33:31, "Jose Miguel de la Rosa Trevin" <del...@gm...> wrote: Dear Miao, You don't need to define neither SCIPION_HOME or SCIPION_USER_DATA. The first one is inferred from the location of the scipion script and the second is defined in the configuration file. Try to remove those and execute Scipion again. I'm not sure if the 'xmipp' issue is related to that or not. Are you using the binary installation or from source code? That error seems to me like xmipp libraries (now required by Scipion) are not properly installed. Best, Jose Miguel On Mon, Jan 29, 2018 at 3:52 AM, Miao Gui <gmh...@16...> wrote: Dear scipion developers, I am a new user of scipion. I have configured and installed the scipion on my workstation, everything looks fine. And I added the following lines in my ~/.cshrc setenv SCIPION_HOME "/usr/local/scipion-master" setenv SCIPION_USER_DATA "/usr/local/scipion-master/data" alias setxmipp source /usr/local/scipion/software/em/xmipp/xmipp.csh But when I run scipion under the /usr/local/scipion-master/ folder, it comes to an error and the message is listed below. How could I solve this problem? Scipion v1.1 (2017-06-14) Balbino >>>>> python /usr/local/scipion-master/pyworkflow/apps/pw_manager.py Traceback (most recent call last): File "/usr/local/scipion-master/pyworkflow/apps/pw_manager.py", line 32, in <module> from pyworkflow.gui.project import ProjectManagerWindow File "/usr/local/scipion-master/pyworkflow/gui/project/__init__.py", line 27, in <module> from project import ProjectManagerWindow, ProjectWindow File "/usr/local/scipion-master/pyworkflow/gui/project/project.py", line 45, in <module> from pyworkflow.manager import Manager File "/usr/local/scipion-master/pyworkflow/manager.py", line 33, in <module> from project import Project File "/usr/local/scipion-master/pyworkflow/project.py", line 40, in <module> import pyworkflow.em as em File "/usr/local/scipion-master/pyworkflow/em/__init__.py", line 35, in <module> from data import * File "/usr/local/scipion-master/pyworkflow/em/data.py", line 39, in <module> from convert import ImageHandler File "/usr/local/scipion-master/pyworkflow/em/convert.py", line 39, in <module> class ImageHandler(object): File "/usr/local/scipion-master/pyworkflow/em/convert.py", line 42, in ImageHandler DT_DEFAULT = xmipp.DT_DEFAULT NameError: name 'xmipp' is not defined Best wishes, Miao Gui ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot _______________________________________________ scipion-users mailing list sci...@li... https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: Jose M. de la R. T. <del...@gm...> - 2018-01-31 08:33:38
|
Dear Miao, You don't need to define neither SCIPION_HOME or SCIPION_USER_DATA. The first one is inferred from the location of the scipion script and the second is defined in the configuration file. Try to remove those and execute Scipion again. I'm not sure if the 'xmipp' issue is related to that or not. Are you using the binary installation or from source code? That error seems to me like xmipp libraries (now required by Scipion) are not properly installed. Best, Jose Miguel On Mon, Jan 29, 2018 at 3:52 AM, Miao Gui <gmh...@16...> wrote: > Dear scipion developers, > > I am a new user of scipion. I have configured and installed the scipion on > my workstation, everything looks fine. And I added the following lines in > my ~/.cshrc > > *setenv SCIPION_HOME "/usr/local/scipion-master"* > *setenv SCIPION_USER_DATA "/usr/local/scipion-master/data"* > *alias setxmipp source /usr/local/scipion/software/em/xmipp/xmipp.csh* > > But when I run scipion under the /usr/local/scipion-master/ folder, it > comes to an error and the message is listed below. How could I solve this > problem? > > *Scipion v1.1 (2017-06-14) Balbino* > > *>>>>> python /usr/local/scipion-master/pyworkflow/apps/pw_manager.py * > *Traceback (most recent call last):* > * File "/usr/local/scipion-master/pyworkflow/apps/pw_manager.py", line > 32, in <module>* > * from pyworkflow.gui.project import ProjectManagerWindow* > * File "/usr/local/scipion-master/pyworkflow/gui/project/__init__.py", > line 27, in <module>* > * from project import ProjectManagerWindow, ProjectWindow* > * File "/usr/local/scipion-master/pyworkflow/gui/project/project.py", > line 45, in <module>* > * from pyworkflow.manager import Manager* > * File "/usr/local/scipion-master/pyworkflow/manager.py", line 33, in > <module>* > * from project import Project* > * File "/usr/local/scipion-master/pyworkflow/project.py", line 40, in > <module>* > * import pyworkflow.em as em* > * File "/usr/local/scipion-master/pyworkflow/em/__init__.py", line 35, in > <module>* > * from data import ** > * File "/usr/local/scipion-master/pyworkflow/em/data.py", line 39, in > <module>* > * from convert import ImageHandler* > * File "/usr/local/scipion-master/pyworkflow/em/convert.py", line 39, in > <module>* > * class ImageHandler(object):* > * File "/usr/local/scipion-master/pyworkflow/em/convert.py", line 42, in > ImageHandler* > * DT_DEFAULT = xmipp.DT_DEFAULT* > *NameError: name 'xmipp' is not defined* > > Best wishes, > Miao Gui > > > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > > |
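A small sketch of the change suggested above for a csh user as in this thread (the paths are the ones from the original message; whether to keep the xmipp alias depends on your setup):

# In ~/.cshrc, comment out (or remove) the variables Scipion works out on its own:
# SCIPION_HOME is inferred from the location of the scipion script, and
# SCIPION_USER_DATA is taken from Scipion's configuration file.
#setenv SCIPION_HOME "/usr/local/scipion-master"
#setenv SCIPION_USER_DATA "/usr/local/scipion-master/data"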
From: Miao G. <gmh...@16...> - 2018-01-29 02:52:19
|
Dear scipion developers, I am a new user of scipion. I have configured and installed the scipion on my workstation, everything looks fine. And I added the following lines in my ~/.cshrc setenv SCIPION_HOME "/usr/local/scipion-master" setenv SCIPION_USER_DATA "/usr/local/scipion-master/data" alias setxmipp source /usr/local/scipion/software/em/xmipp/xmipp.csh But when I run scipion under the /usr/local/scipion-master/ folder, it comes to an error and the message is listed below. How could I solve this problem? Scipion v1.1 (2017-06-14) Balbino >>>>> python /usr/local/scipion-master/pyworkflow/apps/pw_manager.py Traceback (most recent call last): File "/usr/local/scipion-master/pyworkflow/apps/pw_manager.py", line 32, in <module> from pyworkflow.gui.project import ProjectManagerWindow File "/usr/local/scipion-master/pyworkflow/gui/project/__init__.py", line 27, in <module> from project import ProjectManagerWindow, ProjectWindow File "/usr/local/scipion-master/pyworkflow/gui/project/project.py", line 45, in <module> from pyworkflow.manager import Manager File "/usr/local/scipion-master/pyworkflow/manager.py", line 33, in <module> from project import Project File "/usr/local/scipion-master/pyworkflow/project.py", line 40, in <module> import pyworkflow.em as em File "/usr/local/scipion-master/pyworkflow/em/__init__.py", line 35, in <module> from data import * File "/usr/local/scipion-master/pyworkflow/em/data.py", line 39, in <module> from convert import ImageHandler File "/usr/local/scipion-master/pyworkflow/em/convert.py", line 39, in <module> class ImageHandler(object): File "/usr/local/scipion-master/pyworkflow/em/convert.py", line 42, in ImageHandler DT_DEFAULT = xmipp.DT_DEFAULT NameError: name 'xmipp' is not defined Best wishes, Miao Gui |
From: Montserrat F. F. <mf...@ib...> - 2018-01-23 10:43:52
|
Hi, The box size is 220x220. This might be the reason why we are running out of memory? We are running the process in a GPU clusters, and were using 4 threads and 4 MPI because the system admins told us to use that values. We will try to run it using an odd number of MPI. Thank you very much for your feedback, Montserrat Fabrega El dt, 23 gen 2018 a les 0:03 Joshua Jude Lobo <jl...@um...> va escriure: > Hi Dr.Ferrer > > It seems like you are running out of memory on your cards . What is the > box size ? .Also you might want to give an odd number of MPI because one > will always become a master an the rest > will be slaves > > Sincerely > Joshua Lobo > > On Mon, Jan 22, 2018 at 11:46 AM, Montserrat Fabrega Ferrer < > mf...@ib...> wrote: > >> Hi, >> >> I am trying to run a Relion auto-refine in Scipion v1.1 (2017-06-14) >> Balbino. The Relion version is 2.0.3. However, I get the error I copy >> below. Does anybody have any suggestion that would help? >> >> Thank you very much in advance, >> >> Montserrat Fabrega >> >> 00001: RUNNING PROTOCOL ----------------- >> 00002: PID: 10237 >> 00003: Scipion: v1.1 (2017-06-14) Balbino >> 00004: currentDir: >> /gpfs/projects/irb12/irb12336/ScipionUserData/projects/Titan >> 00005: workingDir: Runs/011639_ProtRelionRefine3D >> 00006: runMode: Continue >> 00007: MPI: 4 >> 00008: threads: 4 >> 00009: len(steps) 3 len(prevSteps) 0 >> 00010: Starting at step: 1 >> 00011: Running steps >> 00012: STARTED: convertInputStep, step 1 >> 00013: 2018-01-22 00:13:40.523885 >> 00014: Converting set from >> 'Runs/011589_ProtUserSubSet/particles.sqlite' into >> 'Runs/011639_ProtRelionRefine3D/input_particles.star' >> 00015: FINISHED: convertInputStep, step 1 >> 00016: 2018-01-22 00:13:45.845200 >> 00017: STARTED: runRelionStep, step 2 >> 00018: 2018-01-22 00:13:45.860738 >> 00019: srun `which relion_refine_mpi` --gpu --low_resol_join_halves >> 40 --pool 3 --auto_local_healpix_order 4 --angpix 1.04 >> --dont_combine_weights_via_disc --ref >> Runs/011639_ProtRelionRefine3D/tmp/proposedVolume00003.mrc --scale >> --offset_range 5.0 --ini_high 60.0 --offset_step 2.0 --healpix_order 2 >> --auto_refine --ctf --oversampling 1 --split_random_halves --o >> Runs/011639_ProtRelionRefine3D/extra/relion --i >> Runs/011639_ProtRelionRefine3D/input_particles.star --zero_mask --norm >> --firstiter_cc --sym c12 --flatten_solvent --particle_diameter 228.8 --j >> 4 >> 00020: === RELION MPI setup === >> 00021: + Number of MPI processes = 4 >> 00022: + Number of threads per MPI process = 4 >> 00023: + Total number of threads therefore = 16 >> 00024: + Master (0) runs on host = nvb36 >> 00025: + Slave 1 runs on host = nvb36 >> 00026: + Slave 2 runs on host = nvb36 >> 00027: + Slave 3 runs on host = nvb36 >> 00028: ================= >> 00029: uniqueHost nvb36 has 3 ranks. >> 00030: GPU-ids not specified for this rank, threads will automatically >> be mapped to available devices. >> 00031: Thread 0 on slave 1 mapped to device 0 >> 00032: Thread 1 on slave 1 mapped to device 0 >> 00033: Thread 2 on slave 1 mapped to device 0 >> 00034: Thread 3 on slave 1 mapped to device 1 >> 00035: GPU-ids not specified for this rank, threads will automatically >> be mapped to available devices. >> 00036: Thread 0 on slave 2 mapped to device 1 >> 00037: Thread 1 on slave 2 mapped to device 1 >> 00038: Thread 2 on slave 2 mapped to device 2 >> 00039: Thread 3 on slave 2 mapped to device 2 >> 00040: GPU-ids not specified for this rank, threads will automatically >> be mapped to available devices. 
>> 00041: Thread 0 on slave 3 mapped to device 2 >> 00042: Thread 1 on slave 3 mapped to device 3 >> 00043: Thread 2 on slave 3 mapped to device 3 >> 00044: Thread 3 on slave 3 mapped to device 3 >> 00045: Device 1 on nvb36 is split between 2 slaves >> 00046: Device 2 on nvb36 is split between 2 slaves >> 00047: [nvb36:10305] *** Process received signal *** >> 00048: [nvb36:10305] Signal: Segmentation fault (11) >> 00049: [nvb36:10305] Signal code: Address not mapped (1) >> 00050: [nvb36:10305] Failing at address: 0x2802b08 >> 00051: [nvb36:10305] [ 0] /lib64/libpthread.so.0() [0x358740f790] >> 00052: [nvb36:10305] [ 1] /opt/mpi/bullxmpi/ >> 1.2.9.1/lib/libmpi.so.1(opal_memory_ptmalloc2_free+0x26) >> <http://1.2.9.1/lib/libmpi.so.1%28opal_memory_ptmalloc2_free+0x26%29> >> [0x2ac8aeb94046] >> 00053: [nvb36:10305] [ 2] >> /apps/RELION/2.0.3/lib/librelion_lib.so(_ZN14MlOptimiserMpi10initialiseEv+0x115f) >> [0x2ac8a7491f0f] >> 00054: [nvb36:10305] [ 3] >> /apps/RELION/2.0.3/bin/relion_refine_mpi(main+0x218) [0x4052c8] >> 00055: [nvb36:10305] [ 4] /lib64/libc.so.6(__libc_start_main+0xfd) >> [0x3586c1ed5d] >> 00056: [nvb36:10305] [ 5] /apps/RELION/2.0.3/bin/relion_refine_mpi() >> [0x404fe9] >> 00057: [nvb36:10305] *** End of error message *** >> 00058: srun: error: nvb36: task 0: Segmentation fault >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> >> > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > |
From: Joshua J. L. <jl...@um...> - 2018-01-22 23:02:57
|
Hi Dr.Ferrer It seems like you are running out of memory on your cards . What is the box size ? .Also you might want to give an odd number of MPI because one will always become a master an the rest will be slaves Sincerely Joshua Lobo On Mon, Jan 22, 2018 at 11:46 AM, Montserrat Fabrega Ferrer < mf...@ib...> wrote: > Hi, > > I am trying to run a Relion auto-refine in Scipion v1.1 (2017-06-14) > Balbino. The Relion version is 2.0.3. However, I get the error I copy > below. Does anybody have any suggestion that would help? > > Thank you very much in advance, > > Montserrat Fabrega > > 00001: RUNNING PROTOCOL ----------------- > 00002: PID: 10237 > 00003: Scipion: v1.1 (2017-06-14) Balbino > 00004: currentDir: /gpfs/projects/irb12/irb12336/ > ScipionUserData/projects/Titan > 00005: workingDir: Runs/011639_ProtRelionRefine3D > 00006: runMode: Continue > 00007: MPI: 4 > 00008: threads: 4 > 00009: len(steps) 3 len(prevSteps) 0 > 00010: Starting at step: 1 > 00011: Running steps > 00012: STARTED: convertInputStep, step 1 > 00013: 2018-01-22 00:13:40.523885 > 00014: Converting set from 'Runs/011589_ProtUserSubSet/particles.sqlite' > into 'Runs/011639_ProtRelionRefine3D/input_particles.star' > 00015: FINISHED: convertInputStep, step 1 > 00016: 2018-01-22 00:13:45.845200 > 00017: STARTED: runRelionStep, step 2 > 00018: 2018-01-22 00:13:45.860738 > 00019: srun `which relion_refine_mpi` --gpu --low_resol_join_halves 40 > --pool 3 --auto_local_healpix_order 4 --angpix 1.04 > --dont_combine_weights_via_disc --ref Runs/011639_ProtRelionRefine3D/tmp/proposedVolume00003.mrc > --scale --offset_range 5.0 --ini_high 60.0 --offset_step 2.0 > --healpix_order 2 --auto_refine --ctf --oversampling 1 > --split_random_halves --o Runs/011639_ProtRelionRefine3D/extra/relion > --i Runs/011639_ProtRelionRefine3D/input_particles.star --zero_mask > --norm --firstiter_cc --sym c12 --flatten_solvent --particle_diameter > 228.8 --j 4 > 00020: === RELION MPI setup === > 00021: + Number of MPI processes = 4 > 00022: + Number of threads per MPI process = 4 > 00023: + Total number of threads therefore = 16 > 00024: + Master (0) runs on host = nvb36 > 00025: + Slave 1 runs on host = nvb36 > 00026: + Slave 2 runs on host = nvb36 > 00027: + Slave 3 runs on host = nvb36 > 00028: ================= > 00029: uniqueHost nvb36 has 3 ranks. > 00030: GPU-ids not specified for this rank, threads will automatically > be mapped to available devices. > 00031: Thread 0 on slave 1 mapped to device 0 > 00032: Thread 1 on slave 1 mapped to device 0 > 00033: Thread 2 on slave 1 mapped to device 0 > 00034: Thread 3 on slave 1 mapped to device 1 > 00035: GPU-ids not specified for this rank, threads will automatically > be mapped to available devices. > 00036: Thread 0 on slave 2 mapped to device 1 > 00037: Thread 1 on slave 2 mapped to device 1 > 00038: Thread 2 on slave 2 mapped to device 2 > 00039: Thread 3 on slave 2 mapped to device 2 > 00040: GPU-ids not specified for this rank, threads will automatically > be mapped to available devices. 
> 00041: Thread 0 on slave 3 mapped to device 2 > 00042: Thread 1 on slave 3 mapped to device 3 > 00043: Thread 2 on slave 3 mapped to device 3 > 00044: Thread 3 on slave 3 mapped to device 3 > 00045: Device 1 on nvb36 is split between 2 slaves > 00046: Device 2 on nvb36 is split between 2 slaves > 00047: [nvb36:10305] *** Process received signal *** > 00048: [nvb36:10305] Signal: Segmentation fault (11) > 00049: [nvb36:10305] Signal code: Address not mapped (1) > 00050: [nvb36:10305] Failing at address: 0x2802b08 > 00051: [nvb36:10305] [ 0] /lib64/libpthread.so.0() [0x358740f790] > 00052: [nvb36:10305] [ 1] /opt/mpi/bullxmpi/1.2.9.1/lib/ > libmpi.so.1(opal_memory_ptmalloc2_free+0x26) > <http://1.2.9.1/lib/libmpi.so.1%28opal_memory_ptmalloc2_free+0x26%29> > [0x2ac8aeb94046] > 00053: [nvb36:10305] [ 2] /apps/RELION/2.0.3/lib/libreli > on_lib.so(_ZN14MlOptimiserMpi10initialiseEv+0x115f) [0x2ac8a7491f0f] > 00054: [nvb36:10305] [ 3] /apps/RELION/2.0.3/bin/relion_refine_mpi(main+0x218) > [0x4052c8] > 00055: [nvb36:10305] [ 4] /lib64/libc.so.6(__libc_start_main+0xfd) > [0x3586c1ed5d] > 00056: [nvb36:10305] [ 5] /apps/RELION/2.0.3/bin/relion_refine_mpi() > [0x404fe9] > 00057: [nvb36:10305] *** End of error message *** > 00058: srun: error: nvb36: task 0: Segmentation fault > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > > |
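To make the arithmetic in the advice above concrete, here is a hedged sketch only (the --gpu syntax and the way tasks are requested depend on the Relion version and the SLURM setup, so verify before using): with four cards, five MPI ranks give one master plus one slave per GPU, and an explicit device string keeps two slaves from ending up on the same card, which is what the log above shows happening.

# Assumption: 4 GPUs, 5 MPI ranks (1 master + 4 slaves), one device per slave.
# "..." stands for the rest of the options from the original command line.
srun -n 5 `which relion_refine_mpi` --j 4 --gpu 0:1:2:3 ...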
From: Montserrat F. F. <mf...@ib...> - 2018-01-22 18:08:36
|
Hi, I am trying to run a Relion auto-refine in Scipion v1.1 (2017-06-14) Balbino. The Relion version is 2.0.3. However, I get the error I copy below. Does anybody have any suggestion that would help? Thank you very much in advance, Montserrat Fabrega 00001: RUNNING PROTOCOL ----------------- 00002: PID: 10237 00003: Scipion: v1.1 (2017-06-14) Balbino 00004: currentDir: /gpfs/projects/irb12/irb12336/ ScipionUserData/projects/Titan 00005: workingDir: Runs/011639_ProtRelionRefine3D 00006: runMode: Continue 00007: MPI: 4 00008: threads: 4 00009: len(steps) 3 len(prevSteps) 0 00010: Starting at step: 1 00011: Running steps 00012: STARTED: convertInputStep, step 1 00013: 2018-01-22 00:13:40.523885 00014: Converting set from 'Runs/011589_ProtUserSubSet/particles.sqlite' into 'Runs/011639_ProtRelionRefine3D/input_particles.star' 00015: FINISHED: convertInputStep, step 1 00016: 2018-01-22 00:13:45.845200 00017: STARTED: runRelionStep, step 2 00018: 2018-01-22 00:13:45.860738 00019: srun `which relion_refine_mpi` --gpu --low_resol_join_halves 40 --pool 3 --auto_local_healpix_order 4 --angpix 1.04 --dont_combine_weights_via_disc --ref Runs/011639_ProtRelionRefine3D/tmp/proposedVolume00003.mrc --scale --offset_range 5.0 --ini_high 60.0 --offset_step 2.0 --healpix_order 2 --auto_refine --ctf --oversampling 1 --split_random_halves --o Runs/011639_ProtRelionRefine3D/extra/relion --i Runs/011639_ProtRelionRefine3D/input_particles.star --zero_mask --norm --firstiter_cc --sym c12 --flatten_solvent --particle_diameter 228.8 --j 4 00020: === RELION MPI setup === 00021: + Number of MPI processes = 4 00022: + Number of threads per MPI process = 4 00023: + Total number of threads therefore = 16 00024: + Master (0) runs on host = nvb36 00025: + Slave 1 runs on host = nvb36 00026: + Slave 2 runs on host = nvb36 00027: + Slave 3 runs on host = nvb36 00028: ================= 00029: uniqueHost nvb36 has 3 ranks. 00030: GPU-ids not specified for this rank, threads will automatically be mapped to available devices. 00031: Thread 0 on slave 1 mapped to device 0 00032: Thread 1 on slave 1 mapped to device 0 00033: Thread 2 on slave 1 mapped to device 0 00034: Thread 3 on slave 1 mapped to device 1 00035: GPU-ids not specified for this rank, threads will automatically be mapped to available devices. 00036: Thread 0 on slave 2 mapped to device 1 00037: Thread 1 on slave 2 mapped to device 1 00038: Thread 2 on slave 2 mapped to device 2 00039: Thread 3 on slave 2 mapped to device 2 00040: GPU-ids not specified for this rank, threads will automatically be mapped to available devices. 
00041: Thread 0 on slave 3 mapped to device 2 00042: Thread 1 on slave 3 mapped to device 3 00043: Thread 2 on slave 3 mapped to device 3 00044: Thread 3 on slave 3 mapped to device 3 00045: Device 1 on nvb36 is split between 2 slaves 00046: Device 2 on nvb36 is split between 2 slaves 00047: [nvb36:10305] *** Process received signal *** 00048: [nvb36:10305] Signal: Segmentation fault (11) 00049: [nvb36:10305] Signal code: Address not mapped (1) 00050: [nvb36:10305] Failing at address: 0x2802b08 00051: [nvb36:10305] [ 0] /lib64/libpthread.so.0() [0x358740f790] 00052: [nvb36:10305] [ 1] /opt/mpi/bullxmpi/1.2.9.1/lib/ libmpi.so.1(opal_memory_ptmalloc2_free+0x26) <http://1.2.9.1/lib/libmpi.so.1%28opal_memory_ptmalloc2_free+0x26%29> [0x2ac8aeb94046] 00053: [nvb36:10305] [ 2] /apps/RELION/2.0.3/lib/librelion_lib.so(_ ZN14MlOptimiserMpi10initialiseEv+0x115f) [0x2ac8a7491f0f] 00054: [nvb36:10305] [ 3] /apps/RELION/2.0.3/bin/relion_refine_mpi(main+0x218) [0x4052c8] 00055: [nvb36:10305] [ 4] /lib64/libc.so.6(__libc_start_main+0xfd) [0x3586c1ed5d] 00056: [nvb36:10305] [ 5] /apps/RELION/2.0.3/bin/relion_refine_mpi() [0x404fe9] 00057: [nvb36:10305] *** End of error message *** 00058: srun: error: nvb36: task 0: Segmentation fault |
From: Pablo C. <pc...@cn...> - 2018-01-15 14:20:03
|
Hi,

I would say 88 followers in a week is quite good; in fact, we have passed one of the first requirements. The next one is to have 35 more questions with 10+ votes. So post your question there and upvote the others!

All the best, Pablo.

-------- Forwarded Message --------
Subject: 3D em forum in Stack exchange?
Date: Tue, 9 Jan 2018 08:43:32 +0100
From: Pablo Conesa <pc...@cn...>
To: 3d...@nc...

Happy new year to all. I've discovered that it is possible to create a "Stack Exchange" site for anything you like, and I'm wondering if people here would find it useful. Developers here might be familiar with "Stack Overflow"; I'm talking about the same thing but for EM. I've suggested the site:

https://area51.stackexchange.com/proposals/116061/3d-em?referrer=r60GEWfllmkDeMC9lNRU3A2

This has to pass a number of stages to become final. The first milestones are:
- 60 followers
- 40 more questions with a score of 10 or more

All the best, Pablo.