From: Dmitry S. <sem...@gm...> - 2017-01-11 10:52:13
Dear colleagues, a question about auto-refinement for 2D. We work a lot with negative-staining samples, doing classification (RELION 2D) and obtaining 2D projection average classes. Might it be possible to improve the 2D results by adding an auto-refinement step (for 2D)? If so, how can we do that? I tried to use the dataset with a 2D mrc (volume) file created from it as a reference, but the program ended with some errors. Dear colleagues, do you have any ideas about this possibility? Sincerely, Dmitry
From: ZhuLi <ga...@16...> - 2017-01-11 03:55:32
Dear colleagues, When I do RCT with Scipion, I find that it can only generate volumes from the classified untilted sets (in the "Random Conical Tilt" menu you only need to import the tilt_pairs and Good_classes files). After viewing those volumes, I need to merge a few classes, which may come from the same conformation, to generate a better volume. What should I do to extract and merge the tilted particles from the corresponding classes? (As you know, the classes were generated from the untilted particles.) I hope I have described my question clearly. Thanks. Liz
From: Pablo C. <pc...@cn...> - 2017-01-09 15:33:54
Dear Dmitry, Currently there is no protocol in Scipion to convert/save file formats. Scipion actually does this all the time, but we never exposed it in the workflow as a protocol. You can use the Xmipp programs through the Scipion command line. This will print a very detailed help page:

scipion xmipp_image_convert

An example could be:

scipion xmipp_image_convert -i ~/Desktop/mic.tif -o mic.mrc

We recently discussed adding a convert/export protocol to Scipion; could you please comment on your particular use case? It may help us decide whether to create it or not.

On 09/01/17 11:45, Dmitry Semchonok wrote:
> Dear colleagues, is it possible to save/convert a tiff image to mrc using Scipion?
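If many micrographs need converting, the same converter can be wrapped in a small shell loop. This is only a sketch built on the command shown above; the current-directory location and the .tif extension are assumptions about your file layout:

#!/bin/bash
# Batch-convert every .tif in the current directory to .mrc using the
# Xmipp converter exposed through the Scipion command line (see above).
for f in *.tif; do
    scipion xmipp_image_convert -i "$f" -o "${f%.tif}.mrc"
done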
From: Dmitry S. <sem...@gm...> - 2017-01-09 10:45:59
Dear colleagues, Is it possible to save/convert a tiff image to mrc using Scipion? Sincerely, Dmitry
From: Pablo C. <pc...@cn...> - 2016-12-27 13:12:13
Thank you all! Great job. Pablo, Scipion Team.

On 26/12/16 03:31, ashutosh srivastava wrote:
> Dear Grigory and Daniel, thank you so much for the help. It's working perfectly now. Best Regards, Ashutosh
From: ashutosh s. <ash...@gm...> - 2016-12-26 02:31:41
Dear Grigory and Daniel, Thank you so much for the help. It's working perfectly now. Best Regards, Ashutosh

On Sat, Dec 24, 2016 at 6:59 PM, Daniel Luque Buzo <dl...@is...> wrote:
> Hello Ashutosh, find attached the hosts.conf file of our SGE cluster. The file is based on the Scipion wiki example that Grigory pointed to.
From: Daniel L. B. <dl...@is...> - 2016-12-24 10:14:29
Hello Ashutosh,

Find attached the hosts.conf file of our SGE cluster. The file is based on the Scipion wiki example that Grigory pointed to.

The module lines are specific to our cluster (if you do not use modules to manage environments/versions on your cluster, they should not be necessary).

When we started to configure our cluster, we had the same problem using more than one node, but the problem was related to the parallel environment configuration of the cluster (named orte in our case).

Hope it helps!

Best Regards
--
Daniel Luque Buzo
Unidad de Microscopía Electrónica y Confocal
Centro Nacional de Microbiología - ISCIII
Carretera de Majadahonda - Pozuelo, Km. 2.200
28220 - Majadahonda (Madrid)
Tel: +34 91 822 39 71
e-mail: dl...@is...

[localhost]
PARALLEL_COMMAND = mpirun -np %_(JOB_NODES)d -bynode %_(COMMAND)s
MANDATORY = 2
NAME = OGE
CANCEL_COMMAND = qdel %_(JOB_ID)s
CHECK_COMMAND = qstat -j %_(JOB_ID)s
SUBMIT_COMMAND = qsub %_(JOB_SCRIPT)s
SUBMIT_PREFIX = scipion
SUBMIT_TEMPLATE = #!/bin/bash
###====================================================###
#$ -V
#$ -S /bin/bash
#$ -cwd                    ### Use the current working directory
#$ -N scipion%_(JOB_NAME)s ### Job name
#$ -q %_(JOB_QUEUE)s       ### Queue name
#$ -pe %_(JOB_PE)s %_(JOB_SLOTS)s
#$ -j y                    ### Merge stdin and stdout
###=======================================================#
module load Python/python-2.7.11
module load OpenMPI/openmpi-1.10.0
##module load scipion-1.0.0
%_(JOB_COMMAND)s

QUEUES = { "all.q": [
    ["JOB_SLOTS", "20", "Select the proper number of cores"],
    ["JOB_PE", "orte", "Parallel Environment (PE)", "Select the proper Parallel Environment (orte)"]
    ] }

El 24 dic 2016, a las 10:01, Grigory Sharov <sha...@gm...> escribió:
> Hello Ashutosh, in the Scipion wiki there is an example for the SGE batch system. You might check it out and give it a try: https://github.com/I2PC/scipion/wiki/Host-Configuration
From: Grigory S. <sha...@gm...> - 2016-12-24 09:02:14
Hello Ashutosh, in the Scipion wiki there is an example for the SGE batch system. You might check it out and give it a try: https://github.com/I2PC/scipion/wiki/Host-Configuration

Best regards,
Grigory

Grigory Sharov, Ph.D.
Institute of Genetics and Molecular and Cellular Biology
Integrated Structural Biology Department (CBI)
1, rue Laurent Fries
67404 Illkirch, France
tel. 03 69 48 51 00
e-mail: sh...@ig...

On Sat, Dec 24, 2016 at 2:25 AM, ashutosh srivastava <ash...@gm...> wrote:
> Dear Scipion Users, I am analysing ~65000 particles with the SPA protocol on a cluster that uses SGE as the job scheduler. Since the hosts.conf file is provided for PBS/Torque, I tried converting it into an SGE-type script, but it failed when using more than one node.
From: ashutosh s. <ash...@gm...> - 2016-12-24 01:25:39
Dear Scipion Users,

I am analysing ~65000 particles with the SPA protocol. I am trying to do this on a cluster which uses SGE as the job scheduler. Since the hosts.conf file is provided for PBS/Torque, I tried converting it into an SGE-type script, but it failed when using more than one node. So I went ahead and used a bash jobscript to run ML2D in the following way:

#!/bin/bash
#$ -N classification_2d
#$ -pe openmpi 96
#$ -o pass
#$ -e error
#$ -cwd
#$ -q BLAB
export MPIDIR=/opt/tools/mpi/openmpi-1.10.2-gcc4.4.7/bin
$MPIDIR/mpirun -machinefile $TMPDIR/machines -np $NSLOTS /opt/tools/scipion/scipion xmipp_mpi_ml_align2d -i input/particles65818_export.stk --nref 200 --fast --oroot output/ml2d --mirror

This runs fine, generating the following files:
ml2dclasses.stk
ml2dclasses.xmd
ml2dextra (folder containing iteration data)
ml2d_images_average.xmp
ml2dimages.xmd

However, I am now having trouble writing a similar script for generating an initial volume using RANSAC. Is there a way to use protocol_ransac.py in a script similar to the one above? If not, could someone guide me on this? Apart from that, I would also like to know how I can import the classes generated as described above into the Scipion GUI to visualize them.

Thank you

Best Regards
Ashutosh
From: Carlos O. S. <co...@cn...> - 2016-12-23 18:16:38
Dear Dmitry, this is a very timely question. We have just integrated into the devel branch a tool called MonoRes, which also measures resolution locally, following a different mathematical approach. The paper is almost written and will be sent shortly to a journal. If you want to use the method before its formal presentation, you may contact Jose Luis Vilas, its author. Kind regards, Carlos Oscar

On 12/23/16 15:12, Dmitry A. Semchonok wrote:
> Dear colleagues, what local-resolution estimation tools do we have apart from ResMap?

--
Carlos Oscar Sánchez Sorzano          e-mail: co...@cn...
Biocomputing unit                     http://biocomp.cnb.csic.es
National Center of Biotechnology (CSIC)
c/Darwin, 3
Campus Universidad Autónoma (Cantoblanco)   Tlf: 34-91-585 4510
28049 MADRID (SPAIN)                        Fax: 34-91-585 4506
From: Dmitry A. S. <sem...@gm...> - 2016-12-23 14:12:45
Dear colleagues, What local-resolution estimation tools do we have apart from ResMap? Sincerely, Dmitry
From: Josue G. B. <jos...@gm...> - 2016-12-09 10:27:23
Dear Dmitry: It seems that you have a problem with the amount of RAM available on your cluster. In version 1.0.1 the results of cross-correlation + optical flow (aligning your movies) are the same as particle polishing, and it is a simpler way to process the particles. Unfortunately, particle polishing demands a huge amount of RAM and time, which depends on your particle dimensions and the number of particles. The only way to decrease the memory use without changing the dimensions of the images is to decrease the number of MPI processes (down to 1), but of course there is a minimum. In previous projects I needed a workstation (1 node) with 512 GB of RAM to successfully execute particle polishing with 3 MPI processes (I don't remember the box size, but no more than 400 pixels). Maybe you can try a compilation of RELION in single precision. Can you send me the error log so I can see the problem? Best Regards, Josue

2016-12-08 16:24 GMT+01:00 Dmitry Semchonok <sem...@gm...>:
> Dear colleagues, I'm still at the particle-polishing stage (from the frames). What settings do you use on average for 1) MPI and 2) nodes/himem? What memory per core (in himem mode)? I tried 20480 and 40960 and got an error. I have around 800,000 frames.
From: Dmitry S. <sem...@gm...> - 2016-12-08 15:24:42
Dear colleagues, I'm still at the particle-polishing stage (from the frames). What, on average, are the settings that you use for 1) MPI and 2) nodes/himem? What is, on average, the memory per core (in himem mode)? I tried 20480 and 40960 and got an error. I have around 800,000 frames. Sincerely, Dmitry
From: Weng Ng <we...@ne...> - 2016-12-08 12:34:19
Hi All, I have been trying to import movie frames and create movie stacks at the same time using Scipion, but each time it fails after importing with this message:

Traceback (most recent call last):
  File "/dls_sw/apps/scipion/devel/scipion/pyworkflow/protocol/protocol.py", line 182, in run
    self._run()
  File "/dls_sw/apps/scipion/devel/scipion/pyworkflow/protocol/protocol.py", line 234, in _run
    raise Exception('Missing filePaths: ' + ' '.join(missingFiles))
Exception: Missing filePaths:

Does this mean there is an error in the script for this? I have previously done this without error. Kindly please enlighten me. Many thanks, Weng
From: Carlos O. S. <co...@cn...> - 2016-12-07 06:48:57
Dear Dmitry, the problem seems to be related to the maximum memory limit for each of the working nodes. I'm not familiar with Slurm, but most queuing systems have an option to increase the maximum memory available to each node (this also seems to be the case for Slurm, https://slurm.schedmd.com/sbatch.html). This should go in the Scipion host configuration file (https://github.com/I2PC/scipion/wiki/Host-Configuration; probably you have already edited this file to be able to submit jobs to Slurm). Regarding threads or MPI, it is not a matter of the GUI but of the program itself: relion_particle_polish is implemented with MPI parallelization, not threads. Kind regards, Carlos Oscar

On 12/06/16 13:08, Dmitry Semchonok wrote:
> Dear colleagues, I have a problem running particle polishing. Every time I try to run it (with movies), run.stdout reports "slurmstepd: Exceeded step memory limit at some point" and the protocol fails with exit status 137.
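As an illustration of where such a memory request could live, the hosts.conf SUBMIT_TEMPLATE (the Slurm analogue of the SGE template shown earlier in this thread) can carry an sbatch memory directive. This is only a sketch: the surrounding template lines and the 8G value are assumptions, not Scipion defaults, and should be adapted to your cluster.

SUBMIT_TEMPLATE = #!/bin/bash
#SBATCH --job-name=scipion%_(JOB_NAME)s
#SBATCH --ntasks=%_(JOB_NODES)d
### Raise the per-CPU memory limit so relion_particle_polish_mpi does not hit
### "Exceeded step memory limit"; 8G is an arbitrary example value.
#SBATCH --mem-per-cpu=8G
%_(JOB_COMMAND)s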
From: Dmitry S. <sem...@gm...> - 2016-12-06 12:08:14
Dear colleagues, I have a problem running particle polishing. Every time I try to run it (with movies) I get an error.

run.stdout contains:
slurmstepd: Exceeded step memory limit at some point....

Log file:
Protocol failed: Command 'srun `which relion_particle_polish_mpi` --i Runs/015602_ProtRelionPolish/extra/relion_it032_data.star --o shiny --angpix 1.108 --movie_frames_running_avg 5 --dont_read_old_files --sym c2 --sigma_nb 100 --perframe_highres 3.000 --autob_lowres 20.000 --bg_radius 155 --mask Runs/015357_ProtRelionPostprocess/extra/postprocess_automask.mrc ' returned non-zero exit status 137

Any recommendations? In the GUI only MPI is available (no threads). What are working settings? Thank you! Sincerely, Dmitry
From: Grigory S. <sha...@gm...> - 2016-12-02 13:13:10
Hello Dmitry, post-processing in RELION is described in its wiki: http://www2.mrc-lmb.cam.ac.uk/relion/index.php/Analyse_results#Getting_higher_resolution_and_map_sharpening Did you provide a mask? Increasing the B-factor (e.g., -200) strongly enhances high-resolution features, including noise, so that is normal. If your resolution is worse than 10 Å I wouldn't use B-factor sharpening/MTF division at all. Also, the automatic B-factor estimation might simply not work in these cases. Best regards, Grigory

Grigory Sharov, Ph.D.
Institute of Genetics and Molecular and Cellular Biology
Integrated Structural Biology Department (CBI)
1, rue Laurent Fries
67404 Illkirch, France
tel. 03 69 48 51 00
e-mail: sh...@ig...

On Fri, Dec 2, 2016 at 1:57 PM, Dmitry Semchonok <sem...@gm...> wrote:
> Dear colleagues, I ran the relion-post-processing protocol and the resulting model has some noise, mainly around the center. How can I get rid of that noise and set the correct B-factor?
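For completeness, the relion-post-processing protocol in Scipion drives RELION's relion_postprocess program, so different B-factors can also be tried by hand. The sketch below assumes RELION 1.x/2.x option names and placeholder file names; check relion_postprocess --help on your installation before relying on it:

# Sharpen with a manually chosen B-factor of -50 instead of the automatic estimate
# (half-map, mask and MTF file names here are placeholders, not outputs of this thread)
relion_postprocess --i run_half1_class001_unfil.mrc \
  --mask postprocess_automask.mrc \
  --mtf mtf_k2_300kV.star --angpix 1.108 \
  --adhoc_bfac -50 --o postprocess_bfac50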
From: Dmitry S. <sem...@gm...> - 2016-12-02 12:57:46
Dear colleagues, I ran the relion-post-processing protocol and the resulting model has some noise, mainly around the center. The B-factor was first set to the "auto" value (how can I find out what it was, precisely, for the auto setting?). I then set the B-factor manually to -50, -100 and -200. The least noise was obtained with B-factor = -50. How can I get rid of that noise and set the correct B-factor? There is no such noise in the classification file. (The K2 MTF file was taken from the RELION MRC website.) Thank you. Sincerely, Dmitry
From: Dmitry S. <sem...@gm...> - 2016-12-01 11:08:13
Dear colleagues, I am trying to run the particle polishing protocol. As input I provided the 3D auto-refine output. The program stopped, saying:

00155: There are no images read in: please check your input file...
00156: File: src/exp_

and then fails with:

Protocol failed: Command 'srun `which relion_particle_polish_mpi` --i Runs/015051_ProtRelionPolish/extra/relion_it032_data.star --o shiny --angpix 1.108 --movie_frames_running_avg 5 --dont_read_old_files --sym c2 --sigma_nb 100 --perframe_highres 3.000 --autob_lowres 20.000 --bg_radius 160 --mask Runs/014999_ProtRelionPostprocess/extra/postprocess_automask.mrc ' returned non-zero exit status 1

When I press the eye icon in the settings to view the input, nothing is shown. Is there a solution? P.S. Post-processing runs with the same input file with no problems. Sincerely, Dmitry
From: Carlos O. S. <co...@cn...> - 2016-11-29 09:39:45
Dear Elad, in version 1.0.1, MPI is not fully working. You may try to use only threads, or no parallelization at all (threads=1, mpi=1). Kind regards, Carlos Oscar

On 11/28/16 19:56, Elad Binshtein wrote:
> Hi All, I am still trying to run the Xmipp RCT with no luck. I get an mpi4py MPI_ERR_TRUNCATE error in the terminal. Our IT tried to fix it together with SBGrid and it is not working. Has anyone seen this error?

On Tue, Nov 1, 2016, Grigory Sharov wrote:
> The protocol runs with mpi=2 and 1 thread (by default in the protocol; the thread number is not taken from the GUI) and prints all output to the console, not to run.stdout. Elad, in principle it should still run (however slowly); you could check the output in the console where you launched Scipion (if run interactively) or the output log file from the job submission system on your cluster.

On Tue, Nov 1, 2016, Carlos Oscar Sorzano wrote:
> Can you send the log so that we can better see what may be happening? If you attach a screenshot of the project, we may try to figure out the workflow you have followed.

On 11/01/16 17:05, Elad Binshtein wrote:
> I am trying to go through the initial volume tutorial. In the last step, EMAN and RANSAC work fine, but I cannot run the Xmipp RCT protocol (no error in the run log; it is just stuck on step 1/8). Any suggestion?
From: Elad B. <el...@gm...> - 2016-11-28 18:56:33
Hi All, I am still trying to run the Xmipp RCT with no luck. I get this error with mpi4py in the terminal:

Running runJobMPISlave: 1
Traceback (most recent call last):
  File "/home/sbgrid/programs/x86_64-linux/scipion/1.0.1/pyworkflow/apps/pw_protocol_mpirun.py", line 57, in <module>
    runJobMPISlave(comm)
  File "/home/sbgrid/programs/x86_64-linux/scipion/1.0.1/pyworkflow/utils/mpi.py", line 107, in runJobMPISlave
    done, command = req_recv.test()
  File "Request.pyx", line 237, in mpi4py.MPI.Request.test (src/mpi4py.MPI.c:52898)
  File "pickled.pxi", line 401, in mpi4py.MPI.PyMPI_test (src/mpi4py.MPI.c:31080)
mpi4py.MPI.Exception: MPI_ERR_TRUNCATE: message truncated

Our IT tried to fix it together with SBGrid and it is not working. Has anyone else seen this error? Thanks,

On Tue, Nov 1, 2016 at 3:41 PM, Grigory Sharov <sha...@gm...> wrote:
> Hi, the thing is that the protocol runs with mpi=2 and 1 thread (by default in the protocol; the thread number is not taken from the GUI) and prints all output to the console, not to run.stdout. Elad, in principle it should still run (however slowly); you could check the output in the console where you launched Scipion (if run interactively) or the output log file from the job submission system on your cluster.

--
Elad Binshtein, Ph.D.
Cryo EM specialist - staff scientist
Center for Structure Biology (CSB)
Vanderbilt University
Nashville, TN
Mobile: +1-615-481-4408
E-Mail: el...@gm...
From: Jose M. C. <ca...@cn...> - 2016-11-23 07:00:51
Dear Dmitry, Regarding local and non-local correction, like optical flow: in the context of last year's Map Challenge we could show that optical flow always produced equal or better results than the non-local approach. At the time we did not use dose compensation, which should improve things further. Josue Gomez has all the test data on our side; it was indeed compiled but never published. Wbw, JM

On Wed, Nov 23, 2016 at 7:35 AM, Carlos Oscar Sorzano <co...@cn...> wrote:
> Dear Dmitry, as far as I remember there is no serious comparison of the different strategies for dose compensation and frame alignment (either at the level of the micrograph, as in cross-correlation/optical flow, or at the level of the particle, as in polishing).

--
Prof. Jose-Maria Carazo
Biocomputing Unit, Head, CNB-CSIC
Spanish National Center for Biotechnology
Darwin 3, Universidad Autonoma de Madrid
28049 Madrid, Spain
Cell: +34639197980
From: Grigory S. <sha...@gm...> - 2016-11-23 06:49:56
Hi Dmitry, You could look at the following paper, https://www.ncbi.nlm.nih.gov/pubmed/27572725, which compares all popular methods of movie alignment except the most recent MotionCor2 program.

On Nov 22, 2016 13:55, "Dmitry Semchonok" <sem...@gm...> wrote:
> Dear colleagues, one more question. How do you use the grigoriefflab - unblur protocol? After the movie alignment protocol? How important is it to use this protocol? Won't particle polishing, in a way, do the same?
From: Carlos O. S. <co...@cn...> - 2016-11-23 06:35:51
Dear Dmitry, as far as I remember there is no serious comparison of the different strategies for dose compensation and frame alignment (either at the level of the micrograph, as in cross-correlation/optical flow, or at the level of the particle, as in polishing). Assuming that any combination is equivalent to any other is not reasonable, but unfortunately I don't have an answer as to which strategy is best. Each method showed, in its respective publication, that that action alone improved resolution, but there has been nothing on comparing or combining them. Kind regards, Carlos Oscar

On 22/11/2016 11:55, Dmitry Semchonok wrote:
> Dear colleagues, one more question. How do you use the grigoriefflab - unblur protocol? After the movie alignment protocol? How important is it to use this protocol? Won't particle polishing, in a way, do the same? Thank you. Sincerely, Dmitry
From: Jose M. de la R. T. <del...@gm...> - 2016-11-22 11:35:13
Dear Dmitry, What version of Scipion are you using? v1.0.x? In that version, cross-correlation global alignment (CC) was in the same protocol as optical flow local alignment (OF). In the devel version we have decoupled the different movie alignment protocols, but this will be for the next release. If you use v1.0, you will need to relaunch a "copy" of the movie alignment job and then combine both CC + OF. If you have already estimated the CTF for the first workflow, you could reuse that when re-extracting particles from the newly aligned micrographs. Or you can simply re-estimate with Xmipp or any other program if you like: start a new job as usual and use the new micrographs as input. Hope this helps, Jose Miguel

On Tue, Nov 22, 2016 at 11:04 AM, Dmitry Semchonok <sem...@gm...> wrote:
> Dear colleagues, I used the xmipp3 - movie alignment protocol with only cross-correlation correction, but now I would like to add optical flow correction. Is there a way to redo this step? I have already processed up to the final 3D model. I would also like to insert the xmipp - ctf estimation protocol between grigoriefflab and particle picking. Is that possible? Sincerely, Dmitry