From: Pablo C. <pc...@cn...> - 2017-02-03 11:06:38
Dear Yanfeng, My guess here is that there is a versions incompatibility between Scipion/Xmipp binaries (complied with openMPI 1.6, not sure, I'll check this) If this is the case we have 2 options: a:) Point to an openMPI 1.6 (for this please, edit the MPI_BINDIR, MPI_LIBDIR, and MPI_INCLUDE to point to a openMPI 1.6 version) b:) Use Scipion from sources, see https://github.com/I2PC/scipion/wiki/How-to-Install#installing-scipion-in-3-steps Let us know ho it goes, Kind regards , Pablo On 02/02/17 15:02, Zhou, Yanfeng wrote: > Hi Pablo, > > Thanks for the reply. I download the precompiled scipion file, and > xmipp folder is already there after I untar the install file. I > finish install with command like "scipion install spider --no-xmipp". > If I install with "scipion install", there is an error saying > imagej.tgz not exist. This error is described under > https://webcache.googleusercontent.com/search?q=cache:1yBKeY3aEKwJ:https://github.com/I2PC/scipion/issues/458+&cd=1&hl=en&ct=clnk&gl=us > > After my install with --no-xmipp, I can run xmipp applications without > xmipp_mpi_, either through scipion GUI or command line. The > MPI_BINDIR, MPI_LIBDIR, and MPI_INCLUDE are pointing to > /usr/lib64/openmpi folder by default. But newer versions of openmpi > no longer have libmpi.so.1 available. So I edit the xmipp.cfg to > point to an older version of MPI. > > I am running xmipp through scipion gui, but also test the command > line. Both give same error message when there is a xmipp_mpi_ in the > application. > > Please advise. > > Best, > Yanfeng > > > ------------------------------------------------------------------------ > *From:* Pablo Conesa [pc...@cn...] > *Sent:* Thursday, February 02, 2017 2:09 AM > *To:* Zhou, Yanfeng > *Cc:* jc...@cn... > *Subject:* Re: [xmipp] xmipp run error with xmipp_mpi_transform_filter > > Dear Yanfeng, > > by default "scipion install" compiles xmipp. > > So if you've run "scipion install" successfully, then xmipp is > compiled..unless you are working with binaries. > > Is that your case? > > Also, we do not change xmipp.cfg, although not sure if that makes any > difference. > > Are you running xmipp from the command line, without Scipion? > > > Best, Pablo. > > > > On 01/02/17 19:44, Carlos Oscar Sorzano wrote: >> >> Dear Yanfeng, >> >> I'm forwarding your message to the people more in charge of the >> installation issues. >> >> Kind regards, Carlos Oscar >> >> >> On 02/01/17 19:31, Zhou, Yanfeng wrote: >>> >>> Hi Carlos Oscar, >>> >>> Have another issue with xmipp distributed with scipion-v1.0.1, and I >>> failed to find a documentation on this topic. When I run commands >>> with mpi, like xmipp_mpi_run, it says >>> >>> Sorry! You were supposed to get help about: >>> >>> Opal_init:startup:internal-failure >>> >>> But I couldn’t open the help file: >>> >>> /software/lib/Linux-x86-64/openmpi-1.6.5-build3/share/openmpi/help-opal-runtime.txt: >>> No such file or directory. >>> >>> I have configured scipion.conf to point to openmpi directories >>> before scipion installation. After the install, openmpi directories >>> in xmipp.cfg are also edited. I could not find a documentation with >>> Scipion about how to successfully compile xmipp. Could you advise >>> or forward my message to someone who may know? >>> >>> Thank you, and I appreciate your input. >>> >>> Best, >>> >>> Yanfeng >>> >>> *From:*Carlos Oscar Sorzano [mailto:co...@cn...] >>> *Sent:* Tuesday, January 31, 2017 11:11 PM >>> *To:* Zhou, Yanfeng; xm...@cn... 
>>> *Subject:* Re: [xmipp] xmipp run error with xmipp_mpi_transform_filter >>> >>> Good to know. Thanks >>> >>> On 01/31/17 21:54, Zhou, Yanfeng wrote: >>> >>> Thanks for the help, Carlos Oscar. I have fixed it. >>> >>> Best, >>> >>> Yanfeng >>> >>> *From:*Carlos Oscar Sorzano [mailto:co...@cn...] >>> *Sent:* Friday, January 27, 2017 6:19 PM >>> *To:* Zhou, Yanfeng; xm...@cn... <mailto:xm...@cn...> >>> *Subject:* Re: [xmipp] xmipp run error with >>> xmipp_mpi_transform_filter >>> >>> Dear Yanfeng, >>> >>> currently Xmipp is distributed along with Scipion. You may >>> follow Scipion installation instructions at >>> https://github.com/I2PC/scipion/wiki/How-to-Install. >>> >>> Kind regards, Carlos Oscar >>> >>> On 25/01/2017 15:12, Zhou, Yanfeng wrote: >>> >>> Thanks for the information, Carlos Oscar. Could you briefly >>> advise me how to recompile Xmipp and link libmpi.so from >>> LD_LIBRARY_PATH? My Linux experience is limited, and >>> basically I am learning and editing with Google. It seems >>> not to help much. >>> >>> Best, >>> >>> Yanfeng >>> >>> *From:*Carlos Oscar Sorzano [mailto:co...@cn...] >>> *Sent:* Monday, January 23, 2017 6:21 PM >>> *To:* Zhou, Yanfeng; xm...@cn... >>> *Subject:* Re: [xmipp] xmipp run error with >>> xmipp_mpi_transform_filter >>> >>> Dear Yanfeng, >>> >>> I can think of two reasons for this error. The 1st one is >>> that Xmipp has not been compiled in your machine, so that >>> Xmipp binaries have been downloaded, which were compiled in >>> a machine with libmpi.so.1 and you have a newer version. >>> Recompiling Xmipp should solve the issue. The 2nd one is >>> that you have the library but there is no link called >>> libmpi.so pointing to it, making such a link at a place >>> accessible from the LD_LIBRARY_PATH should solve it. >>> >>> Kind regards, Carlos Oscar >>> >>> On 01/23/17 22:07, Zhou, Yanfeng wrote: >>> >>> Dear Xmipp developer, >>> >>> This is Yanfeng from Shire in Lexington, MA. I am >>> testing classaverages with xmipp 3.1 under red hat >>> enterprise linux 7 system. I get error in command: >>> >>> mpirun –np 8 –bynode ‘which xmip_mpi_transform_filter’ >>> –I Images/imported/run_001/images.xmd –bad_pixels >>> outliers 3.5 –xmipp_protocol_script >>> >>> It says error while loading shared libraries: libmpi.so.1. >>> >>> I installed Xmipp 3.1 for CentOS 64bit, and openmpi was >>> configured properly. I included the LD_LIBRARY path in >>> my .bashrc file. Could you advise what could be the >>> issue? Should I download the installer for generic >>> system instead of CentOS? >>> >>> Thank you! >>> >>> Best, >>> >>> Yanfeng >>> >>> *Yanfeng Zhou, Ph.D.* >>> >>> Senior Scientist, >>> >>> Analytical Research, Discovery Therapeutics >>> >>> Shire >>> >>> 300 Shire Way >>> >>> Lexington, MA 02421 >>> >>> Phone: +1(781)869-7789 >>> >>> Email: Yan...@Sh... >>> <mailto:Yan...@Sh...> >>> >>> >>> ****************************************************** >>> Shire plc, the ultimate parent of the Shire Group of >>> companies, is registered in Jersey No. 99854 >>> Registered Office: 22 Grenville Street, St Helier, >>> Jersey JE4 8PX >>> ****************************************************** >>> >>> >>> Please consider the environment before printing this e-mail >>> >>> This email and any files transmitted with it are >>> confidential and >>> may be legally privileged and are intended solely for >>> the use of >>> the individual or entity to whom they are addressed. 
If >>> you are >>> not the intended recipient please note that any disclosure, >>> distribution, or copying of this email is strictly >>> prohibited and may >>> be unlawful. If received in error, please delete this >>> email and any >>> attachments and confirm this to the sender. >>> >>> >>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------ >>> >>> Carlos Oscar Sánchez Sorzano e-mail:co...@cn... <mailto:co...@cn...> >>> >>> Biocomputing unithttp://biocomp.cnb.csic.es >>> >>> National Center of Biotechnology (CSIC) >>> >>> c/Darwin, 3 >>> >>> Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 >>> >>> 28049 MADRID (SPAIN) Fax: 34-91-585 4506 >>> >>> ------------------------------------------------------------------------ >>> >>> >>> ****************************************************** >>> Shire plc, the ultimate parent of the Shire Group of >>> companies, is registered in Jersey No. 99854 >>> Registered Office: 22 Grenville Street, St Helier, Jersey >>> JE4 8PX >>> ****************************************************** >>> >>> >>> Please consider the environment before printing this e-mail >>> >>> This email and any files transmitted with it are >>> confidential and >>> may be legally privileged and are intended solely for the use of >>> the individual or entity to whom they are addressed. If you are >>> not the intended recipient please note that any disclosure, >>> distribution, or copying of this email is strictly >>> prohibited and may >>> be unlawful. If received in error, please delete this email >>> and any >>> attachments and confirm this to the sender. >>> >>> >>> >>> >>> -- >>> >>> ------------------------------------------------------------------------ >>> >>> Carlos Oscar Sánchez Sorzano e-mail:co...@cn... <mailto:co...@cn...> >>> >>> Biocomputing unithttp://biocomp.cnb.csic.es >>> >>> National Center of Biotechnology (CSIC) >>> >>> c/Darwin, 3 >>> >>> Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 >>> >>> 28049 MADRID (SPAIN) Fax: 34-91-585 4506 >>> >>> ------------------------------------------------------------------------ >>> >>> >>> ****************************************************** >>> Shire plc, the ultimate parent of the Shire Group of companies, >>> is registered in Jersey No. 99854 >>> Registered Office: 22 Grenville Street, St Helier, Jersey JE4 8PX >>> ****************************************************** >>> >>> >>> Please consider the environment before printing this e-mail >>> >>> This email and any files transmitted with it are confidential and >>> may be legally privileged and are intended solely for the use of >>> the individual or entity to whom they are addressed. If you are >>> not the intended recipient please note that any disclosure, >>> distribution, or copying of this email is strictly prohibited >>> and may >>> be unlawful. If received in error, please delete this email and any >>> attachments and confirm this to the sender. >>> >>> >>> >>> -- >>> ------------------------------------------------------------------------ >>> Carlos Oscar Sánchez Sorzano e-mail:co...@cn... 
<mailto:co...@cn...> >>> Biocomputing unithttp://biocomp.cnb.csic.es >>> National Center of Biotechnology (CSIC) >>> c/Darwin, 3 >>> Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 >>> 28049 MADRID (SPAIN) Fax: 34-91-585 4506 >>> ------------------------------------------------------------------------ >>> >>> ****************************************************** >>> Shire plc, the ultimate parent of the Shire Group of companies, is >>> registered in Jersey No. 99854 >>> Registered Office: 22 Grenville Street, St Helier, Jersey JE4 8PX >>> ****************************************************** >>> >>> >>> Please consider the environment before printing this e-mail >>> >>> This email and any files transmitted with it are confidential and >>> may be legally privileged and are intended solely for the use of >>> the individual or entity to whom they are addressed. If you are >>> not the intended recipient please note that any disclosure, >>> distribution, or copying of this email is strictly prohibited and may >>> be unlawful. If received in error, please delete this email and any >>> attachments and confirm this to the sender. >> >> -- >> ------------------------------------------------------------------------ >> Carlos Oscar Sánchez Sorzano e-mail:co...@cn... >> Biocomputing unithttp://biocomp.cnb.csic.es >> National Center of Biotechnology (CSIC) >> c/Darwin, 3 >> Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 >> 28049 MADRID (SPAIN) Fax: 34-91-585 4506 >> ------------------------------------------------------------------------ > > > ****************************************************** > Shire plc, the ultimate parent of the Shire Group of companies, is > registered in Jersey No. 99854 > Registered Office: 22 Grenville Street, St Helier, Jersey JE4 8PX > ****************************************************** > > > Please consider the environment before printing this e-mail > > This email and any files transmitted with it are confidential and > may be legally privileged and are intended solely for the use of > the individual or entity to whom they are addressed. If you are > not the intended recipient please note that any disclosure, > distribution, or copying of this email is strictly prohibited and may > be unlawful. If received in error, please delete this email and any > attachments and confirm this to the sender. |
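For readers hitting the same mismatch: option (a) above amounts to editing the MPI variables in the Scipion/Xmipp configuration (scipion.conf / xmipp.cfg, as discussed in this thread) so that they point at an openMPI 1.6.x installation. A minimal sketch, assuming openMPI 1.6.5 lives under /opt/openmpi-1.6.5 (that prefix is only a placeholder for wherever the older version is installed on your system):

    # placeholder prefix -- point these at your own openMPI 1.6.x installation
    MPI_BINDIR = /opt/openmpi-1.6.5/bin
    MPI_LIBDIR = /opt/openmpi-1.6.5/lib
    MPI_INCLUDE = /opt/openmpi-1.6.5/include

Option (b), installing Scipion from sources, instead rebuilds Xmipp against the MPI already on the machine, which avoids the binary/runtime mismatch altogether (as Carlos Oscar also notes further down in the quoted thread).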
From: Pablo C. <pc...@cn...> - 2017-02-03 10:39:28
Dear Luigi, please follow this thread. Jose Miguel, I already replied to Luigi, and asked for the same thing. Let's keep this thread since situation is better explained. Cheers, Pablo Scipion Team On 03/02/17 11:32, Jose Miguel de la Rosa Trevin wrote: > > Dear Luigi, > > Thanks for providing feedback to us. Regarding your specific issues > with picking, what do you mean by "after providing the references" in > Xmipp and Eman pickers? To the best of my knowledge, both programs > generate the references on-the-fly, after the selection of some > particles manually. So, there is not need to provide pre-computed > references such as 2D class averages. > > In the case of EMAN2-Box, both the manual/supervised steps are > interwinned in a very interactive way. On the other hand, in Xmipp3 > picker, those steps are separated in two phases. You first need to > picking manually some micrographs entirely (or a well-defined box in > one micrograph is its contains many particles)...and then switch to > "Supervised" mode, where the algorithm will propose you some particle > candidates and you can correct it (either removing bad proposed > particles or adding missing one). The algorithm will "learn" from that > and then in another protocol you can launch the automatic picking > based on that training. > > Anyway, if you provide a few micrographs we could try to help about > the parameters selection. > > Hope this helps, > > Jose Miguel > > Scipion Team > > > ----------------------------------------- > > My name is Luigi De Colibus and I’m Post Doc at the Division of > Structural Biology of Oxford University. > I’m writing you because I’m trying to run automated particle picking > programs on my micrographs trough SCIPION. > Surprisingly, despite the fact my virus particle is very big and > evident on the micrographs, neither Xmipp particle picking program nor > Boxer in EMAN2 is able to pick correctly the particles after providing > the reference. > I guess my settings could be wrong and trying different values has not > helped so far. > > It would be great if you could point me in the right direction. > I can send you one of my micrographs if you need to do a test run. > > Thanks a lot in advance for your help! > > Best Regards > > Reply to: luigi at strubi.ox.ac.uk <http://strubi.ox.ac.uk> > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: Jose M. de la R. T. <del...@gm...> - 2017-02-03 10:32:19
Dear Luigi, Thanks for providing feedback to us. Regarding your specific issues with picking, what do you mean by "after providing the references" in the Xmipp and Eman pickers? To the best of my knowledge, both programs generate the references on the fly, after some particles have been selected manually. So there is no need to provide pre-computed references such as 2D class averages. In the case of EMAN2-Box, the manual/supervised steps are intertwined in a very interactive way. In the Xmipp3 picker, on the other hand, those steps are separated into two phases. You first need to pick some micrographs manually in their entirety (or a well-defined box within one micrograph, if it contains many particles)... and then switch to "Supervised" mode, where the algorithm proposes particle candidates that you can correct (either removing badly proposed particles or adding missing ones). The algorithm will "learn" from that, and in another protocol you can then launch the automatic picking based on that training. Anyway, if you provide a few micrographs we could try to help with the parameter selection. Hope this helps, Jose Miguel Scipion Team ----------------------------------------- My name is Luigi De Colibus and I'm a postdoc at the Division of Structural Biology of Oxford University. I'm writing because I'm trying to run automated particle picking programs on my micrographs through SCIPION. Surprisingly, despite the fact that my virus particle is very big and evident on the micrographs, neither the Xmipp particle picking program nor Boxer in EMAN2 is able to pick the particles correctly after I provide the reference. I guess my settings could be wrong, and trying different values has not helped so far. It would be great if you could point me in the right direction. I can send you one of my micrographs if you need to do a test run. Thanks a lot in advance for your help! Best Regards Reply to: luigi at strubi.ox.ac.uk
From: Dmitry S. <sem...@gm...> - 2017-02-02 14:20:02
Dear colleagues, 1. Wiener filter + CTF envelope correction --> relion 2D error (normalisation). I did the CTF correction of the particles using the Wiener filter script, plus correction for the CTF envelope. I then had a problem running the corrected particles in relion 2D alignment (in the relion settings I selected “NO” for the CTF correction). The error: Error: it appears that these images have not been normalized to an average background value of 0 and a stddev value of 1. Then I tried the relion preprocessing tool. Outcome: it finished, but the log says: WARNING! Stddev on image …… Xmipp_ProtCTFCorrectedWiener2D/corrected_ctf_particles.stk is zero! Skipping normalisation. I also tried the Xmipp preprocessing tool. Outcome: it finished with no error. But again, when you run relion 2D alignment you get the same error. Why? 2. Particle picking. Probably a very basic question, but it is still not routine for me. I am now collecting data on a Titan Krios, and one of the outputs is a binned jpg file where you can see your particles nicely. Do we have a similar tool in SCIPION that will allow us to initially bin the images, pick the particles, and then interpolate the coordinates onto the unbinned images? (Normally I don't use binning.) Is that what we can do using the CTF estimation script? If so, how and when can/should we unbin the dataset? Sincerely, Dmitry
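For reference, the relion error above is about the statistics of the area outside the particle: relion expects each particle image to have been normalised so that the background has mean 0 and standard deviation 1. A minimal numpy sketch of that condition (an illustration only, not relion's or Scipion's code; the function name and the bg_radius value are made up):

    import numpy as np

    def normalize_background(img, bg_radius):
        """Return (normalised image, background mask): background mean 0, stddev 1."""
        ny, nx = img.shape
        y, x = np.ogrid[:ny, :nx]
        mask = np.hypot(y - ny / 2.0, x - nx / 2.0) > bg_radius  # background area
        bg = img[mask]
        # if bg.std() were zero (the WARNING quoted above), this division would
        # blow up, which is why the preprocessing tool skips normalisation there
        return (img - bg.mean()) / bg.std(), mask

    # toy usage: a random 64x64 "particle" and a placeholder background radius
    img = np.random.default_rng(0).normal(10.0, 3.0, (64, 64))
    norm, mask = normalize_background(img, bg_radius=25)
    print(norm[mask].mean(), norm[mask].std())  # ~0.0 and ~1.0

A stack whose background statistics come out far from 0 and 1 is what triggers the message quoted above.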
From: Carlos O. S. <co...@cn...> - 2017-02-01 04:20:21
Dear Dmitry, Relion 2 is already integrated in the Scipion development branch. If you use that branch, you should be able to use Relion 2. Alternatively, the next Scipion release is coming out very soon, and you may want to wait for that more stable version. As you wish. Kind regards, Carlos Oscar On 01/31/17 16:19, Dmitry A. Semchonok wrote: > Dear colleagues, > > How can I update the SCIPION in order to use relion 2.0? Is it available > under the developers version? The idea is to run the project using gpu. > > Sincerely, > Dmitry > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > -- ------------------------------------------------------------------------ Carlos Oscar Sánchez Sorzano e-mail: co...@cn... Biocomputing unit http://biocomp.cnb.csic.es National Center of Biotechnology (CSIC) c/Darwin, 3 Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 28049 MADRID (SPAIN) Fax: 34-91-585 4506 ------------------------------------------------------------------------
From: Dmitry A. S. <sem...@gm...> - 2017-01-31 15:19:23
Dear colleagues, How can I update SCIPION in order to use relion 2.0? Is it available in the developer version? The idea is to run the project using the GPU. Sincerely, Dmitry
From: Pablo C. <pc...@cn...> - 2017-01-25 18:35:03
Hi Juha, sorry to hear that. Could you please check your Scipion version and your RELION version? The current Scipion release (v1.0.1) does not support RELION 2 files. Is that the case? If not, could you please send the STAR file? I guess we can point it to local micrographs and still get the error. Thanks, Pablo Scipion team. On 25/01/17 19:24, Juha Huiskonen wrote: > Hello > > I have a STAR file with micrograph name and particle coordinates for > each micrograph. > > I tried importing these with "Import coordinates" and selecting > "relion", giving the folder where the micrographs and STAR files are. > I get error micrographs not found and box size zero (or something > similar). > > Any suggestions? > > Best wishes, > Juha > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot > > > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users
From: Juha H. <ju...@st...> - 2017-01-25 18:25:04
Hello, I have a STAR file with the micrograph name and particle coordinates for each micrograph. I tried importing these with "Import coordinates", selecting "relion" and giving the folder where the micrographs and STAR files are. I get an error saying the micrographs were not found and the box size is zero (or something similar). Any suggestions? Best wishes, Juha
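For context, a coordinates STAR file of the kind described here typically looks something like the sketch below (the label names follow the usual RELION convention; the micrograph paths, coordinate values and column order are placeholders, not Juha's actual data). The importer has to resolve the _rlnMicrographName entries against the micrographs already in the project, which is presumably where a "micrographs not found" error comes from:

    data_
    loop_
    _rlnMicrographName #1
    _rlnCoordinateX    #2
    _rlnCoordinateY    #3
    Micrographs/mic_0001.mrc  1024.5   887.0
    Micrographs/mic_0001.mrc   512.0  2048.0
    Micrographs/mic_0002.mrc   300.0   640.0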
From: ldelcano <lde...@cn...> - 2017-01-23 09:13:13
Sorry Valerie, I was away last week and could not look at your problem again. Just a couple of more questions for you: First, have you installed the Scipion binaries or compiled the source on your machine? Are you running on a cluster or on a single machine? I believe you are running CL2D in your own project, right? Could you run this tests? scipion test tests.em.protocols.test_protocols_xmipp_2d.TestXmippCL2D thanks Laura On 23/01/17 09:44, Valerie Biou wrote: > Hello > > sorry to insist but I haven’t solved the problem. > > > here is the result of the commands that Laura asked me to type: > > biou@vblinux:~$ scipion run mpirun -np 4 hostname > > Scipion v1.0.1 (2016-06-30) Augusto > >>>>>> "mpirun" "-np" "4" "hostname" > vblinux > vblinux > vblinux > vblinux > biou@vblinux:~$ mpirun hostname > vblinux > vblinux > vblinux > vblinux > > Thanks! > > Valérie > >>> Le 18 janv. 2017 à 14:46, ldelcano <lde...@cn...> a écrit : >>> >>> Hi Valerie, >>> >>> it seems a problem with openmpi, can you run >>> >>> ./scipion run mpirun -np 4 hostname >>> >>> and just: >>> >>> mpirun hostname >>> >>> thanks >>> >>> Laura >>> >>> >>> On 18/01/17 11:06, Valerie Biou wrote: >>>> Dear all, >>>> >>>> I have installed the latest Scipion version on a linux machine Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-59-generic x86_64). >>>> the install tests have run OK but I have recurrent problems with CL2D: >>>> >>>> at first it was looking for libmpi.so.1 so I created a symbolic link : >>>> lrwxrwxrwx 1 root root 12 janv. 17 17:25 libmpi.so.1 -> libmpi.so.12 >>>> >>>> Now it fails with the message below. >>>> >>>> Can you help me fix this, please? >>>> >>>> Best regards, >>>> Valerie >>>> >>>> >>>> 00001: RUNNING PROTOCOL ----------------- >>>> 00002: Scipion: v1.0.1 >>>> 00003: currentDir: /home/biou/ScipionUserData/projects/EX_LMNG >>>> 00004: workingDir: Runs/001509_XmippProtCL2D >>>> 00005: runMode: Restart >>>> 00006: MPI: 2 >>>> 00007: threads: 1 >>>> 00008: Starting at step: 1 >>>> 00009: Running steps >>>> 00010: STARTED: convertInputStep, step 1 >>>> 00011: 2017-01-18 10:28:06.919867 >>>> 00012: FINISHED: convertInputStep, step 1 >>>> 00013: 2017-01-18 10:28:14.324241 >>>> 00014: STARTED: runJob, step 2 >>>> 00015: 2017-01-18 10:28:14.436293 >>>> 00016: mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4 >>>> 00017: -------------------------------------------------------------------------- >>>> 00018: The following command line options and corresponding MCA parameter have >>>> 00019: been deprecated and replaced as follows: >>>> 00020: >>>> 00021: Command line options: >>>> 00022: Deprecated: --bynode, -bynode >>>> 00023: Replacement: --map-by node >>>> 00024: >>>> 00025: Equivalent MCA parameter: >>>> 00026: Deprecated: rmaps_base_bynode >>>> 00027: Replacement: rmaps_base_mapping_policy=node >>>> 00028: >>>> 00029: The deprecated forms *will* disappear in a future version of Open MPI. >>>> 00030: Please update to the new syntax. >>>> 00031: -------------------------------------------------------------------------- >>>> 00032: -------------------------------------------------------------------------- >>>> 00033: A requested component was not found, or was unable to be opened. 
This >>>> 00034: means that this component is either not installed or is unable to be >>>> 00035: used on your system (e.g., sometimes this means that shared libraries >>>> 00036: that the component requires are unable to be found/loaded). Note that >>>> 00037: Open MPI stopped checking at the first component that it did not find. >>>> 00038: >>>> 00039: Host: vblinux >>>> 00040: Framework: ess >>>> 00041: Component: pmi >>>> 00042: -------------------------------------------------------------------------- >>>> 00043: -------------------------------------------------------------------------- >>>> 00044: A requested component was not found, or was unable to be opened. This >>>> 00045: means that this component is either not installed or is unable to be >>>> 00046: used on your system (e.g., sometimes this means that shared libraries >>>> 00047: that the component requires are unable to be found/loaded). Note that >>>> 00048: Open MPI stopped checking at the first component that it did not find. >>>> 00049: >>>> 00050: Host: vblinux >>>> 00051: Framework: ess >>>> 00052: Component: pmi >>>> 00053: -------------------------------------------------------------------------- >>>> 00054: [vblinux:02124] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 >>>> 00055: [vblinux:02123] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 >>>> 00056: -------------------------------------------------------------------------- >>>> 00057: It looks like orte_init failed for some reason; your parallel process is >>>> 00058: likely to abort. There are many reasons that a parallel process can >>>> 00059: fail during orte_init; some of which are due to configuration or >>>> 00060: environment problems. This failure appears to be an internal failure; >>>> 00061: here's some additional information (which may only be relevant to an >>>> 00062: Open MPI developer): >>>> 00063: >>>> 00064: orte_ess_base_open failed >>>> 00065: --> Returned value Not found (-13) instead of ORTE_SUCCESS >>>> 00066: -------------------------------------------------------------------------- >>>> 00067: -------------------------------------------------------------------------- >>>> 00068: It looks like orte_init failed for some reason; your parallel process is >>>> 00069: likely to abort. There are many reasons that a parallel process can >>>> 00070: fail during orte_init; some of which are due to configuration or >>>> 00071: environment problems. This failure appears to be an internal failure; >>>> 00072: here's some additional information (which may only be relevant to an >>>> 00073: Open MPI developer): >>>> 00074: >>>> 00075: orte_ess_base_open failed >>>> 00076: --> Returned value Not found (-13) instead of ORTE_SUCCESS >>>> 00077: -------------------------------------------------------------------------- >>>> 00078: *** An error occurred in MPI_Init >>>> 00079: *** on a NULL communicator >>>> 00080: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, >>>> 00081: *** and potentially your MPI job) >>>> 00082: *** An error occurred in MPI_Init >>>> 00083: *** on a NULL communicator >>>> 00084: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, >>>> 00085: *** and potentially your MPI job) >>>> 00086: -------------------------------------------------------------------------- >>>> 00087: It looks like MPI_INIT failed for some reason; your parallel process is >>>> 00088: likely to abort. 
There are many reasons that a parallel process can >>>> 00089: fail during MPI_INIT; some of which are due to configuration or environment >>>> 00090: problems. This failure appears to be an internal failure; here's some >>>> 00091: additional information (which may only be relevant to an Open MPI >>>> 00092: developer): >>>> 00093: >>>> 00094: ompi_mpi_init: ompi_rte_init failed >>>> 00095: --> Returned "Not found" (-13) instead of "Success" (0) >>>> 00096: -------------------------------------------------------------------------- >>>> 00097: -------------------------------------------------------------------------- >>>> 00098: It looks like MPI_INIT failed for some reason; your parallel process is >>>> 00099: likely to abort. There are many reasons that a parallel process can >>>> 00100: fail during MPI_INIT; some of which are due to configuration or environment >>>> 00101: problems. This failure appears to be an internal failure; here's some >>>> 00102: additional information (which may only be relevant to an Open MPI >>>> 00103: developer): >>>> 00104: >>>> 00105: ompi_mpi_init: ompi_rte_init failed >>>> 00106: --> Returned "Not found" (-13) instead of "Success" (0) >>>> 00107: -------------------------------------------------------------------------- >>>> 00108: [vblinux:2124] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! >>>> 00109: [vblinux:2123] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! >>>> 00110: ------------------------------------------------------- >>>> 00111: Primary job terminated normally, but 1 process returned >>>> 00112: a non-zero exit code.. Per user-direction, the job has been aborted. >>>> 00113: ------------------------------------------------------- >>>> 00114: -------------------------------------------------------------------------- >>>> 00115: mpirun detected that one or more processes exited with non-zero status, thus causing >>>> 00116: the job to be terminated. 
The first process to do so was: >>>> 00117: >>>> 00118: Process name: [[63772,1],0] >>>> 00119: Exit code: 1 >>>> 00120: -------------------------------------------------------------------------- >>>> 00121: Traceback (most recent call last): >>>> 00122: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 167, in run >>>> 00123: self._run() >>>> 00124: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 211, in _run >>>> 00125: resultFiles = self._runFunc() >>>> 00126: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 207, in _runFunc >>>> 00127: return self._func(*self._args) >>>> 00128: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 960, in runJob >>>> 00129: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) >>>> 00130: File "/usr/local/scipion/pyworkflow/protocol/executor.py", line 56, in runJob >>>> 00131: env=env, cwd=cwd) >>>> 00132: File "/usr/local/scipion/pyworkflow/utils/process.py", line 51, in runJob >>>> 00133: return runCommand(command, env, cwd) >>>> 00134: File "/usr/local/scipion/pyworkflow/utils/process.py", line 65, in runCommand >>>> 00135: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr, env=env, cwd=cwd) >>>> 00136: File "/usr/local/scipion/software/lib/python2.7/subprocess.py", line 540, in check_call >>>> 00137: raise CalledProcessError(retcode, cmd) >>>> 00138: CalledProcessError: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 >>>> 00139: Protocol failed: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 >>>> 00140: FAILED: runJob, step 2 >>>> 00141: 2017-01-18 10:28:14.966673 >>>> 00142: Cleaning temporarly files.... >>>> 00143: ------------------- PROTOCOL FAILED (DONE 2/13) >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Check out the vibrant tech community on one of the world's most >>>> engaging tech sites, SlashDot.org! http://sdm.link/slashdot >>>> _______________________________________________ >>>> scipion-users mailing list >>>> sci...@li... >>>> https://lists.sourceforge.net/lists/listinfo/scipion-users > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: Valerie B. <val...@ib...> - 2017-01-23 08:44:35
Hello sorry to insist but I haven’t solved the problem. here is the result of the commands that Laura asked me to type: biou@vblinux:~$ scipion run mpirun -np 4 hostname Scipion v1.0.1 (2016-06-30) Augusto >>>>> "mpirun" "-np" "4" "hostname" vblinux vblinux vblinux vblinux biou@vblinux:~$ mpirun hostname vblinux vblinux vblinux vblinux Thanks! Valérie >> Le 18 janv. 2017 à 14:46, ldelcano <lde...@cn...> a écrit : >> >> Hi Valerie, >> >> it seems a problem with openmpi, can you run >> >> ./scipion run mpirun -np 4 hostname >> >> and just: >> >> mpirun hostname >> >> thanks >> >> Laura >> >> >> On 18/01/17 11:06, Valerie Biou wrote: >>> Dear all, >>> >>> I have installed the latest Scipion version on a linux machine Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-59-generic x86_64). >>> the install tests have run OK but I have recurrent problems with CL2D: >>> >>> at first it was looking for libmpi.so.1 so I created a symbolic link : >>> lrwxrwxrwx 1 root root 12 janv. 17 17:25 libmpi.so.1 -> libmpi.so.12 >>> >>> Now it fails with the message below. >>> >>> Can you help me fix this, please? >>> >>> Best regards, >>> Valerie >>> >>> >>> 00001: RUNNING PROTOCOL ----------------- >>> 00002: Scipion: v1.0.1 >>> 00003: currentDir: /home/biou/ScipionUserData/projects/EX_LMNG >>> 00004: workingDir: Runs/001509_XmippProtCL2D >>> 00005: runMode: Restart >>> 00006: MPI: 2 >>> 00007: threads: 1 >>> 00008: Starting at step: 1 >>> 00009: Running steps >>> 00010: STARTED: convertInputStep, step 1 >>> 00011: 2017-01-18 10:28:06.919867 >>> 00012: FINISHED: convertInputStep, step 1 >>> 00013: 2017-01-18 10:28:14.324241 >>> 00014: STARTED: runJob, step 2 >>> 00015: 2017-01-18 10:28:14.436293 >>> 00016: mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4 >>> 00017: -------------------------------------------------------------------------- >>> 00018: The following command line options and corresponding MCA parameter have >>> 00019: been deprecated and replaced as follows: >>> 00020: >>> 00021: Command line options: >>> 00022: Deprecated: --bynode, -bynode >>> 00023: Replacement: --map-by node >>> 00024: >>> 00025: Equivalent MCA parameter: >>> 00026: Deprecated: rmaps_base_bynode >>> 00027: Replacement: rmaps_base_mapping_policy=node >>> 00028: >>> 00029: The deprecated forms *will* disappear in a future version of Open MPI. >>> 00030: Please update to the new syntax. >>> 00031: -------------------------------------------------------------------------- >>> 00032: -------------------------------------------------------------------------- >>> 00033: A requested component was not found, or was unable to be opened. This >>> 00034: means that this component is either not installed or is unable to be >>> 00035: used on your system (e.g., sometimes this means that shared libraries >>> 00036: that the component requires are unable to be found/loaded). Note that >>> 00037: Open MPI stopped checking at the first component that it did not find. >>> 00038: >>> 00039: Host: vblinux >>> 00040: Framework: ess >>> 00041: Component: pmi >>> 00042: -------------------------------------------------------------------------- >>> 00043: -------------------------------------------------------------------------- >>> 00044: A requested component was not found, or was unable to be opened. 
This >>> 00045: means that this component is either not installed or is unable to be >>> 00046: used on your system (e.g., sometimes this means that shared libraries >>> 00047: that the component requires are unable to be found/loaded). Note that >>> 00048: Open MPI stopped checking at the first component that it did not find. >>> 00049: >>> 00050: Host: vblinux >>> 00051: Framework: ess >>> 00052: Component: pmi >>> 00053: -------------------------------------------------------------------------- >>> 00054: [vblinux:02124] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 >>> 00055: [vblinux:02123] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 >>> 00056: -------------------------------------------------------------------------- >>> 00057: It looks like orte_init failed for some reason; your parallel process is >>> 00058: likely to abort. There are many reasons that a parallel process can >>> 00059: fail during orte_init; some of which are due to configuration or >>> 00060: environment problems. This failure appears to be an internal failure; >>> 00061: here's some additional information (which may only be relevant to an >>> 00062: Open MPI developer): >>> 00063: >>> 00064: orte_ess_base_open failed >>> 00065: --> Returned value Not found (-13) instead of ORTE_SUCCESS >>> 00066: -------------------------------------------------------------------------- >>> 00067: -------------------------------------------------------------------------- >>> 00068: It looks like orte_init failed for some reason; your parallel process is >>> 00069: likely to abort. There are many reasons that a parallel process can >>> 00070: fail during orte_init; some of which are due to configuration or >>> 00071: environment problems. This failure appears to be an internal failure; >>> 00072: here's some additional information (which may only be relevant to an >>> 00073: Open MPI developer): >>> 00074: >>> 00075: orte_ess_base_open failed >>> 00076: --> Returned value Not found (-13) instead of ORTE_SUCCESS >>> 00077: -------------------------------------------------------------------------- >>> 00078: *** An error occurred in MPI_Init >>> 00079: *** on a NULL communicator >>> 00080: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, >>> 00081: *** and potentially your MPI job) >>> 00082: *** An error occurred in MPI_Init >>> 00083: *** on a NULL communicator >>> 00084: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, >>> 00085: *** and potentially your MPI job) >>> 00086: -------------------------------------------------------------------------- >>> 00087: It looks like MPI_INIT failed for some reason; your parallel process is >>> 00088: likely to abort. There are many reasons that a parallel process can >>> 00089: fail during MPI_INIT; some of which are due to configuration or environment >>> 00090: problems. This failure appears to be an internal failure; here's some >>> 00091: additional information (which may only be relevant to an Open MPI >>> 00092: developer): >>> 00093: >>> 00094: ompi_mpi_init: ompi_rte_init failed >>> 00095: --> Returned "Not found" (-13) instead of "Success" (0) >>> 00096: -------------------------------------------------------------------------- >>> 00097: -------------------------------------------------------------------------- >>> 00098: It looks like MPI_INIT failed for some reason; your parallel process is >>> 00099: likely to abort. 
There are many reasons that a parallel process can >>> 00100: fail during MPI_INIT; some of which are due to configuration or environment >>> 00101: problems. This failure appears to be an internal failure; here's some >>> 00102: additional information (which may only be relevant to an Open MPI >>> 00103: developer): >>> 00104: >>> 00105: ompi_mpi_init: ompi_rte_init failed >>> 00106: --> Returned "Not found" (-13) instead of "Success" (0) >>> 00107: -------------------------------------------------------------------------- >>> 00108: [vblinux:2124] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! >>> 00109: [vblinux:2123] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! >>> 00110: ------------------------------------------------------- >>> 00111: Primary job terminated normally, but 1 process returned >>> 00112: a non-zero exit code.. Per user-direction, the job has been aborted. >>> 00113: ------------------------------------------------------- >>> 00114: -------------------------------------------------------------------------- >>> 00115: mpirun detected that one or more processes exited with non-zero status, thus causing >>> 00116: the job to be terminated. The first process to do so was: >>> 00117: >>> 00118: Process name: [[63772,1],0] >>> 00119: Exit code: 1 >>> 00120: -------------------------------------------------------------------------- >>> 00121: Traceback (most recent call last): >>> 00122: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 167, in run >>> 00123: self._run() >>> 00124: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 211, in _run >>> 00125: resultFiles = self._runFunc() >>> 00126: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 207, in _runFunc >>> 00127: return self._func(*self._args) >>> 00128: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 960, in runJob >>> 00129: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) >>> 00130: File "/usr/local/scipion/pyworkflow/protocol/executor.py", line 56, in runJob >>> 00131: env=env, cwd=cwd) >>> 00132: File "/usr/local/scipion/pyworkflow/utils/process.py", line 51, in runJob >>> 00133: return runCommand(command, env, cwd) >>> 00134: File "/usr/local/scipion/pyworkflow/utils/process.py", line 65, in runCommand >>> 00135: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr, env=env, cwd=cwd) >>> 00136: File "/usr/local/scipion/software/lib/python2.7/subprocess.py", line 540, in check_call >>> 00137: raise CalledProcessError(retcode, cmd) >>> 00138: CalledProcessError: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 >>> 00139: Protocol failed: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 >>> 00140: FAILED: runJob, step 2 >>> 00141: 2017-01-18 10:28:14.966673 >>> 00142: Cleaning temporarly files.... 
>>> 00143: ------------------- PROTOCOL FAILED (DONE 2/13) >>> >>> >>> ------------------------------------------------------------------------------ >>> Check out the vibrant tech community on one of the world's most >>> engaging tech sites, SlashDot.org! http://sdm.link/slashdot >>> _______________________________________________ >>> scipion-users mailing list >>> sci...@li... >>> https://lists.sourceforge.net/lists/listinfo/scipion-users >> > |
From: ldelcano <lde...@cn...> - 2017-01-18 13:46:59
Hi Valerie, it seems a problem with openmpi, can you run ./scipion run mpirun -np 4 hostname and just: mpirun hostname thanks Laura On 18/01/17 11:06, Valerie Biou wrote: > Dear all, > > I have installed the latest Scipion version on a linux machine Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-59-generic x86_64). > the install tests have run OK but I have recurrent problems with CL2D: > > at first it was looking for libmpi.so.1 so I created a symbolic link : > lrwxrwxrwx 1 root root 12 janv. 17 17:25 libmpi.so.1 -> libmpi.so.12 > > Now it fails with the message below. > > Can you help me fix this, please? > > Best regards, > Valerie > > > 00001: RUNNING PROTOCOL ----------------- > 00002: Scipion: v1.0.1 > 00003: currentDir: /home/biou/ScipionUserData/projects/EX_LMNG > 00004: workingDir: Runs/001509_XmippProtCL2D > 00005: runMode: Restart > 00006: MPI: 2 > 00007: threads: 1 > 00008: Starting at step: 1 > 00009: Running steps > 00010: STARTED: convertInputStep, step 1 > 00011: 2017-01-18 10:28:06.919867 > 00012: FINISHED: convertInputStep, step 1 > 00013: 2017-01-18 10:28:14.324241 > 00014: STARTED: runJob, step 2 > 00015: 2017-01-18 10:28:14.436293 > 00016: mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4 > 00017: -------------------------------------------------------------------------- > 00018: The following command line options and corresponding MCA parameter have > 00019: been deprecated and replaced as follows: > 00020: > 00021: Command line options: > 00022: Deprecated: --bynode, -bynode > 00023: Replacement: --map-by node > 00024: > 00025: Equivalent MCA parameter: > 00026: Deprecated: rmaps_base_bynode > 00027: Replacement: rmaps_base_mapping_policy=node > 00028: > 00029: The deprecated forms *will* disappear in a future version of Open MPI. > 00030: Please update to the new syntax. > 00031: -------------------------------------------------------------------------- > 00032: -------------------------------------------------------------------------- > 00033: A requested component was not found, or was unable to be opened. This > 00034: means that this component is either not installed or is unable to be > 00035: used on your system (e.g., sometimes this means that shared libraries > 00036: that the component requires are unable to be found/loaded). Note that > 00037: Open MPI stopped checking at the first component that it did not find. > 00038: > 00039: Host: vblinux > 00040: Framework: ess > 00041: Component: pmi > 00042: -------------------------------------------------------------------------- > 00043: -------------------------------------------------------------------------- > 00044: A requested component was not found, or was unable to be opened. This > 00045: means that this component is either not installed or is unable to be > 00046: used on your system (e.g., sometimes this means that shared libraries > 00047: that the component requires are unable to be found/loaded). Note that > 00048: Open MPI stopped checking at the first component that it did not find. 
> 00049: > 00050: Host: vblinux > 00051: Framework: ess > 00052: Component: pmi > 00053: -------------------------------------------------------------------------- > 00054: [vblinux:02124] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 > 00055: [vblinux:02123] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 > 00056: -------------------------------------------------------------------------- > 00057: It looks like orte_init failed for some reason; your parallel process is > 00058: likely to abort. There are many reasons that a parallel process can > 00059: fail during orte_init; some of which are due to configuration or > 00060: environment problems. This failure appears to be an internal failure; > 00061: here's some additional information (which may only be relevant to an > 00062: Open MPI developer): > 00063: > 00064: orte_ess_base_open failed > 00065: --> Returned value Not found (-13) instead of ORTE_SUCCESS > 00066: -------------------------------------------------------------------------- > 00067: -------------------------------------------------------------------------- > 00068: It looks like orte_init failed for some reason; your parallel process is > 00069: likely to abort. There are many reasons that a parallel process can > 00070: fail during orte_init; some of which are due to configuration or > 00071: environment problems. This failure appears to be an internal failure; > 00072: here's some additional information (which may only be relevant to an > 00073: Open MPI developer): > 00074: > 00075: orte_ess_base_open failed > 00076: --> Returned value Not found (-13) instead of ORTE_SUCCESS > 00077: -------------------------------------------------------------------------- > 00078: *** An error occurred in MPI_Init > 00079: *** on a NULL communicator > 00080: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > 00081: *** and potentially your MPI job) > 00082: *** An error occurred in MPI_Init > 00083: *** on a NULL communicator > 00084: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, > 00085: *** and potentially your MPI job) > 00086: -------------------------------------------------------------------------- > 00087: It looks like MPI_INIT failed for some reason; your parallel process is > 00088: likely to abort. There are many reasons that a parallel process can > 00089: fail during MPI_INIT; some of which are due to configuration or environment > 00090: problems. This failure appears to be an internal failure; here's some > 00091: additional information (which may only be relevant to an Open MPI > 00092: developer): > 00093: > 00094: ompi_mpi_init: ompi_rte_init failed > 00095: --> Returned "Not found" (-13) instead of "Success" (0) > 00096: -------------------------------------------------------------------------- > 00097: -------------------------------------------------------------------------- > 00098: It looks like MPI_INIT failed for some reason; your parallel process is > 00099: likely to abort. There are many reasons that a parallel process can > 00100: fail during MPI_INIT; some of which are due to configuration or environment > 00101: problems. 
This failure appears to be an internal failure; here's some > 00102: additional information (which may only be relevant to an Open MPI > 00103: developer): > 00104: > 00105: ompi_mpi_init: ompi_rte_init failed > 00106: --> Returned "Not found" (-13) instead of "Success" (0) > 00107: -------------------------------------------------------------------------- > 00108: [vblinux:2124] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! > 00109: [vblinux:2123] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! > 00110: ------------------------------------------------------- > 00111: Primary job terminated normally, but 1 process returned > 00112: a non-zero exit code.. Per user-direction, the job has been aborted. > 00113: ------------------------------------------------------- > 00114: -------------------------------------------------------------------------- > 00115: mpirun detected that one or more processes exited with non-zero status, thus causing > 00116: the job to be terminated. The first process to do so was: > 00117: > 00118: Process name: [[63772,1],0] > 00119: Exit code: 1 > 00120: -------------------------------------------------------------------------- > 00121: Traceback (most recent call last): > 00122: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 167, in run > 00123: self._run() > 00124: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 211, in _run > 00125: resultFiles = self._runFunc() > 00126: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 207, in _runFunc > 00127: return self._func(*self._args) > 00128: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 960, in runJob > 00129: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) > 00130: File "/usr/local/scipion/pyworkflow/protocol/executor.py", line 56, in runJob > 00131: env=env, cwd=cwd) > 00132: File "/usr/local/scipion/pyworkflow/utils/process.py", line 51, in runJob > 00133: return runCommand(command, env, cwd) > 00134: File "/usr/local/scipion/pyworkflow/utils/process.py", line 65, in runCommand > 00135: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr, env=env, cwd=cwd) > 00136: File "/usr/local/scipion/software/lib/python2.7/subprocess.py", line 540, in check_call > 00137: raise CalledProcessError(retcode, cmd) > 00138: CalledProcessError: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 > 00139: Protocol failed: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 > 00140: FAILED: runJob, step 2 > 00141: 2017-01-18 10:28:14.966673 > 00142: Cleaning temporarly files.... > 00143: ------------------- PROTOCOL FAILED (DONE 2/13) > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, SlashDot.org! 
http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users |
From: Valerie B. <val...@ib...> - 2017-01-18 10:06:13
Dear all, I have installed the latest Scipion version on a linux machine Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-59-generic x86_64). the install tests have run OK but I have recurrent problems with CL2D: at first it was looking for libmpi.so.1 so I created a symbolic link : lrwxrwxrwx 1 root root 12 janv. 17 17:25 libmpi.so.1 -> libmpi.so.12 Now it fails with the message below. Can you help me fix this, please? Best regards, Valerie 00001: RUNNING PROTOCOL ----------------- 00002: Scipion: v1.0.1 00003: currentDir: /home/biou/ScipionUserData/projects/EX_LMNG 00004: workingDir: Runs/001509_XmippProtCL2D 00005: runMode: Restart 00006: MPI: 2 00007: threads: 1 00008: Starting at step: 1 00009: Running steps 00010: STARTED: convertInputStep, step 1 00011: 2017-01-18 10:28:06.919867 00012: FINISHED: convertInputStep, step 1 00013: 2017-01-18 10:28:14.324241 00014: STARTED: runJob, step 2 00015: 2017-01-18 10:28:14.436293 00016: mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4 00017: -------------------------------------------------------------------------- 00018: The following command line options and corresponding MCA parameter have 00019: been deprecated and replaced as follows: 00020: 00021: Command line options: 00022: Deprecated: --bynode, -bynode 00023: Replacement: --map-by node 00024: 00025: Equivalent MCA parameter: 00026: Deprecated: rmaps_base_bynode 00027: Replacement: rmaps_base_mapping_policy=node 00028: 00029: The deprecated forms *will* disappear in a future version of Open MPI. 00030: Please update to the new syntax. 00031: -------------------------------------------------------------------------- 00032: -------------------------------------------------------------------------- 00033: A requested component was not found, or was unable to be opened. This 00034: means that this component is either not installed or is unable to be 00035: used on your system (e.g., sometimes this means that shared libraries 00036: that the component requires are unable to be found/loaded). Note that 00037: Open MPI stopped checking at the first component that it did not find. 00038: 00039: Host: vblinux 00040: Framework: ess 00041: Component: pmi 00042: -------------------------------------------------------------------------- 00043: -------------------------------------------------------------------------- 00044: A requested component was not found, or was unable to be opened. This 00045: means that this component is either not installed or is unable to be 00046: used on your system (e.g., sometimes this means that shared libraries 00047: that the component requires are unable to be found/loaded). Note that 00048: Open MPI stopped checking at the first component that it did not find. 00049: 00050: Host: vblinux 00051: Framework: ess 00052: Component: pmi 00053: -------------------------------------------------------------------------- 00054: [vblinux:02124] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 00055: [vblinux:02123] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 129 00056: -------------------------------------------------------------------------- 00057: It looks like orte_init failed for some reason; your parallel process is 00058: likely to abort. 
There are many reasons that a parallel process can 00059: fail during orte_init; some of which are due to configuration or 00060: environment problems. This failure appears to be an internal failure; 00061: here's some additional information (which may only be relevant to an 00062: Open MPI developer): 00063: 00064: orte_ess_base_open failed 00065: --> Returned value Not found (-13) instead of ORTE_SUCCESS 00066: -------------------------------------------------------------------------- 00067: -------------------------------------------------------------------------- 00068: It looks like orte_init failed for some reason; your parallel process is 00069: likely to abort. There are many reasons that a parallel process can 00070: fail during orte_init; some of which are due to configuration or 00071: environment problems. This failure appears to be an internal failure; 00072: here's some additional information (which may only be relevant to an 00073: Open MPI developer): 00074: 00075: orte_ess_base_open failed 00076: --> Returned value Not found (-13) instead of ORTE_SUCCESS 00077: -------------------------------------------------------------------------- 00078: *** An error occurred in MPI_Init 00079: *** on a NULL communicator 00080: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, 00081: *** and potentially your MPI job) 00082: *** An error occurred in MPI_Init 00083: *** on a NULL communicator 00084: *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, 00085: *** and potentially your MPI job) 00086: -------------------------------------------------------------------------- 00087: It looks like MPI_INIT failed for some reason; your parallel process is 00088: likely to abort. There are many reasons that a parallel process can 00089: fail during MPI_INIT; some of which are due to configuration or environment 00090: problems. This failure appears to be an internal failure; here's some 00091: additional information (which may only be relevant to an Open MPI 00092: developer): 00093: 00094: ompi_mpi_init: ompi_rte_init failed 00095: --> Returned "Not found" (-13) instead of "Success" (0) 00096: -------------------------------------------------------------------------- 00097: -------------------------------------------------------------------------- 00098: It looks like MPI_INIT failed for some reason; your parallel process is 00099: likely to abort. There are many reasons that a parallel process can 00100: fail during MPI_INIT; some of which are due to configuration or environment 00101: problems. This failure appears to be an internal failure; here's some 00102: additional information (which may only be relevant to an Open MPI 00103: developer): 00104: 00105: ompi_mpi_init: ompi_rte_init failed 00106: --> Returned "Not found" (-13) instead of "Success" (0) 00107: -------------------------------------------------------------------------- 00108: [vblinux:2124] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! 00109: [vblinux:2123] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! 00110: ------------------------------------------------------- 00111: Primary job terminated normally, but 1 process returned 00112: a non-zero exit code.. Per user-direction, the job has been aborted. 
00113: ------------------------------------------------------- 00114: -------------------------------------------------------------------------- 00115: mpirun detected that one or more processes exited with non-zero status, thus causing 00116: the job to be terminated. The first process to do so was: 00117: 00118: Process name: [[63772,1],0] 00119: Exit code: 1 00120: -------------------------------------------------------------------------- 00121: Traceback (most recent call last): 00122: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 167, in run 00123: self._run() 00124: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 211, in _run 00125: resultFiles = self._runFunc() 00126: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 207, in _runFunc 00127: return self._func(*self._args) 00128: File "/usr/local/scipion/pyworkflow/protocol/protocol.py", line 960, in runJob 00129: self._stepsExecutor.runJob(self._log, program, arguments, **kwargs) 00130: File "/usr/local/scipion/pyworkflow/protocol/executor.py", line 56, in runJob 00131: env=env, cwd=cwd) 00132: File "/usr/local/scipion/pyworkflow/utils/process.py", line 51, in runJob 00133: return runCommand(command, env, cwd) 00134: File "/usr/local/scipion/pyworkflow/utils/process.py", line 65, in runCommand 00135: check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr, env=env, cwd=cwd) 00136: File "/usr/local/scipion/software/lib/python2.7/subprocess.py", line 540, in check_call 00137: raise CalledProcessError(retcode, cmd) 00138: CalledProcessError: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 00139: Protocol failed: Command 'mpirun -np 2 -bynode `which xmipp_mpi_classify_CL2D` -i Runs/001509_XmippProtCL2D/extra/images.xmd --odir Runs/001509_XmippProtCL2D/extra --oroot level --nref 20 --iter 10 --distance correlation --classicalMultiref --nref0 4' returned non-zero exit status 1 00140: FAILED: runJob, step 2 00141: 2017-01-18 10:28:14.966673 00142: Cleaning temporarly files.... 00143: ------------------- PROTOCOL FAILED (DONE 2/13) |
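The failure above (a libmpi soname symlink followed by Open MPI reporting that its ess/pmi components cannot be found at orte_init) is the typical signature of an Open MPI version mismatch: the precompiled Xmipp binaries expect one Open MPI, while the mpirun on the PATH comes from another. A minimal diagnostic sketch, with example paths that are assumptions rather than anything taken from this thread:

    # Compare the Open MPI runtime in use with the one the binary was linked against.
    which mpirun && mpirun --version
    ldd "$(which xmipp_mpi_classify_CL2D)" | grep -i libmpi

    # If the binary expects libmpi.so.1 (Open MPI 1.6/1.8 era) but the system ships
    # libmpi.so.12 (Open MPI 1.10+), a soname symlink only hides the mismatch; the
    # runtime components (ess/pmi, ORTE) still come from the wrong version.
    # Instead, point the environment (and Scipion's MPI settings) at one matching
    # installation, e.g. a hypothetical Open MPI 1.6.5 prefix:
    export PATH=/opt/openmpi-1.6.5/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi-1.6.5/lib:$LD_LIBRARY_PATH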
From: Pablo C. <pc...@cn...> - 2017-01-17 09:42:53
|
Dear Meng, Are you by any chance, importing relion 2.0 star files. You are using Scipion v1.0 (released last February) and Relion 2.0 files are not compatible. We have tested Relion 2.0 beta and is working fine so far, but haven't released the code yet. If that is the case: A:) Scipion v1.0 can't use Relion 2.0, and we haven't explore any workaround. So no solution at all for this case B:) Checkout our devel branch, which currently supports Relion 2.0 files. BUt be aware that this is a development branch, and although we try hard to keep it stable, it hasn't gone through the extensive tests and quality checks that any Scipion release will. C:) Wait, if it's an option, until we release our new version v1.1, that is planned for February, but might be delayed until March depending on our release tests process success. Does this help? Cheers, Pablo. Scipion team. On 17/01/17 08:33, Pablo Conesa wrote: > > Dear Meng, > > > I could access your images now, thanks. > > > I guess you are getting this when clicking on a certain file in the > "Browser" window: > > > > > Am I right? > > > If so, it seems the browser is trying to get a preview image and for > some reason it can't deal with it. > > > Would it be possible, to share/send the file you are clicking in and, > if it's a star file that points to a mrcs or other binary file, the > binary file too? > > > Be aware that you are using a public email list, if you want don't > want to share sensible information/data, just reply to me > pc...@cn.... > > > Thanks for reporting, Pablo. > > > Scipion team. > > > On 16/01/17 19:45, Meng Yang, Dr wrote: >> Meng Yang, Dr has shared OneDrive for Business files with you. To >> view them, click the links below. >> >> <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem%281%29.png> >> >> problem(1).png >> <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem%281%29.png> >> [problem(1).png] >> >> <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/terminal_infor.png> >> >> terminal_infor.png >> <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/terminal_infor.png> >> [terminal_infor.png] >> >> Dear Josue, >> >> Thank you very much for your reply. >> >> Here I attached the screenshot of both the GUi and the terminal. >> >> Please let me know whether you can see them. >> >> >> Thanks again! >> >> ------------------------------------------------------------------------ >> *From:* Josue Gomez Blanco <jos...@gm...> >> *Sent:* January 16, 2017 6:37:59 AM >> *To:* Meng Yang, Dr >> *Cc:* sci...@li... >> *Subject:* Re: [scipion-users] Can't import star file into scipion >> Dear Yang: >> I cant see the picture that you sent. Can you attach a screenshot of >> the error and also the parameters? Thank you in advance. >> Josue >> >> 2017-01-16 16:07 GMT+01:00 Meng Yang, Dr <men...@ma... >> <mailto:men...@ma...>>: >> >> Meng Yang, Dr has shared a OneDrive for Business file with you. >> To view it, click the link below. 
>> >> <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem.png> >> >> problem.png >> <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem.png> >> [problem.png] >> >> Dear all, >> >> I have a problem about particle import into scipion, as am trying >> to import a star file but I keep getting the error message :do >> not know how to handle this type. >> I have checked the star file and everything and there is no >> problem with it. The third column in the star file >> (label: _rlnImageName) points to the actual place where the .mrcs >> are. >> >> >> Can somebody help me with this? Thank you very much! >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, SlashDot.org! >> <http://sdm.link/slashdot>http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> <mailto:sci...@li...> >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> <https://lists.sourceforge.net/lists/listinfo/scipion-users> >> >> >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, SlashDot.org!http://sdm.link/slashdot >> >> >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users > |
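Option B above amounts to building Scipion from source on its development branch. A minimal sketch, assuming the usual GitHub repository and launcher script (the URL and exact commands are assumptions, not taken from this message; adjust to your setup):

    # Hypothetical checkout and build of the devel branch.
    git clone https://github.com/I2PC/scipion.git
    cd scipion
    git checkout devel
    ./scipion install      # rebuilds Xmipp and the bundled dependencies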
From: Pablo C. <pc...@cn...> - 2017-01-17 07:33:36
|
Dear Meng, I could access your images now, thanks. I guess you are getting this when clicking on a certain file in the "Browser" window. Am I right? If so, it seems the browser is trying to get a preview image and for some reason it can't deal with it. Would it be possible to share/send the file you are clicking on and, if it's a star file that points to an mrcs or other binary file, the binary file too? Be aware that you are using a public email list; if you don't want to share sensitive information/data, just reply to me at pc...@cn.... Thanks for reporting, Pablo. Scipion team. On 16/01/17 19:45, Meng Yang, Dr wrote: > Meng Yang, Dr has shared OneDrive for Business files with you. To view them, click the links below. > > problem(1).png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem%281%29.png> > > terminal_infor.png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/terminal_infor.png> > > Dear Josue, > > Thank you very much for your reply. > > Here I attached screenshots of both the GUI and the terminal. > > Please let me know whether you can see them. > > Thanks again! > > ------------------------------------------------------------------------ > *From:* Josue Gomez Blanco <jos...@gm...> > *Sent:* January 16, 2017 6:37:59 AM > *To:* Meng Yang, Dr > *Cc:* sci...@li... > *Subject:* Re: [scipion-users] Can't import star file into scipion > Dear Yang: > I can't see the picture that you sent. Can you attach a screenshot of the error and also the parameters? Thank you in advance. > Josue > > 2017-01-16 16:07 GMT+01:00 Meng Yang, Dr <men...@ma...>: > > Meng Yang, Dr has shared a OneDrive for Business file with you. To view it, click the link below. > > problem.png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem.png> > > Dear all, > > I have a problem with particle import into Scipion: I am trying to import a star file but I keep getting the error message "do not know how to handle this type". > I have checked the star file and everything and there is no problem with it. The third column in the star file (label: _rlnImageName) points to the actual place where the .mrcs files are. > > Can somebody help me with this? Thank you very much! |
From: Meng Y. D. <men...@ma...> - 2017-01-16 21:18:26
|
Meng Yang, Dr has shared OneDrive for Business files with you. To view them, click the links below. problem(1).png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem(1).png> terminal_infor.png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/terminal_infor.png> Dear Josue, Thank you very much for your reply. Here I attached screenshots of both the GUI and the terminal. Please let me know whether you can see them. Thanks again! ________________________________ From: Josue Gomez Blanco <jos...@gm...> Sent: January 16, 2017 6:37:59 AM To: Meng Yang, Dr Cc: sci...@li... Subject: Re: [scipion-users] Can't import star file into scipion Dear Yang: I can't see the picture that you sent. Can you attach a screenshot of the error and also the parameters? Thank you in advance. Josue 2017-01-16 16:07 GMT+01:00 Meng Yang, Dr <men...@ma...>: Meng Yang, Dr has shared a OneDrive for Business file with you. To view it, click the link below. problem.png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem.png> Dear all, I have a problem with particle import into Scipion: I am trying to import a star file but I keep getting the error message "do not know how to handle this type". I have checked the star file and everything and there is no problem with it. The third column in the star file (label: _rlnImageName) points to the actual place where the .mrcs files are. Can somebody help me with this? Thank you very much! |
From: Josue G. B. <jos...@gm...> - 2017-01-16 18:38:06
|
Dear Yang: I can't see the picture that you sent. Can you attach a screenshot of the error and also the parameters? Thank you in advance. Josue 2017-01-16 16:07 GMT+01:00 Meng Yang, Dr <men...@ma...>: > Meng Yang, Dr has shared a OneDrive for Business file with you. To view it, click the link below. > > problem.png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem.png> > > Dear all, > > I have a problem with particle import into Scipion: I am trying to import a star file but I keep getting the error message "do not know how to handle this type". > I have checked the star file and everything and there is no problem with it. The third column in the star file (label: _rlnImageName) points to the actual place where the .mrcs files are. > > Can somebody help me with this? Thank you very much! |
From: Meng Y. D. <men...@ma...> - 2017-01-16 15:07:39
|
Meng Yang, Dr has shared a OneDrive for Business file with you. To view it, click the link below. problem.png <https://mcgill-my.sharepoint.com/personal/meng_yang_mail_mcgill_ca/Documents/Email%20attachments/problem.png> Dear all, I have a problem with particle import into Scipion: I am trying to import a star file but I keep getting the error message "do not know how to handle this type". I have checked the star file and everything and there is no problem with it. The third column in the star file (label: _rlnImageName) points to the actual place where the .mrcs files are. Can somebody help me with this? Thank you very much! |
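Whatever the root cause of the import error turns out to be, one easy thing to rule out is that the _rlnImageName entries do not resolve from the directory the import is launched from, or are not in the usual index@stack form. A quick check along those lines (the star file name is hypothetical, and the one-liner assumes the image name sits in the third column, as described in the message above):

    # List the stacks referenced in column 3 of the star file and flag any that
    # are missing relative to the current directory.
    awk '!/^(#|data_|loop_|_|[[:space:]]*$)/ { split($3, a, "@"); print ((2 in a) ? a[2] : a[1]) }' particles.star \
      | sort -u \
      | while read -r stack; do
          [ -e "$stack" ] || echo "missing: $stack"
        done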
From: Pablo C. <pc...@cn...> - 2017-01-16 10:29:36
|
Dear Dmitry, here you have 3 options. A:) Use Relion within Scipion and Scipion will manage the conversion for you. If you want to use Relion 2.0 beta, we have already tested this in our devel branch and, although it is pretty stable and we already have beta testers processing with it, it is a development branch, with all that implies in terms of stability. B:) Start the Relion protocol within Scipion and stop it once the conversion step has finished; you should then have the Relion star files in the extra folder. C:) All the data is in the databases and files, so you can create the star files yourself, but I would rather do A or B. All the best, Pablo. On 16/01/17 11:18, Dmitry Semchonok wrote: > Dear colleagues, > > Subset files: no *.star > > I have a subset of images obtained after 2D Relion (using Scipion). > > Inside the folder ProtUserSubSet there are files such as: > > particles sqlite > subset sqlite > > and the 3 folders: > > extra > log > temp > > I would like to know how I can continue using standalone Relion - which file can I use as an input for 3D, for example (if the *.star is absent)? > > Is it possible to fetch a *.star from the sqlite file? > > Sincerely, > Dmitry |
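For option B, once the protocol's conversion step has run, the converted files stay on disk under the run folder, so they can be picked up for standalone Relion. A sketch of where to look (the project name and run-folder glob are illustrative, following the Runs/<id>_<Protocol> layout visible in the logs earlier in this archive):

    # After launching (and then stopping) a Relion protocol on the subset in Scipion,
    # the converted star file should sit in that run's extra/ folder, e.g.:
    ls ~/ScipionUserData/projects/MY_PROJECT/Runs/*ProtRelion*/extra/*.star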
From: Dmitry S. <sem...@gm...> - 2017-01-16 10:18:44
|
Dear colleagues, Subset files: no *.star. I have a subset of images obtained after 2D Relion (using Scipion). Inside the folder ProtUserSubSet there are files such as: particles sqlite, subset sqlite, and the 3 folders: extra, log, temp. I would like to know how I can continue using standalone Relion - which file can I use as an input for 3D, for example (if the *.star is absent)? Is it possible to fetch a *.star from the sqlite file? Sincerely, Dmitry |
From: Carlos O. S. <co...@cn...> - 2017-01-12 14:58:01
|
Dear Dmitry, as far as we understand, there is no such thing as 2D auto-refinement. You may select a subset of classes and reclassify them using a different classifier (in this regard you may use Relion 2D to group bad particles into a few classes, and Xmipp CL2D to classify the good images; CL2D cores and stable cores also help to remove bad particles). Is that what you refer to? Kind regards, Carlos Oscar On 01/11/17 11:52, Dmitry Semchonok wrote: > Dear colleagues, > > ? --> auto-refinement for 2D > > We work a lot with negative staining samples, doing classification (in Relion 2D) and getting 2D projection average classes. > > It might be possible to improve 2D results by doing an auto-refinement step (for 2D). > > If yes, how can we do that? > > I tried to use the dataset and created a 2D mrc (vol) file as a reference, but the program ended up with some errors. > > Dear colleagues, do you have any ideas about that possibility? > > Sincerely, > > Dmitry -- Carlos Oscar Sánchez Sorzano e-mail: co...@cn... Biocomputing unit http://biocomp.cnb.csic.es National Center of Biotechnology (CSIC) c/Darwin, 3 Campus Universidad Autónoma (Cantoblanco) Tlf: 34-91-585 4510 28049 MADRID (SPAIN) Fax: 34-91-585 4506 |
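For the CL2D reclassification mentioned above, the command line that Scipion generates can also be run by hand on a cleaned subset; the invocation below mirrors the one recorded in the CL2D log earlier in this archive, and the parameter values are only illustrative and should be adjusted to the data:

    # Illustrative CL2D run, based on the logged command; -bynode is dropped
    # because newer Open MPI deprecates it in favour of --map-by node.
    mpirun -np 2 `which xmipp_mpi_classify_CL2D` \
        -i images.xmd --odir cl2d_out --oroot level \
        --nref 20 --iter 10 --nref0 4 \
        --distance correlation --classicalMultiref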
From: Carlos O. S. <co...@cn...> - 2017-01-12 09:00:31
|
Dear Liz, as you indicate, the RCT protocol is only capable of reconstructing volumes from individual untilted classes. To merge them, you may use any subtomogram averaging protocol. Currently, the only one fully available in Scipion (there are some others in development) is cltomo (do CTRL-F to locate it). The number of classes would be 1 in your case. Kind regards, Carlos Oscar -------- Forwarded Message -------- Subject: [scipion-users] about RCT - Can I generate a tilted particle set according to an untilted class set? Date: Wed, 11 Jan 2017 11:54:40 +0800 (CST) From: ZhuLi <ga...@16...> To: sci...@li... Dear colleagues, When I did RCT with Scipion, I found that it could only generate volumes according to the classified untilted sets (the only things you need to import in the "Random Conical Tilt" menu are the tilt_pairs and Good_classes files). After viewing those volumes, I need to merge a few classes that may come from the same conformation to generate a better volume. What should I do to extract and merge the tilted particles from the corresponding classes? (As you know, the classes were generated from untilted particles.) I hope I described my question clearly. Thanks. Liz |