From: Dieter B. <die...@me...> - 2017-07-07 04:52:35
Dear colleagues,

I am trying to window a stack of particle images and a corresponding volume from 450 x 450 x 450 px to 200 x 200 x 200 px, shifted up by 100 px along z. Are the following commands correct?

1) For the stack of particle images:

   xmipp_transform_window -i subtracted.mrcs --corners -100 -100 -0 99 -o subtracted_200_new.mrcs

2) For the volume:

   xmipp_transform_window -i run_class001.mrc:mrc --corners -100 -100 -0 99 99 199 -o run_class001.mrc:mrc

I am somewhat intrigued by the message issued on running on the stack of particle images:

   Output File: subtracted_200_new.mrcs
   New window: from (z0,y0,x0)=(0,-100,-100) to (zF,yF,xF)=(0,99,0)

What does it do with the z-axis here?

thanks, best regards,
Dieter

------------------------------------------------------------------------
Dieter Blaas,
Max F. Perutz Laboratories
Medical University of Vienna,
Inst. Med. Biochem., Vienna Biocenter (VBC),
Dr. Bohr Gasse 9/3, A-1030 Vienna, Austria,
Tel: 0043 1 4277 61630, Fax: 0043 1 4277 9616,
e-mail: die...@me...
------------------------------------------------------------------------
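[As an aside on the corner arithmetic in the message above (my own sketch, not from the thread): the "New window" output uses centered logical coordinates, so on a 450-px axis a 200-px window spans -100..99, and shifting that window up by 100 px gives 0..199. The helper below is hypothetical and only illustrates this arithmetic; it is not part of Xmipp.]

```python
def window_corners(old_size, new_size, shift=0):
    """Centered-coordinate corners for cropping an axis of length
    old_size down to new_size, shifted by `shift` px.
    Hypothetical helper for illustration; not part of Xmipp."""
    assert new_size <= old_size
    x0 = -(new_size // 2) + shift   # first kept pixel (centered coords)
    xF = x0 + new_size - 1          # last kept pixel
    return x0, xF

# 450 -> 200 px, unshifted (x and y axes): (-100, 99)
# 450 -> 200 px, shifted +100 px (z axis): (0, 199)
```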
From: Grigory S. <sha...@gm...> - 2017-06-28 10:31:39
Hi guys,

I think I found a workaround. The problem is that in my case, for the CPU Relion queues:

   PARALLEL_COMMAND = mpiexec -mca orte_forward_job_control 1 -n 120 ...

and in the template:

   #$ -pe openmpi 5
   #$ -l dedicated=24

where 24 is cores per node (the default, to book only whole nodes), 5 is the number of nodes, 120 is the total mpi*cores, and threads = 1 (in the Relion GUI: mpi=120, threads=1).

While for the GPU queue:

   PARALLEL_COMMAND = mpiexec -mca orte_forward_job_control 1 -n 5 ...

and in the template:

   #$ -pe openmpi 1
   #$ -l dedicated=32

where 32 is cores per node (the default, to book only whole nodes), 1 is the number of nodes, and 5 is the number of MPIs (Relion GUI: mpi=5, threads=6). I know it looks weird, but that's how things are :)

Since %_(JOB_CORES)d (mpis * threads) can be used only in the template but not in PARALLEL_COMMAND, I created a new variable JOB_NODES2 = mpi*threads/24, which is overwritten in the GPU case. So here is my final config. I still have to test it with non-Relion programs though:

   PARALLEL_COMMAND = mpiexec -mca orte_forward_job_control 1 -n %_(JOB_NODES)d %_(COMMAND)s
   SUBMIT_COMMAND = unset module; qsub %_(JOB_SCRIPT)s
   SUBMIT_TEMPLATE = #!/bin/sh
   #$ -V
   #$ -N scipion%_(JOB_NAME)s
   #$ -pe %_(JOB_PE_TYPE)s %_(JOB_NODES2)d
   #$ -l dedicated=%_(JOB_THREADS)d
   #$ -e %_(JOB_SCRIPT)s.err
   #$ -o %_(JOB_SCRIPT)s.out
   #$ -cwd
   #$ -S /bin/bash
   %_(JOB_EXTRA_PARAMS)s
   %_(JOB_COMMAND)s
   QUEUES = {
       "CPU single node": [["JOB_PE_TYPE", "smp", "SGE PE type", "Select SGE PE type: openmpi or smp"], ["JOB_EXTRA_PARAMS", "", "Extra params", "Provide extra params for SGE"]],
       "CPU 24 cores": [["JOB_PE_TYPE", "openmpi", "SGE PE type", "Select SGE PE type: openmpi or smp"], ["JOB_EXTRA_PARAMS", "", "Extra params", "Provide extra params for SGE"]],
       "relion CPU 24 cores": [["JOB_PE_TYPE", "openmpi", "SGE PE type", "Select SGE PE type: openmpi or smp"], ["JOB_EXTRA_PARAMS", "#$ -l dedicated=24 -A Relion", "Extra params", "Use number of MPIs multiple of 24"]],
       "GPU 32 cores": [["JOB_PE_TYPE", "openmpi", "SGE PE type", "Select SGE PE type: openmpi"], ["JOB_EXTRA_PARAMS", "#$ -pe openmpi 1 -l dedicated=32 -l gpu=4 -A Relion", "Extra params", "For Relion always use 5 mpis, 6 threads"]]
   }

To wrap up, it would be good to have a bit more flexibility in hosts.conf.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...

On Wed, Jun 28, 2017 at 10:52 AM, Jose Miguel de la Rosa Trevin <del...@gm...> wrote:
> Hi Grigory,
>
> Can you comment a bit more about the situation? Why do you need two different PARALLEL_COMMANDs in the same queue system? What I have seen (and used) is setting different variables in the SUBMIT_TEMPLATE that default to void values for some queues and are then defined in the specific ones (e.g., GPU resource allocation when some nodes are CPU-only).
>
> Thanks for reporting,
> Bests,
> Jose Miguel
>
> On Wed, Jun 28, 2017 at 11:48 AM, Pablo Conesa <pc...@cn...> wrote:
>> Hi Grigory, I don't see a workaround, I'll add it to github. Thanks.
>>
>> On 27/06/17 18:43, Grigory Sharov wrote:
>>> Hi all,
>>>
>>> is it possible to provide a different PARALLEL_COMMAND in hosts.conf for a specific queue? It would also be great to have multiple SUBMIT_TEMPLATEs...
>>>
>>> Best regards,
>>> Grigory

_______________________________________________
scipion-users mailing list
sci...@li...
https://lists.sourceforge.net/lists/listinfo/scipion-users
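[A quick sketch of the placeholder substitution discussed above (my own illustration, not Scipion's actual implementation): the %_(NAME)s / %_(NAME)d tokens in hosts.conf behave like simple text substitutions, so a derived variable such as JOB_NODES2 = mpi*threads/24 can be sanity-checked in a few lines. The `expand` helper below is hypothetical.]

```python
import re

def expand(template, values):
    """Expand %_(NAME)s / %_(NAME)d placeholders in a template line.
    Hypothetical helper for illustration only; Scipion's real
    substitution logic may differ."""
    def repl(m):
        name, kind = m.group(1), m.group(2)
        v = values[name]
        return str(int(v)) if kind == "d" else str(v)
    return re.sub(r"%_\((\w+)\)([ds])", repl, template)

# For the CPU queue above: mpi=120, threads=1 -> JOB_NODES2 = 120 * 1 // 24 = 5
line = "#$ -pe %_(JOB_PE_TYPE)s %_(JOB_NODES2)d"
print(expand(line, {"JOB_PE_TYPE": "openmpi", "JOB_NODES2": 120 * 1 // 24}))
# -> #$ -pe openmpi 5
```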
From: Pablo C. <pc...@cn...> - 2017-06-28 09:58:04
Hi Dmitry,

For one of the runs you are getting a NoneType error when getting the dimensions of the movies. Can you check that the input movies have proper dimensions? Just a wild guess.

On 27/06/17 23:10, Dmitry Semchonok wrote:
> Dear colleagues,
>
> I have a problem with the protocol xmipp3 - movie extract particles.
>
> On our cluster, if I use threads, the process stops at step 14, and if I use MPI (locally MPI = 1) the program keeps failing.
>
> Does anyone know what is wrong and how to make the protocol work?
>
> Please see the attachment.
>
> Thank you!
>
> Sincerely,
> Dmitry
From: Jose M. de la R. T. <del...@gm...> - 2017-06-28 09:52:35
Hi Grigory,

Can you comment a bit more about the situation? Why do you need two different PARALLEL_COMMANDs in the same queue system? What I have seen (and used) is setting different variables in the SUBMIT_TEMPLATE that default to void values for some queues and are then defined in the specific ones (e.g., GPU resource allocation when some nodes are CPU-only).

Thanks for reporting,
Bests,
Jose Miguel

On Wed, Jun 28, 2017 at 11:48 AM, Pablo Conesa <pc...@cn...> wrote:
> Hi Grigory, I don't see a workaround, I'll add it to github. Thanks.
>
> On 27/06/17 18:43, Grigory Sharov wrote:
>> Hi all,
>>
>> is it possible to provide a different PARALLEL_COMMAND in hosts.conf for a specific queue? It would also be great to have multiple SUBMIT_TEMPLATEs...
>>
>> Best regards,
>> Grigory
From: Pablo C. <pc...@cn...> - 2017-06-28 09:49:06
Hi Grigory, I don't see a workaround, I'll add it to github. Thanks.

On 27/06/17 18:43, Grigory Sharov wrote:
> Hi all,
>
> is it possible to provide a different PARALLEL_COMMAND in hosts.conf for a specific queue? It would also be great to have multiple SUBMIT_TEMPLATEs...
>
> Best regards,
> Grigory
From: Laura d. C. <su...@bc...> - 2017-06-28 09:27:40
Hi Juha,

No, I tested on a machine with a single GPU. Could you please tell us if you are pre-reading particles into RAM, the number of pooled particles, and the number of MPIs? Also the GPU RAM on each node and the exact error you are getting. You could also try to do the resize and normalize with the Relion preprocess protocol and tell us if it works.

thanks in advance
Laura

On Tue, 27 June at 3:52 PM, Juha Huiskonen <ju...@st...> wrote:
> Hi Laura,
>
> Is your test with multiple nodes or GPUs? I wonder what happens in Relion if there is just one stack, when it's splitting it. Also I guess memory can be an issue when opening a large single stack on GPUs? I have 160,000 particles that were originally 300 pix and are now 100 pix. I was running 3D classification and it indeed crashed in the Estimating Initial Noise step.
>
> Best wishes,
> Juha
>
> On Tue, Jun 27, 2017 at 2:28 PM, ldelcano <lde...@cn...> wrote:
>> Hi Jose Miguel,
>>
>> I have tested with a project that I have. Particles 200 px resized to 66 px (factor 0.333) and normalized with Xmipp. Then I ran Relion 2D classification and it arrives at Iteration 1 without errors. I guess the error that Juha is reporting is in Estimating Initial Noise Spectra. I will try tomorrow with another dataset.
>>
>> cheers
>> Laura
>>
>> On 27/06/17 15:03, Jose Miguel de la Rosa Trevin wrote:
>>> Laura, have you checked if the same error happens (or not) with a tutorial dataset, for example?
From: Dmitry S. <sem...@gm...> - 2017-06-27 21:11:09
Dear colleagues,

I have a problem with the protocol xmipp3 - movie extract particles.

On our cluster, if I use threads, the process stops at step 14, and if I use MPI (locally MPI = 1) the program keeps failing.

Does anyone know what is wrong and how to make the protocol work?

Please see the attachment.

Thank you!

Sincerely,
Dmitry
From: Grigory S. <sha...@gm...> - 2017-06-27 16:43:56
Hi all,

is it possible to provide a different PARALLEL_COMMAND in hosts.conf for a specific queue? It would also be great to have multiple SUBMIT_TEMPLATEs...

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...
From: Juha H. <ju...@st...> - 2017-06-27 15:03:37
It seems binning results in one big stack called output_images.stk.

On Tue, Jun 27, 2017 at 3:04 PM, Jose Miguel de la Rosa Trevin <del...@gm...> wrote:
> Hi Juha,
>
> Which protocol makes the single stack: the binning one or the normalization? Apart from this specific issue later in the Relion protocol, maybe it would be a good idea to keep a stack per micrograph name.
>
> Bests,
> Jose Miguel
>
> On Tue, Jun 27, 2017 at 3:35 PM, Juha Huiskonen <ju...@st...> wrote:
>> Hi Laura,
>>
>> Is your test with multiple nodes or GPUs? I wonder what happens in Relion if there is just one stack, when it's splitting it. Also I guess memory can be an issue when opening a large single stack on GPUs? I have 160,000 particles that were originally 300 pix and are now 100 pix. I was running 3D classification and it indeed crashed in the Estimating Initial Noise step.
>>
>> Best wishes,
>> Juha
From: Jose M. de la R. T. <del...@gm...> - 2017-06-27 14:04:27
Hi Juha,

Which protocol makes the single stack: the binning one or the normalization? Apart from this specific issue later in the Relion protocol, maybe it would be a good idea to keep a stack per micrograph name.

Bests,
Jose Miguel

On Tue, Jun 27, 2017 at 3:35 PM, Juha Huiskonen <ju...@st...> wrote:
> Hi Laura,
>
> Is your test with multiple nodes or GPUs? I wonder what happens in Relion if there is just one stack, when it's splitting it. Also I guess memory can be an issue when opening a large single stack on GPUs? I have 160,000 particles that were originally 300 pix and are now 100 pix. I was running 3D classification and it indeed crashed in the Estimating Initial Noise step.
>
> Best wishes,
> Juha
>
> On Tue, Jun 27, 2017 at 2:28 PM, ldelcano <lde...@cn...> wrote:
>> Hi Jose Miguel,
>>
>> I have tested with a project that I have. Particles 200 px resized to 66 px (factor 0.333) and normalized with Xmipp. Then I ran Relion 2D classification and it arrives at Iteration 1 without errors. I guess the error that Juha is reporting is in Estimating Initial Noise Spectra. I will try tomorrow with another dataset.
>>
>> cheers
>> Laura
From: Juha H. <ju...@st...> - 2017-06-27 13:52:32
Hi Laura,

Is your test with multiple nodes or GPUs? I wonder what happens in Relion if there is just one stack, when it's splitting it. Also I guess memory can be an issue when opening a large single stack on GPUs? I have 160,000 particles that were originally 300 pix and are now 100 pix. I was running 3D classification and it indeed crashed in the Estimating Initial Noise step.

Best wishes,
Juha

On Tue, Jun 27, 2017 at 2:28 PM, ldelcano <lde...@cn...> wrote:
> Hi Jose Miguel,
>
> I have tested with a project that I have. Particles 200 px resized to 66 px (factor 0.333) and normalized with Xmipp. Then I ran Relion 2D classification and it arrives at Iteration 1 without errors. I guess the error that Juha is reporting is in Estimating Initial Noise Spectra. I will try tomorrow with another dataset.
>
> cheers
> Laura
>
> On 27/06/17 15:03, Jose Miguel de la Rosa Trevin wrote:
>> Laura, have you checked if the same error happens (or not) with a tutorial dataset, for example?
>>
>> On Tue, Jun 27, 2017 at 3:02 PM, Laura del Caño <su...@bc...> wrote:
>>> Hi Juha,
>>>
>>> could you send us protocol parameters for the binning and normalization, as well as particle size (and maybe logs) to try to reproduce the problem?
>>>
>>> thanks
>>> Laura
From: Grigory S. <sha...@gm...> - 2017-06-27 13:39:57
|
Hi Juha, can it be that your particle stack doesn't fit into memory? Best regards, Grigory -------------------------------------------------------------------------------- Grigory Sharov, Ph.D. MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge CB2 0QH, UK. tel. +44 (0) 1223 267542 <+44%201223%20267542> e-mail: gs...@mr... On Tue, Jun 27, 2017 at 2:28 PM, ldelcano <lde...@cn...> wrote: > Hi Jose Miguel, > > I have tested with a project that I have. Particles 200 px resized to 66 > px (factor 0.333) and normalized with Xmipp. Then I run Relion 2D > classification and it arrives to Iteration 1 without errors. I guess the > error that Juha is reporting is on Estimating Initial Noise Expectra. I > will try tomorrow with another dataset. > > cheers > > Laura > > On 27/06/17 15:03, Jose Miguel de la Rosa Trevin wrote: > > Laura, have you checked if the same error happens (or not) with a tutorial > dataset, for example? > > On Tue, Jun 27, 2017 at 3:02 PM, Laura del Caño < > su...@bc...> wrote: > >> Hi Juha, >> >> could you send us protocol parameters for the binning and normalization, >> as well as particle size (and maybe logs) to try to reproduce the problem? >> >> thanks >> Laura >> >> >> Activo Mar, 27 Junio at 1:21 PM , Juha Huiskonen <ju...@st...> >> Escrito: >> Dear all, >> >> I need to run a fairly large classification run. To speed it up, I have >> run xmipp-crop/resize particles to achieve binning by factor of 3 and then >> normalised the particles again with xmipp-preprocess particles. >> >> Without binning/renormalization the job runs fine (albeit slowly). With >> binning/renormalization I get a segmentation fault referring to libfftw3 >> right when the 'Estimating initial noise spectra' starts. I am running on >> GPU. >> >> The only difference I can see between the two is that after >> binning/renormalization all the particles are in one large stack, whereas >> normally they are in several separate stacks. 
Has anyone else seen this >> behaviour? Any suggestions? >> >> Best wishes, >> Juha >> >> 168:553135 >> >> ------------------------------------------------------------ >> ------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> scipion-users mailing list >> sci...@li... >> https://lists.sourceforge.net/lists/listinfo/scipion-users >> >> > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > > _______________________________________________ > scipion-users mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/scipion-users > > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > scipion-users mailing list > sci...@li... > https://lists.sourceforge.net/lists/listinfo/scipion-users > > |
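[A rough back-of-the-envelope for that memory question (my own arithmetic, not from the thread, assuming uncompressed float32 pixels and ignoring headers): Juha's 160,000 particles at 300 px per side come to roughly 57.6 GB as a single stack, versus about 6.4 GB after binning to 100 px.]

```python
def stack_size_gb(n_particles, box_px, bytes_per_px=4):
    """Approximate size of an uncompressed float32 2D particle stack
    in decimal GB (header overhead ignored)."""
    return n_particles * box_px * box_px * bytes_per_px / 1e9

print(stack_size_gb(160_000, 300))  # original 300-px boxes: 57.6 GB
print(stack_size_gb(160_000, 100))  # binned to 100 px: 6.4 GB
```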
From: ldelcano <lde...@cn...> - 2017-06-27 13:28:16
Hi Jose Miguel,

I have tested with a project that I have. Particles 200 px resized to 66 px (factor 0.333) and normalized with Xmipp. Then I ran Relion 2D classification and it arrives at Iteration 1 without errors. I guess the error that Juha is reporting is in Estimating Initial Noise Spectra. I will try tomorrow with another dataset.

cheers

Laura

On 27/06/17 15:03, Jose Miguel de la Rosa Trevin wrote:
> Laura, have you checked if the same error happens (or not) with a tutorial dataset, for example?
>
> On Tue, Jun 27, 2017 at 3:02 PM, Laura del Caño <su...@bc...> wrote:
>> Hi Juha,
>>
>> could you send us protocol parameters for the binning and normalization, as well as particle size (and maybe logs) to try to reproduce the problem?
>>
>> thanks
>> Laura
>>
>> On Tue, 27 June at 1:21 PM, Juha Huiskonen <ju...@st...> wrote:
>>> Dear all,
>>>
>>> I need to run a fairly large classification run. To speed it up, I have run xmipp - crop/resize particles to achieve binning by a factor of 3 and then normalised the particles again with xmipp - preprocess particles.
>>>
>>> Without binning/renormalization the job runs fine (albeit slowly). With binning/renormalization I get a segmentation fault referring to libfftw3 right when the 'Estimating initial noise spectra' step starts. I am running on GPU.
>>>
>>> The only difference I can see between the two is that after binning/renormalization all the particles are in one large stack, whereas normally they are in several separate stacks. Has anyone else seen this behaviour? Any suggestions?
>>>
>>> Best wishes,
>>> Juha
From: Jose M. de la R. T. <del...@gm...> - 2017-06-27 13:03:52
|
Laura, have you checked if the same error happens (or not) with a tutorial dataset, for example?

On Tue, Jun 27, 2017 at 3:02 PM, Laura del Caño <su...@bc...> wrote:
> Hi Juha,
>
> could you send us protocol parameters for the binning and normalization,
> as well as particle size (and maybe logs) to try to reproduce the problem?
>
> thanks
> Laura
|
From: Laura d. C. <su...@bc...> - 2017-06-27 13:02:16
|
Hi Juha,

could you send us the protocol parameters for the binning and normalization, as well as the particle size (and maybe the logs), so we can try to reproduce the problem?

thanks
Laura

On Tue, 27 June at 1:21 PM, Juha Huiskonen <ju...@st...> wrote:
> Dear all, [...]
|
From: Juha H. <ju...@st...> - 2017-06-27 11:21:38
|
Dear all,

I need to run a fairly large classification run. To speed it up, I have run xmipp-crop/resize particles to achieve binning by a factor of 3, and then normalised the particles again with xmipp-preprocess particles.

Without binning/renormalization the job runs fine (albeit slowly). With binning/renormalization I get a segmentation fault referring to libfftw3 right when 'Estimating initial noise spectra' starts. I am running on GPU.

The only difference I can see between the two runs is that after binning/renormalization all the particles are in one large stack, whereas normally they are in several separate stacks. Has anyone else seen this behaviour? Any suggestions?

Best wishes,
Juha

168:553135
|
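One thing worth ruling out (an assumption about the cause, not something the error message confirms): FFT libraries perform best, and are most heavily tested, on box sizes that factor into small primes, and 200 px binned by a factor of 3 gives a 66 px box, which carries the prime factor 11. A quick check:

```python
def is_fft_friendly(n, primes=(2, 3, 5, 7)):
    """Return True if n factors entirely into the given small primes."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

# 200 = 2^3 * 5^2 (friendly); 66 = 2 * 3 * 11 (not); 64 = 2^6 (friendly)
print(is_fft_friendly(200), is_fft_friendly(66), is_fft_friendly(64))  # -> True False True
```

If the odd box size turns out to matter, rebinning to a nearby friendly size such as 64 px would be an easy experiment.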
From: Jose M. de la R. T. <del...@gm...> - 2017-06-16 15:30:06
|
Hi David,

I'm afraid the queues are read from a JSON config file into a Python dict, which does not guarantee any specific iteration order. We need to check where this needs to be changed to use an OrderedDict, or to store the description as a list of (key, value) pairs.

Thanks for your feedback,
Jose Miguel
|
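The OrderedDict approach can be sketched as follows (the JSON snippet is illustrative, not the real hosts.conf schema). On Python 2, which Scipion used at the time, a plain `json.loads` returns an unordered dict; passing `object_pairs_hook=OrderedDict` preserves the order in which the queues appear in the file:

```python
import json
from collections import OrderedDict

# Illustrative hosts.conf-style fragment; queue names in the desired display order.
raw = '{"queues": {"gpu": {}, "short": {}, "long": {}}}'

# object_pairs_hook makes the parser build OrderedDicts, keeping file order.
cfg = json.loads(raw, object_pairs_hook=OrderedDict)
print(list(cfg["queues"]))  # -> ['gpu', 'short', 'long']
```

On Python 3.7+ plain dicts preserve insertion order anyway, so the hook is only strictly needed on older interpreters.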
From: Hoover , D. (NIH/C. [E] <hoo...@he...> - 2017-06-16 13:38:34
|
Is there a way to order the queues in the hosts.conf file in the GUI? We have about 10 queues available to users, and the ordering is random. David Hoover HPC @ NIH |
From: Grigory S. <sha...@gm...> - 2017-06-16 11:45:33
|
Hi Dmitry,

first of all, I suggest updating from your beta version to yesterday's release. Next, the latest motioncor2 that you have needs CUDA 8.0, so I suggest checking that the CUDA_LIB / CUDA_BIN variables in $SCIPION/config/scipion.conf point to the right version.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...
|
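For reference, the relevant entries in scipion.conf would look roughly like this; the variable names come from the message above, but the install paths are illustrative and depend on where CUDA 8.0 lives on your system:

```
CUDA_LIB = /usr/local/cuda-8.0/lib64
CUDA_BIN = /usr/local/cuda-8.0/bin
```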
From: Dmitry S. <sem...@gm...> - 2017-06-16 11:31:56
|
Dear colleagues,

First of all I want to congratulate the Scipion team and all of the users on the new version of Scipion - we have all waited so long for this - thank you very much.

My question is about the motion corr protocol.

It doesn't run properly. What I tested:

CPU - in queue on the cluster, waiting for the startup
MPI - in queue on the cluster, waiting for the startup
Threads - failed with "ERROR: Movie failed"

[image: attached screenshot 1]

The parameters that I use are the following:

[image: attached screenshot 2]
[image: attached screenshot 3]

Am I doing something wrong, or is there a problem?

Sincerely,
Dmitry
|
From: Pablo C. <pc...@cn...> - 2017-06-15 08:48:19
|
We are very pleased to announce the release of a new version of Scipion <http://scipion.cnb.csic.es>. It's been over a year since the previous and first version, and we have been working on 3 main goals for this release:

* Consolidation: We put, and will always put, our best effort into making Scipion a robust and reliable software. We have improved performance and usability and fixed multiple bugs.

* EM packages integration: We have updated several EM packages to their latest versions (relion 2.0.4, ctffind 4.1.8) and added new ones (motioncor2, gctf, gautomatch, ...). The single movie alignment protocol (as in Scipion 1.0) has been split into several ones, one for each program.

* Streaming capabilities: To speed up the first preprocessing steps we have enabled Scipion to work in "streaming mode", allowing users to compute aligned movies and estimate CTFs as soon as a movie or micrograph comes out of the microscope PC.

See the full release notes here <https://github.com/I2PC/scipion/wiki/Release-Notes#v11-2017-06-14>.

Please go to http://scipion.cnb.csic.es/m/download_form to download the bundles and follow our installation guide <https://github.com/I2PC/scipion/wiki/How-to-Install-v1.1>.

Many thanks to everyone who has contributed to make this happen:

* biocomputing unit <http://biocomp.cnb.csic.es/staff> staff
* Scipion and Xmipp developers and collaborators
* Scipion beta testers

Happy em!
|
From: Roberto M. <ro...@cn...> - 2017-06-14 15:57:50
|
Furthermore, when you create a new project you may select the project folder (just fill in the "project location" entry).

cheers
Roberto
|
From: Pablo C. <pc...@cn...> - 2017-06-14 11:02:18
|
Hi David, thanks Grigory.

Scipion deals with 2 config files:

1. One common to all users, usually at <SCIPION_HOME>/config/scipion.conf. This file contains variables for the system, common to all users, like the path to Java or to MPI.

2. One per user, created by default at ~/.config/scipion/scipion.conf. This file contains user-specific configurations.

So, the way it is designed, each user will have all his/her projects created in the same folder, separately from the other users.

What is your case? Does it not fit in how Scipion works?

All the best, Pablo

Scipion team.
|
From: Grigory S. <sha...@gm...> - 2017-06-14 08:34:34
|
Dear David,

SCIPION_USER_DATA is defined on a per-user basis in ~/.config/scipion/scipion.conf. If you would like to have your project in a different location, you could in principle create a symlink in the ~/ScipionUserData/projects/ folder. Is this what you are looking for?

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.

MRC Laboratory of Molecular Biology,
Francis Crick Avenue,
Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...
|
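Grigory's symlink suggestion can be sketched as below. The paths are purely illustrative (placed under /tmp so the sketch is harmless to run); in practice the link target would be wherever your project data actually lives, and the link itself would sit inside ~/ScipionUserData/projects/:

```shell
# Illustrative paths only; adjust to your setup.
DATA_DIR=/tmp/demo_data/my_project          # where the project data really lives
PROJECTS_DIR=/tmp/demo_scipion/projects     # stand-in for ~/ScipionUserData/projects

mkdir -p "$DATA_DIR" "$PROJECTS_DIR"
# -s symbolic, -f replace an existing link, -n do not follow an existing dir link
ln -sfn "$DATA_DIR" "$PROJECTS_DIR/my_project"

ls -l "$PROJECTS_DIR"   # my_project -> /tmp/demo_data/my_project
```

Scipion then sees the project under its usual tree while the data stays on the larger disk.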
From: Hoover, D. <hoo...@he...> - 2017-06-13 23:23:45
|
Is there any way of setting the SCIPION_USER_DATA from either the command line or via the environment to override the value in the config files? It seems there is no way to run multiple scipion jobs simultaneously without clobbering the output. David Hoover HPC @ NIH |