From: D.A. S. <sem...@gm...> - 2017-11-22 08:44:23
Dear colleagues,

Is there an easy fix for an error that occurs when you switch between Scipion versions?

    time data '2017-11-03 23:17:36' does not match format '%Y-%m-%d %H:%M:%S.%f'

Thank you!

Sincerely,
Dmitry
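The error means the stored timestamp has no fractional seconds while the parsing code expects the '%f' part. A minimal sketch of a tolerant parse in plain Python (an illustration of the mismatch, not a Scipion patch):

    from datetime import datetime

    value = "2017-11-03 23:17:36"  # the timestamp from the error message

    # Try the fractional-second format first, then fall back to whole seconds.
    for fmt in ("%Y-%m-%d %H:%M:%S.%f", "%Y-%m-%d %H:%M:%S"):
        try:
            parsed = datetime.strptime(value, fmt)
            break
        except ValueError:
            continue
    else:
        raise ValueError("unrecognized timestamp: %r" % value)

    print(parsed)  # 2017-11-03 23:17:36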
From: John H. <joh...@co...> - 2017-10-28 17:16:49
Thanks, Jose Miguel!

Regards,
-jh-

--
John M. Heumann
Department of Molecular, Cellular, and Developmental Biology
347 UCB, University of Colorado
Boulder, CO 80309-0347
From: Jose M. de la R. T. <del...@gm...> - 2017-10-28 15:01:20
Hi John,

Thanks for your feedback. As you already guessed, the binaries are compiled against a given version of libmpi (if I remember correctly, we build on CentOS 6 or similar). So I think that if you build from source, it should link properly against the correct libmpi version.

Let us know if that works for you, or if you have any other idea.

Cheers,
Jose Miguel
From: John H. <joh...@co...> - 2017-10-28 06:26:48
Greetings!

I recently installed the v1.1_2017-06-14 binaries on a RHEL7 system. None of the additional packages are currently installed. All of the small tests pass except two, which seem to use EMAN components (not installed), so I'm ignoring those. The Xmipp workflow test failed at the ML2D step, however. The stdout log indicates a failure to load libmpi.so.1. It's not surprising that it can't find libmpi.so.1, since the current version is libmpi.so.12.0.6. As a temporary workaround I made a symbolic link libmpi.so.1 -> libmpi.so, which, in turn, points to libmpi.so.12.0.6. The test passes after this kludge.

Why is this test trying to load a specific, older version of the MPI library? Isn't the core functionality in the newer versions backwards compatible? Would I avoid this problem if I built from source myself? Or have I configured something incorrectly?

Thanks in advance!

Regards,
-jh-

--
John M. Heumann
Department of Molecular, Cellular, and Developmental Biology
347 UCB, University of Colorado
Boulder, CO 80309-0347
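For anyone hitting the same soname mismatch, a minimal sketch of the workaround John describes, assuming the MPI libraries live in /usr/lib64 (an illustrative path; a plain 'ln -s' works just as well):

    import os

    libdir = "/usr/lib64"                       # illustrative; locate your libmpi first
    target = os.path.join(libdir, "libmpi.so")  # resolves to the installed libmpi.so.12.0.6
    link = os.path.join(libdir, "libmpi.so.1")  # soname the prebuilt binaries expect

    if not os.path.exists(link):
        os.symlink(target, link)                # usually needs root privileges
        print("created", link, "->", target)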
From: Grigory S. <sha...@gm...> - 2017-10-19 16:27:19
Hi Dmitry,

please check that you have loaded the required CUDA module (7.5) and correctly set CUDA_BIN and CUDA_LIB in your $SCIPION/config/scipion.conf file.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.
MRC Laboratory of Molecular Biology,
Francis Crick Avenue, Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...

On Thu, Oct 19, 2017 at 4:36 PM, D.A. Semchonok <sem...@gm...> wrote:
> Dear Grigory,
>
> So, the issue with polishing again - please see the print-screen.
> (I was using threads = 20)
>
> Any solution/advice?
>
> Thank you!
>
> Sincerely,
> Dmitry
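A quick way to sanity-check those settings is to read them back and test that the paths exist. A Python 3 sketch (section names vary between Scipion versions, so it scans every section rather than assuming one; $SCIPION must be set in the environment):

    from configparser import ConfigParser
    import os

    conf = ConfigParser()
    conf.read(os.path.expandvars("$SCIPION/config/scipion.conf"))

    # Report the CUDA variables from whatever section they live in.
    for section in conf.sections():
        for key in ("CUDA_BIN", "CUDA_LIB"):
            if conf.has_option(section, key):
                path = conf.get(section, key, raw=True)
                print("[%s] %s = %s (exists: %s)"
                      % (section, key, path, os.path.exists(path)))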
From: abhisek M. <abh...@gm...> - 2017-10-16 15:43:56
Hello,

Is there any way to restrict how much memory will be used from the GPU? I mean, set a cut-off value for GPU memory.

--
Abhisek Mondal
Senior Research Fellow
Structural Biology and Bioinformatics Division
CSIR-Indian Institute of Chemical Biology
Kolkata 700032
INDIA
From: Jose M. de la R. T. <del...@gm...> - 2017-10-16 15:38:56
Hi Abhisek,

You can try to reduce the number of MPI processes or threads to avoid exhausting GPU memory.

Hope that helps,
Best,
Jose Miguel
From: abhisek M. <abh...@gm...> - 2017-10-16 15:36:35
Hi,

I was running a 3D classification in Relion-2.0 with GPU acceleration on, but after running for some time it ran out of memory. The error message was the following:

    00181: Expectation iteration 3 of 25
    00182: 0.50/36.78 min ~~(,_,">[localhost:05098] *** Process received signal ***
    00183: [localhost:05098] Signal: Segmentation fault (11)
    00184: [localhost:05098] Signal code: Address not mapped (1)
    00185: [localhost:05098] Failing at address: 0x28
    00186: [localhost:05098] [ 0] /lib64/libpthread.so.0(+0xf5e0) [0x7f2a4a5575e0]
    00187: [localhost:05098] [ 1] /home/user/scipion/software/em/relion-2.0.4/lib/librelion_gpu_util.so(_ZNSt6vectorI10cudaStagerImESaIS1_EED1Ev+0x3d) [0x7f2a4af5195d]
    00188: [localhost:05098] [ 2] /home/user/scipion/software/em/relion-2.0.4/lib/librelion_gpu_util.so(_ZN15MlOptimiserCuda32doThreadExpectationSomeParticlesEi+0x31a5) [0x7f2a4af4ca15]
    00189: [localhost:05098] [ 3] /home/user/scipion/software/em/relion-2.0.4/lib/librelion_lib.so(_Z36globalThreadExpectationSomeParticlesR14ThreadArgument+0x28) [0x7f2a556a5a88]
    00190: [localhost:05098] [ 4] /home/user/scipion/software/em/relion-2.0.4/lib/librelion_lib.so(_Z11_threadMainPv+0x1d) [0x7f2a556d8a6d]
    00191: [localhost:05098] [ 5] /lib64/libpthread.so.0(+0x7e25) [0x7f2a4a54fe25]
    00192: [localhost:05098] [ 6] /lib64/libc.so.6(clone+0x6d) [0x7f2a4a27d34d]
    00193: [localhost:05098] *** End of error message ***
    00194: ERROR: CudaCustomAllocator out of memory
    00195: [requestedSpace: 82618368 B]
    00196: [largestContinuousFreeSpace: 35837696 B]
    00197: [totalFreeSpace: 54575360 B]
    00198: [512B] (36864B) (165376B) (36864B) (512B) [5632B] (512B) (512B) [1536B] (2048B) [8192B] (36864B) (165376B) [2560B] (2048B) (2048B) [49152B] (165376B) (36864B) (165376B) [55296B] (36864B) (165376B) [540672B] (90112B) (91648B) (90112B) (91648B) (90112B) (91648B) (941056B) [994304B] (2642432B) [3305984B] (19759616B) (39519232B) (39519232B) (39519232B) (39519232B) (79037952B) [13773824B] (24662528B) (49325056B) (49325056B) (49325056B) (49325056B) (98649600B) (20654592B) (41309184B) (41309184B) (41309184B) (41309184B) [35837696B] = 823101184B
    00199: 1.07/40.23 min .~~(,_,">--------------------------------------------------------------------------
    00200: mpirun noticed that process rank 3 with PID 5098 on node localhost.localdomain exited on signal 11 (Segmentation fault).
    00201: --------------------------------------------------------------------------
    00202: Traceback (most recent call last):
    00203:   File "/home/user/scipion/pyworkflow/protocol/protocol.py", line 182, in run
    00204:     self._run()
    00205:   File "/home/user/scipion/pyworkflow/protocol/protocol.py", line 228, in _run
    00206:     resultFiles = self._runFunc()
    00207:   File "/home/user/scipion/pyworkflow/protocol/protocol.py", line 224, in _runFunc
    00208:     return self._func(*self._args)
    00209:   File "/home/user/scipion/pyworkflow/em/packages/relion/protocol_base.py", line 789, in runRelionStep
    00210:     self.runJob(self._getProgram(), params)
    00211:   File "/home/user/scipion/pyworkflow/protocol/protocol.py", line 1077, in runJob
    00212:     self._stepsExecutor.runJob(self._log, program, arguments, **kwargs)
    00213:   File "/home/user/scipion/pyworkflow/protocol/executor.py", line 56, in runJob
    00214:     env=env, cwd=cwd)
    00215:   File "/home/user/scipion/pyworkflow/utils/process.py", line 51, in runJob
    00216:     return runCommand(command, env, cwd)
    00217:   File "/home/user/scipion/pyworkflow/utils/process.py", line 65, in runCommand
    00218:     check_call(command, shell=True, stdout=sys.stdout, stderr=sys.stderr, env=env, cwd=cwd)
    00219:   File "/home/user/scipion/software/lib/python2.7/subprocess.py", line 540, in check_call
    00220:     raise CalledProcessError(retcode, cmd)
    00221: CalledProcessError: Command 'mpirun -np 6 -bynode `which relion_refine_mpi` --gpu --pool 3 --angpix 1.89 --dont_combine_weights_via_disc --ctf_phase_flipped --ref Runs/005404_ProtRelionClassify3D/tmp/threed_06.mrc --scale --offset_range 5.0 --ini_high 60.0 --offset_step 2.0 --healpix_order 2 --tau2_fudge 4 --ctf --oversampling 1 --o Runs/005404_ProtRelionClassify3D/extra/relion --i Runs/005404_ProtRelionClassify3D/input_particles.star --iter 25 --zero_mask --norm --firstiter_cc --sym c1 --K 5 --flatten_solvent --particle_diameter 242 --j 3' returned non-zero exit status 139

Is there any way to avoid this crash? Can I restrict GPU memory?

Thank you.

--
Abhisek Mondal
Senior Research Fellow
Structural Biology and Bioinformatics Division
CSIR-Indian Institute of Chemical Biology
Kolkata 700032
INDIA
From: Grigory S. <sha...@gm...> - 2017-10-11 15:11:40
Dear Dmitry,

I guess this is related to the upgrade to CentOS 7. Your issue is described here: https://www.centos.org/forums/viewtopic.php?t=48723

You should ask your sysadmin to create a symlink for the missing library (libptf77blas.so.3) pointing to libtatlas.so. Hopefully this will solve the problem.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.
MRC Laboratory of Molecular Biology,
Francis Crick Avenue, Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...

On Wed, Oct 11, 2017 at 3:48 PM, D.A. Semchonok <sem...@gm...> wrote:
> Dear Grigory,
>
> I would like to ask if you possibly know what may go wrong in the startup
> of the Scipion dev version, and how I can fix it.
>
> When I start up the software I get the following error:
>
> Thank you!
>
> Sincerely,
> Dmitry
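Once the sysadmin has created the link, a minimal check that the libraries resolve, runnable from an ordinary user account (library names taken from the thread; they must be on the default linker path):

    import ctypes

    # Try the library the prebuilt binary expects, then its CentOS 7 replacement.
    for name in ("libptf77blas.so.3", "libtatlas.so"):
        try:
            ctypes.CDLL(name)
            print("%-20s loads fine" % name)
        except OSError as exc:
            print("%-20s failed: %s" % (name, exc))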
From: Grigory S. <sha...@gm...> - 2017-10-06 09:49:52
Hello Teige,

just a wild guess: from what I can see, you are using the eman2 cmake instead of the system one. Have you tried removing any EMAN-related entries from your environment and recompiling?

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.
MRC Laboratory of Molecular Biology,
Francis Crick Avenue, Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...
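To see which toolchain a build will actually pick up, a Python 3 sketch (the EMAN substring check is only a heuristic for spotting the environment entries Grigory means):

    import os
    import shutil

    # Report the first cmake/gfortran/gcc found on PATH -- the ones a build will use.
    for tool in ("cmake", "gfortran", "gcc"):
        print("%-9s -> %s" % (tool, shutil.which(tool)))

    # Flag PATH entries that look like they come from an EMAN2 installation.
    suspects = [d for d in os.environ.get("PATH", "").split(os.pathsep)
                if "eman" in d.lower()]
    if suspects:
        print("EMAN-related PATH entries:", suspects)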
From: Matthews-Palmer, T. R. S. <t.m...@im...> - 2017-09-29 16:49:28
Dear Scipion users,

I wondered if anyone can suggest a solution to a problem I'm having compiling Scipion from a clone of the git repository on Linux Mint 17.3.

gcc & gfortran are both version 4.8.4; cmake is version 2.8.12.2.

'scipion install' proceeds until:

    cd software/tmp/lapack-3.5.0
    cmake -DBUILD_SHARED_LIBS:BOOL=ON -DLAPACKE:BOOL=ON -DCMAKE_INSTALL_PREFIX:PATH=/beebylab/software/scipion/1.1/software . > /beebylab/software/scipion/1.1/software/log/lapack_cmake.log 2>&1
    Error: target 'software/tmp/lapack-3.5.0/Makefile' not built

Looking at the CMake log, it says gcc is broken and unable to compile a test:

    cat /beebylab/software/scipion/1.1/software/log/lapack_cmake.log
    -- The Fortran compiler identification is GNU 4.8.4
    -- Check for working Fortran compiler: /usr/bin/gcc
    -- Check for working Fortran compiler: /usr/bin/gcc -- broken
    CMake Error at /beebylab/software/eman/EMAN2.2/share/cmake-3.8/Modules/CMakeTestFortranCompiler.cmake:44 (message):
      The Fortran compiler "/usr/bin/gcc" is not able to compile a simple test program.

Looking at the CMake error log to see the cause, apparently all the calls to gfortran procedures in the test Fortran script are 'undefined' (a problem with linking or paths, but I don't understand this well -- no experience with make or cmake). For example (a selection of lines, not in their original order, that illustrate the things not found):

    gcc: error trying to exec 'f951': execvp: No such file or directory
    CMakeFortranCompilerId.F:(.text+0x93f): undefined reference to `_gfortran_st_write'
    CMakeFortranCompilerId.F:(.text+0x958): undefined reference to `_gfortran_transfer_character_write'
    CMakeFortranCompilerId.F:(.text+0x967): undefined reference to `_gfortran_st_write_done'
    testFortranCompiler.f:(.text+0x3f): undefined reference to `_gfortran_st_write'
    testFortranCompiler.f:(.text+0x58): undefined reference to `_gfortran_transfer_character_write'
    testFortranCompiler.f:(.text+0x67): undefined reference to `_gfortran_st_write_done'

I'll attach the entire error log, but it's very long. Any suggestions are really appreciated!

All the best,
Teige
From: Jose M. de la R. T. <del...@gm...> - 2017-09-14 08:26:55
Dear Guixing,

Thanks for your feedback; it is important for us to know the users' needs better. We will try our best to have a Mac version but, as Pablo said, we still have many other things to deal with first.

Best,
Jose Miguel
From: Pablo C. <pc...@cn...> - 2017-09-14 08:20:15
Dear Guixing Ma,

We have made some attempts and were unsuccessful. It's on our "TO DO" list, but I must say it is not a priority... unless we see more and more requests like yours.

All the best,
Pablo
Scipion team
From: <ma...@ma...> - 2017-09-14 08:09:21
Dear sir/madam,

I strongly suggest you make a macOS version of Scipion. May I know whether you have such a plan?

Many thanks.

Yours,

Guixing Ma
Department of Biology, South University of Science and Technology of China
E-mail: ma...@ma...
Tel: 18575522114
Addr: 1088 Xueyuan Road, Nanshan District, Shenzhen, Guangdong, China
From: Grigory S. <sha...@gm...> - 2017-09-13 17:57:52
Hello Abhisek,

I am afraid importing particles from EMAN is not implemented yet. You can convert the *.lst file to a new HDF stack and import that using the "From files" option, but no metadata (alignments, CTF) will be carried over.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.
MRC Laboratory of Molecular Biology,
Francis Crick Avenue, Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...
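One way to do the conversion Grigory suggests is to drive EMAN2's own tools from Python; a sketch, assuming a working EMAN2 environment and hypothetical file names (check 'e2proc2d.py --help' for your version):

    import subprocess

    # e2proc2d.py is EMAN2's generic 2D image-conversion tool; it expands the
    # .lst virtual stack and writes the particles into a real HDF stack.
    subprocess.check_call(["e2proc2d.py", "sets/all_particles.lst", "all_particles.hdf"])

The resulting all_particles.hdf can then be imported in Scipion with the "From files" option, as described above.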
From: abhisek M. <abh...@gm...> - 2017-09-13 13:41:24
Hi,

I'm currently trying to import particles from EMAN2 into Scipion, but the "import particles" option does not simply allow importing from EMAN2.

Can you please help me out here? I tried and failed to import the *.lst file, the all-particle stack generated in EMAN2.

Thank you.

--
Abhisek Mondal
Senior Research Fellow
Structural Biology and Bioinformatics Division
CSIR-Indian Institute of Chemical Biology
Kolkata 700032
INDIA
From: Juha H. <ju...@st...> - 2017-09-13 09:21:50
Hi Gary,

Please contact me directly if you are still struggling with it, and I can share our host.conf file for Univa Grid Engine with you.

Best wishes,
Juha
From: Jose M. de la R. T. <del...@gm...> - 2017-09-13 08:23:37
Hi Gary,

I think I know what the problem is: you need to specify 'localhost' in the host.conf file. I know it can be a bit confusing, and we may improve this in the future; it should still be able to submit the jobs to your cluster. The _launchRemote function in the Python code is not fully working. It is meant for a future idea of launching jobs to more than one machine (clusters or not), but this is still not implemented.

Best,
Jose Miguel
From: Gary S. <gar...@vr...> - 2017-09-12 20:37:52
This is what a normal qsub does when you submit a script to UGE:

    $ qsub busy.sh
    Your job 6836 ("bz") has been submitted
    $
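For anyone debugging this kind of failure, a sketch of the sort of parsing a launcher must do on that output (purely illustrative; this is not Scipion's actual parser):

    import re

    out = 'Your job 6836 ("bz") has been submitted'  # sample UGE qsub output from above

    # Pull the first integer out of the submission message and treat it as the job id.
    match = re.search(r"\d+", out)
    if match is None:
        raise RuntimeError("couldn't parse job id from: %r" % out)
    print(int(match.group()))  # 6836

The traceback further down shows exactly this failing: when the submission goes through the wrong code path, the output contains only the Scipion banner, with no number to extract.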
From: Jose M. de la R. T. <del...@gm...> - 2017-09-12 20:34:54
Hi Gary,

This error occurs because Scipion is trying to parse the output of the submission command: it tries to read a number (the job id) from that output. Can you share the output of a successful submission command to the queue? This could give us a hint about the issue.

Best,
Jose Miguel
From: Gary S. <gar...@vr...> - 2017-09-12 18:53:49
I'm trying to get Scipion-1.1 running with our Univa Grid Engine setup (aka SGE). Scipion appears to be failing to use the launch protocol remotely, even though this works fine outside of Scipion. Has anyone seen errors like this?

    Error during EXECUTE: ** Couldn't parse ouput: [31m
    Scipion v1.1 (2017-06-14) Balbino
    [0m

    Traceback:
    Traceback (most recent call last):
      File "/cluster/app/scipion/1.1/pyworkflow/gui/form.py", line 1710, in _close
        message = self.callback(self.protocol, onlySave)
      File "/cluster/app/scipion/1.1/pyworkflow/gui/project/viewprotocols.py", line 1662, in _executeSaveProtocol
        self.project.launchProtocol(prot)
      File "/cluster/app/scipion/1.1/pyworkflow/project.py", line 417, in launchProtocol
        pwprot.launch(protocol, wait)
      File "/cluster/app/scipion/1.1/pyworkflow/protocol/launch.py", line 64, in launch
        jobId = _launchRemote(protocol, wait)
      File "/cluster/app/scipion/1.1/pyworkflow/protocol/launch.py", line 164, in _launchRemote
        raise Exception("** Couldn't parse ouput: %s" % redStr(out))
    Exception: ** Couldn't parse ouput: [31m
    Scipion v1.1 (2017-06-14) Balbino
    [0m
From: Grigory S. <sha...@gm...> - 2017-09-07 13:51:26
Hello Dmitry,

are you using the release-1.1.1 branch from git? If you do movie processing in Relion, the inputs are movie particles (extracted using a separate protocol) and a completed 3D auto-refine run with normal particles. Is this the case? This error can only happen if your normal particles from the Relion 3D auto-refine run do not carry any shift information.

Best regards,
Grigory

--------------------------------------------------------------------------------
Grigory Sharov, Ph.D.
MRC Laboratory of Molecular Biology,
Francis Crick Avenue, Cambridge Biomedical Campus,
Cambridge CB2 0QH, UK.
tel. +44 (0) 1223 267542
e-mail: gs...@mr...
From: D.A. S. <sem...@gm...> - 2017-09-07 13:08:53
Dear colleagues,

I have a new issue running the Relion 3D auto-refinement:

- the inputs are the movies;
- "Select previous run" is set to the result of a Relion 3D auto-refine (with stacked frames).

The error:

    Converting set from 'Runs/032578_XmippProtExtractMovieParticles/movie_particles.sqlite' into 'Runs/032753_ProtRelionRefine3D/movie_particles.star'
    00081: Traceback (most recent call last):
    00082:   File "/home/p260888/scipion/pyworkflow/protocol/protocol.py", line 182, in run
    00083:     self._run()
    00084:   File "/home/p260888/scipion/pyworkflow/protocol/protocol.py", line 228, in _run
    00085:     resultFiles = self._runFunc()
    00086:   File "/home/p260888/scipion/pyworkflow/protocol/protocol.py", line 224, in _runFunc
    00087:     return self._func(*self._args)
    00088:   File "/home/p260888/scipion/pyworkflow/em/packages/relion/protocol_base.py", line 805, in convertInputStep
    00089:     mdAux.copyColumn(md.RLN_ORIENT_ORIGIN_X_PRIOR, md.RLN_ORIENT_ORIGIN_X)
    00090: XmippError: Source label: 'rlnOriginX' doesn't exist on metadata
    00091: Protocol failed: Source label: 'rlnOriginX' doesn't exist on metadata
    00092: FAILED: convertInputStep, step 1
    00093: 2017-09-07 14:57:39.324748
    00094: ------------------- PROTOCOL FAILED (DONE 1/3)

Suggestions?

Thank you!

Sincerely,
Dmitry
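To check whether the auto-refine output actually carries the shift columns before launching the movie run, a minimal sketch (the file name is hypothetical; point it at the _data.star of your refine run):

    star_file = "relion_data.star"  # hypothetical path; use your refine run's _data.star

    # Collect the column labels declared in the STAR loop header(s).
    labels = set()
    with open(star_file) as fh:
        for line in fh:
            token = line.strip().split()
            if token and token[0].startswith("_rln"):
                labels.add(token[0].lstrip("_"))

    for needed in ("rlnOriginX", "rlnOriginY"):
        print(needed, "present" if needed in labels else "MISSING")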
From: D.A. S. <sem...@gm...> - 2017-09-01 12:21:51
Dear colleagues,

When I execute the xmipp3 protocol "extract movie particles", should I enable the option "Apply movie alignment to extract"? I want to do particle polishing afterwards.

Sincerely,
Dmitry
From: Jose M. de la R. T. <del...@gm...> - 2017-08-28 07:50:23
Hi,

I think that you have downloaded the binary installation of Scipion, and then you are forgetting to add the '--no-xmipp' flag when running './scipion install EM-package-X'. Please download again and use the '--no-xmipp' flag to see if it works that way.

Best,
Jose Miguel

On Mon, Aug 28, 2017 at 9:33 AM, abhisek Mondal <abh...@gm...> wrote:
> Hi,
> I was recently trying to install the Scipion package but am hitting an error
> every time. The "./scipion install" command yields the following error:
>
>     scons: *** [software/em/xmipp/external/imagej/ij.jar] Source
>     `software/em/xmipp/external/imagej.tgz' not found, needed by target
>     `software/em/xmipp/external/imagej/ij.jar'.
>
> It seems ImageJ is not available. Is there any way to overcome this issue?
> Any advice regarding this will be highly appreciated.
>
> Thank you.
>
> --
> Abhisek Mondal
> Senior Research Fellow
> Structural Biology and Bioinformatics Division
> CSIR-Indian Institute of Chemical Biology
> Kolkata 700032
> INDIA