From: Thomas H. <th...@gm...> - 2016-03-24 19:28:26
Hi Kanika, sorry for the delay, but I was traveling. I think Friedrich and Dustin may be right: since your localization does not write out the score and angle files, it may be that your MRC files are not understood for some reason. If you don't mind, you could send me a sample and I will have a look at your file type and whether there is a technical problem. Do you remember what you changed since the program last ran? Does pytom still process the tutorial files correctly?

Best,
Thomas

> On Mar 17, 2016, at 8:56 AM, Kanika Khanna <kk...@uc...> wrote:
>
> Hi,
>
> @Thomas, no scores and angles were generated because the process did not run. When I run the commands in ipytom, I get the following output:
>
> <pytom_volume.vol_comp; proxy of <Swig Object of type 'swigTom::swigVolume< std::complex< float >,float > *' at 0x1a4aed0> >
>
> @Dustin and Friedrich, my system has 32.64 GB of memory, which I think is pretty good (is it?). My tomogram is binned 2 times. I actually ran the mpi command successfully some time back, and now it no longer works with the same job.
>
> I converted the mrc to em format (using mrc2em.py). The problem persists. On that note, are all the volume, template, and reference files supposed to be in the same format (all mrc or all em)? I tried playing around with that and used the smallest number of rotations (angle file), but still nothing.
>
> Can you please explain how exactly I would read the files interactively in pytom?
>
> Thanks!
>
> On Wed, Mar 16, 2016 at 8:40 PM, Dustin Morado <dus...@gm...> wrote:
> Anything with "alloc" in the error is usually a memory error. Do you have enough RAM to fit everything? If you use binning, does it work?
>
> --
> Cheers,
> Dustin
>
> > On Mar 16, 2016, at 7:47 PM, Kanika Khanna <kk...@uc...> wrote:
> >
> > Hello
> >
> > I have been trying endlessly to run the localization job with parallel processing, but it always gives an error.
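[Editor's note on the MRC-format question above: the first 16 bytes of the fixed 1024-byte MRC header hold nx, ny, nz and the data mode, so a quick sanity check of a suspect file needs nothing beyond the Python standard library. The helper below is a hypothetical illustration, not part of pytom, and it assumes a little-endian file.]

```python
import os
import struct
import tempfile

def mrc_header_info(path):
    # Read nx, ny, nz and the data mode from the fixed 1024-byte MRC
    # header. Mode 2 = 32-bit float, the usual case for tomograms.
    with open(path, "rb") as f:
        header = f.read(1024)
    if len(header) < 1024:
        raise IOError("%s: too short to carry a 1024-byte MRC header" % path)
    nx, ny, nz, mode = struct.unpack("<4i", header[:16])
    return nx, ny, nz, mode

# Demonstrate on a tiny synthetic file: a 4x4x4 float32 volume.
tmp = tempfile.NamedTemporaryFile(suffix=".mrc", delete=False)
tmp.write(struct.pack("<4i", 4, 4, 4, 2) + b"\x00" * (1024 - 16))
tmp.write(struct.pack("<64f", *([0.0] * 64)))
tmp.close()

info = mrc_header_info(tmp.name)
print(info)  # (4, 4, 4, 2)
os.unlink(tmp.name)
```

If the dimensions or the mode come back nonsensical (negative sizes, a mode outside 0-6), the file was probably written with an unexpected byte order or a damaged header, which would explain a reader silently failing on it.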
> >
> > Traceback (most recent call last):
> >   File "/usr/local/pytom/bin/localization.py", line 82, in <module>
> >     startLocalizationJob(jobName, splitX, splitY, splitZ, doSplitAngles=False)
> >   File "/usr/local/pytom/bin/localization.py", line 21, in startLocalizationJob
> >     leader.parallelRun(job, splitX, splitY, splitZ, verbose)
> >   File "/usr/local/pytom/localization/parallel_extract_peaks.py", line 1046, in parallelRun
> >     result = self.run(verbose)
> >   File "/usr/local/pytom/localization/parallel_extract_peaks.py", line 104, in run
> >     [resV, orientV, sumV, sqrV] = extractPeaks(v, ref, rot, scoreFnc, m, mIsSphere, wedg, nodeName=self.name, verboseMode=verbose, moreInfo=moreInfo)
> >   File "/usr/local/pytom/localization/extractPeaks.py", line 130, in extractPeaks
> >     meanV = meanUnderMask(volume, maskV, p);
> >   File "/usr/local/pytom/basic/correlation.py", line 244, in meanUnderMask
> >     result = iftshift(ifft(fMask*fft(volume)))/(size*p)
> >   File "/usr/local/pytom/basic/fourier.py", line 122, in fft
> >     returnValue = pytom_volume.vol_comp(theTuple.complexVolume.sizeX(),theTuple.complexVolume.sizeY(),theTuple.complexVolume.sizeZ())
> >   File "/usr/local/pytom/pytomc/swigModules/pytom_volume.py", line 234, in __init__
> >     this = _pytom_volume.new_vol_comp(*args)
> > RuntimeError: std::bad_alloc
> > --------------------------------------------------------------------------
> > mpirun detected that one or more processes exited with non-zero status, thus causing
> > the job to be terminated. The first process to do so was:
> >
> >   Process name: [[6894,1],0]
> >   Exit code: 1
> >
> >
> > On Fri, Mar 11, 2016 at 12:00 PM, Kanika Khanna <kk...@uc...> wrote:
> > Hi
> >
> > I am sorry, but the job did start running; it then gave this weird error and aborted again! :(
> >
> > Primary job terminated normally, but 1 process returned
> > a non-zero exit code. Per user-direction, the job has been aborted.
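[Editor's note on the std::bad_alloc above: the failing call sits inside fft/meanUnderMask, which allocates a full complex<float> copy of the subvolume, so peak memory is several full-size buffers per MPI process. A rough back-of-the-envelope sketch follows; copies=4 is an assumption about how many full-size buffers are alive at once, not a number taken from the pytom source.]

```python
def localization_memory_gib(nx, ny, nz, copies=4, processes=4):
    # Each complex<float> buffer costs 8 bytes per voxel; several such
    # buffers (the volume, its FFT, the mask FFT, intermediate products)
    # are assumed to be alive at once in every MPI process.
    bytes_per_buffer = nx * ny * nz * 8
    return bytes_per_buffer * copies * processes / float(2 ** 30)

# A 512^3 subvolume (e.g. a 1024^3 binned tomogram split 2x2x2) across
# 4 processes already needs on the order of 16 GiB under this assumption:
print(localization_memory_gib(512, 512, 512))  # 16.0
```

Under this estimate, a job that fits in 32 GB can stop fitting after small changes to the volume size, the split, or the number of ranks, which would match "it worked some time back and no longer does".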
> >
> > node_0: send number of 50 rotations to node 2
> > node_0: send number of 25 rotations to node 1
> > node_2: send number of 25 rotations to node 3
> > node_0: starting to calculate 25 rotations
> > node_3: starting to calculate 25 rotations
> > node_1: starting to calculate 25 rotations
> > node_2: starting to calculate 25 rotations
> > Traceback (most recent call last):
> >   File "/usr/local/pytom/bin/localization.py", line 82, in <module>
> >     startLocalizationJob(jobName, splitX, splitY, splitZ, doSplitAngles=False)
> >   File "/usr/local/pytom/bin/localization.py", line 21, in startLocalizationJob
> >     leader.parallelRun(job, splitX, splitY, splitZ, verbose)
> >   File "/usr/local/pytom/localization/parallel_extract_peaks.py", line 1046, in parallelRun
> >     result = self.run(verbose)
> >   File "/usr/local/pytom/localization/parallel_extract_peaks.py", line 104, in run
> >     [resV, orientV, sumV, sqrV] = extractPeaks(v, ref, rot, scoreFnc, m, mIsSphere, wedg, nodeName=self.name, verboseMode=verbose, moreInfo=moreInfo)
> >   File "/usr/local/pytom/localization/extractPeaks.py", line 130, in extractPeaks
> >     meanV = meanUnderMask(volume, maskV, p);
> >   File "/usr/local/pytom/basic/correlation.py", line 244, in meanUnderMask
> >     result = iftshift(ifft(fMask*fft(volume)))/(size*p)
> >   File "/usr/local/pytom/pytomc/swigModules/pytom_volume.py", line 342, in __mul__
> >     return _pytom_volume.vol_comp___mul__(self, *args)
> > RuntimeError: std::bad_alloc
> > -------------------------------------------------------
> > Primary job terminated normally, but 1 process returned
> > a non-zero exit code. Per user-direction, the job has been aborted.
> >
> >
> > On Fri, Mar 11, 2016 at 11:56 AM, Kanika Khanna <kk...@uc...> wrote:
> > Hi Thomas,
> >
> > I have been able to resolve issue 1. I just killed all the processes and started again, and it worked that time.
> >
> > On Sat, Mar 12, 2016 at 1:05 AM, Thomas Hrabe <th...@gm...> wrote:
> > Hi Kanika,
> >
> > 1.
> > It looks to me like you are trying to open a file called
> >
> > <Volume Binning="[0, 0, 0]" Filename="PY79.mrc" Sampling="[0, 0, 0]" Subregion="[0, 0, 0, 0, 0, 0]"/>
> >
> > Can you please attach the xml file?
> >
> > 2.
> > I have to look into the script as well. I must re-install it before I can advise on how to use it.
> >
> > Cheers,
> > Thomas
> >
> > > On Mar 11, 2016, at 11:15 AM, Kanika Khanna <kk...@uc...> wrote:
> > >
> > > Hello all,
> > >
> > > 1. When I try to execute my job using the following command
> > > mpirun --host HostName -c 4 pytom localization.py PathToJobFile 2 2 2
> > >
> > > I get the following error (each of the four MPI ranks prints the same traceback, so their output is interleaved):
> > > Traceback (most recent call last):
> > >   File "/usr/local/pytom/bin/localization.py", line 82, in <module>
> > >     startLocalizationJob(jobName, splitX, splitY, splitZ, doSplitAngles=False)
> > >   File "/usr/local/pytom/bin/localization.py", line 11, in startLocalizationJob
> > >     job.check()
> > >   File "/usr/local/pytom/localization/peak_job.py", line 184, in check
> > >     raise IOError('File: ' + str(self.volume) + ' not found!')
> > > IOError: File: <Volume Binning="[0, 0, 0]" Filename="PY79.mrc" Sampling="[0, 0, 0]" Subregion="[0, 0, 0, 0, 0, 0]"/> not found!
> > > (the same traceback and IOError are printed by the remaining ranks)
> > > -------------------------------------------------------
> > > Primary job terminated normally, but 1 process returned
> > > a non-zero exit code. Per user-direction, the job has been aborted.
> > > --------------------------------------------------------------------------
> > > mpirun detected that one or more processes exited with non-zero status, thus causing
> > > the job to be terminated. The first process to do so was:
> > >
> > >   Process name: [[27852,1],3]
> > >   Exit code: 1
> > >
> > >
> > > Any idea how I can circumvent this? Or what exactly might be going wrong?
> > >
> > > 2. Once the pl.xml file is generated after particle extraction, we have the script VolumeDialog.py for viewing it in Chimera. I have already installed it in the Volume Viewer folder.
> > > How exactly are we supposed to use it? Also, there were already a couple of existing files of the same name in that folder (is that the case for everyone here?). Is there anything to do with them?
> > >
> > > Thanks!
> > >
> > > ------------------------------------------------------------------------------
> > > Transform Data into Opportunity.
> > > Accelerate data analysis in your applications with
> > > Intel Data Analytics Acceleration Library.
> > > Click to learn more.
> > > http://pubads.g.doubleclick.net/gampad/clk?id=278785111&iu=/4140
> > > _______________________________________________
> > > Pytom-mail mailing list
> > > Pyt...@li...
> > > https://lists.sourceforge.net/lists/listinfo/pytom-mail
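[Editor's footnote on the IOError earlier in the thread ('File: <Volume ... Filename="PY79.mrc" .../> not found!'): the job XML stores the tomogram by a relative filename, so job.check() can only find it if the working directory mpirun starts each rank in actually contains that file. A pre-flight check of this kind can rule that out before launching; preflight is a hypothetical helper, not part of pytom, and the file list is only an example.]

```python
import os

def preflight(filenames):
    # Return the files that are NOT visible from the current working
    # directory -- the same view a relative path in the job XML gets.
    return [f for f in filenames if not os.path.isfile(f)]

missing = preflight(["PY79.mrc"])
if missing:
    print("not found relative to %s: %s" % (os.getcwd(), missing))
```

Running this from the directory you launch mpirun in (on every host named with --host) distinguishes a genuinely missing or mis-pathed file from a file pytom can see but cannot parse.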