From: Christoph H. <chr...@gm...> - 2012-07-13 20:54:20
Thanks! Had not seen this one.. I have resumed it now on a 30g node. Do you think that will be sufficient? Sorry to always ask these questions about memory. I understand that it depends on the data/assembly, but I do not have any idea, so maybe you can give me a hint from your experience. Along the same lines: is there any educated guess of how long buildUnitigs may run? Sorry again for these kinds of questions.. They are always an issue for me because I have to specify job runtime and memory usage in advance. And according to memory usage I am charged CPU hours, of which I only have a limited amount.. very annoying.

The unitigger.err file says the following now:

Bubble popping = on
Intersection breaking = on
Bad mate threshold = -7
Error threshold = 0.030 (3.000%)
Error limit = 2.500 errors
sizeof(ufPath) = 24
FragmentInfo()-- Loading fragment information for 224632508 fragments and 3 libraries from cache '/projects/nn9201k/Celera/work2/salaris1/4-unitigger/salaris.fragmentInfo'
setLogFile()-- Now logging to '/projects/nn9201k/Celera/work2/salaris1/4-unitigger/salaris.001.bestoverlapgraph-containments.log'

Once again, I understand that it is impossible to estimate these parameters accurately. Just a rough guess would help already. Thanks!!

cheers,
Christoph

On 07/13/2012 09:55 PM, Walenz, Brian wrote:
> If the third step finished with no errors, then you can delete all the buckets, and all the ####.idx and ####.ovs files. Those would have been automagically removed had you enabled either of the delete options. If you have the 'ovs' file, it also likely finished successfully.
>
> Build unitigs seems to have made it up to loading overlaps. It still has to allocate space for unitigs. It will grow, but I don't know by how much. A safe guess is by more than 1gb -- so yes, move it somewhere with more memory. I'm not sure how hard it will be to get a good guess on total memory usage so we could abort the run early. It's on the todo list now.
>
> b
>
> On 7/13/12 2:35 PM, "Christoph Hahn" <chr...@gm...> wrote:
>
>> Hi Brian,
>>
>> No, I did not enable any of these delete functions, so I will delete the bucket directories manually now. I do have ####.idx and ####.ovs files (for the first 100 of 418 -- 2-sort.sh ran 100 jobs). Is that a problem? Yes, I think the bucket???? directories make up most of the difference in disk space.
>>
>> Concerning buildUnitigs, I was just wondering because it is now running at a constant 15g on a 16g machine. It has been running for almost 2 hours now and just created the following files at the beginning. They are unchanged so far.
>>
>> -rw-r--r-- 1 chrishah users 2.6G Jul 13 18:48 salaris.fragmentInfo
>> -rw-r--r-- 1 chrishah users    0 Jul 13 18:48 salaris.001.bestoverlapgraph-containments.log
>> -rw-r--r-- 1 chrishah users 2.4K Jul 13 18:48 unitigger.err
>>
>> Is any increase in memory usage to be expected? If yes, I would be inclined to stop it now and start it over again on a bigger machine right away.
>>
>> Thanks for your help! I appreciate it!
>>
>> cheers,
>> Christoph
>>
>> On 13.07.2012 20:20, Walenz, Brian wrote:
>>> Hi, Christoph-
>>>
>>> Good to hear! You're the third person (I know of) to run the parallel version. Instead of fixing the older store build, I'd rather spend time to integrate the new one with runCA, either as a set of jobs for SGE, or a series of sequential jobs. It's just scripting, but there might be some performance issues to optimize.
>>>
>>> If the store is complete, the bucket directories can be deleted. The third step should have done this for you -- maybe not, if you didn't enable deleteearly or deletelate. The store is complete if you have just the #### files, an 'idx' and an 'ovs' file. You should not have any ####.idx or ####.ovs files. Is the extra space in the bucket???? directories? The difference (546 - 320 = 226) seems to be a reasonable size for the buckets.
>>>
>>> Memory for buildUnitigs (aka bog) cannot be specified. There isn't any data we can keep on disk, or not load, or compute differently in a smaller memory size. Memory is used to store fragment meta data (clear lengths, mate pairs), best overlaps, and constructed unitigs. The first two are of a known size. The number of unitigs depends on the assembly. We've seen an assembly that exhausted memory in bog, caused by junk fragments creating an enormous number of single-fragment unitigs.
>>>
>>> b
>>>
>>> On 7/13/12 1:53 PM, "Christoph Hahn" <chr...@gm...> wrote:
>>>
>>>> Hi Brian,
>>>>
>>>> It's done! I have by now also updated the overlapStore with the frg- and ovlcorr, and I am in the process of building unitigs now.
>>>>
>>>> I like this parallel version for building the ovlStore. You were right, the last jobs needed double the memory. When distributing the jobs to several CPUs it is very time efficient and also used fewer overall CPU hours in comparison to the regular overlapStore command. One thing, though, is that I think it needs substantially more disk space. I am not 100% sure (because it's gone now..), but I believe the *.ovlStore built by the regular command used some 320G of disk space, while the one I have now is using 546G. Are all the bucket???? directories in *.ovlStore still needed?
>>>>
>>>> Overall I think I learned a lot about CA by running the latest steps again with the parallel version of the ovlStore build and your help. Are there plans to include a failsafe for the overlapStore update function, so that it can be resumed in case it stops for whatever reason before the process is finished?
>>>>
>>>> One more thing: is there a way to specify the memory buildUnitigs is using?
>>>>
>>>> Thanks again for your help!!
>>>>
>>>> cheers,
>>>> Christoph
>>>>
>>>> On 12.07.2012 18:52, Walenz, Brian wrote:
>>>>> You've captured the process nicely.
>>>>>
>>>>> After #1 finishes, check that you have one 'sliceSizes' file per bucket directory. If any are missing, run that bucket again. I think (hope) that #2 will complain if any are missing, but this has been a problem in the past.
>>>>>
>>>>> Hopefully memory won't be an issue during sorting. I estimate memory size as 3 * (sizeof gz files) / #jobs; see the sketch below. But if you have Illumina + long reads (454+, Sanger), the balancing is screwed up and the early jobs (overlaps of Illumina to Illumina) have fewer overlaps than the later jobs (Illumina to long reads). Every time I've run this, I could do 90-95% of the sort jobs on our grid, but had to use large-memory machines for the rest.
>>>>>
>>>>> More jobs creates more files, but I don't think it is necessarily slower. I haven't benchmarked it, though.
>>>>>
>>>>> No jobID for #3; it is tiny, does little compute, and not too much I/O. I usually run this interactively off grid.
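>>>>>
>>>>> As a rough back-of-the-envelope check, something like this (a sketch only; it assumes the overlapper outputs sit under 1-overlapper/*/ as *.ovb.gz files and that 100 sort jobs were requested, and uses GNU find as on Linux):
>>>>>
>>>>> # sum the compressed overlap sizes, then report 3 * size / #jobs in GB
>>>>> find 1-overlapper -name '*.ovb.gz' -printf '%s\n' | \
>>>>>   awk '{ s += $1 } END { printf "~%.1f GB per sort job\n", 3 * s / 100 / 1024 / 1024 / 1024 }'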
>>>>>
>>>>> b
>>>>>
>>>>> ________________________________________
>>>>> From: Christoph Hahn [chr...@gm...]
>>>>> Sent: Thursday, July 12, 2012 9:31 AM
>>>>> To: Walenz, Brian
>>>>> Cc: wgs...@li...
>>>>> Subject: Re: [wgs-assembler-users] runCA stopped while updating overlapStore - how to resume???
>>>>>
>>>>> Hi Brian,
>>>>>
>>>>> I ran the runCA-overlapStoreBuild.pl script now. It created the three scripts:
>>>>> 1-bucketize.sh
>>>>> 2-sort.sh
>>>>> 3-index.sh
>>>>>
>>>>> Right now I am running 1-bucketize.sh for every job index from 1 to 2135. I have distributed the jobs across several CPUs and that works nicely.
>>>>>
>>>>> When this is finished I need to run 2-sort.sh. I specified -jobs 100 in runCA-overlapStoreBuild.pl, so as far as I understand it should have created 100 jobs, right? So I run 2-sort.sh for jobIDs 1 to 100, then? The jobID in this case is actually the slice number, right? So for, e.g., 2-sort.sh 2, it will look through all the bucket directories, pull out the slice002.gz files, read them into memory, and write the overlaps into the store.
>>>>>
>>>>> When this is done I just need to run 3-index.sh once. No jobIDs required, right?
>>>>>
>>>>> Am I missing anything?
>>>>>
>>>>> cheers,
>>>>> Christoph
>>>>>
>>>>> On 07/11/2012 05:54 AM, Walenz, Brian wrote:
>>>>>> The first step will create 1 job for each overlapper job. These should be small memory, but there is some internal buffering done and I usually request 2gb for them anyway.
>>>>>>
>>>>>> The second step will create '-jobs j' jobs. Memory size here is a giant unknown. The '-memory m' option will cause the job to not run if it needs more than that much memory. Currently, you'll have to increase -memory for these jobs and find a bigger machine.
>>>>>>
>>>>>> All jobs in both steps are single-threaded and run independently of each other.
>>>>>>
>>>>>> b
>>>>>>
>>>>>> On 7/10/12 6:46 PM, "Christoph Hahn" <chr...@gm...> wrote:
>>>>>>
>>>>>>> Hi Brian,
>>>>>>>
>>>>>>> Thanks! Overlaps are being computed now, and the CVS version of CA has been successfully compiled. I will try runCA-overlapStoreBuild.pl once the overlapper is finished. One question there: I understand that the memory usage is regulated by the -jobs j parameter; a higher value for j means less memory for every job. How can I specify the number of CPUs to be used in the parallel steps?
>>>>>>>
>>>>>>> Thanks for your help! I appreciate it!
>>>>>>>
>>>>>>> cheers,
>>>>>>> Christoph
>>>>>>>
>>>>>>> On 07/10/2012 10:18 PM, Walenz, Brian wrote:
>>>>>>>> Quick guess is that runCA is finding the old ovlStore and assuming it is complete, then continuing on to frgcorr. runCA tests for the existence of name.ovlStore to determine if overlaps are finished; it doesn't check that the store is valid. So, delete *ovlStore* too.
>>>>>>>>
>>>>>>>> Your latest build (from scratch) is suffering from a long-standing dependency issue. It needs kmer checked out and 'make install'ed.
>>>>>>>>
>>>>>>>> make[1]: *** No rule to make target `sweatShop.H', needed by `classifyMates.o'. Stop.
>>>>>>>> make[1]: *** Waiting for unfinished jobs....
>>>>>>>> make: *** [objs] Error 1
>>>>>>>>
>>>>>>>> Once kmer is installed, wipe (again) the Linux-amd64 directory and rebuild.
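>>>>>>>> Roughly, something like this (a sketch; it assumes kmer and the CA source were checked out side by side and that your kmer checkout builds with a plain 'make install' -- adjust paths to your layout):
>>>>>>>>
>>>>>>>> cd kmer && make install && cd ..     # provides sweatShop.H and friends
>>>>>>>> rm -rf wgs/Linux-amd64               # wipe the stale build tree
>>>>>>>> cd wgs/src && make > makelog 2>&1    # rebuild; check makelog on failure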
>>>>>>>>
>>>>>>>> The kmer included in CA7 is too old for the CVS version of CA, so you'll need to grab it from subversion.
>>>>>>>>
>>>>>>>> http://sourceforge.net/apps/mediawiki/wgs-assembler/index.php?title=Check_out_and_Compile
>>>>>>>>
>>>>>>>> b
>>>>>>>>
>>>>>>>> On 7/10/12 4:00 PM, "Christoph Hahn" <chr...@gm...> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I actually tried to just rerun the overlapper. I moved the 1-overlapper and the 3-overlapcorrection directories away and just ran runCA, and it immediately starts with doing frgcorr. Do you mean recompute from the very start? Is there a way to avoid recomputing at least the initial overlaps (they took some 10000 CPU hours)??
>>>>>>>>>
>>>>>>>>> I tried to compile it again -- not successful. I ran make in the src directory (output in makelog) and also in the AS_RUN directory (output in AS_RUN-makelog).
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Christoph
>>>>>>>>>
>>>>>>>>> On 07/10/2012 09:04 PM, Walenz, Brian wrote:
>>>>>>>>>> Odd, the *gz should only be deleted after the store is successfully built. runCA might have been confused by the attempt to rerun. The easiest will be to recompute. :-(
>>>>>>>>>>
>>>>>>>>>> I've never seen the 'libCA.a' error before. That particular program is the first to get built. Looks like libCA.a wasn't created. My fix for most strange compile errors is to remove the entire Linux-amd64 directory and recompile. If that fails, send along the complete output of make and I'll take a look.
>>>>>>>>>>
>>>>>>>>>> b
>>>>>>>>>>
>>>>>>>>>> On 7/10/12 2:15 PM, "Christoph Hahn" <chr...@gm...> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Brian,
>>>>>>>>>>>
>>>>>>>>>>> Thanks for your reply!
>>>>>>>>>>>
>>>>>>>>>>> I would be happy to try the new parallel overlap store build, but I think I need the *.ovb.gz outputs for that, and unfortunately I don't have them any more. It looks like they were deleted after the ovlStore was built. So I guess I'll need to run the overlapper again first. Am I understanding that correctly?
>>>>>>>>>>>
>>>>>>>>>>> I have downloaded the CVS version and tried to make, but I get:
>>>>>>>>>>> *** No rule to make target `libCA.a', needed by `fragmentDepth'. Stop.
>>>>>>>>>>>
>>>>>>>>>>> I really appreciate your help!
>>>>>>>>>>>
>>>>>>>>>>> cheers,
>>>>>>>>>>> Christoph
>>>>>>>>>>>
>>>>>>>>>>> On 07/10/2012 05:09 PM, Walenz, Brian wrote:
>>>>>>>>>>>> Hi, Christoph-
>>>>>>>>>>>>
>>>>>>>>>>>> The original overlap store build is difficult to resume. I think it can be done, but it will take code changes that are probably specific to the case you have. I will suggest this only if you do not have the *ovb.gz outputs from the overlapper.
>>>>>>>>>>>>
>>>>>>>>>>>> Option 1 is then to restart.
>>>>>>>>>>>>
>>>>>>>>>>>> Option 2 is to use a new 'data-parallel' overlap store build (AS_RUN/runCA-overlapStoreBuild.pl). It runs as a series of three grid jobs. The first job is parallel, and transfers the overlapper output into buckets for sorting. The second job, also parallel, sorts each bucket. The final job, sequential, builds an index for the store. Since this compute is just a collection of jobs, it can be restarted/resumed/fixed easily.
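>>>>>>>>>>>>
>>>>>>>>>>>> In the simplest case the three generated scripts can be driven with loops like these (a serial sketch using the job counts from this thread, and assuming each script takes its job index as its only argument; in practice you would spread the two loops across the grid):
>>>>>>>>>>>>
>>>>>>>>>>>> for i in $(seq 1 2135); do sh 1-bucketize.sh $i; done   # one job per overlapper output
>>>>>>>>>>>> for j in $(seq 1 100);  do sh 2-sort.sh $j;     done    # one job per slice
>>>>>>>>>>>> sh 3-index.sh                                           # sequential, run once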
>>>>>>>>>>>>
>>>>>>>>>>>> Its performance can be great -- at JCVI we've seen builds that we estimated would take 2 days using the original sequential build finish in a few (4?) hours with the data-parallel version. But on our development cluster it is slower than the sequential version. It depends on the disk throughput. Our dev cluster is powered off of a 6-disk ZFS, while the production side has a big Isilon.
>>>>>>>>>>>>
>>>>>>>>>>>> It is only in CVS. I just added command line help and a bit of documentation, so do an update first.
>>>>>>>>>>>>
>>>>>>>>>>>> Happy to provide help if you want to try it out. More than happy to accept better documentation.
>>>>>>>>>>>>
>>>>>>>>>>>> b
>>>>>>>>>>>>
>>>>>>>>>>>> On 7/10/12 6:47 AM, "Christoph Hahn" <chr...@gm...> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hei Ole,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for your reply. I had looked at the preprocessing page you are referring to just recently. Sounds like a good approach you are using! I will definitely consider it to make the assembly more effective in a next try. Thanks for that!
>>>>>>>>>>>>> For now, I think I am pretty much past all the trimming and correction steps (once I get this last thing sorted out..). As far as I can see, the next step is already building the unitigs, so I'll try to finish this assembly as it is now and improve it afterwards. I am really curious how a first attempt at a hybrid approach (454+Illumina) will perform in comparison to the pure Illumina assemblies, which I have pretty much optimized by now (and with which I am pretty happy, btw).
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am afraid your suggestion to do doFragmentCorrection=0 directly now will not work. For the next step (the unitigger) I'll need an intact overlap store. As it is now, I think it is useless, being only half-updated.. I also discovered that just rerunning the previous overlapStore command (the one before the frg- and ovlcorrection) does not work as I thought it would.
>>>>>>>>>>>>> It seems to be a very unfortunate situation -- I really don't know how to proceed.. It would be fantastic if anyone could give me a tip on what to do!!
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for your help!
>>>>>>>>>>>>>
>>>>>>>>>>>>> much obliged,
>>>>>>>>>>>>> Christoph
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 09.07.2012 13:20, Ole Kristian Tørresen wrote:
>>>>>>>>>>>>> Hi Christoph.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This is not an answer to your question, but a suggestion for a work-around.
>>>>>>>>>>>>> If I remember correctly, you have both Illumina and 454 reads. Celera runs, as you see below, frgcorrection and overlap based trimming to correct 454 reads, and merTrim to correct Illumina reads (it can also be used on 454 reads). What I've been doing lately is to run meryl on a trusted set of Illumina reads, paired end for example; I ran it on some overlapping reads which I had merged with FLASH. Then you can use the set of trusted k-mers to correct different datasets. For example, I first ran CA to the end of OBT (overlap based trimming) for my 454 reads, and then output the result as fastq files. I used the trusted k-mer set to correct these 454 reads too. If you do this for all your reads, using either merTrim or merTrim/OBT, and do deduplication on all the datasets too, then you'll end up with reads that you can use in assemblies where you skip relatively expensive steps such as frgcorrection.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I don't think frgcorrection is that useful for the type of data you're using anyway.
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you have a set of corrected reads, you can use these settings for CA (see the sketch below):
>>>>>>>>>>>>> doOBT=0
>>>>>>>>>>>>> doFragmentCorrection=0
>>>>>>>>>>>>>
>>>>>>>>>>>>> When I think of it, you might use doFragmentCorrection=0 on this assembly now. You might have to clean up your directory tree, like removing the 3-overlapcorrection directory and maybe some other steps too. Apply with caution.
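>>>>>>>>>>>>>
>>>>>>>>>>>>> For instance, the corrected-reads run above could be launched along these lines (a sketch; the spec file name, the -d/-p values, and the frg file name are made up, and the runCA -d/-p/-s usage is the standard one):
>>>>>>>>>>>>>
>>>>>>>>>>>>> cat > corrected.spec <<EOF
>>>>>>>>>>>>> doOBT=0
>>>>>>>>>>>>> doFragmentCorrection=0
>>>>>>>>>>>>> EOF
>>>>>>>>>>>>> runCA -d asm-corrected -p salaris -s corrected.spec corrected-reads.frg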
>>>>>>>>>>>>>
>>>>>>>>>>>>> Most of the stuff I've mentioned I've taken from here:
>>>>>>>>>>>>> http://sourceforge.net/apps/mediawiki/wgs-assembler/index.php?title=Preprocessing
>>>>>>>>>>>>> and from discussion with Brian.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ole
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 9 July 2012 12:47, Christoph Hahn <chr...@gm...> wrote:
>>>>>>>>>>>>> Dear users and developers,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have the following problem: In my assembly process I have just completed the fragment- and overlap error correction. Unfortunately runCA stopped in the subsequent updating of the overlapStore, because of an incorrectly set time limit..
>>>>>>>>>>>>>
>>>>>>>>>>>>> If I am trying to resume the assembly now, I get the following error:
>>>>>>>>>>>>>
>>>>>>>>>>>>> ----------------------------------------START Mon Jul 9 11:05:53 2012
>>>>>>>>>>>>> /xanadu/home/chrishah/programmes/wgs-7.0/Linux-amd64/bin/overlapStore -u /projects/nn9201k/Celera/work2/salaris1/salaris.ovlStore /projects/nn9201k/Celera/work2/salaris1/3-overlapcorrection/salaris.erates > /projects/nn9201k/Celera/work2/salaris1/3-overlapcorrection/overlapStore-update-erates.err 2>&1
>>>>>>>>>>>>> ----------------------------------------END Mon Jul 9 11:05:54 2012 (1 seconds)
>>>>>>>>>>>>> ERROR: Failed with signal HUP (1)
>>>>>>>>>>>>> ======================================================================
>>>>>>>>>>>>>
>>>>>>>>>>>>> runCA failed.
>>>>>>>>>>>>>
>>>>>>>>>>>>> ----------------------------------------
>>>>>>>>>>>>> Stack trace:
>>>>>>>>>>>>>
>>>>>>>>>>>>> at /usit/titan/u1/chrishah/programmes/wgs-7.0/Linux-amd64/bin/./runCA line 1237
>>>>>>>>>>>>> main::caFailure('failed to apply the overlap corrections', '/projects/nn9201k/Celera/work2/salaris1/3-overlapcorrection/o...') called at /usit/titan/u1/chrishah/programmes/wgs-7.0/Linux-amd64/bin/./runCA line 4077
>>>>>>>>>>>>> main::overlapCorrection() called at /usit/titan/u1/chrishah/programmes/wgs-7.0/Linux-amd64/bin/./runCA line 5880
>>>>>>>>>>>>>
>>>>>>>>>>>>> ----------------------------------------
>>>>>>>>>>>>> Last few lines of the relevant log file (/projects/nn9201k/Celera/work2/salaris1/3-overlapcorrection/overlapStore-update-erates.err):
>>>>>>>>>>>>>
>>>>>>>>>>>>> AS_OVS_openBinaryOverlapFile()-- Failed to open '/projects/nn9201k/Celera/work2/salaris1/salaris.ovlStore/0001~' for reading: No such file or directory
>>>>>>>>>>>>>
>>>>>>>>>>>>> ----------------------------------------
>>>>>>>>>>>>> Failure message:
>>>>>>>>>>>>>
>>>>>>>>>>>>> failed to apply the overlap corrections
>>>>>>>>>>>>>
>>>>>>>>>>>>> So it obviously cannot find the file salaris.ovlStore/0001~. The reason is, from what I can see, that the salaris.ovlStore/0001~ file had already been updated to salaris.ovlStore/0001 before the run stopped. In fact it seems to have stopped after updating salaris.ovlStore/0249 (of 430). Is there a way to tell runCA to continue from salaris.ovlStore/0250~, instead of from 0001~, which is obviously not there any more??
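>>>>>>>>>>>>>
>>>>>>>>>>>>> (For what it's worth, something like this shows how far the interrupted update got -- a sketch, assuming the not-yet-updated slices are exactly the ones still carrying the '~' suffix, as described above:)
>>>>>>>>>>>>>
>>>>>>>>>>>>> ls /projects/nn9201k/Celera/work2/salaris1/salaris.ovlStore | grep '~$'     # slices still awaiting update
>>>>>>>>>>>>> ls /projects/nn9201k/Celera/work2/salaris1/salaris.ovlStore | grep -c '~$'  # how many remain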
>>>>>>>>>>>>>
>>>>>>>>>>>>> Another solution I was thinking of is to run the previous overlapStore command again manually (the one that was done before starting frgcorr and ovlcorr):
>>>>>>>>>>>>>
>>>>>>>>>>>>> /xanadu/home/chrishah/programmes/wgs-7.0/Linux-amd64/bin/overlapStore -c /projects/nn9201k/Celera/work2/salaris1/salaris.ovlStore.BUILDING -g /projects/nn9201k/Celera/work2/salaris1/salaris.gkpStore -i 0 -M 14000 -L /projects/nn9201k/Celera/work2/salaris1/salaris.ovlStore.list > /projects/nn9201k/Celera/work2/salaris1/salaris.ovlStore.err 2>&1
>>>>>>>>>>>>>
>>>>>>>>>>>>> This would restore the status from before the frgcorr and ovlcorr steps, before resuming runCA. It should restore the 0001~ file, right? The most important thing is that I want to avoid rerunning the frgcorr and ovlcorr steps, because they were really resource intensive.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I would really appreciate any comments or suggestions on my problem! Thanks in advance for your help!
>>>>>>>>>>>>>
>>>>>>>>>>>>> much obliged,
>>>>>>>>>>>>> Christoph
>>>>>>>>>>>>>
>>>>>>>>>>>>> University of Oslo