#280 Out of memory allocating offs[] with 32 GB RAM

Group: v0.9.0
Status: closed
Owner: Val
Labels: bowtie (175)
Priority: 5
Updated: 2014-07-23
Created: 2013-03-21
Private: No

Hi,
I am running TopHat, but the error I am encountering is Bowtie 2 related, so hopefully this is the right place to post. I apologize in advance if I am missing an important point: six months ago I had never touched the command line, and although I read message boards daily, this is my first post ever.

I am aligning 100 bp paired-end reads on a Linux cluster and have allocated up to 32 GB of RAM. I've run ~50 alignments with the same settings, indexes, etc., but on one of the runs I see the error:

Out of memory allocating the offs[] array for the Bowtie index.
Please try again on a computer with more memory.
Error: Encountered internal Bowtie 2 exception (#1)
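
From what I can tell, this error is raised when Bowtie 2 cannot allocate the memory it wants for the index's offsets array, which can apparently happen even on a node with plenty of physical RAM if the job shell is running under a per-process memory or address-space limit. A minimal set of checks, assuming a bash job environment (whether our cluster enforces limits this way is only an assumption):

ulimit -a            # look for "max memory size" / "virtual memory" caps on the job shell
free -g              # total and available RAM on the node the job actually landed on
ls -lh genome*.bt2   # size of the Bowtie 2 index files being loaded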

I have posted the full output below.

I tried running it again to see if the error was reproducible, and the same error occurred at the same point.

The resource summary reported by the cluster is
CPU time : 104819.57 sec.
Max Memory : 3703 MB
Max Swap : 4298 MB

Max Processes : 9
Max Threads : 15

My other runs, which complete successfully, report similar or higher max memory, so I don't think it is an issue with the submission to the cluster. Since there is more memory available than is being used, I'm not sure that allocating more memory, as the error suggests, will help (I assume 32 GB is far more than it should need).
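
For reference, the resource summary format looks like LSF output; if that is the scheduler in use (an assumption on my part), the per-job memory request could be made explicit at submission time rather than relying on the queue default. A hypothetical sketch, where the script name is a placeholder and the unit of -M depends on the site's LSF configuration:

bsub -n 8 -M 32768 -R "rusage[mem=32768]" ./run_tophat.sh   # run_tophat.sh stands in for the actual TopHat command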

The FASTQ files are similar in size to other ones that have run successfully.
I've tried googling the error, but I can't find a similar situation.
I also tried looking in the logs created during the run, but I don't see anything obviously wrong.
I'm really not sure how to troubleshoot further.

I would really appreciate any help or advice on how to move forward.

Thank you,
-brandi

output:

[2013-03-21 09:56:41] Beginning TopHat run (v2.0.7)
-----------------------------------------------
[2013-03-21 09:56:41] Checking for Bowtie
Bowtie version: 2.0.5.0
[2013-03-21 09:56:41] Checking for Samtools
Samtools version: 0.1.12.a
[2013-03-21 09:56:41] Checking for Bowtie index files
[2013-03-21 09:56:41] Checking for reference FASTA file
[2013-03-21 09:56:41] Generating SAM header for genome
format: fastq
quality scale: phred33 (default)
[2013-03-21 09:58:10] Reading known junctions from GTF file
[2013-03-21 09:58:37] Preparing reads
left reads: min. length=100, max. length=100, 27713872 kept reads (64616 discarded)
right reads: min. length=100, max. length=100, 26978243 kept reads (800245 discarded)
[2013-03-21 10:31:23] Creating transcriptome data files..
[2013-03-21 10:33:33] Building Bowtie index from human.ens.cleaned.fa
[2013-03-21 11:17:42] Mapping left_kept_reads to transcriptome human.ens.cleaned with Bowtie2
[2013-03-21 12:05:47] Mapping right_kept_reads to transcriptome human.ens.cleaned with Bowtie2
[2013-03-21 12:50:18] Resuming TopHat pipeline with unmapped reads
[2013-03-21 12:50:19] Mapping left_kept_reads.m2g_um to genome genome with Bowtie2
[2013-03-21 13:06:02] Mapping left_kept_reads.m2g_um_seg1 to genome genome with Bowtie2 (1/4)
[2013-03-21 13:18:50] Mapping left_kept_reads.m2g_um_seg2 to genome genome with Bowtie2 (2/4)
[2013-03-21 13:31:00] Mapping left_kept_reads.m2g_um_seg3 to genome genome with Bowtie2 (3/4)
[2013-03-21 13:43:48] Mapping left_kept_reads.m2g_um_seg4 to genome genome with Bowtie2 (4/4)
[2013-03-21 13:57:00] Mapping right_kept_reads.m2g_um to genome genome with Bowtie2
[2013-03-21 14:15:07] Mapping right_kept_reads.m2g_um_seg1 to genome genome with Bowtie2 (1/4)
[2013-03-21 14:35:57] Mapping right_kept_reads.m2g_um_seg2 to genome genome with Bowtie2 (2/4)
[2013-03-21 14:59:28] Mapping right_kept_reads.m2g_um_seg3 to genome genome with Bowtie2 (3/4)
[FAILED]
Error running bowtie:
Out of memory allocating the offs[] array for the Bowtie index.
Please try again on a computer with more memory.
Error: Encountered internal Bowtie 2 exception (#1)
Command: /n/sw/bowtie2-2.0.5/bowtie2-align -q -k 41 -N 1 -i C,10000,0 -L 14 -p 8 --sam-no-hd -x genome -
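
One way to isolate whether this is TopHat-specific would be to rerun the failing aligner call directly against the same index. The read file name below is a placeholder, since TopHat streams the segment reads over stdin, and I am assuming the bowtie2 wrapper script sits next to bowtie2-align:

/n/sw/bowtie2-2.0.5/bowtie2 -q -k 41 -N 1 -i C,10000,0 -L 14 -p 8 -x genome -U reads_segment.fq -S /dev/null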

Discussion

  • Val (2014-01-30)

    Hi Brandi,
    I am still unable to reproduce this issue. Is it possible to share the dataset that causes the error? Have you managed to track down the problem in the meantime?

    Val

     
  • Val (2014-01-30)

    • status: open --> pending
    • assigned_to: Val
    • Group: --> v0.9.0
     
  • Val (2014-07-23)

    • status: pending --> closed
     
  • Val (2014-07-23)

    I assume a solution was found for this.

    Val