Memory usage

  • Krzysztof Sakrejda

    I have specified a state-space Jolly-Seber model which is using more memory
    per node than I expected. For about 40k nodes, it uses about 1Gb of RAM. I
    know I have run models with hundreds of thousands of nodes in under 2Gb of
    RAM, so the per-node cost is confusing. I would appreciate any help in
    understanding the root of this problem.

    All the code is available as .zip or .tar.gz from github:

    If you unzip/untar it in a directory and source the 'runJAGS.R' file from an R
    session, this will simulate data and run the model.

    Additional information:

    • This is a state-space Jolly-Seber model with recruitment estimated based on size/survival/capture, rather than just survival/capture. I'm uncertain of the recruitment estimation, but the rest of the parameters are recovered from simulated data. Currently it simulates data for 50 individuals.
    • Memory usage of roughly the same magnitude occurs on a Windows machine with a vanilla installation of R/JAGS/rjags, and on two up-to-date 64-bit Gentoo Linux ~amd64 machines. All run JAGS 2.2.0. I don't think it's machine- or configuration-specific.
    • I can get the memory usage down to about 50-75% of the current figure by doing some contorted indexing to define only the important portions of the state space, but since the usage is an order of magnitude larger than I expected, this doesn't help much.
  • Martyn Plummer

    Martyn Plummer - 2011-02-08

    I'm not seeing excessive memory use in the model itself, but I see you are
    monitoring a lot of variables, some of which are quite large. For example, z,
    age, ageByYears, size, and res are all 50 x 29 matrices. If you monitor all
    the variables in your example then I estimate that you need 68 megabytes to
    store 1000 iterations, and 1 Gb to store 15000 iterations (commented out in
    your example, but I guess you have tried this).
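
    Martyn's estimate is easy to verify with back-of-envelope arithmetic: each
    monitored scalar is stored as an 8-byte double per saved iteration. A small
    Python sketch (illustrative only, not JAGS code; the five matrix names are
    the ones listed above, and the 68 Mb figure is Martyn's total including the
    other monitored variables):

    ```python
    # Back-of-envelope storage cost for monitored variables in JAGS/rjags.
    # Each monitored scalar costs 8 bytes (one double) per stored iteration.

    BYTES_PER_DOUBLE = 8

    def monitor_cost_bytes(n_values, n_iterations):
        """Bytes needed to store n_values monitored scalars for n_iterations."""
        return n_values * n_iterations * BYTES_PER_DOUBLE

    # Five 50 x 29 matrices (z, age, ageByYears, size, res) from the example:
    matrix_values = 5 * 50 * 29            # 7250 monitored scalars

    # Just these five matrices, over 1000 stored iterations:
    print(monitor_cost_bytes(matrix_values, 1000) / 2**20)  # ~55 Mb

    # Scaling Martyn's 68 Mb-per-1000-iterations total to 15000 iterations:
    print(68 * 15)                         # 1020 Mb, i.e. about 1 Gb
    ```

    The remaining ~13 Mb per 1000 iterations comes from the other, smaller
    monitored variables in the example.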

  • Martyn Plummer

    Martyn Plummer - 2011-02-09

    As a follow-up, I have compared 32-bit and 64-bit versions, and memory usage
    is nearly twice as high using x86_64 compared with i686. This is because
    pointers are twice the size and JAGS models rely extensively on pointers. You
    might save some memory by compiling a 32-bit version, but you will run up
    against the 2Gb memory limit if you try to monitor too many samples.
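
    The pointer-width difference is easy to see from any running process. A
    short Python illustration (not JAGS code; the per-node pointer count is a
    hypothetical figure chosen for the example):

    ```python
    # A 64-bit (x86_64) build stores every pointer in 8 bytes; a 32-bit (i686)
    # build in 4. A graph-based model that keeps several pointers per node
    # therefore roughly doubles its per-node overhead on 64-bit platforms.
    import struct

    pointer_bytes = struct.calcsize("P")   # size of a C pointer in this process
    print(pointer_bytes)                   # 8 on a 64-bit build, 4 on 32-bit

    def pointer_overhead(n_nodes, k_pointers, pointer_size):
        """Total bytes of pointer storage for n_nodes, k_pointers each."""
        return n_nodes * k_pointers * pointer_size

    # 40000 nodes with (say) 6 pointers each: the 64-bit figure is exactly
    # double the 32-bit one, matching the near-2x memory ratio observed.
    print(pointer_overhead(40_000, 6, 8) / pointer_overhead(40_000, 6, 4))
    ```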

  • Krzysztof Sakrejda

    The memory usage issue actually shows up when the model is in the adaptive
    phase prior to sampling. I'm pretty sure the issue is real, since I've run a
    different Cormack-Jolly-Seber model on 10000 individuals in about 4Gb of RAM
    (roughly 4.5 million nodes), but clearly my example is not reproducible. I'll
    try to fix that and post again. Thanks again for all the work you put into
    JAGS.

  • Krzysztof Sakrejda

    After more back and forth, the final solution to this problem ended up as
    follows, from Martyn:

    I have found the root of your problem. You are using a method to
    calculate age by years that is very inefficient in JAGS. You can work
    around the problem by using this:

        ageByYears <- dinterval(age, ageThresh)

    in place of this:

        ageByYears <- toAgeByYears[ age + 1 ]

    where ageThresh is defined (in R) as:

        ageThresh = which(diff(toAge) > 0)

    The dinterval function works like the "cut" function in R.
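
    For readers unfamiliar with dinterval: given a value and a sorted vector of
    cutpoints, it yields the index of the interval the value falls in, much like
    R's cut. A minimal Python sketch of that semantics (an illustration of the
    behaviour, not the JAGS implementation; the example threshold ages are
    hypothetical):

    ```python
    # Sketch of the interval semantics behind JAGS's dinterval: with sorted
    # cutpoints c[1] <= ... <= c[M], the result is 0 when t < c[1], i when
    # c[i] <= t < c[i+1], and M when t >= c[M] -- i.e. the number of cutpoints
    # that t has reached, like R's cut().
    from bisect import bisect_right

    def dinterval(t, cutpoints):
        """Interval index of t among sorted cutpoints (0 .. len(cutpoints))."""
        return bisect_right(sorted(cutpoints), t)

    # ageThresh marks the ages at which ageByYears steps up; e.g. with
    # hypothetical thresholds at ages 12 and 24:
    age_thresh = [12, 24]
    print([dinterval(a, age_thresh) for a in (0, 11, 12, 23, 24, 40)])
    # -> [0, 0, 1, 1, 2, 2]
    ```

    The point of the fix is that this single vectorized step replaces a lookup
    table indexed by a stochastic node, which JAGS handled very inefficiently.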

    Implementing this suggestion cuts memory use more than 10-fold, putting it
    into the same range I've seen for other Jolly-Seber models. He also indicated
    that he is working on fixing this type of inefficiency in the development
    version. Thank you Martyn!

