
#110 GARP model stalls at 52%

Status: open
Owner: regiov
Priority: 5
Created: 2009-03-30
Updated: 2009-03-30
Creator: Anonymous
Private: No

I have repeatedly run a cloned version of the GARP algorithm on openModeller Desktop 1.0.8 and cannot get beyond 52% experiment progress. I have changed computers and am now running on a machine with 23.9 GB of RAM, but it stalled at the same point as on the slower machine. I have 121 GB of free space on the working drive, so disk space is not the problem either. The process has never proceeded beyond this point.
Has anyone else experienced this problem?

Discussion

  • regiov

    regiov - 2009-03-30

    I have never seen this before in any of the available openModeller interfaces. What values did you use for the GARP parameters?

    We need to be able to replicate the problem here. Is it possible for you to make your points/layers available somewhere for download? If you're using well-known layers such as WorldClim, just tell us which ones you used.

    Thanks,
    Renato

     
  • regiov

    regiov - 2009-03-30
    • assigned_to: timlinux --> rdg
     
  • Alicyn Gitlin

    Alicyn Gitlin - 2009-04-09

    I have been running GARP in a way that is probably huge and memory-intensive. I want to output all files so that I can choose my own subsets (and I hope I am doing that correctly). I don't have a place where you can download my files, but I can find one if I need to. Let me know if this is enough information; if you need the actual files, I will send them.
    Data setup:
    Resolution: 800-m PRISM data
    Extent: the US states of Colorado, New Mexico, Utah, and Arizona
    Input layers: monthly max temperature, min temperature, and precipitation (36 layers total)
    Projections: 10 total (one is the same as the input layers; the other nine are manipulations of the input layers with different temperature and precipitation values)
    Mask: a raster of pixels along rivers/streams
    Occurrence points: 128 for one species and 49 for the other (the same problem occurred with each species)
    GARP parameters:
    Commission Sample Size: 10000
    Commission Threshold: 100
    Convergence limit: 0.01
    Hard Omission Threshold: 100
    Max generations: 1000
    Max number of threads: 1 (I tried 4, but after seeing the warning, I changed back to 1!)
    Models Under Omission Threshold: 1000
    Population size: 500
    Resamples: 10000
    Total runs: 1000
    Training proportion: 100

    If you need the actual files, let me know and I will see if I can get them on an ftp site.
    Thank you!

     
  • regiov

    regiov - 2009-04-13

    In most situations, openModeller is still not prepared to abort an experiment gracefully when it runs out of memory. From your description, my guess is that if we found the exact place in the code where the problem happens, then instead of freezing at 52% you would probably get an error message such as "not enough memory to perform task xxx".
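
    To illustrate what I mean, here is a minimal sketch in C++ (not the actual openModeller code; runModel() is a hypothetical stand-in for one GARP run) of how catching the allocation failure would turn the freeze into an error message:

        #include <cstdio>
        #include <new>
        #include <vector>

        // Hypothetical stand-in for one GARP run; the large allocations
        // that can exhaust memory would happen inside a function like this.
        static void runModel(int run)
        {
            std::vector<double> workBuffer(50000000); // may throw std::bad_alloc
            (void)run;
            (void)workBuffer;
        }

        int main()
        {
            const int totalRuns = 1000;
            for (int run = 0; run < totalRuns; ++run) {
                try {
                    runModel(run);
                }
                catch (const std::bad_alloc&) {
                    // Report the failure and abort gracefully instead of hanging.
                    std::fprintf(stderr, "not enough memory to perform run %d of %d\n",
                                 run + 1, totalRuns);
                    return 1;
                }
            }
            return 0;
        }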

    There are two things I can suggest, besides trying to run the experiment on a machine with more resources:

    1) Reduce the Total runs parameter. Generating 1000 models just to select the best subsets seems like overkill to me. Are you sure you really need that many? Taking the 10 best models out of 100 runs is what most people do. I'm not sure how much you would improve your final model by increasing the total number of runs so much, but I suspect the gain would not be significant.

    2) If you keep having problems, you could try the command-line interface (om_console), since it definitely consumes less memory than the Desktop interface. The only drawback is that it's not user-friendly, so I recommend trying suggestion #1 first. There is a sketch of a request file below.
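
    To give you an idea of what om_console expects, here is a rough sketch of a request file. I'm writing it from memory, so treat the key names, the GARP_BS algorithm ID, the parameter IDs, and all file paths as assumptions to be checked against the sample request.txt that ships with openModeller:

        # Sketch of an om_console request file (key names to be verified
        # against the sample request.txt distributed with openModeller).
        Occurrences source = ./my_points.txt
        Occurrences group = my_species

        # One Map line per environmental layer (hypothetical paths):
        Map = ./prism/tmax_01.tif
        Map = ./prism/tmin_01.tif
        Map = ./prism/ppt_01.tif
        Mask = ./rivers_mask.tif

        # The output grid copies the extent/cell size of this layer:
        Output format = ./prism/tmax_01.tif
        Output file = ./result.img

        # GARP with Best Subsets; parameter IDs assumed from the Desktop labels.
        Algorithm = GARP_BS
        Parameter = TrainingProportion 100
        Parameter = TotalRuns 100
        Parameter = HardOmissionThreshold 100
        Parameter = ModelsUnderOmissionThreshold 20

    With Total runs at 100 instead of 1000 (and Models under omission threshold reduced accordingly, since it presumably cannot exceed the number of runs), the memory footprint of the best-subsets selection should drop roughly in proportion.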

    Please let me know if you manage to run the experiment.

    Hope this helps,
    --
    Renato

     
