poisson?

  • JY Barnagaud

    JY Barnagaud - 2012-03-30

    Hi forum;

    I heard that JAGS can't handle a Poisson distribution whose parameter equals 0,
    which is quite annoying in ZIP (zero-inflated Poisson) models, like the one
    partially quoted below:

    N ~ dpois(esp)
    esp <- w * lambda
    w ~ dbern(psi)

    I've had a "runtime error" while trying to update that model with Jags 3.2.0
    under R -> could that poisson(0) be the reason? Or should I look at an issue
    somewhere else? The 1000 adaptation interations run well, and then the first
    8% of MCMC iterations, before R fails.

    many thanks for advice!

    bugbird

     
  • Martyn Plummer

    Martyn Plummer - 2012-03-30

    JAGS 3.2.0 can handle a zero mean parameter. If that were the problem it would
    happen almost immediately.

    You haven't given enough information for anyone to help solve your problem. At
    the very least you need to give the model in full, and also the full error
    message (it doesn't just say "RUNTIME ERROR", does it?).
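
    For reference, here is a minimal zero-inflated Poisson sketch (simulated data
    and made-up names, not the poster's model) that runs under rjags even though
    the Poisson mean is exactly zero whenever w is zero:

    library(rjags)

    # simulate zero-inflated Poisson counts (hypothetical example)
    set.seed(1)
    n <- 100
    z <- rbinom(n, 1, 0.6)   # occupancy indicator
    N <- rpois(n, z * 3)     # counts are 0 wherever z is 0

    zip.model <- "
    model {
      psi    ~ dunif(0, 1)
      lambda ~ dgamma(0.1, 0.1)
      for (i in 1:n) {
        w[i]   ~ dbern(psi)
        esp[i] <- w[i] * lambda   # Poisson mean is exactly 0 when w[i] == 0
        N[i]   ~ dpois(esp[i])
      }
    }"

    m <- jags.model(textConnection(zip.model),
                    data  = list(N = N, n = n),
                    inits = list(w = rep(1, n)),  # start all w at 1 for a consistent initial state
                    n.chains = 1, n.adapt = 1000)
    s <- coda.samples(m, c("psi", "lambda"), n.iter = 5000)
    summary(s)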

     
  • JY Barnagaud

    JY Barnagaud - 2012-03-30

    Hi Martyn, thanks for your answer

    Well, you have answered my specific question about the Poisson(0) issue, so
    it's something else.

    I did not keep the error message (even if I had wanted to, I guess a screenshot
    wouldn't have been possible anyway, as the whole computer was frozen). It said
    something like the code had terminated in an abnormal way, in a separate window
    rather than in the R console (which subsequently failed).

    I guess it might be due either to the model code itself or to a bad choice of
    inits. A slight simplification of the model is enough to fix it, but here it is
    anyway, in case something obvious stands out.

    Thanks again;

    bugbird

    model{

      for (g in 1:numori){
        mlb.mean~dunif(0,1)
        mu.lam.beta<-log(mlb.mean)-log(1-mlb.mean)
        mlb2.mean~dunif(0,1)
        mu.lam.beta2<-log(mlb2.mean)-log(1-mlb2.mean)
      } # g

      for (h in 1:numori){ # loop over numori origins
        tau.lam.beta~dgamma(0.1,0.1)
        tau.lam.beta2~dgamma(0.1,0.1)
      } # h

      for (g in 1:numori){
        mpb.mean~dunif(0,1)
        mu.psi.beta<-log(mpb.mean)-log(1-mpb.mean)
        mpb2.mean~dunif(0,1)
        mu.psi.beta2<-log(mpb2.mean)-log(1-mpb2.mean)
      } # g

      for (h in 1:numori){
        tau.psi.beta~dgamma(0.1,0.1)
        tau.psi.beta2~dgamma(0.1,0.1)
      } # h

      for (i in 1:S){ # loop over S species

        # intercept
        for (f in 1:nzone){
          psi.alpha~dnorm(mu.psi.int_,tau.psi.int_)
        } # f
        mu.psi.int_~dnorm(0,0.001)
        tau.psi.int_<- 1/(sigma.psi.int_*sigma.psi.int_)
        sigma.psi.int_~dunif(0,100)

        # slopes
        psi.beta_ ~ dnorm(mu.psi.beta[ST_],tau.psi.beta[ST_])
        psi.beta2_ ~ dnorm(mu.psi.beta2[ST_],tau.psi.beta2[ST_])

        # intercept
        for (f in 1:nzone){
          lam.alpha~dnorm(mu.lam.int_,tau.lam.int_)
        } # f
        mu.lam.int_~dnorm(0,0.001)
        tau.lam.int_<- 1/(sigma.lam.int_*sigma.lam.int_)
        sigma.lam.int_~dunif(0,100)

        # slopes
        lam.beta_ ~ dnorm(mu.lam.beta[ST_],tau.lam.beta[ST_])
        lam.beta2_ ~ dnorm(mu.lam.beta2[ST_],tau.lam.beta2[ST_])

        det.alpha_ ~ dnorm(0,0.001)
        det.beta1_ ~ dnorm(0,0.001)
        det.beta2_ ~ dnorm(0,0.001)
        det.beta3_ ~ dnorm(0,0.001)

        # likelihoods
        for (j in 1:R){ # loop over R sites

          w ~ dbern(psi)
          logit(psi)<-psi.alpha[i,ZON]+psi.beta_NATFOR500+psi.beta2_SHA500

          # abundance model
          N ~ dpois(esp)
          esp <- w * lambda
          log(lambda)<-lam.alpha[i,ZON]+lam.beta_NATFOR500+lam.beta2_SHA500

          # detection model
          for (k in 1:K){
            obs ~ dbin(p,N)
            p <- theta*w
            logit(theta) <- det.alpha_ + det.beta1_ * HOUR + det.beta2_ * VH + det.beta3_ * VH2
          } # k

        } # j

      } # i

      # fit bayesian probability statistic
      for (i in 1:S){
        for (j in 1:R){
          for (k in 1:K){
            # compute expected values of obs
            eval<-N*p
            sd.resi<-sqrt(eval*(1-p))+0.5
            E<-(obs-eval)/sd.resi
            E2<-pow(E,2)
            # replicate data sets
            obs.new~dbin(p,N)
            E.new<-(obs.new-eval)/sd.resi
            E2.new<-pow(E.new,2)
          } # k
        } # j
      } # i

      fit<-sum(E2)
      fit.new<-sum(E2.new)

    } # model


     
  • Martyn Plummer

    Martyn Plummer - 2012-03-30

    Are you saying that JAGS locked up your whole computer? Because that sounds
    bad.

     
  • JY Barnagaud

    JY Barnagaud - 2012-03-30

    Basically, JAGS made R fail, but it seems that it was the failure of R rather
    than of JAGS itself that caused this temporary freeze. It might be due to the
    large amount of memory R was using at the time of the failure, given the size
    of the model and of the data (?)

    (BTW my computer runs Windows XP, 3.20 GHz, dual core, 2.25 GB RAM)

     
  • Martyn Plummer

    Martyn Plummer - 2012-03-30

    You might have just run out of memory. Try running the same model on a more
    recent computer. If you have a really big problem, you need a 64-bit operating
    system. On 32-bit there is a hard memory limit of 2 GB.
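
    A couple of quick checks you can run from the R console if you suspect a
    memory problem (my addition, not part of the original advice; memory.limit()
    is specific to R on Windows and reports the cap in megabytes):

    memory.limit()   # maximum memory R may use in this Windows session, in MB
    gc()             # run garbage collection and report how much memory R currently holds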

     
  • JY Barnagaud

    JY Barnagaud - 2012-03-30

    So hopefully I'll find a recent computer with a 64-bit system somewhere and
    follow your advice!

    One very last question: I'm new to JAGS, so sorry if this is something trivial:
    should I consider n.adapt as equivalent to a burn-in period (like specifying
    the burn-in period in OpenBUGS)? As far as I understood, the answer is "no",
    but I'm not sure about that, nor why.

    Thanks a lot for your rapid help anyway.

     
  • Jack Tanner

    Jack Tanner - 2012-03-30

    There's a discussion of adaptation in the JAGS manual. Please have a look and
    see if that answers your question.

     
  • JY Barnagaud

    JY Barnagaud - 2012-04-02

    About adaptation: it's still not too clear to me. It seems that it's not an
    MCMC run, but then I can't figure out what it is.

    About running the model: it also crashes with 64 bits (updating the model for
    10000 iterations works fine, then monitoring the parameters for 10000
    additional iterations makes R fail with a "runtime error"). I'm monitoring

    • an array of 917 * 15 * N iterations
    • an array of 917 * 15 * 3 * N iterations
    • 6 matrices of 15 * N iterations

    It's quite high but not tremendous - could objects of that size be too much
    for R to handle?
     
  • Martyn Plummer

    Martyn Plummer - 2012-04-02

    Let's do the math. You need 8 bytes to store a double-precision float, so the
    minimum memory needed for your 10000 iterations is

    > (917 * 15 * 3 + 917 * 15 + 6 * 15 ) * 10000 * 8 
    [1] 4408800000
    

    or about 4.4 gigabytes. Due to the way memory allocation works, you may need
    up to twice that amount. So yes, you are going to run out of memory quite
    quickly.
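
    If monitoring everything at every iteration is what blows the budget, one
    standard workaround (my suggestion here, not something stated above) is to
    thin the monitors or monitor fewer nodes, since storage grows linearly with
    the number of saved iterations:

    bytes.per.saved.iter <- (917 * 15 * 3 + 917 * 15 + 6 * 15) * 8
    bytes.per.saved.iter * 10000        # ~4.4e9 bytes if all 10000 iterations are kept
    bytes.per.saved.iter * 10000 / 10   # ~4.4e8 bytes if only every 10th iteration is kept

    # in rjags that corresponds to something like (model object and monitored
    # nodes are placeholders):
    # samp <- coda.samples(m, variable.names = c("N", "fit", "fit.new"),
    #                      n.iter = 10000, thin = 10)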

     
  • Tim Handley

    Tim Handley - 2012-04-02

    Regarding adaptation:

    There are several different ways of generating samples with MCMC. JAGS tries
    to be smart about this, and chooses the method that is most efficient for your
    particular model. Each of these methods has semi-hidden parameters that you
    won't see unless you're a programmer or a statistician. These are not model
    parameters and they are not fit to your data. Rather, they are meta-sampling
    parameters which can be tweaked to optimize the efficiency of sample
    generation. During the adaptation period, JAGS is drawing samples, but it is
    also tweaking the meta-sampling parameters to help you get the best possible
    performance.

    In general, you want to run JAGS in two steps. First, call jags.model() with
    n.adapt=100 to 1000. This allows JAGS to optimize the meta-parameters. You
    cannot store the samples generated here, but that's fine. Once the sampler is
    optimized, you then call jags.samples(). This second function uses the
    optimized sampler to collect samples of your model parameters.

    As an example, run jags.model() with n.adapt=1. Then run jags.samples() with
    n.iter=10000 and thin=100. Time the call to jags.samples(), and look at the
    within-chain autocorrelation. Then do the same thing but with n.adapt=1000.
    The second call to jags.samples() will likely be faster and more efficient
    (lower autocorrelation) because you've given JAGS a lot of time (1000
    iterations) in which to optimize the efficiency of the sampling.
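
    A minimal sketch of that two-step workflow in rjags (using coda.samples(), the
    coda-friendly sibling of jags.samples(); the model file, data, inits, and
    monitored variables below are placeholders):

    library(rjags)

    # step 1: build the model and let JAGS adapt its samplers; these iterations are not kept
    m <- jags.model("mymodel.txt", data = mydata, inits = myinits,
                    n.chains = 3, n.adapt = 1000)

    # optional extra burn-in after adaptation
    update(m, n.iter = 1000)

    # step 2: draw the samples that are actually stored, thinning to keep memory down
    samp <- coda.samples(m, variable.names = c("psi", "lambda"),
                         n.iter = 10000, thin = 10)

    # check within-chain autocorrelation, as suggested above
    autocorr.diag(samp)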

     
  • JY Barnagaud

    JY Barnagaud - 2012-04-03

    That's exactly the kind of explanation I'm able to understand, thanks a lot!

     
