Failure to calculate log density

  • Chrissuthy

    Chrissuthy - 2012-09-25

    Hello,
    I have recently switched to JAGS in an attempt to speed up the run time of some analysis I am doing. The time saved is incredible, I'm delighted! BUT, now I keep getting the following error message:

    Error: Error in node N_juv[66,3]
    Failure to calculate log density

    I figured it may be something to do with my starting values, but I have spent a long time tweaking these and continue to get the same message. Incidentally, the indices (in this case [66,3]) change each time. I was wondering if there is an obvious reason for this error message?
    I have pasted the code below in case it is not a straightforward fix.
    Many thanks in advance

    model{
    #Priors
     psi1       ~  dunif(0, 1)
     alpha      ~  dnorm(0.001, 0.001)
     mu_b0A     ~  dnorm(0, 0.001)
     mu_b1A     ~  dnorm(0, 0.001)
     mu_b1J     ~  dnorm(0, 0.001)
     mu_g0      ~  dnorm(0, 0.001)
     mu_g1      ~  dnorm(0, 0.001)
     mu_om      ~  dnorm(0, 0.001)
     p          ~  dunif(0,1)
     #tau_e     ~  dgamma(0.001, 0.001)
     tau_om     ~  dgamma(0.001, 0.001)
     tau_b0A    ~  dgamma(0.001, 0.001)
     tau_b1A    ~  dgamma(0.001, 0.001)
     tau_g0     ~  dgamma(0.001, 0.001)
     tau_g1     ~  dgamma(0.001, 0.001)
     bet0_A[nyear] ~  dnorm(mu_b0A, tau_b0A)
     bet1_A[nyear] ~  dnorm(mu_b1A, tau_b1A)
    
      for(k in 1:(nyear-1)){
        o[k]        ~  dnorm(mu_om,tau_om)
        omega[k]    <- exp(o[k])
        bet0_A[k]   ~  dnorm(mu_b0A, tau_b0A)
        bet1_A[k]   ~  dnorm(mu_b1A, tau_b1A)
        gam0[k]     ~  dnorm(mu_g0, tau_g0)
        gam1[k]     ~  dnorm(mu_g1, tau_g1)
        phi[k]      ~  dunif(0,1)
      }
    
    #Year 1
     for(m in 1:nsite){
        z[m,1]  ~ dbern(psi1)
     #Adult ~ patch length
        N_ad[m,1]   ~ dpois(lambda_A[m,1])
         log(lambda_A[m,1]) <- bet0_A[1] + bet1_A[1] * length[m]
        zA[m,1] <- z[m,1] * N_ad[m,1]
     #Juv ~ adult
        N_juv[m,1]  ~ dpois(lambda_J[m,1])
         log(lambda_J[m,1]) <- mu_b1J * zA[m,1]
        zJ[m,1] <- z[m,1] * N_juv[m,1]
     #connectivity  
      for(n in 1:nsite){
        con[m,n] <- exp(-alpha * dmat[m,n]) * (1 - equals(IND[m],IND[n]))
       }
      }
    
    #Ecological submodel
     for(k in 2:nyear){
      for(i in 1:nsite){
     #Calculate colonisation as a function of per-patch connectivity to N_J
        col[i,k-1]          <- 1 - exp(-omega[k-1] *(inprod(zJ[,k-1],con[i,]))) 
     #Calculate extinction as a function of N_A
        logit(ext[i,k-1])   <- gam0[k-1] + gam1[k-1] * zA[i,k-1]
     #Calculate occupancy probability WITH a rescue effect
        muZ[i,k]            <- z[i,k-1] * ext[i,k-1] + (1 - z[i,k-1]) * col[i,k-1]
        z[i,k]              ~  dbern(muZ[i,k])
      #Adult ~ patch length
            N_ad[i,k]   ~ dpois(lambda_A[i,k])
             log(lambda_A[i,k]) <- bet0_A[k] + bet1_A[k] * length[i]
            zA[i,k] <- z[i,k] * N_ad[i,k]
      #Juv ~ adult
            N_juv[i,k]  ~ dpois(lambda_J[i,k])
             log(lambda_J[i,k]) <- mu_b1J * N_ad[i,k]
            zJ[i,k] <- z[i,k] * N_juv[i,k]
       }#i
      }#k
    
    # Observation model
     for(i in 1:nsite){
      for(k in 1:nyear){
       for(j in 1:REPS[i,k]){
        muy[i,j,k]  <- z[i,k] * p
        x[i,j,k]    ~  dbern(muy[i,j,k])
       }#j
      }#k
     }#i
    
    n.occ[1]<-sum(z[1:nsite,1])
    for (k in 2:nyear){
    n.occ[k] <- sum(z[1:nsite,k])
    }#k
    
    }#end
    
     
    Last edit: Martyn Plummer 2012-11-18
  • Chrissuthy

    Chrissuthy - 2012-10-10

    In case this got missed, I thought I would bump it back up the list. Frustratingly, I still cannot find a workaround.

     
  • Jan Glaescher

    Jan Glaescher - 2012-11-14

    Did you get an answer to this one? I encounter the same problem with this model.

    model {
    
        Rmu ~ dunif(0, 1)
        Rsigma ~ dunif(0, 1)
        Rlambda <- 1/sqrt(Rsigma)
    
        Mmu ~ dunif(0, 1)
        Msigma ~ dunif(0, 1)
        Mlambda <- 1/sqrt(Msigma)
    
        for ( s in 1:16 ) {
            r[s]  ~ dnorm(Rmu,Rlambda)
            mu[s] ~ dnorm(Mmu,Mlambda)
    
            for ( t in 1:112 ) {
    
                U[t,1,s] <- (r[s]==1) * log(data[t,1,s]) + (r[s]<1) * data[t,1,s]^(1-r[s]) + (r[s]>1) * (1/(1-r[s])) * data[t,1,s]^(1-r[s]) 
                U[t,2,s] <- (r[s]==1) * log(data[t,2,s]) + (r[s]<1) * data[t,2,s]^(1-r[s]) + (r[s]>1) * (1/(1-r[s])) * data[t,2,s]^(1-r[s]) 
                P[t,1,s] <- U[t,1,s]^(1/mu[s]) / (U[t,1,s]^(1/mu[s]) + U[t,2,s]^(1-mu[s]))
                P[t,2,s] <- U[t,2,s]^(1/mu[s]) / (U[t,1,s]^(1/mu[s]) + U[t,2,s]^(1-mu[s]))
    
                data[t,3,s] ~ dcat(P[t,1:2,s])
            }
        }
    }
    
     
    Last edit: Jan Glaescher 2012-11-16
  • Martyn Plummer

    Martyn Plummer - 2012-11-18

    Both of you are generating non-finite values somewhere in your model.

    Jan, the value of x^y is not well defined if x is negative (unless y is an integer). I expect you are generating some NaN (not a number) values.
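
    For example, a quick check in R shows how such NaN values can arise silently (the numbers are purely illustrative):

    (-2)^0.5   # NaN: a negative base with a non-integer exponent is undefined over the reals
    (-2)^2     # 4: integer exponents are fine
    log(-1)    # NaN, with a warning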

    Chris, you have a double exponential in your model, i.e. omega is the exponential of o and col is calculated by taking the exponential of omega (with some coefficient). This could easily blow up to give an infinite value.
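
    To see how quickly a double exponential can overflow, here is a small R illustration (the value of o is a made-up stand-in, not a draw from your chain):

    o <- 7             # a perfectly plausible value on the dnorm(mu_om, tau_om) scale
    omega <- exp(o)    # about 1097
    exp(omega)         # Inf: exp() overflows double precision above ~709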

    Instead of using reference priors, try putting strong biologically plausible priors on your parameters. Try simulating from a model with strong priors to see what the data looks like. If the model is too complex, break it down into component parts and study each simpler sub-model separately. The BUGS language allows you to build up models in a modular fashion, so take advantage of this feature.
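
    For instance, a prior-predictive check of one small piece of the model can be done in plain R without running JAGS at all (the numbers below are made-up stand-ins, not your data):

    n.sim    <- 1000
    mu_b1J   <- rnorm(n.sim, 0, sqrt(1/0.001))  # the dnorm(0, 0.001) prior, sd about 31.6
    zA       <- rpois(n.sim, 10)                # stand-in for the adult counts
    lambda_J <- exp(mu_b1J * zA)                # the juvenile Poisson mean
    summary(lambda_J)                           # mostly 0 or Inf: the prior is far too vague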

     
  • Jan Glaescher

    Jan Glaescher - 2012-11-21

    Martyn, thanks for the tip. I am still looking for the non-finite values, because the x's in my model (the data) are always positive and the y's (the exponent is actually 1-y) are sampled from a beta distribution, hence a positive, real value between 0 and 1. It's unclear how that can generate non-finite values, but I am looking into using a debugger to at least see the variable values at the non-finite error state ...

     
  • Abdelouahed BEN MHAMED

    Please help me, I have the same problem.
    Here is my code:

    data{
    for(t in 1:n){
    LogC[t]<- log(C[t])
    }
    }
    model{
    for(t in 1:(n-1)){
    # Observation model

        LogC_mean[t] <- log(qseg) + log(Eff[t]) + log(B[t])
        LogC[t] ~ dnorm(LogC_mean[t],tau_c)
    
        # Process model
    
        LogB[t+1]<- log(r)+log(B[t]-C[t]) -log(1+(r/K)*(B[t]-C[t])) + error_B[t]
        B[t+1] <- exp(LogB[t+1])
    }
    LogC_mean[10] <- log(qseg) + log(Eff[10]) + log(B[10])
    LogC[10] ~ dnorm(LogC_mean[10],tau_c)
    
    LogK ~ dnorm(mu_K,tau_K)
    K <- exp(LogK)
    inv_K <- min(1/K,1)
    Bm ~ dnorm(mu_K,tau_K)
    LogB[1] <- Bm
    B[1] <- exp(LogB[1])
    Logr ~ dnorm(mu_r,tau_r)
    r <- exp(Logr)
    Logqseg ~ dnorm(mu_qseg,tau_qseg)
    qseg <- exp(Logqseg)
    # Hyper-Prior
    mu_K ~ dnorm(1000,0.001)
    mu_r ~ dnorm(0,0.001)
    mu_qseg ~ dnorm(0,0.0001)
    tau_K ~ dgamma(0.001,0.001)
    tau_r ~ dgamma(0.001,0.001)
    tau_qseg ~ dgamma(0.001,0.001)
    tau_c ~ dgamma(0.001,0.001)
    tau_B ~ dgamma(0.001,0.001)
    for(t in 1:(n-1)){
        error_B[t] ~ dnorm(0,tau_B)
    }
    

    }

     
    Last edit: Abdelouahed BEN MHAMED 2013-10-03
  • Martyn Plummer

    Martyn Plummer - 2013-10-03

    Here you are taking the log of a difference between B[t] and C[t]

    LogB[t+1]<- log(r)+log(B[t]-C[t]) -log(1+(r/K)*(B[t]-C[t]))
    

    You must have B[t] > C[t] for all t otherwise the log is not defined. For t=1 you can truncate the prior distribution of B[1] to guarantee that it is greater than C[1], e.g.

    B[1] ~ dnorm(mu_r, tau_r) T(C[1], )
    

    For the rest, I don't know! You are defining B[2:n] recursively using random variables r and K. It's not easy to see how to enforce this constraint.

     
  • Abdelouahed BEN MHAMED

    Thanks for responding.
    There is still a problem, which I don't really know how to resolve.

     
  • Lars Velten

    Lars Velten - 2014-07-02

    My model is much simpler than the ones listed above and still gives the same error...

    ~~~~~~~
    model {
    for (g in 1:A) {
    for (j in 1:B) {
    #Parameters of a negative binomial distribution
    #as prob is never 0,1, size is always defined
    prob[g,j] ~ dbeta(1.0001,1.0001) T(0.0001,0.9999)
    mu[g,j] ~ dunif(0.01,10000)
    size[g,j] <- prob[g,j] * mu[g,j] / (1-prob[g,j])
    for (i in 1:C) {
    M[j,i,g] ~ dnbinom(prob[g,j], size[g,j])
    N[j,i,g] ~ dbinom(p[i],M[j,i,g]) #p[i] and N[j,i,g] are data. (a noisy sampling method).
    }
    }
    }
    }
    ~~~~~~

    I have more complicated models with a similar binomial subsampling step working. But this one consistently gives
    Error in node M[1,23,493] (or some other indices)
    Failure to calculate log density
    (during update, not during initialization! For initialization I use M = N+1, and mu >> M.)

    Would be great if someone could tell me what I'm overlooking!

     
    Last edit: Lars Velten 2014-07-02
  • Matt Denwood

    Matt Denwood - 2014-07-03

    Hi Lars

    I think the error is coming from the dnbinom distribution not the dbinom distribution - possibly due to extreme values of prob and/or size. I would consider re-parameterising it as a Poisson-gamma which is equivalent but avoids the necessity to calculate size in this way. Something like (untested):

    model {
    for (g in 1:A) {
     for (j in 1:B) {
      shape[g,j] ~ dgamma(0.01, 0.01) # a different prior for shape may be better
      mu[g,j] ~ dunif(0.01,10000) # may not be a sensible prior for mu
      rate[g,j] <- shape[g,j] / mu[g,j]
      for (i in 1:C) {
        M[j,i,g] ~ dpois(lambda[j,i,g])
        lambda[j,i,g] ~ dgamma(shape[g,j], rate[g,j])
        N[j,i,g] ~ dbinom(p[i],M[j,i,g]) #p[i] and N[j,i,g] are data. (a noisy sampling method).
        } 
      }
    }
    }
    

    I'm assuming that there is a good reason for using a negative binomial to model the size parameter of the binomial part here ... if not then there may well be better ways to model this.

    Matt

     
  • Lars Velten

    Lars Velten - 2014-07-09

    Great, thanks a lot!

     
  • Alberto De Rosa

    Alberto De Rosa - 2015-05-09

    Hi, I'm having a similar problem, can anyone help me?

    model
    {

    for(i in 1:qo)
        {
        w[i]~dexp(l[i])
        ## Linear model
        log(l[i])<-bc[cs[i],sp[i]] + bm[m[i],sp[i]]+b[1]*t[d[i]]+b[2]*h[d[i]]+b[3]*v[d[i]]+b[4]*ws[d[i]]*wm[d[i]] + b[5]*p[d[i]]+ int + epsilon[i]
        teps[i] ~ dgamma(0.001,0.001)
        }
    for(i in 2:qo)
        {
        epsilon[i] ~ dnorm(0,teps[i])
        }
    epsilon[1]<- -sum(epsilon[2:qo])
    int ~ dnorm(0,te)
    for(i in 1:qg)
        {
        v[i] ~ dnorm(mcv[1],tcv[1])
        p[i] ~ dnorm(mcv[2],tcv[2])
        wm[i] ~ dnorm(mcv[3],tcv[3])
        ws[i] ~ dnorm(mcv[4],tcv[4])
        h[i] ~ dnorm(mcv[5],tcv[5])
        t[i] ~ dnorm(mcv[6],tcv[6])
        }
    
    for(i in 1:qs)
        {
        ## hierarchical model species
        bsp[i] ~ dnorm(a, nu)
        }
    
    for(i in 1:5)
        {
        ## hierarchical model b
        b[i] ~ dnorm(0, 1e-6)
        }
    for(i in 2:qs)
        {   
            bc[1,i] ~ dnorm(0,tc)
        }
    for(i in 1:qs)
        {
            bc[2,i] ~ dnorm(0,tc)
            bc[3,i] ~ dnorm(0,tc)
        }
    bc[1,1] <- -sum(bc[2:3,1:qs])
    for(i in 1:3) 
        {
        for(u in 1:qs)
                {
             ## hierarchical model bm
                bm[i,u] ~ dnorm(0,tm)
            }
            }
    
    ## other  priors
    for(i in 1:6)
    {
    mcv[i] ~ dnorm(0,1e-6)
    tcv[i] ~ dgamma(1e-3,1e-3)
    }
    
    a ~ dnorm(0,1e-6)
    nu ~ dgamma(0.001,0.001)
    
    tm ~ dgamma(0.001,0.001)
    tc ~ dgamma(0.001,0.001)
    te ~ dgamma(0.001,0.001)
    

    }

     
    Last edit: Alberto De Rosa 2015-05-09
  • Alberto De Rosa

    Alberto De Rosa - 2015-05-09

    Never mind, it was the indexed precision for epsilon.
    By using a shared precision for them, it no longer fails to calculate the log density.

     
  • O.mykiss

    O.mykiss - 2015-06-30

    I have the same issue that other people have described above. When I run my model with fewer iterations (both burn-in and total iterations) I don't receive the "Failure to calculate log density" error. When I increase either the burn-in or the total iterations I begin to get the error.

    Here is the code:

    model <- function() {  
      for(i in 1:jw){
        m[i] ~ dbin(p[i],n[i]) 
        logit(p[i]) <- etaP[i]
        etaP[i] ~ dnorm(etaP1,tauP)
    
        u[i] ~ dbin(p[i],U[i])
        U[i] <- round(exp(etaU[i])) 
        etaU[i]~ dnorm(etaU1,tauU)
    
      }
      etaP1 ~ dnorm(-2,.666)
      tauP <- 1/sqrt(tauP2inv)
      tauP2inv ~ dgamma(.001,.001)
    
      etaU1 ~ dnorm(7.5,4.0)
      tauU <- 1/sqrt(tauU2inv)
      tauU2inv ~ dgamma(.001,.001)
    
    }
    

    Any help would be appreciated!

     
    Last edit: Martyn Plummer 2015-07-01
  • Martyn Plummer

    Martyn Plummer - 2015-07-01

    The error message should tell you which node failed in the log density calculation, but you did not say.

    This error occurs when the function that calculates the log density returns NaN (not a number). If the error is only occurring in long runs then it suggests that the Markov chain is moving into a part of the parameter space where the density cannot be evaluated. You should look to the prior distributions to stop this from happening. For example, you might want to put a much tighter prior on tauU to stop U[i] from getting too large.

    It looks like you are assuming that the normal distribution is parameterized in terms of its mean and standard deviation. This is not the case. In the BUGS language, the normal distribution is parameterized in terms of its mean and precision (inverse of the variance). If you want the conjugate inverse gamma prior on the variance then you can define it directly as follows:

    model <- function() {  
      for(i in 1:jw){
        m[i] ~ dbin(p[i],n[i]) 
        logit(p[i]) <- etaP[i]
        etaP[i] ~ dnorm(etaP1,tauP)
    
        u[i] ~ dbin(p[i],U[i])
        U[i] <- round(exp(etaU[i])) 
        etaU[i]~ dnorm(etaU1,tauU)
    
      }
      etaP1 ~ dnorm(-2,.666)
      tauP ~ dgamma(.001,.001)
    
      etaU1 ~ dnorm(7.5,4.0)
      tauU ~ dgamma(.001,.001)
    }
    
     
  • O.mykiss

    O.mykiss - 2015-07-01

    That is just what I needed! Thanks a ton for the help.

     
  • Anonymous - 2016-02-23

    Hello,

    I have a problem similar to those mentioned above. My model has many explanatory variables, but simplifying the model has not changed anything. I always receive the same error:
    Error: Error in node gsp[1]
    Failure to calculate log density

    I would be grateful if somebody could take a look and tell me what I am doing wrong. I am a beginner in JAGS.

    My model is:

    #Data upload
    d <- read.table("all_species.txt", header = T)
    d$grow <- (log(d$DBH2)-log(d$DBH1))/(d$yearinterval)
    
    # Model
    grow.model <- function(){
        tau.noninfo <- 1.0E-4
        tau.err <- 1.0E+4
        for(i in 1:n.trees){
            Grow[i] ~ dnorm(log.Rgr[i], tau.err)
            log.Rgr[i] <- (beta[1] + asp[Species[i]]) +
                (beta[2] + bsp[Species[i]]) * temp[i] +
                (beta[3] + csp[Species[i]]) * sqtemp[i] +
                (beta[4] + dsp[Species[i]]) * rain[i] +
                (beta[5] + esp[Species[i]]) * sqrain[i] +
                (beta[6] + fsp[Species[i]]) * log(DBH[i]) +
                (beta[7] + gsp[Species[i]]) * CI[i]
        }
        for(i in 1:n.beta){
        beta[i] ~ dnorm(0.0, tau.noninfo)
        }
        for(j in 1:n.species){
            asp[j] ~ dnorm(0.0, tau[1])
            bsp[j] ~ dnorm(0.0, tau[2])
            csp[j] ~ dnorm(0.0, tau[3])
            dsp[j] ~ dnorm(0.0, tau[4])
            esp[j] ~ dnorm(0.0, tau[5])
            fsp[j] ~ dnorm(0.0, tau[6])
            gsp[j] ~ dnorm(0.0, tau[7])
        }
        for(k in 1:n.tau){
            tau[k] <- pow(sigma[k], -2)
            sigma[k] ~ dunif(0,100)
        }
    }
    
    # Data
    list.data <- list("Grow" = d$grow,
                      "DBH" = d$DBH1,
                      "Species" = d$spc,
                      "sqtemp" = (d$mtemp)*(d$mtemp),
                      "temp" = d$mtemp,
                      "rain" = d$rain,
                      "sqrain" = (d$rain)*(d$rain),
                      "CI" = d$CI5,
                      "n.trees" = nrow(d),
                      "n.beta" = 7,
                      "n.species" = length(unique(d$spc)),
                      "n.tau" = 7)
    
    # Initialization
    # 
    list.inits <- function(){
        list("beta" = rep(0, 7),
             "sigma" = runif(list.data$n.tau, 0, 100),
             "asp" = rep(0, list.data$n.species),
             "bsp" = rep(0, list.data$n.species),
             "csp" = rep(0, list.data$n.species),
             "dsp" = rep(0, list.data$n.species),
             "esp" = rep(0, list.data$n.species),
             "fsp" = rep(0, list.data$n.species),
             "gsp" = rep(0, list.data$n.species))
    }
    
    # Parameters to save
    params <- c("beta", "sigma",
                "asp", "bsp", "csp", "dsp", "esp", "fsp", "gsp")
    
    # Run jags
    library(R2jags)
    set.seed(123)
    grow.jags <- jags(data = list.data, inits=list.inits, params,
                      n.iter = 50000, n.burnin = 10000, n.chains = 3,
                      model.file = grow.model)
    
     
    • Martyn Plummer

      Martyn Plummer - 2016-02-23

      Well, your outcome variables Grow[i] have a precision of tau.err = 1.0E+4, i.e. a standard deviation of 0.01, so that might have something to do with it.
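
      For reference, the precision-to-standard-deviation conversion (precision is 1/variance in the BUGS language):

      1/sqrt(1.0e4)   # 0.01: the residual sd implied by tau.err = 1.0E+4
      1/0.5^2         # 4: the precision corresponding to a residual sd of 0.5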

       
      • Anonymous - 2016-02-29

        Thank you very much for the help!
        If somebody comes across the same problem, this code works in my case:

        grow.model <- function(){
            tau.noninfo <- 1.0E-4
            Hyper.gamma <- 1.0E-2
            for(i in 1:n.trees){
                Grow[i] ~ dnorm(log.Rgr[i], tau.err)
                log.Rgr[i] <- beta[1] + gamma[1, Species[i]] +
                    (beta[2] + gamma[2, Species[i]]) * temp[i] +
                    (beta[3] + gamma[3, Species[i]]) * rain[i] +
                    (beta[4] + gamma[4, Species[i]]) * sqtemp[i] +
                    (beta[5] + gamma[5, Species[i]]) * sqrain[i] +
                    (beta[6] + gamma[6, Species[i]]) * log(DBH[i]) +
                    (beta[7] + gamma[7, Species[i]]) * CI[i]
            }
            for(i in 1:n.beta){
                beta[i] ~ dnorm(0.0, tau.noninfo)
            }
            for(i in 1:n.gamma){
                for(j in 1:n.species){
                    gamma[i, j] ~ dnorm(0.0, tau.gamma[i])
                }
                tau.gamma[i] ~ dgamma(Hyper.gamma, Hyper.gamma)
            }
            tau.err ~ dgamma(Hyper.gamma, Hyper.gamma)
        }

         
  • Anne Bjorkman

    Anne Bjorkman - 2016-08-25

    Hi all,

    I am having the same problem but with the odd (to me at least) difference that the node generating the error is in my response variable. If I run the model for fewer than 2000 iterations, it works fine, but more than that and I get the error:

    "Error in node yB[3640], Failure to calculate log density"

    It's particularly strange because I have run the exact same model with two other response variables and both worked fine, so it must be something specific to these data. But when I look at the data point at yB[3640] it looks totally normal, and if I make yB[3640] "NA" the error just moves to the next node (yB[3641]). Any idea what could be going on here? I would be very grateful for any help.

    Here is a simplified version of the model code:

    model{
    
      # priors
    
      tau.blocB <- 1/(sigma.blocB*sigma.blocB)
      sigma.blocB ~ dunif(0,50)
    
      for (i in 1:nspecies){
        sigma.allB[i]~dunif(0,100)
      }
    
      for (i in 1:nspeciesB){
        Traitspecies[i] ~ dnorm(logpriormean[i],logpriorprecision[i])T(0,) #truncated normal so values can't go negative
      }
    
      #likelihood
    
      for (i in 1:nB){
        yB[i] ~ dgamma(alphaB[i],betaB[i])
        alphaB[i] <- muB[i]^2/sigma.allB[speciesB[i]]^2
        betaB[i] <- muB[i]/sigma.allB[speciesB[i]]^2
        log(muB[i]) <- alocB[location.locB[i]]
      }
    
      for (i in 1:nspplocB){
        alocB[i] ~ dnorm(Traitspecies[species.locB[i]],tau.blocB)
      }
    
    }
    
     
    Last edit: Martyn Plummer 2016-08-25
    • Martyn Plummer

      Martyn Plummer - 2016-08-25

      I'll repeat some advice I gave earlier in the thread.

      This error occurs when the function that calculates the log density returns NaN (not a number). If the error is only occurring in long runs then it suggests that the Markov chain is moving into a part of the parameter space where the density cannot be evaluated. You should look to the prior distributions to stop this from happening.

      Instead of using reference priors, try putting strong biologically plausible priors on your parameters. Try simulating from a model with strong priors to see what the data looks like.

       
      • Anne Bjorkman

        Anne Bjorkman - 2016-08-26

        Hi Martyn,

        Thanks for your response. I did read your previous comment prior to posting, but I already have biologically plausible priors on "Traitspecies". The priors vary depending on the number of observations for each species: species with many observations have flat priors so that the posterior is estimated primarily from the data, while species with few observations have relatively narrow priors (with a mean and precision based on a combination of species- and genus-level information). All are truncated at 0 because the values cannot be negative.

        It's confusing to me that the error is in the response variable, because I (as far as I know) can't put a prior on that to prevent it from moving to an unevaluable part of the parameter space. I tried playing around with the sigma priors but that didn't help. I also tried drastically narrowing the priors on Traitspecies but still got the same error at the same node. Do you have any other insights into which priors in particular might be problematic, or other things that could be causing the problem? I very much appreciate your help.

         
      • Anne Bjorkman

        Anne Bjorkman - 2016-09-19

        Dear Martyn and all,

        I have unfortunately still not resolved the issue I posted about above. I have tried putting stronger priors on all Traitspecies (intercept) parameters, but still got the same error about "failure to calculate log density". I also tried setting a VERY narrow prior on Traitspecies for just the species associated with the observation at line 3640, where the error occurs, but then I get a "node inconsistent with parents" error. I have tried to find a sweet spot between these two errors by adjusting the strength of that prior, but I always get either one or the other error. Setting the stronger priors does allow me to run the model for 10,000 iterations without getting the error (before I got the error after just 2,000 iterations), but I still get it when I try to run for 20,000 iterations. (The model is almost converged by 10,000 but not quite).

        Any additional help or hints would be greatly appreciated, as I am very stuck and don't know how to proceed from here.

         
  • Martyn Plummer

    Martyn Plummer - 2016-09-20

    One thing I noticed about your model is that you have a constraint on Traitspecies to ensure that it is positive.

    Traitspecies[i] ~ dnorm(logpriormean[i],logpriorprecision[i])T(0,)
    

    But then you use a log link for muB[i]. The log link will ensure muB[i] > 0 regardless of the value of Traitspecies[i]. The constraint seems unnecessary and might be keeping the Markov chain out of a region of the parameter space it needs to be in.

    A second issue is that you have a separate standard deviation for each species:

    for (i in 1:nspecies){
          sigma.allB[i]~dunif(0,100)
    }
    

    This is fine if you have lots of replication for each species, but if one species has only 1 or 2 observations then it is a source of instability in the model, as you do not have enough replication to estimate sigma.allB[i]. Try simplifying the model to one with a common variance component, i.e.

    sigma.allB.common ~ dunif(0, 100)
    for (i in 1:nspecies){
          sigma.allB[i] <- sigma.allB.common
    }
    

    just to see if this model fits.

     
    • Anne Bjorkman

      Anne Bjorkman - 2016-10-04

      Thanks very much for your continued help with this. I think I have finally figured out what is causing the problem. The error appears to occur for species that have only 4 observations (my cutoff for the minimum number of observations) AND two of the four observations are identical, such that there are only 3 unique observations for that species.

      I'm not sure why this is a problem, but when I eventually removed the species associated with the line at which the error was occurring, the model got caught on another line, for a different species which also had the same data issue (4 observations but only 3 unique). Removing that species too fixed the problem, and then the model runs fine.

      I can't say for sure that this is what was causing the problem, but since it happened twice it seems likely. Perhaps your comment about estimating a sigma per species is related to this, since estimating a sigma on only 3 unique values could be problematic? I have not yet tried keeping those 2 species in the dataset and using the common sigma, as you suggested, but that will be my next step.
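
      For anyone hitting the same thing, one quick way to flag such species in R (the column names here are placeholders for your own data frame):

      n.obs  <- tapply(dat$yB, dat$species, length)
      n.uniq <- tapply(dat$yB, dat$species, function(x) length(unique(x)))
      names(n.obs)[n.obs <= 4 & n.uniq < n.obs]   # species with few, partly duplicated observations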

       
