JAGS for Meta-Analysis Question

  • Nate
    2014-04-18

    Hi everyone,

    I'm using JAGS to run a meta-analysis and I've run into an issue. I have calculated log-response ratios and their errors for about 177 studies, where each study falls into one of 8 groups. I'm interested in estimating an overall effect as well as an effect for each group. I've discovered that if I use the model

    model{
      for(i in 1:N){
        RR[i] ~ dnorm(yhat[i], prec_y[i])          # observed log-response ratio
        yhat[i] <- mu + gamma[type[i]]             # deterministic: no study-level effect
      }

      for(j in 1:J){
        gamma[j] ~ dnorm(gamma_mu, gamma_prec)
        gamma_true[j] <- gamma[j] - mean(gamma[])  # centred group effects
      }

      mu_true <- mu + mean(gamma[])                # identifiable overall effect

      mu ~ dnorm(0, 0.001)
      gamma_mu ~ dnorm(0, 0.001)
      gamma_prec ~ dgamma(0.01, 0.01)
    }
    

    then the model doesn't fit my data well, because the estimate for Plant-Mych is just way off:
    [attached boxplot: "not working"]
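
    (For context, the data list passed to JAGS contains RR, prec_y, type, N and J; the sketch below is only illustrative, with hypothetical column names and the standard delta-method variance for a log-response ratio.)

    d <- read.csv("studies.csv")   # hypothetical file: one row per study

    # log-response ratio and its sampling variance (delta method):
    RR <- log(d$xbar_t / d$xbar_c)
    v  <- d$sd_t^2 / (d$n_t * d$xbar_t^2) + d$sd_c^2 / (d$n_c * d$xbar_c^2)

    jags_data <- list(
      RR     = RR,
      prec_y = 1 / v,                        # JAGS uses precisions, not variances
      type   = as.integer(factor(d$group)),  # group index 1..J
      N      = length(RR),
      J      = nlevels(factor(d$group))      # J = 8 groups here
    )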

    Alternatively, if I use the model:

    model{
      for(i in 1:N){
        RR[i] ~ dnorm(yhat[i], prec_y[i])                     # observed log-response ratio
        yhat[i] ~ dnorm(gamma[type[i]], gamma_prec[type[i]])  # stochastic: study-level random effect
      }

      for(j in 1:J){
        gamma[j] ~ dnorm(overall_mu, mu_prec)                 # group means shrink toward overall effect
        gamma_prec[j] <- pow(gamma_sigma[j], -2)
        gamma_sigma[j] ~ dunif(0, 100)                        # group-specific between-study SD
      }

      overall_mu ~ dnorm(0, 0.001)
      mu_prec ~ dgamma(0.01, 0.01)
    }
    

    then it fits much better:
    [attached boxplot: "working"]

    Is the fundamental difference that, in the first bad model, the groups are fixed effects whereas in the second model the groups are random? If so, why does this make such a big difference? Is there a way to code the first model so that it fits better?

    I appreciate the help!

    Nate

  • Martyn Plummer
    2014-05-21

    Sorry, I don't always have time to answer general modelling questions in a timely manner. You might also try the BUGS mailing list (bugs@jiscmail.ac.uk) in such cases.

    Anyway, you are on the right track. Your second model is better because it includes random effects. There is more variation between studies than is suggested by their precision estimates (prec_y[i]), so you need to account for this with study-specific random effects. From your boxplots, it looks like the variation depends on the group, so you are right to make the variance of the random effect depend on the group.
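
    As a quick check, Cochran's Q compares the observed spread within each group to the spread implied by prec_y alone; a rough sketch, assuming RR, prec_y and type are already in your R workspace:

    for (j in sort(unique(type))) {
      y <- RR[type == j]
      w <- prec_y[type == j]
      ybar <- sum(w * y) / sum(w)        # precision-weighted group mean
      Q    <- sum(w * (y - ybar)^2)      # ~ chi-squared on k-1 df if prec_y suffices
      k    <- length(y)
      I2   <- max(0, (Q - (k - 1)) / Q)  # share of variation beyond sampling error
      cat(sprintf("group %d: Q = %.1f on %d df, I2 = %.0f%%\n", j, Q, k - 1, 100 * I2))
    }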

    I would simplify the model a little bit by removing the hierarchical prior on gamma[]. With only J=8 groups, you don't have many degrees of freedom to estimate inter-group variance, so you may as well model these as fixed effects.

    model{
      for(i in 1:N){
        RR[i] ~ dnorm(yhat[i], prec_y[i])
        yhat[i] ~ dnorm(gamma[type[i]], gamma_prec[type[i]])
      }

      for(j in 1:J){
        gamma[j] ~ dnorm(0, 1.0E-3)               # vague independent priors: fixed effects
        gamma_prec[j] <- pow(gamma_sigma[j], -2)
        gamma_sigma[j] ~ dunif(0, 100)
      }
    }
    

    Note that the Markov chain may not mix very well with this centered parameterization. It is more usual to separate out the random effects as shown below:

    model{
      for(i in 1:N){
        RR[i] ~ dnorm(yhat[i], prec_y[i])
        yhat[i] <- gamma[type[i]] + eps[i]
        eps[i] ~ dnorm(0, eps_prec[type[i]])
      }

      for(j in 1:J){
        gamma[j] ~ dnorm(0, 1.0E-3)
        eps_prec[j] <- pow(eps_sigma[j], -2)  # equivalent to gamma_prec
        eps_sigma[j] ~ dunif(0, 100)          # equivalent to gamma_sigma
      }
    }
    

    If you load the "glm" module before compiling your model then you should get block updating of gamma and eps with good mixing.
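
    For example, in R with rjags (a minimal sketch; the model file name, data list, and iteration counts are placeholders):

    library(rjags)
    load.module("glm")            # must be loaded before jags.model()

    jm <- jags.model("model.bug", # the model above, saved to a file
                     data = jags_data,  # list with RR, prec_y, type, N, J
                     n.chains = 3)
    update(jm, 5000)              # burn-in
    samp <- coda.samples(jm, c("gamma", "eps_sigma"), n.iter = 20000)
    summary(samp)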