Hi everyone,

I'm using JAGS to run a meta-analysis and I've run into an issue. I have calculated log response ratios and their errors for about 177 studies, where each study falls into one of 8 groups. I'm interested in estimating an overall effect as well as an effect for each group. I've discovered that if I use the model

~~~~~~~
model{
  for(i in 1:N){
    RR[i] ~ dnorm(yhat[i], prec_y[i])
    yhat[i] <- mu + gamma[type[i]]
  }
  for(j in 1:J){
    gamma[j] ~ dnorm(gamma_mu, gamma_prec)
    gamma_true[j] <- gamma[j] - mean(gamma[])
  }
  mu_true <- mu + mean(gamma[])
  mu ~ dnorm(0, 0.001)
  mu_prec ~ dgamma(0.01, 0.01)
  gamma_mu ~ dnorm(0, 0.001)
  gamma_prec ~ dgamma(0.01, 0.01)
}
~~~~~~~

then the model doesn't fit my data well, because the estimate for Plant-Mych is just way off:

![not working](http://www.natelemoine.com/not_working.png)

Alternatively, if I use the model:

~~~~~~~
model{
  for(i in 1:N){
    RR[i] ~ dnorm(yhat[i], prec_y[i])
    yhat[i] ~ dnorm(gamma[type[i]], gamma_prec[type[i]])
  }
  for(j in 1:J){
    gamma[j] ~ dnorm(overall_mu, mu_prec)
    gamma_prec[j] <- pow(gamma_sigma[j], -2)
    gamma_sigma[j] ~ dunif(0, 100)
  }
  overall_mu ~ dnorm(0, 0.001)
  mu_prec ~ dgamma(0.01, 0.01)
}
~~~~~~~

then it fits much better
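(For completeness: the `RR[i]` and `prec_y[i]` data fed to both models are log response ratios and their precisions. I'm assuming the standard large-sample variance formula here — a rough Python sketch with made-up numbers, in case my inputs are part of the problem:)

```python
import math

# Log response ratio and its sampling precision for one study, using the
# standard large-sample variance formula (Hedges, Gurevitch & Curtis 1999).
# All numbers below are made up for illustration.
def log_response_ratio(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    lrr = math.log(mean_t / mean_c)
    var = sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2)
    return lrr, 1.0 / var  # JAGS dnorm takes a precision, not a variance

rr, prec = log_response_ratio(12.0, 3.0, 10, 9.0, 2.5, 10)
```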

Is the fundamental difference that, in the first (bad) model, the groups are fixed effects, whereas in the second model they are random effects? If so, why does this make such a big difference? Is there a way to code the first model so that it fits better?
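For context on why I included the mean-centering lines (`gamma_true`, `mu_true`) in the first model: the likelihood only ever sees `mu + gamma[type[i]]`, so `mu` and the gammas are identified only up to an additive constant. A quick numpy check of that (all numbers made up):

```python
import numpy as np

# In yhat[i] = mu + gamma[type[i]], shifting mu by any constant c and all
# gammas by -c leaves every yhat unchanged, so the data alone cannot
# separate mu from the overall level of the gammas.
rng = np.random.default_rng(1)
J, N = 8, 177                      # groups and studies, as in my data
mu = 0.5
gamma = rng.normal(0.0, 0.3, J)
grp = rng.integers(0, J, N)        # plays the role of type[i] in JAGS

yhat = mu + gamma[grp]

c = 1.7                            # arbitrary shift
yhat_shifted = (mu + c) + (gamma - c)[grp]

print(np.allclose(yhat, yhat_shifted))  # True: identical means either way
```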

I appreciate the help!

Nate