Hi All,

Over on the Stan list there has been some discussion of speeding up models by using what they call the 'Matt trick.' I have been trying to implement it in JAGS, but I keep getting strange results. To be clear up front, I'm working with a logistic-binomial model, which is why the priors look a little different from what you might expect in a model with a continuous outcome.

Let's start with a simple random intercept model with a random intercept term u defined as follows:

  for (g in 1:G) {
    u[g] ~ dnorm(u0, sigma2inv)
  }
  u0 ~ dnorm(0, 0.01)
  sigma ~ dunif(0, 10)
  sigma2inv <- pow(sigma, -2)

This is a pretty standard way of writing out the model. Assuming that I am correctly translating from Stan to JAGS, the Matt trick refers to the idea of rewriting the model above like so:

  for (g in 1:G) {
    u[g] <- u0 + e_u[g] * sigma
    e_u[g] ~ dnorm(0, 1)
  }
  u0 ~ dnorm(0, 0.01)
  sigma ~ dunif(0, 10)

Even if my code here is wrong, you should still be able to get a sense of what I'm trying to do. As I understand it, the idea behind this parameterization is to make sampling more efficient by breaking the posterior correlation between e_u on the one hand and u0 and sigma on the other.
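For what it's worth, the two parameterizations should define the same prior on u: drawing u directly from N(u0, sigma) is distributionally identical to drawing e_u from N(0, 1) and setting u = u0 + sigma * e_u. A quick numerical check in Python (NumPy; the values u0 = 2 and sigma = 0.5 are just made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
u0, sigma, n = 2.0, 0.5, 1_000_000

# Centered: draw u directly from N(u0, sigma)
u_centered = rng.normal(u0, sigma, n)

# Non-centered ("Matt trick"): draw standard normals, then shift and scale
e_u = rng.normal(0.0, 1.0, n)
u_noncentered = u0 + e_u * sigma

# Both samples should have (approximately) the same mean and sd
print(u_centered.mean(), u_noncentered.mean())  # both close to 2.0
print(u_centered.std(), u_noncentered.std())    # both close to 0.5
```

So if the JAGS code above is a faithful translation, any difference in the fits has to come from the sampler, not from the model itself.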

When I run the model in JAGS using the above parameterization, I find that while I get good estimates of u0 and bx, I get totally off-the-wall estimates for sigma. I'm not sure whether it is a coding error on my part or whether this trick doesn't carry over to JAGS for some reason. I'm using the glm module. Is it possible that the glm-specific routine might not play well with the Matt trick?

My full model is as follows:

model {
  for (i in 1:N) {
    xb[i] <- bx * x[i] + u[group[i]]
    logit(p[i]) <- xb[i]
    y[i] ~ dbin(p[i], trials[i])
  }
  for (g in 1:G) {
    u[g] <- u0 + e_u[g] * sigma
    e_u[g] ~ dnorm(0, 1)
  }
  u0 ~ dnorm(0, 0.01)
  sigma ~ dunif(0, 10)
  bx ~ dnorm(0, 0.01)
}
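In case it helps anyone reproduce the problem, one way to separate a coding error from a sampler issue is to fit the model to simulated data with known parameters and see whether sigma is recovered. Here is a minimal Python sketch of the generative process above; all the "true" values and sizes are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" parameters and design
bx_true, u0_true, sigma_true = 1.0, -0.5, 0.8
G, n_per_group, trials = 30, 50, 10

N = G * n_per_group
group = np.repeat(np.arange(G), n_per_group)  # group index for each observation
x = rng.normal(size=N)

# Non-centered group effects: u[g] = u0 + sigma * e_u[g], e_u ~ N(0, 1)
e_u = rng.normal(size=G)
u = u0_true + sigma_true * e_u

# Logistic-binomial likelihood
xb = bx_true * x + u[group]
p = 1.0 / (1.0 + np.exp(-xb))  # inverse logit
y = rng.binomial(trials, p)
```

Feeding y, x, group, and trials to the JAGS model should then give posteriors centered near the true values if the code is right; if sigma still comes back off-the-wall on simulated data, that points at the sampler (or the glm module) rather than the data.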

Any thoughts would be much appreciated.

Adam