screenager83
2013-10-12
Hi,
I need to speed up, if possible, a model containing GLMs.
I've noticed that the way the model is written can influence
convergence time. Here is an example of the syntax I used:
for (ind in 1:n) {
    X1[ind] ~ dbern(prm.X1[1])
    mean.X2[ind] <- exp(prm.X2[1] + prm.X2[2]*X1[ind])
    X2[ind] ~ dgamma(prm.X2[3], prm.X2[3]/mean.X2[ind])
    theta.X3[ind] <- 1/(1 + exp(-prm.X3[1] - prm.X3[2]*X2[ind]))
    X3[ind] ~ dbern(theta.X3[ind])
}
prm.X1[1] ~ dbeta(1,1)
prm.X2[1] ~ dnorm(0,0.0001)
prm.X2[2] ~ dnorm(0,0.0001)
prm.X2[3] ~ dgamma(0.0001,0.0001)
prm.X3[1] ~ dnorm(0,0.0001)
prm.X3[2] ~ dnorm(0,0.0001)
Is this an efficient formulation?
Thanks in advance,
Alessandro
Martyn Plummer
2013-10-18
You can speed up the Bernoulli model by using the appropriate link function instead of constructing the link yourself:
logit(theta.X3[ind]) <- prm.X3[1] + prm.X3[2]*X2[ind]
Then if you load the "glm" module you should get an efficient block sampler for the two coefficients.
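For illustration, here is a sketch (not from the original thread) of how the Bernoulli part of the model might look with the built-in link function, keeping the poster's variable names and assuming the rest of the model is unchanged:

```
model {
    for (ind in 1:n) {
        # logit() link lets JAGS recognize this as a logistic regression
        logit(theta.X3[ind]) <- prm.X3[1] + prm.X3[2]*X2[ind]
        X3[ind] ~ dbern(theta.X3[ind])
    }
    prm.X3[1] ~ dnorm(0, 0.0001)
    prm.X3[2] ~ dnorm(0, 0.0001)
}
```

With the glm module loaded (in rjags, call load.module("glm") before jags.model()), the two coefficients prm.X3[1] and prm.X3[2] can then be updated jointly by a block sampler rather than one at a time.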
For the gamma part, there is no improvement currently available. As your code shows, the gamma distribution is not parameterized in terms of the mean and dispersion, so it is hard for JAGS to recognize it as a GLM. Nor do we currently have a general-purpose algorithm (such as HMC) that can handle gamma GLMs.