Hi,

I need to speed up, if possible, a model containing GLMs.

I've noticed that the syntax may influence the time of

convergence. Here is an example of the syntax I used:

for (ind in 1:n) {
  X1[ind] ~ dbern(prm.X1[1])
  mean.X2[ind] <- exp(prm.X2[1] + prm.X2[2]*X1[ind])
  X2[ind] ~ dgamma(prm.X2[3], prm.X2[3]/mean.X2[ind])
  theta.X3[ind] <- 1/(1 + exp(-prm.X3[1] - prm.X3[2]*X2[ind]))
  X3[ind] ~ dbern(theta.X3[ind])
}
prm.X1[1] ~ dbeta(1, 1)
prm.X2[1] ~ dnorm(0, 0.0001)
prm.X2[2] ~ dnorm(0, 0.0001)
prm.X2[3] ~ dgamma(0.0001, 0.0001)
prm.X3[1] ~ dnorm(0, 0.0001)
prm.X3[2] ~ dnorm(0, 0.0001)
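In particular, I am wondering whether writing the linear predictors with the link-function syntax would make a difference. As a sketch (assuming the log() and logit() link functions available on the left-hand side in BUGS/JAGS), the two deterministic lines above could equivalently be written as:

  log(mean.X2[ind]) <- prm.X2[1] + prm.X2[2]*X1[ind]
  logit(theta.X3[ind]) <- prm.X3[1] + prm.X3[2]*X2[ind]

Would one of these two forms be expected to converge faster than the other?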

Is this an efficient formulation?

Thanks in advance,

Alessandro