Hi,

I was trying to give a really basic example during a lecture, but I found some surprising differences between the estimates provided by JAGS, WinBUGS and the glm function in R. WinBUGS and the glm function give the same results, but JAGS gives different estimates.

Here's the example with a simple Poisson regression:

set.seed(24)
library(rjags)
load.module("glm")

# Generate count data using 2 covariates
nsites <- 200
alpha0 <- 1
alpha1 <- 2
alpha2 <- 3
A <- runif(n = nsites, -1, 1)   # Site covariate 1
B <- runif(n = nsites, -1, 1)   # Site covariate 2
lam <- exp(alpha0 + alpha1*A + alpha2*B)
y <- rpois(n = nsites, lambda = lam)

# Analyze with the glm function (without covariate B)
m <- glm(y ~ A, family = poisson)

# Define the same model in the BUGS language
cat(file = "Poisson_GLM.txt","
model {
  # Priors
  alpha0 ~ dnorm(0, 0.0001)
  alpha1 ~ dnorm(0, 0.0001)
  # Likelihood
  for (i in 1:R){
    y[i] ~ dpois(lambda[i])
    log(lambda[i]) <- alpha0 + alpha1 * A[i]
  }
}
")

# Bundle data
jags.data <- list(y = y, R = nsites, A = A)

# Initial values
inits <- function() list(alpha0 = rnorm(1, 0, 10), alpha1 = rnorm(1, 0, 10))

# Parameters monitored
params <- c("alpha0", "alpha1")

# MCMC settings
ni <- 10000 ; nt <- 1 ; nb <- 2000 ; nc <- 3

# Create a JAGS model object
fit <- jags.model(file = "Poisson_GLM.txt", data = jags.data, inits = inits,
                  n.chains = nc)

# Burnin
update(fit, nb)

# Generate posterior samples
samples <- coda.samples(model = fit, variable.names = params,
                        n.iter = ni - nb, thin = nt)

# Convergence looks good
plot(samples)

# Compute posterior summaries
summfit <- summary(samples)

# Compare estimates of the glm function with estimates of JAGS (not similar)
m$coef
summfit$statistics[,"Mean"]

# Fit the model in WinBUGS
library(R2WinBUGS)
out <- bugs(data = jags.data, parameters.to.save = params,
            model.file = "Poisson_GLM.txt", inits = inits, n.iter = ni,
            n.chains = nc, n.burnin = nb, n.thin = nt,
            bugs.directory = "C:/Program Files (x86)/WinBUGS14")

# Check the estimates (similar to the glm function)
out$mean[1:2]

As you can see, the JAGS estimates seem to be wrong. If I use uniform priors for alpha0 and alpha1 (e.g. dunif(-1000, 1000)), then I get the correct estimates. It looks like the normal priors are influencing the results, but I can't believe that priors with such low precision provide so much information. I also tried different initial values, but that didn't help.
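To back up the "low precision" point, here is a quick check (my own sketch, not part of the example above) of how flat dnorm(0, 0.0001) really is. BUGS/JAGS parameterize the normal by precision (tau), so precision 0.0001 corresponds to a standard deviation of 100:

```r
# BUGS/JAGS dnorm() takes a precision (tau), not a variance or SD
tau <- 0.0001
prior.sd <- 1 / sqrt(tau)   # SD implied by dnorm(0, 0.0001) is 100

# With the true coefficients on the order of 1-3, a N(0, 100^2) prior
# is essentially flat over the plausible range: the prior density at 10
# is almost identical to the density at 0
dnorm(10, 0, prior.sd) / dnorm(0, 0, prior.sd)  # about 0.995
```

So with 200 observations, a prior this diffuse should have a negligible effect on the posterior means, which is why the discrepancy is so puzzling.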

I'm now using R 3.0.3 with JAGS 3.4.0 on a Windows computer (but I also tried with JAGS 3.3.0 and 3.4.0 on a Mac).

Can someone explain what's happening here? Thanks in advance!

Jérôme