I have a question about fitting hierarchical models using lmer vs JAGS.
I first created a fake dataset using the function below, then fit the model with lmer, and fit the same model in JAGS. Both lmer and JAGS recover the variance components correctly, but they disagree on the varying-intercept/varying-slope correlations: lmer returns much higher estimates (in fact, lmer fails to estimate the correlations at all), while JAGS consistently underestimates the true value (set to 0.6 for both the by-subject and by-item variance components).
Why does JAGS underestimate the correlations? Is it simply because of the uniform(-1,1) priors? (The same thing happens if I instead use the priors, commented out below, recommended in an unpublished paper on Gelman's home page, Chung et al. 2013.)
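For reference, the prior setup in question looks roughly like this inside the JAGS model (a sketch with assumed node names and SD bounds, not the original model; the Chung et al. 2013 alternative would replace the dunif lines):

```
# By-subject variance components with a flat prior on the correlation
rho.u ~ dunif(-1, 1)       # uniform(-1,1) prior on intercept-slope correlation
sigma.u0 ~ dunif(0, 100)   # intercept SD (assumed bounds)
sigma.u1 ~ dunif(0, 100)   # slope SD
Sigma.u[1,1] <- sigma.u0^2
Sigma.u[2,2] <- sigma.u1^2
Sigma.u[1,2] <- rho.u * sigma.u0 * sigma.u1
Sigma.u[2,1] <- Sigma.u[1,2]
Omega.u[1:2,1:2] <- inverse(Sigma.u[,])   # JAGS dmnorm takes a precision matrix
for (j in 1:nsubj) {
  u[j,1:2] ~ dmnorm(zero[1:2], Omega.u[,])
}
```

The by-item block would be built the same way with its own rho, SDs, and precision matrix.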
Here is a minimum working example:
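A minimal sketch of the simulation-and-fit setup described above (function and variable names such as `gen_fake`, the design sizes, and the exact formula are assumptions, not the original code; only the true correlation of 0.6 is taken from the question):

```r
library(MASS)  # for mvrnorm

# Simulate data with correlated by-subject and by-item
# varying intercepts and slopes (true correlation rho = 0.6).
gen_fake <- function(nsubj = 20, nitem = 16,
                     beta = c(400, 50),   # fixed intercept and slope (assumed)
                     sdev = c(30, 10),    # by-subject intercept/slope SDs (assumed)
                     idev = c(25, 8),     # by-item intercept/slope SDs (assumed)
                     rho = 0.6,           # true intercept-slope correlation
                     sigma = 40) {        # residual SD (assumed)
  R <- matrix(c(1, rho, rho, 1), 2)
  u <- mvrnorm(nsubj, c(0, 0), diag(sdev) %*% R %*% diag(sdev))
  w <- mvrnorm(nitem, c(0, 0), diag(idev) %*% R %*% diag(idev))
  d <- expand.grid(subj = 1:nsubj, item = 1:nitem)
  d$x <- rep(c(-0.5, 0.5), length.out = nrow(d))  # centered predictor
  d$y <- beta[1] + u[d$subj, 1] + w[d$item, 1] +
         (beta[2] + u[d$subj, 2] + w[d$item, 2]) * d$x +
         rnorm(nrow(d), 0, sigma)
  d
}

set.seed(1)
dat <- gen_fake()

# Maximal varying intercept/slope model, as described in the question
if (requireNamespace("lme4", quietly = TRUE)) {
  m <- lme4::lmer(y ~ x + (1 + x | subj) + (1 + x | item), data = dat)
  print(lme4::VarCorr(m))  # inspect SDs and the intercept-slope correlations
}
```

The JAGS fit would then pass `dat`, the subject and item indices, and `x` as data to a model with correlated bivariate-normal random effects for subjects and items.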