Ooops! There is a minus sign missing in the likelihood calculations. Fixed now.
When data come from a dunif distribution, version 3.2.0 cannot produce proper posteriors for the limits. As an example:
I simulated some data in R from a Uniform(0.7, 1.0) distribution (the observed minimum was 0.7055463 and the maximum 0.9738997). I then used a couple of models to see if I could estimate the minimum of the original population.
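The original simulation was done in R with runif; a minimal Python equivalent is sketched below. The sample size is not given in the post, so n = 100 is an assumption, and the seed is arbitrary, so the sample min/max will not reproduce the values quoted above.

```python
import random

random.seed(1)  # arbitrary seed; the post's exact data cannot be reproduced

# n.animals is not stated in the post; n = 100 is an assumption
n = 100
time = [random.uniform(0.7, 1.0) for _ in range(n)]

print(min(time), max(time))
```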
The first model:
model {
  for (i in 1:n.animals) {
    time[i] ~ dunif(lower, 1)
  }
  # priors
  lower ~ dbeta(1, 1)
}
After a 4000-iteration burn-in and a 10000-iteration sample I get:
lower
Min. :2.964e-08
1st Qu.:2.945e-03
Median :7.237e-03
Mean :1.026e-02
3rd Qu.:1.460e-02
Max. :9.867e-02
with no evidence of non-convergence. Note that a similar model worked with an earlier version of JAGS (around 1.0.3, I think); I'm using 3.2.0 at the moment.
Assuming that the problem is that dunif in JAGS contributes a constant 1 to the likelihood for all values between min and max (which is fine when the limits are fixed, but not when we want to estimate them), I tried to "push" the value higher using the prior.
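For contrast, the correct Uniform(lower, 1) log-likelihood for n observations is -n*log(1 - lower) whenever lower is at or below the smallest observation (and -inf otherwise). This is increasing in lower, so the MLE is the sample minimum - which is why a flat likelihood contribution from dunif leaves the posterior driven entirely by the prior. A sketch (the data values other than the quoted min and max are made up for illustration):

```python
import math

def uniform_loglik(lower, data, upper=1.0):
    """Log-likelihood of Uniform(lower, upper); -inf if any point lies outside."""
    if lower >= upper or any(not (lower <= x <= upper) for x in data):
        return float("-inf")
    return -len(data) * math.log(upper - lower)

# only the min and max below come from the post; the middle values are invented
data = [0.7055463, 0.81, 0.88, 0.9738997]

candidates = [0.1, 0.4, 0.6, 0.7, 0.7055463]
best = max(candidates, key=lambda l: uniform_loglik(l, data))
print(best)  # the sample minimum maximizes the likelihood
```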
With prior dbeta(2,1)
lower
Min. :0.0001257
1st Qu.:0.0095245
Median :0.0163108
Mean :0.0192381
3rd Qu.:0.0261746
Max. :0.0858875
I.e. a slight improvement. With prior dbeta(5,1)
lower
Min. :0.003598
1st Qu.:0.031985
Median :0.044491
Mean :0.047283
3rd Qu.:0.059931
Max. :0.157737
I.e. definite improvement but still nowhere near right.
So I tried a completely different approach:
model {
  for (i in 1:n.animals) {
    # ones[i] is a vector of 1s supplied as data (the "ones trick")
    ones[i] ~ dbern(step(time[i] - lower) * (1 - step(time[i] - 1)) * a)
  }
  a <- 0.1 / (1 - lower)  # keeps the probability <= 1 for lower <= 0.9
  # priors
  lower ~ dbeta(1, 1)
}
Again a 4000-iteration burn-in and 10000 iterations of sampling. I now get:
lower
Min. :0.6793
1st Qu.:0.7013
Median :0.7035
Mean :0.7025
3rd Qu.:0.7047
Max. :0.7055
Remember that 0.7055 is the minimum value in the data, so it is the largest value the lower bound can take - the MLE. This approach, then, appears to work.
Hope this is clear,
Giles