
Error message "stuck at value with infinite density"

Forum: Help
Created: 2013-09-24
Last updated: 2013-09-28

• Inga Schwabe
2013-09-24

Dear forum users,

When I run my JAGS model I keep encountering the same error message: 'Slicer stuck at value with infinite density', for the node 'b_star'. I am guessing that the problem has something to do with the prior I am using for this node. However, changing the prior to a narrower or broader one does not solve the problem. The initial value I am using for the node is the same as the simulated value.

A Google search brought me to this very forum. In the thread https://sourceforge.net/p/mcmc-jags/discussion/610037/thread/c21ef62a I read a suggestion to put a restriction on the prior. I tried that, using T(-5,5) and T(-3,3), but that did not solve the problem either.

This is my model:

```
model {

  ## MZ twins
  for (fam in 1:nmz) {
    c.mz[fam] ~ dnorm(0, tau.c)
    f.mz[fam] ~ dnorm(c.mz[fam], tau.a)
    a.mz[fam] <- f.mz[fam] - c.mz[fam]
    tau.e[fam] <- 1/(exp(b_star + (beta*a.mz[fam]) + (beta2*(a.mz[fam]^2))))

    for (twin in 1:2) {
      pheno.mz[fam,twin] ~ dnorm(f.mz[fam], tau.e[fam])
    }

    # 1PL model, twin 1
    for (k in 1:n.items) {
      logit(p[fam,k]) <- pheno.mz[fam,1] - bp[k]
      Ymz[fam,k] ~ dbern(p[fam,k])
    }

    # 1PL model, twin 2
    for (k in (n.items+1):(2*n.items)) {
      logit(p[fam,k]) <- pheno.mz[fam,2] - bp[(k-n.items)]
      Ymz[fam,k] ~ dbern(p[fam,k])
    }
  }

  ## DZ twins
  for (fam in 1:ndz) {
    c.dz[fam] ~ dnorm(0, tau.c)
    f0.dz[fam] ~ dnorm(c.dz[fam], doubletau.a)

    for (twin in 1:2) {
      f.dz[fam,twin] ~ dnorm(f0.dz[fam], doubletau.a)
      a.dz[fam,twin] <- f.dz[fam,twin] - c.dz[fam]
      tau.e.dz[fam,twin] <- 1/(exp(b_star + (beta*a.dz[fam,twin]) +
                                   (beta2*(a.dz[fam,twin]^2))))
      pheno.dz[fam,twin] ~ dnorm(f.dz[fam,twin], tau.e.dz[fam,twin])
    }

    # 1PL model, twin 1 (DZ)
    for (k in 1:n.items) {
      logit(p2[fam,k]) <- pheno.dz[fam,1] - bp[k]
      Ydz[fam,k] ~ dbern(p2[fam,k])
    }

    # 1PL model, twin 2 (DZ)
    for (k in (n.items+1):(2*n.items)) {
      logit(p2[fam,k]) <- pheno.dz[fam,2] - bp[(k-n.items)]
      Ydz[fam,k] ~ dbern(p2[fam,k])
    }
  }

  # Priors
  beta ~ dnorm(0, .2)
  beta2 ~ dnorm(0, .2)
  b_star ~ dnorm(0, .1)
  doubletau.a <- 2*tau.a
  tau.a ~ dgamma(1, 1)
  tau.c ~ dgamma(1, .5)
}
```
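For intuition, the residual precision in the model above is a log-linear function of the additive genetic deviation: tau.e = 1/exp(b_star + beta*a + beta2*a^2). A small Python restatement (purely illustrative, not part of the JAGS model; the function name is my own) shows that with beta = beta2 = 0 the residual variance is constant at exp(b_star):

```python
import math

def residual_variance(a, b_star, beta, beta2):
    # Residual variance implied by the model's link:
    # tau.e = 1 / exp(b_star + beta*a + beta2*a^2), so
    # var.e = exp(b_star + beta*a + beta2*a^2)
    return math.exp(b_star + beta * a + beta2 * a ** 2)

# With beta = beta2 = 0 the variance does not depend on a:
print(residual_variance(1.7, math.log(0.2), 0.0, 0.0))  # approximately 0.2
```

With nonzero beta or beta2 the residual variance grows or shrinks with the genetic deviation a, which is the heteroscedasticity the model is designed to detect.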

I hope someone has an idea or suggestion on how to solve this problem. Any remarks are welcome; I appreciate your help very much!

Thanks,
Inga

Last edit: Inga Schwabe 2013-09-27
• My understanding is that the node b_star, since it has a normal prior, should not have this particular numerical problem. If you show the full error message or provide a reproducible script, it might be easier to offer suggestions.

• Inga Schwabe
2013-09-24

Dear Krzysztof,

The full error message is:
"Error: Error in node b_star
Slicer stuck at value with infinite density"

The error does not always occur, only sometimes. When I run the script many times (e.g. 20), sometimes the error occurs after just one simulation and sometimes after 18.

The prior is indeed normal, but nonetheless my best guess at the moment is that the prior is causing the problem. Changing it does influence how fast the error occurs, so there may actually be a prior that 'fits' better (though this may of course also be due to chance). However, I could not find a prior that prevents the error entirely.

By the way, I am using quite a long burn-in (15,000 iterations) and assessed convergence beforehand, both graphically and by means of the Gelman and Rubin convergence diagnostic.

Inga
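For readers unfamiliar with the sampler named in the error: JAGS updates b_star with a slice sampler, which draws an auxiliary level below the density at the current point and then samples uniformly from the resulting "slice". A minimal univariate sketch in Python (an illustrative stepping-out sampler, not JAGS's actual implementation) shows why an infinite density at the current value breaks the algorithm:

```python
import math
import random

def slice_sample(logf, x0, w=1.0):
    """One update of a stepping-out slice sampler for a univariate
    log-density logf, starting from the current value x0."""
    # Auxiliary slice level: log(u * f(x0)) with u ~ Uniform(0, 1).
    # If logf(x0) is +infinity, this level is also infinite, no finite
    # slice exists, and the sampler can never accept a new point --
    # the situation JAGS reports as "stuck at value with infinite density".
    ly = logf(x0) + math.log(1.0 - random.random())
    # Step out to find an interval containing the slice
    L = x0 - w * random.random()
    R = L + w
    while logf(L) > ly:
        L -= w
    while logf(R) > ly:
        R += w
    # Shrink the interval until a point inside the slice is drawn
    while True:
        x1 = L + (R - L) * random.random()
        if logf(x1) > ly:
            return x1
        if x1 < x0:
            L = x1
        else:
            R = x1
```

Run on a well-behaved log-density (e.g. a standard normal) this recovers the target distribution; the failure mode only appears when the density diverges at the current value, which is why truncating the prior's support does not by itself remove the problem if the divergent point lies inside the bounds.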

• Martyn Plummer
2013-09-25

As Krzysztof says, you should really give us a reproducible example.

Here is a simple model that shares some features of yours and reliably produces the error message "Slicer stuck at value with infinite density". It may shed some light on your problem.

```
data {
  for (i in 1:1000) {
    X[i] <- 0
  }
}
model {
  for (i in 1:1000) {
    X[i] ~ dnorm(0, tau)
  }
  tau <- 1/exp(beta)
  beta ~ dnorm(0, 0.1)
}
```

I've generated a variable X that has zero variance, and I am then trying to model its log-variance, beta, giving it a normal prior.

As the sampler runs, beta goes off towards minus infinity, eventually reaching a point where the log likelihood of X is infinite.

So maybe you are trying to model a variance that is, in fact, zero.
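Martyn's point can be checked numerically. In the sketch below (plain Python, illustrative only), the normal log-density at the mean grows without bound as the standard deviation exp(beta/2) shrinks, which is exactly the "infinite density" the slice sampler runs into when every observation equals the mean:

```python
import math

def normal_logpdf(x, mu, sd):
    # Log-density of a Normal(mu, sd^2) distribution at x
    return (-0.5 * math.log(2 * math.pi) - math.log(sd)
            - 0.5 * ((x - mu) / sd) ** 2)

# In Martyn's example tau = 1/exp(beta), so sd = exp(beta/2).
# With all X[i] == 0, the log-likelihood contribution at the mean
# increases without bound as beta decreases:
for beta in (-1.0, -10.0, -100.0):
    print(beta, normal_logpdf(0.0, 0.0, math.exp(beta / 2)))
```

Nothing in the likelihood pushes back against beta drifting downward when the data have zero variance, so the sampler eventually reaches a point of effectively infinite density.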

• I had assumed that using T() on the b_star parameter would avoid that problem, no?

• Inga Schwabe
2013-09-25

Dear Krzysztof,

Thanks for your suggestion. However, I tried to use T(-3,3) and T(-5,5) on the b_star parameter, but that did not prevent the error message.

• Dear Inga,

Yes, which is why I'm not sure if Martyn's mind-reading reply is quite right. Either that or, more likely, I misunderstand exactly when/how JAGS applies that constraint.

• Inga Schwabe
2013-09-26

Dear Krzysztof,

It is not clear to me either when and how the constraint is applied. I just tried other constraints, very large ones (e.g. T(-5,5)) and very small ones (e.g. T(-2,0)), but no combination worked. I am getting the idea that something in my model is not specified well.

• Inga Schwabe
2013-09-25

Dear Martyn,
Thanks for your answer. The variance as simulated is 0.2.

This is R code that simulates data and runs my JAGS model:

```
library(rjags)
var.a <- 0.5
var.a2 <- var.a * 0.5
var.c <- 0.3
b_star <- log(0.2)
beta <- 0
beta2 <- 0
n.items <- 60
Npairs <- 500

nmz <- round(Npairs * 0.28)
ndz <- Npairs - nmz
b <- 0
c.mz <- rnorm(nmz, b, sqrt(var.c))
f.mz <- rnorm(nmz, c.mz, sqrt(var.a))
a.mz <- f.mz - c.mz
sd.e <- sqrt(exp(b_star + (beta*(a.mz-b)) + (beta2*((a.mz^2)-b))))
pheno.mz <- cbind(rnorm(nmz, f.mz,sd.e),rnorm(nmz, f.mz,sd.e))
c.dz <- rnorm(ndz, b, sqrt(var.c))
f0.dz <- rnorm(ndz, c.dz, sqrt(var.a2))
f.dz.twin1 <- rnorm(ndz, f0.dz, sqrt(var.a2))
f.dz.twin2 <- rnorm(ndz, f0.dz, sqrt(var.a2))
a.dz.twin1 <- f.dz.twin1 - c.dz
a.dz.twin2 <- f.dz.twin2 - c.dz
e.dz.twin1 <- sqrt(exp(b_star+(beta*(a.dz.twin1-b)) + (beta2*((a.dz.twin1^2)-b))))
e.dz.twin2 <- sqrt(exp(b_star+(beta*(a.dz.twin2-b)) + (beta2*((a.dz.twin2^2)-b))))
pheno.dz <- cbind(rnorm(ndz, f.dz.twin1, e.dz.twin1),
rnorm(ndz, f.dz.twin2, e.dz.twin2))
bp <- as.matrix(rnorm(n.items, 0,1))
bp.mz <- t(matrix(bp, n.items, nmz))
bp.dz <- t(matrix(bp, n.items, ndz))
traits.MZ.twin1 <- matrix(pheno.mz[,1], nmz, n.items)#MZ twins
traits.MZ.twin2 <- matrix(pheno.mz[,2], nmz, n.items)
traits.DZ.twin1 <- matrix(pheno.dz[,1], ndz, n.items)#DZ twins
traits.DZ.twin2 <- matrix(pheno.dz[,2], ndz, n.items)
sumtraits.MZ.twin1 <- as.matrix(apply(traits.MZ.twin1, 1, sum))
sumtraits.MZ.twin2 <- as.matrix(apply(traits.MZ.twin2, 1, sum))
p.MZ.twin1 <- (exp(traits.MZ.twin1-bp.mz))/(1+(exp(traits.MZ.twin1-bp.mz)))#MZ twins
p.MZ.twin2 <- (exp(traits.MZ.twin2-bp.mz))/(1+(exp(traits.MZ.twin2-bp.mz)))
p.DZ.twin1 <- (exp(traits.DZ.twin1-bp.dz))/(1+(exp(traits.DZ.twin1-bp.dz)))
p.DZ.twin2 <- (exp(traits.DZ.twin2-bp.dz))/(1+(exp(traits.DZ.twin2-bp.dz)))
MZ.twin1.itemdata <- t(apply(p.MZ.twin1, 1, function(x) rbinom (n.items, 1, x)))
MZ.twin2.itemdata <- t(apply(p.MZ.twin2, 1, function(x) rbinom (n.items, 1, x)))
DZ.twin1.itemdata <- t(apply(p.DZ.twin1, 1, function(x) rbinom (n.items, 1, x)))
DZ.twin2.itemdata <- t(apply(p.DZ.twin2, 1, function(x) rbinom (n.items, 1, x)))
Ymz <- matrix(0,nmz,(n.items*2))
Ymz[,1:(n.items)] <- MZ.twin1.itemdata
Ymz[,(n.items+1):(n.items*2)] <- MZ.twin2.itemdata
Ydz <- matrix(0,ndz,(n.items*2))
Ydz[,1:(n.items)] <- DZ.twin1.itemdata
Ydz[,(n.items+1):(n.items*2)] <- DZ.twin2.itemdata
bugsdata <- list(Ymz, Ydz, nmz, ndz, n.items, as.vector(bp))
names(bugsdata)<- c("Ymz", "Ydz", "nmz", "ndz", "n.items", "bp")

model.file <- "model.txt"   #<- File with the model as posted above
inits <- list(tau.a = 2, b_star = log(0.2), tau.c = 3, beta = runif(1, - 1.30,1.30),  beta2 = runif(1,-.10, .10))

jags <- jags.model(model.file, bugsdata, inits, n.chains = 1, quiet=FALSE)
update(jags, 15000)
out <- jags.samples(jags, c("tau.a", "beta", "b_star", "tau.c", "beta2"), 20000)
```

However, as mentioned before, the error does not always occur, only at some point when I run the code above many times, so the model may run fine if you try it.

Thanks a lot,
Bye, Inga

• Martyn Plummer
2013-09-26

Thanks Inga. I've got your simulated example running in a loop.

• Inga Schwabe
2013-09-26

Thanks Martyn!

• Martyn Plummer
2013-09-27

Well, I'm sorry to say that I cannot reproduce the bug. I ran your simulated example 20 times with no problem. I also did 50 shorter runs to see if I could simulate some data that would fail quickly, but again there was no problem.

At this point, I have to ask some more about your platform. What OS and what version of JAGS are you running?

• Inga Schwabe
2013-09-27

Dear Martyn,

That's good and bad news. It's good to hear that the model works for you; it is, however, strange that I get an error when I run the very same code on my own machine. Did you use the same amount of burn-in and the same number of iterations? (I used 15,000 and 20,000.) Even when I run the code with a lot of data (N = 2000 and 60 items), I run into the problem at some point.

I am using a Windows 7 system with an Intel i7-2600 CPU at 3.40 GHz and 4.00 GB RAM. My JAGS version is 3.3.0 (64-bit). I was running the same simulations on my Mac as well, and I think no error has occurred there yet. Could my problem somehow be related to the system?

Thanks for your help, I appreciate it very much!

Inga

• Martyn Plummer
2013-09-27

I ran the example exactly as you posted it 20 times, and another shorter version 50 times.

Now running it on Windows (64 bit) to see if it turns up.

• Inga Schwabe
2013-09-28

Hi Martyn,

I got the error message on my Mac as well this morning. I was running the code with N = 1000, var.a = 0.5, beta = 0.77, beta2 = -0.57, b = 0, var.c = 0.3, b_star = log(0.2), 60 items, 20,000 total iterations and a 15,000-iteration burn-in. The error occurred after simulation number 48.

So, unfortunately, I don't think that the system or JAGS version is the problem here.

I really appreciate your time and help!
Inga