\name{dic.samples}
\alias{dic}
\alias{dic.samples}
\alias{as.mcmc.dic}
\title{Generate penalized deviance samples}
\description{
Function to extract random samples of the penalized deviance from
a \code{jags} model.
}
\usage{
dic.samples(model, n.iter, thin = 1, type)
\method{as.mcmc}{dic}(x)
}
\arguments{
\item{model}{a jags model object}
\item{n.iter}{number of iterations to monitor}
\item{thin}{thinning interval for monitors}
\item{type}{type of penalty to use}
\item{x}{an object inheriting from class ``dic''}
}
\details{
The \code{dic.samples} function generates penalized deviance
statistics for use in model comparison. The two penalized deviance
statistics generated by \code{dic.samples} are the deviance
information criterion (DIC) and the penalized expected deviance.
These are chosen by giving the values ``pD'' and ``popt'' respectively
as the \code{type} argument.

DIC (Spiegelhalter et al 2002) is calculated by adding the ``effective
number of parameters'' (\code{pD}) to the expected deviance. The
definition of \code{pD} used by \code{dic.samples} is the one proposed
by Plummer (2002) and requires two or more parallel chains in the
model.

DIC is an approximation to the penalized plug-in deviance, which is
used when only a point estimate of the parameters is of interest. The
DIC approximation only holds asymptotically when the effective number
of parameters is much smaller than the sample size, and the model
parameters have a normal posterior distribution.

The penalized expected deviance (Plummer 2008) is calculated by adding
the optimism (\code{popt}) to the expected deviance. The \code{popt}
penalty is always larger than the \code{pD} penalty, and penalizes
complex models more severely.
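
In symbols, writing \eqn{\bar{D}}{Dbar} for the expected deviance
(notation introduced here for compactness only), the two statistics are
\deqn{\mbox{DIC} = \bar{D} + pD}{DIC = Dbar + pD}
\deqn{\mbox{penalized expected deviance} = \bar{D} + popt}{penalized expected deviance = Dbar + popt}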
}
\value{
An object of class ``dic''. This is a list containing the following
elements:
\item{deviance}{A list of \code{mcarray} objects, one for each
observed stochastic node, containing samples of the deviance}
\item{penalty}{A list of \code{mcarray} objects, one for each
observed stochastic node, containing samples of the penalty
function}
\item{type}{A string identifying the type of penalty: ``pD'' or
``popt''}

An object of class \code{dic} can be coerced to an \code{mcmc} object
using the \code{as.mcmc} generic function. The resulting \code{mcmc}
object has two variables: the mean deviance over all chains and
the penalty.
}
\note{
The \code{popt} penalty is estimated by importance weighting, and may
be numerically unstable. It is recommended to inspect the \code{dic}
object after coercing it to an \code{mcmc} object using functions from
the \code{coda} package.
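
For example, a minimal sketch of such a check (the name
\code{dic.popt} below is a placeholder for a ``dic'' object returned
by \code{dic.samples} with \code{type = "popt"}):
\preformatted{
library(coda)
dic.mcmc <- as.mcmc(dic.popt)  # two variables: mean deviance and penalty
summary(dic.mcmc)              # numerical summary of deviance and penalty
traceplot(dic.mcmc)            # visual check for numerical instability
}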
}
\author{Martyn Plummer}
\references{
Spiegelhalter, D., N. Best, B. Carlin, and A. van der Linde (2002),
Bayesian measures of model complexity and fit (with discussion).
\emph{Journal of the Royal Statistical Society Series B}
\bold{64}, 583-639.

Plummer, M. (2002),
Discussion of the paper by Spiegelhalter et al.
\emph{Journal of the Royal Statistical Society Series B}
\bold{64}, 620.

Plummer, M. (2008),
Penalized loss functions for Bayesian model comparison.
\emph{Biostatistics}, doi:10.1093/biostatistics/kxm049.
\seealso{\code{\link{diffdic}}}
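% Illustrative sketch: "example.bug" and the data list 'dat' are
% placeholders for a real JAGS model description and data set. At
% least two parallel chains are needed for the ``pD'' penalty.
\examples{
\dontrun{
model <- jags.model("example.bug", data = dat, n.chains = 2)
update(model, 1000)  # burn-in

## Deviance information criterion (expected deviance + pD)
dic.pD <- dic.samples(model, n.iter = 1000, type = "pD")
dic.pD

## Penalized expected deviance (expected deviance + popt)
dic.popt <- dic.samples(model, n.iter = 1000, type = "popt")
dic.popt

## Penalized deviance statistics from two models fitted to the
## same data can be compared with diffdic()
}
}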
\keyword{models}