## How to deal with case where data is not a stochastic node (time series model)?

• VelocideX - 2014-05-14

In time series models with unobserved states, Gibbs sampling can be used, but it is very inefficient. In JAGS you would write the observation equation as, for example:

```
y[i] ~ dnorm(alpha[i], tau[i])
```

The standard solution to this is to use the Kalman filter to obtain the hidden states. Using Forward Filtering Backward Sampling (FFBS), one runs the Kalman filter forward to obtain the moments of each state conditioned on the data up to time t. Once the end of the time series is reached, the sampling runs backwards, drawing the state at time t given the sampled state at time t+1.
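To make the forward/backward structure concrete, here is a minimal pure-Python sketch of one FFBS draw for a local-level (random walk plus noise) model. This is an illustration, not the poster's code: the function name `ffbs`, the diffuse prior `P0`, and the variance parameterisation (q = 1/tau_omega, r = 1/tau) are all assumptions made for the example.

```python
import math
import random

def ffbs(y, q, r, m0=0.0, P0=1e6, seed=None):
    """One FFBS draw of the hidden states alpha[0..T-1] for the model
        alpha[t+1] ~ Normal(alpha[t], q)   # state (random walk) variance q
        y[t]       ~ Normal(alpha[t], r)   # observation variance r
    with a diffuse Normal(m0, P0) prior on alpha[0]."""
    rng = random.Random(seed)
    T = len(y)
    m, P = [0.0] * T, [0.0] * T   # filtered means and variances

    # Forward pass: standard Kalman filter recursions.
    a, R = m0, P0                 # one-step-ahead moments for alpha[0]
    for t in range(T):
        K = R / (R + r)           # Kalman gain
        m[t] = a + K * (y[t] - a)
        P[t] = (1.0 - K) * R
        a, R = m[t], P[t] + q     # predict alpha[t+1] | y[0..t]

    # Backward pass: sample alpha[T-1], then alpha[t] | alpha[t+1], y[0..t].
    alpha = [0.0] * T
    alpha[T - 1] = rng.gauss(m[T - 1], math.sqrt(P[T - 1]))
    for t in range(T - 2, -1, -1):
        g = P[t] / (P[t] + q)
        h = m[t] + g * (alpha[t + 1] - m[t])
        H = (1.0 - g) * P[t]
        alpha[t] = rng.gauss(h, math.sqrt(H))
    return alpha
```

Because the backward pass conditions each state on its successor, the whole state vector is drawn as a single block, which is exactly the efficiency gain over single-site Gibbs updates.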

I have implemented this in JAGS but a problem arises. The FFBS method effectively makes the states a function of the data.

Suppose you have a random walk model defined by

```
y[t] ~ dnorm(alpha[t], tau[t])
alpha[t+1] ~ dnorm(alpha[t], tau_omega)
```

With FFBS, you can calculate the alpha values conditional on the data (y), tau and tau_omega. But when you actually do this, JAGS effectively assumes there is no data, so all you get are forward predictions based on your priors for tau and tau_omega. You don't get any posteriors for the other parameters.

How can I make JAGS actually sample the posterior here?

• If I define intermediate variables, can I link the data y into the model via dsum? You could also do this with y[i] ~ dnorm(ytemp[i], 10000) and then link ytemp[i] into the model, but this would also be hugely inefficient.
• I have tried passing JAGS a vector alpha where all entries are NA, but this doesn't change the behaviour. If you make any entry of the passed alpha vector non-NA, you seem to get the desired behaviour, but the sampling does not seem to be very efficient.
• Another thought I have had is that you could define the model twice - once via FFBS and then once (with a second, identical copy of the data) using the standard conditional definition of the model above. You would need to make the mapping tau[i] --> tau[i]/2 in order to maintain the correct overall precision. This seems very inelegant though.
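The tau[i] --> tau[i]/2 mapping in the last bullet can be sanity-checked: replacing one observation term of precision tau with two copies of precision tau/2 changes the log-likelihood only by a constant that does not depend on the mean, so the posterior is unaffected. A minimal check, assuming JAGS's precision parameterisation of dnorm (the helper name `dnorm_logpdf` is illustrative):

```python
import math

def dnorm_logpdf(y, mu, tau):
    # Normal log density in JAGS's precision parameterisation:
    # dnorm(mu, tau) has mean mu and variance 1/tau.
    return 0.5 * math.log(tau / (2.0 * math.pi)) - 0.5 * tau * (y - mu) ** 2

def excess(mu):
    # Difference between one tau-precision term and two tau/2-precision terms.
    return dnorm_logpdf(1.0, mu, 2.0) - 2.0 * dnorm_logpdf(1.0, mu, 1.0)

# The quadratic (mu-dependent) parts cancel exactly, leaving a constant:
print(excess(0.3) - excess(-0.7))  # ~0
```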

Any workarounds you can think of are very welcome.

I know that Stan might be efficient for this, but a lot of my time series is missing in an unstructured way, and so Stan is too difficult to use. This should be possible to do within JAGS - you just need to be able to make each of the alpha values contribute to the likelihood.

• 1) I like JAGS very much but you're using the wrong tool for the job. You are trying to work around almost all the code in JAGS---what's left? Once you do all this you might as well re-code the problem in R and use Rcpp.

2) Unstructured missing data in Stan is just fine, you just can't use the same idiom. The right way to think about it is to treat the observations at the time points as one part of the problem and the predicted states at those time points as a separate part. This works fine in Stan.

3) If you are going for filtering in a BUGS context, this project (https://alea.bordeaux.inria.fr/biips/doku.php) used the JAGS front-end to go that route. I don't know what state it's in now.

• VelocideX - 2014-05-14

Yes, I looked at the BIIPS project but it seemed very opaque.

I realise this is a bit of a workaround, but I was hoping to avoid having to write my own slice sampler for some of the variables. One of the benefits of JAGS is that the modelling language is very concise, which makes debugging easier.

I should say that I just tried the 'zeros trick', specifying each alpha in relation to the FFBS parameters. Although this initially looks promising, the random-walk volatility collapses to zero (because of poor convergence, I think) - i.e. the chain does not converge to where it should.

• Martyn Plummer - 2014-05-15

If you want to use FFBS then it needs to be implemented as a Sampler in C++. You will then get block updating of all the hidden nodes in the hidden Markov model.