Analyzing model derived data within JAGS

Anonymous
2012-07-18
2012-09-01
  • Anonymous - 2012-07-18

    I am trying to model a logistic regression on data predicted from the results
    of a previous logistic regression, and am having trouble figuring out how to
    do it, either in OpenBUGS (BRugs) or JAGS (with rjags). I'm attempting to
    determine the probabilistic maturation reaction norms (PMRNs) of different
    fish populations. Basically, I am doing a logistic regression to estimate the
    length at which an individual matures at age t and t-1, then using an
    intermediate calculation to estimate the probability the individual is
    maturing for the first time (P_Mat <- (PrMat(t) - PrMat(t-1))/(1-PrMat(t-1)) in the
    code below). I'd then like to run a second logistic regression on P_Mat and
    length to determine the expected length at which an individual first matures.
    My attempted code is below:

    [code]model{
    
    # Calculate Age-Specific Length at maturity with logistic regression (Ogive)
    #Priors
    for (i in 1:3) {        # Cohort 1-3
        for (j in 1:4) {    # Age 1-4
            alpha[i,j] ~ dnorm(0, 0.0001)   # Intercept
            beta[i,j] ~ dnorm(0, 0.0001)    # Slope
        }
    }
    
    #Age and cohort-specific logistic regression centered for better convergence
    for (i in 1:N) {
            logit(p[i]) <- alpha[COHORT[i],AGE[i]] + beta[COHORT[i],AGE[i]] * (LENGTH[i]-mean(LENGTH[]))
            MAT[i] ~ dbern(p.bound[i])  # Bernoulli distribution for maturity 
            p.bound[i] <- max(0, min(1,p[i]))   # Draw from dist limited to 0-1
    }
    
    ##Intermediate calculations to derive P_Mat     
    for (i in 1:3) {
        for (j in 2:4) {
            Prev_alpha[i,j] <- alpha[i, j-1]
            Prev_beta[i,j] <- beta[i, j-1]
        }
    }
    # Dummy variable for age-1 fish (not concerned about their PMRNs)
    for (i in 1:3) {
        Prev_alpha[i,1] <- 1
        Prev_beta[i,1] <- 1
    }
    
    #Estimate probability each fish matured at current age
    for (i in 1:N) {
        PrMat[i] <- 1/(1+exp(-(alpha[COHORT[i],AGE[i]] + beta[COHORT[i],AGE[i]]*(LENGTH[i]-mean(LENGTH[])))))
        PrevMat[i] <- 1/(1+exp(-(Prev_alpha[COHORT[i],AGE[i]] + Prev_beta[COHORT[i],AGE[i]]*(PrevLen[i]-mean(LENGTH[])))))
        P_Mat[i] <- (PrMat[i] - PrevMat[i])/(1-PrevMat[i])
    }
    
    ######Begin model specification for second regression to estimate PMRN
    #Priors
    for (i in 1:3) {
        for (j in 1:4) {
            M_alpha[i,j] ~ dnorm(0, 0.0001)
            M_beta[i,j] ~ dnorm(0, 0.0001)
        }
    }
    
    #Second regression of derived P_Mat data
    for (i in 1:N) {
        # p2 and p2.bound are separate nodes from p and p.bound in the first regression
        logit(p2[i]) <- M_alpha[COHORT[i],AGE[i]] + M_beta[COHORT[i],AGE[i]] * (LENGTH[i]-mean(LENGTH[]))
        P_Mat[i] ~ dbern(p2.bound[i])   # Bernoulli distribution for first-time maturation
        p2.bound[i] <- max(0, min(1, p2[i]))   # Draw from dist limited to 0-1
    }
    }[/code]

    The supplied data are AGE, LENGTH, PrevLen (the estimated length of each fish at the previous age), MAT (1 or 0), and COHORT.
    So, is this possible to do? If so, how? I was thinking that wrapping the first
    regression and the calculation of P_Mat in a data block in JAGS could solve the
    problem, but I've tried that to no avail (see the sketch below). I've also read
    about the cut and dsum functions, but I'm still learning the BUGS language and
    its possibilities, so any thoughts would be extremely helpful. Thanks!
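
    For reference, this is the general shape of the JAGS data block syntax I was
    trying to use (only a placeholder transformation is shown; as I understand it,
    the data block is evaluated once, before the model, and can only work with
    observed data and constants):

    [code]data{
        # Pre-compute centered covariates from the observed data
        for (i in 1:N) {
            cLENGTH[i] <- LENGTH[i] - mean(LENGTH[])
            cPrevLen[i] <- PrevLen[i] - mean(LENGTH[])
        }
    }
    model{
        # ...first regression, P_Mat calculation, and second regression as above...
    }[/code]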
    
     
  • Martyn Plummer - 2012-07-19

    You are defining your problem operationally, in terms of what you want to do,
    but the best way to formulate a BUGS model is to think of it as a (plausible)
    mechanism for generating the data. Start with a directed acyclic graph (DAG)
    showing the causal relationship between the variables, including unobserved
    ones. Then turn this into a BUGS model.

    Hopefully this will allow you to frame your question in terms of a full
    probability model.
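
    For example (a toy sketch only, echoing your first regression): each arrow in
    the DAG becomes a stochastic or deterministic relation in the model, so a
    single arrow length -> maturity translates directly into

    [code]model{
        for (i in 1:N) {
            MAT[i] ~ dbern(p[i])                      # child node: observed maturity
            logit(p[i]) <- alpha + beta * LENGTH[i]   # link to its parent, length
        }
        alpha ~ dnorm(0, 0.0001)   # priors on the founder nodes of the DAG
        beta ~ dnorm(0, 0.0001)
    }[/code]

    Unobserved variables would then appear as additional latent nodes in the same
    graph.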

     
