Our Stan code uses more arrays and looping than some people might like. This is because we think it helps most learners (though not all) to see exactly what’s going on.

This expands on Section 4.4.2 of our book. The basic random effects meta-analysis extends the common effect model by allowing studies to sample from their own populations, so that the true contrasts or intervention effects in the study populations differ. It seems reasonable to represent this between-study variation or heterogeneity with a normal distribution because of the Central Limit Theorem (discussed in Chapter 4 of our book). This produces a multilevel model:

data {
  int m;                            // number of studies
  array[m] real logor;              // estimated log odds ratios
  array[m] real<lower=0> se_logor;  // their estimated standard errors
}
parameters {
  real<lower=0> tau;  // between-study (heterogeneity) standard deviation
  real theta;         // average effect across study populations
  array[m] real u;    // study-specific deviations from theta
}
model {
  theta ~ ADD_YOUR_PRIOR_HERE;
  tau ~ ADD_YOUR_PRIOR_HERE;
  u ~ normal(0, tau);
  for(j in 1:m) {
    logor[j] ~ normal(theta + u[j], se_logor[j]);
  }
}

Here, we again start with a contrast-based model and use the log odds ratio as an example of a study statistic with an asymptotically normal sampling distribution. We also treat the study estimates of the standard errors as known exactly, which makes the model a Bayesian analogue of the frequentist DerSimonian-Laird estimator.
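As a point of comparison, the DerSimonian-Laird moment estimator mentioned above can be computed in a few lines of plain Python (a minimal sketch; the function name and the data values are ours, purely for illustration):

```python
def dersimonian_laird(y, se):
    """DerSimonian-Laird moment estimator of the heterogeneity variance tau^2,
    together with the inverse-variance common-effect estimate used to build it."""
    w = [1 / s**2 for s in se]                                # inverse-variance weights
    sw = sum(w)
    theta_fe = sum(wi * yi for wi, yi in zip(w, y)) / sw      # common-effect estimate
    q = sum(wi * (yi - theta_fe)**2 for wi, yi in zip(w, y))  # Cochran's Q
    denom = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / denom)               # truncated at zero
    return theta_fe, tau2

# Hypothetical log odds ratios and their standard errors:
theta_fe, tau2 = dersimonian_laird([0.2, -0.1, 0.4], [0.1, 0.15, 0.2])
```

Note that, like the Stan model above, this takes the standard errors at face value; unlike the Bayesian model, it truncates the heterogeneity estimate at zero rather than placing a prior on it.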

We have included only the basic prior-and-likelihood implementation, omitting the “generated quantities” block, which is useful for storing draws of the study-specific parameters theta + u[j] for graphics; it is also where we create random draws for prior and posterior predictive checking. We illustrate these methods in another post.
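For concreteness, a generated quantities block along those lines might look like the following (a sketch only; the names delta, logor_pred, and delta_new are ours):

```stan
generated quantities {
  array[m] real delta;                      // study-specific effects theta + u[j]
  array[m] real logor_pred;                 // posterior predictive log odds ratios
  real delta_new = normal_rng(theta, tau);  // effect in a hypothetical new study
  for(j in 1:m) {
    delta[j] = theta + u[j];
    logor_pred[j] = normal_rng(delta[j], se_logor[j]);
  }
}
```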

Divergent transitions

Stan will flag “divergent transitions”: attempted moves to a new vector of parameter values (the unknowns) that are potentially biased by sharp changes in the gradient of the log-posterior density. In multilevel models this happens when the heterogeneity standard deviation approaches zero, a pathology known as the funnel (sometimes Neal’s funnel).

The Stan website’s page on warnings and diagnostics is the go-to resource for learning more about these challenges to MCMC algorithms, and Michael Betancourt has a case study on the Stan website devoted to the funnel. One way to avoid it is to use a prior on tau that places low density on values close to zero. Half-normal, half-t, and half-Cauchy priors are prone to the problem, because they place their highest density near zero, while gamma, lognormal, chi-squared, or bespoke elicited priors can avoid it, but only if you are willing to be more informative about tau. Other remedies are explained in the case study. The slow-mixing traceplots in Figure 2.12 of the book are in fact caused by funnel behaviour.
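One remedy covered in that case study is the non-centered parameterisation, which samples standardised deviations and scales them by tau, flattening the funnel. A sketch of how the model above could be rewritten (the name u_raw is ours):

```stan
parameters {
  real<lower=0> tau;
  real theta;
  array[m] real u_raw;   // standardised study-specific deviations
}
model {
  theta ~ ADD_YOUR_PRIOR_HERE;
  tau ~ ADD_YOUR_PRIOR_HERE;
  u_raw ~ std_normal();  // implies tau * u_raw[j] ~ normal(0, tau)
  for(j in 1:m) {
    logor[j] ~ normal(theta + tau * u_raw[j], se_logor[j]);
  }
}
```

Because tau multiplies u_raw rather than acting as its scale parameter, the geometry near tau = 0 is much easier for the sampler to explore.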

Including uncertainty in the standard errors

When the sampling distribution is asymptotically normal, the natural way to acknowledge uncertainty in the standard error is to swap the normal likelihood for a Student t likelihood, using Stan’s student_t() distribution with n[j] - 2 degrees of freedom. This model is analogous to the Sidik-Jonkman estimator with the Hartung-Knapp adjustment, and we address this in Section 4.6 of the book. It also has the effect of making the meta-analysis a little more robust to outlier studies.

data {
  int m;                            // number of studies
  array[m] int n;                   // sample size in each study
  array[m] real logor;              // estimated log odds ratios
  array[m] real<lower=0> se_logor;  // their estimated standard errors
}
parameters {
  real<lower=0> tau;  // between-study (heterogeneity) standard deviation
  real theta;         // average effect across study populations
  array[m] real u;    // study-specific deviations from theta
}
model {
  theta ~ ADD_YOUR_PRIOR_HERE;
  tau ~ ADD_YOUR_PRIOR_HERE;
  u ~ normal(0, tau);
  for(j in 1:m) {
    logor[j] ~ student_t(n[j] - 2,      // degrees of freedom come first
                         theta + u[j],
                         se_logor[j]);
  }
}

This Student’s t distribution is the exact likelihood for means but is heuristic for log odds ratios, chosen simply for its heavier tails; other options could be tried, including different degrees of freedom for the t.
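Those heavier tails are easy to check numerically. The pure-Python sketch below evaluates the standard density formulas at four standard errors from the mean, where the t density with 5 degrees of freedom is larger than the normal density by more than an order of magnitude (the function names and the values 4 and 5 are ours, for illustration):

```python
import math

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def student_t_pdf(x, nu):
    """Student t density with nu degrees of freedom (location 0, scale 1)."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

# Four units into the tail: the t likelihood penalises an outlying study far less.
ratio = student_t_pdf(4, 5) / normal_pdf(4)
```

This is why the t likelihood downweights outlier studies: an estimate far from theta + u[j] is much less surprising under the t than under the normal, so it pulls the posterior around less.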

tags: #stan, #stan-repo, #repo


Discover more from The Bayesian Meta-Analysis Network