Why Bayesian? How Bayesian?

02/06/2023

A Bayesian framework for statistical inference affords an intuitive and natural way of combining existing information or beliefs with new data. Scientists, researchers, and developers most often approach projects with expectations and information, whether from their own previous work or from the published literature. A Bayesian framework allows us to fuse that past information with new data to generate probability-based explanations of our data and the processes that generated them.

Interpretation of Bayesian analyses is intuitive and straightforward. In Bayesian inference, confidence (or uncertainty) in our estimates is represented by credible intervals, which have a common-sense interpretation. For example, suppose we conduct a study to investigate the reduction in blood pressure from medication A. If our Bayesian analysis yields a distribution of diastolic blood pressure differences with 0.95 of that distribution lying between 15 and 35 mmHg, then we are 95% certain that medication A lowered blood pressure by 15 to 35 mmHg in our study. This is a 95% credible interval, and it means exactly what it appears to mean.
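To make that concrete, a credible interval is simply a quantile interval of the posterior distribution. Here's a minimal sketch in R (using simulated values as a stand-in for real posterior draws of the blood pressure reduction):

# Stand-in for posterior draws of the reduction in diastolic BP (mmHg)
posterior_draws <- rnorm(4000, mean = 25, sd = 5)

# The central 0.95 of the posterior is a 95% credible interval
quantile(posterior_draws, probs = c(0.025, 0.975))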

While frequentist confidence intervals are often portrayed as simple measures of uncertainty, this is not the case, and their interpretation is not as intuitive. Following from the example above, let’s say we conducted a frequentist analysis of the data yielding a 95% confidence interval of 20 to 30 mmHg. This confidence interval represents uncertainty about the interval-generating procedure, rather than the parameter of interest (the reduction in blood pressure). A 95% confidence interval means that if we repeated the study an infinite number of times and calculated an interval each time, 95% of those intervals would contain the true value of the parameter; it tells us nothing direct about the single interval we actually calculated. Not particularly straightforward, intuitive, or meaningful.
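We can see this repeated-sampling interpretation in action with a quick simulation (not from the original example): draw many samples from a population with a known mean, compute a 95% confidence interval for each, and count how often the intervals contain the truth.

# Long-run coverage of 95% confidence intervals
set.seed(44)
true_mean <- 25
covered <- replicate(10000, {
        x <- rnorm(30, mean = true_mean, sd = 5)
        ci <- t.test(x)$conf.int
        ci[1] <= true_mean & true_mean <= ci[2]
})
mean(covered)  # ~0.95 of the intervals cover the true mean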

# How Bayesian?

Bayesian inference is often performed with purpose-designed software. Historically, BUGS (Bayesian inference Using Gibbs Sampling) and JAGS (Just Another Gibbs Sampler) were the most common languages used for Bayesian inference.

Excitingly, in 2012 a software platform called Stan, named after Monte Carlo method pioneer Stanislaw Ulam, was released (https://mc-stan.org/). Though it scared me at first, Stan is where it’s at for cutting-edge probabilistic statistical modelling. I could not do my work without it. Stan provides a fantastic framework for creating, running, and drawing inference from statistical models, and informative diagnostics are generated automatically to let you know immediately how your model is performing. Development is also very active: Stan users see new features and improvements with every release and can interact directly with developers and other users on the Stan forum (https://discourse.mc-stan.org/).

Let’s look at an example to see how a simple linear regression could be approached in R and Stan.

First, we can simulate some data. Let’s make a fake data set of IQ scores for two groups:

# Simulate IQ scores for two groups of 100:
# group A ~ normal(104, 15) and group B ~ normal(110, 10)
set.seed(44)
data <- data.frame(
        IQ = round(c(
                rnorm(100, 104, 15)
                , rnorm(100, 110, 10)
        ))
        , Group = factor(rep(c('A', 'B'), each = 100))
)
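As a quick check (not in the original post), the observed group means should land near the simulated values of 104 and 110:

aggregate(IQ ~ Group, data = data, FUN = mean)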

A simple frequentist linear regression could be run as:

freq_fit <- lm(IQ ~ Group
               , data = data)

We can use the summary function to view the estimated intercept (mean IQ for group A) and the mean difference for group B:

summary(freq_fit)

Call:
lm(formula = IQ ~ Group, data = data)

Residuals:
    Min      1Q  Median      3Q     Max 
-45.980  -7.687  -0.980   8.117  47.020 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  101.980      1.381  73.825  < 2e-16 ***
GroupB         7.610      1.954   3.895 0.000134 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 13.81 on 198 degrees of freedom
Multiple R-squared:  0.07118,   Adjusted R-squared:  0.06649 
F-statistic: 15.17 on 1 and 198 DF,  p-value: 0.000134

We can calculate the 95% confidence intervals with confint:

confint(freq_fit, level = 0.95)
                2.5 %    97.5 %
(Intercept) 99.255910 104.70409
GroupB       3.757555  11.46245

We can approach the same model in Stan with the following code, using write_stan_file() from the cmdstanr package to save the model code to a file:

library(cmdstanr)

stan_code <- write_stan_file("
data{
    int N ;             // number of observations
    vector[N] y ;       // outcome (IQ)
    vector[N] x ;       // 0/1 indicator for group B
}
parameters{
    real intercept ;        // mean IQ in group A
    real GroupB ;           // mean difference for group B
    real<lower=0> ysd ;     // residual standard deviation
}
model{
    // priors
    intercept ~ normal(105,15) ;
    GroupB ~ normal(0,20) ;
    ysd ~ normal(1,1) ;
    // likelihood
    y ~ normal( intercept + GroupB * x , ysd) ;
}
")

Then we can summarize the parameters as mean estimates with 95% credible intervals:

Coefficient    mean    sd   2.5%  97.5%
intercept    102.05  1.08  99.93 104.08
GroupB         7.52  1.52   4.63  10.50
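For reference, a summary like the one above can be pulled from the fitted cmdstanr object along these lines (a sketch; fit$summary() passes extra summary functions through to posterior::summarise_draws(), and the exact column labels may differ):

fit$summary(
        variables = c('intercept', 'GroupB')
        , mean, sd
        , ~quantile(.x, probs = c(0.025, 0.975))
)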

We can see in this example that the mean estimates and even the intervals surrounding the estimates are quite similar. However, as discussed above, how we interpret the results is very different! The Bayesian regression gives us the most probable values for the intercept and the effect of group, and the credible intervals give us the 95% most probable range for those values.

Anyone working with lots of important, heterogeneous, or noisy data, or anywhere a clear understanding of uncertainty is needed, needs Bayesian statistics. Bayesian inference and prediction have been used in finance, weather forecasting, consumer marketing, pharmacology, epidemiology, psychology, and more!

If you’re interested in learning how to get started with Bayesian inference using Stan, you might consider attending an upcoming introductory workshop: https://www.precision-analytics.ca/workshops/introduction-to-bayesian-analysis-in-r-and-stan/