- Introduction
- Setup
- Example dataset
- Model
- Extracting draws from a fit in tidy-format using `spread_draws`

- Point summaries and intervals
- Combining variables with different indices in a single tidy format data frame
- Plotting intervals with multiple probability levels
- Intervals with densities
- Other visualizations of distributions: `stat_slabinterval`

- Posterior means and predictions
- Quantile dotplots
- Posterior predictions
- Posterior predictions, Kruschke-style
- Fit/prediction curves
- Comparing levels of a factor
- Ordinal models

This vignette describes how to use the `tidybayes` and `ggdist` packages to extract and visualize tidy data frames of draws from posterior distributions of model variables, means, and predictions from `brms::brm`. For a more general introduction to `tidybayes` and its use on general-purpose Bayesian modeling languages (like Stan and JAGS), see `vignette("tidybayes")`.

The following libraries are required to run this vignette:

```
library(magrittr)
library(dplyr)
library(purrr)
library(forcats)
library(tidyr)
library(modelr)
library(ggdist)
library(tidybayes)
library(ggplot2)
library(cowplot)
library(rstan)
library(brms)
library(ggrepel)
library(RColorBrewer)
library(gganimate)
library(posterior)
theme_set(theme_tidybayes() + panel_border())
```

These options help Stan run faster:

```
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
```

To demonstrate `tidybayes`, we will use a simple dataset with 10 observations from each of 5 conditions:

```
set.seed(5)
n = 10
n_condition = 5
ABC = tibble(
  condition = rep(c("A","B","C","D","E"), n),
  response = rnorm(n * 5, c(0,1,2,1,-1), 0.5)
)
```

A snapshot of the data looks like this:

`head(ABC, 10)`

condition | response |
---|---|
A | -0.4204277 |
B | 1.6921797 |
C | 1.3722541 |
D | 1.0350714 |
E | -0.1442796 |
A | -0.3014540 |
B | 0.7639168 |
C | 1.6823143 |
D | 0.8571132 |
E | -0.9309459 |

This is a typical tidy format data frame: one observation per row. Graphically:

```
ABC %>%
  ggplot(aes(y = condition, x = response)) +
  geom_point()
```

Let’s fit a hierarchical model with shrinkage towards a global mean:

```
m = brm(
  response ~ (1|condition),
  data = ABC,
  prior = c(
    prior(normal(0, 1), class = Intercept),
    prior(student_t(3, 0, 1), class = sd),
    prior(student_t(3, 0, 1), class = sigma)
  ),
  control = list(adapt_delta = .99),
  file = "models/tidy-brms_m.rds"  # cache model (can be removed)
)
```

The results look like this:

`m`

```
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: response ~ (1 | condition)
## Data: ABC (Number of observations: 50)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Group-Level Effects:
## ~condition (Number of levels: 5)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.17 0.43 0.60 2.22 1.00 944 1542
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 0.50 0.48 -0.49 1.41 1.00 904 1204
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.56 0.06 0.46 0.70 1.00 2057 2243
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
```

## Extracting draws from a fit in tidy-format using `spread_draws`

Now that we have our results, the fun begins: getting the draws out in a tidy format! First, we'll use the `get_variables()` function to get a list of raw model variable names so that we know what variables we can extract from the model:

`get_variables(m)`

```
## [1] "b_Intercept" "sd_condition__Intercept" "sigma" "r_condition[A,Intercept]"
## [5] "r_condition[B,Intercept]" "r_condition[C,Intercept]" "r_condition[D,Intercept]" "r_condition[E,Intercept]"
## [9] "lp__" "accept_stat__" "stepsize__" "treedepth__"
## [13] "n_leapfrog__" "divergent__" "energy__"
```

Here, `b_Intercept` is the global mean, and the `r_condition[]` variables are offsets from that mean for each condition. Given these variables:

`r_condition[A,Intercept]`

`r_condition[B,Intercept]`

`r_condition[C,Intercept]`

`r_condition[D,Intercept]`

`r_condition[E,Intercept]`

We might want a data frame where each row is a draw from either `r_condition[A,Intercept]`, `r_condition[B,Intercept]`, `...[C,...]`, `...[D,...]`, or `...[E,...]`, and where we have columns indexing which chain/iteration/draw the row came from and which condition (`A` to `E`) it is for. That would allow us to easily compute quantities grouped by condition, generate plots by condition using ggplot, or even merge draws with the original data to plot data and posteriors simultaneously.

The workhorse of `tidybayes` is the `spread_draws()` function, which does this extraction for us. It includes a simple specification format that we can use to extract variables and their indices into tidy-format data frames.

Given a variable in the model like this:

`r_condition[D,Intercept]`

We can provide `spread_draws()` with a column specification like this:

`r_condition[condition,term]`

where `condition` corresponds to `D` and `term` corresponds to `Intercept`. There is nothing too magical about what `spread_draws()` does with this specification: under the hood, it splits the variable indices by commas and spaces (you can split by other characters by changing the `sep` argument), and it lets you assign columns to the resulting indices in order. So `r_condition[D,Intercept]` has indices `D` and `Intercept`, and `spread_draws()` lets us extract these indices as columns in the resulting tidy data frame of draws from `r_condition`:

```
m %>%
  spread_draws(r_condition[condition,term]) %>%
  head(10)
```

condition | term | r_condition | .chain | .iteration | .draw |
---|---|---|---|---|---|
A | Intercept | 0.6829060 | 1 | 1 | 1 |
A | Intercept | -0.7785857 | 1 | 2 | 2 |
A | Intercept | -0.5797397 | 1 | 3 | 3 |
A | Intercept | -0.7168785 | 1 | 4 | 4 |
A | Intercept | -0.7858417 | 1 | 5 | 5 |
A | Intercept | -0.8177604 | 1 | 6 | 6 |
A | Intercept | -0.4562683 | 1 | 7 | 7 |
A | Intercept | -0.2476565 | 1 | 8 | 8 |
A | Intercept | 0.0870252 | 1 | 9 | 9 |
A | Intercept | -0.0982682 | 1 | 10 | 10 |

We can choose whatever names we want for the index columns; e.g.:

```
m %>%
  spread_draws(r_condition[c,t]) %>%
  head(10)
```

c | t | r_condition | .chain | .iteration | .draw |
---|---|---|---|---|---|
A | Intercept | 0.6829060 | 1 | 1 | 1 |
A | Intercept | -0.7785857 | 1 | 2 | 2 |
A | Intercept | -0.5797397 | 1 | 3 | 3 |
A | Intercept | -0.7168785 | 1 | 4 | 4 |
A | Intercept | -0.7858417 | 1 | 5 | 5 |
A | Intercept | -0.8177604 | 1 | 6 | 6 |
A | Intercept | -0.4562683 | 1 | 7 | 7 |
A | Intercept | -0.2476565 | 1 | 8 | 8 |
A | Intercept | 0.0870252 | 1 | 9 | 9 |
A | Intercept | -0.0982682 | 1 | 10 | 10 |

But the more descriptive and less cryptic names from the previous example are probably preferable.

In this particular model there is only one term (`Intercept`), so we could omit that index altogether to just get each `condition` and the value of `r_condition` for that condition:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  head(10)
```

condition | r_condition | .chain | .iteration | .draw |
---|---|---|---|---|
A | 0.6829060 | 1 | 1 | 1 |
A | -0.7785857 | 1 | 2 | 2 |
A | -0.5797397 | 1 | 3 | 3 |
A | -0.7168785 | 1 | 4 | 4 |
A | -0.7858417 | 1 | 5 | 5 |
A | -0.8177604 | 1 | 6 | 6 |
A | -0.4562683 | 1 | 7 | 7 |
A | -0.2476565 | 1 | 8 | 8 |
A | 0.0870252 | 1 | 9 | 9 |
A | -0.0982682 | 1 | 10 | 10 |

**Note:** If you have used `spread_draws()` with a raw sample from Stan or JAGS, you may be used to running `recover_types` before `spread_draws()` to get index column values back (e.g. if the index was a factor). This is not necessary when using `spread_draws()` on `brms` models, because those models already contain that information in their variable names. For more on `recover_types`, see `vignette("tidybayes")`.

`tidybayes` provides a family of functions for generating point summaries and intervals from draws in a tidy format. These functions follow the naming scheme `[median|mean|mode]_[qi|hdi]`, for example `median_qi()`, `mean_qi()`, `mode_hdi()`, and so on. The first name (before the `_`) indicates the type of point summary, and the second name indicates the type of interval. `qi` yields a quantile interval (a.k.a. equi-tailed interval, central interval, or percentile interval) and `hdi` yields a highest (posterior) density interval. Custom point summary or interval functions can also be applied using the `point_interval()` function.
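To make the naming scheme concrete, here is a minimal sketch applying these summary functions to a plain numeric vector (they also work on data frames of draws, as shown throughout this vignette):

```r
library(ggdist)

set.seed(1)
x = rnorm(1000, mean = 5)

# median with a 95% quantile interval
median_qi(x)

# mean with an 80% highest-density interval, via the generic point_interval()
point_interval(x, .width = .80, .point = mean, .interval = hdi)
```

Both calls return a data frame with the point summary, interval endpoints, and `.width`/`.point`/`.interval` columns describing how they were computed.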

For example, we might extract the draws corresponding to posterior distributions of the overall mean and standard deviation of observations:

```
m %>%
  spread_draws(b_Intercept, sigma) %>%
  head(10)
```

.chain | .iteration | .draw | b_Intercept | sigma |
---|---|---|---|---|
1 | 1 | 1 | -0.2767032 | 0.5674665 |
1 | 2 | 2 | 0.8924211 | 0.5621870 |
1 | 3 | 3 | 0.9407381 | 0.5541763 |
1 | 4 | 4 | 0.9993291 | 0.5526330 |
1 | 5 | 5 | 1.0384020 | 0.5439572 |
1 | 6 | 6 | 1.4137790 | 0.5798467 |
1 | 7 | 7 | 0.3329987 | 0.5507381 |
1 | 8 | 8 | 0.5616170 | 0.5364457 |
1 | 9 | 9 | 0.1770721 | 0.5441246 |
1 | 10 | 10 | 0.3237930 | 0.4982546 |

As with `r_condition[condition,term]`, this gives us a tidy data frame. If we want the median and 95% quantile interval of the variables, we can apply `median_qi()`:

```
m %>%
  spread_draws(b_Intercept, sigma) %>%
  median_qi(b_Intercept, sigma)
```

b_Intercept | b_Intercept.lower | b_Intercept.upper | sigma | sigma.lower | sigma.upper | .width | .point | .interval |
---|---|---|---|---|---|---|---|---|
0.5258874 | -0.4902497 | 1.40569 | 0.5571528 | 0.4558006 | 0.6957939 | 0.95 | median | qi |

We can specify the columns we want to get medians and intervals from, as above, or if we omit the list of columns, `median_qi()` will use every column that is not a grouping column or a special column (like `.chain`, `.iteration`, or `.draw`). Thus in the above example, `b_Intercept` and `sigma` are redundant arguments to `median_qi()` because they are also the only columns we gathered from the model. So we can simplify this to:

```
m %>%
  spread_draws(b_Intercept, sigma) %>%
  median_qi()
```

b_Intercept | b_Intercept.lower | b_Intercept.upper | sigma | sigma.lower | sigma.upper | .width | .point | .interval |
---|---|---|---|---|---|---|---|---|
0.5258874 | -0.4902497 | 1.40569 | 0.5571528 | 0.4558006 | 0.6957939 | 0.95 | median | qi |

If you would rather have a long-format list of intervals, use `gather_draws()` instead:

```
m %>%
  gather_draws(b_Intercept, sigma) %>%
  median_qi()
```

.variable | .value | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
b_Intercept | 0.5258874 | -0.4902497 | 1.4056903 | 0.95 | median | qi |
sigma | 0.5571528 | 0.4558006 | 0.6957939 | 0.95 | median | qi |

For more on `gather_draws()`, see `vignette("tidybayes")`.

When we have a model variable with one or more indices, such as `r_condition`, we can apply `median_qi()` (or other functions in the `point_interval()` family) as we did before:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  median_qi()
```

condition | r_condition | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | -0.3363530 | -1.2325960 | 0.7343642 | 0.95 | median | qi |
B | 0.4798242 | -0.4408264 | 1.5448895 | 0.95 | median | qi |
C | 1.3179042 | 0.4059274 | 2.3755540 | 0.95 | median | qi |
D | 0.4981223 | -0.4433201 | 1.5536575 | 0.95 | median | qi |
E | -1.3978927 | -2.3576833 | -0.3957483 | 0.95 | median | qi |

How did `median_qi()` know what to aggregate? Data frames returned by `spread_draws()` are automatically grouped by all index variables you pass to it; in this case, that means `spread_draws()` groups its results by `condition`. `median_qi()` respects those groups and calculates the point summaries and intervals within each group. Then, because no columns were passed to `median_qi()`, it acts on the only non-special (`.`-prefixed) and non-group column, `r_condition`. So the above shortened syntax is equivalent to this more verbose call:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  group_by(condition) %>%    # this line not necessary (done by spread_draws)
  median_qi(r_condition)     # r_condition not necessary (it is the only non-group column)
```

condition | r_condition | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | -0.3363530 | -1.2325960 | 0.7343642 | 0.95 | median | qi |
B | 0.4798242 | -0.4408264 | 1.5448895 | 0.95 | median | qi |
C | 1.3179042 | 0.4059274 | 2.3755540 | 0.95 | median | qi |
D | 0.4981223 | -0.4433201 | 1.5536575 | 0.95 | median | qi |
E | -1.3978927 | -2.3576833 | -0.3957483 | 0.95 | median | qi |

`tidybayes` also provides an implementation of `posterior::summarise_draws()` for grouped data frames (`tidybayes::summarise_draws.grouped_df()`), which you can use to quickly get convergence diagnostics:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  summarise_draws()
```

condition | variable | mean | median | sd | mad | q5 | q95 | rhat | ess_bulk | ess_tail |
---|---|---|---|---|---|---|---|---|---|---|
A | r_condition | -0.3121398 | -0.3363530 | 0.4978955 | 0.4799092 | -1.0782228 | 0.5256269 | 1.002543 | 971.7765 | 1267.447 |
B | r_condition | 0.4928093 | 0.4798242 | 0.4999156 | 0.4744824 | -0.3005635 | 1.3402506 | 1.003311 | 983.1190 | 1325.680 |
C | r_condition | 1.3327741 | 1.3179042 | 0.5009147 | 0.4778770 | 0.5567025 | 2.1884327 | 1.003105 | 993.5292 | 1489.303 |
D | r_condition | 0.5118006 | 0.4981223 | 0.5020947 | 0.4765011 | -0.2773801 | 1.3607620 | 1.001995 | 1012.8868 | 1395.754 |
E | r_condition | -1.3883041 | -1.3978927 | 0.4995347 | 0.4680347 | -2.1723234 | -0.5349330 | 1.002854 | 989.8028 | 1391.940 |

`spread_draws()` and `gather_draws()` support extracting variables that have different indices into the same data frame. Indices with the same name are automatically matched up, and values are duplicated as necessary to produce one row per combination of levels of all indices. For example, we might want to calculate the mean within each condition (call this `condition_mean`). In this model, that mean is the intercept (`b_Intercept`) plus the effect for a given condition (`r_condition`).

We can gather draws from `b_Intercept` and `r_condition` together in a single data frame:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  head(10)
```

.chain | .iteration | .draw | b_Intercept | condition | r_condition |
---|---|---|---|---|---|
1 | 1 | 1 | -0.2767032 | A | 0.6829060 |
1 | 1 | 1 | -0.2767032 | B | 1.1563864 |
1 | 1 | 1 | -0.2767032 | C | 2.1754891 |
1 | 1 | 1 | -0.2767032 | D | 1.0605452 |
1 | 1 | 1 | -0.2767032 | E | -0.3957730 |
1 | 2 | 2 | 0.8924211 | A | -0.7785857 |
1 | 2 | 2 | 0.8924211 | B | -0.0145459 |
1 | 2 | 2 | 0.8924211 | C | 0.8032023 |
1 | 2 | 2 | 0.8924211 | D | 0.2212085 |
1 | 2 | 2 | 0.8924211 | E | -1.5760299 |

Within each draw, `b_Intercept` is repeated as necessary to correspond to every index of `r_condition`. Thus, the `mutate` function from dplyr can be used to find their sum, `condition_mean` (which is the mean for each condition):

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  median_qi(condition_mean)
```

condition | condition_mean | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | 0.1898111 | -0.1464277 | 0.5360261 | 0.95 | median | qi |
B | 0.9984930 | 0.6399830 | 1.3346013 | 0.95 | median | qi |
C | 1.8379430 | 1.4924008 | 2.1761012 | 0.95 | median | qi |
D | 1.0165100 | 0.6697091 | 1.3640797 | 0.95 | median | qi |
E | -0.8840828 | -1.2365972 | -0.5282710 | 0.95 | median | qi |

`median_qi()` uses tidy evaluation (see `vignette("tidy-evaluation", package = "rlang")`), so it can take column expressions, not just column names. Thus, we can simplify the above example by moving the calculation of `condition_mean` from `mutate` into `median_qi()`:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition)
```

condition | condition_mean | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | 0.1898111 | -0.1464277 | 0.5360261 | 0.95 | median | qi |
B | 0.9984930 | 0.6399830 | 1.3346013 | 0.95 | median | qi |
C | 1.8379430 | 1.4924008 | 2.1761012 | 0.95 | median | qi |
D | 1.0165100 | 0.6697091 | 1.3640797 | 0.95 | median | qi |
E | -0.8840828 | -1.2365972 | -0.5282710 | 0.95 | median | qi |

`median_qi()` and its sister functions can produce an arbitrary number of probability intervals by setting the `.width =` argument:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .8, .5))
```

condition | condition_mean | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | 0.1898111 | -0.1464277 | 0.5360261 | 0.95 | median | qi |
B | 0.9984930 | 0.6399830 | 1.3346013 | 0.95 | median | qi |
C | 1.8379430 | 1.4924008 | 2.1761012 | 0.95 | median | qi |
D | 1.0165100 | 0.6697091 | 1.3640797 | 0.95 | median | qi |
E | -0.8840828 | -1.2365972 | -0.5282710 | 0.95 | median | qi |
A | 0.1898111 | -0.0339645 | 0.4211348 | 0.80 | median | qi |
B | 0.9984930 | 0.7657473 | 1.2241548 | 0.80 | median | qi |
C | 1.8379430 | 1.6071955 | 2.0572413 | 0.80 | median | qi |
D | 1.0165100 | 0.7855210 | 1.2417771 | 0.80 | median | qi |
E | -0.8840828 | -1.1073737 | -0.6556436 | 0.80 | median | qi |
A | 0.1898111 | 0.0777964 | 0.3065811 | 0.50 | median | qi |
B | 0.9984930 | 0.8811815 | 1.1139238 | 0.50 | median | qi |
C | 1.8379430 | 1.7170860 | 1.9531632 | 0.50 | median | qi |
D | 1.0165100 | 0.8977555 | 1.1336757 | 0.50 | median | qi |
E | -0.8840828 | -1.0081410 | -0.7674066 | 0.50 | median | qi |

The results are in a tidy format: one row per group and uncertainty interval width (`.width`). This facilitates plotting. For example, assigning `-.width` to the `size` aesthetic will show all intervals, making thicker lines correspond to smaller intervals. The `ggdist::geom_pointinterval()` geom automatically sets the `size` aesthetic appropriately based on the `.width` column in the data to produce plots of points with multiple probability levels:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .66)) %>%
  ggplot(aes(y = condition, x = condition_mean, xmin = .lower, xmax = .upper)) +
  geom_pointinterval()
```
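For comparison, the manual approach described above — mapping `-.width` to the `size` aesthetic yourself with plain `ggplot2::geom_pointrange()` — might look like this (a sketch; `geom_pointinterval()` handles this sizing for you):

```r
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .66)) %>%
  ggplot(aes(y = condition, x = condition_mean, xmin = .lower, xmax = .upper)) +
  # size decreases as .width increases, so wider intervals get thinner lines
  geom_pointrange(aes(size = -.width), fatten = 1.5) +
  scale_size_continuous(range = c(0.3, 1.2), guide = "none")
```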

To see the density along with the intervals, we can use `ggdist::stat_eye()` ("eye plots", which combine intervals with violin plots), or `ggdist::stat_halfeye()` (interval + density plots):

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, x = condition_mean)) +
  stat_halfeye()
```

Or say you want to annotate portions of the densities in color; the `fill` aesthetic can vary within a slab in all geoms and stats in the `ggdist::geom_slabinterval()` family, including `ggdist::stat_halfeye()`. For example, if you want to annotate a domain-specific region of practical equivalence (ROPE), you could do something like this:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, x = condition_mean, fill = stat(abs(x) < .8))) +
  stat_halfeye() +
  geom_vline(xintercept = c(-.8, .8), linetype = "dashed") +
  scale_fill_manual(values = c("gray80", "skyblue"))
```

## Other visualizations of distributions: `stat_slabinterval`

There are a variety of additional stats for visualizing distributions in the `ggdist::geom_slabinterval()` family of stats and geoms. See `vignette("slabinterval", package = "ggdist")` for an overview.

Rather than calculating conditional means manually as in the previous example, we could use `add_epred_draws()`, which is analogous to `brms::posterior_epred()` (giving posterior draws from the expectation of the posterior predictive; i.e. posterior distributions of conditional means), but uses a tidy data format. We can combine it with `modelr::data_grid()` to first generate a grid describing the predictions we want, then transform that grid into a long-format data frame of draws from conditional means:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m) %>%
  head(10)
```

condition | .row | .chain | .iteration | .draw | .epred |
---|---|---|---|---|---|
A | 1 | NA | NA | 1 | 0.4062028 |
A | 1 | NA | NA | 2 | 0.1138354 |
A | 1 | NA | NA | 3 | 0.3609984 |
A | 1 | NA | NA | 4 | 0.2824506 |
A | 1 | NA | NA | 5 | 0.2525603 |
A | 1 | NA | NA | 6 | 0.5960186 |
A | 1 | NA | NA | 7 | -0.1232696 |
A | 1 | NA | NA | 8 | 0.3139605 |
A | 1 | NA | NA | 9 | 0.2640973 |
A | 1 | NA | NA | 10 | 0.2255247 |

To plot this example, we'll also show the use of `ggdist::stat_pointinterval()` instead of `ggdist::geom_pointinterval()`, which summarizes draws into points and intervals within ggplot:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m) %>%
  ggplot(aes(x = .epred, y = condition)) +
  stat_pointinterval(.width = c(.66, .95))
```

Intervals are nice if the alpha level happens to line up with whatever decision you are trying to make, but getting a shape of the posterior is better (hence eye plots, above). On the other hand, making inferences from density plots is imprecise (estimating the area of one shape as a proportion of another is a hard perceptual task). Reasoning about probability in frequency formats is easier, motivating quantile dotplots (Kay et al. 2016, Fernandes et al. 2018), which also allow precise estimation of arbitrary intervals (down to the dot resolution of the plot, 100 in the example below).

Within the slabinterval family of geoms in `ggdist` is the `dots` and `dotsinterval` family, which automatically determines appropriate bin sizes for dotplots and can calculate quantiles from samples to construct quantile dotplots. `ggdist::stat_dotsinterval()` is the variant designed for use on samples:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m) %>%
  ggplot(aes(x = .epred, y = condition)) +
  stat_dotsinterval(quantiles = 100)
```

The idea is to get away from thinking about the posterior as indicating one canonical point or interval, but instead to represent it as (say) 100 approximately equally likely points.
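Conceptually, `quantiles = 100` amounts to taking 100 evenly-spaced quantiles of the draws and plotting those as a dotplot. Here is a hand-rolled sketch of that idea using base R's `quantile()` on simulated stand-in draws (illustrative only; `stat_dotsinterval()` also chooses the bin size for you):

```r
library(ggplot2)

set.seed(1)
draws = rnorm(4000, mean = 1, sd = 0.5)  # stand-in for posterior draws of a mean

# 100 approximately equally likely points from the distribution of draws
q100 = quantile(draws, ppoints(100))

ggplot(data.frame(x = q100), aes(x = x)) +
  geom_dotplot(binwidth = 0.09)
```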

Where `add_epred_draws()` is analogous to `brms::posterior_epred()`, `add_predicted_draws()` is analogous to `brms::posterior_predict()`, giving draws from the posterior predictive distribution.

Here is an example of posterior predictive distributions plotted using `ggdist::stat_slab()`:

```
ABC %>%
  data_grid(condition) %>%
  add_predicted_draws(m) %>%
  ggplot(aes(x = .prediction, y = condition)) +
  stat_slab()
```

We could also use `ggdist::stat_interval()` to plot predictive bands alongside the data:

```
ABC %>%
  data_grid(condition) %>%
  add_predicted_draws(m) %>%
  ggplot(aes(y = condition, x = .prediction)) +
  stat_interval(.width = c(.50, .80, .95, .99)) +
  geom_point(aes(x = response), data = ABC) +
  scale_color_brewer()
```

Altogether, data, posterior predictions, and posterior distributions of the means:

```
= ABC %>%
grid data_grid(condition)
= grid %>%
means add_epred_draws(m)
= grid %>%
preds add_predicted_draws(m)
%>%
ABC ggplot(aes(y = condition, x = response)) +
stat_interval(aes(x = .prediction), data = preds) +
stat_pointinterval(aes(x = .epred), data = means, .width = c(.66, .95), position = position_nudge(y = -0.3)) +
geom_point() +
scale_color_brewer()
```

The above approach to posterior predictions integrates over the parameter uncertainty to give a single posterior predictive distribution. Another approach, often used by John Kruschke in his book Doing Bayesian Data Analysis, is to attempt to show both the predictive uncertainty and the parameter uncertainty simultaneously by showing several possible predictive distributions implied by the posterior.

We can do this pretty easily by asking for the distributional parameters for a given prediction implied by the posterior. We'll do it explicitly here by setting `dpar = c("mu", "sigma")` in `add_epred_draws()`. Rather than specifying the parameters explicitly, you can also just set `dpar = TRUE` to get draws from all distributional parameters in a model, and this will work for any response distribution supported by brms. Then, we can select a small number of draws using `sample_draws()` and use `ggdist::stat_dist_slab()` to visualize each predictive distribution implied by the values of `mu` and `sigma`:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m, dpar = c("mu", "sigma")) %>%
  sample_draws(30) %>%
  ggplot(aes(y = condition)) +
  stat_dist_slab(aes(dist = "norm", arg1 = mu, arg2 = sigma),
    slab_color = "gray65", alpha = 1/10, fill = NA
  ) +
  geom_point(aes(x = response), data = ABC, shape = 21, fill = "#9ECAE1", size = 2)
```

For a more detailed description of these charts (and some useful variations on them), see Solomon Kurz’s excellent blog post on the topic.

We could even combine the Kruschke-style plots of predictive distributions with half-eyes showing the posterior means:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m, dpar = c("mu", "sigma")) %>%
  ggplot(aes(x = condition)) +
  stat_dist_slab(aes(dist = "norm", arg1 = mu, arg2 = sigma),
    slab_color = "gray65", alpha = 1/10, fill = NA, data = . %>% sample_draws(30), scale = .5
  ) +
  stat_halfeye(aes(y = .epred), side = "bottom", scale = .5) +
  geom_point(aes(y = response), data = ABC, shape = 21, fill = "#9ECAE1", size = 2, position = position_nudge(x = -.2))
```

To demonstrate drawing fit curves with uncertainty, let's fit a slightly naive model to part of the `mtcars` dataset:

```
m_mpg = brm(
  mpg ~ hp * cyl,
  data = mtcars,
  file = "models/tidy-brms_m_mpg.rds"  # cache model (can be removed)
)
```

We can draw fit curves with probability bands:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 51)) %>%
  add_epred_draws(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  stat_lineribbon(aes(y = .epred)) +
  geom_point(data = mtcars) +
  scale_fill_brewer(palette = "Greys") +
  scale_color_brewer(palette = "Set2")
```

Or we can sample a reasonable number of fit lines (say 100) and overplot them:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  # NOTE: this shows the use of ndraws to subsample within add_epred_draws()
  # ONLY do this IF you are planning to make spaghetti plots, etc.
  # NEVER subsample to a small sample to plot intervals, densities, etc.
  add_epred_draws(m_mpg, ndraws = 100) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  geom_line(aes(y = .epred, group = paste(cyl, .draw)), alpha = .1) +
  geom_point(data = mtcars) +
  scale_color_brewer(palette = "Dark2")
```

Or we can create animated hypothetical outcome plots (HOPs) of fit lines:

```
set.seed(123456)
# NOTE: using a small number of draws to keep this example
# small, but in practice you probably want 50 or 100
ndraws = 20

p = mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_epred_draws(m_mpg, ndraws = ndraws) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  geom_line(aes(y = .epred, group = paste(cyl, .draw))) +
  geom_point(data = mtcars) +
  scale_color_brewer(palette = "Dark2") +
  transition_states(.draw, 0, 1) +
  shadow_mark(future = TRUE, color = "gray50", alpha = 1/20)

animate(p, nframes = ndraws, fps = 2.5, width = 432, height = 288, res = 96, dev = "png", type = "cairo")
```

Or we could plot posterior predictions (instead of means). For this example we'll also use `alpha` to make it easier to see overlapping bands:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_predicted_draws(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl), fill = ordered(cyl))) +
  stat_lineribbon(aes(y = .prediction), .width = c(.95, .80, .50), alpha = 1/4) +
  geom_point(data = mtcars) +
  scale_fill_brewer(palette = "Set2") +
  scale_color_brewer(palette = "Dark2")
```

This gets difficult to judge by group, so it is probably better to facet into multiple plots. Fortunately, since we are using ggplot, that functionality is built in:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_predicted_draws(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg)) +
  stat_lineribbon(aes(y = .prediction), .width = c(.99, .95, .8, .5), color = brewer.pal(5, "Blues")[[5]]) +
  geom_point(data = mtcars) +
  scale_fill_brewer() +
  facet_grid(. ~ cyl, space = "free_x", scales = "free_x")
```

`brms::brm()` also allows us to set up submodels for parameters of the response distribution *other than* the location (e.g., mean). For example, we can allow a variance parameter, such as the standard deviation, to also be some function of the predictors.

This approach can be helpful in cases of non-constant variance (also called *heteroskedasticity* by folks who like obfuscation via Latin). E.g., imagine two groups, each with a different mean response *and variance*:

```
set.seed(1234)
AB = tibble(
  group = rep(c("a", "b"), each = 20),
  response = rnorm(40, mean = rep(c(1, 5), each = 20), sd = rep(c(1, 3), each = 20))
)

AB %>%
  ggplot(aes(x = response, y = group)) +
  geom_point()
```

Here is a model that lets the mean *and standard deviation* of `response` depend on `group`:

```
m_ab = brm(
  bf(
    response ~ group,
    sigma ~ group
  ),
  data = AB,
  file = "models/tidy-brms_m_ab.rds"  # cache model (can be removed)
)
```

We can plot the posterior distribution of the mean `response` alongside posterior predictive intervals and the data:

```
= AB %>%
grid data_grid(group)
= grid %>%
means add_epred_draws(m_ab)
= grid %>%
preds add_predicted_draws(m_ab)
%>%
AB ggplot(aes(x = response, y = group)) +
stat_halfeye(aes(x = .epred), scale = 0.6, position = position_nudge(y = 0.175), data = means) +
stat_interval(aes(x = .prediction), data = preds) +
geom_point(data = AB) +
scale_color_brewer()
```

This shows posteriors of the mean of each group (black intervals and the density plots) and posterior predictive intervals (blue).

The predictive intervals in group `b` are larger than in group `a` because the model fits a different standard deviation for each group. We can see how the corresponding distributional parameter, `sigma`, changes by extracting it using the `dpar` argument to `add_epred_draws()`:

```
grid %>%
  add_epred_draws(m_ab, dpar = TRUE) %>%
  ggplot(aes(x = sigma, y = group)) +
  stat_halfeye() +
  geom_vline(xintercept = 0, linetype = "dashed")
```

By setting `dpar = TRUE`, all distributional parameters are added as additional columns in the result of `add_epred_draws()`; if you only want a specific parameter, you can specify it (or a list of just the parameters you want). In the above model, `dpar = TRUE` is equivalent to `dpar = list("mu", "sigma")`.

If we wish to compare the means from each condition, `compare_levels()` facilitates comparisons of the value of some variable across levels of a factor. By default it computes all pairwise differences.

Let's demonstrate `compare_levels()` with `ggdist::stat_halfeye()`. We'll also re-order by the mean of the difference:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  compare_levels(r_condition, by = condition) %>%
  ungroup() %>%
  mutate(condition = reorder(condition, r_condition)) %>%
  ggplot(aes(y = condition, x = r_condition)) +
  stat_halfeye() +
  geom_vline(xintercept = 0, linetype = "dashed")
```
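To build intuition for what `compare_levels()` computes, here is a hand-rolled version of a single pairwise comparison, subtracting the draws for one level from another within each draw (a sketch using `tidyr::pivot_wider()`; this is not how `compare_levels()` is actually implemented):

```r
library(dplyr)
library(tidyr)

m %>%
  spread_draws(r_condition[condition,]) %>%
  ungroup() %>%
  pivot_wider(names_from = condition, values_from = r_condition) %>%
  mutate(`B - A` = B - A) %>%   # difference computed within each draw
  median_qi(`B - A`)
```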

The `posterior_epred()` function for ordinal and multinomial regression models in brms returns multiple variables for each draw: one for each outcome category (in contrast to `rstanarm::stan_polr()` models, which return draws from the latent linear predictor). The philosophy of `tidybayes` is to tidy whatever format is output by a model, so in keeping with that philosophy, when applied to ordinal and multinomial `brms` models, `add_epred_draws()` adds an additional column called `.category` and outputs a separate row for each category for every draw and predictor.

We’ll fit a model using the `mtcars` dataset that predicts the number of cylinders in a car given the car’s mileage (in miles per gallon). While this is a little backwards causality-wise (presumably the number of cylinders causes the mileage, if anything), that does not mean it is not a fine prediction task (I could probably tell someone who knows something about cars the MPG of a car and they could do reasonably well at guessing the number of cylinders in the engine).

Before we fit the model, let’s clean the dataset by making the `cyl` column an ordered factor (by default it is just a number):

```
mtcars_clean = mtcars %>%
  mutate(cyl = ordered(cyl))

head(mtcars_clean)
```

 | mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb
---|---|---|---|---|---|---|---|---|---|---|---
Mazda RX4 | 21.0 | 6 | 160 | 110 | 3.90 | 2.620 | 16.46 | 0 | 1 | 4 | 4
Mazda RX4 Wag | 21.0 | 6 | 160 | 110 | 3.90 | 2.875 | 17.02 | 0 | 1 | 4 | 4
Datsun 710 | 22.8 | 4 | 108 | 93 | 3.85 | 2.320 | 18.61 | 1 | 1 | 4 | 1
Hornet 4 Drive | 21.4 | 6 | 258 | 110 | 3.08 | 3.215 | 19.44 | 1 | 0 | 3 | 1
Hornet Sportabout | 18.7 | 8 | 360 | 175 | 3.15 | 3.440 | 17.02 | 0 | 0 | 3 | 2
Valiant | 18.1 | 6 | 225 | 105 | 2.76 | 3.460 | 20.22 | 1 | 0 | 3 | 1

Then we’ll fit an ordinal regression model:

```
= brm(
m_cyl ~ mpg,
cyl data = mtcars_clean,
family = cumulative,
seed = 58393,
file = "models/tidy-brms_m_cyl.rds" # cache model (can be removed)
)
```

`add_epred_draws()` will include a `.category` column, and `.epred` will contain draws from the posterior distribution for the probability that the response is in that category. For example, here is the fit for the first row in the dataset:

```
tibble(mpg = 21) %>%
  add_epred_draws(m_cyl) %>%
  median_qi(.epred)
```

mpg | .row | .category | .epred | .lower | .upper | .width | .point | .interval
---|---|---|---|---|---|---|---|---
21 | 1 | 4 | 0.3462689 | 0.0951898 | 0.7103225 | 0.95 | median | qi
21 | 1 | 6 | 0.6198780 | 0.2634698 | 0.8916873 | 0.95 | median | qi
21 | 1 | 8 | 0.0135699 | 0.0002870 | 0.1237972 | 0.95 | median | qi

Note: for the `.category` variable to retain its original factor level names you must be using `brms` greater than or equal to version 2.15.9.
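One quick way to verify your installed version meets that requirement (a small check, not part of the original vignette):

```
# TRUE if the installed brms is new enough to preserve factor level names
packageVersion("brms") >= "2.15.9"
```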

We could plot fit lines for predicted probabilities against the dataset:

```
data_plot = mtcars_clean %>%
  ggplot(aes(x = mpg, y = cyl, color = cyl)) +
  geom_point() +
  scale_color_brewer(palette = "Dark2", name = "cyl")

fit_plot = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl") %>%
  ggplot(aes(x = mpg, y = `P(cyl | mpg)`, color = cyl)) +
  stat_lineribbon(aes(fill = cyl), alpha = 1/5) +
  scale_color_brewer(palette = "Dark2") +
  scale_fill_brewer(palette = "Dark2")

plot_grid(ncol = 1, align = "v",
  data_plot,
  fit_plot
)
```

The above display does not let you see the correlation between `P(cyl|mpg)` for different values of `cyl` at a particular value of `mpg`. For example, in the portion of the posterior where `P(cyl = 6|mpg = 20)` is high, `P(cyl = 4|mpg = 20)` and `P(cyl = 8|mpg = 20)` must be low (since these must add up to 1).
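Because the categories partition the outcome, the probabilities within each draw must sum to 1, and we can check that constraint directly; a quick sanity check (not part of the original analysis):

```
# within each posterior draw, the three category probabilities should total 1
tibble(mpg = 20) %>%
  add_epred_draws(m_cyl) %>%
  group_by(.draw) %>%
  summarise(total = sum(.epred))
# `total` should equal 1 (up to floating-point error) in every row
```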

One way to see this correlation might be to employ hypothetical outcome plots (HOPs) just for the fit line, “detaching” it from the ribbon (another alternative would be to use HOPs on top of line ensembles, as demonstrated earlier in this document). By employing animation, you can see how the lines move in tandem or opposition to each other, revealing some patterns in how they are correlated:

```
# NOTE: using a small number of draws to keep this example
# small, but in practice you probably want 50 or 100
ndraws = 20

p = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl") %>%
  ggplot(aes(x = mpg, y = `P(cyl | mpg)`, color = cyl)) +
  # we remove the `.draw` column from the data for stat_lineribbon so that the same ribbons
  # are drawn on every frame (since we use .draw to determine the transitions below)
  stat_lineribbon(aes(fill = cyl), alpha = 1/5, color = NA, data = . %>% select(-.draw)) +
  # we use sample_draws to subsample at the level of geom_line (rather than for the full dataset
  # as in previous HOPs examples) because we need the full set of draws for stat_lineribbon above
  geom_line(aes(group = paste(.draw, cyl)), size = 1, data = . %>% sample_draws(ndraws)) +
  scale_color_brewer(palette = "Dark2") +
  scale_fill_brewer(palette = "Dark2") +
  transition_manual(.draw)

animate(p, nframes = ndraws, fps = 2.5, width = 576, height = 192, res = 96, dev = "png", type = "cairo")
```

Notice how the lines move together, and how they move up or down together or in opposition. We could take a slice through these lines at an x position in the above chart (say, `mpg = 20`) and look at the correlation between them using a scatterplot matrix:

```
tibble(mpg = 20) %>%
  add_epred_draws(m_cyl, value = "P(cyl | mpg = 20)", category = "cyl") %>%
  ungroup() %>%
  select(.draw, cyl, `P(cyl | mpg = 20)`) %>%
  gather_pairs(cyl, `P(cyl | mpg = 20)`, triangle = "both") %>%
  filter(.row != .col) %>%
  ggplot(aes(.x, .y)) +
  geom_point(alpha = 1/50) +
  facet_grid(.row ~ .col) +
  ylab("P(cyl = row | mpg = 20)") +
  xlab("P(cyl = col | mpg = 20)")
```

While talking about the mean for an ordinal distribution often does not make sense, in this particular case one could argue that the expected number of cylinders for a car given its miles per gallon is a meaningful quantity. We could plot the posterior distribution for the average number of cylinders for a car given a particular miles per gallon as follows:

\[
\textrm{E}[\textrm{cyl}|\textrm{mpg}=m] = \sum_{c \in \{4,6,8\}} c\cdot \textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)
\]

We can use the above formula to derive a posterior distribution for \(\textrm{E}[\textrm{cyl}|\textrm{mpg}=m]\) from the model. The model gives us a posterior distribution for \(\textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)\): when `mpg` \(= m\), the response-scale linear predictor (the `.epred` column from `add_epred_draws()`) for `cyl` (aka `.category`) \(= c\) is \(\textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)\). Thus, we can group within `.draw` and then use `summarise` to calculate the expected value:

```
label_data_function = . %>%
  ungroup() %>%
  filter(mpg == quantile(mpg, .47)) %>%
  summarise_if(is.numeric, mean)

data_plot_with_mean = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  # NOTE: this shows the use of ndraws to subsample within add_epred_draws()
  # ONLY do this IF you are planning to make spaghetti plots, etc.
  # NEVER subsample to a small sample to plot intervals, densities, etc.
  add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl", ndraws = 100) %>%
  group_by(mpg, .draw) %>%
  # calculate expected cylinder value
  mutate(cyl = as.numeric(as.character(cyl))) %>%
  summarise(cyl = sum(cyl * `P(cyl | mpg)`), .groups = "drop") %>%
  ggplot(aes(x = mpg, y = cyl)) +
  geom_line(aes(group = .draw), alpha = 5/100) +
  geom_point(aes(y = as.numeric(as.character(cyl)), fill = cyl), data = mtcars_clean, shape = 21, size = 2) +
  geom_text(aes(x = mpg + 4), label = "E[cyl | mpg]", data = label_data_function, hjust = 0) +
  geom_segment(aes(yend = cyl, xend = mpg + 3.9), data = label_data_function) +
  scale_fill_brewer(palette = "Set2", name = "cyl")

plot_grid(ncol = 1, align = "v",
  data_plot_with_mean,
  fit_plot
)
```

Now let’s do some posterior predictive checking: do posterior predictions look like the data? For this, we’ll make new predictions at the same values of `mpg` as were present in the original dataset (gray circles) and plot these with the observed data (colored circles):

```
mtcars_clean %>%
  # we use `select` instead of `data_grid` here because we want to make posterior predictions
  # for exactly the same set of observations we have in the original data
  select(mpg) %>%
  add_predicted_draws(m_cyl, seed = 1234) %>%
  # recover original factor labels
  mutate(cyl = levels(mtcars_clean$cyl)[.prediction]) %>%
  ggplot(aes(x = mpg, y = cyl)) +
  geom_count(color = "gray75") +
  geom_point(aes(fill = cyl), data = mtcars_clean, shape = 21, size = 2) +
  scale_fill_brewer(palette = "Dark2") +
  geom_label_repel(
    data = . %>% ungroup() %>% filter(cyl == "8") %>% filter(mpg == max(mpg)) %>% dplyr::slice(1),
    label = "posterior predictions", xlim = c(26, NA), ylim = c(NA, 2.8), point.padding = 0.3,
    label.size = NA, color = "gray50", segment.color = "gray75"
  ) +
  geom_label_repel(
    data = mtcars_clean %>% filter(cyl == "6") %>% filter(mpg == max(mpg)) %>% dplyr::slice(1),
    label = "observed data", xlim = c(26, NA), ylim = c(2.2, NA), point.padding = 0.2,
    label.size = NA, segment.color = "gray35"
  )
```