(These documents take a long time to create, so only the code is shown here. The full version is at https://tidymodels.github.io/tidyposterior.)
The example that we will use here is from the analysis of a fairly large classification data set using 10-fold cross-validation with three models. Looking at the accuracy values, the differences between the models are pretty clear. Here, we will focus on the area under the ROC curve:
library(tidyposterior)
library(tidyverse)

data("precise_example")

# Keep the resample labels and the ROC columns, then simplify the column names
rocs <- precise_example %>%
  select(id, contains("ROC")) %>%
  setNames(tolower(gsub("_ROC$", "", names(.))))

rocs
library(ggplot2)

# The rocs object is not a real `rset` object so use basic `gather()`
rocs_stacked <- gather(rocs, "model", "statistic", -id)

ggplot(rocs_stacked, aes(x = model, y = statistic, group = id, col = id)) +
  geom_line(alpha = .75) +
  theme(legend.position = "none")
Since the lines are fairly parallel, there is likely to be a strong resample-to-resample effect. Note that the variation is fairly small; the within-model results don’t vary a lot and are not near the ceiling of performance (i.e., an AUC of one). It also seems pretty clear that the models are producing different levels of performance, but we’ll use this package to make that comparison formally. Finally, there seems to be roughly equal variation for each model despite the difference in performance.
If `rocs` had been produced by the rsample package, it would be ready to use with tidyposterior directly, since the package has a method for `rset` objects.
The main question is:
When looking at resampling results, are the differences between models “real”?
To answer that, a model can be created where the outcome is the resampling statistics (the area under the ROC curve in this example). These values are explained by the model types. In doing this, we can get parameter estimates for each model’s effect on the resampled ROC values and make statistical (and practical) comparisons between models.
We will try a simple linear model with Gaussian errors that has a random effect for the resamples so that the within-resample correlation can be estimated. Although the outcome is bounded in the interval [0, 1], the variability in these estimates might be small enough that a Gaussian model still fits well.
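Conceptually, this corresponds to a Bayesian mixed model along the lines of the sketch below (`perf_mod()` sets this up for us in the next step; the object name `resample_fit` is purely illustrative):

library(rstanarm)

# A conceptual sketch only, not the package's exact call: Gaussian errors,
# a fixed effect for the model type, and a random intercept for each resample
resample_fit <- stan_glmer(
  statistic ~ model + (1 | id),
  data = rocs_stacked,
  family = gaussian()
)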
To fit the model, `perf_mod()` will be used; it fits the model with the `stan_glmer()` function in the rstanarm package.
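A call along these lines does the fitting; the `seed` and `iter` values below are illustrative placeholders (passed through to `stan_glmer()`), not the settings used for the original results:

# Fit the Bayesian hierarchical model; `seed` and `iter` are placeholders
# that are passed through to stan_glmer()
roc_model <- perf_mod(rocs, seed = 4344, iter = 5000)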
The `stan_glmer` model is contained in the element `roc_model$stan`.
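Printing that element shows the underlying fit:

# The underlying rstanarm (stan_glmer) model object
roc_model$stan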
To evaluate the validity of this fit, the shinystan package can be used to generate an interactive assessment of the model results. One other thing that we can do is to examine the posterior distributions to see if they make sense in terms of the range of values.
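For the interactive assessment mentioned above, a single call opens the app:

# Opens a browser-based diagnostic app; not run when building this document
# shinystan::launch_shinystan(roc_model$stan)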
The `tidy()` function can be used to extract the distributions into a simple data frame.
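For example (the name `roc_post` matches its use in the plot below):

# Extract the posterior distributions of mean performance for each model
roc_post <- tidy(roc_model)
roc_post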
There is a basic `ggplot()` method for this object, and we can overlay the observed statistics for each model:
ggplot(roc_post) +
  # Add the observed data to check for consistency
  geom_point(
    data = rocs_stacked,
    aes(x = model, y = statistic),
    alpha = .5, col = "blue"
  )
These results look fairly reasonable given that we estimated a common variance for each of the models.
We’ll compare the generalized linear model with the neural network. Before doing so, it helps to specify what a real difference between models would be. Suppose that a 2% increase in the area under the ROC curve was considered to be a substantive result. We can add this into the analysis.
First, we can compute the posterior for the difference in the area under the ROC curve for the two models (parameterized as `nnet` - `glm`).
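A sketch of that computation with `contrast_models()` (the object name `glm_v_nnet` is just for illustration):

# Posterior of the nnet - glm difference in the ROC AUC
# (`glm_v_nnet` is an illustrative name)
glm_v_nnet <- contrast_models(roc_model, list_1 = "nnet", list_2 = "glm")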
The `summary()` function can be used to quantify this difference. It has an argument called `size` where we can add our belief about the size of a true difference.
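For example, using the 2% threshold from above (and the illustrative `glm_v_nnet` object):

# Quantify the difference; `size` encodes the 2% practical-effect threshold
summary(glm_v_nnet, size = 0.02)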
The `probability` column indicates the proportion of the posterior distribution that is greater than zero. This result indicates that the entire posterior distribution of the difference is greater than zero. The credible intervals reflect the large difference in the area under the ROC curves for these models.
Before discussing the ROPE estimates, let’s plot the posterior distribution of the differences:
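A sketch, assuming the `ggplot()` method for the difference object follows the same pattern as the one used above (and using the illustrative `glm_v_nnet` name):

# Plot the posterior of the nnet - glm differences; `size` marks the +/- 2% region
ggplot(glm_v_nnet, size = 0.02)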
The column `pract_neg` reflects the area where the posterior distribution is less than -2% (i.e., practically negative). Similarly, the `pract_pos` column shows that most of the area is greater than 2%, which leads us to believe that this is truly a substantial difference in performance. The `pract_equiv` column reflects how much of the posterior is between [-2%, 2%]. If this were near one, it might indicate that the models are not practically different (based on the yardstick of 2%).