This package is designed to help with assessing the quality of predictions. It provides a collection of proper scoring rules and metrics that can be accessed independently or collectively to score predictions automatically. Some of these metrics, for example those for evaluating bias or for assessing calibration using the probability integral transform, go beyond the scope of existing packages like `scoringRules`. Through the function `eval_forecasts`, it also provides functionality that automatically and conveniently scores forecasts using `data.table`.
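As a minimal, hedged sketch of what this can look like (the column names `true_value`, `prediction` and `sample`, as well as the toy data, are assumptions for illustration):

``` r
library(scoringutils)
library(data.table)

# toy sample-based forecasts: 3 observations with 50 predictive samples each
# (column names are assumptions based on the package documentation)
forecasts <- data.table(
  id = rep(1:3, each = 50),
  true_value = rep(c(3, 7, 4), each = 50),
  sample = rep(1:50, times = 3),
  prediction = rpois(150, lambda = 5)
)

# score all forecasts in one call; results come back as a data.table
scores <- eval_forecasts(forecasts)
```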
Predictions can be either probabilistic forecasts (generally predictive samples generated by Markov Chain Monte Carlo procedures), quantile forecasts or point forecasts. The type of the predictions and the true values can be either continuous, integer, or binary.
Install the stable version from CRAN using:
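``` r
# package name inferred from the functions described in this README
install.packages("scoringutils")
```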
New features not yet available on CRAN can be accessed via `{drat}`:
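``` r
# assumes development releases are published to the "epiforecasts" drat
# repository; adjust the repository name if needed
install.packages("drat")
drat::addRepo("epiforecasts")
install.packages("scoringutils")
```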
The version of the package under active development can be installed with:
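``` r
# install directly from GitHub
# (repository path assumed to be epiforecasts/scoringutils)
remotes::install_github("epiforecasts/scoringutils")
```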
The following metrics, among others, are available (see the sketch after this list for calling them directly):

- `pit` (probability integral transform)
- `bias`
- `sharpness`
- `crps` (continuous ranked probability score)
- `dss` (Dawid-Sebastiani score)
- `brier_score` (Brier score)
- `interval_score`
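As a hedged illustration (the exact signatures and input format are assumptions based on the package documentation), the sample-based metrics can be called directly on a vector of true values and a matrix of predictive samples:

``` r
library(scoringutils)

# 2 observations with 100 predictive samples each, one row per observation
# (this input format is an assumption based on the package documentation)
true_values <- c(2, 5)
predictions <- matrix(rpois(200, lambda = 4), nrow = 2)

crps(true_values, predictions)  # continuous ranked probability score
bias(true_values, predictions)  # forecast bias
```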
More information on the different metrics, along with examples, can be found in the package vignette.