Local explanations of machine learning models describe how individual features contributed to a single prediction. This package implements an explanation method based on LIME (Local Interpretable Model-agnostic Explanations; see Tulio Ribeiro, Singh, and Guestrin (2016) <doi:10.1145/2939672.2939778>) in which interpretable inputs are created based on the local, rather than global, behaviour of each original feature.
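A minimal usage sketch follows, assuming the individual_surrogate_model() interface described in the package vignettes, with a DALEX explainer wrapped around a randomForest model (both packages appear in Suggests); the dataset, argument values, and object names are illustrative only.

    # Sketch only: assumes the individual_surrogate_model() interface from the
    # package vignettes; DALEX and randomForest are taken from Suggests.
    library(DALEX)
    library(localModel)
    library(randomForest)

    data(apartments, package = "DALEX")

    # Fit any black-box model; a small random forest serves as an example here.
    rf_model <- randomForest(m2.price ~ ., data = apartments, ntree = 50)

    # Wrap the model in a DALEX explainer so localModel can query it uniformly.
    explainer <- explain(rf_model,
                         data = apartments[, -1],
                         y = apartments$m2.price)

    # Explain a single prediction with a local interpretable surrogate model.
    local_expl <- individual_surrogate_model(explainer,
                                             apartments[5, -1],
                                             size = 500,
                                             seed = 17)
    plot(local_expl)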
Version: | 0.3.12 |
Depends: | R (≥ 3.5) |
Imports: | glmnet, ggplot2, partykit, ingredients |
Suggests: | covr, knitr, rmarkdown, randomForest, DALEX, testthat |
Published: | 2019-12-18 |
Author: | Mateusz Staniak [aut, cre], Przemyslaw Biecek [aut], Krystian Igras [ctb], Alicja Gosiewska [ctb] |
Maintainer: | Mateusz Staniak <m.staniak at mini.pw.edu.pl> |
BugReports: | https://github.com/ModelOriented/localModel/issues |
License: | GPL-2 | GPL-3 [expanded from: GPL] |
URL: | https://github.com/ModelOriented/localModel |
NeedsCompilation: | no |
Materials: | README NEWS |
CRAN checks: | localModel results |
Reference manual: | localModel.pdf |
Vignettes: | Explaining classification models with localModel package, Methodology behind localModel package, Introduction to localModel package |
Package source: | localModel_0.3.12.tar.gz |
Windows binaries: | r-devel: localModel_0.3.12.zip, r-release: localModel_0.3.12.zip, r-oldrel: localModel_0.3.12.zip |
macOS binaries: | r-release: localModel_0.3.12.tgz, r-oldrel: localModel_0.3.12.tgz |
Old sources: | localModel archive |
Please use the canonical form https://CRAN.R-project.org/package=localModel to link to this page.