Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018) <arXiv:1801.01489>, accumulated local effects plots described by Apley (2018) <arXiv:1612.08468>, partial dependence plots described by Friedman (2001) <http://www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013) <doi:10.1080/10618600.2014.907095>, local models (a variant of 'lime') described by Ribeiro et al. (2016) <arXiv:1602.04938>, the Shapley value described by Strumbelj et al. (2014) <doi:10.1007/s10115-013-0679-x>, feature interactions described by Friedman and Popescu (2008) <doi:10.1214/07-AOAS148>, and tree surrogate models.
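The methods listed above share a common workflow: the fitted model and its data are wrapped in a Predictor object, which is then passed to the individual method classes. Below is a minimal usage sketch, assuming the Predictor, FeatureImp, FeatureEffect, and Shapley classes from the package's reference manual; the example model and data (randomForest, the Boston data from MASS) are taken from the Suggests list.

``` r
library("iml")
library("randomForest")

# Fit any model; iml is model-agnostic
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap model and data in a Predictor, the common interface for all methods
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018)
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Accumulated local effects for a single feature (Apley 2018)
ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
plot(ale)

# Shapley values for one observation (Strumbelj et al. 2014)
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)
```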
| Version: | 0.10.0 |
| Imports: | checkmate, data.table, Formula, future, future.apply, ggplot2, gridExtra, Metrics, prediction, R6 |
| Suggests: | ALEPlot, bench, caret, covr, e1071, future.callr, glmnet, gower, h2o, keras, knitr, MASS, mlr, mlr3, party, partykit, patchwork, randomForest, ranger, rmarkdown, rpart, testthat, yaImpute |
| Published: | 2020-03-26 |
| Author: | Christoph Molnar [aut, cre], Patrick Schratz |
| Maintainer: | Christoph Molnar <christoph.molnar at gmail.com> |
| BugReports: | https://github.com/christophM/iml/issues |
| License: | MIT + file LICENSE |
| URL: | https://github.com/christophM/iml |
| NeedsCompilation: | no |
| Citation: | iml citation info |
| Materials: | NEWS |
| CRAN checks: | iml results |
| Reference manual: | iml.pdf |
| Vignettes: | Introduction to iml: Interpretable Machine Learning in R, Parallel computation of interpretation methods |
| Package source: | iml_0.10.0.tar.gz |
| Windows binaries: | r-devel: iml_0.10.0.zip, r-release: iml_0.10.0.zip, r-oldrel: iml_0.10.0.zip |
| macOS binaries: | r-release: iml_0.10.0.tgz, r-oldrel: iml_0.10.0.tgz |
| Old sources: | iml archive |
| Reverse imports: | DriveML, moreparty |
Please use the canonical form https://CRAN.R-project.org/package=iml to link to this page.