The Markov Decision Processes (MDP) toolbox provides functions for solving discrete-time Markov Decision Processes: finite-horizon, value iteration, policy iteration, and linear programming algorithms, together with several variants. It also provides functions related to Reinforcement Learning. A short usage sketch is shown below.
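As an illustration only, here is a minimal sketch of solving a toy MDP with the package's value iteration and policy iteration solvers. The function names (mdp_value_iteration, mdp_policy_iteration) follow the package reference manual; the 2-state, 2-action transition and reward matrices are invented for this example.

```r
# Minimal sketch: a toy 2-state, 2-action MDP (matrices invented here)
library(MDPtoolbox)

# Transition probabilities: array of dimension (states, states, actions)
P <- array(0, c(2, 2, 2))
P[,,1] <- matrix(c(0.5, 0.5,
                   0.8, 0.2), nrow = 2, byrow = TRUE)
P[,,2] <- matrix(c(0.0, 1.0,
                   0.1, 0.9), nrow = 2, byrow = TRUE)

# Rewards: matrix of dimension (states, actions)
R <- matrix(c( 5, 10,
              -1,  2), nrow = 2, byrow = TRUE)

# Solve with value iteration (discount factor 0.9)
vi <- mdp_value_iteration(P, R, discount = 0.9)
vi$policy   # optimal action for each state
vi$V        # estimated value function

# Policy iteration should recover the same optimal policy
pi <- mdp_policy_iteration(P, R, discount = 0.9)
pi$policy
```

Both solvers return a list containing the computed policy and value function, so the two results can be compared directly.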
Version: 4.0.3
Depends: Matrix, linprog
Published: 2017-03-03
Author: Iadine Chades, Guillaume Chapron, Marie-Josee Cros, Frederick Garcia, Regis Sabbadin
Maintainer: Guillaume Chapron <gchapron at carnivoreconservation.org>
License: BSD_3_clause + file LICENSE
NeedsCompilation: no
CRAN checks: MDPtoolbox results
Reference manual: MDPtoolbox.pdf
Package source: MDPtoolbox_4.0.3.tar.gz
Windows binaries: r-devel: MDPtoolbox_4.0.3.zip, r-release: MDPtoolbox_4.0.3.zip, r-oldrel: MDPtoolbox_4.0.3.zip
macOS binaries: r-release: MDPtoolbox_4.0.3.tgz, r-oldrel: MDPtoolbox_4.0.3.tgz
Old sources: MDPtoolbox archive
Please use the canonical form https://CRAN.R-project.org/package=MDPtoolbox to link to this page.