Solver for Partially Observable Markov Decision Processes (POMDP)



Documentation for package ‘pomdp’ version 0.99.0

Help Pages

MDP Define an MDP Problem
observation_matrix Extract the Transition, Observation or Reward Matrices from a POMDP
optimal_action Optimal action for a belief
O_ Define a POMDP Problem
plot Visualize a POMDP Policy Graph
plot_belief_space Plot a 2D or 3D Projection of the Belief Space
plot_policy_graph Visualize a POMDP Policy Graph
plot_value_function Plot the Value Function of a POMDP Solution
policy Extract the Policy from a POMDP/MDP
policy_graph Extract the Policy Graph (as an igraph Object)
POMDP Define a POMDP Problem
read_POMDP Read and write a POMDP Model to a File in POMDP Format
reward Calculate the Reward for a POMDP Solution
reward_matrix Extract the Transition, Observation or Reward Matrices from a POMDP
round_stochastic Round a Stochastic Vector or a Row-Stochastic Matrix
R_ Define a POMDP Problem
sample_belief_space Sample from the Belief Space
simulate_POMDP Simulate Trajectories in a POMDP
solve_POMDP Solve a POMDP Problem
solve_POMDP_parameter Solve a POMDP Problem
Three_doors Tiger Problem POMDP Specification
Tiger Tiger Problem POMDP Specification
transition_matrix Extract the Transition, Observation or Reward Matrices from a POMDP
T_ Define a POMDP Problem
update_belief Belief Update
write_POMDP Read and write a POMDP Model to a File in POMDP Format
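The functions above combine into a typical workflow: define or load a problem, solve it, then inspect the solution. The following is a minimal sketch using the bundled Tiger problem and the functions listed in this index; exact arguments and defaults may differ between package versions, so consult the individual help pages.

```r
library("pomdp")

# Load the Tiger problem POMDP specification shipped with the package
data("Tiger")

# Solve the POMDP with default solver settings
sol <- solve_POMDP(Tiger)

# Extract the policy and the policy graph from the solution
policy(sol)
policy_graph(sol)

# Plot the value function over the belief space
plot_value_function(sol)

# Expected reward for the model's initial belief
reward(sol)
```

For a custom model, `POMDP()` defines the problem directly in R (with `T_`, `O_`, and `R_` as helpers for the transition, observation, and reward specifications), while `read_POMDP()` and `write_POMDP()` exchange models with files in the external POMDP file format.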