DiscreteValueIteration


This package implements the discrete value iteration algorithm in Julia for solving Markov decision processes (MDPs). The user should define the problem with QuickPOMDPs.jl or according to the API in POMDPs.jl. Examples of problem definitions can be found in POMDPModels.jl. For an extensive tutorial, see these notebooks.
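For reference, each iteration of the algorithm applies the Bellman backup V(s) ← max_a [ R(s,a) + γ Σ_s' T(s'|s,a) V(s') ] until the value function converges. A schematic pure-Julia version of one backup sweep (a sketch of the idea, not the package's internals; T and R are assumed to be dense arrays) looks like:

# One sweep of the Bellman backup over dense arrays:
# T[s, a, sp] is the transition probability, R[s, a] the reward.
function bellman_backup(V::Vector{Float64}, T::Array{Float64,3}, R::Matrix{Float64}, discount::Float64)
    ns, na = size(R)
    Vnew = similar(V)
    for s in 1:ns
        Vnew[s] = maximum(R[s, a] + discount * sum(T[s, a, sp] * V[sp] for sp in 1:ns) for a in 1:na)
    end
    return Vnew
end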

There are two solvers in the package. The "vanilla" ValueIterationSolver calls functions from the POMDPs.jl interface in every iteration, while the SparseValueIterationSolver first creates sparse transition and reward matrices and then performs value iteration on that matrix representation. Both solvers take advantage of sparsity. The SparseValueIterationSolver is generally faster because of low-level optimizations; the ValueIterationSolver has the advantage that it does not require allocating transition matrices, which could be too large to fit in memory.
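The two solvers are drop-in replacements for one another. As a minimal sketch (assuming the SimpleGridWorld problem from POMDPModels.jl as a test case):

using POMDPs, POMDPModels, DiscreteValueIteration

mdp = SimpleGridWorld() # small example MDP from POMDPModels.jl

policy_vanilla = solve(ValueIterationSolver(max_iterations=500), mdp)
policy_sparse = solve(SparseValueIterationSolver(max_iterations=500), mdp) # usually faster, but allocates matrices first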

Installation

using Pkg; Pkg.add("DiscreteValueIteration")

Usage

Given an MDP mdp defined with QuickPOMDPs.jl or the POMDPs.jl interface, use

using POMDPs, DiscreteValueIteration

solver = ValueIterationSolver(max_iterations=100, belres=1e-6, verbose=true) # belres is the Bellman residual convergence tolerance
policy = solve(solver, mdp) # runs value iteration and returns the policy

To extract the optimal action for a given state, call the action function:

a = action(policy, s) # returns the optimal action for state s

Or to extract the value, use

value(policy, s) # returns the optimal value at state s
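
Putting the pieces together, here is a hypothetical end-to-end example: a three-state chain defined with QuickPOMDPs.jl (the model and its parameters are illustrative, not part of this package) and solved as above:

using POMDPs, POMDPTools, QuickPOMDPs, DiscreteValueIteration

# Hypothetical three-state chain: moving right from state 2 pays 1.0.
mdp = QuickMDP(
    states = 1:3,
    actions = [:left, :right],
    discount = 0.95,
    transition = (s, a) -> Deterministic(a == :right ? min(s + 1, 3) : max(s - 1, 1)),
    reward = (s, a) -> (s == 2 && a == :right) ? 1.0 : 0.0,
    initialstate = Deterministic(1)
)

solver = ValueIterationSolver(max_iterations=100, belres=1e-6)
policy = solve(solver, mdp)

action(policy, 1) # expected to be :right for this toy model
value(policy, 1)  # discounted optimal value of state 1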

Requirements for problems defined using the POMDPs.jl interface

If you are using the POMDPs.jl interface instead of QuickPOMDPs.jl, you can see the requirements for using these solvers with

using POMDPs
using DiscreteValueIteration
@requirements_info ValueIterationSolver() YourMDP()
@requirements_info SparseValueIterationSolver() YourMDP()

This should return a list of the following functions to be implemented for your MDP (a sketch implementing them for a toy problem follows the list):

discount(::MDP)
n_states(::MDP)
n_actions(::MDP)
transition(::MDP, ::State, ::Action)
reward(::MDP, ::State, ::Action, ::State)
stateindex(::MDP, ::State)
actionindex(::MDP, ::Action)
actions(::MDP, ::State)
support(::StateDistribution)
pdf(::StateDistribution, ::State)
states(::MDP)
actions(::MDP)
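
As a sketch of what such an implementation can look like, here is a hypothetical two-state toy MDP written against a recent POMDPs.jl API (where n_states and n_actions are obtained as length(states(m)) and length(actions(m)) rather than implemented directly, and the Deterministic distribution from POMDPTools supplies support and pdf):

using POMDPs
using POMDPTools: Deterministic

# Hypothetical toy problem: Int states, Symbol actions.
struct ToyMDP <: MDP{Int, Symbol} end

POMDPs.states(::ToyMDP) = 1:2
POMDPs.actions(::ToyMDP) = [:stay, :switch]
POMDPs.discount(::ToyMDP) = 0.9
POMDPs.stateindex(::ToyMDP, s::Int) = s
POMDPs.actionindex(::ToyMDP, a::Symbol) = a == :stay ? 1 : 2

# transition returns a distribution over next states; Deterministic
# already implements support and pdf.
POMDPs.transition(::ToyMDP, s::Int, a::Symbol) = Deterministic(a == :switch ? 3 - s : s)

POMDPs.reward(::ToyMDP, s::Int, a::Symbol, sp::Int) = sp == 2 ? 1.0 : 0.0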
