Commit f1aee5a

updated readme

FlyingWorkshop committed Apr 2, 2024
1 parent 59118a4 commit f1aee5a
Showing 1 changed file with 29 additions and 32 deletions.
README.md

[![Build Status](https://github.com/FlyingWorkshop/CompressedBeliefMDPs.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/FlyingWorkshop/CompressedBeliefMDPs.jl/actions/workflows/CI.yml?query=branch%3Amain)
[![Dev-Docs](https://img.shields.io/badge/docs-latest-blue.svg)](https://flyingworkshop.github.io/CompressedBeliefMDPs.jl/dev/)
# CompressedBeliefMDPs.jl

## Introduction

Welcome to CompressedBeliefMDPs.jl! This package is part of the [POMDPs.jl](https://juliapomdp.github.io/POMDPs.jl/latest/) ecosystem and takes inspiration from [Exponential Family PCA for Belief Compression in POMDPs](https://papers.nips.cc/paper_files/paper/2002/hash/a11f9e533f28593768ebf87075ab34f2-Abstract.html).

This package provides a general framework for applying belief compression in large POMDPs with generic compression, sampling, and planning algorithms.

## Installation

You can install CompressedBeliefMDPs.jl with Julia's package manager. Open the Julia REPL, press `]` to enter package manager mode, and run the following command:

```julia-repl
pkg> add CompressedBeliefMDPs
```

## Sampling

There are two ways to collect belief samples: belief expansion or policy rollouts.

### Belief Expansion

CompressedBeliefMDPs.jl implements a fast version of exploratory belief expansion (Algorithm 21.13 from [Algorithms for Decision Making](https://algorithmsbook.com/)) that uses [$k$-d trees](https://en.wikipedia.org/wiki/K-d_tree) from [NearestNeighbors.jl](https://github.com/KristofferC/NearestNeighbors.jl). Belief expansion is supported for POMDPs with finite state, action, and observation spaces.
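
To show how this plugs into the solver, here is a minimal sketch using `TigerPOMDP`, a small finite-space problem from POMDPModels.jl; the constructor calls mirror the Quickstart below:

```julia
using POMDPs, POMDPModels
using CompressedBeliefMDPs

# TigerPOMDP has finite state, action, and observation spaces,
# so exploratory belief expansion applies.
pomdp = TigerPOMDP()
sampler = BeliefExpansionSampler(pomdp)

# Pass the sampler to the solver (see the Quickstart below).
solver = CompressedBeliefSolver(pomdp; sampler=sampler)
policy = solve(solver, pomdp)
```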

### Policy Rollouts
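
As a rough sketch, sampling beliefs from policy rollouts might look like the following; note that `PolicySampler` and its keyword arguments are illustrative assumptions here, not confirmed package API:

```julia
using POMDPs, POMDPTools, POMDPModels
using CompressedBeliefMDPs

pomdp = BabyPOMDP()

# Hypothetical sampler: collects the beliefs encountered while rolling
# out a fixed policy. The name and keywords are assumptions for illustration.
sampler = PolicySampler(pomdp; policy=RandomPolicy(pomdp), n=100)

solver = CompressedBeliefSolver(pomdp; sampler=sampler)
policy = solve(solver, pomdp)
```
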
## Quickstart

Using belief compression is easy. Simply pick a `Sampler`, a `Compressor`, and a base `Policy`, and then use the standard POMDPs.jl interface.

```julia
using POMDPs, POMDPTools, POMDPModels
using CompressedBeliefMDPs

# Problem and belief-compression components.
pomdp = BabyPOMDP()
compressor = PCACompressor(1)            # compress beliefs with 1-component PCA
updater = DiscreteUpdater(pomdp)         # exact discrete belief updater
sampler = BeliefExpansionSampler(pomdp)  # sample beliefs via belief expansion

# Assemble the solver and solve as usual with the POMDPs.jl interface.
solver = CompressedBeliefSolver(
    pomdp;
    compressor=compressor,
    sampler=sampler,
    updater=updater,
    verbose=true,
    max_iterations=100,
    n_generative_samples=50,
    k=2
)
policy = solve(solver, pomdp)
```
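
The solver finds an _approximate_ policy for the POMDP. The resulting policy supports the standard POMDPs.jl queries:

```julia
s = initialstate(pomdp)
v = value(policy, s)
a = action(policy, s)
```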

As a convenience, we provide several wrappers for compression schemes from [MultivariateStats.jl](https://juliastats.org/MultivariateStats.jl/stable/) and [ManifoldLearning.jl](https://wildart.github.io/ManifoldLearning.jl/stable/).
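
For example, swapping in a different compression scheme only changes one line. The `KernelPCACompressor` name below is an assumption modeled on `PCACompressor`; consult the package documentation for the wrappers that actually ship:

```julia
using CompressedBeliefMDPs

# Assumed wrapper name, by analogy with PCACompressor(1); reduces
# sampled beliefs to 2 dimensions with kernel PCA.
compressor = KernelPCACompressor(2)
solver = CompressedBeliefSolver(pomdp; compressor=compressor)
policy = solve(solver, pomdp)
```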
