
Reproducing experiments


This wiki provides instructions on how to reproduce most of the experiments presented in the NeurIPS 2022 Offline RL Workshop paper *Towards Data-Driven Offline Simulations for Online Reinforcement Learning* by Shengpu Tang, Felipe Vieira Frujeri, Dipendra Misra, Alex Lamb, John Langford, Paul Mineiro, and Sebastian Kochman.

Figure 2: Illustrative Example of Evaluation Protocol


Figure 2 in the paper illustrates the fidelity vs. efficiency trade-off between different simulations. See Appendix B.1 in the paper for details.

To see how this figure was produced, see the notebook notebooks/metrics.ipynb.
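The exact metric definitions live in that notebook and in Appendix B.1. Purely as an illustration, the toy sketch below assumes that fidelity is summarized by how closely a simulation's learning curve tracks the real environment's and that efficiency is summarized by the simulation's relative cost; all names, numbers, and the curve-gap proxy are made up for the sake of the sketch.

```python
# Toy illustration only: the real fidelity/efficiency metrics are defined in
# notebooks/metrics.ipynb and Appendix B.1. The curve-gap proxy and the numbers
# below are assumptions, not values from the paper.
import numpy as np
import matplotlib.pyplot as plt

def fidelity_proxy(sim_curve, real_curve):
    """Toy proxy: the closer the simulated learning curve is to the real one, the higher the fidelity."""
    return -np.mean(np.abs(np.asarray(sim_curve) - np.asarray(real_curve)))

real_curve = [0.1, 0.3, 0.5, 0.7, 0.8]  # hypothetical real-environment learning curve
simulators = {
    "high-fidelity sim": {"curve": [0.1, 0.25, 0.5, 0.65, 0.8], "relative_cost": 1.0},
    "cheap sim":         {"curve": [0.0, 0.10, 0.2, 0.30, 0.4], "relative_cost": 0.2},
}

# Scatter each simulation's cost against how well its learning curve matches the real one.
for name, sim in simulators.items():
    plt.scatter(sim["relative_cost"], fidelity_proxy(sim["curve"], real_curve), label=name)
plt.xlabel("relative cost (lower = more efficient)")
plt.ylabel("fidelity proxy (higher = closer to real env)")
plt.legend()
plt.show()
```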

Figure 4: latent state encoding


Figure 4 in the paper shows the visitation of the observations in the continuous grid together with the visitation of the corresponding latent states (after encoding the observations using HOMER). To reproduce it, follow the steps below.

Collecting Continuous Grid data through a random policy

To train the HOMER encoder, we use a random agent as the behavior policy to collect the data. To reproduce this data collection, run the following script:

python examples/continuous_grid/random_agent_rollout.py
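For orientation, here is a rough sketch of what such a random-policy rollout loop looks like. The environment class, the gym-style step() signature, and the output format are assumptions, not the repository's actual API.

```python
# Hypothetical sketch of random-policy data collection on the continuous grid.
# The environment class, step() signature, and on-disk format are assumptions.
def collect_random_rollouts(env, num_episodes=1000, max_steps=100):
    """Roll out a uniform-random behavior policy and log transition tuples."""
    transitions = []
    for _ in range(num_episodes):
        obs = env.reset()
        for _ in range(max_steps):
            action = env.action_space.sample()               # random behavior policy
            next_obs, reward, done, info = env.step(action)  # classic gym-style step
            transitions.append((obs, action, reward, next_obs, done))
            obs = next_obs
            if done:
                break
    return transitions

# Example usage (ContinuousGridEnv is a placeholder name):
#   import pickle
#   transitions = collect_random_rollouts(ContinuousGridEnv())
#   with open("outputs/random_rollouts.pkl", "wb") as f:
#       pickle.dump(transitions, f)
```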

Encoding observations to generate latent states

If you want to train the encoder from scratch, run this script with the following configuration:

python examples/continuous_grid/train_homer_encoder.py --num_epochs=1000 --seed=0 --batch_size=64 --latent_size=50 --hidden_size=64 --lr=1e-3 --weight_decay=0.0 --temperature_decay=False --output_dir='outputs/models' --num_samples=100000
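The flags above set the encoder's hyperparameters (latent size, hidden size, learning rate, Gumbel-softmax temperature decay, and so on). The snippet below is a simplified sketch of a HOMER-style contrastive training step, in which a classifier learns to distinguish real (obs, action, next_obs) transitions from fake ones with a shuffled next observation while a Gumbel-softmax bottleneck discretizes observations into latent states. The module and field names are assumptions, not the repository's actual implementation.

```python
# Simplified, hedged sketch of a HOMER-style contrastive encoder update.
# Class and method names are assumptions, not the repository's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HomerEncoder(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden_size=64, latent_size=50):
        super().__init__()
        self.phi = nn.Sequential(              # observation -> latent logits
            nn.Linear(obs_dim, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, latent_size))
        self.classifier = nn.Sequential(       # (z, a, z') -> real / fake transition score
            nn.Linear(2 * latent_size + action_dim, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, 1))

    def encode(self, obs, temperature=1.0):
        # Gumbel-softmax bottleneck: discretize observations into latent states.
        return F.gumbel_softmax(self.phi(obs), tau=temperature, hard=True)

    def forward(self, obs, action, next_obs, temperature=1.0):
        z = self.encode(obs, temperature)
        z_next = self.encode(next_obs, temperature)
        return self.classifier(torch.cat([z, action, z_next], dim=-1))

def contrastive_step(model, optimizer, obs, action, next_obs, temperature=1.0):
    """One update: real (obs, a, obs') pairs vs. fake pairs with shuffled next_obs.

    `action` is assumed to be a float tensor of shape (batch, action_dim), e.g. one-hot.
    """
    fake_next = next_obs[torch.randperm(next_obs.shape[0])]
    logits = torch.cat([model(obs, action, next_obs, temperature),
                        model(obs, action, fake_next, temperature)])
    labels = torch.cat([torch.ones(len(obs), 1), torch.zeros(len(obs), 1)])
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```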

We also made available a model checkpoint for the HOMER-based encoder here. To use it to encode the previously collected dataset and reproduce Figure 4 (visualizing both the original observation visitation and the latent-state representation captured by the encoder), use this notebook.
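As a minimal sketch of what that notebook does, assuming the checkpoint stores a full encoder module and the dataset's observations are 2-D grid positions (the file paths and the encode() API below are assumptions):

```python
# Hedged sketch: load the HOMER encoder checkpoint, map observations to discrete
# latent states, and plot both visitations. Paths and the encoder API are assumptions.
import numpy as np
import torch
import matplotlib.pyplot as plt

encoder = torch.load("outputs/models/homer_encoder.pt", map_location="cpu")  # assumed full-module checkpoint
encoder.eval()

observations = np.load("outputs/random_rollouts_obs.npy")   # (N, 2) positions collected by the random policy
with torch.no_grad():
    latents = encoder.encode(torch.as_tensor(observations, dtype=torch.float32))
latent_ids = latents.argmax(dim=-1).numpy()                  # discrete latent state id per observation

fig, (ax_obs, ax_lat) = plt.subplots(1, 2, figsize=(10, 4))
ax_obs.hist2d(observations[:, 0], observations[:, 1], bins=50)   # observation visitation in the grid
ax_obs.set_title("observation visitation")
ax_lat.hist(latent_ids, bins=np.arange(latent_ids.max() + 2))    # latent-state visitation
ax_lat.set_title("latent state visitation")
plt.show()
```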

Figure 5: Learning curves of a PPO agent using PSRS simulations

For these experiments, we trained a PPO agent using interactions provided by the PSRS-based environment (within the latent space learned by HOMER on the continuous grid task) and measured the average episode return within each training epoch.


We also measured the average episode return in a real validation environment after each training epoch.

To reproduce these results, use the dataset and HOMER encoder from the previous steps to build the PSRS simulation and train a PPO agent in it.
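A rough sketch of that training/evaluation loop is shown below, using stable-baselines3's PPO purely for illustration (the repository may use its own PPO implementation, and the environment constructors are placeholders, not the repository's API):

```python
# Hedged sketch: train PPO inside the PSRS-based simulation and evaluate the policy
# both in simulation and in the real environment after each training epoch.
# make_psrs_env / make_continuous_grid_env are placeholder names.
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

psrs_env = make_psrs_env()             # placeholder: PSRS simulation built from the offline dataset and HOMER latents
real_env = make_continuous_grid_env()  # placeholder: real validation environment

model = PPO("MlpPolicy", psrs_env, verbose=0)
for epoch in range(50):
    model.learn(total_timesteps=10_000, reset_num_timesteps=False)        # train in the simulation
    sim_return, _ = evaluate_policy(model, psrs_env, n_eval_episodes=10)  # return measured in simulation
    real_return, _ = evaluate_policy(model, real_env, n_eval_episodes=10) # return measured in the real env
    print(f"epoch {epoch}: sim return {sim_return:.2f}, real return {real_return:.2f}")
```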
